No real AI progress

AI is a hard problem, and even if we had a healthy society, we might still be stuck. That buildings are not getting taller, and that fabs are not getting cheaper and are no longer making smaller and smaller devices, is social decay. That we are stuck on AI is more that it is high hanging fruit.

According to Yudkowsky, we will have AI when computers have as much computing power as human brains.

The GPU on my desktop has ten times as much computing power as the typical male human brain, and it is not looking conscious.

Artificial consciousness will require an unpredictable and unforeseeable breakthrough, if it is possible at all, because while we now understand aging, we do not understand consciousness.

A self driving car would be true AI, or a good step towards it, as would good machine translation. You don’t mind the computer sometimes getting the meaning backwards when it translates, but you would mind if a self driving car drove into someone or something.

With chess playing computers, it seemed that we were making real progress towards AI, but it was eventually revealed that chess playing computers do not figure out chess moves the way humans do, and the difference matters, even if a computer can beat any human chess player.

With big data, it seemed that we were making real progress towards AI, but it was eventually revealed that just as chess playing computers do not figure out chess moves the way humans do, neither does big data.

When humans found the Rosetta stone, they were able to learn Egyptian. Having a fair number of words and sentences to start with, they could then figure out other words from context, giving them more words, and more words gave them more grammar, and more grammar and words enabled them to figure out yet more words from context, until they had pretty much all of Egyptian. Google’s machine translation learns to translate from a thousand Rosetta stones. It is not doing what we are doing, which is understanding meaning and expressing meaning.

Humans do not use big data, just as they do not examine a million possible chess moves.

Atop each Google self driving car is a great big device that performs precision distance measurements in all directions, giving the computer a three dimensional image of its surroundings. It is a big object because it collects a huge amount of data very fast. Human eyes just collect a small amount of data, from which the human constructs a three dimensional idea of his surroundings. Humans don’t need that much data to drive, and could not handle that much data if their senses provided it.

The Google car has a centimeter scale map of the world in which it is driving. It can see traffic lights because it knows exactly where traffic lights are in the world. If one day, someone moved the traffic lights six inches to the right, problem. If roadworks, problem. If a traffic accident, problem. And that is why the cars need someone in the driver’s seat.

Maybe big data will produce acceptable results for self driving cars, acceptable being that in the rare situations that the computer cannot handle the car, the computer realizes that there is something wrong, comes to a halt, and asks for human assistance. But it is not quite there yet, and if it does produce acceptable results, will not produce human like, or even non human animal like, performance.

What is missing seems to be consciousness, and we don’t really know what that is. Intelligence seems to require huge numbers of neurons. Consciousness seems to require a considerably smaller number.

The male human brain has around eighty six billion neurons.

The maximum output of any one neuron is about three hundred hertz, but only a small fraction are running near the maximum output, because if all of them were running near maximum output, the brain’s oxygen supply could not keep up.

This output from any one neuron is a summary of the data received by a large number of synapses, typically a thousand or so synapses.

So we can suppose that the male human brain processes something like three terabytes per second if we look at neuron output, or something like three thousand terabytes per second if we look at neuron input.

The GPU on my desktop delivers eight thousand single precision gigaflops (eight teraflops), which, at four bytes per operation, is thirty two terabytes per second.
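
A back-of-the-envelope check of the two estimates above, as a sketch; treating each firing opportunity as one bit, and converting GPU operations at four bytes each, are assumptions made only for the comparison, and all figures are rough:

```python
# Rough arithmetic behind the estimates above; the counts and rates are the
# post's figures, and one bit per firing opportunity is an assumption.
NEURONS = 86e9              # neurons in a male human brain
MAX_RATE_HZ = 300           # rough upper bound on a neuron's firing rate
SYNAPSES_PER_NEURON = 1e3   # typical synapses feeding each neuron

output_bytes_per_sec = NEURONS * MAX_RATE_HZ / 8                  # ~3 TB/s
input_bytes_per_sec = output_bytes_per_sec * SYNAPSES_PER_NEURON  # ~3000 TB/s

GPU_FLOPS = 8e12            # eight thousand single precision gigaflops
gpu_bytes_per_sec = GPU_FLOPS * 4   # four bytes per single precision operation

print(f"neuron output: ~{output_bytes_per_sec / 1e12:.0f} TB/s")
print(f"neuron input:  ~{input_bytes_per_sec / 1e12:.0f} TB/s")
print(f"GPU:           ~{gpu_bytes_per_sec / 1e12:.0f} TB/s")
```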

It seems obvious to me that the problem is not artificial intelligence. By any measure of intelligence, computers are highly intelligent. The problem is that they are not conscious. And the reason they are not conscious is that we do not have the faintest idea what consciousness is. Maybe it is something supernatural, maybe it is something perfectly straightforward, but we cannot see what it is for the same reason that a fish cannot see water.

Furthermore, interacting with non human animals, it seems obvious to me that a tarantula is conscious, as conscious as I am, and a GPU is not. The genetic basis for brain structure is the same in the tarantula as in the human, which hints that the common ancestor of the protostomes and deuterostomes, which had a complex brain but was otherwise scarcely more than a bag of jelly, was conscious – that consciousness was the big breakthrough that resulted in protostomes and deuterostomes dominating the earth, that consciousness is a single big trick, not a random grab bag of tricks.

From time to time, I read claims that someone is successfully emulating a cat brain or some such, but no one has successfully emulated the nervous system of Caenorhabditis elegans, which has three hundred and two neurons, nor has anyone successfully emulated the human retina, in which data flows through three layers of neurons with only local interactions, so that if we understood a single tiny patch of the retina, a dozen neurons or so, we would understand all of it. In this sense, neurons are doing something mysterious, in that quite small systems of neurons remain mysterious.

This does not prove that consciousness is magic, but, as far as our current knowledge goes, it is indistinguishable from magic.

69 Responses to “No real AI progress”

  1. Cloudswrest says:

    It’s looking like even bees can be trained to do stuff, just like dogs.

    https://youtu.be/exsrX6qsKkA

    • jim says:

      What is interesting is that they can learn stuff by watching other bees, which requires them to imagine themselves as that other bee, to have a theory of mind, to have empathy.

      The common ancestor of bees and men, the urbilaterian, is also the common ancestor of lobsters and men, and lobsters and men use the same neurohormones to regulate and respond to social status, so the common ancestor of men and bees had social status interactions, which makes it likely he was aware that other beings like himself existed.

  2. Anon says:

    We are making progress with neural networks; they are designed to work in a fashion similar to synapses, where each node acts a certain way based on what kind of input it receives and how much. The complexity comes from organising them in a way that can make decisions. More intelligent and self-made decisions arise from neural networks. We’re a long way from consciousness, but there are no theoretical limits.

    • jim says:

      There are no theoretical limits because there is no theory. We do not understand how consciousness works, nor what it does, nor why it is so remarkably useful and effective.

  3. Dude says:

    Hot from Reddit: What an autonomous Tesla Car sees while driving

    http://i.imgur.com/AsdcLec.gifv

  4. Sam says:

    “…#FF which is (15 × 16^1) + (15 × 16^0) = 256”

    That didn’t come out right. It’s (15 × 16^1) + (15 × 16^0) = 255, + 1 for zero, to give 256 for all possible combinations.

  5. Sam says:

    By a super quick read. “…One important feature of dendrites, endowed by their active voltage gated conductances, is their ability to send action potentials back into the dendritic arbor. Known as backpropagating action potentials, these signals depolarize the dendritic arbor and provide a crucial component toward synapse modulation and long-term potentiation…”

    The calculations you’ve been using are multiplications. Wouldn’t it instead be binary factorial? For example, if you have two inputs, it’s not (1 x 2), it’s all possible binary combinations: binary 11, which is four possible combinations. Higher factorial combinations are possible with analog potentiation. For example, what if they were hexadecimal? The highest combination is #FF, which is (15 × 16^1) + (15 × 16^0) = 256 possible combinations (counting zero as a combination). The numbers get real high, real fast. It’s even possible the dendrite spines act similarly, making the numbers astronomical very fast. Even if the dendrite spines or dendrites act as groups, I believe there is a mechanism more complicated than adding, which would again mean we would have to use a factorial to properly describe them.

  6. Stephen W says:

    Here is an example of petty bureaucrats trying to thwart innovation for nonsensical reasons, even when it’s not covered by their remit.

    http://motherboard.vice.com/blog/these-are-the-companies-the-faa-has-harassed-for-using-drones

    http://motherboard.vice.com/blog/the-little-known-fued-thats-shaping-the-future-of-delivery-drones

  7. Hawk Spitui says:

    I’m inclined to call the entire AI project misguided, at least with the technology currently available.

    A processor is merely a glorified abacus. Despite the fact that it can be made to appear to emulate certain activities a brain performs, underneath it all it’s merely performing basic arithmetic operations.

    Given that I see no reason to believe that brains operate on the same principle, I wouldn’t expect a processor to be able to behave like one. Most processors these days are digital devices, brains appear to operate on more of an analog principle (if that comparison is even meaningful).

    I do not think the measure of a processor and its ancillary software is how well they can fool us into believing they are something which they are not. A processor is a closer relative to a vacuum cleaner than it is to a brain. I have no more reason to expect processors to evolve consciousness than I do vacuum cleaners. I submit that there are more constructive uses for processors than trying to coax them into fooling us into believing that they’re something they ain’t.

    • Innocent bystander says:

      I think you are possibly confusing a difference in degree with a difference in kind. If my calculation is right that current GPUs (e.g. the GTX 780) are about a factor of 10,000 less powerful than the brain, that is a huge gap. Think about the difference between a town of 1,000 people and New York City. Or a tinker toy and a powerful racing car.

      There is a widespread perception that the software is the bottleneck. But see here https://drive.google.com/file/d/0B_hpownP1A4PdUFEUWR1b25jTGs/edit?usp=sharing The basic idea is that machine learning removes the need to explicitly program everything, and that historically computers have performed somewhat above what you would expect based purely on raw processing power.

      • Hawk Spitui says:

        First, how are you quantifying “powerful”? I can quantify a processor in MIPS, in GHz, or by the number of transistors it has. What metric are you using to quantify a brain?

        You see the problem – you can’t really quantify brains and processors by a common metric, other than comparing how they perform specific tasks where their functionality overlaps, like maybe finding a square root. If your calculation were correct, a brain should be able to calculate a square root much quicker than any modern computer. In fact, even the cheapest calculator can calculate one faster than any human outside of a few savants.

        Your google document doesn’t want to open for me, so I’m unable to comment.

        I have no doubt that machines can be programmed to “learn”; they do so already. Your web browser will suggest a URL when you begin typing an address, based on your history. Even certain polymers demonstrate characteristics analogous to “remembering” and “learning”.

        But I don’t see AI in the sense of “consciousness” emerging any time soon based on processor technology and its ancillary software. There’s an assumption in there that brains operate in the same capacity as software, namely, that they accept an input, process it according to a set of instructions, and produce an output. While brains do produce responses based on stimuli, they’re also capable of generating their own stimuli, i.e. they can be set upon solving a problem by curiosity, or from an emotional reaction to their environment, etc. Machines don’t yet generate their own problem sets to solve, and as it’s still unclear what makes a brain do this, I don’t see that functionality being replicated in a machine.

        Also, let’s remember “intelligence” is merely a byproduct of a brain; the main business of a brain is maintaining and managing an organic body. “Intelligence” evolves in proportion to the complexity of that task.

  8. spandrell says:

    I also don’t know to what extent “knowing” is a useful concept. If one can behave as to produce a satisfactory result, how is that different from knowing? How does one determine knowing? I know plenty of people who are capable of doing stuff but they can’t properly explain how they do it. Do they know what they’re doing, any more than a computer knows how to play chess?

    • jim says:

      Consider the proof that you cannot deduce the parallel postulate from the other postulates. A computer could not do that, because it depends on knowing, not on manipulating symbols according to rules.

  9. RiverC says:

    AI is a funny subject. By the definition of what AI actually means, we have it already. Artificial intelligence implies that you are making something behave intelligently, and the history of AI development tells us that this is done for a specific purpose.

    In my view, the big hurdle here is that people get confused about AI, perhaps by bad futurism or bad religion or both (e.g. reading too much into Heinlein’s thought experiment in The Moon is a Harsh Mistress). The point is that there are two things going on here, not one.

    The first thing going on, the fundamental error (and many programmers have it) is confusing natural intelligence with artificial intelligence. Such people want to construct a natural intelligence, artificially. In my view this is either impossible or at the very least has nothing at all to do with AI itself.

    The second thing going on parallels the development of the definition of AI itself – attempts to describe what we’re trying to get the computer to do that are normally only done by natural intelligence. Taleb and Wolfram are effectively in this camp (even if they think that simple logical heuristics are the key to artificial natural intelligence – I believe not) in that they are simply asking the question “what does it need to do / how does it need to behave?” and then, “what is the best way to make it well behaved?”

    An example of how to make more ‘well behaved’ AIs in games is to make the world less of a prop, producing a small scale ecology. If the soldiers need to take breaks, sleep, and eat once in a while, this makes the game world more well behaved. Even more so if you give soldiers preferences; instead of relying on random rolls, a randomly or genetically assigned set of preferences with more than one layer removes the need for a random number to be generated and, if set up right, makes the AIs even more ‘well behaved’ than before.
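
    A toy sketch of the preference idea, with made-up names and numbers; the point is only that a fixed, layered set of preferences yields a deterministic choice where a random roll would otherwise be used:

    ```python
    # Hypothetical NPC soldier: needs (outer layer) weighted by assigned
    # preferences (inner layer); the action with the strongest score wins,
    # with no random number generated.
    soldier = {
        "needs": {"hunger": 0.7, "fatigue": 0.4, "duty": 0.6},
        "prefs": {"eat": 1.2, "sleep": 0.8, "patrol": 1.0},
    }
    NEED_FOR_ACTION = {"eat": "hunger", "sleep": "fatigue", "patrol": "duty"}

    def choose_action(s):
        scores = {a: s["needs"][n] * s["prefs"][a] for a, n in NEED_FOR_ACTION.items()}
        return max(scores, key=scores.get)   # deterministic, not a die roll

    print(choose_action(soldier))   # -> "eat"
    ```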

    But ultimately it is still quantity and not quality; you’re assigning a numerical weight to a meaning, and in the case of a simple if/then it’s the simplest weighting: 1 versus 0.

    • Alrenous says:

      Such people want to construct a natural intelligence, artificially. In my view this is either impossible or at the very least has nothing at all to do with AI itself.

      It’s like obsessing over making artificial muscle fibres when we already have servos and such.

      • RiverC says:

        precisely; if you can make tiny, strong servos that function as well-behaved muscles, for prosthetics you’ve already won. I have no desire for humanoid robots unless the human form is entailed in them being well behaved. Likewise, I’m fine with machines being alien intelligences; ‘uncanny valley’ tells us that Armitage III is less likely than R2D2.

      • jim says:

        A servo is stronger and quicker than a muscle. Artificial translation is no substitute for real translation.

        And the reason it is no substitute is that the translator knows he is translating, while the computer knows nothing.

        • RiverC says:

          I guess that depends on whether ‘understanding’ is necessary for ‘well behavedness’. In translation it is to some extent – idioms and such. But I don’t think a computer can do anything but reduce things to mathematical operations (even the simple 1/0 one called ‘true/false’), and thus AI is defined by proper behavior, not actual knowledge.

  10. […] No real AI progress « Jim’s Blog […]

  11. Glenfilthie says:

    I am convinced true AI is impossible. Put it this way – we will have AI when we understand the scientific framework for the soul – and the emotions that shape it.

    There is no magic in computers. All they do is Boolean math, accomplished by on/off switches. It doesn’t matter how many switches you have, it doesn’t matter how many registers or how much memory you have, it doesn’t matter how fast the computer calculates – none of this puts the machine any closer to artificial intelligence.

    • RiverC says:

      In my view ‘true’ AI already exists for a wide variety of purposes. It’s artificially created natural intelligence that is impossible.

  12. Barnabas says:

    I’m about halfway through Tyler Cowen’s Average is Over. One interesting prediction that he makes is that we might begin to see more systems dumbed down so as to be comprehensible to crude AI, and that this dumbing down will be quite frustrating to humans. Imagine tearing out complex highway interchanges so that Google cars don’t crash, etc.

  13. Luca says:

    The possibility that consciousness is not only not necessary for intelligence – even immense intelligence – but also an impediment to it, is a dominant theme of Peter Watts’ brilliant and highly unsettling science fiction novel “Blindsight.”

  14. peppermint says:

    maybe if we knew what intelligence was in the first place, we could figure out how much processing power it takes

    a few years ago some people came up with a program that was able to do okay on part of an IQ test

    • jim says:

      Computers do OK when doing things that only humans can do. They do very badly on things that spiders, bees, and humans can all do pretty well.

    • Gian says:

      Surely false, Jim. A computer does not know anything. A human knows what addition is; a computer does not.

      • jim says:

        A computer does not know what addition is, but it can do addition a whole lot better than a human can. Similarly, chess.

  15. Thales says:

    Some definition, please, for sake of clarity:

    Sentience = processing stimuli, making decisions (what the spider does)

    Sapience = abstract thinking (thinking about what the spider does)

    Consciousness = autobiographical self-awareness (thinking about thinking about what the spider does)

    Consciousness per se is (probably) memetic, and thus unyielding to empirical reductionism, nevermind that we’ve only a tenuous grasp on the two lower-order functions.

    • jim says:

      Intellectuals like a definition of consciousness wherein only intellectuals are conscious, and their closest competitors in the status race are not conscious.

      But when someone anesthetizes a spider, everyone knows what that means. It means the experimenter has rendered the spider unconscious.

    • Alrenous says:

      @ Thales & Jim

      Thales just defined consciousness to not exist. If thinking can be done unconsciously, then thinking about thinking can be done unconsciously. Take a program that thinks, and feed it the output of a program that thought. The mystery is in the first person subjective.

      I see an apple.

      There is the apple, the photons from the apple, the reaction between the photons and my retina, the visual cortex processing, and the first person subjective. (It can be broken down further but I don’t see any point.)

      If I am rendered unconscious by the right anaesthesia, the visual cortex continues normally but the subjective disappears. If I am dreaming, the visual cortex can be abnormal while the subjective is normal.

      It’s the dream thing which is mysterious.

      • spandrell says:

        It’s Julian Jaynes’s definition. Consciousness as the ability to introspect using language. Not even most humans alive do it though.

  16. The Five Jays says:

    Jim needs to stick to writing about something he knows, like Japanese children’s cartoons.

  17. Deogolwulf says:

    “The problem is that [computers] are not conscious.”

    Or rational. No more or less than is the case with light-switches or abacuses.

    Greater complexity, vast computation, even an infinite network of switches — no amount of quantitative accumulation bridges the gap between non-consciousness and non-rationality, on the one hand, and consciousness and rationality, on the other. Or, to put it another way: 0₁ + 0₂ + 0₃ + 0₄ + 0₅ + 0₆ + … + 0∞ = 0.

    The rationality of computers is derived, not intrinsic, in the same way, for instance, that the function of a screwdriver is derived and not intrinsic. It exists and has its meaning as such only by and for our purposes. In other words, the “rationality” of computers is an extension of our rationality. Input, output, and process have meaning and rational aspect only by our lights, and are never intrinsic to the computer itself.

    The belief on the part of rational beings that their computing tools are or can be intrinsically (rather than derivatively) rational is a failure or an abnegation of their own rationality. The belief, also, that something can come from nothing (e.g., 01+ 02+ 03+ 04+ 05+ 06 . . . + 0? > 0) is sub-magical thinking.

    That (conceptual) thought is immaterial is provable. (See James Ross and Edward Feser, amongst others.) I would say further that the understanding and acceptance of this is necessary for the rationalist not prone to irrationalism.

  18. Gian says:

    The machine is non-intelligent. It does not and can not know anything.
    To know requires a mind.

  19. B says:

    Leibniz (building on Maimonides) said that consciousness was a function of the monads, the souls of objects. Further, that all the components, subcomponents and so on of all objects had their own monads. That intellect was limited to the higher level monads.

    I am not sure how you could manipulate this theory to produce intellect/consciousness from a created object.

    • Matthew says:

      Julian Barbour’s been trying to model a monadic universe using graphs with unlabeled nodes. What distinguishes each node is its set of relationships to other nodes.

      • B says:

        Alright. Then what? Once you have a model, how do you get consciousness to appear where there is none? It would involve creating another monad, right?

  20. […] No real AI progress « Jim's Blog Go to this article […]

  21. Alrenous says:

    I can prove consciousness is nonphysical. (I can also strongly suggest it is nonphysical in several non-proof ways.)

    I have found a candidate hole in physics. If I’m correct, recursing quantum mechanics in the right way leads to a machine that has no defined next-state probability. It would be trivial to build this machine in a neuron – all the components are known to be there, it would just be a matter of hooking it up right.

    A couple of interesting coincidences. The machine only works if you try to make the information output non-probabilistic. No other property, such as position, can work.* Second, while it does go down the consciousness/quantum mechanics route, it exploits decoherence rather than superposition. Decohering faster and more reliably would be an asset.

    *If nonphysical, basically magic. If magic, then why not magic positions etc? Because these properties are not candidates. Only magic thinking.

    Intelligence is information manipulation.
    Information manipulation has three components – gathering, processing, and generation. Or learning, logic, creativity. Computers can do all these things already. They’re usually not optimized well for creativity or learning, though.

    • Zarf says:

      Would you mind explaining this in ordinary words, Alrenous? For example, what do you mean by “recursing”, “non-probabilistic”, “decoherence”, and “superposition”? (I’m actually interested, not being snide.)

      • Alrenous says:

        Recursing is the present participle of recurse. The system feeds back into itself. A recursive function in programming is one that calls itself. A recursive machine is one that feeds its own output into its input.

        Non-probabilistic. In Newton’s day, the world was thought to be deterministic and/or mechanistic. It was revealed to be stochastic – that is, it has deterministic averages and distributions, but the behaviour of any particular element is not predictable. This machine wouldn’t have an average; it is even less predictable than a stochastic system.

        A superposition is literally several classical states superposed upon one another. To conserve probability, each has a partial weight, like 50% or 22%. When a science journalist says a particle is in two places at once, they mean it is in a superposition of being 50% over here and 50% over there.

        A quantum system in a superposition is called coherent, so one of the names for superposition collapse is decoherence. It’s when the Schrödinger wave equation suddenly and (so far) inexplicably changes states. So, it goes from being 50% over here and 50% over there, to hitting something with 100% force over here. It is, as far as we’re aware, truly random. (As opposed to chaotic, which can be modeled/computed, though not in real time. It’s debatable whether a flipped coin is chaotic or is a quantum event and thus truly random.)
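
        A crude numerical illustration of the picture described above, nothing more than sampling from the stated weights; the amplitudes and the simple “collapse” step are illustrative assumptions, not a physical model:

        ```python
        import random

        # Equal superposition: "50% over here, 50% over there".
        amplitudes = {"here": 0.5 ** 0.5, "there": 0.5 ** 0.5}
        weights = {k: a * a for k, a in amplitudes.items()}   # 0.5 / 0.5

        def measure():
            # "Collapse": an outcome is drawn with the stated weights, and
            # afterwards the system is 100% in that one classical state.
            outcome = random.choices(list(weights), weights=list(weights.values()))[0]
            return outcome, {k: (1.0 if k == outcome else 0.0) for k in weights}

        print(measure())
        ```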

        • peppermint says:

          heyyyyyyy a machine that feeds back into itself is called a recursing machine.

          also, I didn’t know that flipped coins were arguably quantum instead of merely classically chaotic, or that a wavefunction that is not an eigenstate of a basis is called coherent.

          those 5 university-level courses seem to have helped.

    • peppermint says:

      decohering quickly is the easiest thing in the world

      quantum mechanics is inherently about superpositions

      since you don’t seem to know much about quantum mechanics, i don’t expect you to have a serious theory of “recursing quantum mechanics”. Why don’t you keep it under wraps until you have it fully developed, then register the copyright on your manuscript.

      • Alrenous says:

        I’ve passed five sequential university level courses on quantum mechanics. How about you?

        • jim says:

          Your proposition, however, is not intelligible. Does not sound like it makes sense.

          • Alrenous says:

            It’s pretty mind-bending even when it isn’t compressed to hell.

            Even if I’m wrong, it’s an example of going truly outside the box. It should look like it doesn’t make sense.

  22. Red says:

    The fundamental flaw in most AI research is the idea that the brain learns a lot of new things and can be shaped to do anything as needed (blank slatism). From my own reading I’ve come to understand the brain really just picks different already pre-existing groups of heuristics and applies them.

    As I understand it, the fundamental rule of the biological universe is heuristics upon heuristics. Biological systems work on heuristics the way the physical universe works on laws, but in the case of biology, you can’t apply the same rule to 100% of the cases.

    Brains are less learning machines than they are heuristic sorting machines that quickly or not so quickly pick out the best heuristic for a given situation. Some of these heuristics are probably new, but they may just as well be rules that brains learned to apply in a different context using the same code, just as DNA uses the same code with different parameters to form fingers and arms. Thus the limitation with AI is the lack of belief in heuristics and the lack of useful and generalizable heuristics that we’ve codified.

    In the book “How Doctors Think”, Jerome Groopman pointed out that we’ve had AI machines better than 70-90% of doctors when it comes to diagnosing patients since 2005 or so, but the machines have not been allowed to come into wide usage, primarily because the people working on the AI machines codified the diagnostic ability into a heuristic of about 100 rules. Doctors refused to believe it was that simple. For a self driving car you need a lot more situational heuristics, but with enough work it does indeed seem possible.

    In Antifragile: Things That Gain from Disorder, Nassim Nicholas Taleb makes a very convincing case that systems like the stock market, fiscal planning, and diet are better determined by testing simple heuristics and applying the ones that work pretty consistently, instead of trying to understand everything about a problem. The extra information is just noise in the system that a good heuristic will just ignore.

    Consciousness is a different issue. I have no evidence of it, but I believe that consciousness springs from whatever it is that makes life, life. Call it the sum of the drives in the biological systems or the spark of life given by a creator deity, but living things are fundamentally different from non living things.

    • laofmoonster says:

      > The fundamental flaw in most AI research is the idea that the brain learns a lot of new things and can be shaped to do anything as needed(Blank slatism).

      Not that I’m an expert, but the more I read about AI/singularity, the more I think it’s based on bad metaphysics. In the same way that blank-slatists ignore the physical component of human personality, LW-types haven’t gone far enough in rejecting Platonic idealism. They correctly reject syllogistic definitions of things that are actually similarity clusters ( http://lesswrong.com/lw/od ), but probability, maps, utility, etc, are still seen as mathematical concepts in the aether.

      All usable information is physics. Rational empiricism is implemented physically.

  23. Daniel Schmuhl says:

    We don’t even really understand how intelligence, creativity, consciousness etc. work in humans yet in any great detail. There is some neuroscience of intelligence but it isn’t very developed. We don’t know what exactly G (general intelligence) is (is it processing speed?).

    The singularity is very far away if it’s coming at all. This won’t stop Kurzweil from selling more books and diet supplements however.

  24. jim says:

    If counting dendrites, need to count transistors. A GPU has around three billion transistors clocked at around a billion cycles per second, thus 3*10^18 operations per second, three hundred times your estimate for a male human brain.

    • Candide III says:

      If counting transistors, you are going to need to count much smaller entities than dendrites — at least to one level below synapses/dendrite spines. Cortical neurons have approximately 200,000 spines.

      • jim says:

        A dendritic spine is the connection to the synapse, so the number of spines is, more or less, the number of synapses. Guesstimated number of synapses in the male human brain, 1.5*10^14.

        Thus, 4.5*10^16 bit operations per second, one hundredth as many bit operations per second as the GPU underneath my desk.
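
        The same comparison as a quick check; every number below is one of the rough guesses from this thread rather than a measurement:

        ```python
        # GPU: transistors times clock, as in the comment above.
        GPU_TRANSISTORS = 3e9
        GPU_CLOCK_HZ = 1e9
        gpu_bit_ops = GPU_TRANSISTORS * GPU_CLOCK_HZ      # 3e18 transistor-cycles/s

        # Brain: guesstimated synapses times peak firing rate.
        SYNAPSES = 1.5e14
        MAX_RATE_HZ = 300
        brain_bit_ops = SYNAPSES * MAX_RATE_HZ            # 4.5e16 bit ops/s

        print(f"GPU:   {gpu_bit_ops:.1e} bit ops/s")
        print(f"brain: {brain_bit_ops:.1e} bit ops/s")
        print(f"GPU is ~{gpu_bit_ops / brain_bit_ops:.0f}x the brain on this count")
        ```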

        • Eric says:

          Neurons do considerable processing internally. The Hopfield model (synapse = decision point) has been outdated for quite some time; a better approximation is synapse = IO port, and the number of synapses is roughly comparable with a number of Ethernet ports.

          That being said, however, I agree with you that humans are not meat machines, of whatever algorithm, and there is absolutely no evidence that my computer is roughly as intelligent as a bacterium, much less a human being.

          • jim says:

            If, as seems likely, a synapse is the equivalent of an internet port, rather than a transistor, then neurons are doing something remarkable that we have no good description of nor understanding of, which is to say, indistinguishable from magic.

    • There’s a simpler upper bound: use the Landauer entropy. It’s about 5*10^19 64-bit ops per second. It’s reasonable to assume that the brain isn’t Landauer efficient (though it’s probably a lot better than transistors in a Xeon Phi). I wrote about this a couple of years ago:
      http://scottlocklin.wordpress.com/2010/05/18/youre-smarter-than-you-think/

      Of course, nobody has any idea. For all we know, Penrose might be at least part right that brains have some weird physics going on. But the Landauer limit is worth taking into consideration.
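
      For what it’s worth, a figure of the same order can be reproduced from the Landauer energy per bit erased, assuming roughly 20 watts of brain power at body temperature; both of those inputs are assumptions added here, not taken from the comment:

      ```python
      import math

      K_B = 1.380649e-23        # Boltzmann constant, J/K
      T = 310.0                 # assumed body temperature, K
      BRAIN_WATTS = 20.0        # assumed power available to the brain

      landauer_j_per_bit = K_B * T * math.log(2)   # minimum energy to erase one bit
      bit_ops_per_sec = BRAIN_WATTS / landauer_j_per_bit
      print(f"~{bit_ops_per_sec / 64:.1e} 64-bit ops/s at the Landauer limit")
      # ~1e20 with these assumptions, the same order as the figure quoted above.
      ```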

    • Innocent bystander says:

      I spent a fair bit of time reading thick neuroscience texts before coming up with the calculation above. At first I was unsure what the effective unit of calculation was and how much it did. My conclusion was that the unit is the dendrite, and effectively it does two things. 1. If a number of inputs within a certain spatial area fire, then the dendrite passes a signal to the center. The axons are more or less passive in this process – they pass the signal to the next neurons in the chain. The dendrite process can be modeled as an addition plus a comparison. 2. Synapse strength is changed in response to coincident inputs and outputs. This is a coincidence detector plus an increment function. This occurs even for synapses that are not currently active, i.e. ‘potential’ synapses.

      You can then compare this calculation with the amount of CPU power needed to simulate the parts of the brain that we understand fairly well (eg low level visual cortex) and it matches reasonably well.

      You can also compare the overall behavior of computers of a given size with animals with brains of various sizes. Again they match up – computers 10 years ago were roughly on parity with insects and behaved accordingly. These days they are a little smarter but few would compare them to humans in general intelligence.

      While you could achieve higher efficiency with a custom transistor setup, the functionality above is well past the single transistor level.
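
      A minimal sketch of the two operations described above (thresholded summation, then a coincidence-driven strengthening of the active synapses); the threshold and increment values are arbitrary:

      ```python
      # Toy dendrite unit: 1. add weighted inputs and compare to a threshold;
      # 2. if it fired, increment the weights of the inputs that were active.
      def dendrite_step(inputs, weights, threshold=1.0, increment=0.05):
          total = sum(x * w for x, w in zip(inputs, weights))
          fired = total >= threshold
          if fired:
              weights = [w + increment if x else w for x, w in zip(inputs, weights)]
          return fired, weights

      fired, weights = dendrite_step([1, 0, 1, 1], [0.4, 0.4, 0.4, 0.4])
      print(fired, weights)   # True, with the three active synapses strengthened
      ```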

      • jim says:

        We don’t understand the low level visual cortex fairly well. We cannot even emulate the retina, let alone the visual cortex.

        Computers of ten years ago are not on par with insects.

        Computers of today are not on par with insects. They are not within a factor of a billion or a trillion of being on par with the worm C. elegans.

        Some time ago I had an encounter with a tarantula.

        I got into my car, and noticed a spider as big as my hand on the roof. Did not like it, so got a stick, intending to chase him out of the car. I harassed him with the stick, and he jumped on the stick, which stopped me from poking him.

        OK, thinks I, I will carry him out of the car to the wood heap, this being about a hundred yards, so he will not come back.

        So I carry him about a hundred yards, and the whole time he hangs tightly to the stick, watching my face. As we approach the woodheap he makes a huge leap from the stick into one of the woodheap crevices.

        Now you might suppose he is a robot following the rule “If in the vicinity of a great big monster, remain still, unless there is a crevice in jumping range that is small enough that the monster cannot follow”

        But evolution would not have prepared this rule for the case that he is being carried around. How does he realize that I, who appear to be stationary relative to him, am the monster, that the woodheap, which is moving relative to him, is not the monster, and that such crevices as the gaps in my loose fitting clothes are seriously unsuitable?

        Robotic behavior is inflexible and brittle. The slightest change in circumstances, any case that was not specifically and explicitly programmed for, will break the robot. The spider’s behavior, on the other hand, was at all times reasonable. He behaved sensibly in a situation that evolution had not specifically prepared him for.

        Further, he knew the wood heap was a woodheap, that I was a monster. Google’s cars have hundreds of times the power of a normal desktop computer, but they do not know a stoplight is a stoplight, except that a human has marked it on a map as a stoplight.

        • bringdanoize says:

          I read a blog post somewhere about an Australian spider that figured out how to activate a motion sensor at night to turn on a spotlight to help it catch bugs.

  25. Innocent bystander says:

    “The GPU on my desktop has ten times as much computing power as the typical male human brain, and it is not looking conscious.”

    According to my calculations the latest GPUs are at least a factor of 10,000 below the human brain. We are 20 years away from parity. This is a big difference. Try building a car with a 1/100 horsepower engine!

    The calculation goes like this: 100 billion neurons * 1000 connections (the processing happens in the dendrites) * 100 cycles/second = 10^16 operations per second. GPUs are nowhere near this. Possibly this is a big understatement, because there is a lot happening in terms of latent connections becoming actual connections. This may add as much as a tenfold greater processing capacity.

    You provide no evidence at all that consciousness is the key to the problem. That is simply an unjustified assumption.

    • Candide III says:

      +1

    • Congo Sam says:

      If IIS7 were generating web pages 10,000 times faster than now, it still wouldn’t be able to decide what to do about a dog in the road.

      Processing power alone isn’t an answer, that’s for sure. What’s “consciousness” anyway? If Jim’s saying we don’t have a clue as to the nature of the software, that’s a good point.

      Nor do we have much of a clue as to the nature of the “processor” itself. It’s massively parallel, in all likelihood, wildly fault-tolerant, nondeterministic. The kinds of “programs” that work best on computers don’t run well on it at all. It needs a laboriously acquired lookup table to add small numbers slowly. Adding large numbers is such a cluster###k that most units can’t do it at all. Reprogramming it can take years.

      The “processing power” metaphor is a snare and a delusion. The two machines are nothing alike.

    • Innocent bystander says:

      Possibly this could be wrong, e.g. Roger Penrose has claimed that neurons have quantum microtubules within them that have phenomenal processing power. If this is correct then wet brains are very powerful indeed.

      I have looked at Penrose’s claims and his basic argument seems to be, as Ray Kurzweil put it “minds are mysterious, quantum theory is mysterious, so minds must be explained by quantum theory”. Or to be less kind, Emeritus Professor syndrome.
