Lately this blog has largely been about the left singularity: that leftism leads to more leftism, which leads to even more leftism even faster, until everything goes to hell. The best known singularity, however, is the information technology singularity, the rapture of the nerds.
What of the theory that information technology leads to more information technology?
Well, in a sense, in the long run, looking back over the last several million years, it is obviously true. The problem is that it is far from obvious that dramatic technological change is coming any time soon. We have had quite a few dark ages interrupting the process, and there is what looks like a dark age coming up now.
The distinguishing feature of the technological singularity, what makes it singular, is accelerating progress. Progress accelerated from the sixteenth century to the early twentieth, but during the twentieth century, in one field after another, progress slowed, usually stopping altogether within the west, while continuing at a somewhat slower pace in Asia. Accelerating progress continues in DNA reading, but that is the last place where acceleration is still evident. Progress may well have stopped in DNA writing, and any future advances in DNA writing are likely to come from Asia. Rapid progress continues in integrated circuit manufacture, but that progress is not accelerating, and the shrinking number and increasing cost of fabs threaten to end that progress.
If yet another dark age hits, then when the next civilization rises, the high point of western civilization will be dated precisely to 1972 – last man on the moon, tallest buildings in the west, coolest muscle cars.
Information technology has been growing exponentially – everyone these days has a powerful computer connected to the internet with a lot of storage, giving them instant access to all the information in the world, which they mostly use to download badly made porn and worse written romantic fiction.
The continued advance of computing power, data storage, and internet bandwidth will likely give us … lots and lots of 3D computer animated porn taking place in virtual worlds with reasonable physics.
That everyone now has access to all the information in the world has not accelerated the scientific progress that underlies technological progress: arguably scientific progress was fastest around the 1870s, and it slowed dramatically after 1942, when peer review was introduced.
DNA, the unification of information technology with biology?
One of the areas of information technology that is progressing rapidly, and the only one where progress is still accelerating, is DNA reading.
DNA sequencing, protein sequencing, new techniques for revealing the three dimensional structure of proteins, and many other breakthroughs have generated a rapidly increasing flood of data, which is now running up against the capacity of computers to keep up with it. Back in the 1990s, everyone expected this would result in a huge flood of useful biotechnology, that biotechnology would be where fortunes were made, as previously computers were where fortunes were made, and that there would be extraordinary and rapid progress in medicine and other practical fields.
Instead, the number of new medical entities has been falling rapidly, with the result that drug companies are facing big troubles as their patents run out.
In the late 2000s, social decay outran information advance.
What is happening with biotechnology is cultural decline, the decline of science, social decay. Biomedical research is rapidly becoming less and less reproducible, so biotech companies cannot use academic research to produce new medical treatments.
Peer review is science by consensus. Instead of experimentalists telling the scientific community what they see, the scientific community tells experimentalists what they should see. Science ceases to be science, and becomes the consensus of the most holy synod. The academy is generating theology, not biology, and when biotech companies attempt to apply the latest advances in theology to produce actual treatments, the treatments of course do not work. Biotech companies have found that some time in recent years, biomedical scientific research ceased to be reproducible. Should a university’s biology department try to do biology, it is going to fall short on its goals for affirmative action and number of papers published. Reproducibility seriously slows down the production of papers, and oppresses female ways of knowing.
DNA synthesis
DNA writing may be progressing also, though this is less clear. The high point was Venter synthesizing a functional one million base pair chromosome to create a simple bacterium. It remains to be seen whether this is the start of creating synthetic organisms, or, like the landing on the moon, a civilizational high point signalling the beginning of social decay. The next step would be to create synthetic chloroplasts, based on blue-green algae, that can synthesize organic nitrogen from atmospheric nitrogen using sunlight, thus giving bioengineered crop plants a huge advantage over natural weeds, and then, eventually, work our way up to creating synthetic humans, free from accumulated genetic load.
It is probable that a human free from genetic load would be an all round gold medal Olympic athlete, extremely smart, with a reaction time of less than a hundred milliseconds, rather than the usual three hundred milliseconds. There are large variations in human reaction time and very fast reaction times appear to have no downside – people with very fast reaction times tend to be generally smarter and saner. Reaction time is a good indicator of brain efficiency and effectiveness. People with good reaction times not only make decisions faster, they make better decisions faster.
Since evolution has every reason to select for faster reaction time, and no reason to select for slower, the fact that we have any significant variation in this, that we are not all close to the shortest possible time, suggests that the problem is genetic load.
On average, humans acquire about seventy new single nucleotide variants per genome per generation. This causes genetic load.
The difference between someone whose reaction time is one standard deviation below average, and someone whose reaction time is one standard deviation above average, is apt to be decisive in a fight, overwhelmingly decisive in a fight with deadly weapons, and also quite important when dancing to live music in the presence of members of the opposite sex. These are big differences.
Reaction time is highly heritable, and one would expect it to have a large effect on survival and reproductive success, so, if it is not already minimized, random mutations must be increasing it as fast as evolution is shortening it.
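This is mutation-selection balance, and it can be put in a few lines of simulation. A minimal sketch, with invented parameters: only the seventy-new-variants figure is from this post, while the fraction of new variants that are harmful and the selection coefficients are assumed for illustration. At equilibrium the mean load settles near U/s (deleterious mutation rate over selection cost), so a tenfold relaxation of selection means roughly a tenfold rise in standing load:

```python
# Toy simulation of mutation-selection balance.  Hypothetical numbers:
# only the 70 new variants per generation figure is from the post.
import numpy as np

rng = np.random.default_rng(0)
NEW_SNVS = 70             # new single nucleotide variants per generation
FRAC_DELETERIOUS = 0.1    # assumed fraction that are mildly harmful
U = NEW_SNVS * FRAC_DELETERIOUS

def equilibrium_load(s, pop=10_000, gens=2_000):
    """Mean deleterious mutations per genome under selection cost s."""
    loads = np.zeros(pop)
    for _ in range(gens):
        fitness = (1 - s) ** loads                      # multiplicative cost
        fitness /= fitness.sum()
        loads = rng.choice(loads, size=pop, p=fitness)  # selection
        loads = loads + rng.poisson(U, size=pop)        # new mutations
    return loads.mean()

print(equilibrium_load(s=0.02))    # strong selection: settles near U/s = 350
print(equilibrium_load(s=0.002))   # relaxed selection: near 3500, tenfold more
```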
There is evidence that reaction time has increased by substantially more than a standard deviation since first measured in Victorian times, probably due to the relaxation in selection and the resulting accumulation of genetic load due to random mutation.
Silverman, I. W. (2010). Simple reaction time: It is not what it used to be. American Journal of Psychology, 123, 39–50.
Unfortunately, Venter’s one million base pair chromosome may turn out to have been the high point of DNA writing, much as 1972 saw the last man on the moon. No further functional chromosomes have been synthesized de novo, and plans to create them are slipping further away, not getting closer.
If Moore’s law for DNA writing is still on trend, we should be synthesizing human eggs and sperm around 2016 or so. This does not seem at all likely, suggesting that the trend has broken, just as the space travel trend broke in 1972, though it is too early to be sure yet. In the short run, Moore’s law graphs are bouncy, but this bounce is troublingly large.
DNA Reading is not going to change things in itself:
Reading genomes, DNA sequencing, is still zipping along on a Moore’s law curve with a very fast exponential. Pretty soon, it will be reasonable to do a complete high accuracy gene read on every human that shows up for medical treatment. Unfortunately, our ability to make sense of gene reads is not improving, so it is far from obvious that this will produce substantial medical benefits. It turns out that most “non coding” or “junk” DNA is not junk. Instead of coding for proteins, it codes for which proteins will be produced under what circumstances, and what shall be done with those proteins. Unfortunately, it is very hard to make sense of it.
We should be able to get a low accuracy read of someone’s genome for about a hundred dollars in 2018. Unfortunately a low accuracy read is not particularly useful, because you want to be able to detect rare mutations, since every single person carries thousands of rare mutations. So we should get good information on individual genomes in around 2021, at the present rate of progress. This will provide significant medical benefits, but it is unlikely to be a game changer.
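A back-of-envelope sketch of why accuracy matters here; the 1% per-base error rate is an assumed illustrative figure, not a quoted specification. At that rate, a single low accuracy pass over three billion bases miscalls on the order of thirty million positions, which swamps the few thousand genuine rare mutations each person carries, whereas deep coverage lets the reads vote the errors away:

```python
# Probability that a strict majority of reads is wrong at one position,
# assuming independent errors at an illustrative 1% per-base error rate.
from math import comb

def majority_error(depth, e=0.01):
    return sum(comb(depth, k) * e**k * (1 - e)**(depth - k)
               for k in range(depth // 2 + 1, depth + 1))

print(majority_error(1))    # 0.01: one error per hundred bases read once
print(majority_error(30))   # ~1e-24: at 30x depth, errors are voted away
```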
How about artificial intelligence?
Artificial intelligence is doing fine. Unfortunately, artificial consciousness is going absolutely nowhere. Chess playing computers are very impressive, and so is Google translate. Unfortunately, Google translate suffers from the Chinese Room syndrome. If you look at the typical errors that Google makes, it is obviously translating words into the nearest word without knowing what words mean, without even knowing that words have meanings. It is, like the Chinese Room, not conscious, and the lack of consciousness shows.
If you look at the progress we are making towards conscious computers, it is as if we wanted to build a plane, a machine that flies like a bird, but had no idea about aerodynamics, engines, or even that air exists, and so we found ourselves building bigger and better pogo sticks. No matter how big and how good the pogo sticks we build, they are not going to be a plane, not getting any closer to being a plane, and no matter how intelligent the computers we program become, they are not getting any closer to being conscious.
Maybe in future we will create artificial consciousness, but at the moment there are no good indications we are headed in that direction, or even know what direction to head. Maybe computer consciousness will quietly appear out of artificial intelligence, but there are no indications that this is likely.
Perhaps the reason we are not making any obvious progress is that we have no clear idea what consciousness is, how it works, or even what it does, as if we were trying to build a plane, but did not realize that air existed.
If we cannot design brains, maybe we can copy them: How about uploads, simulating brains in software and hardware?
The tiny worm Caenorhabditis elegans has just three hundred and two neurons, whose connections have been completely mapped. While a lot of its behavior is arguably well described and well explained in terms of those neurons, no one has actually managed to create a simulated worm driven by simulated neurons. Neurons frequently process information in ways that are not altogether obvious, not always easy to explain or describe, and each attempt to upload the worm reveals that we are even further from the capacity to do so than we thought. We are not making progress towards uploading; we are making progress towards realizing how difficult the task is.
Nanotechnology. When line widths get down to one nanometer, is that not nanotechnology?
Well, it is not the nanotechnology that is likely to transform the world. What people are hoping for is nanoassemblers in your laptop, so that you can download a new computer and a new nanoassembler, and have your old system make the new one. Laptops could create new laptops as rabbits create new rabbits, except that they could create anything, not just more laptops.
That sort of nanotechnology would mean that everyone would become largely independent of physical trade. The entire economy would move to the internet. We could settle space and inhospitable parts of the earth, because we would not need to do shopping trips every few days. We would only need energy and raw materials, which tend to be abundant in inhospitable unsettled places. It would also mean an end to gun control. The government would find it mighty difficult to stop people from settling the unsettled places.
If we were heading towards nanotechnology in that sense, fabs would be getting smaller and cheaper. Instead, they are getting bigger and more expensive.
If we were using printers, rather than photolithography, we could be on that path.
The way we make very small things is that we coat a surface in resist, then we illuminate the resist with an image, so that some parts of the resist are changed by the light, and some parts unchanged by the darkness, which is to say, we create a photo. Chemical processes then transform the surface, so that we get some detailed structure on the surface. Rinse and repeat.
For example, to generate the electrical leads, we coat the entire surface in a very thin layer of conductive metal, then we coat the metal with resist. The resist that is exposed to light becomes insoluble. We dissolve away the unexposed resist, just like developing an old style black and white photo. We then dissolve away the metal except where it is protected by the exposed resist.
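The pattern transfer just described can be sketched as a toy boolean model, purely illustrative, with the wafer as a one-dimensional array and the mask as the projected image:

```python
# Toy model of negative-resist pattern transfer: the mask image ends up
# as metal wiring on the wafer.  Purely illustrative.
import numpy as np

mask = np.array([0, 1, 1, 0, 0, 1, 0], dtype=bool)  # the projected image

metal   = np.ones_like(mask)    # 1. coat the surface in a metal layer
resist  = np.ones_like(mask)    # 2. coat the metal in resist
exposed = resist & mask         # 3. expose the resist through the mask
resist  = exposed               # 4. develop: unexposed resist dissolves
metal   = metal & resist        # 5. etch: unprotected metal dissolves

assert (metal == mask).all()    # the image has become metal wiring
```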
Now suppose, instead of using light, we had print heads that made mechanical contact with the surface, laying the resist down, or scraping it off, or transforming it with electrical current. Then the trend would be towards larger and larger numbers of smaller and cheaper fabs.
But instead, each generation of fabs is bigger and more expensive than the last one, threatening an end to Moore’s law for semiconductors. We are very close to having only one fab in the world, and the next step after that is no fabs.
Even if photolithography can take us all the way down to the nanometer scale, which is not clear, as fabs get ever fewer and more expensive it is still not the nanotechnology that is hoped for, the nanotechnology of nanofabs.
The failure.
Continued progress with existing technologies will not get us to the technological singularity. We need some genuinely new technologies to get us there, and the rate at which we have been introducing genuinely new technologies has been slowing down markedly since the nineteen forties, arguably since the eighteen seventies. Biotechnology provides a clear and striking recent illustration of social decay outpacing technological advance.
In 1967, the writers of Star Trek assumed that by the 1990s we would have large nuclear powered orbit-to-orbit interplanetary craft with large crews. Given the progress that had been happening, that seemed at the time a reasonable expectation. Progress has slowed, slowed strikingly and obviously.
No technological singularity is likely until we recover from social decay. Right now we are heading downwards fast, and have no idea how to turn around once we hit bottom, or how deep bottom is going to be. While China and Russia are recovering nicely from their respective left singularities, post Roman Britain tells us that a dark age can be very dark indeed.
You are missing a few things. First, the internet. I would say that it continues to deliver force multiplication to human intelligence at an exponential rate. Second, the paleo movement, born on the internet. This is where the real medical progress is occurring. Third, the resurgence of white nationalism and the emergence of HBD and Neanderthal identity. These hold the promise of a much faster reboot of Western civilization in racially pure enclaves. Fourth, the new political theory of which you are a part, formalism. Fifth, Game, another key element in the fast reboot of Western civ, since it enables the resurgence of Patriarchy in a post-industrial economy.
So I agree that the current Western system is declining, however I disagree that progress has stopped. I see the seeds of a new system already sprouting forth tender shoots of what will one day be a mighty forest.
But until it is a mighty forest, likely not much technological progress.
We might also add Austrian economics, which has broad sway on the internet even though suppressed by the ailing system. Another muscular component ready to be slotted into the powerful body of the New West.
China and Russia have recovered from their left singularities because they are populated by Chinese and Russians respectively. African countries don’t recover from left singularities and it is doubtful that countries that are significantly non-Asian or non-European can.
I’m no scientist but my layman’s impression is that understanding in many fields has massively progressed in the last 15 years. The understanding of the brain and endocrine system was very crude thirty years ago compared to now. Major progress has been made in mathematics and computer science in fields like machine learning. Solutions like Bayesian tracking didn’t exist 30 years ago. You pooh-pooh the newish algorithms behind working automated translation, but I wouldn’t; it’s still early innings. And a couple of cognitive architecture research efforts are in fact showing some progress towards building “understanding” machine minds.
The applications of all the new knowledge in recent years haven’t been as spectacular as a man on the moon, but we’ll see. A talking autonomous android may be on the horizon.
A lot of this is theological drift rather than scientific progress, changing fashions in the official religion.
The algorithms provide translation without understanding, without knowing meanings, without knowing there are such things as meanings, after the fashion of the Chinese Room. The problems that we encounter are the problems one would expect of such algorithms, for example translating an idea as “socialism” when the intended meaning has to be something like “society”, “social expectations”, “conformity”, or “social pressure” or some such. So it is not early innings yet. They are doing the best that can be expected of translation without consciousness. Further improvements in a pogo stick will not turn it into a plane. To do better, would need to know what words mean.
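A minimal sketch of that failure mode, with invented frequency counts standing in for bilingual corpus statistics: each word is mapped to its single most frequent equivalent, and nothing anywhere in the pipeline represents what the words stand for:

```python
# Invented counts standing in for aligned-corpus statistics.
FREQ = {"social-term": {"socialism": 500, "society": 480, "social pressure": 20}}

def translate(words):
    """Emit each word's most frequent equivalent; context is ignored."""
    return [max(FREQ[w], key=FREQ[w].get) for w in words if w in FREQ]

# Whenever "society" or "social pressure" is meant, the most frequent
# equivalent "socialism" comes out anyway: translation without meanings.
print(translate(["social-term"]))
```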
> To do better, would need to know what words mean.
Ignoring the metaphysical questions on “knowing”, the understanding of a word and its relationship to all words in other languages is probably just a question of forming a sufficiently complicated hypergraph. I think you underestimate progress on this front and the promising work in developing a complex fusion of different probabilistic reasoning algorithms. People are hard at work trying genuinely new things with success; it’s not stagnant. We shall see in coming years how it goes, shan’t we? I’m not sure I share your pessimism.
The late Roman and Chinese empires had great technological successes left and right until the very day the barbarians came over the walls. They just couldn’t afford to implement them on a large scale. The Romans had industrial scale grain milling that was water powered. They only built one or two of them because of the cost. I’ve read about large northern Chinese cities built with cannon towers to keep the Manchu out, but the cannons were never installed due to production costs. Roman armies went from being fully funded by the state with chain mail armor to being little more than militia with whatever gear they could scrounge up.
It’s the quality of your people that determines the rate of technological development and implementation. Japan had better guns than the Europeans within 30 years of adopting the tech. New tech being too costly to implement is a sign that the general populace is no longer productive enough to support a technological society.
In the areas of DNA, computers, and a few others, the gains to be made are still cheap enough that the general decline in the quality of the western peoples is not enough to stop them. In areas like cars and power, i.e. areas where the really cheap gains have all been exploited, we are seeing major slowdowns and even declines from what our parents knew, or we are paying a much higher price just to tread water.
No they did not. The Roman civilization was technologically and mathematically inferior to the Greek civilization (look up the Antikythera mechanism). Gibbon called the art, science, and technology of the late Roman empire “the second childhood of human reason”.
When Europeans encountered the Chinese, the Chinese cannon were ancient, and their newer cannons were more dangerous to those firing them, than to those fired upon. The technological level of Chinese guns had deteriorated. Probably their ships had deteriorated also, but that is hard to tell since ships do not last as long as guns.
Rome taxed above the Laffer limit. So, unable to raise more money by raising taxes, it proceeded to debase the currency. Remarkably, the Romans achieved hyperinflation with a metallic currency. So Diocletian proceeded to raise more money by instituting a fully command economy, Pharaoh style. If overtaxed people will not work, make them work. In the short run Diocletian was able to tax well above the Laffer limit, but of course, over time the economy collapsed.
One word can stand for lots of different things. What thing it stands for is usually obvious from the context, from the meaning of the context. What is the correct translation depends on what it stands for. The errors characteristic of Google Translate reveal that, like the Chinese Room, it does not know that words stand for things.
We are hoping for a plane, and keep getting bigger and better pogo sticks.
I agree about google translate. However, “Chinese Room” is not the phrase you want to use here. Searle invented this thought experiment to demonstrate something completely different, namely something about the nature of consciousness. The Chinese Room program can in theory be as simple or sophisticated as you want. This does not affect Searle’s thought experiment. The Room program can understand the meaning of words, use context, appreciate poetry, whatever, but the human “executing” it still does not understand Chinese. Instead of inaccurately saying “Chinese Room”, why not just say “google translate”? Those who know about it probably know it’s dumb in its particular autistic/narcissistic way. If they don’t know the difference, they probably won’t understand even if explained to.
The lesson some people, for example Penrose, draw from the Chinese Room is that simulating consciousness by algorithmic methods is not consciousness, that consciousness is not reducible to algorithms, or not in the way that the Turing Test would lead us to attempt – that the Turing Test points in the wrong direction – that attempting to imitate what conscious creatures can do without understanding of how they do it or what they are doing is not going to get us anywhere.
Google translate simulates consciousness by algorithmic methods. On close inspection, whether or not consciousness is reducible to algorithms, that it is not the real thing shows through.
Oh, I see. I don’t agree with Penrose, and if Searle argued the same way I don’t agree with him either. Frankly, I think all this consciousness mysticism is a load of new-age crap.
People are apt to explain consciousness with new age crap, but that consciousness is neither explained nor understood is not new age crap. It could well be that there is nothing very special about consciousness, and the reason we are not making progress is similar to the reason we are not making progress in many other fields. Nonetheless, we are not making progress.
Indeed. Please explain how it works, I’m all ears.
My point is that no one understands how consciousness works, which may well explain our lack of progress in creating it.
You know what our discussions of mind and consciousness remind me of? A cat or a dog before a mirror. It jumps, paws and sniffs at it, but the poor thing doesn’t understand that what it sees is just a reflection.
Computers will never be able to think unless they know context.
But they will always be guessing context, especially in short statements.
And the more obscure the usage of a term, the more likely it would result in an incorrect translation (What’s the delta?; In French there is no term for wife, only husband.).
If names of places aren’t deciphered, they will always be incorrectly translated. How would it translate: Division Street, Milwaukee and State, Picking Way?
OT: Kurzweil now works for Google as Director of Engineering. I guess the other singularity will come with mandatory advertising. Whee!
> Since evolution has every reason to select for faster reaction time, and no reason to select for slower, the fact that we have any significant variation in this, that we are not all close to the shortest possible time, suggests that the problem is genetic load.
What about energy cost? The price of using the structures, the price of generating and maintaining them.
Not saying load isn’t more likely.
Depends on the mechanism. If faster reaction time rested on running neurons hotter, yes. Might well be the case. On the other hand, the correlation of reaction time with intelligence and other indicators of competence suggests that faster reaction time primarily reflects better organized neuronal pathways – the signal simply takes a more direct path.
People you gotta read Julian Jaynes. Consciousness isn’t about magic but it’s not what we think it is. Emulating a speaking mind is not about consciousness. A speaking machine shouldn’t be that hard really. Retards talk too, people are very predictable once you get a hold on them.
As a polyglot, let me tell you that Google Translate sucks very much, and hasn’t gotten any better in recent years. A big chunk of language meaning is about context; you can’t even translate it properly. Book translations are copyrighted for a reason.
I’ve read him. Quite ingenious and persuasive. Neal Stephenson used his theory in “Snow Crash”. “The Diamond Age” is very interesting too, even though it’s SF.
He argues that consciousness is recent, that not only are worms not conscious, humans were not conscious until recently. That is an entirely silly position. If you run into animals in the wild, they are obviously conscious.
We have old books, the oldest being the Epic of Gilgamesh. Obviously the guys that wrote that were conscious, and metaconscious. They thought about beliefs, and asked how they knew what they knew. They not only thought, they thought about thinking.
Well he does found his theory on the Iliad. And his points are eerily interesting.
Saying that animals in the wild are conscious is like saying that humans aren’t conscious. Consciousness is either a human exclusive or it doesn’t mean anything at all.
You can make the point that there’s a continuum of consciousness depending on neural complexity. But that’s real boring and doesn’t explain the internal dialogue only humans have. And not even all humans have it. Trust me on that.
Now, why would consciousness be something exclusive to humans? The Wikipedia definition does not equate it to sapience, but rather to sentience. Animals (especially higher order animals, such as birds and mammals) are obviously possessed of sentience, as Jim says. They can learn and communicate and even sometimes realize that their reflection is themselves. Some primates have been shown to be capable of reason – such as the case of the chimp who, upon being told by the researcher to pour water on a carrot, instead threw the carrot outside. The researcher asked why, and the chimp answered that the carrot was already wet – it was raining outside.
How do you know that only humans do have internal dialogue? I’m not going to trust you on this.
To say that animals, and probably all bilaterians are conscious, says that bees have something that assassin drones do not have.
And it very much looks as if bees do have something that assassin drones do not have.
Computer programs are brittle. They fall apart when they hit a case that has not been specifically tested for. Conscious beings are not brittle.
Bees, like milkmen, have a regular round, visiting the same flowers in the same order at the same time of day. A scientist marked bees, so he would know which one was due where, and then kidnapped a bee into a cigar box as it left the hive, transported the box a long way in a random direction. On being released, the bee immediately went up high, presumably so that it could see where it was, and resumed its rounds, going to where it should have been had it not been interrupted.
You could program a robot to do that easily enough, but natural selection did not program bees to deal with scientists and cigar boxes.
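The programmed version would be something like this hypothetical sketch: the round is a hard-coded table, and the recovery rule exists only because a programmer anticipated lost time. Any disruption the programmer did not anticipate still fails, which is the brittleness at issue:

```python
# A hard-coded round.  Stops and times are invented for illustration.
ROUNDS = [("clover patch", 900), ("rose bed", 930), ("orchard", 1000)]

def next_stop(clock):
    """Resume the round at whatever stop is due at the current time."""
    for flower, due in ROUNDS:
        if clock <= due:
            return flower
    return "hive"

# Handles the cigar-box trick only because we coded for lost time:
print(next_stop(945))   # -> "orchard": skip ahead, as the bee did
```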
Similarly, if cattle get a chance to take a look at the inside of a slaughter house before being stunned, they figure out what is up fast enough, even though natural selection never prepared them specifically for slaughterhouses.
I carry a very large tarantula spider, a big jumping spider, on a stick to the woodheap, since I don’t want it around the house, and the woodheap seems like a good place for huge spiders. As we approach the woodheap, it makes an impressive jump from the stick to the woodheap and vanishes into it. Competent and appropriate behavior, but I doubt that evolution gave spiders programming for the specific case that they are being carried around.
Robotic behavior is brittle, each very very specific case has to be very very specifically programmed. Animal behavior is not. Animal behavior gives an overwhelming impression of consciousness. Robots do not.
You and AnnoDomini should pay attention to what Jaynes says consciousness is. His definition is rather more restricted than the usual vague notions. E.g. do animals narratize their experience? See pages 3-8 in his short article “Consciousness and the Voices of the Mind” [pdf]
I very crudely skimmed through the article. It seems Jaynes proposes two things: a definition of consciousness, and that humans were bicameral (hence non-conscious) as late as 3000 years ago. However, this would mean that the humans of many primitive aboriginal groups, separated from European/Asian cultures for tens of thousands of years, should be bicameral and non-conscious. That doesn’t seem right, either in the definition of consciousness or in the theory of bicameral behavior.
Do I narratize my experience?
Most of the time I don’t. Nonetheless, I am obviously conscious. Most of the time, I only have voices in my head when I am preparing a blog comment, or some such.
He is trying to cook up a definition of consciousness that excludes everyone except intellectuals, and so focuses on features that are trivial, reflecting the culture of intellectuals, rather than the characteristics of intelligence.
Sometimes I hear voices in the sounds of sea and wind and rain, but if they actually said anything substantial, that would not be consciousness, but madness.
We have speaking machines. Have had them since 1877. What they lack is not what every retard possesses, but what every hunting spider, and perhaps most worms, also possess.
And if we knew how to give a machine what a worm possesses, we would probably have uploaded the tiny worm Caenorhabditis elegans by now.
That does not necessarily imply that what a worm possesses is magical, but it does imply that what a worm possesses is very hard to understand.
As an AI worker I think that to found a theory of strong AI is very hard, general relativity + quantum mechanics level hard. But unlike intensive engineering problems like genetic engineering, it only takes one person to found the theory. Once it’s there, strong AI will develop very quickly. It is anyone’s guess when the lightning will strike the right spot.
In the long run, the trajectory of modern civilization is quite obvious once you realize why it happened. Exponential growth of what anthropologists call cultural complexity is an anomaly.
What do you think it is that drives science, technology, economy and all the rest? It’s human intelligence. Does this mean that we today are more intelligent than the people of the Dark Age? That’s precisely what it means. Since at least the 1980s we have known that IQs in the most advanced countries have gone up by 25 or 30 IQ points during the 20th century. This is known as the Flynn effect. It is caused most likely by more intense schooling and also by better nutrition, mass media, and a lot of other improvements.
Here is how we got to where we are: Before the Industrial Revolution, Europe had a marriage system that ensured that only those who could afford it ever married and had children. Therefore the smartest and especially the richest had the most children. The best review of this is Vegard Skirbekk’s paper in the online journal Demographic Research (volume 18, article 5). Little by little people got smarter, until they got the Industrial Revolution rolling. And with it they got better living conditions, schools for every child, and all the rest. This made the children of the next generation even brighter. These children then made even more inventions than their parents, improved the environment even more, so that their children got even brighter… That’s how you get exponential growth of everything, including intelligence, technology, prosperity and all the rest.
There are 2 reasons why this exponential growth will not end with a singularity. The first is that human brains are not infinitely malleable. No matter how good schools, nutrition, health and so on are, you are going to hit the wall when people are functioning close to their biological limits already. You get diminishing IQ returns on environmental improvements. And when intelligence is at its limits, you will soon reach a limit for innovation and economic growth as well.
The second problem is that once people have an average IQ of at least 65 or 70, they start limiting their fertility. Worse, the brightest start it first, and even today in Europe and North America, those with higher IQ have the lower number of children. Internationally, low-IQ countries have higher fertility than high-IQ countries. Therefore once the environmental conditions are near-optimal everywhere, the average IQ of the world population is not only not rising anymore. It starts declining.
That’s what we see today. We see it in scientific studies that track IQ trends. Right now we have Flynn effects in many developing countries, but IQs are stagnating in the more advanced parts of the world, for example in Denmark, Norway and Britain. Knowing this literature, I can only laugh at the naive idea that exponential growth is going to continue forever. The real question is: When and how will this civilization end, given that there are no barbarians at the borders who can take over and try again later?
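The feedback loop and the ceiling described above fit in a toy model, with all numbers invented: the environment raises IQ, IQ improves the environment, but each generation closes only a fraction of the remaining gap to the biological limit, so growth that looks exponential early on flattens out rather than diverging:

```python
# Toy model of environmentally driven IQ gains hitting a biological
# ceiling.  All numbers are invented for illustration.
CEILING = 100.0   # biological limit in this toy model
GAIN = 0.2        # fraction of the remaining gap closed per generation

iq = 70.0
for generation in range(12):
    iq += GAIN * (CEILING - iq)   # diminishing returns near the ceiling
    print(generation, round(iq, 1))
# Rapid early gains, then a plateau: early exponential-looking growth
# does not imply a singularity at the end of it.
```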
There’s a good article by Peter Norvig on statistical machine translation:
http://norvig.com/chomsky.html
Much though it pains me to agree with Chomsky, Chomsky is correct, and the article you cite is wrong:
The article tells us:
I choose, at random, not cherry picking, a famous Chinese quote, because I just used it in a previous comment about the war in Syria:
Usually translated as
Translated by Google translate as
Statistical translation works better when languages have a recent common ancestry, for example Danish and English.
Which reveals what statistical translation is not doing.
But the fact remains that Googlian translation works better than Chomskian translation, which I don’t think ever worked at all.
This “understanding” argument reminds me of chess programs, whose brute force approach was once thought to be incapable of achieving the understanding of the game that human players possess. However, as it turned out, sufficient computing power was enough to elevate the best programs to a level at which their play is incomprehensible even to grandmasters. If we didn’t know how they were programmed and if they didn’t output variations, we’d probably think that they possess knowledge of the game that we can never match.
As for the Chinese proverb, I don’t know Chinese, but I can’t help but wonder whether someone who does know Chinese but doesn’t know any proverbs would be able to translate it properly. And familiarity with proverbs is not understanding, it’s knowledge. Not that Google translates “ordinary” Chinese text (such as, for example, news articles and blog posts) particularly well… but I’m not sure that this is caused by the languages not having common ancestry. They may not have enough data yet.
Mandarin’s grammar is pretty similar to English: same word order, same lack of complex morphology, etc. So parsing texts such as news articles, which tend to have a very limited syntax, isn’t that hard.
More creative Chinese texts are butchered by Google Translate, because the Chinese syntax grows more complex.
Google Translate might be said to work properly when it becomes able to translate properly between English/Japanese or English/Arabic, languages with very different grammar. As of now they are particularly bad, and not for lack of data; Japan translates heaps of English text every year.
I’m often surprised by Google Translate’s proficiency in Japanese. Intuitively, statistical translation can’t possibly work that well, but it does. This, to me, indicates that what’s missing might be just quantitative, not qualitative.
Well you are easily surprised. GT’s Japanese can’t handle basic polysemy. With the sheer amounts of data it has, that must mean the algorithm is very simple. It just can’t handle context, and that hasn’t changed in years.
The thing is, the statistical approach probably means that you can achieve a certain level of accuracy, say 60%, but anything beyond that may be impossible. It’s hard to take context or fine grammatical nuance from data. If GT becomes capable of translating ?? distinctly that would be greater than most human translators.
I’m not sure that GT has that much data. It does have vast amounts of Japanese text on its own, and it does have vast amounts of English text on its own, but does it have vast amounts of text in both English and Japanese, professionally translated?
The errors that Google Translate makes, the kind of errors that it makes, for example on Chinese blog posts, reveal a lack of understanding.
That a robot assassin drone’s behavior is brittle, while a bee’s is not, reveals that the bee is aware, and the robot assassin is not.
Computers can play chess very well. Anything that requires intelligence, they can do very well. Something else is missing, something that is not intelligence, something that we cannot quite put our fingers on, nor entirely explain.
Chinese proverbs are mostly classical Chinese, which is a different language altogether. Putting them through Google’s Mandarin translator isn’t quite right.
I agree though that statistical translation won’t work anytime soon.
Fair enough. I googled up Syria in Chinese, got a bunch of blogs and suchlike reporting the conflict. They seemed somewhat intelligible, unlike the proverb:
You can figure out what it means, but, evidently, google could not.
Well, adapting to novel situations is more or less the definition of general intelligence.
Chess is an example in which we went from “these stupid machines will never be able to beat an international master” to “machines are godlike and no grandmaster wants to play them” without a breakthrough, without discovering any magic ingredient. What was missing were just computing resources (mostly). And now no human can explain why a program played that move. Computers “understand” chess better than humans, even though we know that they don’t understand it at all.
It is possible, I think, that all of AI is like that. We may just need relatively simple algorithms, coupled with enormous computing power and massive amounts of data, for the machine to appear intelligent or conscious. And when it does, we’ll not be able to explain why it appears intelligent or conscious, even though we have written the algorithms and provided the data.
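For concreteness, the core of a brute-force game searcher really is this small; a sketch assuming a hypothetical game-state interface (legal_moves, apply, and score from the side to move):

```python
# Minimal negamax search.  The game interface is hypothetical; all the
# "understanding" lives in exhaustive search plus a scoring function.
def negamax(state, depth):
    """Best score achievable by the side to move, by brute force."""
    if depth == 0 or not state.legal_moves():
        return state.score()          # static evaluation, no insight
    return max(-negamax(state.apply(m), depth - 1)
               for m in state.legal_moves())

# Nothing here knows what a fork or a sacrifice is; with enough depth
# and a decent score(), the search plays as if it did.
```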
But we have enormous computing power and massive amounts of data, and Google still does not appear conscious.
Exactly, and the two possible extrapolations from this state are (1) GT will never appear intelligent, no matter how much power and data it has, or (2) GT will move almost imperceptibly from “stupid” to “superhuman”, as chess programs did, without any fundamental changes (and then Baker will tell us that GT is nothing special and we just like to mystify it). In the chess program case, most people were in the (1) camp, and they turned out to be wrong. This doesn’t prove anything, of course, but it does hint that (2) is possible.
Were they? That is not at all how I remember it. To my recollection, from the very first days of computing, as soon as a chess playing computer became possible, everyone always believed that with sufficient computer power and program improvements, machines would beat humans at chess, and that this would be accomplished in the very near future.
Data is not a prerequisite for intelligence. A baby is born with zero data, but is intelligent. More computing power can make one look smarter, but computing power is not what defines intelligence.
> Anything that requires intelligence, they can do very well. Something else is missing, something that is not intelligence.
No. “Intelligent people invented and perform a task” does not imply “performing the task requires intelligence”. “Performing a task as well as an intelligent person can do” does not imply “it is as intelligent as the person”.
You can look up “weak AI” and “strong AI”. A weak AI can perform a well defined task. A strong AI can perform all tasks an intelligent agent can (as long as the appropriate inputs and outputs are provided), including adapting to an arbitrary environment to perform arbitrary tasks to optimize arbitrary goals (this is what separates it from weak AI).
The Turing test says: if a machine acts as capable as an intelligent agent, then it is as intelligent as the agent. If we adopt this as the definition of machine intelligence, then only the strong AI qualifies as an intelligence. Some think that if you keep pushing the capability of weak AI, eventually it will become a strong AI. But it hasn’t happened yet.
A bee cannot pursue arbitrary goals, only pre-programmed goals such as collecting honey, but it can and does pursue such goals untroubled by scientists messing with it. It adapts to disruptions that natural selection cannot have specifically programmed for. “Intelligence” is not what is missing. What is missing is what bees have.
“To my recollection, from the very first days of computing, as soon as a chess playing computer became possible, everyone always believed that with sufficient computer power and program improvements, machines would beat humans at chess, and that this would be accomplished in the very near future.”
People who were actually good at chess didn’t believe it, some of them (notably Kasparov) right until it happened. Their argument was that machines lack understanding and creativity, and they were correct in that. However, it turned out that the appearance of understanding and creativity is an emergent byproduct of sufficiently powerful brute-force search.
Got a quote from Kasparov?
It was not that he was expecting that computers would never be better than him, but that he was not expecting that computers would be better than him in 1997, and, in view of Deep Blue’s uncharacteristic play, I rather think he was right to not expect it in 1997.
That he was even with computers until 2003 suggests that the Deep Blue team did, as he claimed, cheat by giving the computer human advice in the middle of the game, resulting in anomalously un-computer-like play in the 1997 match.
I can’t find a quote from Kasparov that properly reflects what I remember of his and other grandmasters’ opinions. The best I could do was this:
“If a computer can beat the world champion, the computer can read the best books in the world, can write the best plays, and can know everything about history and literature and people. That’s impossible.”
The link supports your recollection, and contradicts mine.
> “If a computer can beat the world champion, the computer can read the best books in the world, can write the best plays, and can know everything about history and literature and people. That’s impossible.”
The computer has already beaten the world chess champion, but cannot do the rest, proving this statement wrong. If a computer can learn to (not be programmed to) do all of these and more, then it is a strong AI, and is intelligent by Turing’s definition.
The opinions of chess grandmasters on the nature of AI are irrelevant. They are not AI scientists. There are multiple fallacies in their understanding.
On consciousness: what Jaynes was talking about was conscious awareness, i.e. the internal dialogue people have with themselves that forms a concept of self. That’s what psychologists call consciousness. Obviously animals don’t have that.
Your idea that consciousness is what makes the difference between a bee and a drone, well you engineers are just annoying like that. Can’t you use a different word? Recursiveness, or something like it. The concept is important, no doubt about it. But we aren’t talking about the same thing. Bees’ behaviour is orders of magnitude more complex than any human-coded program, but that doesn’t make them sentient. They are still instinct-driven automatons.
It’s possible that the difference here is also quantitative rather than qualitative. Humans and animals with a similar IQ have remarkably similar behavior. A bee with an IQ of 100, if such a thing could exist, may well have a concept of self and be consciously aware of itself. Dolphins certainly appear to be.
The thing is that IQ is not the only variable. I’ve met smarter people than me with way less self-awareness. Jaynes’ strongest point is that self-awareness is a cultural trait, it is taught to children.
If consciousness were a function of IQ nobody would write a book on that. It’s boring and quite testable.
Then your definition of consciousness is irrelevant to the subject being discussed here, and you shouldn’t have brought up Jaynes in the first place. They were talking about the general intelligence that enables one to speak and understand meanings, to learn and to adapt to the environment. Such behaviors do not require the cultural trait of consciousness as Jaynes defined it, and I doubt it is what common people mean when they speak of consciousness.
I just like bringing up the subject, it always produces interesting discussions.
Sorry to annoy you engineer fellas. Never heard consciousness talked about as the ability to adapt. I do think that general intelligence should just be called general intelligence, but anyway.
According to Wikipedia, the word encompasses a much wider meaning than Jaynes’s definition. There is no precise definition of it.
We can break it down into more specific descriptions:
to be environment-aware: a one-pass brain. sensory input -> neural pattern output -> action. This is roughly the state of mind when a person is on autopilot.
to be self-aware: to be able to perceive one’s own mind pattern. sensory input -> neural pattern output -> neural pattern input -> … -> action. This is the state of mind when one is reasoning on a problem.
to be meta-self-aware: to be able to perceive one’s self-awareness. This is the “philosopher’s mind”, when one’s mind detaches from oneself, and observes and reasons on his place in the universe.
Meta-self-awareness is just a more advanced form of self-awareness, since self-awareness itself is a neural pattern which can be perceived. Jaynes defined consciousness as meta-self-awareness (though I don’t agree with his detail of the bicameral mind), but I think self-awareness is closer to the commonsense understanding of the word.
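The two architectures can be sketched directly, with f standing in for whatever the network computes (hypothetical stubs):

```python
# One-pass versus self-aware processing, as described above.
def environment_aware(f, sensory_input):
    """One pass: input -> pattern -> action (the autopilot mode)."""
    return f(sensory_input)

def self_aware(f, sensory_input, passes=3):
    """The output pattern is fed back in and re-examined before acting."""
    pattern = f(sensory_input)
    for _ in range(passes - 1):
        pattern = f(pattern)   # perceiving one's own mind pattern
    return pattern
```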
Robot behavior is brittle. Insect behavior is not brittle. I don’t think consciousness is the ability to adapt, but I suspect it comes mighty handy in making behavior flexible.
If bees were instinct driven automatons, I would expect them to fail hopelessly in any situation other than those very specific situations that evolution has very specifically prepared them for. They don’t. Bees’ behavior is better described by bees wanting to collect honey, rather than as a collection of reflexes that, in the ancestral environment, happens to result in honey being collected. If they were mere instinct driven automatons, they would go totally off the rails as soon as humans made a significant change in the situation.
Instinct can mean instinctive desire (e.g. to desire food) or instinctive reflex (e.g. when a baby is hungry, suck things). A brain allows one to achieve instinctive desires in ways more flexible than hardcoded instinctive reflexes. A brain is by definition a re-wireable statistical model that can flexibly adapt to achieve desires. Computer algorithms are still mostly hardcoded “if … then …” case checks. Large data allows them to check a trillion cases, but still they do not adapt, which makes them brittle.
Statistical systems like Watson try to emulate the brain by training statistical models. But the hardcoded training and deduction algorithm is scenario specific, so it is still brittle outside of its usage scenario. Human language is a textual representation of the flow of general intelligence, obviously a much harder problem than Watson’s task (Watson can parse a sentence and record word relations into its statistical models, but cannot integrate a paragraph’s meaning, which is why it sometimes gives silly answers). But still it is less complicated than general intelligence, since language doesn’t involve problem solving.
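A toy contrast of the two, with invented cases: the hard-coded check fails on anything off its listed cases, while even a crude statistical learner degrades gracefully on inputs it has never seen:

```python
# A trillion-case lookup table, in miniature, versus a learner.
CASES = {(0, 0): "stay", (10, 0): "go"}   # invented training cases

def hardcoded(x):
    return CASES[x]                       # KeyError off the listed cases

def learned(x, examples=CASES):
    """Nearest-neighbor: generalize from examples instead of matching."""
    nearest = min(examples,
                  key=lambda e: sum((a - b) ** 2 for a, b in zip(e, x)))
    return examples[nearest]

print(learned((9, 1)))   # -> "go": an unseen case is still handled
```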
Chess playing machines like Deep Blue are nothing special. Some people just like to mystify them. They are just possible-move search assisted by statistical models of past games. The computer’s vast memory and calculation speed allow it to calculate many more move scenarios than human players. Chess playing is a very limited task and is not a sign of general intelligence.
Self-awareness allows higher adaptiveness than mere environment-awareness, because self-awareness allows multiple-pass analysis and re-modeling of memory. If we invent a self-aware adaptive brain with the memory and computing speed of a computer, that would be the singularity.
(totally unrelated)
Good for you Jim! You’re blowing up! Now, focus on a book, and subject it to the world, and its critique.
JEAH!
After all, wasn’t it MF that said that Science is a cooperative enterprise?
Per paragraph, per post, you offer at least as many potential insights as Robin Hanson. Unlike him, you have fewer citations, and, I suspect, a small respect for such citations.
That being said, did you want your dive to have a large splash or a small one?
At least cash in on a very intriguing perspective brah!
(SHIT I love this blog.)
No offence; I admire WordPress, but their sites tidy away archived stuff so it’s almost unfindable.
@the eponymous jim — I can’t find if you have any material on nuclear scepticism. If not, please consider nukelies.org which has a now-closed forum with quite a few highly competent commenters. Start with the section on Hiroshima. I’m aware to newcomers this sounds weird: but so did holocaust revisionism to many people.
There were and still are a very large number of witnesses to the power of nukes. A city turned to fire in an instant.
And pretty much everyone who says the holocaust did not happen also says it should have.
And the guys who say that fire cannot cause tall steel buildings to fall have a backup story ready when you point out that fire can and regularly does cause tall steel buildings to fall, and another backup story ready when you point to the flaws in the first backup story.