Lead Image © sebastien decoret, 123RF.com

The limits and opportunities of artificial intelligence

Rocket Science

Article from ADMIN 57/2020
We talked to Peter Protzel, an academic with experience in knowledge-based systems and process automation, about the future of artificial intelligence.

Despite, or perhaps because of, research successes in artificial intelligence (AI), powerful AI seems to be further away today than it was 10 years ago. Will AI ever take the step toward automated super-intelligence, or are narrowly specialized artificial geniuses all that can be expected for the foreseeable future?

ADMIN interviewed Peter Protzel, who studied Electrical Engineering and received his Ph.D. from the University of Braunschweig, Germany, in 1987. He then spent five years as a staff scientist at the NASA Langley Research Center in Virginia, followed by seven years at the Bavarian Research Center for Knowledge-Based Systems in Erlangen, Germany, where he headed the neural networks research group. Since 1998 he has been a full professor for automation technology at the University of Chemnitz, Germany.

ADMIN: Prominent advocates of the idea of super-intelligence – Ray Kurzweil, Stephen Hawking, Elon Musk – are apparently afraid of it becoming independent. Our current experience, on the other hand, is an AI that distinguishes dog from cat when trained but fails at the simplest tasks it has not been prepared for. How does that add up?

Peter Protzel: The core of the problem in the AI discussion is a clean distinction of terms. We always need to make a strict distinction between whether we are talking about the current "narrow" or "weak" AI or about the "general" or "strong" AI imaginable in the future (artificial general intelligence, AGI). The terms "narrow" and "general" illustrate the difference: The narrow AI we have now is a machine that we construct (and train) to solve a specific problem. A facial recognition system does not recognize speech or translate text. AlphaGo plays Go, but not Nine Men's Morris.

The interesting thing about narrow AI is that in deep learning, for example, we now have a method that solves quite different pattern recognition tasks better than before. We can train neural networks to recognize faces or speech or text. This certainly does not work perfectly – as anyone can see after even a short conversation with their speech assistant – but it at least works better than before with specially constructed feature detectors.

This is like the transition from drilling and filing by hand to a machine tool: It produces different parts of better quality more easily and quickly if we enter the appropriate CAD specifications beforehand. This machine is universal in some ways, because it can produce completely different parts. But it is not universal in the sense that we can tell it, "Well, build us a car, and come up with a chic new design" (Figure 1).

Figure 1: Robots build cars better and faster than a human could, but they cannot design them. © Kittipong Jirasukhanont, 123RF.com

ADMIN: That would be a metaphor for powerful AI then? But even weak AI can do many things faster and better than humans – for example, play chess or Go. How big is the difference between weak and strong AI?

Protzel: The difference is actually so great that nobody knows exactly how to get there. The crux of the matter is not how well a machine solves a single task, but rather how flexibly it solves different tasks. The welding robot welds better and faster than any human being, but it can't get beer out of the fridge. Although a computer is a universal machine, it needs a separate program for each task, and each machine fulfills its one task better than a human being; otherwise, it would not have been built. Superhuman performance is therefore not a unique selling point of AI, but the basic principle of all automation.

The goal of strong AI is the construction of machines that solve a wide range of tasks, with continuous learning ability and a human-like flexibility in solving unknown problems. According to a neat comparison by Florian Gallwitz [1], we are currently as far away from this as a New Year's Eve rocket is from interstellar space travel. So, if weak AI has so little intelligence in this sense, should we even call it AI? According to Gallwitz, that would be as if we were to call the bang in the sky – the New Year's Eve rocket – weak interstellar space travel.

ADMIN: Proponents of the singularity such as Ray Kurzweil, however, assume that exponentially increasing computing power will soon exceed the capacity of the human brain, making superhuman AI inevitable.

Protzel: It is true that microelectronics is the only branch that has grown exponentially in recent years in terms of computing power and storage capacity. In contrast, comparatively little has happened in the core algorithms of AI. Deep learning and convolutional neural networks are still based on ideas from the 1980s and 1990s. The success of deep learning in recent years is largely the result of the availability of extremely high computing power and masses of training data. Even more computing power and even more data will certainly lead to even better pattern recognition, but not to strong AI. We simply have to distinguish between hardware and software. If an algorithm cannot in principle produce the desired result, more computing power only leads to getting the wrong result faster.

ADMIN: If huge amounts of data alone do not solve the problem, what other skills need to be added for strong AI? And aren't there also problems for which we don't have enough data – and maybe never will have? Or problems that are fundamentally unsuitable for machine learning and for which, for example, a rule-based approach is more appropriate?

Protzel: AI progress in recent years is mainly based on "supervised" learning, wherein an artificial neural network is fed a large volume of training data of known pairs of input and output patterns. Judea Pearl, the inventor of Bayesian networks and winner of the Turing Award, dismissively refers to this as "curve fitting." He sees it as a rather unspectacular function approximation – like the linear regression you learn at school, but nonlinear and with many parameters. When asked whether he was not impressed by the successes, he answered: "No, I'm very impressed, because we did not expect that so many problems could be solved by pure curve fitting" [2].
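
To make Pearl's "curve fitting" point concrete, here is a minimal Python sketch (ours, not Protzel's): a tiny one-hidden-layer network is fitted by plain gradient descent to noisy samples of a sine curve – nonlinear regression with a few dozen parameters, nothing more.

# "Curve fitting" in Pearl's sense: supervised learning as nonlinear function
# approximation. Toy example with made-up data, not production code.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))            # inputs
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)   # noisy target "curve"

# one hidden layer of 16 tanh units, trained with plain gradient descent
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(x @ W1 + b1)                     # forward pass
    pred = h @ W2 + b2
    err = pred - y                               # gradient of the squared error
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)               # backpropagate through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final mean squared error:", float(np.mean(err**2)))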

But curve fitting is ultimately only statistical pattern recognition, and the fundamental limits of the possible applications are becoming increasingly clear. One of the main problems is a lack of generalization capability when processing data that did not appear in the training set. From the errors that such networks make [3] and from the results that can be generated with manipulated input data (adversarial attacks) [4], it is clear that this type of statistical pattern recognition has little in common with human perceptual ability. Additionally, neural networks are a black box: You neither know exactly how they work, nor can you predict how they will react to previously unseen input data. If this kind of network makes a mistake and recommends the wrong film or shampoo, it is certainly not tragic. But for all safety-critical applications such as autonomous driving, this is a problem.
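
To illustrate what an adversarial attack exploits, here is a small, hypothetical Python example (not from the interview): a logistic-regression classifier with assumed weights, attacked with the fast gradient sign method, which nudges each input feature by a small amount in the direction that increases the loss.

# Fast gradient sign method (FGSM) against a toy logistic-regression classifier.
# Weights, input, and epsilon are made up purely for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.7])   # assumed trained weights
b = 0.1
x = np.array([0.3, 0.1, 0.4])    # an input the model classifies as class 1
y = 1.0                          # true label

p = sigmoid(w @ x + b)
grad_x = (p - y) * w             # gradient of the cross-entropy loss w.r.t. the input

eps = 0.25
x_adv = x + eps * np.sign(grad_x)  # small per-feature step that increases the loss

print("clean prediction:      ", sigmoid(w @ x + b))       # ~0.65, class 1
print("adversarial prediction:", sigmoid(w @ x_adv + b))   # ~0.40, flips to class 0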

More data and larger networks do not automatically lead to strong AI and will not lift these basic limitations. The major problems of AI – such as common sense, the representation of (fuzzy) knowledge on different hierarchical levels, causal reasoning, and continuous and self-supervised learning – remain unsolved.

However, the limits of classical rule-based approaches have also been explored for decades. Just think of the rise and fall of expert systems or fuzzy logic. Hybrid approaches seem necessary, but no one knows exactly what they might look like. The recently published book Rebooting AI [5] sums up the not exactly glorious state of today's AI very well but does not offer any concrete solutions either. There is indeed an incredible contrast between the euphoria of some media and the frustration of many researchers. Geoffrey Hinton, one of the most famous pioneers of neural networks, put it in an interview as follows: "My view is throw it all away and start again" [6].

ADMIN: If the mood among scientists is fairly sober, then where did all this hype come from?

Protzel: The initial successes of deep learning after 2012 (think ImageNet) generated enthusiasm and high expectations among researchers. This led – first in the US – to a massive market entry of large IT companies and of venture capital-financed start-ups, which are now under pressure to succeed. Universities tend to present the results of their research in a fairly restrained manner at specialist conferences. Companies, on the other hand, market their results, often through public relations departments, rather more loudly and without direct indications of limitations and weaknesses. Some media then amplify this further, which has led to the notorious AI headlines and hype. Gary Marcus has analyzed this phenomenon very succinctly, with many examples [7]. "The Science News Cycle" on the PhD Comics website is also enlighteningly funny [8].

ADMIN: But isn't there also something positive about the hype? After all, billions in private and public funding are suddenly flowing into AI research and into applications such as Industry 4.0.

Protzel: That's right; Germany also jumped on the bandwagon (with quite some delay). That doesn't have to be a bad thing – although plenty of trains do travel in the wrong direction. In 2019, AI was the motto of Germany's Science Year, and many initiatives were launched.

However, basic research into strong AI cannot be accelerated at will just by spending more money. Research funding is the fertilizer that the delicate blossoms of science need, but the blossoms do not grow faster the more of it you spread. Sometimes you need a Newton or an Einstein, and you can't just conjure one up out of thin air. Strong AI is in good company here, in a similar situation to fusion energy, for example: Since 1960, the breakthrough has always been just 30 years away.

The situation is somewhat different for application-oriented research and development on weak AI, which is what most of the programs are aiming at. The idea here is to put existing methods into practice. Fortunately, people are now looking more closely at which processes have potential for improvement – for example, pattern recognition in quality control, early fault detection, the optimization of process sequences, or the analysis of data sets.

However, I have the impression that you don't need the very latest AI results like deep learning to solve many problems. Sometimes even methods that have existed for 20 or 30 years are better. Every now and then, though, we probably need a new wave of hype and new terms like Industry 4.0 to whip progress forward. In the 1980s, by the way, we used to refer to this as CIM – computer-integrated manufacturing.

ADMIN: One application closely linked to AI development is autonomous driving. A human driver perceives more than just moving objects in their environment. If they see a tram, for example, they know that it cannot change sides of the road but that it can stop at tram stops – unless it is out of service. Can a machine learning system acquire this kind of contextual world knowledge at all?

Protzel: Autonomous driving is indeed a prime example of the use of AI (Figure 2). Paradoxically, a human driver simply drives off, with just two eyes and without a laser scanner, GPS, radar, high-resolution digital maps, or a radio connection to the next traffic light. So it can't be that difficult after all, right?

Figure 2: For the foreseeable future, autonomous driving will remain limited to specific, easily manageable situations. © thelightwriter, 123RF.com

However, we forget that behind those two eyes is also a brain – a strong natural intelligence. Over many years, this brain has developed a world model that determines its perception and contains precisely this contextual knowledge. Additionally, it learns very quickly after a single mistake (one-shot learning), without the need for 10,000 training examples.

Creating this kind of world model capable of learning is the holy grail of strong AI. In attempts at end-to-end learning, a neural network receives two camera images as input, provides the values for steering angle and accelerator or brake as output, and can then be trained with test drives of any length. After the learning phase, this method even works surprisingly well – in, say, 95 percent of all situations. For autonomous driving, however, you would need to handle 99.999… percent of all situations. Again, even larger networks and more training data won't help, so we are back to the fundamental difference between weak and strong AI.
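
For readers who want to picture what end-to-end learning means here, the following PyTorch sketch shows the bare structure under assumed shapes and layer sizes (a single small camera frame in, steering and throttle/brake out, trained on recorded driver commands). It illustrates the principle only; it is not the architecture of any real system.

# End-to-end learning sketch: camera frame -> [steering, throttle/brake].
# All sizes and the dummy data are assumptions for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 2),               # outputs: steering angle, throttle/brake
)

# dummy batch standing in for recorded camera frames and driver commands
frames = torch.randn(8, 3, 66, 200)  # 8 RGB frames (66x200 pixels, assumed)
commands = torch.randn(8, 2)         # recorded steering / throttle values

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(10):               # in practice: many epochs over hours of driving
    pred = model(frames)
    loss = loss_fn(pred, commands)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(step, float(loss))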

Completely autonomous driving on all roads and in all weather conditions requires strong AI at the wheel, and that doesn't exist yet. But there are gradations on the way to autonomous driving [9], with different levels of difficulty for automation and with the human driver as backup in emergencies. The level of difficulty always depends on how structured the environment is. For example, a highway in good weather without traffic lights, pedestrians, and oncoming traffic is a very structured environment. Here, the AI only has to recognize the white lines and obstacles in its own lane.
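
As a contrast to the end-to-end approach, the structured highway case can be illustrated with classical computer vision and no learning at all. The following Python/OpenCV sketch (a toy example with a synthetic frame and made-up thresholds) finds bright lane markings with edge detection and a Hough transform – roughly the "recognize the white lines" part of the task.

# Lane-marking toy example: classical edge detection plus Hough transform
# on a synthetic "road" image. Thresholds and geometry are invented.
import cv2
import numpy as np

# synthetic frame: dark asphalt with two bright lane markings
frame = np.full((240, 320, 3), 40, dtype=np.uint8)
cv2.line(frame, (60, 240), (140, 120), (255, 255, 255), 4)   # left marking
cv2.line(frame, (260, 240), (180, 120), (255, 255, 255), 4)  # right marking

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                             # edge map
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=50, maxLineGap=10)     # line segments

if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        print(f"lane segment from ({x1},{y1}) to ({x2},{y2})")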

But if you get into city traffic with road works and drizzle, the complexity increases exponentially, so we can't extrapolate progress linearly and say: If it works on highways today, it will work everywhere in five years. Many people believe that Google/Waymo invented autonomous driving a few years ago, but the first autonomous car on German roads was a prototype by Ernst Dickmanns that was already driving autonomously in normal motorway traffic in 1994. Looking at the corresponding video [10], it seems that, compared with the development of computing speed, disappointingly little has happened since then – but that is how things go when complexity increases exponentially.

ADMIN: If strong AI is so complex and so far away, will we ever be able to develop something like this? And what about typical human characteristics: creativity, emotion, or a sense of aesthetics?

Protzel: There is, after all, proof of the existence of all this, and that is our own brain. If we assume that there are no supernatural phenomena at work there, but a complex kind of information processing that relies on chemical reactions and electrical signals, then we can recreate this as soon as we have understood it.

Intelligence will always be associated with learning ability, and strong AI will certainly be able to extend its own programs. It will create links between old and new knowledge and thus be creative. Whether we need (or want) emotions in AI is an open question. For human-machine interaction, empathy and a model of how the human counterpart ticks (theory of mind) would be very important. We would certainly not want a choleric robot.

ADMIN: If it is now "only" a matter of understanding our natural intelligence, how far has brain research come? The Human Brain Project [11] is the European Union's flagship project, with funding of EUR 1 billion. Is there any cross-fertilization? That is, does AI help in brain research, and are artificial neural networks, for example, suitable as models for the way the brain works?

Protzel: Unfortunately, it will probably take some time in this area as well – progress remains rather modest. AI – or rather, general computing power, visualization, data analysis, and storage – is an important tool of brain research, and you can almost drown in the flood of data, but it is very difficult to gain a deep understanding of the underlying mechanisms from data alone, without a theory.

As a thought experiment, imagine Edison finding a running PC from the future on his desk in 1900. He can measure the voltage curve at various points on the circuit board and observe the brightness distribution of the screen as he presses keys on the keyboard. Without a theory of the computer, what chance would he have of understanding how it works just on the basis of his measurement data? Here, Arthur C. Clarke's law applies: "Any sufficiently advanced technology is indistinguishable from magic."

But it gets worse. We now know that the function of a single natural synapse is orders of magnitude more complex than that of an entire artificial neuron, so the latter doesn't even pass for a caricature of a natural neuron and therefore won't help much as a basis for a theory.

For 30 years, researchers have been trying to understand the exact function of the nematode Caenorhabditis elegans, which has exactly 302 neurons, and they are currently creating a computer simulation of it in the OpenWorm project [12]. The human brain has about 100 billion neurons. I am afraid that we will have to start at a very low level and wait until biochemistry can explain the function of molecular machines in more detail – just as Edison would have had to wait for the invention of the transistor and for John von Neumann. Nevertheless, there is no reason to assume we cannot do it in principle. We will eventually understand biological information processing in the brain, and then there will be strong AI.

ADMIN: If strong AI is coming, in principle, even if it will take a long time: Should we rather look forward to it or should we fear the moment? Are there any ideas for minimizing the potential risks of super-intelligence?

Protzel: Stuart Russell, one of the best-known AI researchers and author of the standard textbook on the subject, has just discussed this question in detail in his new book, Human Compatible: Artificial Intelligence and the Problem of Control [13]. According to Russell, super-intelligence could be a blessing for mankind (defeating all diseases, revolutionizing scientific and technological development, and thus, for example, also getting climate change under control with technology), or it could be a curse with enormous potential for misuse.

Even though other researchers say that fear of strong AI is like worrying about overpopulation on Mars, Russell convinced me that it is not too early to think about it. If we knew that in 100 years an asteroid could devastate Earth, how long would we wait before starting to prepare?

ADMIN: Where do we go from here? Would you risk a prediction for the new decade?

Protzel: I agree with Gallwitz [1]: We will still not have AI worthy of the name in 2029. What we will see is the normal progress of automation, but not a revolution. Pattern recognition will improve a little. Robots will be able to pick more unsorted parts out of a bin. Cars will be able to drive autonomously in more situations than now – but not in all of them.

We have a strange sense of time when it comes to progress. If we look back five years, we feel that not much has happened, and we get quite impatient and frustrated about slow development. Fifty years ago, things looked different (PC, Internet, cellphones, etc.). Five hundred years ago we almost lived in another world. Nobody predicted the success of the World Wide Web and its possible effects on society when it was created in 1991, so you need to be cautious with predictions, especially with regard to the distant future.

Infos

  1. "In 2029 There Will Still Be No Artificial Intelligence Worthy of the Name" by Florian Gallwitz, Wired Germany, 14 December 2018: https://www.wired.de/article/auch-2029-wird-es-keine-kuenstliche-intelligenz-geben-die-diesen-namen-verdient (in German)
  2. "To Build Truly Intelligent Machines, Teach Them Cause and Effect" by Kevin Hartnett, Quanta Magazine, 15 May 2018: https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515
  3. "Do Neural Nets Dream of Electric Sheep?" by Janelle Shane, March 2018: https://aiweirdness.com/post/171451900302/do-neural-nets-dream-of-electric-sheep
  4. "Breaking Neural Networks with Adversarial Attacks" by Anant Jain, Towards Data Science, 9 February 2019: https://towardsdatascience.com/breaking-neural-networks-with-adversarial-attacks-f4290a9a45aa
  5. Marcus, Gary, and Ernest Davis. Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon, 2019
  6. "Artificial Intelligence Pioneer Says We Need to Start Over" by Steve LeVine, Axios, 15 September 2017: https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html
  7. "An Epidemic of AI Misinformation" by Gary Marcus, The Gradient, 30 November 2019: https://thegradient.pub/an-epidemic-of-ai-misinformation
  8. "The Science News Cycle" by Jorge Cham, PhD Comics, 2009: http://phdcomics.com/comics/archive.php?comicid=1174
  9. "The Levels of Autonomous Driving" by Katrin Leicht, TÜV Nord Group #explore, 24 January 2019: https://www.tuev-nord.de/explore/en/explains/the-levels-of-autonomous-driving/
  10. "Ernst Dickmanns' VaMoRs Mercedes Van, 1986-2003," Computer History Museum, 19 February 2016: https://youtu.be/I39sxwYKlEE
  11. Human Brain Project: https://www.humanbrainproject.eu/en/
  12. OpenWorm: http://openworm.org
  13. Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019
