29 Jan 2021
How bright is ‘I’ in AI?
Apple’s smart voice assistant Siri puts a human voice, if not a face, to artificial intelligence (AI) for many people, but one of the fathers of Siri believes that AI, as widely understood, does not exist.
Luc Julia, a co-creator of Siri and now Senior Vice President for Innovation at Samsung Electronics Strategy and Innovation Centre, said the term has been used to cover sets of tools that have, historically, often disappointed.
He made the remark when presenting an overview of AI at the online Asian Financial Forum held earlier this month.
The term “AI” was first applied in 1956 when it was billed as a new science. Mathematical equations were used to describe neurons, which built up to neural networks and then brains, hence creating “intelligence”.
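The 1950s building block described here can be sketched in a few lines: a single artificial neuron that fires when a weighted sum of its inputs crosses a threshold. The weights and threshold below are illustrative values, not taken from any historical model.

```python
# A minimal artificial neuron: a weighted sum of inputs passed through
# a step activation -- the equation-described "neuron" that early
# researchers hoped to build up into networks and brains.

def neuron(inputs, weights, threshold=0.5):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A two-input neuron wired, via its weights, to behave like a logical AND:
and_weights = [0.3, 0.3]
print(neuron([1, 1], and_weights))  # both inputs on: the neuron fires
print(neuron([1, 0], and_weights))  # one input on: it stays silent
```

Chaining many such units into layers gives a neural network; the step from one neuron to "intelligence" is, as the article goes on to show, rather larger than the pioneers hoped.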
However, Mr Julia said the origins of machines that perform human thought functions go back much further than 1956 – all the way to 1642, when French mathematician Blaise Pascal made the first mechanical calculator, a very efficient tool allowing addition and subtraction (main picture).
Shortly after that first 20th-century foray into machine brains, AI entered its first “winter” in the 1960s, when interest and funding dried up as these machine brains failed to deliver the benefits initially touted. The 1950s developers had used mathematical tools in an attempt to make machines that could use natural language – perhaps the most difficult problem they could have chosen. The attempt failed and projects were defunded.
In the 1970s AI developers tried expert systems instead. This new approach involved logic based on rules rather than neurons: if a condition is met, the machine acts. Expert systems achieved one of the big breakthroughs attributed to AI in 1997, when the expert-system-based Deep Blue beat world chess champion Garry Kasparov. The win was a triumph of large-scale number crunching, as a chess game has 10⁴⁹ possible moves.
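The condition–action style of an expert system can be sketched as a list of if-then rules checked against known facts. The rules below are invented for illustration; real systems held thousands of hand-written rules.

```python
# A toy expert system: knowledge is encoded as if-then rules rather
# than learned weights. Each rule pairs a condition on the known facts
# with an action to recommend. The chess-flavoured rules are invented.

rules = [
    (lambda f: f.get("king_in_check", False), "get the king out of check"),
    (lambda f: f.get("material_down", False), "avoid exchanging pieces"),
]

def infer(facts):
    """Return the action of every rule whose condition the facts satisfy."""
    return [action for condition, action in rules if condition(facts)]

print(infer({"king_in_check": True}))  # -> ['get the king out of check']
print(infer({}))                       # no conditions met -> []
```

The appeal was transparency – every conclusion traces back to a rule a human wrote – but, as with brute-force chess, the knowledge had to be put in by hand.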
In the mid-1990s, the commercial internet rolled out across the world. The neural-network idea returned, now with the possibility of incorporating big data, and machine learning and deep learning followed in the 2000s.
Giving an example of big-data driven machine learning, Mr Julia said there are many pictures of cats on the internet.
Developers built a tool that looked at many thousands of cat pictures to generate a concept of “cat” within the system. The resulting feline-spotting system was correct 97% of the time.
“But my two-year-old daughter sees just two instances of cats and then always knows cats!” he said, underlining the shortcomings of even the latest forms of AI.
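The learn-from-many-examples approach can be sketched very simply: summarise each labelled class statistically, then label a new example by whichever summary it sits closest to. The two-number "features" below (say, ear pointiness and whisker density) and all the figures are invented stand-ins for real image data.

```python
# A toy version of learning "cat" from many labelled examples:
# average the feature vectors of each class, then classify a new
# example by its nearest class average (nearest-centroid rule).

def centroid(examples):
    """Average a list of equal-length feature vectors component-wise."""
    n = len(examples)
    return [sum(e[i] for e in examples) / n for i in range(len(examples[0]))]

def classify(x, centroids):
    """Return the label whose centroid is nearest to x (squared distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

cats = [[0.9, 0.8], [0.8, 0.9], [0.95, 0.85]]   # many cat examples...
dogs = [[0.2, 0.3], [0.3, 0.2], [0.25, 0.35]]   # ...and many dog examples
centroids = {"cat": centroid(cats), "dog": centroid(dogs)}

print(classify([0.85, 0.9], centroids))  # -> 'cat'
```

The contrast with the two-year-old is exactly the point: the statistical summary only becomes reliable after thousands of examples, where a child generalises from two.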
In 2016, DeepMind’s AlphaGo program took on Go world champion Lee Sedol and won. “Go is a much more complex game than chess,” Mr Julia said, with mathematicians estimating there could be anything from 10²⁰⁰ up to as many as 10⁶⁰⁰ moves. AlphaGo had access to the power of 2,000 computers in a small data centre, consuming 440 kilowatts just to play Go – while its opponent, Mr Lee, used just 20 watts in his brain.
AlphaGo is also extremely focused – all it does is play Go. Humans can play Go and do many other things besides, he pointed out.
Mr Julia warned that relying on big data could create hazards – mistakes could appear as a result of data choice. An AI credit-card issuer, for example, gave women half the credit limit it gave men simply because that historical bias was built into its datasets.
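How a historical bias passes straight through training can be shown with a toy example. All the figures below are invented; the "model" is just a per-group average of past credit limits, but any statistical learner fit on the same records would reproduce the same gap.

```python
# Toy illustration of bias inherited from training data: a model that
# "learns" from past credit limits reproduces the gap it was shown.
# All figures are invented for illustration.

history = [
    {"gender": "M", "limit": 20000},
    {"gender": "M", "limit": 22000},
    {"gender": "F", "limit": 10000},
    {"gender": "F", "limit": 11000},
]

def fit_group_means(rows):
    """'Train' by averaging past limits per group -- the bias comes along."""
    totals, counts = {}, {}
    for r in rows:
        g = r["gender"]
        totals[g] = totals.get(g, 0) + r["limit"]
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

model = fit_group_means(history)
print(model["F"] / model["M"])  # -> 0.5: the historical gap, reproduced
```

Nothing in the algorithm is "wrong"; the hazard sits entirely in the choice of data, which is Mr Julia's point.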
He said another problem with AI was that the public see it as an impenetrable “black box”. This was because the mathematicians developing the algorithms could understand the mathematics but could not communicate it to those outside their peer group.
In 1914, French mathematician Gaston Julia discovered fractals. But he expressed them as equations, not images, so non-mathematicians could not understand them. Gaston Julia was teaching at the Polytechnique 40 years later, and one of his students was Benoit Mandelbrot, who went on to work for IBM in the United States. Mandelbrot plotted the equations out on screen, creating beautiful plant-like patterns showing their recursivity. It took 40 years for fractals to become explicable to the public!
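The recursion Mandelbrot put on screen is short enough to sketch: iterate z → z² + c and record which starting values of c stay bounded. The coarse ASCII rendering below stands in for the graphics screens he used; the resolution and iteration count are arbitrary choices.

```python
# The iteration behind the fractal images: repeatedly apply z -> z*z + c
# and test whether z stays bounded. Points c whose orbit never escapes
# belong to the Mandelbrot set; printed here as a coarse ASCII picture.

def escapes(c, max_iter=30):
    """True if the orbit of 0 under z -> z*z + c leaves the disc |z| <= 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| > 2 the orbit is guaranteed to diverge
            return True
    return False

for im in range(10, -11, -2):          # imaginary part from 1.0 down to -1.0
    row = ""
    for re in range(-20, 7):           # real part from -2.0 to 0.6
        row += " " if escapes(complex(re / 10, im / 10)) else "*"
    print(row)
```

The "plant-like" self-similarity appears because the same simple rule is applied at every point and every scale – recursivity made visible, which is exactly what the equations alone could not convey.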
Mr Julia said truly autonomous cars (level 5, the top automation level) will never exist, and AI researchers should not claim otherwise. There would always be some situation a truly autonomous car could not deal with because it was not written into the car’s system, yet which a human driver could handle. Level-4 automation, by contrast, could exist and save many lives.
Duncan Chiu, Co-Founder of Radiant Venture Capital, joined Mr Julia at the webinar and pointed out that many people are scared AI will displace them from their jobs.
Mr Julia said AI’s potential to displace thinking humans was limited. For example, OpenAI, founded by Elon Musk and other Silicon Valley leaders, created a text generator that could produce content which appeared to have been written by humans. But this was done with a deep-learning system whose neural model needed 175 billion machine-learning parameters.
This still could not be regarded as intelligent, because the system writes text based on specific parameters; the result simply reworks what humans have already written. Generating the texts was also very expensive in terms of energy.
Mr Julia was concerned that running vast amounts of data through algorithms might not generate good results, yet had become very popular because it was relatively easy. He saw a need for smaller, better-selected datasets.
There was also the possibility of a trade war over data, with some countries barring rivals’ tools from accessing their national data, he noted.
As regards adapting education for an AI world, Mr Julia said a well-rounded, AI-friendly STEM (science, technology, engineering and mathematics) education is important, as is the study of history. Asian countries’ emphasis on STEM learning put them in a good position for AI, he added.