It’s not difficult to see why. In July an AI system created by researchers at the University of California solved the Rubik’s Cube in just over a second. The same month researchers from the Ecole Polytechnique Fédérale de Lausanne developed tiny 10-gram robots that can communicate with each other, assign roles among themselves, and complete complex tasks together. Perhaps more significantly, last December an AI program developed by Google’s DeepMind beat a team of biologists in predicting the shapes of proteins, ‘the basic building blocks of disease’.
All of which indicates both the huge potential of AI to solve some of the world’s greatest problems and the attention being paid to its development. Google, for example, has gone from being a mobile-first company to an AI-first company, while Amazon, Apple and Facebook are investing heavily in acquiring AI start-ups and building their own capabilities. All view AI as critical to their future.
“We use our investments into machine learning research to make our products more useful to everyone,” says Joyce Baz, a spokesperson for Google MENA. “You can already see this in products like Photos, where you can use machine-learning-based computer vision to search through your photos for specific objects. In Gmail, machine learning has improved our spam filter. In Google Translate, moving to a Neural Machine Translation system has improved the quality by an average of 0.5 points on a 6-point scale, improving the quality more at once than in the past 10 years combined. We also use machine learning to power some of our ads products.”
In May, Sony and Microsoft announced that the two companies will partner to enhance customer experiences in their direct-to-consumer entertainment platforms and AI solutions. That means exploring the incorporation of Microsoft’s advanced AI platform and tools in Sony’s consumer products, providing “highly intuitive and user-friendly AI experiences”. Microsoft is deeply involved in AI on both a consumer and business level and bought five AI companies in 2018 alone, amongst them the Silicon Valley-based start-up Lobe.
Yet we are only scratching the surface of what is possible. What those not embedded in the world of computer science refer to as AI is in fact ‘weak’ or ‘narrow’ AI. That is, artificial intelligence that is designed for one particular task. Think Google Translate or Amazon’s recommendations. Indeed, there are those who take umbrage at any use of the term ‘AI’ whatsoever, stating that it is not only misleading but downright incorrect. “The term artificial intelligence, like 4G and 5G before it, has been adopted way before the technology has rolled out, or even exists,” says Faris Yakob, an author and co-founder of Genius Steals, a nomadic creative consultancy. “A simple test as to whether a system is intelligent: you ask it to do something, it asks why and then says no.
“What we currently have are increasingly sophisticated domain-specific machine learning algorithms – very interesting, but not even really ‘weak’ AI in standard terminology. Pattern recognition based on absorbing huge corpuses of data is fascinating – non-invasive cancer diagnosis is an important recent area being explored, but it’s not AI.”
What is generally and colloquially referred to as AI is in fact an all-encompassing concept that includes both machine learning and deep learning. The latter is what enables software to recognise speech and images and, more recently, to develop perceptual abilities such as vision. However, machines are still unable to fully understand the world around them and are incapable of reasoning or thought.
Which is why the ultimate goal for many is artificial general intelligence, whereby a machine can learn and teach itself anything, surpassing human capabilities in any number of different fields. How far are we from artificial general intelligence? Decades, possibly centuries, depending on who you talk to.
“Usage of AI is just going to get deeper and broader across all aspects of business (and our lives in general),” says Dave Coplin, chief envisioning officer at The Envisioners. “The technology will continue to evolve, but perhaps more importantly our understanding and experience of AI’s strengths and weaknesses will help to ensure that we get maximum value while minimising risk. Perhaps most importantly, it is unlikely in five to 10 years that we will see a ‘general artificial intelligence’, which means the usage will still be restricted to specific scenarios and applications.
“In a 10-year timeframe, a world run by robots or AI agents alone remains a concept that is purely science fiction. However, a world run by robots in conjunction with humans is entirely likely. To get most value, however, this should not be thought of as the human master/machine slave concept that science fiction and pop culture has propagated for the last century, but should instead be a world where humans and machines work together as companions, complementing each other’s unique abilities.”
What is fundamentally clear is that companies that can effectively harness AI will have a competitive advantage in the future. The question is, of course, what evolutionary course will AI take, and what positive or negative impacts will it have?
“Businesses will save a lot of money currently spent on vast numbers of humans who do tedious repetitive administrative jobs,” says Yakob. “This may not seem positive to those workers though. WalMart is the largest private employer in the world, with about 2.3 million staff. It is pursuing automation in various forms with gusto and by its own estimation will replace 30 to 50 per cent of its employees with some kind of automated solution.”
The negatives? “What happens to all those people without jobs?” replies Yakob. “What happens to all the other customers of WalMart and every other business that has been replaced? How do they keep buying things from WalMart?”
The potential for wide-scale job losses is very real. According to the World Economic Forum’s Future of Jobs Report 2018, machines will perform a majority of current work tasks by 2025; today, humans perform 71 per cent of them. In a joint study, Citibank and the University of Oxford estimated that 57 per cent of jobs within OECD (Organisation for Economic Co-operation and Development) countries were at high risk of automation within the next few decades. “Fifty per cent of the work that we conduct today in our daily lives as professionals, in whatever field, will be automated between now and 2030,” said Karim Sabbagh, the chief executive of DarkMatter Group, during a session at the Knowledge Summit 2018 in Dubai last December. “And I think it’s also reasonable to assume that anywhere between 15 and 30 per cent of the jobs that exist today will be displaced – i.e. they will not exist in the normal way they exist. And probably three per cent to five per cent of the jobs that exist today will not have a home in the future.”
However, the World Economic Forum has also estimated that the rapid evolution of machines and algorithms could create 133 million new roles in place of the 75 million that will be displaced between now and 2022. That’s 58 million net new jobs. “I want to remind everyone that in the course of human history, every time a new technology was introduced and a single job was lost, around 2.5 new jobs were created,” added Sabbagh. “That’s the empirical evidence. That’s the positive note. And many of us are living examples of this.”