By Tine Lavrysen, Maarten Anckaert and Hans Van Herreweghe
At the beginning of each new year, people and organizations tend to look ahead to see what’s in store for them in the year to come, and beyond. At icapps, we aim to stay ahead of the curve, so we invited a visionary speaker late last year: Peter van Hees, Innovation Manager at KBC. His insights were so thought-provoking that we decided to share some of them with you, and offer our perspective on them.
1. AI will gradually change our lives - and jobs
Artificial intelligence is improving almost every single day. Applications that seemed like mere science fiction just a few years ago are now becoming reality. One prediction for the not-so-distant future: Google being able to correctly diagnose your disease based on your search history. Imagine entering a series of symptoms as search terms. If Google has seen this combination of terms before, and the people who entered it often went on to search for a specific disease, Google may suggest: “Have you considered that you might have disease X?” This requires no medical knowledge, and yet it may prove very helpful.
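To make that concrete, here is a minimal sketch of the co-occurrence logic behind such a suggestion (Python, with invented example data; nothing here reflects how Google actually works): count how often a set of symptom queries was later followed by a search for a specific disease, and suggest the most frequent match.

```python
from collections import Counter

# Invented example data: each entry pairs a set of symptom queries with the
# disease the same (hypothetical) user searched for shortly afterwards.
SEARCH_HISTORY = [
    ({"headache", "stiff neck", "fever"}, "meningitis"),
    ({"headache", "fever", "rash"}, "measles"),
    ({"headache", "stiff neck", "fever"}, "meningitis"),
]

def suggest_disease(symptoms):
    """Suggest the disease most often searched for after these symptoms."""
    matches = Counter(
        disease
        for past_symptoms, disease in SEARCH_HISTORY
        if symptoms <= past_symptoms  # the past search included all our symptoms
    )
    return matches.most_common(1)[0][0] if matches else None

print(suggest_disease({"headache", "stiff neck"}))
# -> meningitis: no medical knowledge involved, just co-occurrence counting
```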
For us at icapps there are advantages too, albeit somewhat more mundane ones. AI’s continuous improvements may make a world of difference in our multilingual apps. If machine translation becomes good enough to generate reliable versions in different languages, leaving just the final check to a human, it will considerably speed up our roll-out process. A rough sketch of that workflow follows below.
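In outline, the pipeline could look like this (a minimal sketch; translate(), queue_for_review() and the string table are hypothetical placeholders, not our actual tooling):

```python
def translate(text, target_lang):
    # Placeholder for a real machine-translation service call.
    return f"[{target_lang}] {text}"

def queue_for_review(draft, lang):
    # Placeholder: hand the machine draft to a human for the final check.
    print(f"review queue ({lang}): {draft}")

BASE_STRINGS = {"welcome_title": "Welcome", "cta_button": "Get started"}

# Machine-translate every string for every target language, then leave only
# the final check to a human reviewer.
for lang in ["nl", "fr", "de"]:
    draft = {key: translate(text, lang) for key, text in BASE_STRINGS.items()}
    queue_for_review(draft, lang)
```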
AI can also contribute to faster sketch-to-code cycles for icapps’ design team, an approach Airbnb has already introduced: if a system can recognize sketches, turn them into wireframes, and automate the coding and testing process, the entire design cycle can improve enormously. The sketch below outlines the stages.
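Purely as an illustration of the stages involved (every function here is a hypothetical stub; Airbnb’s actual system relies on trained models we don’t reproduce):

```python
# Hypothetical sketch-to-code pipeline; only the shape of the flow is the point.
def recognize_components(sketch_image):
    # An image-recognition model would detect UI elements in the hand-drawn sketch.
    return ["header", "image_card", "primary_button"]

def to_wireframe(components):
    # Arrange the detected components into a wireframe description.
    return "\n".join(f"<{name} />" for name in components)

def generate_code(wireframe):
    # A code generator would turn the wireframe into real view code.
    return f"render('''\n{wireframe}\n''')"

def run_tests(generated_code):
    # Automated checks on the generated code close the loop.
    return generated_code.startswith("render(")

sketch = b"...pixels of a hand-drawn screen..."
code = generate_code(to_wireframe(recognize_components(sketch)))
print(code if run_tests(code) else "generated code failed its checks")
```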
2. Singularity is overrated
The smarter our computers get, the more we fear that humans will become superfluous. Especially if these machines are clever enough to realize that mankind is Earth’s worst enemy, they may conclude that our planet would be better off without us. Just like the artificial intelligence Ultron did in the second Avengers movie, with almost disastrous consequences, remember? Not that such doom scenarios are very likely to move from Hollywood to the real world. We will probably face a different kind of threat much sooner: AI becoming a powerful weapon in the hands of whoever controls the algorithms. (A concern that even Putin shares, by the way.)
And what about job losses? Are smarter machines a threat to our jobs because they can outperform us in both quality and speed? For some jobs, that risk is indeed real and imminent. But not all jobs are as easily replaced by robots. If you want to find out how safe your job is, take a look here. Design and development professionals seem to be safe. For now, anyway.
3. Invasion of the body sensors
The proud owner of a chip implanted in his hand, Peter is a firm believer in the added value of processing power inside your body. In a first phase, it will mostly serve as a sensor to monitor your health and as an authentication device. But gradually, it should become a ‘natural’ extension of our own capabilities, ranging from chip implants to exoskeletons.
For developers and designers like us, this reality has immense implications. Already today, we focus on small form factors such as smartphones and smartwatches when developing our apps and interfaces. In the future, we may need to develop with smart lenses and even bionic eyes in mind.
4. “Siri, I was thinking...”
Processing power inside our body: can it get any closer? Well, how about our brain? What if you only had to think a command for your computer, robot or device to execute it? Even that is no longer restricted to the realm of science fiction. For now, the applications are limited to impaired or even locked-in people, who can thus regain some of their motor skills. But it’s a relatively small step to the next application: computers reading everybody’s mind and executing whatever wish you formulate.
Practically speaking, though, it will take ages to reach this level of interaction. The current interfaces are limited to interpreting directions and move or click commands. But I am quite certain that we will no longer be around by the time someone can simply think ‘What is the capital of North Korea again?’ and have a device interpret that thought and come up with the answer.
And the limitations are not only practical and technical (thoughts are a lot more complex than our verbal utterances, and even plain phrases are sometimes hard to interpret for the most sophisticated AI around). There is also the ethical dimension to consider: what about our privacy if computers can read our minds? And if devices can be controlled by our minds, isn’t there a chance that our minds can be controlled by the devices?
“Siri, I was thinking...” “I know what you were thinking, sir, and the answer is Pyongyang.” A realistic scenario? Not in our lifetime, I think.