“Simplicity is the ultimate sophistication.” Though erroneously attributed to everyone from Steve Jobs to Leonardo da Vinci, artist Leonard Thiessen’s observation is nonetheless borne out by the history of information technology, which has really been a story of our building ever more natural and intuitive interfaces.
Early punched-card input/output was exclusively the province of PhDs. Command-line interfaces, only slightly less Byzantine, left a generation of professional computer operators taking night-school classes to keep up with the pace of change. Then came graphical user interfaces, then mobile became de rigueur, and now we have reached the point where the idea of an interface requiring instructions, rather than being intuitively easy to use, is starting to seem anachronistic. Today’s conversational interfaces (think smart speakers and phone-based digital assistants) and emerging augmented reality/virtual reality (AR/VR) overlays require only that you can speak your native language or, respectively, physically gesture towards your intentions.
If smart speakers and AR get us ‘beyond the glass’, ambient interface technologies (a collection of autonomous devices and technologies that interact and are sensitive to human needs) move us ‘beyond the device’ entirely, creating digital awareness in the user’s entire environment. The prospect of 15 separate digital assistants in every room and context is unwieldy, and thus unlikely. As such, the next wave of interfaces is likely to ‘get out of the way entirely’, becoming cloud services in much the same way that yesterday’s servers and desktops did before them.
In this projected scenario: “[Device], what’s the weather?” gives way to a far simpler: “What’s the weather?” The idea here is that the most qualified digital entity snaps to attention with the highest-confidence answer, as brokered and/or subcontracted through a network of digital assistants, moving all the way down the line. In this digital Downton Abbey, the user need not know the names of all his or her staff.
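Read technically, this ‘most qualified entity answers’ brokering is a highest-confidence routing pattern across a pool of assistants. The sketch below is a purely hypothetical illustration of that idea in Python; the Assistant and broker_answer names are invented for this example and do not refer to any real assistant platform or API.

```python
# Minimal sketch of 'highest-confidence wins' brokering across digital assistants.
# Purely illustrative: names, skills and confidence values are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Answer:
    text: str
    confidence: float  # the assistant's self-reported confidence, 0.0-1.0


class Assistant:
    """One digital 'staff member' that may or may not be able to handle a query."""

    def __init__(self, name: str, skills: set, base_confidence: float):
        self.name = name
        self.skills = skills
        self.base_confidence = base_confidence

    def try_answer(self, intent: str, query: str) -> Optional[Answer]:
        if intent not in self.skills:
            return None  # not this assistant's job; it stays silent
        # A real assistant would call its own model or service here.
        return Answer(f"{self.name} answers: '{query}'", self.base_confidence)


def broker_answer(assistants, intent: str, query: str) -> Optional[Answer]:
    """Fan the query out and return the highest-confidence response, if any."""
    candidates = [a.try_answer(intent, query) for a in assistants]
    candidates = [c for c in candidates if c is not None]
    return max(candidates, key=lambda c: c.confidence, default=None)


if __name__ == "__main__":
    staff = [
        Assistant("weather-service", {"weather"}, 0.95),
        Assistant("thermostat", {"temperature"}, 0.80),
    ]
    # The user never names a device; the broker picks the best-placed assistant.
    print(broker_answer(staff, intent="weather", query="What's the weather?"))
```

In practice the hard parts are intent detection and trust between vendors rather than the selection step itself, but the digital Downton Abbey reduces to exactly this kind of delegation.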
And in the furthest conceivable future? Brain-computer interfaces. As startling as ‘microchips in brains’ may seem from the present, when looked at through a long lens, this proposition is simply the removal of the final communicative barrier between human and machine: speech. Why bother asking: “What’s the weather?” when you can simply think that question and have it answered? Or, when the sun goes down in the evening, enjoy the thermostat’s doting response to the subconscious call of your parasympathetic nervous system for warmth?
Mind the digital gap
While underlying enabling technologies grow more complicated, their reach, accessibility and usability grow exponentially. Leaders would be wise to plan for a world where every interaction is mediated through a technological interface. In 2011, venture capitalist Marc Andreessen famously said, “Software is eating the world.” His statement recognised the fact that it is bits and bytes, rather than bricks and mortar, that will define our future. Digital experiences are more scalable than physical. Software, thanks to updates, can improve over time. Physical hardware depreciates while compiled code is protected and less prone to reverse engineering. Covid-19 has further catalysed this shift. As the pandemic stressed physical supply chains beyond their limits, digital networks proved as elastic and resilient as ever. The primacy of digital is no longer in doubt.
The governance and policy takeaways are myriad, but the increasing primacy of digital, virtual and ambient experiences brings with it a risk of an exponentially widening digital divide. Today, commercial sports stadia are being constructed that require the use of a smartphone to enter (digital ticket), transact (digital wallet) and engage (digital scoreboard). Will tomorrow’s public services be designed in such a way as to require digital IDs? AR glasses? It may be critical to ensure that access to necessities doesn’t gradually begin to require, or even presume, the availability of certain commercial technologies.
Intelligence is as intelligence does
Forrest Gump said: “Stupid is as stupid does.” Cinema’s consummate everyman recognised that a person should be judged by their actions, not their appearance. Our research suggests that Gump’s homespun wisdom applies equally well to the future of information and machine intelligence. Indeed, a perspective informed by the long arc of history suggests that even the term ‘artificial intelligence’ (AI) may well become an anachronism – a label belonging to a transitional time, one in which we had yet to realise that, whether it sits in vivo or in silico, intelligence is intelligence.
Consider the impact of AI: as machines become more capable, feats once considered to require intelligence are routinely dropped from the definition of AI. ‘AI’ has thus become a catch-all term for whatever machines cannot do yet. Our human need to feel exceptional finds us simultaneously dismissive of past advances in machine intelligence (for example, the computer Deep Blue beat chess champion Garry Kasparov in 1997, and in 2015 AlphaGo became the first computer to beat a human professional at Go without handicaps) and doubtful about upcoming milestones.
Our psychological fragility notwithstanding, AI’s next act is likely to be affective intelligence: the ability to discern and emulate human emotions and, in turn, to begin to engage in empathic interactions and even relationships. Imagine humorous machines, charming machines, or even spiritual machines.
To the degree that humour, charm or spirituality continue to become describable by data, they in turn become increasingly learnable by deep neural networks. There is probably little in the way of individual human skills – even soft and creative skills or the coherent expression of beliefs – that, given enough information and computation, machines won’t one day emulate.
And after that? The furthest conceivable informational futures point toward versatility. Machines’ astonishing ability to learn and subsequently master individual skills is one thing, but the ability of a single machine to emulate a well-rounded individual’s skills and personality is still a long way off. That said, when it comes to general intelligence the standard for ‘success’ isn’t set with reference to Albert Einstein or William Shakespeare. In their earliest incarnations, digital personas will likely underwhelm. It is likely, though, that thanks to exponential increases in training data and processing power – and an increasing symbiosis between technology and human biology – we will see mechanical minds quickly follow an upward path towards eventual parity with, and even superiority to, our own.
The rise of the machines is already well under way, and accelerating. Popular science fiction tends to make this a story about malevolent sentience – mechanical minds as supervillains with dark agendas. In truth, software has always been neutral, manifesting the explicit orders and tacit biases of its developers. As information technology continues to evolve from our telling machines what to calculate towards teaching machines what to discern, it will be increasingly important for organisations, governments and regulators to closely monitor the ‘curriculum’. How can we develop artificial intelligence that embodies our explicitly shared financial, social and ethical values? We must train our digital children well, teaching them to do as we say, not necessarily as we’ve done.
Invest in moonshots
Our species has always been defined (or at the very least, differentiated) by our ability to learn, create and adapt. Roughly 2.6 million years ago, Homo habilis created the first stone tools, thus freeing time and energy for higher-order pursuits. The Sumerians created the first written language 5,000 years ago as a life hack for offloading knowledge. Like stone tools, this advance also freed time and energy for other pursuits, this time of an even higher order. Five hundred years ago, the printing press similarly provided a life hack for communication, and 75 years ago, the digital computer one for calculation. When seen through this long lens, projected advances in computing are neither hero nor villain. Rather, they represent the latest in our species’ long series of transformative adaptations in the pursuit of efficiency.
Though the challenges we face are becoming progressively more complex, our collective creativity and intelligence appear to be evolving faster than the challenges themselves. Humanity’s ability to come up with life hacks – whether made of stone or of bits and bytes – seems set to continue to give us an exponential edge in responding to both today’s threats and tomorrow’s perils.
Leaders should consider allocating time, mindshare and money for moonshots – projects that might not help us compete today but, given enough inspiration and perspiration, can help us create tomorrow.
American architectural pioneer Daniel Burnham (known for his city plan of Chicago) captured the clarion call of the long view in 1891: “Make no little plans; they have no magic to stir [our] blood and probably themselves will not be realised. Make big plans; aim high in hope and work, remembering that a noble, logical diagram once recorded will never die, but long after we are gone be a living thing, asserting itself with ever-growing insistency. Remember that our [children and grandchildren] are going to do things that would stagger us. Let your watchword be order and your beacon beauty.”
Inspirational quotes are not business cases. But in the context of the long view (looking both forward and backward) they remind us it’s imperative that we as business, civic and academic leaders spend time thinking beyond quarterly numbers and quarrelling constituents. Indeed, we must plant seeds in a field we will never harvest.
That’s not just stewardship, that’s leadership.
This article is an edited excerpt from ‘Technology Futures: Projecting the Possible, Navigating What’s Next’, an insight report published in April 2021 by the World Economic Forum in collaboration with Deloitte.
To download the full report, visit here.