The Future of AI: How Artificial Intelligence Will Change the World

In a nondescript building close to downtown Chicago, Marc Gyongyosi and the small but growing crew of IFM/Onetrack.AI have one principle that rules them all: think simple. The words are written in simple font on a sheet of paper that’s stuck to a rear upstairs wall of their industrial two-story workspace. What they’re doing here with artificial intelligence, however, isn’t simple at all.

Sitting at his cluttered desk, located near an oft-used ping-pong table and prototypes of drones from his college days suspended overhead, Gyongyosi pulls up grainy video footage of a forklift driver operating his vehicle in a warehouse. It was captured from overhead courtesy of a Onetrack.AI “forklift vision system.”

Artificial intelligence is shaping the future of humanity across nearly every industry. It is already the main driver of emerging technologies like big data, robotics and IoT, and it will continue to act as a technological innovator for the foreseeable future.

Employing machine learning and computer vision for detection and classification of various “safety events,” the shoebox-sized device doesn’t see all, but it sees plenty. Like which way the driver is looking as he operates the vehicle, how fast he’s driving, where he’s driving, locations of the people around him and how other forklift operators are maneuvering their vehicles. IFM’s software automatically detects safety violations — like cell phone use — and notifies warehouse managers so they can take immediate action. The main goals are to prevent accidents and increase efficiency. The mere knowledge that one of IFM’s devices is watching, Gyongyosi claims, has had “a huge effect.”
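
In outline, such a system runs each camera frame through a trained vision model and raises an alert whenever a violation class scores above a threshold. The sketch below follows only that generic pattern; the class labels, threshold, stub classifier and file name are illustrative assumptions, not details of IFM/Onetrack.AI’s actual software.

```python
# Generic sketch of a frame-by-frame safety-event monitor (illustrative only).
import cv2  # OpenCV, used here just to read video frames

SAFETY_CLASSES = ["cell_phone_use", "speeding", "pedestrian_too_close"]  # hypothetical labels
ALERT_THRESHOLD = 0.8  # hypothetical confidence cutoff


def classify_frame(frame):
    """Stand-in for a trained detector; a real system would run a neural network here."""
    return {label: 0.0 for label in SAFETY_CLASSES}  # dummy scores so the sketch runs


def monitor(video_path: str) -> None:
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of stream
        for label, confidence in classify_frame(frame).items():
            if confidence >= ALERT_THRESHOLD:
                # In production this would notify a warehouse manager rather than print.
                print(f"ALERT: {label} (confidence {confidence:.2f})")
    capture.release()


if __name__ == "__main__":
    monitor("warehouse_clip.mp4")  # hypothetical file name
```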

“If you think about a camera, it really is the richest sensor available to us today at a very interesting price point,” he says. “Because of smartphones, camera and image sensors have become incredibly inexpensive, yet we capture a lot of information. From an image, we might be able to infer 25 signals today, but six months from now we’ll be able to infer 100 or 150 signals from that same image. The only difference is the software that’s looking at the image … Every customer is able to benefit from every other customer that we bring on board because our systems start to see and learn more processes and detect more things that are important and relevant.”

The Evolution of AI

IFM is just one of countless AI innovators in a field that keeps growing. For example, of the 9,130 patents received by IBM inventors in 2020, 2,300 were AI-related. Tesla founder and tech titan Elon Musk donated $10 million to fund ongoing research at the nonprofit research company OpenAI — a mere drop in the proverbial bucket if his $1 billion co-pledge in 2015 is any indication.

After several decades of sporadic dormancy, an evolution that began with “knowledge engineering” progressed to model- and algorithm-based machine learning and an increasing focus on perception, reasoning and generalization. Now AI has retaken center stage like never before — and it won’t cede the spotlight anytime soon.

Why is artificial intelligence important?

AI is important because it forms the very foundation of computer learning. Through AI, computers have the ability to harness massive amounts of data and use their learned intelligence to make optimal decisions and discoveries in fractions of the time that it would take humans.

What industries will change?

There’s virtually no major industry modern AI — more specifically, “narrow AI,” which performs objective functions using data-trained models and often falls into the categories of deep learning or machine learning — hasn’t already affected. That’s especially true in the past few years, as data collection and analysis has ramped up considerably thanks to robust IoT connectivity, the proliferation of connected devices and ever-speedier computer processing.

Some sectors are at the start of their AI journey; others are veteran travelers. Both have a long way to go. Regardless, the impact AI is having on our present-day lives is hard to ignore.

  • Transportation: Although it could take some time to perfect them, autonomous cars will one day ferry us from place to place.
  • Manufacturing: AI-powered robots work alongside humans to perform a limited range of tasks like assembly and stacking, and predictive analysis sensors keep equipment running smoothly.
  • Healthcare: In the comparatively AI-nascent field of healthcare, diseases are more quickly and accurately diagnosed, drug discovery is sped up and streamlined, virtual nursing assistants monitor patients and big data analysis helps to create a more personalized patient experience.
  • Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist human instructors and facial analysis gauges the emotions of students to help determine who’s struggling or bored and better tailor the experience to their individual needs.
  • Media: Journalism is harnessing AI, too, and will continue to benefit from it. Bloomberg uses Cyborg technology to help make quick sense of complex financial reports. The Associated Press employs the natural language abilities of Automated Insights to produce 3,700 earnings report stories per year — nearly four times more than in the recent past.
  • Customer Service: Last but hardly least, Google is working on an AI assistant that can place human-like calls to make appointments at, say, your neighborhood hair salon. In addition to words, the system understands context and nuance.

But those advances — and numerous others — are only the beginning. There’s much more to come. 

“I think anybody making assumptions about the capabilities of intelligent software capping out at some point are mistaken,” says David Vandegrift, CTO and co-founder of the customer relationship management firm 4Degrees.

With companies spending billions of dollars on AI products and services annually, tech giants like Google, Apple, Microsoft and Amazon spending billions to create those products and services, and universities making AI a more prominent part of their curricula, big things are bound to happen. Some of those developments are well on their way to being fully realized; some are merely theoretical and might remain so. All are disruptive, for better and potentially worse, and there’s no downturn in sight.

“Lots of industries go through this pattern of winter, winter, and then an eternal spring,” former Google Brain leader and Baidu chief scientist Andrew Ng told ZDNet. “We may be in the eternal spring of AI.”

The Impact of AI on Society

How will AI change work?

During a lecture at Cambridge University, AI expert Kai-Fu Lee championed AI technology and its forthcoming impact while also noting its side effects and limitations. Of the latter, he warned:

“The bottom 90 percent, especially the bottom 50 percent of the world in terms of income or education, will be badly hurt with job displacement … The simple question to ask is, ‘How routine is a job?’ And that is how likely [it is] a job will be replaced by AI, because AI can, within the routine task, learn to optimise itself. And the more quantitative, the more objective the job is—separating things into bins, washing dishes, picking fruits and answering customer service calls—those are very much scripted tasks that are repetitive and routine in nature. In the matter of five, 10 or 15 years, they will be displaced by AI.”

In the warehouses of online giant and AI powerhouse Amazon, which buzz with more than 100,000 robots, picking and packing functions are still performed by humans — but that will change.

AI in the near future

In Mendelson’s view, some of the most intriguing AI research and experimentation that will have near-future ramifications is happening in two areas: “reinforcement” learning, which deals in rewards and punishment rather than labeled data; and generative adversarial networks (GANs for short), which allow computer algorithms to create rather than merely assess by pitting two neural networks against each other. The former is exemplified by the Go-playing prowess of Google DeepMind’s AlphaGo Zero, the latter by original image or audio generation based on learning about a certain subject like celebrities or a particular type of music.
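
Production GANs that synthesize faces or music are far larger, but the adversarial idea itself fits in a short sketch. The toy task below — a generator learning to mimic a one-dimensional Gaussian while a discriminator tries to tell real samples from fakes — along with every layer size and hyperparameter, is an illustrative choice and not drawn from any published system.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each other
# on a 1-D toy distribution.
import torch
import torch.nn as nn


def real_data(n):
    return torch.randn(n, 1) * 1.25 + 4.0  # samples from the "real" distribution


def noise(n):
    return torch.randn(n, 8)  # random input the generator turns into fake samples


G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train D to score real samples as 1 and generated samples as 0.
    opt_d.zero_grad()
    fake = G(noise(64)).detach()  # detach so this step doesn't update G
    d_loss = bce(D(real_data(64)), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train G to fool D into scoring its output as real.
    opt_g.zero_grad()
    g_loss = bce(D(G(noise(64))), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(noise(1000)).mean().item())  # should drift toward 4.0 as G learns the distribution
```

The same adversarial pressure, scaled up to much larger networks and image or audio data, is what lets GANs generate novel faces or music rather than merely classify existing examples.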

On a far grander scale, AI is poised to have a major effect on sustainability, climate change and environmental issues. Ideally and partly through the use of sophisticated sensors, cities will become less congested, less polluted and generally more livable. 

“Once you predict something, you can prescribe certain policies and rules,” Nahrstedt says. Sensors on cars that send data about traffic conditions, for example, could predict potential problems and optimize the flow of traffic. “This is not yet perfected by any means,” she says. “It’s just in its infancy. But years down the road, it will play a really big role.”

Will AI take over the world?

AI is projected to have a lasting impact on just about every industry imaginable, with 60 percent of businesses predicted to be affected by it. We’re already seeing artificial intelligence in our smart devices, cars, healthcare system and favorite apps, and we’ll continue to see its influence permeate deeper into many other industries for the foreseeable future.

AI and privacy risks

Of course, much has been made of the fact that AI’s reliance on big data is already impacting privacy in a major way. Look no further than Cambridge Analytica’s Facebook shenanigans or Amazon’s Alexa eavesdropping, two among many examples of tech gone wild. Without proper regulations and self-imposed limitations, critics argue, the situation will get even worse. In 2015, Apple CEO Tim Cook derided competitors Google and Facebook for greed-driven data mining.

“They’re gobbling up everything they can learn about you and trying to monetize it,” he said in a 2015 speech. “We think that’s wrong.”

Later, during a talk in Brussels, Belgium, Cook expounded on his concern.

“Advancing AI by collecting huge personal profiles is laziness, not efficiency,” he said. “For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound.”

Plenty of others agree. In a 2018 paper published by UK-based human rights and privacy groups Article 19 and Privacy International, anxiety about AI is reserved for its everyday functions rather than a cataclysmic shift like the advent of robot overlords.

“If implemented responsibly, AI can benefit society,” the authors wrote. “However, as is the case with most emerging technology, there is a real risk that commercial and state use has a detrimental impact on human rights.”

The authors concede that the collection of large amounts of data can be used to try to predict future behavior in benign ways, like spam filters and recommendation engines. But there’s also a real threat that it will negatively impact personal privacy and the right to freedom from discrimination.

Preparing for the Future of AI

Speaking at London’s Westminster Abbey in late 2018, internationally renowned AI expert Stuart Russell joked (or not) about his “formal agreement with journalists that I won’t talk to them unless they agree not to put a Terminator robot in the article.” His quip revealed an obvious contempt for Hollywood representations of far-future AI, which tend toward the overwrought and apocalyptic. What Russell referred to as “human-level AI,” also known as artificial general intelligence, has long been fodder for fantasy. But the chances of its being realized anytime soon, or at all, are pretty slim.

“There are still major breakthroughs that have to happen before we reach anything that resembles human-level AI,” Russell explained. 

Russell also pointed out that AI is not currently equipped to fully understand language. This marks a distinct difference between humans and AI at present: humans can translate machine language and understand it, but AI can’t do the same for human language. If we do reach a point where AI is able to understand our languages, however, AI systems would be able to read and understand everything ever written.

“Once we have that capability, you could then query all of human knowledge and it would be able to synthesize and integrate and answer questions that no human being has ever been able to answer,” Russell added, “because they haven’t read and been able to put together and join the dots between things that have remained separate throughout history.”

That gives us a lot to think about. Emulating the human brain is exceedingly difficult, and it’s yet another reason AGI’s future remains hypothetical. Longtime University of Michigan engineering and computer science professor John Laird has conducted research in the field for several decades.

“The goal has always been to try to build what we call the cognitive architecture, what we think is innate to an intelligence system,” he says of work that’s largely inspired by human psychology. “One of the things we know, for example, is the human brain is not really just a homogenous set of neurons. There’s a real structure in terms of different components, some of which are associated with knowledge about how to do things in the world.”

That’s called procedural memory. Then there’s knowledge based on general facts, a.k.a. semantic memory, as well as knowledge about previous experiences (or personal facts), which is called episodic memory. One of the projects at Laird’s lab involves using natural language instructions to teach a robot simple games like Tic-Tac-Toe and puzzles. Those instructions typically involve a description of the goal, a rundown of legal moves and failure situations. The robot internalizes those directives and uses them to plan its actions. As ever, though, breakthroughs are slow to come — slower, anyway, than Laird and his fellow researchers would like.
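
Laird’s actual systems are far more sophisticated, but the flavor of those instructions — a goal, the legal moves, the failure conditions, plus a memory of past attempts — can be sketched as a simple data structure. Everything below, from the field names to the Tic-Tac-Toe strings, is invented for illustration and not taken from his lab’s code.

```python
# Toy representation of game instructions a robot might internalize (illustrative only).
from dataclasses import dataclass, field


@dataclass
class GameSpec:
    goal: str                      # what counts as winning
    legal_moves: list[str]         # procedural knowledge: which actions are allowed
    failure_conditions: list[str]  # when an attempt has failed
    episodes: list[str] = field(default_factory=list)  # episodic memory of past games

    def remember(self, outcome: str) -> None:
        """Record how a finished game went, for recall in later planning."""
        self.episodes.append(outcome)


tic_tac_toe = GameSpec(
    goal="place three of your marks in a horizontal, vertical or diagonal row",
    legal_moves=["mark any empty cell on the 3x3 board"],
    failure_conditions=["the opponent completes a row first", "the board fills with no row"],
)
tic_tac_toe.remember("lost: opponent completed a diagonal")
```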

“Every time we make progress,” he says, “we also get a new appreciation for how hard it is.”

Is AI a threat to humanity? 

More than a few leading AI figures subscribe (some more hyperbolically than others) to a nightmare scenario that involves what’s known as “singularity,” whereby super machines take over and permanently alter human existence through enslavement or eradication.

The late theoretical physicist Stephen Hawking famously postulated that if AI itself begins designing better AI than human programmers, the result could be “machines whose intelligence exceeds ours by more than ours exceeds that of snails.” Elon Musk has warned that AGI is humanity’s biggest existential threat; efforts to bring it about, he has said, are like “summoning the demon.” He has even expressed concern that his pal, Google co-founder Larry Page, could accidentally shepherd something “evil” into existence despite his best intentions. Say, for example, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.” Even IFM’s Gyongyosi, no alarmist when it comes to AI predictions, rules nothing out. At some point, he says, humans will no longer need to train systems; they’ll learn and evolve on their own.

“I don’t think the methods we use currently in these areas will lead to machines that decide to kill us,” he says. “I think that maybe five or 10 years from now, I’ll have to reevaluate that statement because we’ll have different methods available and different ways to go about these things.”

While murderous machines may well remain fodder for fiction, many believe they’ll supplant humans in various ways.

Oxford University’s Future of Humanity Institute published the results of an AI survey. Titled “When Will AI Exceed Human Performance? Evidence from AI Experts,” it contains estimates from 352 machine learning researchers about AI’s evolution in years to come.

There were lots of optimists in this group. According to respondents’ median estimates, machines will be capable of writing school essays by 2026; self-driving trucks will render drivers unnecessary by 2027; AI will outperform humans in the retail sector by 2031; AI could be the next Stephen King by 2049 and the next Charlie Teo by 2053. The slightly jarring capper: by 2137, all human jobs will be automated. But what of humans themselves? Sipping umbrella drinks served by droids, no doubt.

Diego Klabjan, a professor at Northwestern University and founding director of the school’s Master of Science in Analytics program, counts himself an AGI skeptic.

“Currently, computers can handle a little more than 10,000 words,” he explains. “So, a few million neurons. But human brains have billions of neurons that are connected in a very intriguing and complex way, and the current state-of-the-art [technology] is just straightforward connections following very easy patterns. So going from a few million neurons to billions of neurons with current hardware and software technologies — I don’t see that happening.”

How will we use AGI?

Klabjan also puts little stock in extreme scenarios — the type involving, say, murderous cyborgs that turn the earth into a smoldering hellscape. He’s much more concerned with machines — war robots, for instance — being fed faulty “incentives” by nefarious humans. As MIT physics professor and leading AI researcher Max Tegmark put it in a 2018 TED Talk, “The real threat from AI isn’t malice, like in silly Hollywood movies, but competence — AI accomplishing goals that just aren’t aligned with ours.” That’s Laird’s take, too.

“I definitely don’t see the scenario where something wakes up and decides it wants to take over the world,” he says. “I think that’s science fiction and not the way it’s going to play out.”

What Laird worries most about isn’t evil AI, per se, but “evil humans using AI as a sort of false force multiplier” for things like bank robbery and credit card fraud, among many other crimes. And so, while he’s often frustrated with the pace of progress, AI’s slow burn may actually be a blessing.

“Time to understand what we’re creating and how we’re going to incorporate it into society,” Laird says, “might be exactly what we need.”

But no one knows for sure.

“There are several major breakthroughs that have to occur, and those could come very quickly,” Russell said during his Westminster talk. Referencing the rapid, transformational effect of atom splitting, first achieved by physicist Ernest Rutherford in 1917, he added, “It’s very, very hard to predict when these conceptual breakthroughs are going to happen.”

But whenever they do, if they do, he emphasized the importance of preparation. That means starting or continuing discussions about the ethical use of AGI and whether it should be regulated. That means working to eliminate data bias, which has a corrupting effect on algorithms and is currently a fat fly in the AI ointment. That means working to invent and augment security measures capable of keeping the technology in check. And it means having the humility to realize that just because we can doesn’t mean we should.

“Most AGI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history. It could enable brutal global dictatorship with unprecedented inequality, surveillance, suffering and maybe even human extinction,” Tegmark said in his TED Talk. “But if we steer carefully, we could end up in a fantastic future where everybody’s better off — the poor are richer, the rich are richer, everybody’s healthy and free to live out their dreams.”
