Artificial intelligence (AI) has seen massive growth and advancement over the past decade. Impressive systems like DeepMind’s AlphaGo beating world champions at complex games generate enormous excitement about the potential of AI. However, while today’s AI can match or even exceed human capabilities on narrow, specific tasks, the deep, flexible learning that humans possess remains elusive.
Current AI is still far from the flexible, well-rounded intelligence of people. Mastering the game of Go takes great sophistication, but it is just one narrow challenge. Real-world situations involve dynamic, multifaceted complexities that today’s AI cannot handle. To achieve human-level intelligence, also called artificial general intelligence (AGI), key barriers in deep learning must still be overcome.
What Exactly is Deep Learning and Why Does it Matter?
Deep learning is a particular type of machine learning based on artificial neural networks with many layers. The “deep” in deep learning refers to the multiple layers in these neural nets that enable learning abstract concepts and patterns from huge datasets.
Deep learning has driven much of the recent progress in fields like computer vision, speech recognition, robotics, and natural language processing. For example, deep learning powers technologies like autonomous vehicles, machine translation, facial recognition, and digital assistants like Siri and Alexa.
“Deep learning is the engine behind artificial intelligence, enabling technologies that interpret images and speech, make recommendations, and understand language.” – Nvidia CEO Jensen Huang
However, while deep learning has enabled major advances in narrow AI applications, it has significant limits compared to human intelligence:
- Systems are trained for highly specific tasks rather than general competencies. An AI can master the game of Go yet lack basic reasoning skills.
- They lack the flexibility and adaptability humans intuitively demonstrate. Small changes outside narrow training data cause performance to plummet.
- There is little explainability – we often don’t understand why AIs make particular decisions. Their reasoning is inscrutable.
- Deep learning models struggle to transfer knowledge and learning across different tasks. Humans adeptly apply knowledge widely.
For AI to achieve broader, more general intelligence like people's, key cognitive capabilities are still lacking:
- Dynamic abstract reasoning across many domains
- Accumulating common sense and general world knowledge
- Transferring learning seamlessly between tasks
- Developing explanatory models of causality behind observations
- Adapting to novel, unfamiliar situations and learning independently without human supervision
In other words, today’s AI is proficient at narrow tasks under constrained conditions, but lacks the flexible general intellect humans possess. Progressing from narrow AI to artificial general intelligence will require crossing some formidable frontiers in deep learning research.
Narrow AI Versus General Intelligence
Let’s examine in more detail the capabilities of today’s AI versus what is needed for more human-like general intelligence. This highlights the gaps still remaining.
Narrow AI Successes
There’s no doubt machine learning and deep neural networks have enabled huge advances in specialized niches. For example:
- Computer vision AI like DeepMind’s AlphaFold can predict complex protein structures with high accuracy – a boon to medical research.
- Natural language processing AI like OpenAI’s GPT-3 can generate remarkably human-sounding text for essays, stories, and articles when given a prompt.
- AI assistants like Alexa, Siri and Google Assistant can understand voice commands, answer questions, and help with basic tasks.
- Content recommendation engines leverage deep learning to suggest relevant online videos, products, and information tailored to individual interests.
- AI is approaching radiologist-level performance at interpreting certain medical scans for diagnosis.
However, these successes represent narrow peaks of performance limited to specific tasks. They do not indicate generally capable AI systems.
Limits of Current AI
While today’s AI can match or even exceed human capabilities for certain constrained tasks, systems struggle with:
- Broad knowledge – humans have vast stores of general world knowledge accumulated through experience that AI lacks.
- Common sense – simple for humans but very difficult for AI, like knowing objects don’t vanish spontaneously.
- Abstraction – manipulating conceptual ideas rather than just concrete examples.
- Generalization – adapting learned concepts across diverse contexts and tasks.
- Transfer learning – leveraging and building upon knowledge gained rather than essentially starting over on each new task.
- Explanation – providing intuitive reasons for conclusions rather than just statistical correlations.
- Reasoning – thinking logically through cause-effect relations rather than just pattern recognition.
- Robustness – small changes in input data or the environment can derail AI systems whereas human cognition is much more flexible.
In essence, today’s AI excels at narrow, well-defined tasks but still lacks the dynamic flexibility and general competencies characteristic of human intelligence.
Requirements for Broad, Deep Learning
For AI to achieve truly deep learning and more human-like general intelligence, the key capabilities still required include:
- Common sense – the vast implicit facts and social conventions humans accumulate through experience.
- Causality – modeling and reasoning about cause-effect mechanisms in the world.
- Transfer learning – seamlessly transferring knowledge between tasks and contexts.
- Abstraction – fluidly manipulating both concrete and conceptual information.
- Unsupervised learning – discovering patterns and concepts without human labeling or guidance.
- General world models – encoding broad knowledge about objects, agents, goals, economics, culture.
- Memory – encoding, storing, retrieving, and connecting prior knowledge and experiences over time.
- Creativity – recombining ideas in novel ways and making mental leaps.
- Language – mastering the complexity and nuance of linguistic communication.
These more general capabilities get closer to unlocking the flexible, cross-domain deep learning characteristic of human intelligence. Developing AI that integrates this cluster of competencies remains extremely challenging.
Major Frontiers in Deep Learning Research
Achieving artificial general intelligence demands major research breakthroughs in several key areas. Here we survey the frontiers and approaches at the leading edge.
Unsupervised Learning

Nearly all major AI achievements to date rely on supervised learning from meticulously labeled training data. Humans painstakingly annotate millions of examples like images, texts, or audio with relevant tags so machine learning algorithms can extract patterns.
But hand labeling enough data to teach AI all the multifaceted knowledge humans implicitly possess is completely infeasible given the scale and complexity of the real world. Moreover, supervised learning undermines flexibility since systems are confined to preset classes.
Unsupervised learning methods that allow AI to find structure and patterns in unlabeled data are far more promising. This self-directed learning aligns with how humans acquire common sense knowledge – through autonomous exploration rather than explicitly programmed instruction.
Unpacking complex environments without guides or teachers is tremendously more difficult. But overcoming this challenge is critical for scalable artificial general intelligence. Promising unsupervised learning techniques researchers are exploring include:
Generative Adversarial Networks
GANs involve two competing neural networks – one generating synthetic data mimicking real data, the other discriminating real vs fake data – locked in an adversarial “arms race” to improve each other. As generation quality increases, discrimination gets harder. GANs can produce highly realistic artificial data.
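As a deliberately minimal illustration of this adversarial loop, the pure-Python sketch below pits a linear generator against a logistic discriminator on a 1-D Gaussian target. The target distribution, model forms, and learning rate are illustrative assumptions, not a practical GAN recipe.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))   # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-x))

# Real data ~ N(4, 1); generator G(z) = a*z + b maps noise z ~ N(0, 1)
# toward it; discriminator D(x) = sigmoid(w*x + c) scores "realness".
a, b = 1.0, 0.0    # generator parameters
w, c = 0.1, 0.0    # discriminator parameters
lr = 0.01

for _ in range(5000):
    x = random.gauss(4, 1)    # real sample
    z = random.gauss(0, 1)    # noise
    g = a * z + b             # fake sample

    # Discriminator ascent on log D(x) + log(1 - D(g))
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * ((1 - d_real) * x - d_fake * g)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascent on log D(g): try to fool the discriminator
    d_fake = sigmoid(w * g + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

fakes = [a * random.gauss(0, 1) + b for _ in range(1000)]
print(sum(fakes) / len(fakes))   # mean of generated samples; trains toward the real mean
```

Even at this toy scale, the dynamic is visible: as the generator's samples drift toward the real distribution, the discriminator's job gets harder, which is exactly the "arms race" described above.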
Autoencoders

Autoencoders are neural networks that encode inputs into compact, lower-dimensional representations and then try to reconstruct the original input from that representation. This forces the network to capture only the most salient features and semantics, making autoencoders useful for dimensionality reduction and efficient feature detection.
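A toy version of the idea fits in a few lines: the sketch below squeezes 2-D points lying near a line through a 1-D bottleneck and trains by plain gradient descent. The data distribution, initial weights, and learning rate are all illustrative choices.

```python
import random

random.seed(1)

# 2-D inputs squeezed through a 1-D bottleneck. The data lies near the
# line x2 = 2*x1, so a single latent number can capture most of it.
enc = [0.5, 0.5]   # encoder weights: h = enc[0]*x1 + enc[1]*x2
dec = [0.5, 0.5]   # decoder weights: reconstruction r_i = dec[i]*h
lr = 0.02

def sample():
    x1 = random.uniform(-1, 1)
    return [x1, 2 * x1 + random.gauss(0, 0.05)]

def loss(points):
    total = 0.0
    for x in points:
        h = enc[0] * x[0] + enc[1] * x[1]
        total += sum((dec[i] * h - x[i]) ** 2 for i in range(2))
    return total / len(points)

eval_set = [sample() for _ in range(200)]
initial = loss(eval_set)

for _ in range(3000):
    x = sample()
    h = enc[0] * x[0] + enc[1] * x[1]
    err = [dec[i] * h - x[i] for i in range(2)]        # reconstruction error
    dh = 2 * (err[0] * dec[0] + err[1] * dec[1])       # dL/dh via chain rule
    for i in range(2):
        dec[i] -= lr * 2 * err[i] * h                  # dL/d dec_i
        enc[i] -= lr * dh * x[i]                       # dL/d enc_i

final = loss(eval_set)
print(initial, final)   # reconstruction error drops sharply after training
```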
Self-Supervised Learning

Rather than relying on human labeling, systems train by solving pretext tasks that require inferring some obscured part of the input from the rest, like predicting randomly masked words in a sentence. This exposes the underlying structure of the data.
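The masked-word pretext task can be illustrated without any neural network at all: the hypothetical snippet below "trains" on unlabeled sentences simply by counting which words appear between each pair of neighbors, then fills a mask from those counts.

```python
from collections import Counter, defaultdict

# Unlabeled "corpus" for the pretext task (illustrative sentences).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat sat on a chair",
    "the bird flew over the mat",
]

# Record which middle words occur between each (left, right) neighbor pair.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        context_counts[(words[i - 1], words[i + 1])][words[i]] += 1

def fill_mask(left, right):
    """Predict the most frequent word seen between this neighbor pair."""
    candidates = context_counts.get((left, right))
    return candidates.most_common(1)[0][0] if candidates else None

print(fill_mask("cat", "on"))   # → sat
```

No one labeled "sat" as the right answer; the supervision signal came from the text itself, which is the essence of self-supervision.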
Reinforcement Learning

Agents learn through trial-and-error interactions with dynamic environments. The agent chooses actions and receives feedback on those choices rather than labeled examples. Over many trials, this feedback shapes behavior toward achieving goals.
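A minimal worked example of this loop is tabular Q-learning, sketched below on a toy five-state corridor; the environment, learning rate, and discount factor are arbitrary illustrative choices.

```python
import random

random.seed(0)

# A 5-state corridor: start at state 0, reward +1 on reaching state 4.
# Actions: 0 = step left, 1 = step right (clipped at the edges).
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action choice: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: q[s][act])
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # Q-update from the observed (s, a, reward, s2) transition.
        q[s][a] += alpha * (reward + gamma * max(q[s2]) - q[s][a])
        s = s2

policy = [max((0, 1), key=lambda act: q[s][act]) for s in range(N_STATES)]
print(policy)   # greedy policy: "right" (1) in every state before the goal
```

Notice that no state was ever labeled with its correct action; the policy emerged purely from reward feedback.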
Advancing unsupervised learning is pivotal for creating AI systems that develop true understanding of the world through autonomous discovery rather than confined labeled datasets.
💡 Enabling AI systems to learn without human supervision or labeling is critical for scalable general intelligence.
Transfer Learning

Humans adeptly apply knowledge gained in one domain to accelerate learning in completely new domains. For example, learning French helps you learn Spanish faster. In contrast, today’s AI systems train on a highly specific task, say identifying cats, but that learning does not transfer to identifying dogs without full retraining.
Transfer learning would allow AI to build cumulatively on existing knowledge and extrapolate intelligently rather than essentially starting from scratch on each new task. Quick adaptation through transfer is central to general intelligence.
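The contrast can be made concrete with a deliberately tiny experiment: fit a one-parameter model to task A, then reuse its learned weight as the starting point for a related task B. The target slopes and hyperparameters below are illustrative.

```python
# Fit a one-parameter model y = w * x by gradient descent on a tiny
# fixed dataset, counting the steps needed to reach a loss tolerance.
def train(target_slope, w, lr=0.1, tol=1e-3):
    """Return (final_w, steps) for fitting y = target_slope * x."""
    data = [(x, target_slope * x) for x in (-2.0, -1.0, 1.0, 2.0)]
    steps = 0
    while sum((w * x - y) ** 2 for x, y in data) / len(data) > tol:
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
        steps += 1
    return w, steps

w_a, _ = train(3.0, w=0.0)              # "pretrain" on task A (slope 3.0)
_, steps_transfer = train(3.2, w=w_a)   # fine-tune on related task B
_, steps_scratch = train(3.2, w=0.0)    # train task B from scratch
print(steps_transfer, steps_scratch)    # → 4 8
```

Starting near a related solution halves the work even in this one-dimensional toy; at the scale of deep networks, the same principle can save enormous amounts of data and compute.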
Some promising transfer learning techniques researchers are exploring:
Multi-Task Learning

Systems train simultaneously across multiple tasks and datasets, which forces the extraction of more flexible representations applicable to different domains.
Meta-Learning

The system learns how to learn – quickly updating and adapting to new tasks and environments based on only a small number of examples.
Modular Architectures

Breaking systems down into reusable modules focused on specific capabilities that can be variously combined and composed for different tasks.
Enabling deep learning to transfer seamlessly between tasks is key to scaling up AI to general intelligence. Without transfer, each bit of learning will remain isolated. Transfer allows accumulation into integrative knowledge.
💡 Enabling seamless transfer of learning across tasks and contexts is essential for flexible general intelligence.
Common Sense Reasoning
Humans have a vast repository of everyday “common sense” we accumulate through life experience. This allows us to reason broadly about the world in flexible ways by leveraging intuition about physics, psychology, culture, and more.
In contrast, AI systems today interpret inputs very literally, oblivious to obvious assumptions, implications, and context that any human would instantly recognize. For example:
- A system incorrectly said a mushroom can be used to weigh down paper because it lacked the common sense that mushrooms are light, not heavy objects.
- An AI thought a prince giving Cinderella a shoe implied he had a foot fetish rather than understanding social customs around marriage.
To achieve more human-like reasoning, researchers are working to ingrain different types of common sense into AI systems:
Physical Common Sense
Intuitive understanding of everyday objects and mechanics – that objects are solid, they fall if unsupported, water makes things wet, etc. Absent this, AI systems easily make ridiculous errors.
Social Common Sense
Core principles of human relationships and interactions – that people have desires, emotions, and motivations driving behavior. People have diverse personalities and preferences.
Temporal Common Sense
Understanding that events play out over time with causes and effects. Knowing that goals generally involve taking a series of steps with dependence and precedence between the steps.
Large knowledge bases like ConceptNet and WebChild seek to codify the myriad facts comprising human common sense into a format usable by AI. Integrating this data efficiently with learning algorithms remains an active challenge.
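In the spirit of such knowledge bases (though using made-up triples, not actual ConceptNet data), a minimal triple store with pattern queries looks like this:

```python
# (subject, relation, object) triples plus a query helper.
triples = {
    ("mushroom", "HasProperty", "light"),
    ("rock", "HasProperty", "heavy"),
    ("rock", "UsedFor", "weighing down paper"),
    ("water", "CapableOf", "making things wet"),
    ("unsupported object", "CapableOf", "falling"),
}

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the fields that are not None."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

# Which things does the KB say can weigh down paper? Not the mushroom.
print(query(relation="UsedFor", obj="weighing down paper"))
```

A system with access to even this crude store could avoid the mushroom-as-paperweight mistake described earlier; the hard part, as noted above, is integrating millions of such facts with learned models.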
💡 Giving AI the implicit facts and conventions that come naturally to humans is essential for enabling general common sense
Memory and Knowledge Representation
Human intelligence relies heavily on memory. Our minds store vast troves of knowledge built up over years that can be rapidly retrieved and flexibly combined as needed for perception, planning, learning, and problem solving.
In contrast, current AI systems have minimal memory capabilities. They predominantly rely on pattern recognition in immediate data or constrained training datasets. This hamstrings cumulative learning over timescales longer than the training episode.
To develop more human-like general intelligence, researchers are exploring architectures including:
Neural Turing Machines
Neural networks coupled with addressable external memory banks that can be written to and read from based on context. This decouples memory storage from processing.
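A rough sketch of that idea uses soft, content-based addressing over a small memory matrix; the memory size, key dimensions, and sharpness parameter below are illustrative.

```python
import math

# External memory: a matrix of rows, addressed softly by key similarity.
memory = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def weights_for(key, sharpness=10.0):
    """Attention weights: softmax over dot-product similarity to each row."""
    sims = [sum(k * m for k, m in zip(key, row)) for row in memory]
    return softmax([sharpness * s for s in sims])

def read(key):
    """Soft read: blend memory rows by their attention weights."""
    w = weights_for(key)
    return [sum(wi * row[i] for wi, row in zip(w, memory))
            for i in range(len(memory[0]))]

def write(key, value):
    """Soft write: move each row toward `value` by its attention weight."""
    w = weights_for(key)
    for wi, row in zip(w, memory):
        for i in range(len(row)):
            row[i] = (1 - wi) * row[i] + wi * value[i]

write([1.0, 0.0, 0.0], [0.5, 0.5, 0.0])   # store a value at slot "0"
print(read([1.0, 0.0, 0.0]))              # retrieves roughly [0.5, 0.5, 0]
```

Because reads and writes are weighted sums, the whole mechanism is differentiable, which is what lets a controller network learn where to store and fetch information.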
Hierarchical Temporal Memory
Systems that mirror the layered processing of the neocortex and its ability to encode time-based sequences. Each level learns more sophisticated concepts built on simpler representations from lower levels.
Relational and Graph Networks

Networks that dynamically learn associations between entities like people, places, things, events, and concepts. This mimics how human memory connects related knowledge.
Lifelong Learning

Systems that accumulate knowledge and representations over long timespans rather than training on fixed datasets in short supervised episodes. This enables integrating and transferring learning.
Robust representation and rapid retrieval of memories relevant to current contexts is pivotal for artificial general intelligence. While deep neural networks have limited memory, new approaches lifting this constraint show promise.
💡 Developing more human-like encoding, storage and retrieval of knowledge will expand learning capacity.
Reasoning and Explainability
A persistent limitation of data-driven deep learning is the lack of model interpretability – the logic behind decisions is opaque. This is problematic for scientific understanding and trust.
For example, an AI system can accurately classify images yet lacks any ability to explain what about the images led to those classifications. It simply recognizes statistical patterns. The reasoning process is inscrutable.
Humans also leverage more structured logical reasoning comparing hypotheses, weighing evidence, combining broad background knowledge with observations to derive justified conclusions. We build explanatory mental models for why things happen.
To progress towards AGI, researchers are exploring approaches to make AI reasoning more understandable, including:
Symbolic AI

Symbolic AI represents knowledge as structured abstractions like logic, graphs, rules, and ontologies. This enables explainable formal reasoning, unlike the black box of neural networks.
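A minimal example of such explainable reasoning is forward chaining over explicit if-then rules; every conclusion below can be traced to the premises that produced it. The facts and rules are, of course, toy examples.

```python
# Forward chaining: repeatedly fire rules whose premises are all known,
# recording why each new fact was derived.
facts = {"socrates is a man"}
rules = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]

derivations = {}          # conclusion -> the premises that justified it
changed = True
while changed:            # keep applying rules until nothing new fires
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derivations[conclusion] = premises
            changed = True

print(facts)
print(derivations["socrates is mortal"])   # the explanation: its premises
```

Unlike a neural classifier, this system can answer "why?" for every conclusion, which is precisely the transparency that black-box models lack.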
Neuro-Symbolic Hybrids

Integrating connectionist AI like deep learning with symbolic AI achieves the strengths of both – statistical pattern recognition and logical reasoning. Systems that combine neural networks with accessible knowledge bases are promising.
Causal Models

Causal models represent functional relationships between variables, allowing interventions to be simulated and their effects predicted. This enables understanding the causes behind observations.
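A toy structural causal model makes the idea concrete: the sketch below simulates a confounded system and estimates the effect of an intervention do(X = 1) by severing X's dependence on the confounder. The equations and noise levels are illustrative assumptions.

```python
import random

random.seed(0)

# Structural causal model: Z -> X -> Y and Z -> Y (Z confounds X and Y).
#   Z = noise;  X = 2*Z + noise;  Y = 3*X + Z + noise
# The intervention do(X = x) cuts the Z -> X arrow, isolating X's effect.
def sample(do_x=None):
    z = random.gauss(0, 1)
    x = do_x if do_x is not None else 2 * z + random.gauss(0, 0.1)
    y = 3 * x + z + random.gauss(0, 0.1)
    return x, y, z

n = 20000
interventional = [sample(do_x=1.0)[1] for _ in range(n)]
print(sum(interventional) / n)   # E[Y | do(X=1)] ≈ 3*1 + E[Z] = 3
```

Merely observing X = 1 would mix in Z's influence and overstate X's effect; simulating the intervention recovers the true causal coefficient, which is the capability this research direction aims to give AI.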
Argument Mining

Automatically extracting the structure of arguments made in texts – claims, premises, reasoning types. This helps teach AI more human-like argument construction and critical analysis.
Greater reasoning capacity and explainability will produce AI technology with expanded transparency, trustworthiness, and alignment with human values.
💡 Advancing reasoning and explainability will produce AI that truly understands its decisions and actions.
The Long Road Ahead
While today’s AI has achieved impressive results on particular narrow tasks, the grand vision of flexible, human-level artificial general intelligence remains distant. Tremendous challenges across unsupervised learning, transfer learning, reasoning, common sense, memory, and more must be met to realize this goal.
But the remarkable progress of the last decade keeps alive the hope that AI systems capable of broader, more general, and explainable learning are possible. Sustained investment and interdisciplinary research continue to march us slowly but surely in the right direction.
The Future Potential
Success in advancing AI to human-level general intelligence could enable fantastic benefits, including:
- Much more capable and affordable digital assistants
- Automation of dangerous and tedious jobs
- Personalized education and medical treatment
- Accelerated scientific discovery
- New art and entertainment
- Seamless translation between languages and cultures
But significant risks like job displacement, breaches of privacy, and misaligned goals must also be navigated carefully through policies and oversight. The path ahead will demand wisdom and responsibility as much as technical ingenuity.
Certain principles may help guide research and development of artificial general intelligence:
- Openness – research conducted transparently, in collaboration among public, private, governmental, and academic institutions, to enable responsible oversight.
- Human improvement, not replacement – AGI as an assistant to human thriving rather than seeking to replicate and replace people. Preserving human agency and dignity.
- Beneficence – embedding ethical frameworks like Isaac Asimov’s laws of robotics to ensure systems help, not harm, humans.
- Care – meticulous precaution, risk assessment, and monitoring, given the profound consequences of realizing AGI.
By thoughtfully keeping these principles in mind, we may traverse the frontiers ahead responsibly and reap the fruits ethically.
The creation of artificial general intelligence promises to be a long journey full of challenges and risks. But the potential for positive transformation of the human condition motivates perseverance. With diligence and wisdom, we may yet build systems capable of deep learning that uplift humanity. But there are no shortcuts. Step by step, progress marches forward.
Conclusion: Artificial general intelligence
While today’s AI achieves superhuman performance on narrow tasks, systems lack the flexible cross-domain deep learning characteristic of human intelligence. Major research advances across unsupervised learning, transfer learning, reasoning, common sense, memory, and more will be required for AI to reach human-level artificial general intelligence. This remains a distant but worthy goal requiring sustained collaboration across public and private institutions. If researchers stay the course, perhaps one day AI will think as broadly, intuitively, and insightfully as people.
I’d love to hear your thoughts and feedback on this long read! Please leave your comments below. And thanks so much for reading all the way to the end!
If you want to read more interesting articles about artificial intelligence check our main Blog Page.