Development Philosophy Behind AGI

In this special guest feature, nationally recognized entrepreneur and software developer Charles Simon, BSEE, MSCS, discusses the development philosophy behind AGI and where the AGI field seems poised to go in the near future. Mr. Simon’s technical experience includes the creation of two unique Artificial Intelligence systems along with software for successful neurological test equipment. Combining AI development with biomedical nerve signal testing gives him singular insight into the field. He is also the author of two books – Will Computers Revolt?: Preparing for the Future of Artificial Intelligence and Brain Simulator II: The Guide for Creating Artificial General Intelligence – and the developer of Brain Simulator II, an AGI research software platform that combines a neural network model with the ability to write code for any neuron cluster, making it easy to mix neural and symbolic AI code.

When you hear the term “Artificial Intelligence,” or AI, it really refers to narrow AI – a system that may have superhuman “mental” abilities but is limited to using software to study or accomplish a narrow area of pre-learned expertise, such as problem-solving or task reasoning.

That contrasts sharply with strong AI or artificial general intelligence (AGI), the ability of an artificial entity to learn and understand any intellectual task that a human can. But while we’re now surrounded by examples of AI (think Siri or Alexa or any number of video games), experts predict true AGI is still decades away.

What needs to be done to help the current version of AI evolve into true AGI? Rather than tackling the hardest problems first, we can find the essential foundation for genuine intelligence to emerge simply by considering the way a three-year-old learns by playing with blocks. The child immediately recognizes that physical objects like the blocks are generally permanent and exist in a physical reality. He or she learns that blocks don’t fall through each other because they’re solid, and that round blocks roll while square blocks don’t. The child also realizes that a stack of blocks must be built before it can fall down, and so understands the passage of time. Moreover, the child can learn any language that is heard and use it to describe the surroundings.

The three-year-old, of course, has a few advantages over AI. The child gets multisensory input and can manipulate objects. That means he or she knows that a block is more than just its appearance or the words used to describe it. The child has an internal mental model of his or her environment, and so knows that the blocks still exist even if they can’t be seen or touched. The child can use this mental model to imagine and plan. He or she also has the equivalent of a Universal Knowledge Store, which can create links relating all types of input. With this ability, everything the child learns can be placed in the context of everything else previously learned, creating a basis for understanding.
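
To make that idea concrete, the sketch below models a Universal-Knowledge-Store-style structure as a graph of labeled “things” joined by typed links, so that an image, a spoken word, and a physical property can all attach to the same concept. The class names, relation labels, and methods here are illustrative assumptions, not the actual Brain Simulator II API.

```python
# A minimal sketch of a Universal-Knowledge-Store-style graph.
# Class names, relation labels, and methods are illustrative only,
# not the actual Brain Simulator II API.

class Thing:
    """A labeled node that can link to other nodes."""
    def __init__(self, label):
        self.label = label
        self.links = []                      # list of (relation, Thing) pairs

    def add_link(self, relation, target):
        self.links.append((relation, target))


class KnowledgeStore:
    """Holds every Thing and relates them with typed links."""
    def __init__(self):
        self.things = {}

    def get_or_add(self, label):
        if label not in self.things:
            self.things[label] = Thing(label)
        return self.things[label]

    def relate(self, source_label, relation, target_label):
        source = self.get_or_add(source_label)
        target = self.get_or_add(target_label)
        source.add_link(relation, target)

    def related(self, label, relation):
        """Labels of things linked from `label` by `relation`."""
        thing = self.things.get(label)
        if thing is None:
            return []
        return [t.label for r, t in thing.links if r == relation]


if __name__ == "__main__":
    uks = KnowledgeStore()
    # Different kinds of input about the same concept attach to one node.
    uks.relate("block", "looks-like", "image of a red cube")
    uks.relate("block", "sounds-like", "the spoken word 'block'")
    uks.relate("block", "has-property", "solid")
    uks.relate("round block", "is-a", "block")
    uks.relate("round block", "has-property", "rolls")

    print(uks.related("block", "has-property"))   # ['solid']
    print(uks.related("round block", "is-a"))     # ['block']
```

Because every new link lands in the same store, each new piece of input is automatically placed in the context of everything already recorded, which is exactly the basis for understanding described above.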

The same holds true of AGI. Without the ability to understand and interact with the real world in which it exists, AI can never truly become AGI. Because the human brain is the only working model of general intelligence we have at present, it makes sense to start there. We know, for example, that intelligence and thinking arise from the spiking activity of neurons in the neocortex. And because relatively little DNA is devoted to specifying how the neocortex forms, the maximum complexity of AGI software must likewise be limited, with the expectation that general intelligence will be created from millions of instances of a small number of unique, but fairly simple, neural circuits and rules for connecting them.
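
One way to picture “millions of instances of a small number of simple circuits” is the toy network below, which wires together many copies of a generic leaky integrate-and-fire neuron – a standard textbook spiking model – using a single random-connection rule. The parameters, weights, and connection scheme are arbitrary illustrative choices, not the specific circuit the author’s platform uses.

```python
import random

# A generic leaky integrate-and-fire neuron: a standard textbook spiking
# model, used here only to illustrate "many instances of a simple circuit."
class SpikingNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak                 # fraction of charge kept each step
        self.charge = 0.0
        self.fired = False

    def step(self, input_current):
        self.charge = self.charge * self.leak + input_current
        self.fired = self.charge >= self.threshold
        if self.fired:
            self.charge = 0.0            # reset after a spike
        return self.fired


def run_network(n_neurons=100, steps=20, connect_prob=0.1, weight=0.3):
    """Many identical neurons connected by one simple rule:
    each neuron excites a random subset of the others."""
    neurons = [SpikingNeuron() for _ in range(n_neurons)]
    synapses = [[j for j in range(n_neurons)
                 if j != i and random.random() < connect_prob]
                for i in range(n_neurons)]
    background = 0.2                     # constant external drive

    for t in range(steps):
        # Collect input from neurons that spiked on the previous step.
        recurrent = [0.0] * n_neurons
        for i, neuron in enumerate(neurons):
            if neuron.fired:
                for j in synapses[i]:
                    recurrent[j] += weight
        spikes = sum(neurons[i].step(background + recurrent[i])
                     for i in range(n_neurons))
        print(f"step {t}: {spikes} neurons spiked")


if __name__ == "__main__":
    run_network()
```

The point of the sketch is that the individual unit stays trivially simple; whatever interesting behavior emerges comes from the number of copies and the wiring rule, not from any complexity inside a single neuron.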

We also know that while human intelligence has evolved, the brain’s structure has remained the same. This suggests that human intelligence develops within the context of human goals, emotions, and instincts – none of which would provide a strong basis for AGI. While human intelligence is largely about survival, AGI can be planned and is primarily about being intelligent. As such, it is unlikely to resemble human intelligence, and it will likely require robotics in order to learn about and deal with the complexity and variability of the real world.

A prototype AGI already possesses modules for vision, hearing, robotic control, learning, modeling, planning, imagining, and forethought, all of which enable it to perform a number of impressive activities, from building up a mental model of its simulated environment to moving objects and planning a series of actions to achieve a goal. It cannot yet handle multiple activities at once, however, or respond effectively to a complex series of data inputs.
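
The article does not spell out how these modules fit together, but the flavor of “build a mental model, then plan a series of actions toward a goal” can be sketched with a toy two-dimensional grid and a breadth-first planner. The grid, the actions, and the function names below are invented for illustration and are not the prototype’s actual design.

```python
from collections import deque

# Toy illustration of "build a mental model, then plan a series of actions."
# The grid, the actions, and the goal are invented; this is not the
# prototype's actual design.

MENTAL_MODEL = [        # the agent's internal 2-D model: '#' marks an obstacle
    "........",
    "..####..",
    "..#..#..",
    "..#..#..",
    "........",
]

ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}


def plan(model, start, goal):
    """Breadth-first search over the mental model; returns the list of
    actions that moves the agent from start to goal, or None."""
    rows, cols = len(model), len(model[0])
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for name, (dr, dc) in ACTIONS.items():
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and model[nr][nc] != "#" and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [name]))
    return None


if __name__ == "__main__":
    # Plan a route from the top-left corner to the bottom-right corner.
    print(plan(MENTAL_MODEL, start=(0, 0), goal=(4, 7)))
```

The essential point is that the planning happens entirely inside the internal model – the agent reasons about objects it cannot currently “see,” just as the three-year-old does with blocks that are out of sight.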

Ultimately, it is the prototype’s ability to take these steps and understand everything there is to learn within its two-dimensional, simulated environment that will open the door to three-dimensional simulation. By gradually learning about such basics as the passage of time and the simple physics of gravity, it will finally begin to approach the capabilities of a three-year-old, gaining the skills it needs to interact with the real world.

Given the progress already being made on multiple fronts, it is clear that while we may not yet have created true AGI, its realization is within our grasp.
