Qualcomm is working on technologies to make smartphones even smarter. At last week’s EmTech Conference in Cambridge, Massachusetts, the mobile chip maker revealed plans to build artificial intelligence into its hardware.

Researchers at Qualcomm have been developing “deep learning” technologies to incorporate into upcoming smartphones. Deep learning is a branch of computer science at the cutting edge of artificial intelligence research. Systems called “neural networks” are designed to emulate their namesakes in the human brain. And, like their biological counterparts, these systems can be trained to “learn” over time: the more they work, the better they get at doing their job.
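To make that idea concrete, here’s a minimal sketch in Python (using only NumPy, and entirely separate from anything Qualcomm has shown) of a tiny neural network learning the classic XOR problem. The more training passes it makes over the examples, the smaller its prediction error gets.

```python
import numpy as np

# Toy training data: XOR -- a problem a single neuron can't solve,
# but a small two-layer network can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge the weights to reduce the prediction error.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

    if step % 1000 == 0:
        print(f"step {step}: mean error {np.abs(error).mean():.3f}")

print("final predictions:", output.round(2).ravel())
```

Run it and you can watch the error shrink as the network “practices” on the same four examples, which is the same learn-by-doing principle, at a vastly smaller scale, that deep learning systems rely on.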

We’ve seen a number of applications of deep learning in the past year. Among these are language tools that learn to analyze and translate text, and the more translating these systems do, the better they get at it.

So it only seems logical that Qualcomm would look to incorporate these powerful AI tools into its mobile technology. In fact, it was a call from mobile device manufacturers to make their phones and tablets smarter at working with images that sparked the current Qualcomm projects.

An example of this type of tool is an app that Qualcomm has developed for smartphone cameras. These cameras typically have “scene modes,” designed to give the best exposure and settings for various types of scenes. The Qualcomm app uses visual cues in the viewfinder to recognize what type of scene it’s looking at, and chooses the optimal settings to get the best pictures.
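Qualcomm hasn’t described how its app is built, but as a rough illustration of the general idea, here’s a short Python sketch in which a classifier’s label for the preview frame is simply mapped to a table of camera settings. The SceneClassifier class, the “feature” it looks at, and the settings themselves are all invented for the example.

```python
class SceneClassifier:
    """Stand-in for a trained neural network; a real one would analyze pixels on-device."""
    def predict(self, frame):
        brightness = sum(frame) / len(frame)   # toy "feature": average brightness
        if brightness < 50:
            return "night", 0.9
        if brightness > 200:
            return "sunset", 0.8
        return "portrait", 0.7

# One possible mapping from recognized scene to camera settings.
SCENE_SETTINGS = {
    "night":    {"iso": 1600, "white_balance": "auto",    "hdr": False},
    "sunset":   {"iso": 100,  "white_balance": "warm",    "hdr": True},
    "portrait": {"iso": 200,  "white_balance": "neutral", "hdr": False},
}
DEFAULT = {"iso": "auto", "white_balance": "auto", "hdr": False}

def choose_settings(frame, model):
    """Classify the preview frame and return the settings for that scene."""
    label, confidence = model.predict(frame)
    if confidence < 0.6:
        return DEFAULT            # not sure what we're seeing; play it safe
    return SCENE_SETTINGS.get(label, DEFAULT)

# Simulated preview frame: a list of pixel brightness values.
print(choose_settings([30, 40, 20, 35], SceneClassifier()))   # -> night settings
```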

Taking this concept a step further, Charles Bergan, VP of Engineering at Qualcomm, envisions software for smartphones which will choose the best moment to take a picture. The program can learn to recognize a particular event, such as the moment in a soccer game when the ball takes flight, and decide to snap the picture when that occurs.
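Bergan didn’t go into implementation details, but one simple way to picture it is a detector scoring every preview frame while the camera keeps a short buffer, so it can grab the best-scoring recent frame once the event is spotted. The Python sketch below fakes the detector with random numbers purely to show that control flow.

```python
from collections import deque
import random

random.seed(42)

def moment_score(frame):
    """Stand-in for a trained model scoring how well this frame matches the
    event we care about (say, a soccer ball leaving the player's foot)."""
    return random.random()

BUFFER_SIZE = 8      # keep the last few preview frames around
TRIGGER = 0.9        # fire once a frame scores this high

buffer = deque(maxlen=BUFFER_SIZE)
for frame_id in range(120):                  # simulated preview stream
    score = moment_score(frame_id)
    buffer.append((score, frame_id))
    if score >= TRIGGER:
        # Capture the best frame seen recently, not necessarily the current one.
        best_score, best_frame = max(buffer)
        print(f"capturing frame {best_frame} (score {best_score:.2f})")
        break
```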

These applications were demonstrated at the EmTech Conference, which was sponsored by MIT Technology Review. To teach the scene-classifying tool to categorize images, its neural networks were shown thousands of images to analyze. We’ve already seen similar experiments in image classification, including Microsoft’s Project Adam, which studied a huge database of images and learned to classify breeds and even sub-breeds of dogs.

This isn’t the first time that Qualcomm has worked with hardware designed to function like the human brain and nervous system. The company has already experimented with chips that have been called “neuromorphic,” as their architecture is patterned after that of the human nervous system.

Earlier this year, Qualcomm demonstrated a robot called Pioneer, which is powered by nothing more than a smartphone processor and some specialized, highly intelligent software. Using very little in the way of system resources, the robot was trained to recognize and sort objects in a room based on its previous exposure to other objects with a similar appearance. And after being shown only once where to deliver those first objects, it could apply the same process to put away similar-looking objects it was seeing for the first time.
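Qualcomm hasn’t published how Pioneer’s software actually works, so treat the following Python sketch as a loose analogy only: a nearest-neighbor lookup in which a new object goes to whichever bin held the most similar object the robot has already been shown once. The feature vectors and bin names are made up for the example.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors (e.g. color, size, shape)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Objects the robot was shown a single time, each with the bin it was told to use.
known = [
    {"features": [0.9, 0.1, 0.3], "bin": "toys"},
    {"features": [0.2, 0.8, 0.7], "bin": "laundry"},
]

def choose_bin(new_obj):
    """Send a never-before-seen object to the bin of the most similar known object."""
    nearest = min(known, key=lambda k: distance(new_obj["features"], k["features"]))
    return nearest["bin"]

print(choose_bin({"features": [0.85, 0.2, 0.25]}))   # -> "toys"
```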

Admittedly, systems like Pioneer are not quite ready for prime time. But Bergan does foresee that in the near future, existing chipsets could be enhanced with accelerators to add these new deep learning functionalities.

As with many of the latest advances in artificial intelligence, I’m both awed by the technology and perhaps a bit concerned about humans building machines that continue to grow smarter, maybe one day even smarter than their makers. What do you think? I’d like to hear your thoughts in the comments section below.

Source: MIT Technology Review
Website: Qualcomm Home Page

About the author

Fred Scholl

I'm an unabashed enthusiast of all things Android, open-source, and technology in general. I'm also an avid music lover and musician, playing guitar, bass, keyboards, and a host of other stringed instruments.