Industry Buzzwords Column #4: Neuromorphics

Photo credit: _DJ_ on Flickr

In an age of ‘smart’ technology, we’ve only just begun to see what computers are capable of, and expectations for the future are high: scientists hope to create machines that can learn and use something like intuition to predict what we want them to do.

When engineering smart technology, designers often look to biology for inspiration. Neuromorphics, one of the biggest buzzwords of the year, is no different: it aims to mimic the way our brain’s neurons respond to information and interact with one another.

To build computers that behave in this way, neuromorphics combines three facets: the mind (perception, planning and emotion), the brain (the software), and the body (performing a task). When these facets work together, the computer can perform an action and adjust itself according to the information it receives; in short, it can recognise and learn. This is a far cry from more traditional computers, which rely solely on programmed algorithms to make calculations or take action.
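
To make that contrast concrete, here is a toy sketch in ordinary Python (nothing like real neuromorphic hardware, and with made-up numbers): a traditional program applies a fixed rule forever, while a learning system adjusts its own parameters as feedback arrives.

def fixed_rule(x):
    # The traditional approach: behaviour is programmed once and never changes.
    return 1 if x > 0.5 else 0

class OnlineLearner:
    # A minimal perceptron-style learner: it nudges its own weight and bias
    # whenever it gets an answer wrong -- "adjusting itself" in miniature.
    def __init__(self):
        self.weight = 0.0
        self.bias = 0.0

    def predict(self, x):
        return 1 if self.weight * x + self.bias > 0 else 0

    def learn(self, x, correct_answer, rate=0.1):
        error = correct_answer - self.predict(x)
        self.weight += rate * error * x
        self.bias += rate * error

learner = OnlineLearner()
for x, label in [(0.9, 1), (0.1, 0), (0.8, 1), (0.2, 0)] * 25:
    learner.learn(x, label)

print(learner.predict(0.95))  # 1: learnt from examples, not programmed
print(learner.predict(0.05))  # 0

The point of the sketch is the shape of the loop, not the maths: the learner’s behaviour comes from the examples it has seen, not from a rule a programmer wrote down.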

A recent achievement in the field was Google’s cat test, in which a computer taught itself to recognise cats (though it took 10 million images and 16,000 processors). The company aims to use its findings to create a new kind of image search that can locate more specific photos. A small step from there would be a computer learning what someone looks like from a photo and recognising them in future images; a giant leap would see it act as ‘eyes for the blind’, recognising objects and obstacles and describing them to the user via audio.

Wilder predictions include medical systems that could monitor a patient’s condition and alert staff to potential problems; drones that could remember where they’d been and navigate their surroundings more effectively; and computers that could make accurate weather forecasts based on historical data. But what does all this mean for retail?

Well, these types of computers – ones that can manipulate, navigate and communicate – give some clues as to what can be achieved in the coming years.

You could be served by a robot shop assistant

Pioneer, Qualcomm’s robot, learnt to sort objects based on their similarities, even navigating across a room to put them in the right place (think stock replenishment). And a machine that can hold a conversation means a customer’s needs can be met more easily.
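
Qualcomm hasn’t published how Pioneer does this, but the underlying idea, grouping things by how alike they are, can be sketched in a few lines of Python. The item names and (size, roundness) ‘features’ below are invented purely for illustration: each item joins the first group it closely resembles, or starts a group of its own.

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def group_by_similarity(items, threshold=0.3):
    groups = []  # each group is a list of (name, features) pairs
    for name, features in items:
        for group in groups:
            if distance(features, group[0][1]) < threshold:
                group.append((name, features))
                break
        else:
            groups.append([(name, features)])
    return groups

stock = [
    ("tennis ball", (0.2, 0.9)),
    ("football",    (0.4, 0.9)),
    ("shoebox",     (0.6, 0.1)),
    ("crate",       (0.8, 0.1)),
]

for group in group_by_similarity(stock):
    print([name for name, _ in group])
# prints ['tennis ball', 'football'] then ['shoebox', 'crate']

A real robot would, of course, have to work out features like these for itself from its cameras; the grouping step is the easy half.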

You could buy a car that drives itself

Google’s driverless cars have long been a subject of interest in the media, and it’s thought they’d be able to navigate their surroundings, read road markings, and even predict the behaviour of other vehicles (e.g. one about to run a red light).

Your phone could tell you what to do

By constantly taking in information about your surroundings and learning your habits and routines, phones will be able to anticipate your needs – like letting you know when a sale’s about to go down at your favourite department store.

With the likes of Siri, Cortana and Google Now, it may seem like some of these visions are already a reality. But it’s important to remember that these personal assistant programmes rely heavily on connecting to larger, faster and more powerful computers (the cloud) to process requests, and that a typical supercomputer runs on 17,600,000W (17.6 megawatts), roughly 880,000 times the mere 20W the human brain needs.

With thanks to the below sources:

http://www.technologyreview.com/featuredstory/526506/neuromorphic-chips/
http://www.nytimes.com/2013/12/29/science/brainlike-computers-learning-from-experience.html?_r=0
http://www.washingtonpost.com/blogs/innovations/wp/2014/01/02/neuromorphics-the-first-big-tech-buzzword-of-2014/
