Project Soli: interact with technology… without touching it!
The time to say goodbye to keys and buttons is approaching, so get ready to welcome a technology of the future that could really change your life!
At Google I/O 2016, Google showed off some special initiatives coming from the laboratories of its Advanced Technology and Projects team. In previous years it presented Project Ara and Project Jacquard. The latest demonstration to arrive from Mountain View, however, is Project Soli.
Project Soli packs sophisticated technology into a device measuring barely two centimetres, which can spatially “perceive” its surroundings thanks to radar sensors. All you have to do is place your hand near the device and make a gesture that mimics the action you want performed: think of how you would naturally use your fingers to select, zoom, raise the volume or scroll down a list.
Project Soli has two 60 GHz RF antennas that capture up to 10,000 frames a second: it can track where your hand, or each individual finger, is in real time and with a sensitivity of a bare millimetre. But that’s not all. While the radar acquires an impressive quantity of data, Google’s machine-learning algorithms learn to interpret it and to distinguish every movement, even recognising individual people from the unique character of how they move. In short, by recognising movements that express an intention, Project Soli can convert them into commands to be sent to the devices.
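The idea of turning bursts of radar frames into commands can be sketched in a few lines. The following is an illustrative toy only: Soli’s real signal chain and APIs are not described in the article, so the frame representation, feature (which antenna sees the strongest reflection) and gesture labels here are all invented for clarity.

```python
# Hypothetical sketch: each radar "frame" is a list of per-antenna signal
# energies. A reflection migrating between antennas reads as a swipe; a
# brief energy spike on one antenna reads as a tap. Soli's actual
# processing is far more sophisticated (and not public in this article).

def dominant_antenna(frame):
    """Index of the antenna receiving the strongest reflection."""
    return max(range(len(frame)), key=lambda i: frame[i])

def classify_gesture(frames):
    """Map a short burst of radar frames to a coarse gesture label."""
    start = dominant_antenna(frames[0])
    end = dominant_antenna(frames[-1])
    if start != end:
        # Energy moved from one antenna to the other: hand swept across.
        return "swipe"
    peak = max(max(f) for f in frames)
    baseline = frames[0][start]
    # A short, strong spike on the same antenna: finger tapped in place.
    return "tap" if peak > 2 * baseline else "idle"

# A hand moving left to right: energy shifts from antenna 0 to antenna 1.
swipe_burst = [[9.0, 1.0], [6.0, 4.0], [2.0, 8.0]]
# A finger tap: brief spike on the same antenna.
tap_burst = [[1.0, 0.5], [5.0, 0.6], [1.2, 0.5]]

print(classify_gesture(swipe_burst))  # -> swipe
print(classify_gesture(tap_burst))    # -> tap
```

A real system would replace the hand-written rules with a trained model over range-Doppler features, which is what lets it generalise to subtle per-user differences in movement.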
Of course, we are already thinking of the impact this technology could have on our everyday life, especially if applied to wearable devices. Soli has been miniaturised so much that it can even fit inside a smartwatch. Such a gadget could thus make use of more space on the display and do without the costly and hardly intuitive projectors that beam commands onto the back of the hand.
In the future, Soli could be used in computers, large screens, television sets, smartphones and all those devices equipped with a display that need input from the user. And, why not, also in cars. According to Continental, the car of the future will be self-driving, or controlled by gestures. In fact, the company has produced an innovative gesture-recognition technology with which you can send certain commands to the car without taking your hands off the wheel, whether to scroll through the various menus or to answer or refuse an incoming phone call. The manufacturer’s idea is to get rid of every tactile interaction between the driver and the system, since such interaction reduces control over the vehicle during emergency manoeuvres.
It seems that gesture recognition can bridge the language of computers and that of the human body, creating a more immediate “link” between man and machine than text and graphical interfaces, which now look primitive by comparison. In our future, all this could make traditional input devices like the mouse, the keyboard and the touchscreen almost superfluous.