Chris Middleton reports on how the cost and sophistication of sensors are rapidly taking second place to new data analysis techniques and low power consumption.
Electric carmaker Tesla has acquired computer vision software provider DeepScale, according to a LinkedIn post from the startup’s CEO. The financial terms of the deal have not been disclosed.
The company provides full-stack deep learning capabilities that enable cohesive integration of AI software with different processors for self-driving vehicle applications. In other words, the system helps a car’s onboard processors to decipher complex information from a variety of sensors quickly and at low power – ideal for bringing down the cost of self-driving cars, as the quality of the sensors themselves becomes less important.
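To illustrate the general pattern at work here – a sketch only, not DeepScale’s proprietary stack – the snippet below fuses features from two hypothetical sensors in a tiny network, then quantises it to int8, the kind of step that cuts power and memory per inference. All dimensions and names are illustrative assumptions.

```python
# Minimal sketch of late sensor fusion for low-power inference.
# NOT DeepScale's implementation: just an illustration of the general
# pattern of small per-sensor encoders feeding a shared head, then
# quantised to int8 so inference is cheaper on embedded hardware.
import torch
import torch.nn as nn

class TinyFusionNet(nn.Module):
    def __init__(self, cam_dim=64, radar_dim=16, n_classes=4):
        super().__init__()
        self.cam_enc = nn.Sequential(nn.Linear(cam_dim, 32), nn.ReLU())
        self.radar_enc = nn.Sequential(nn.Linear(radar_dim, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16, n_classes)

    def forward(self, cam, radar):
        fused = torch.cat([self.cam_enc(cam), self.radar_enc(radar)], dim=-1)
        return self.head(fused)

model = TinyFusionNet().eval()
# Dynamic int8 quantisation: one route to lower power and memory use.
quantised = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
cam_feat = torch.randn(1, 64)    # stand-in for camera features
radar_feat = torch.randn(1, 16)  # stand-in for radar features
print(quantised(cam_feat, radar_feat))
```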
The move is part of Tesla’s strategic aim of moving past driver-assistance functions in its electric vehicles and towards full autonomous driving. In founder Elon Musk’s view, this unleashes the potential for cars to earn back their costs by working as driverless taxis when owners are at work or asleep – a logical idea, if a leap in terms of trust, practicality, security, and energy costs. In turn, this could mean fewer vehicles in use overall – or a spike in ownership in a gig-based economy.
In other sensor news, MIT researchers have been working on solar-powered RFID tags that use perovskite cells and integrated circuitry. The findings have been published in the journals Advanced Functional Materials and IEEE Sensors.
The strategic aim is to power a variety of low-cost sensors using ambient light, removing the need for environmentally damaging batteries in billions of sensors as the Internet of Things (IoT) grows.
“The perovskite materials we use have incredible potential as effective indoor-light harvesters,” explained Ian Mathews, a researcher in MIT’s Department of Mechanical Engineering. “Our next step is to integrate these same technologies using printed electronics methods, potentially enabling extremely low-cost manufacturing of wireless sensors.”
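A rough, back-of-envelope budget shows why ambient light harvesting is plausible for duty-cycled sensors. Every figure below is an illustrative assumption, not a number from the MIT papers:

```python
# Back-of-envelope energy budget for an indoor-light-harvesting sensor tag.
# All figures are illustrative assumptions, not values from the MIT work.
cell_area_cm2 = 4.0               # assumed small perovskite cell
indoor_irradiance_uW_cm2 = 100.0  # rough order for a few hundred lux indoors
efficiency = 0.20                 # assumed indoor conversion efficiency

harvest_uW = cell_area_cm2 * indoor_irradiance_uW_cm2 * efficiency

tx_energy_uJ = 50.0               # assumed cost of one reading plus radio burst
budget_per_hour_uJ = harvest_uW * 3600  # microwatts * seconds = microjoules

readings_per_hour = budget_per_hour_uJ / tx_energy_uJ
print(f"Harvested power: {harvest_uW:.0f} uW")
print(f"Readings supported per hour: {readings_per_hour:.0f}")
```

Under these assumptions the tag harvests roughly 80 microwatts, enough for thousands of duty-cycled readings per hour without a battery.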
The issues of cost and power consumption in networked sensors are also a focus of other recent research, according to a report on Phys.org.
Tracking and tracing air pollution in crowded cities typically demands research-grade sensors, which can cost hundreds of thousands of dollars. For many smart city schemes, these costs would be prohibitively high, so the challenge is to expand the capabilities of low-cost sensors and massively improve the analysis of data they capture.
Research conducted by an international team at the Indian Institute of Technology Delhi, led by Professor Jesse Kroll of MIT’s departments of Civil and Environmental Engineering and Chemical Engineering, found that, while low-cost sensors were not sensitive enough to detect some pollutants directly, such as particulate matter, a data analysis technique called non-negative matrix factorisation allowed their presence to be inferred.
Indeed, the researchers found that data collected by commodity-grade sensors captured enough information for them to distinguish both primary and secondary sources of air pollution using the technique.
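As a rough illustration of the technique – synthetic data, not the team’s actual pipeline – the sketch below mixes two hypothetical pollution sources across six sensor channels, then uses scikit-learn’s NMF to recover their signatures:

```python
# Minimal sketch of source separation with non-negative matrix factorisation.
# Synthetic data, not the researchers' pipeline: two hypothetical pollution
# "sources" are mixed across six sensor channels, then NMF recovers them.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_channels, n_sources = 200, 6, 2

# Each source has a fixed, non-negative signature across the channels.
signatures = rng.uniform(0.1, 1.0, size=(n_sources, n_channels))
# Source activity varies over time (e.g. traffic vs. cooking emissions).
activity = rng.exponential(1.0, size=(n_samples, n_sources))

# Observed readings: mixed sources plus small non-negative noise.
readings = activity @ signatures + rng.uniform(0, 0.05, size=(n_samples, n_channels))

model = NMF(n_components=n_sources, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(readings)  # per-sample source strengths
H = model.components_              # recovered per-source channel signatures

print("True signatures:\n", np.round(signatures, 2))
print("Recovered signatures (up to scaling and order):\n", np.round(H, 2))
```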
The findings not only promise to lower the cost of anti-pollution programmes, but also to expand the amount of data that can be gathered and analysed from across the globe.
“One of the strengths of low-cost sensors is that they can provide information about air quality and pollution sources in places that are under-studied – and many of these places, such as cities in the developing world, tend to have some of the worst pollution in the world,” said Kroll.
Communities worldwide face other environmental challenges, such as flooding and extreme weather – in some cases triggered by global warming from air pollution.
The National Science Foundation has published an outline of research on futurity.org describing how a University of Florida research team, led by Civil Engineering Professor Jennifer Bridge – a case of nominative determinism – has deployed an in-house-developed sensor system to measure wave impacts on bridges in the state, again using minimal energy and processing.
The subtext of all these research programmes is the same: while high-end sensor hardware will always have a role in specialist applications, the economics of using them fails at mass IoT scale. The environmental impact of powering billions of processing-intensive devices is also unsustainable.
This is why the Holy Grail of sensor research is developing low-power (or self-powered) mass-produced chips and other devices that can pass the data analysis task to software – code that itself uses minimal resources, and so may be able to run on the devices themselves.
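As a toy example of what minimal resources can mean in practice, the following streaming anomaly detector keeps constant state and performs a handful of multiplies per reading, the sort of routine that could plausibly run on a microcontroller. The parameters and data are illustrative, not drawn from any of the projects above:

```python
# Sketch of minimal-resource, on-device analysis: an exponentially
# weighted moving average (EWMA) with a simple deviation threshold.
# Constant memory, a few multiplies per reading. Parameters are
# illustrative, not taken from any of the cited projects.

def make_ewma_detector(alpha=0.1, threshold=3.0, warmup=3):
    state = {"mean": None, "var": 0.0, "n": 0}

    def update(x):
        state["n"] += 1
        if state["mean"] is None:
            state["mean"] = x          # first reading sets the baseline
            return False
        diff = x - state["mean"]
        state["mean"] += alpha * diff  # EWMA of the signal
        state["var"] = (1 - alpha) * (state["var"] + alpha * diff * diff)
        if state["n"] <= warmup:
            return False               # still learning the baseline
        std = max(state["var"] ** 0.5, 1e-9)
        return abs(diff) > threshold * std  # flag large deviations only

    return update

detect = make_ewma_detector()
stream = [20.1, 20.3, 19.9, 20.2, 35.0, 20.0]  # a spike at 35.0
print([detect(x) for x in stream])  # only the spike is flagged
```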
Edge and cloud processing are part of this mix too, though they introduce the next trade-off: the system’s latency for time-critical tasks.