
Radar, Lidar, and the Internet of Things

In 1886, German physicist Heinrich Hertz demonstrated that radio waves could be reflected by solid objects. A decade later, Alexander Popov, a physics professor at the Imperial Russian Navy school, was testing an instrument for communicating between two ships. During the test, a third vessel passed through the signal path, producing what Popov described as an “interference beat”. Popov wrote that this “beat” might be useful for detecting objects, but he left the idea there, in his observations.

Over the next twenty years, several breakthroughs were made with this new technology, albeit slowly and with little fanfare.

Before World War II, radar became the subject of wide interest among world powers and soon that interest led to what we might call the “radar race”, as researchers began work in great secrecy in the United Kingdom, France, Germany, Italy, Japan, the Netherlands, the Soviet Union, and the United States. It was this race that gave us the radar we know today.

But what is the radar we know today? For many, our knowledge begins and ends with the movie scenes depicting the technology: we know it for its bleeps, sweeps, creeps, and the fact that it can be “jammed”. It may seem antiquated – the green line sweeping across a screen, picking up small dots with each pass – and for a long time this was exactly how this incredibly powerful technology operated.

However, radar has advanced by leaps and bounds, once again largely under the noses of the public. In the last several years radar has proven that it still has much further to go, and it may in fact outline the future of some of the most interesting and innovative technologies to come.

For all the good the movies have done, let’s dig into what really makes radar tick, so we can begin to understand why so many are enamored with the possibilities it presents.

The Basics

A radar, or “Radio Detection and Ranging”, system begins with a transmitter. As the radar operates, the transmitter emits radio signals in fixed directions. Given that these waves reflect from objects, especially those of considerable conductivity, a few of these signals then reflect directly back toward their origin. Because of this, the radar is equipped with a receiver placed close to the transmitter. These two components facilitate the “call and response” relationship that acts as the foundation of a readable radar system.
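This “call and response” is what lets a radar measure distance: the echo’s round-trip delay, multiplied by the speed of light and halved, gives the range. A minimal sketch of that arithmetic in Python (the one-microsecond example is purely illustrative):

```python
# Estimate target range from the round-trip time of a radar echo.
# The signal travels out to the object and back, so the one-way
# distance is half the measured delay times the speed of light.

C = 299_792_458.0  # speed of light, m/s

def echo_range_m(round_trip_s: float) -> float:
    """Range to the reflecting object, in meters."""
    return C * round_trip_s / 2.0

# A 1-microsecond round trip corresponds to roughly 150 meters.
print(echo_range_m(1e-6))
```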

These signals are paired with “carrier signals”, which are modulated to give the radio signal greater range and accuracy. The carrier acts as a vessel for the information-bearing signal, which would otherwise be heavily limited in capability. Beyond range, this modulation allows smaller antennas to propagate the signal, and keeps it from mixing with other transmissions.

There are two fundamental ways active radar works today. The first is known as continuous-wave (CW) radar, which functions by transmitting a stable frequency and receiving its reflections. Because of the Doppler effect, these reflections return at a different frequency than the transmission and can be detected by filtering out the initial frequency. CW radar is energy-demanding, but it offers an accurate picture of the information carried in the reflected frequency.
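For a reflecting target, the Doppler shift described here is approximately twice the target’s radial speed times the carrier frequency, divided by the speed of light. A small sketch, with the 24 GHz carrier chosen purely as an illustrative automotive-radar figure:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(radial_speed_mps: float, carrier_hz: float) -> float:
    """Approximate two-way Doppler shift for a reflecting target:
    the wave is shifted once on the way out and once on reflection."""
    return 2.0 * radial_speed_mps * carrier_hz / C

# A target approaching at 30 m/s, seen by an assumed 24 GHz CW radar,
# shifts the echo by roughly 4.8 kHz.
print(doppler_shift_hz(30.0, 24e9))
```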

Another method, pulse radar, operates in a similar yet crucially different way. In pulse radar the transmitter is switched off while the receiver listens for the returning echo. By emitting short, powerful pulses and receiving the echo signals, pulse radar can image objects at much greater distances.
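One consequence of pulsed operation is that an echo must return before the next pulse leaves, which caps the farthest distance the radar can report unambiguously. A quick sketch of that limit (the 1 kHz pulse repetition frequency is an assumed example value):

```python
C = 299_792_458.0  # speed of light, m/s

def max_unambiguous_range_m(prf_hz: float) -> float:
    """Farthest range whose echo returns before the next pulse is sent.
    The time between pulses is 1/PRF, and the echo covers the distance twice."""
    return C / (2.0 * prf_hz)

# At an assumed 1 kHz pulse repetition frequency, the limit is about 150 km.
print(max_unambiguous_range_m(1000.0))
```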

Continuous-wave and pulse radar make up the fundamental approaches to radar, but they are certainly not the only contenders for modern radar applications. Another technology, which is now finding its way into high-end smartphones, is ultra-wideband (UWB). The key offering of ultra-wideband is the bandwidth over which it transmits information. By spreading the information across a sizeable bandwidth, under the right circumstances, ultra-wideband can share the spectrum with other users and other signals. The UWB approach is widely known for its low-power, low-cost qualities, and has quickly become the subject of attention for applications in target positioning and wireless communications.

Obviously, these systems are far more advanced than the information presented in these brief explanations, but a basic understanding is crucial when considering the possible and proposed applications for radar technology in today’s world. Before we explore those possibilities, however, there is one more pressing topic to cover – lidar.

Combining the Senses

Light detection and ranging, or lidar, takes on many of the same principles as radar, but with high-frequency laser light acting as the transmission. As with radar, the emission is sent out and a sensor measures the reflection. Lidar is commonly used to make high-resolution maps, as its rotating sensors produce 3D imagery, whereas radar operates essentially in 2D. Because of the nature of light interference, lidar currently works best for short-range applications, but it offers highly accurate readings.
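Each lidar return is essentially a range plus two angles, which software converts into a 3D point; a high-resolution map is just many such conversions accumulated into a point cloud. A minimal sketch of that conversion (the function name and angle conventions are my own, not from any particular lidar SDK):

```python
import math

def lidar_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert one lidar return (range + horizontal and vertical angles)
    into an (x, y, z) point in the sensor's own coordinate frame."""
    horizontal = range_m * math.cos(elevation_rad)  # projection onto the ground plane
    x = horizontal * math.cos(azimuth_rad)
    y = horizontal * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

# A return 10 m straight ahead with no elevation lands at (10, 0, 0).
print(lidar_point(10.0, 0.0, 0.0))
```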

Radar and lidar each offer key utility in an imaging endeavor. Radar can directly measure an object’s location, speed, and distance, while lidar can provide 3D mapping of the positions of nearby objects. When used in tandem with each other, or with other systems, radar and lidar provide a robust and detailed image of key physical information.

This technology is crucial in creations such as the self-driving automobile.

A self-driving car needs, more than anything, to perceive and interact with the world around it as accurately as (and, ideally, with fewer errors than) a human driver. Radar and lidar do not make this possible on their own; rather, they are often used together, or with additional cameras and odometers, to gather an image of what the vehicle perceives.

Furthermore, these sensors do not operate separately from one another, but are instead merged via the process of AI sensor fusion. Using an algorithm known as a Kalman filter, a computer can estimate the state of a dynamic system in the present, the past, or the future. With this, AI can effectively merge the information it gathers from radar, lidar, and other systems to both process the current state of the vehicle and predict the future positions of the objects around it. By combining the inputs received, AI sensor fusion allows for a representation that is greater than the sum of its parts.
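A Kalman filter alternates between predicting the next state from a motion model and correcting that prediction with a fresh measurement. The NumPy sketch below tracks one object’s position and velocity along a single axis; the constant-velocity motion model, noise values, and simulated measurements are all illustrative assumptions, not a real vehicle pipeline:

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter: state is [position, velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # motion model: position += velocity * dt
H = np.array([[1.0, 0.0]])              # the sensor only measures position
Q = np.eye(2) * 1e-3                    # process noise (assumed)
R = np.array([[0.25]])                  # measurement noise variance (assumed)

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial uncertainty

def kalman_step(x, P, z):
    # Predict where the object will be one step ahead...
    x = F @ x
    P = F @ P @ F.T + Q
    # ...then correct the prediction with the new measurement z.
    y = z - H @ x                        # innovation (measurement surprise)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain: trust in the measurement
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed in noisy position readings of an object moving at 1 m/s.
rng = np.random.default_rng(0)
for step in range(1, 101):
    true_pos = step * dt * 1.0
    z = np.array([[true_pos + rng.normal(0, 0.5)]])
    x, P = kalman_step(x, P, z)

print(x.ravel())  # estimated [position, velocity]
```

Even though the sensor never reports velocity directly, the filter infers it from how the position readings change over time, which is exactly the property that makes it useful for predicting where nearby objects will be next.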

AI is trained to provide this image via the implementation of an autoencoder. Because the variety of sensors feeding information to the system brings along a mess of signal “noise”, an autoencoder works to learn a representation of the data while ignoring that noise. In its most basic form, an autoencoder attempts to clear up this noise and send back a representation that is as close to the original input as possible. This is achieved by intentionally constraining the vector space in which arbitrary input data is represented and incentivizing a neural network to reconstruct the original data from this now-limited representation. The network is thus forced to prioritize storing only the most important features of the input data.

Training models can then be applied to the system, helping it dictate what actions to take when presented with a variety of data. By feeding a large amount of data to the system, it can eventually learn how to handle spontaneous events and can theoretically become the safest and most efficient driver on the road.
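The bottleneck idea described above can be illustrated with a toy denoising autoencoder: 8-dimensional inputs are squeezed through a 3-unit code and the network is trained to reconstruct the clean signal from a noisy copy. This is a deliberately simplified linear sketch in plain NumPy, with made-up data and sizes, not a production perception model:

```python
import numpy as np

# Toy denoising autoencoder: 8-dim inputs, 3-unit bottleneck.
rng = np.random.default_rng(42)
# Correlated "clean" data, so a 3-dim code can capture most of its structure.
clean = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 8)) * 0.1

W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder weights
lr = 0.05

def forward(x):
    code = x @ W_enc            # compress into the constrained bottleneck
    return code, code @ W_dec   # reconstruct from the compressed code

losses = []
for _ in range(200):
    noisy = clean + rng.normal(scale=0.05, size=clean.shape)  # corrupt input
    code, recon = forward(noisy)
    err = recon - clean                       # compare against the CLEAN target
    losses.append(float((err ** 2).mean()))
    # Plain gradient descent on the mean squared reconstruction error.
    grad_dec = code.T @ err / len(clean)
    grad_enc = noisy.T @ (err @ W_dec.T) / len(clean)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(losses[0], losses[-1])  # reconstruction error before vs. after training
```

Because the 3-unit code cannot hold everything, the network learns to keep the correlated structure of the data and discard the per-sample noise, which is the denoising behavior the text describes.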

This is all accomplished with technology that is largely still in its infancy. Even today we are seeing transformations in the sensors themselves, as engineers transition away from expensive, hard-to-maintain rotating arrays in favor of inexpensive, faster, solid-state sensors. In the last two years alone the advent of solid-state lidar has shattered the pricing barrier of the technology, and some predict the cost of lidar may drop by over $70,000 per unit – making lidar much more reasonable for other industries to employ.

A World of Possibility

While no conversation about radar and lidar would be complete today without mentioning self-driving cars, the most exciting aspect of these advancements is perhaps yet to come. With more economically available sensors, other industries can make use of the technology. This means that these sensors and machine learning systems will soon leave the confines of expensive auto development and could soon work their way into a myriad of industries.

For instance, Apple has recently started including lidar sensors on its new devices, offering better image and recording quality for users. Beyond this, the sensors can map out objects and immediately offer measurements at the tap of a button. In commerce, companies like IKEA are using this technology to let customers “place” 3D-scanned copies of products in their homes, essentially allowing them to preview how an item will look before they buy.

While these are just a few of the possibilities radar and lidar have started to offer, there is no question that the technology will soon become ingrained in everyday life. Radar and lidar are a far cry from the sci-fi-like depictions we’ve seen in media, and it’s important to realize this is only the tip of the iceberg.

To learn more, contact us at 281.912.DLZP or sales@dlzpgroup.com


