Yes, there we go. The bio isn’t that long by
the way but if you’re interested in it, you can come and see me after the talk. So, thank you all for being here. As the title suggests, this
is about Radar but first of all, one slide about who IMEC is; who we
are, what we do. We’re actually an independent R&D centre based or
headquartered here in Leuven, close to Brussels in Belgium, and we do open
innovation and bilateral one-on-one collaborations. So, different ways of working, it’s always high-tech but the business model can be very different
in different programs. What I will talk about in this talk is mostly open innovation programs where we have multiple partners joining at IMEC
to do pre-competitive research. We have global partnerships in ICT,
healthcare and energy, ICT is of course very broad. We have a very big
activity in semiconductor scaling, where we’re working on the 5 and the 7 nanometre CMOS nodes but we also
have very big activities in really design-oriented system-oriented R&D.
We try to position ourselves between academia and industry, both in the maturity of the solutions that we deliver and in the time frame, the time to market. So, we work on kind of shorter
term things than academia, typically, but longer than industry. We’re
headquartered, as I said, in Leuven here in Belgium, with R&D activities
in the Netherlands, USA and Taiwan. The US, that’s a very recent office
that we opened where we will be mostly working on imaging activities,
also millimetre wave imaging. And then we have representative offices
in Japan, China, in India also, so we have a global footprint, I can
say. To give you an idea of the size: revenue of approximately 450 million euro and a headcount of more than 2,000 people. So, that’s really, in a nutshell, what IMEC is. Now, if you saw the title of the talk, you would
expect that it’s about Radar. Yes, that’s definitely true, it’s about
Radar but what is the last thing in the title is even more important –
it’s about true perception and that’s kind of what we try to bring across.
Radar has moved beyond or at least for automotive, it’s not a simple
ranging and detection device anymore, it’s really part of a sensor suite
that is built to do true perception. And to tell you what I mean by true
perception, what better image to use than one from Magritte here in
Brussels, the city of Magritte, if you have time by the way, I would highly recommend you to go and see the Magritte Museum, it’s quite unique. Magritte already knew that this is
not a pipe. You might know why it’s not a pipe, this is merely an image
of a pipe and if we apply that to a more automotive context, you might
say that this is not a pedestrian, it’s not a pedestrian, it’s an image
of a pedestrian and that’s what I mean by true perception. Well, I know it’s a toy problem but some very simple image processing algorithms would see
this as a pedestrian; obviously, it’s flat, it’s 2d, it’s not moving.
We all know that as humans, our environment is built for humans and
we do true perception and that’s what we want to bring to a car also.
Now, we’re doing Radar, my point is you will need Radar in that sensor suite. Radar is, of course, not the
Holy Grail in itself. Well, first of all, to contrast the true perception
with what is classically known as ‘machine vision’, I think machine
vision today is mostly imaging based so 2D arrays of pixels and true
perception for me is, first of all, it’s sensor fusion based, so it’s radar and lidar, radar and imagers, definitely, in automotive, there seems to
be a consensus about that in some scenarios. Definitely, also, lidar
has unique capabilities, there’s other sensory modalities. And it’s
more than just ranging and detection, it’s really classification,
interpretation and even prediction; because as humans that’s what we do.
As I will show later in the talk, we do a lot of prediction without even
knowing it because we’re used to the environment that we live in.
So, sensor fusion – why do you need sensor fusion? I think by now this
has become obvious but honestly speaking, up to one year ago, some
people claimed that you could do everything with a camera in a car. It’s not possible, it’s physics, it’s not
technology. In this kind of image, the sun is a very bright source; you
need extremely high dynamic range to deal with that and well, in the case
of a driver, it’s the windshield, in the case of a camera, it’s the
lens; if it’s dirty, it just doesn’t work, the signal does not pass. It
can be the weather also; with this kind of thing, the imager will just fail.
On the other hand, is the radar the Holy Grail? No, you have the symbolic
coke can on the road; we know that this can be a disproportionately large object for radar, which can cause
your cars to do stupid things if it has a stupid radar and there’s some
things which radar will be very hard-pressed to see in the first place, like traffic signs and lane markings,
where a camera, of course, is inherently better suited. So, I think
sensor fusion is the way to go and today, automotive seems to agree on that. One year ago, honestly speaking,
some people, at least publicly, were claiming that the camera could
do it all, we don’t think so. Then what is in it for radar, what are
the new things, because the title also says ‘evolving’? Radar – I think for
automotive, of course, 24 gigahertz is the biggest volume today. The next
big thing, I think, is 77-79; we are working at IMEC at 79. Radars in general, why do we believe they will be there? They are an extremely robust sensing
modality, that’s one thing and second, fully sealed and invisible.
And by the way, that’s the reason why we think that these developments are driven nowadays by automotive but will be applied in many, many other
fields in our daily lives. This is the image that we have of radar
really 360-degree coverage, whereas today, it’s mostly a narrow pie in
front of the car, it will be 360 and even integrated in the side of the
cars, this is important as you will see later. What we see today is the
radar evolution and this is really over the scale of, well, since the Second World War when radar was invented. It moved from a very big and bulky and fixed and power-hungry platform to what we are working on today,
so this is real hardware operational in the lab, systems that fit on a fingertip literally. And we are even pushing it further by bringing the
antennas on chip so that’s a real full sensor coming directly out of
the fab and these two things, I will give you a little bit more details
about in the coming slides. All this, actually, is driven in large part, though not only, by semiconductor scaling; that’s also traditionally where
IMEC is very active. The fact that you can integrate it all in such a small form factor and such a small power envelope is simply semiconductor
scaling; that’s not our merit, that’s the merit of the semiconductor guys, but we use it with smart designs. So, what we do here is
actually a single chip radar in CMOS 28 nanometre technology and by single chip, I really mean single chip. So, when we started this program,
that was somewhere back in 2012, we came from backgrounds in 60 gigahertz
communications, so really a consumer background, the first thing people
said is, “you will never do 77-79 gigahertz in a 28 nanometre CMOS
node, it’s not built for that, it’s a digital technology”. We said, “well,
challenge accepted”. The PA was kind of the biggest bottleneck; 10 dBm was the goal that we had to strive for, so we did this guy here, a transmitter,
and we squeezed out 11 dBm, and more importantly, this performance
held pretty well over temperature, which is of course critical for
automotive. That gave our partners enough confidence to move on. We did a
transceiver; by the way, this was published at ISSCC – that’s the IEEE flagship conference in solid-state
circuit design – so, that’s kind of the reference in chip design. The transceiver was at ISSCC in 2015 and now
we actually have a chip operational in the lab, and I’ll give you some more details about that, but it contains two transmit paths all the way
up to the PA, so there is no external component in the signal path, just
the antenna. Two receive chains, spill-over cancellation, 2 gigasample-per-second A-to-D converters – 2 gigasamples per second in a few milliwatts, that’s true, that’s what CMOS can
do nowadays; it is silicon, it is working – and then very high-speed
digital fabric for the ranging, correlation and accumulation. This
is a PMCW radar chip: it’s a phase-modulated continuous-wave radar, not an FMCW radar.
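[Editor’s note: to make that “ranging by correlation and accumulation” concrete, here is a minimal sketch of the PMCW principle, assuming made-up parameters (code length, sample rate, target distance) rather than the actual chip’s: transmit a pseudo-random binary phase code and find the target delay from the peak of the correlation with that same code.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the actual chip's): a binary phase code sampled at 2 GS/s.
code_len = 1024
fs = 2e9                   # sample rate [Hz]
c = 3e8                    # speed of light [m/s]

code = rng.choice([-1.0, 1.0], size=code_len)   # pseudo-random +/-1 phase code

# Simulate an echo from a target ~30 m away: round-trip delay = 2R/c, in samples.
target_range = 30.0
delay = int(round(2 * target_range / c * fs))
rx = np.zeros(code_len + delay)
rx[delay:delay + code_len] += 0.5 * code        # attenuated, delayed copy of the code
rx += 0.1 * rng.standard_normal(rx.size)        # receiver noise

# Ranging by correlation and accumulation: the correlation peak gives the delay.
corr = np.correlate(rx, code, mode="full")[code_len - 1:]
est_delay = int(np.argmax(np.abs(corr)))
print(f"estimated range: {est_delay * c / (2 * fs):.2f} m")   # ~30 m
```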
And honestly speaking, I didn’t know whether I had to go into this topic today, while many of you are
familiar with the topic, we’ve had quite some discussions about this.
Today, FMCW is the standard in automotive; we believe PMCW has some very promising aspects, notably in interference robustness and in doing
code-domain MIMO for very large arrays, so towards MIMO radar, for instance. I
didn’t include details in the slide because I feel at this point, that
the debate sometimes is very non-scientific.
FMCW – everything out there is FMCW nowadays – there’s huge vested interest in it and there are non-scientific arguments sometimes used against PMCW. We’re not going
to go into these, we are a scientific Institute, what we are doing is
we’re building the hardware, we’re putting it out there, we’re putting it in relevant scenarios and we try to
prove that it has merits in certain scenarios and that’s what we do. So,
hence, no slides on PMCW versus FMCW, you can come to me after the
talk and we can discuss more. Anyhow, this is the module that we built. The
front side has four transmitters, four receivers in a MIMO configuration, I’ll tell a little bit more on that in the next slide because I think it’s a very elegant method, also
quite young in the radar community. And on the back, you have two of those chips, so each chip has two transmitters, two receivers, so two plus
two is four. And this module is actually currently operational in our
lab; it’s not in an automotive scenario, it’s a lab context – a very
controlled environment, radar targets and all that – so, it’s
definitely not a product but I think this is, well, a very complex system that goes beyond the capability of typical academic groups, for
instance, that’s what I meant by IMEC being between industry and academia.
Just as I said in the previous slide, I think it’s worthwhile spending some time on the MIMO principle. Also, when we started in 2012, there was
a lot of scepticism about MIMO radar, the most often cited paper was
titled MIMO Radar: Promise or Snake oil or something and that was really a radar expert coming from the military, of course, and there was a lot
of confusion about MIMO versus phased array and so on. By now, it has
become standard, MIMO radar does have its merits and basically, it gives
you higher angular resolution for a given power consumption and hardware antenna count. The principle is fairly simple: for a phased array, you
typically have lambda spacing between the antennas and you can scan your
beam electronically to find the angle of arrival of your targets. In a
MIMO radar, you would space your antennas non-uniformly and not at lambda
over two; this is just one example, but there can be more configurations
optimized for a certain scenario. But assume, for instance, that the
receivers are spaced at lambda, so not lambda over two but twice as big, and the
transmitters non-uniformly; the colours indicate that we will
transmit orthogonal signals on each antenna; it can be orthogonal in the
time domain, so the first one, then the second one, the third one, the
fourth one; it can be orthogonal in the frequency domain, so one slice in one band and so on. Or, with a code-domain MIMO radar, it can be orthogonal
in the code domain, and that’s actually what we do. It’s very similar to
CDMA communications and that works, that has been proven by now. You
apply an orthogonal code on each transmitter. You send these at the same time in the same frequency band, because they are orthogonal, you can do so. And in the receiver, actually, you receive them on all the antennas. And you separate them and this is then the virtual array that you create,
you synthesize it digitally in the receiver. And you see these colours
coming back here, from the first transmitter, the second, the third
and the fourth; it’s a virtual array, it’s not there in hardware – this is the hardware – but it’s larger than the physical hardware
that you have. And that means, for the radar, that a larger aperture gives
higher angular resolution; you get more angular resolution than you could get with the same number of elements
in a lambda-over-two configuration. As simple as it sounds, this
paradigm generated a lot of confusion in the literature; by now it has
been accepted and this is definitely the way to go, we believe, also for automotive radar.
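[Editor’s note: a rough numerical illustration of the virtual array idea. The element positions below are invented example values, not IMEC’s layout; the point is only that the virtual aperture, formed by the pairwise sums of transmit and receive positions, is much wider than the physical one, and the beamwidth shrinks accordingly.]

```python
import numpy as np

# Hypothetical example geometry, in units of half-wavelengths (lambda/2):
# 4 receivers spaced one lambda apart, 4 transmitters spaced non-uniformly and wider.
rx = np.array([0, 2, 4, 6])        # receive element positions
tx = np.array([0, 5, 14, 25])      # transmit element positions

# In the far field, each (tx, rx) pair acts like one element of a virtual array
# located at the sum of the two positions: 16 virtual elements from 8 physical ones.
virtual = np.sort((tx[:, None] + rx[None, :]).ravel())
print(virtual)

# Angular resolution scales roughly as lambda / D, with D the aperture in wavelengths.
def beamwidth_deg(aperture_in_half_lambdas):
    d_in_lambdas = aperture_in_half_lambdas / 2.0
    return np.degrees(1.0 / d_in_lambdas)

print(beamwidth_deg(rx.max() - rx.min()))            # physical RX aperture: ~19 degrees
print(beamwidth_deg(virtual.max() - virtual.min()))  # virtual aperture: ~3.7 degrees
```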
As a side note also, this is where, as I said in
the second slide, we have open innovations/partnerships with many
top players in the industry. In this program, we did a press release together with Infineon, also here in Brussels at our Technology Forum,
that Infineon is collaborating with us in this program and that does give
us confidence that, well, we are kind of state of the art and we have
our bases covered, I would say. 79 is, for us, not the end, of course,
we always try to push the boundaries; we are also working on 140 gigahertz radar. 140 gigahertz radar, today,
is definitely not yet a lot on the agenda of automotive, many people
are still struggling with the transition from 24 to 79, but there are very good reasons to go up in
frequency still. Last year, when we asked the automotive community, most
people said, “well, give us time, we’re doing 79, we’ll work on that and maybe in few years we’ll come back
to you”. This year already, people start saying, “maybe we should, on
the one hand, investigate the basic technologies that we will need for
that, so clear the technology route forward let’s say and on the other
hand, also start to think about legislation and trying to obtain bands to do that also for automotive
applications”. Of course, we want to contribute to this technology pathfinding and this is what this program is
all about. But first, why would you go to 140 gigahertz in the first
place? And let me dwell a little bit on the basics, the very basics of
radar, any kind of radar. Radar resolution; resolution – so, smaller is better – has 3 dimensions basically. One is the speed resolution –
that’s the Doppler shifts that you get from a radar, it’s a very unique
capability of radar, an imager does not have it and you can do a lot of
things with it in radar, as you will see later. So, that improves with
the carrier – so, going from 79 to 140, you have twice the Doppler
shift. The angular resolution, as I said with the MIMO radar, the bigger
the antenna, the finer the angular resolution; but it is the electrical size of the aperture that counts, its size
in wavelengths. So, the same aperture, the same physical sensor size,
with a carrier at twice the frequency – so, 140 instead of 79 – does have twice the angular resolution, more or less, ballpark numbers.
And then the range; that improves with wider bandwidth – so not the carrier but the
bandwidth – but typically, in most wireless systems, the higher the carrier, the higher the bandwidth, because technically, it’s the proportional bandwidth that matters. So, one
gigahertz at 60 gigahertz is very feasible just as 100 megahertz at 6
gigahertz is kind of technically the same ballpark number. And if you
define all these three, you get a voxel – like a pixel in 2D, but with angle as a dimension too;
that angle can be elevation as well as azimuth, so that’s another dimension, but let’s
make an abstraction of that. If you put that voxel on the 24-gigahertz
band, the 77-gigahertz band, which has one gigahertz of bandwidth, 79
which has 4 gigahertz of bandwidth and then 140, then you see that this
voxel is ever getting smaller. So, that is the basic reason why you go
to 140 gigahertz, why you go up in frequency for radar: you get more resolution.
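[Editor’s note: a back-of-the-envelope sketch of how the three resolution dimensions scale with carrier and bandwidth. The bandwidths, aperture and observation time below are illustrative assumptions, not official band allocations or IMEC specifications.]

```python
import numpy as np

c = 3e8  # speed of light [m/s]

# Illustrative (carrier [Hz], bandwidth [Hz]) pairs; the bandwidths are rough
# assumptions in the spirit of the talk, not exact regulatory allocations.
bands = {"24 GHz": (24e9, 0.2e9),
         "77 GHz": (77e9, 1e9),
         "79 GHz": (79e9, 4e9),
         "140 GHz": (140e9, 8e9)}

aperture = 0.05     # physical antenna aperture [m], the same sensor size for every band
t_obs = 20e-3       # coherent observation time for Doppler processing [s]

for name, (fc, bw) in bands.items():
    lam = c / fc
    range_res = c / (2 * bw)                  # finer with more bandwidth
    angle_res = np.degrees(lam / aperture)    # finer when the aperture is electrically larger
    vel_res = lam / (2 * t_obs)               # finer with a higher carrier, for a fixed time
    print(f"{name:>7}: dR = {range_res * 100:5.1f} cm, "
          f"dTheta = {angle_res:4.1f} deg, dV = {vel_res:5.2f} m/s")
```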
Finer resolution; and, for now, there’s plenty of bandwidth available because there’s just nothing going on at these frequencies and, well,
consequently, it’s also interference free at least today. There’s a
technical problem though: designs at ever higher frequencies get ever more challenging. Good for us, because that’s the reason why we
exist and work on these topics, but it’s not so easy. Typically, today, most, if not all, automotive
radars, they use an antenna – you can call it in package on PCB,
whatever the antenna is not on the chip – but it gets ever harder to go
off chip to the antenna at higher frequency. So, bond wires, we already don’t use them anymore; we use
flip-chip or solder balls, but even that gets harder at 140 gigahertz
and that’s one reason why you want to bring your antenna, which is now
on the board, why you really want to investigate bringing it on chip. Of
course, it takes chip area, which is hugely more expensive than board
area, but the good thing is that the antenna also gets smaller, so you
use less chip area at least. So, comparing equivalent antennas at 10 gigahertz, at 60 gigahertz and at 140 gigahertz, you see
that the antenna gets ever smaller. So, antennas with, well, comparable
figures of merit in terms of gain and so on get smaller, and that
means that it becomes feasible to move them on chip.
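[Editor’s note: a rough feel for why the antenna shrinks with frequency, using free-space half-wave dimensions only and ignoring substrate effects and any particular antenna design.]

```python
# Free-space half-wavelength of a resonant antenna; a real on-chip antenna on a
# silicon substrate would be smaller still and needs a proper electromagnetic design.
c = 3e8  # speed of light [m/s]

for f_ghz in (10, 60, 140):
    half_wave_mm = c / (f_ghz * 1e9) / 2 * 1e3
    print(f"{f_ghz:>3} GHz: half-wavelength ~ {half_wave_mm:5.2f} mm")
# 10 GHz -> ~15 mm (board scale); 140 GHz -> ~1.1 mm (chip scale)
```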
At this point, it’s too early to say whether it’s economically viable in a given application and so on, but at least it’s technologically feasible. So, on the one hand, it gets harder to go off chip
and on the other hand, it gets easier to stay on chip, that’s fortunately going in the same direction, for these two reasons, we want to
investigate bringing the antenna on chip. When we started designing this,
we had targets, specifications in mind, based on literature and simulations. By now, we’ve done our first
tape-out, we’re starting with the measurements. By now, we already know
that this will be tough but I won’t tell you everything today, if you want to join the program, of course, you
will know more about that. This is just a small recap of what I just said, so if you plot, roughly speaking, over the years, we went up in frequency and if you go up in frequency,
we went from discrete antennas – so, a big dish and some electronics –
towards antenna on package on PCB on board – what we have today for
automotive – and we will venture, eventually, into the antenna-on-chip area. And this is what we are currently exploring: is it 100 gigahertz, is it 140 gigahertz, is it 200 gigahertz? That’s kind of, at this point,
unclear, it’s also a non-technical discussion but at least, this is the
trend and we are exploring it above 100 gigahertz. And just like we had
the public partnership with Infineon, there’s more partners but they’re
not all public, we have been working on this topic with Panasonic and
this is an announcement that we did at IMS/RFIC in San Francisco earlier
this year. So far for the radars. I also said that perception, for us, is also classification, interpretation and prediction, and if you
see these three terms, everybody, of course, thinks about machine learning
and indeed, we want to apply machine learning to radar. Today, machine
learning, machine vision, is very much driven by optical sensors, so
again, 2D arrays of pixels, we apply this knowledge to radar signals
and try to do fancy things with it. Why would you do that? Again, a recap, single sensor versus sensor fusion. A single sensor can work very
well in specific scenarios like highway cruise control – a radar works perfectly for that – but it doesn’t cover all scenarios, like dense traffic and so on. So, you go to sensor fusion, you can cover any scenario, but the problem is then that it gets exponentially complex to add more sensory
modalities, add more scenarios and cover it all. So, with the classical
programming, it gets very hard to do that and that’s why we want
to add the second axis – the intelligence axis – and say not only
classical programming, well, for want of a better term, but also machine
learning has a lot to bring to radar and to sensor fusion applications. Same thing, you have benefit by
using a single sensor, so we do use high-resolution radar and we apply
machine learning techniques to it and you can get fancy things as I
will show later but it’s not really scalable. So, this is kind of the holy grail quadrant but also there are
challenges, for instance, how can you guarantee determinism, that was
also mentioned in previous sessions. 99.9 percent is not good enough for
automotive. So, what kind of machine learning do you apply for a certain
application? This is the kind of topic that we are working on also. Important to say, again, classification, yes, but also as I said in the
beginning of the talk, prediction. As humans, we do this all the time, we
predict all the time. Here, we know what is going to happen, you don’t have to be a rocket scientist for that. The car needs to know that too and the car will know that in the future. Also, some other things like
if you walk in a busy environment like a train station and two people
walk across each other, without knowing it we are making sure that we
don’t bump into each other and we are predicting what the other will do
and it’s partly culturally dependent and so on but there are some basic mechanisms that we use as humans and our whole environment is built on that,
our traffic situation assumes that we have that prediction capability,
and fortunately, machine learning can also give us that to some degree.
This, of course, is a toy problem, this is the real deal. Start
predicting situations in this kind of complexity is a challenge, but again, we want to accept that challenge and work on it. If you hear ‘machine
learning’, it’s important to stress that we also use the more classical statistics-based techniques, as opposed to the neural-net-based ones,
which are nowadays all the hype. So, yes, there are very impressive
techniques here, which have made great progress in the past years because, well, the cost of processing power is dropping dramatically and that’s what enables
these things – both the supervised and unsupervised ones – but let’s not forget about the classical ones; in certain scenarios, these are really much more suitable, especially, for instance, if you talk about determinism.
To give you one example and maybe if someone can from the back, from
the AV guys, if someone can activate a movie here, if there is someone in the first place, yes. So, as I said, we also apply machine learning to
one single sensor, or just the radar, so if you move here with the mouse
here you can play it. And I’ve included here a kind of a toy problem
to show you that, indeed, a radar signal, a micro-Doppler signal, has a lot of information. Well, we don’t see it yet. If you move the mouse and
you click on the thing, normally, it works, there’s an embedded movie,
we just tried it before the talk. Ah, there you go. So, what you see
here is: this is time, this is the micro-Doppler signal – micro-Doppler
is just a high-resolution Doppler signal – and what you see here is an
average positive speed and a quite spiky behaviour and that’s actually a pedestrian walking. So, if you walk, your body has a certain average
speed and your hands move back and forth and that’s the kind of spiky behaviour that you see in the radar signal. If we now do the same for the
biker, can you play this one too? Then you will see that the average
speed is higher because it’s a biker and you get this nice sinusoidal movement, that’s indeed the pedalling, so if you project a pedalling
movement onto one axis, you get the sine wave and that’s this nice wave that you see here. Of course, these are toy problems that we can interpret
and we can understand like, oh yes, there is something in the radar
signal that you can use to characterize the target, to see this is a pedestrian, this is a biker. We want to apply that, of course, to infinitely
more complex scenarios which are not human intelligible but it just
goes to say that pattern recognition, the classical way, can also be applied
to radar and, especially if you fuse the data of multiple sensors, you will be able to do very impressive things in the future.
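[Editor’s note: a minimal synthetic sketch of those two signatures, with made-up speeds and limb-motion rates rather than data from the radar shown in the talk: the bulk motion sets the average Doppler, while arms or pedals add the oscillation that a short-time Fourier transform makes visible.]

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000.0                      # slow-time (Doppler) sample rate [Hz], illustrative
t = np.arange(0, 2.0, 1 / fs)    # two seconds of observation
lam = 3e8 / 79e9                 # wavelength at a 79 GHz carrier [m]

def micro_doppler(base_speed, limb_amp, limb_rate):
    """Complex baseband return: bulk Doppler plus an oscillating limb/pedal term."""
    speed = base_speed + limb_amp * np.sin(2 * np.pi * limb_rate * t)   # [m/s]
    phase = 2 * np.pi * np.cumsum(2 * speed / lam) / fs                 # f_Doppler = 2 v / lambda
    return np.exp(1j * phase)

pedestrian = micro_doppler(base_speed=1.4, limb_amp=1.0, limb_rate=2.0)  # swinging arms
cyclist    = micro_doppler(base_speed=5.0, limb_amp=0.8, limb_rate=1.5)  # pedalling

for name, sig in (("pedestrian", pedestrian), ("cyclist", cyclist)):
    f, _, sxx = spectrogram(sig, fs=fs, nperseg=256, return_onesided=False)
    power = sxx.mean(axis=1)
    mean_doppler = np.sum(f * power) / np.sum(power)
    print(f"{name}: mean Doppler ~ {mean_doppler:.0f} Hz "
          f"(~{mean_doppler * lam / 2:.1f} m/s average speed)")
```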
I have two other views on the same data; I don’t think we have to show them right now, they illustrate the same point. So, yes, you can do machine
learning-based classification, purely radar-based or also sensor-fusion-
based, and it’s very similar to what you do in imaging. So, well, a range
FFT – that’s if using an FMCW radar – then you extract some micro-Doppler
with short-time Fourier transforms, Wigner distributions, even wavelets. You build
a dictionary of features using PCA, the classical things, and then,
in a second step, you apply a classification, either using neural networks – indeed, they can be applied to radar – or the more classical
techniques which are very well known in the literature.
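[Editor’s note: a hedged sketch of that two-step pipeline (feature dictionary, then a classical classifier) using common scikit-learn building blocks. The spectrograms and labels are random placeholders to be replaced by real recordings; none of this reflects the actual IMEC processing chain.]

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: each sample is a flattened micro-Doppler spectrogram
# (say 64 Doppler bins x 32 time frames); labels 0 = pedestrian, 1 = cyclist.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64 * 32))   # stand-in for real spectrograms
y = rng.integers(0, 2, size=200)          # stand-in labels

# Step 1: build a compact feature dictionary with PCA.
# Step 2: classify with a classical, well-understood technique (here an SVM).
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))   # ~0.5 on random stand-in data
```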
And important to say, to make the choice between all these things, it’s very important that
you have the right domain knowledge. Because applying an algorithm and
doing fancy things, that’s one, but you need to have the right envelope
in terms of latency – for automotive, I mean, it goes without
saying – accuracy; determinism of the algorithm, maybe 99.9 percent is not
good enough; suitability for fusion, how can you combine metrics from different sensors and distribute it over the architecture, at the same time staying within
a certain envelope for logic, memory, power and the connectivity that
you use to, for instance, interact with the data centre to train your
neural network. And that brings me to the last part, which I did
not stress before: full machine perception – and that’s also
something that we do – is connected and cooperative. As humans, we
collectively learn and machines will also have to do that. A neural network, you only have to train once, and
you can apply to your whole fleet of cars. I’m going to skip this for the
sake of time and immediately jump here. What is the advantage? Well,
the basic, the fundamental advantage of having cooperative perception is,
first of all, in some scenarios, you can look around corners with the
data connection – that you cannot do with any sensor that I know of nowadays. You can offload some processing
– so, again, coming back to this memory, power, logic envelope – some
processing does not have to happen real-time. So, for instance, your
car sees something it cannot make anything of, but it can collect all the data, it can offload it and you
can do in the data centre, all the number-crunching you can do, you can
even have a human looking at it and tagging the scene like the Amazon
Mechanical Turk and then once you know what it is, once you have the knowledge, you can download it to your whole fleet and that’s exactly the
third point – you only have to see one thing once and then you can spread
it over your whole fleet, and that’s, of course, assuming that your car
is a connected car, and that’s why it is very critical to take into
account if you’re doing sensor fusion designs for cars, you have to take
that connectivity roadmap into account. DSRC versus 5G, that’s a
debate that we also closely follow in order to hit the right kind of time
to market for these techniques. This could be a generic architecture –
the one that we use – so, the basic functionalities, sensing, computation, connectivity, a central fusion
platform – that’s something that also has become more common at least today
in automotive applications – and then connectivity to the cloud for some non-real-time, non-latency-critical tasks.
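[Editor’s note: a deliberately simplified, hypothetical sketch of that split between the latency-critical on-board path and the non-real-time cloud path. The class and field names are invented for illustration and do not describe any actual IMEC or automotive implementation.]

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Detection:
    sensor: str               # "radar", "camera", "lidar", ...
    label: Optional[str]      # None if the on-board models cannot classify it
    raw: bytes = b""          # raw cutout to offload when the label is unknown

@dataclass
class FusionPlatform:
    model_version: int = 1
    offload_queue: List[Detection] = field(default_factory=list)

    def ingest(self, det: Detection) -> str:
        """Latency-critical on-board path: act on what we know, queue what we don't."""
        if det.label is None:
            self.offload_queue.append(det)    # non-real-time path over the connectivity link
            return "unknown object: fall back to cautious behaviour"
        return f"act on {det.label}"

    def apply_fleet_update(self, new_version: int) -> None:
        """The cloud pushes a retrained model to every connected car in the fleet."""
        self.model_version = max(self.model_version, new_version)

car = FusionPlatform()
print(car.ingest(Detection("radar", "pedestrian")))
print(car.ingest(Detection("camera", None, b"...")))   # goes to the cloud for tagging
car.apply_fleet_update(2)                              # seen once, shared fleet-wide
```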
And that brings us to the conclusion. What I tried to illustrate in this talk is that, for us, true
machine perception is, first of all, classification, interpretation
and very importantly, prediction. So, true perception, as we humans do
it because our environment was built for us and that’s what our machines
have to be able to mimic, it will be based on sensor fusion. I did not
talk about cameras but yes, there is significant progress still there
and they will get better and they will keep on being used. The radars,
we did stress them. There will be more radars all around the car, so
small that you can put them in the side of the car, they will be smaller
lower-cost in high volume – it’s driven by volume, CMOS is not the low-cost technology, CMOS is a low-cost technology in high volume – lower
power and higher resolution, for instance, by going up in frequency.
Lidars – the previous talk was about lidar – we do believe that they will be very
instrumental in certain services and scenarios. And it will be based on machine learning, importantly leveraging the appropriate domain knowledge to choose the right techniques for the right application. And then finally,
it will be connected and cooperative, the machines will collaborate
to learn together. Thank you all for your attention and also thank you
to AutoSens for having us here. [Applause] Moderator: Any questions
for him? Question One: I have one. I have a feeling you kind of downplayed,
a little bit, the deep learning, which has the very powerful
characteristic, it doesn’t need any supervision, you don’t need to instruct the system, it’s just about training
it against the image databases you have now. So, can you expand a little
bit why you don’t seem to think that deep learning is the way forward
– and I think there are many Nvidia talks? Speaker: Yeah, sure, sure, yeah. I didn’t want to downplay it, so
it does have a prominent place here on the slide, I think if we go back
there. So, this is kind of the deep learning and the one that you mentioned, the unsupervised techniques, they have made great progress, they
do things nowadays which people five years ago did not think were possible. So, yes, they will play a major role. The thing is I do think that other techniques also have their merits. For instance, unsupervised learning
is very inefficient from a power consumption point of view. In certain scenarios, it’s just better to use another technique. Determinism,
it’s not always deterministic what comes out of the network, if you
don’t know how the training happens, in some scenarios, you cannot afford
that. Or maybe you can put the cars on the road and you can have them
doing thousands and thousands or millions of miles and only after then
say, “ok, we believe that this technique is safe enough, that might be a way but in the meantime, you would use another technique”. And then
last but not least, the thing that you said, training them. The way
imaging is done today, a lot of the training is apparently done by humans who draw bounding boxes and say, “this is a pedestrian, this is…”, that
will be much harder for radar because it’s a 3D cube or even a hypercube. And
this toy problem with the pedestrian and the biker, if there’s clutter,
if there’s multiple targets and so on, we will not be able to do that.
So, obtaining this training data is also very different for radar
than for [indistinct]. Yeah, so I do think they have their merits, they
will be used, they will be applied to radar, they will be applied to
sensor fusion but they’re not the Holy Grail at this point, I think. Question 2: [indistinct] Speaker: Yeah. So, first of all, when you compare
a SiGe and a CMOS node, we’re assuming that you compare a CMOS 28 with a SiGe 130 or something, and they have,
from an RF point of view, similar performance. SiGe in itself, the
device is superior for RF. However, on a system level, what we see
nowadays, what we measure in terms of output power, noise figure, linearity
and so on, is very similar. From a pure device point of view, gallium
arsenide, silicon germanium, CMOS goes from superior to inferior. From
a cost point of view in high volume, it goes from horrible to very, very, very attractive. But, like, when we started this work in CMOS for 60 gigs, for 79 gigs, that was the big thing for a mobile
device – CMOS is the only way you can scale it to consumer volumes –
and it was just emerging and indeed, if you did a 10 dBm PA, people were like, “wow, this is a great result, worthwhile publishing at ISSCC”.
Nowadays, I think we moved beyond that building block level to system level and at system level, the electrical performance is very comparable, I would say, it’s different but comparable and it’s much more a discussion
about how do you integrate it into the architecture, what is the volume point that you want to hit, what is the cost, what is the model. So,
SiGe was often a model of integrated IDMs – so, design and technology
in the same house – nowadays, like we use TSMC technologies, the fabs
are in Taiwan, we never ever even see them and that’s much more driving
these developments than the purely technical building-block electrical
specs, I think. Moderator: Another question there, two more. Question 3: I have a question regarding the susceptibility of this system to weather conditions, if it is raining very hard
and so on, especially at these extremely high frequencies. Could you talk a
little about this? Speaker: I don’t have specific measurements for 140
versus 79 for heavy rain and so on. I do think they might be available
because backhauling systems, [wager] systems, for them it’s very critical,
the availability of the link depends on how much weather, how much
rain you have and so on. So, I do think this data is available, I don’t
have it at hand, so I cannot give you a precise answer. In general, though, they don’t suffer much; so, you lose some SNR but I think you really
need torrential rain to disturb the radar also at 140 gigahertz but I
don’t have the hard data, honestly speaking. Question 4: Hi, you
mentioned that the spatial resolution improves as you go to higher wavelengths – or frequencies, actually. So, could you comment on what is achievable with 79 versus 130/140 gigahertz
in terms of resolution? Speaker: Yes, that’s very interesting. So, if
you compare a lidar to a radar, the big strong point of a lidar, today, is that you have sub degree resolutions – so, very high resolution –
that’s something with radar you can only dream of nowadays. But that all
depends on the size of the aperture; if you were to make a 79-gigahertz
radar which has the aperture of the bumper of your car, you get sub
degree resolution and that’s all just a matter of how badly do you want
it as an OEM. Today, OEMs tell you – they don’t tell us, we don’t sell
directly to them but we know what’s going on – “you need to make a sensor and we will put it behind the bumper”, it’s like we make you a headlight
and we’ll put it behind the bumper; no one in their right mind would
do that but with the radar people do that and then it has to perform
from there, that’s very bad from a technical point of view. If, however,
you can go to an integration scheme where you integrate the radar into
the bumper, for instance, and make the aperture very wide, then there
is not such a fundamental limitation on the angular resolution of the radar and I do think you can go to sub degree resolutions at an acceptable
cost but they have to want it badly enough to finance the development
and then to integrate it into the form factor that the car needs. And
that’s in azimuth, not in elevation; so, of course, if the aperture is like this,
you get azimuth high resolution not necessarily in elevation but for a
car, you need some elevation but typically less than in azimuth. There are a lot of topologies; you can do bi-static radar and multi-static radar, so there are radar techniques to
also increase that, you can think of cooperative radar with multiple
sources, passive radar; all that is being done now. So, to give you one
idea, which I find very elegant: in some passive radar used for aerospace,
you use, for instance, the broadcast signal of the BBC to detect planes from
somewhere else and these things are possible with radar but that’s,
of course, at this point, in an automotive context, that’s kind of dreaming. But it just goes to say that fundamentally, from a physics point of view, there is no really hard lower limit. Moderator: Thank you very much. Speaker: Thank you [Applause]

2 thoughts on “Evolving RADAR: from ranging and detection to true perception”

  1. Virtual arrays, nice! I first saw this idea in 2013 in the beamforming context. I was skeptical, but it turned out it worked great for the audio applications I was working on at the time. It gave us almost twice the angular resolution for no extra hardware. We were able to do things nobody else could then.

    I don't know what everyone else is doing, but it seems to me that virtual arrays are a hidden gem in the multiple antennae world.

    Great talk!
