I’m not sure what it was. Might have been the Predator movies, might have been something else.
Heat vision. The ability to see heat was something I wanted for a while.
Back in the old days, however, any sort of a heat vision camera cost more than I’d made in a decade.
You might think “superpower!”, but that’s not what I had in mind.
I had an idea, an image in my mind. Something that is trivial to photoshop, but is absurdly hard to actually see. Things and scenes lit up by the light emitted from a human. The glow of our heat lighting up the scene.
What would it take to make a picture like that? What makes a camera? And how can you make a heat-seeing camera yourself?
Centuries ago it was known that if you stand inside a dark box on a sunny day and poke a hole in one of the walls, you’ll get an image of the outside on the opposite wall. In the days of photography this concept became known as the pinhole camera. Light travelling in straight lines through the tiny hole is what produces the image – no lens required.
It’s a mechanism that works regardless of the nature and wavelength of the radiation involved.
At the back of the box there was a glass plate in the earlier cameras, then film, then a digital sensor – something that can record the image. Of course, the pinhole was quickly replaced with more efficient optics, like lenses.
But for our goal the pinhole is important to remember, because the infrared light we are after does not pass through glass.
Which light?
There are many bands of invisible light out there. The visible spectrum is, as the name suggests, what we see.
Above it, at shorter wavelengths, lies the ultraviolet – light which is progressively more harmful, and also progressively dimmer at ground level because the atmosphere absorbs it. Nice to visit, not that interesting to linger in.
Below it, at longer wavelengths, lie the bands of infrared.
You might have seen the online guides on how to turn your digital camera into an “infrared camera”. Some even have the nerve to call it a “heat vision” camera.
That hack works because the light response of the silicon detectors used in most cameras includes the near infrared (NIR) band, just below the visible red. To approximate what the eye would see, cameras are normally fitted with a filter (a “hot mirror”) that blocks the NIR – remove it, and the camera sees into the NIR.
Silicon sensitivity:
Human eye sensitivity:
NIR is an interesting band – the grass and leaves are reflective in it, and you can get some surreal images of pitch black sky with brilliant-white trees.
But it’s not heat vision: an object has to be around 400 °C or hotter to glow in NIR.
Below the NIR there are:
SWIR – the short-wave infrared, used in optical fiber communications.
MWIR – the middle/medium-wave IR, where things start to get interesting – objects begin to glow at just over 100 °C.
And LWIR – the long/far-wave IR, where all the glowy human stuff happens.
Unfortunately, glass is opaque below NIR and can’t be used in heat vision cameras.
There are optical materials that pass and focus the lower bands, but they all cost money and are difficult to get.
This is where the pinhole camera comes into play – a pinhole focuses everything you shine through it, eliminating the need for an exotic lens.
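How big should the hole be? There is a classic rule of thumb (often attributed to Lord Rayleigh) that balances geometric blur against diffraction blur: d ≈ 1.9·√(f·λ). A quick sketch of what that gives – the 100 mm box depth is an assumed example, not the dimensions of my build:

```python
import math

def optimal_pinhole_diameter_mm(box_depth_mm, wavelength_um, k=1.9):
    """Rule-of-thumb optimal pinhole diameter: d = k * sqrt(f * lambda).
    Bigger holes blur geometrically, smaller ones blur from diffraction."""
    return k * math.sqrt(box_depth_mm * wavelength_um * 1e-3)

# Assumed 100 mm pinhole-to-detector distance, typical band wavelengths:
for band, wl_um in [("visible", 0.55), ("NIR", 0.9), ("MWIR", 3.4), ("LWIR", 10.0)]:
    print(f"{band:7s} ~ {optimal_pinhole_diameter_mm(100, wl_um):.1f} mm")
```

The longer the wavelength, the larger the optimal hole – conveniently, a needle-poked hole in foil is already in the right ballpark for the infrared bands.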
So, how do you make a digital camera?
We start with a light-proof box.
In the box there is a coin-sized hole. Over the hole there is a flap carrying the pinhole itself, poked with a needle through a piece of aluminum foil.
Inside the box is a coordinate table – an X-Y scanning rig to rasterize the image.
A regular digital camera would have a matrix – a huge array of detectors all working in parallel. That can’t be made at home.
What can be made at home is a single light detector, based on a photodiode.
The signal from the photodiode gets amplified by a huge factor with a transimpedance amplifier, then fed through an ADC to get the digital data for the microcontroller to store on a microSD card.
Gigaohm resistors are needed to get the right gain. Fortunately, they were made in the USSR and the old stock still exists.
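To get a feel for why the resistors have to be that large: a transimpedance amplifier’s gain is literally the value of its feedback resistor, V_out = I_photo × R_f. A back-of-envelope sketch with illustrative numbers (assumptions, not the actual values of my build):

```python
# Transimpedance amplifier: output voltage = photocurrent * feedback resistor.
# The numbers below are illustrative assumptions, not measurements from the build.
r_feedback_ohm = 1e9      # a 1 gigaohm feedback resistor
i_photo_amp = 2e-9        # ~2 nA of photocurrent trickling through the pinhole
v_out = i_photo_amp * r_feedback_ohm
print(f"{i_photo_amp * 1e9:.0f} nA -> {v_out:.1f} V")   # 2 nA -> 2.0 V, enough for an ADC
```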
This detector is mounted on the carriage of the X-Y rig, which scans it across the image – up and down, step left, up and down, step left – repeat a thousand times and you get a picture!…
…of noise.
The amount of sensitivity needed to pick up the faint light from the pinhole is massive. And we live in the age of AC electricity – an omnipresent hum at 50 Hz.
Fortunately, it’s easy to keep out with an age-old arrangement known as a Faraday cage. Electromagnetic fields can’t penetrate a closed conductive shell, so if you wrap your box in metal foil and use the foil as your ground reference, the hum vanishes.
And you start to see something.
More amplification, and new noise appears.
Now it’s the motors and internal electronics. The gains are enormous and everything that can be picked up gets picked up.
We need more foil.
Much more foil…
Actually, the foil above was an early failed experiment, and the foil below is all it takes – just put the actual detector into its own Faraday cage!
With that done, the picture clears up.
Not much noise left.
A megapixel’s worth of vision, wrought by your own hands.
What would the sunset look like?
Not great, I guess. It’s still a black and white camera.
Let’s improve on that. To get colors we need more sensors.
But first, let’s fast forward a bit. This project spans years, and when I started there were no 3D printers. Eventually I made one, and all the clumsiness of the old rig got replaced with modern, 3D printed, well-fitted parts.
Sensor shielding got a solidity improvement as well.
Most importantly, I moved the ADC into the sensor box, which allowed me to attach several sensor boxes to the same digital bus.
Add some filters…
And you can start seeing the shades of the world.
There are many types of light bulbs. Let’s look at a room with infrared (red) and visible (blue) filters.
See how the lights have distinct colors? The bright red ones are incandescent, spewing out infrared waste. The blue ones are CCFLs, barely producing anything but visible light.
Efficiency vision!
Spot the LEDs.
Remember how I said grass is reflective in the NIR? Let’s combine NIR, green and blue.
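Merging the scans into a color picture is just a matter of assigning each band image to a channel. A minimal sketch with numpy and Pillow, assuming each scan was already saved as a grayscale image of the same size (the file names are made up):

```python
import numpy as np
from PIL import Image

def false_color(red_band, green_band, blue_band, out_path):
    """Stack three single-band scans into the R, G and B channels of one image."""
    bands = [np.asarray(Image.open(p).convert("L"), dtype=np.float32)
             for p in (red_band, green_band, blue_band)]
    # Stretch each band to the full 0..255 range so dim bands stay visible.
    bands = [(b - b.min()) / max(b.max() - b.min(), 1e-6) * 255 for b in bands]
    Image.fromarray(np.stack(bands, axis=-1).astype(np.uint8)).save(out_path)

# NIR into red, green into green, blue into blue:
false_color("scan_nir.png", "scan_green.png", "scan_blue.png", "nir_false_color.png")
```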
But back to the true color.
Outdoors.
Indoors.
The bands are there because the sensors are separated by some distance, so the images don’t overlap completely.
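The obvious software fix is to nudge the channels back on top of each other before merging them. A rough sketch, assuming the offsets between sensor boxes are known (the numbers here are placeholders, found by eye):

```python
import numpy as np

def align_channels(rgb, shifts):
    """Shift each channel by a fixed (dy, dx) to undo the spacing between sensors.
    The wrapped-around strip at the edge can simply be cropped afterwards."""
    aligned = np.empty_like(rgb)
    for ch, (dy, dx) in enumerate(shifts):
        aligned[..., ch] = np.roll(rgb[..., ch], shift=(dy, dx), axis=(0, 1))
    return aligned

# Example: leave red alone, nudge green and blue back into place (made-up offsets).
# aligned = align_channels(rgb, [(0, 0), (0, -15), (0, -30)])
```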
And of course, no camera would be complete without a self-portrait.
It takes 5 minutes to complete a picture, so here I am reading about color-balancing algorithms on a phone and trying not to move.
It’s surprisingly hard to get the colors right when all you have are the raw red, green and blue channels. I’ll never again curse “whoever programmed the Auto White Balance on that stupid camera”.
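One of the simplest algorithms from those readings is “gray world”: assume the scene averages out to neutral gray and scale each channel until the averages match. A minimal sketch, assuming the raw channels sit in one numpy array:

```python
import numpy as np

def gray_world_balance(rgb):
    """Gray-world white balance: scale red and blue so every channel's mean
    matches the green channel's mean."""
    rgb = rgb.astype(np.float32)
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means[1] / np.maximum(means, 1e-6)        # green is the reference
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)
```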
But what about the unseen? The whole idea was to see what no one saw before.
That’s a whole other story.
The first part of it is easy — let’s look at the ultraviolet.
Thick, opaque air.
Dark landscapes.
A sun that looks dim.
You don’t really see the ozone layer, just the air itself glowing – same blue sky, only thicker. Would have been nice to see the fog of the ozone, but it’s hidden behind the air.
Now, for the magic.
To see in MWIR, we need an InAs photodiode – something that’s not really that easy to come by. They are made by the Japanese company Hamamatsu, and do not cost that much.
Or so I thought…
This was my first encounter with ITAR – the “how dare you want interesting stuff?” restrictions. Before then I never realized just how much the USA, uh…, loves the whole world.
Basically, the Hamamatsu guys wanted me to provide a ton of paperwork proving that I won’t be making nuclear bombs or guided missiles with their parts, because that’s what such parts are commonly used for. MWIR is where jet fighters glow brightly against a pitch-black sky.
Well, crap.
Fortunately, we in Russia also know how to make heat-seeking missiles, and after some searching I found a local supplier that was more than happy to provide me with a couple of 3.4 μm InAs photodiodes.
That is where the trip into the never-before-seen begins.
And it begins with figuring out that all of the above assumptions about pinholes and billion-to-one amplifiers don’t apply.
First, the InAs photodiode drifts. A lot. That’s what a blank picture looks like.
Then, with a pinhole and the maximum gain I can give it, this is what a soldering iron looks like:
Okay. Problems exist to be solved.
Let’s dump the pinhole and order a ZnSe lens from China.
Turns out that since the project’s inception the situation had changed — China was making plenty of CO2 lasers, which use ZnSe lenses. And those lenses focus the light I need perfectly well. Oh, and they are very cheap.
Some 3D printing later, I got a proper lens assembly with focusing capability.
In the dark of the night you can see a lot with it now (this one is with the regular photodiode).
Next, the drift.
It’s not that fast, and we can assume that it’s stable during one scan line.
So, let’s add a flap at the end of each scan line…
And modify the processing software to even out the image based on the reference band.
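In code the “evening out” is simple: for every scan line, average the samples taken while the detector stares at the closed flap, treat that as the line’s zero level, and subtract it. A sketch of the idea, assuming the last few columns of each row are the reference band (the width is an assumption):

```python
import numpy as np

def remove_line_drift(scan, ref_width=16):
    """Per-line drift correction: the last `ref_width` samples of every scan
    line look at the closed flap, so their mean is that line's baseline."""
    scan = scan.astype(np.float32)
    baseline = scan[:, -ref_width:].mean(axis=1, keepdims=True)
    corrected = scan - baseline
    return corrected[:, :-ref_width]     # drop the reference band from the output
```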
Let’s look at that soldering iron again.
The most amazing part is not that it glows, but that it glows brightly enough to illuminate the stand.
It’s not just the “temperature mapped to an image” of a regular heat vision camera, we see the actual long-wave light being emitted and reflected – a soldering iron turned into a lightbulb!
The lower limit for the glow is somewhere around 100 °C. Here are some hot resistors:
MWIR is neither heat, nor is it light. It’s both – if you look outside you’ll see the world illuminated by the MWIR radiation from our sun.
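A rough way to see why ~100 °C is the threshold is Planck’s law: spectral radiance at 3.4 μm climbs extremely steeply with temperature, so a merely-hot object outshines its room-temperature surroundings by orders of magnitude. A quick blackbody estimate (idealized – real emissivities vary):

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def spectral_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T), in W / (m^2 * sr * m)."""
    return (2 * H * C**2 / wavelength_m**5) / \
           (math.exp(H * C / (wavelength_m * K * temp_k)) - 1)

wl = 3.4e-6                                  # the InAs photodiode's band
room = spectral_radiance(wl, 293)            # ~20 C surroundings
for t_c in (100, 200, 400):
    ratio = spectral_radiance(wl, t_c + 273) / room
    print(f"{t_c:3d} C glows ~{ratio:,.0f}x brighter at 3.4 um than the room")
```

Already at 100 °C the contrast is roughly a couple of dozen times over the background, and it only gets steeper from there.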
These kinds of scenes were never seen before by anyone – outside of, perhaps, some specialty labs that make sensors, and the people who built the cameras for climate-watching satellites like Terra and Aqua.
What other sensors are there? I mentioned SWIR, the band that is used for the optic fiber. There, the parts are easy to come by and are not restricted.
Unfortunately, the SWIR photodiode is also sensitive to regular visible light.
Fortunately, silicon is opaque all the way up to the SWIR band, so a slice of it filters out everything we don’t want to see.
Unfortunately, the raw silicon wafers you can find on ebay are not really transparent. Looking at a lightbulb through one produces only a blur.
Fortunately, my father’s old solar panel fab is still in business and they have some scrap of wafer-grade silicon.
That worked great.
Let’s look around.
Eh, it’s kinda cute but not that special. Same deep black sky the NIR is famous for. The snow is almost black – since there is no sky light for it to reflect, I guess.
The big difference is that the vegetation is not as reflective, so you get the “blackness of space” sky over regular-ish landscapes.
It’s almost like being on the airless, derelict Earth – preserved under the void after whatever disaster befell it.
That is about all I got as far as sights not seen before go.
Unfortunately, no sensor that I can easily get or use can see deep enough into the LWIR to make humans glow. You need something like a liquid-nitrogen-cooled HgCdTe photodiode to do that, and those are not quite so easy to get.
So the story will continue.
You might wonder about a few bits of this build, a few words I said and a few unusual solutions. All of them hint at another use in mind. Another kind of radiation to see in…? Oh yes, there is one more.
But that would be a whole other story, for another time. Stay tuned!
Wow! Love the post. Reading it was almost like a documentary of a scientific breakthrough. Would love to hear about your further progress into other wavelengths!
Thank you, Artem, for documenting and sharing your project. This was a really cool read.
Wow! This is amazing!
I hope we will see a lot more from you. This project is very interesting.
It might also be interesting to see some “normal camera” comparison images, to really see what the IR bands are adding :)
You’re on Hacker News, number one post right now
https://news.ycombinator.com/item?id=11686049
I saw your post on hacker news. Thermal IR can be achieved cheaply without special sensors.
FLIR makes an affordable (US $250) camera that uses an array of MEMS microbolometers. You could try something similar, using a thermocouple on your scanner stage.
Yep, they do now. You can get the Lepton module for $175 or so.
Back when I began in 2009, however, nothing like that was available, and FLIR cameras started well into the four-digit $ price range.
Nicely done. Thanks for sharing!
You might also want to try pyroelectric detectors in order to see thermal radiation (although you would need to add a chopper since they only detect ac signals). We used to get ours from Infratec.eu when I was a researcher although we had some that had come from somewhere in Russia too. Otherwise for the long infrared, we used liquid nitrogen cooled Mercury cadmium telluride detectors but you may need to fill in those problematic forms for these kinds of detectors.
Cool project and good luck!
Molodets! (Well done!)
Have you tried Kirlian photography? Apparently you can see human energy fields! Worth a try – it costs nothing – and you may break some new records! ;)
Can I just say that you are my hero for writing this. I didn’t even need to read all of it. Just your stream of consciousness was enough. The collective consciousness of the internet owes you a beer.
I hate to breeze by the obvious – that being an amazing post documenting amazing work – but this line: “…Fortunately, we in Russia also know how to make heat-seeking missiles…” made me laugh out loud – and I thought, “Competition is good!” Also, I love the point you make about “efficiency vision” – there may be a product / company in there somewhere!
You can also try to use zone plates instead of lenses. While they work best with monochrome light, they should still focus something within a 10% wavelength range. There’ll be some spectral aberrations — crisp focus at the target wavelength and halos for nearby wavelengths.
The good thing is they can be made for any frequency. For far IR the feature sizes should be quite large and easy to make. Maybe it’ll be possible to fit ~6 rings on a plate. Maybe they can be combined with some pyroelectric sensors and box cooling for room-temperature wavelengths.
Alpha particles, baby.
Great post and great project !
I learned a lot.
Awesome blog post, best read in a long while. Your lessons learned helped a lot to better understand the internals and terms of digital sensor technology.
Bravo! Awesome stuff. The world outside of the visible light spectrum does get interesting. One question: Do you notice a glow in MWIR from electrical outlets? Do you have any idea what that is if so? (Not 400 C heat, at least not if your electrician is competent.)
Super cool stuff mate. Keep on the good work! You made my day!
You may want to look into compressed sensing.
Basically, it’s a way to physically transform the input of a sensor system into a basis that is more compressed. So, for example, if you decompose an image into a Fourier basis (à la JPEG), which is much sparser than the pixel basis, you need correspondingly fewer measurements to achieve an accurate image. It actually turns out that using a random basis, rather than a Fourier basis, is much better, but the principle is the same. Compressed sensing can in this case be implemented, for example, by taking the DMD (digital micromirror device) out of a DLP projector, bouncing the image’s light off of it, and then focusing the bounced-off light onto a single pixel. Measurement is accomplished by displaying patterns on the DMD and measuring the total “weight” of each pattern. Decompressing it is easy — just multiply each pattern by its weight and add them all up. Because only a single pixel is needed, this is a cheap way to make effective cameras for weird wavelengths (though the DMD still has to be able to reflect that wavelength — they are mostly made of silicon AFAIK).
More info at http://dsp.rice.edu/cscamera
Anyone interested in hacking FLIR cameras may have a look at this https://www.youtube.com/watch?v=NtqUE67BUDI and read the follow-up here http://www.eevblog.com/forum/testgear/flir-e4-thermal-imaging-camera-teardown/
I loved reading about your decades-long adventure into infrared! Sorry to hear that my country got in your way. Hope you keep posting!
Oh, that’s what 1 ruble coins are for!
Very nice. By the way, a pinhole camera or a “dark room” (camera obscura) does not use diffraction. It is simple ray-tracing. If the hole is small enough to get significant diffraction you won’t get an image. Or I should say you would need some good 2D image processing to get an image from the diffraction pattern. There are formulas for the optimal hole size for a given box size and wavelength and the holes are surprisingly large.