ElectroOptical Innovations (https://electrooptical.net/News/)

Silicon Photomultiplier Module Design
Simon Hobbs (https://electrooptical.net/News/author/simon/), 2021-01-25
https://electrooptical.net/News/silicon-photomultiplier-module-design/

<p>Internal Developments<br/><br/>In the last year or two we've been doing a lot of work aimed at replacing photomultiplier tubes (PMTs) in instruments, using <em>avalanche photodiodes</em> (APDs) and <em>silicon photomultipliers</em> (SiPMs). SiPMs are arrays of single-photon detectors, so they're also known as <em>multi-pixel photon counters</em> (MPPCs). Our main application areas include biomedical instruments such as flow cytometers and microplate readers, which have to measure low light levels very precisely but don't need the ultralow dark current of PMTs. (Follow-on articles will talk about our SiPM work in airborne lidar and SEM cathodoluminescence, as well as on improving the performance of actual PMTs.)<br/><br/>PMTs have been around since the 1930s, and remain the undisputed champs for the very lowest light levels. We love PMTs, but we have to admit that they're delicate and not that easy to use: they tend to be bulky, they need high voltage, and they need regular replacement. Most of all, PMTs are very expensive.</p>
<p><br/>We've been working with several customers on developing products using Hamamatsu, Broadcom, and On Semi (formerly SensL) SiPMs. The three makers' devices have different strengths, but all are excellent, with far better linearity in analog mode than we initially expected. (There's a fair amount of doom-and-gloom about that in the specialized technical literature.)<br/><br/>Our first product design used the Hamamatsu S13362, and can go from counting single photons to working in analog mode in dim room light, with just the twist of a knob. Subsequently we've had the opportunity to build a couple of devices for time-of-flight lidar using On Semi's MicroFC series, starting from our existing IP. Recently we've been consulting on microplate-reader and flow cytometry applications. What all these applications have in common is that they're moving away from PMT-based designs to the newer solid-state option.</p>
<p><br/>These applications are challenging enough without having to develop the photodetection hardware. With so much customer interest, we've been focusing on developing a series of SiPM modules that act as drop-in replacements for traditional PMT modules, including all their nice features such as wide-range voltage-controlled gain, ±5 V input, and selectable bandwidths from DC–200 kHz to DC–300 MHz. Our existing designs are available on a flexible licensing model that generates considerable savings compared with either purchased PMT modules or internally-funded development, and gives you complete control over your supply chain.<br/><br/>Because these technologies are new, we can provide customized proof-of-concept (POC) demos showing how they work in your exact application. We've delivered prototypes and POCs in as little as one week at low cost, so you can make a real-world engineering evaluation without sacrificing a lot of budget or schedule.<br/><br/>For more information on our SiPM/MPPC designs, or help with your low-light measurements, send us an <a href="mailto:pcdhobbs@electrooptical.net">email</a> or give us a call at <a href="tel:914-236-3005">+1 914 236 3005</a>; we're interested in solving your detection and system problems.</p>

Signal to Noise Ratio and You, Part 2
Philip Hobbs (https://electrooptical.net/News/author/pcdh/), 2021-01-24
https://electrooptical.net/News/signal-to-noise-ratio-and-you-part-2/

<p>In <a href="https://electrooptical.net/News/digital-lock-in-principles/">Part 1</a>, we discussed ways to get better measurements by improving the <em>signal to noise ratio</em> (SNR), and saw that although it was often a win to measure more slowly and use lowpass filters, going too far actually makes things worse, because of the way noise concentrates at low frequency. Here we introduce a more sophisticated approach that generally works better: the <em>lock-in amplifier.</em></p>
<p>We were considering a typical <em>baseband</em> signal, one that goes from near DC to some much higher frequency. Audio is a typical example, with a bandwidth usually quoted as 20 Hz to 20 kHz. To escape the low frequency noise, we need to move our signal up in frequency, out of baseband. In lock-in detection we make the signal periodic in time at some <em>carrier</em> frequency <em>f<sub>c</sub></em> chosen to be several times higher than the required bandwidth. This is generally pretty easy to do, as we'll see, and doing so ensures that none of the signal we care about remains near DC. Our noise rejection filter now needs to be a narrow bandpass centered at <em>f<sub>c</sub></em>, so as to reject both low- and high-frequency noise. We'll also need some means of measuring the amplitude and phase of the AC signal. That's more complicated, of course, but with this setup we can narrow the bandwidth as much as we like and still get the full SNR improvement. A lock-in amplifier is a device for making such narrow-band AC measurements conveniently. It's basically a radio that measures the phase and amplitude of its input, so that we recover a lowpass-filtered version of the baseband modulation signal that we care about, with no 1/<em>f</em> noise pollution to worry about. At this point we need to geek out a little bit and talk about <em>modulation</em>, which is what we mean by moving the signal away from baseband.</p>
<p>An AC signal that passes through a narrowish filter can be looked at as a sine wave with some amplitude and phase: <em>g(t) = A</em> cos(2π <em>f t</em> + <em>φ</em>), where the signal information is contained in slowish variations of <em>A</em> and <em>φ</em>, the amplitude and phase (the modulation). This is familiar from broadcast radio: you can send music and speech program material over the air by encoding it as amplitude modulation (AM) or frequency modulation (FM). AM changes the heights of the peaks of the sinusoidal carrier wave in response to the audio signal (<em>A</em> varies), while FM changes the position of the peaks in time (<em>φ</em> varies). FM maps the baseband signal <em>s(t)</em> onto the instantaneous frequency, so d<em>φ</em>/d<em>t</em> is proportional to <em>s(t)</em>. In <em>phase modulation</em> (PM), which is less common in radios but more useful in measurements, the signal maps directly: <em>φ</em> is proportional to <em>s(t)</em>. The two are collectively known as <em>angle modulation</em>.</p>
<p>All types of modulation widen the carrier spectrum, forming <em>sidebands</em> above and below <em>f<sub>c</sub></em> that carry the signal information. It's generally preferable to talk about AM and PM, especially in discussions of noise, because in PM a flat baseband spectrum produces flat sidebands, whereas in FM it doesn't. That makes PM much easier to think about.</p>
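To make the sideband picture concrete, here's a quick numerical sketch (all the frequencies and the modulation depth are invented for the example): amplitude-modulate a carrier with a single tone and look at its spectrum.

```python
import numpy as np

# Illustrative numbers only: 1 kHz carrier, 50 Hz tone, 1 s of data.
fs, fc, fm = 100_000, 1_000, 50
t = np.arange(fs) / fs

# 50% amplitude modulation: A(t) = 1 + 0.5 cos(2*pi*fm*t)
g = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

spec = np.abs(np.fft.rfft(g)) / len(t)   # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)  # 1 Hz per bin here

# All the energy sits at the carrier and in two sidebands at fc +/- fm:
for f in (fc - fm, fc, fc + fm):
    print(freqs[f], spec[f])             # sideband 0.125, carrier 0.5, sideband 0.125
```

The sideband amplitudes follow directly from the product-to-sum identity cos <em>a</em> cos <em>b</em> = ( cos(<em>a-b</em>) + cos(<em>a+b</em>) ) / 2: each sideband gets half of the modulation product, which is itself half of the 0.5 modulation depth.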
<p>A carrier with both AM and PM can be written as <em>g</em>(<em>t</em>) = <em>A</em>(<em>t</em>) cos( 2<em>π</em> <em>f<sub>c</sub> t</em> + <em>φ</em>(<em>t</em>) ), where <em>A</em> and <em>φ</em> are slowly varying compared with <em>f<sub>c</sub></em>. From trigonometry, we know that</p>
<p>cos(<em>a+b</em>) = cos <em>a</em> cos <em>b</em> - sin <em>a</em> sin <em>b</em>, or in this case, cos( 2<em>π</em> <em>f<sub>c</sub> t</em> + <em>φ</em> ) = cos( 2<em>π</em> <em>f<sub>c</sub> t</em> ) cos <em>φ</em> - sin( 2π <em>f<sub>c</sub> t</em> ) sin <em>φ</em> (PM)</p>
<p>Thus by measuring the amplitudes of the sine and cosine components of the signal, we can recover its phase. Rearranging the same trigonometric identity shows us how to do this:</p>
<p>cos<em> a</em> cos <em>b</em> = ( cos(<em>a-b</em>) + cos(<em>a+b</em>) ) / 2 and</p>
<p>sin <em>a</em> sin <em>b</em> = ( cos(<em>a-b</em>) - cos(<em>a+b</em>) ) / 2.</p>
<p>Thus if we multiply our signal by <em>local oscillator</em> (LO) signals cos(2π<em> f<sub>c</sub> t </em>) and -sin(2π<em> f<sub>c</sub> t </em>), we get</p>
<p><em>I = A</em> cos <em>φ</em> cos(2<em>π <em>f<sub>c</sub></em> t</em>) cos(2<em>π <em>f<sub>c</sub></em> t </em>) = (<em>A</em>/2) cos <em>φ</em> [1 + cos(4<em>π <em>f<sub>c</sub></em> t </em>)], which is <em>I</em> = (<em>A</em>/2) cos <em>φ</em> + (a signal near 2<em>f<sub>c</sub></em> ), and</p>
<p><em>Q = A</em> sin <em>φ</em> sin(2<em>πf<sub>c</sub> t </em>) sin(2π<em>f<sub>c</sub> t </em>) = (<em>A</em>/2) sin <em>φ</em> [1 - cos(4<em>πf<sub>c</sub> t </em>)], which is <em>Q</em> = (<em>A</em>/2) sin <em>φ</em> + (another signal near 2<em>f<sub>c </sub></em>). (The minus sign on the sine LO cancels the minus sign in the expansion of cos(<em>a+b</em>), so <em>Q</em> comes out positive.) The factors of 1/2 are a fixed, known gain, so from here on we absorb them into <em>A</em> and write <em>I = A</em> cos <em>φ</em> and <em>Q = A</em> sin <em>φ</em>.</p>
<p>Lowpass filtering gets rid of the 2<em>f<sub>c</sub></em> components of <em>I</em> and <em>Q</em> and rejects noise exactly as our narrow bandpass filter would, with the same tradeoff of bandwidth <em>vs.</em> measurement speed but without the excess low-frequency noise. The baseband signals <em>I</em> and <em>Q</em> are the so-called <em>in-phase</em> and <em>quadrature-phase</em> signals. (You can think of "quadrature" as referring to the signal shifted a quarter cycle, though it actually comes from an old term for integration: sin <em>x</em> is the integral of cos <em>x</em>.) The LO is the same signal we'll use to modulate the measurement, using <em>e.g.</em> an optical chopper or something more intelligent, so generating it poses no problem.</p>
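To see the whole trick end to end, here's a minimal numerical sketch (carrier frequency, amplitude, and phase all invented for the example). The lowpass filter is just an average over a whole number of carrier cycles, which wipes out the 2fc products:

```python
import numpy as np

# Invented example values: 1 kHz carrier sampled at 100 kHz for 1 s.
fs, fc = 100_000, 1_000
t = np.arange(fs) / fs
A, phi = 0.5, 0.3                 # the amplitude and phase we hope to recover

sig = A * np.cos(2 * np.pi * fc * t + phi)

# Multiply by the two LO phases (sine negated so Q comes out as +A sin phi)...
i_raw = sig * np.cos(2 * np.pi * fc * t)
q_raw = sig * -np.sin(2 * np.pi * fc * t)

# ...then lowpass: averaging over an integer number of carrier cycles removes
# the 2*fc products exactly.  The factor of 2 undoes the 1/2 from the
# product-to-sum identities.
I = 2 * i_raw.mean()
Q = 2 * q_raw.mean()
print(I, Q)                       # ~ (0.4777, 0.1478) = (A cos phi, A sin phi)
```

A real lock-in replaces the plain average with an adjustable lowpass filter, but the arithmetic is the same.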
<p>Thus the procedure of multiplying by the sine and cosine phases of the carrier converts the modulated carrier into a pair of baseband signals containing both the amplitude and phase information. Because of the lowpass filtering, the exact waveform of the modulated wave (sine, square, or something else) doesn't matter much--only sinusoidal components sufficiently close to <em>f<sub>c</sub></em> contribute. This property of sines and cosines is called <em>orthogonality</em>. Very often only one of the two is of interest, usually <em>I</em>, but one can also recover <em>A</em> and <em>φ</em> easily:</p>
<p><em>A</em>= √( <em>I</em><sup> 2</sup> + <em>Q</em><sup> 2</sup> ) and <em>φ</em> = tan<sup>-1</sup>(<em>Q / I </em>).</p>
<p>(One has to worry about a few other things when computing <em>φ</em>, such as which quadrant it's in, whether you're dividing by zero, and whether it needs unwrapping to avoid ambiguities of multiples of 2<em>π</em>.) The multiplications also of course produce the cross terms, proportional to</p>
<p>cos(2<em>πf<sub>c</sub> t </em>) sin(2<em>πf<sub>c</sub> t </em>) = 1/2 sin(4<em>π f<sub>c</sub> t </em>),</p>
<p>but these have no baseband component and so get filtered out as well, showing that the sine and cosine components are orthogonal even though their frequencies are the same. </p>
<p>The sine and cosine LO signals can be derived from a reference frequency that you supply, or generated internally. Generally this reference is the same source used to generate the AC modulation of the measured signal, but it'll still work even if the two are different (the frequency error will show up as a ramp in <em>φ</em>(<em>t</em>), of course).</p>
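The frequency-error ramp is easy to demonstrate numerically. In this sketch (all values invented for the example) the LO is deliberately mistuned by 2 Hz, and the recovered phase ramps at exactly that rate:

```python
import numpy as np

fs, fc, df = 100_000, 1_000, 2.0   # LO mistuned by df = 2 Hz
t = np.arange(fs) / fs             # 1 s of samples
sig = np.cos(2 * np.pi * fc * t)   # unmodulated carrier for clarity

# Demodulate against the (slightly wrong) LO frequency fc + df.
i_raw = sig * np.cos(2 * np.pi * (fc + df) * t)
q_raw = sig * -np.sin(2 * np.pi * (fc + df) * t)

# Lowpass: average in 10 ms blocks, then track the phase over time.
block = 1000
I = 2 * i_raw.reshape(-1, block).mean(axis=1)
Q = 2 * q_raw.reshape(-1, block).mean(axis=1)
phi = np.unwrap(np.arctan2(Q, I))

t_block = np.arange(len(phi)) * block / fs
slope_hz = np.polyfit(t_block, phi, 1)[0] / (2 * np.pi)
print(slope_hz)                    # ~ -2: the 2 Hz LO error shows up as a ramp
```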
<p>So that's the general principle of how lock-in amplifiers can improve our SNR by narrowing the measurement bandwidth while avoiding the low-frequency noise. In Part 3 we'll look at how that's done, in both analog and digital lock-in amplifiers.</p>
<!--
<p>A lock-in is basically a radio that measures the phase and amplitude of its input using two multipliers, one for I and one for Q, with the sine and cosine LO signals derived from a reference frequency that you supply. Generally this reference is the same source used to generate the AC modulation of the measured signal. There are two basic kinds of lock-ins: analog, where the multipliers and filters are physical circuits, and digital, where the signal is first digitized and the multipliers and narrow filters are done numerically by software or programmable logic. Either way, the orthogonality of sines and cosines is what makes lock-ins work. A fine but important point is that accurate digitization requires that the signal first pass through an analog filter to prevent high frequency junk from appearing at lower frequencies, a phenomenon called _aliasing_. This is familiar from moire' patterns in bridge railings and fences seen from the highway, or the tendency of stagecoach wheels in old Western movies to appear to rotate slowly backwards instead of quickly forwards. If the digitizer is sampling at f_s samples per second, the antialiasing filter has to reject frequencies above f_s/2, the so-called _Nyquist frequency_. (This requirement follows from the sampling theorem.) Real-world antialiasing filters are not infinitely sharp, so they have to start rolling of sooner than that. The maximum useful signal frequency is thus a bit below Nyquist, typically by 20%-30%. . Lock-ins are of course amplifiers as well; the amplification is mostly done ahead of the multipliers, and is generally range-switched rather than continuously variable like a volume control. It's a lot easier to make the amplifier quiet and stable that way, and those things matter a great deal in a lock-in. Because the signal of interest is often very much smaller than the wideband noise, lock-ins have to have a lot of _dynamic reserve_. 
Dynamic reserve is the ratio of the maximum allowable (signal + noise) to the full-scale signal amplitude on a given range, and is often a factor of 100 to 10,000 (40 to 80 decibels). The smallness of the desired signal is why the amplifiers have to be so stable and quiet, and the multipliers and the digitizer as well. (Minor problems in the digitizer system become very objectionable for this reason--in my experience nobody gets their first digital lock-in design quite right, because they aren't paranoid enough about this.) This is more than compensated for by the massive increase in multiplication accuracy and stability afforded by computer arithmetic compared with analog multiplier chips. Thus if it's done properly, a digital lock-in is better than an analog one, other things being equal. Done badly, it can easily be much worse. Quantization Noise ~~~~~~~~~~~~~~~~~~ Digital lock-ins also exhibit quantization noise, which requires a bit of explanation. An M-bit digitizer measuring a voltage V produces an M-bit binary fraction F = V/V<sub>ref</sub>, where V<sub>ref</sub> is the reference voltage supplied to the digitizer. Digitizers come in various resolutions, usually between 10 and 24 bits. A plot of the output code vs. input voltage thus looks like a staircase, ideally a perfectly straight staircase with perfectly equal tread widths. The analog section of the digitizer contributes noise like any normal circuit, but in addition the digitizing operation introduces _quantization noise_, the inaccuracy inherent in converting a continuously-variable voltage into one of 2<sup>M</sup> discrete steps. This is inherently a complicated thing to model, but we're saved by Widrow's theorem, which says that as long as the signal is at least a few steps in amplitude, the digitizing operation can be accurately modelled by a noiseless digitizer acting on a signal with added uniformly-distributed (white) noise of amplitude N = V<sub>ref</sub> 2<sup>-M</sup>/√12. 
Mathematically the digital signal behaves just like a slightly noisier version of the analog one. Interestingly, the wideband noise provides an important benefit by exercising a much wider range of digitizer steps than the signal alone, effectively smoothing out minor irregularities in the staircase. (Pseudorandom noise is sometimes added in analog and subtracted again digitally to ensure that this happens in a known way, a procedure called _dithering_.) Sampling Rate ~~~~~~~~~~~~~ Because the antialiasing filter is not adjustable, the digitizer must be run at a sufficiently fast f<sub>s</sub> that the out-of-band components and higher harmonics of the modulation are attenuated enough that they don't reduce the accuracy of the measurement. There is thus little advantage in adjusting f<sub>s</sub>, so it's generally fixed in a given instrument. That means that even near the upper signal frequency limit the digitizer samples more than twice per cycle of the input signal, and at lower frequencies many more times than that. Digitizers, Averaging, and Widrow's Theorem ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lock-ins use adjustable lowpass filters on I and Q to allow the user to trade off noise rejection vs. measurement speed. These lowpass filters form an average of the sampled values of I and Q. Averaging P samples of the signal reduces the filter bandwidth to f<sub>s</sub>/(2P) and reduces the noise amplitude by a factor of 1/√P, so using narrower filters will reduce noise but require a slower measurement. Traditionally, lock-ins have used 1- or 2-pole RC analog filters, which work very similarly to the simple digital filters used for continuous averaging of sampled data; the stored average (analog or digital) undergoes a slow exponential decay with time and new information is added to replace it. In this way, choosing a 1-s time constant results in the output being a moving exponential average of the past 4 or 5 seconds' worth of data. 
The Stanford Research Systems SR850 Digital Lock-In Amplifier ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The SR850 works in the above fashion, with a few additional details. It uses an 18-bit digitizer, a fixed sampling frequency of 256 kHz, and an antialiasing filter cutoff of 108 kHz. It allows a maximum reference frequency of 102.4 kHz. Its digital filters can be adjusted for time constants from 10 µs to 30 ks (8.33 hours). It forms the LO signals by computing sines and cosines digitally to an accuracy of 24 bits, the word size of its internal digital signal processor. This is the same relative precision as an IEEE-standard 32-bit floating-point number, whose significand carries 24 bits of precision (23 stored bits plus one implicit leading bit) alongside an 8-bit exponent and a sign bit. To do this, it must advance the numerical phase by 2π f<sub>ref</sub>/f<sub>s</sub> per sample. The reference frequency is continually measured and the numerical phase step and phase offset adjusted so that the positive-going zero-crossing of the cosine LO coincides with the positive-going zero-crossing of the reference. This is an example of a _digital phaselocked loop_ (DPLL).</p>
-->

Signal to Noise Ratio and You, Part 1
Philip Hobbs (https://electrooptical.net/News/author/pcdh/), 2021-01-24
https://electrooptical.net/News/digital-lock-in-principles/

<p>In building an ultrasensitive instrument, we're always fighting to improve our signal-to-noise ratio (SNR). The SNR is the ratio of signal power to noise power in the measurement bandwidth, and is limited by noise in the instrument itself and the noise of any background signals, such as the shot noise of the background light or the slight hiss of a microphone. </p>
<p>If the signal is weak, it will have proportionally more noise, so that the apparatus has to be designed to get rid of as much noise as possible. There are a number of ways to do this. The best is to get more signal or reduce the noise, for instance by increasing the laser power and using a <a href="https://www.electrooptical.net/Projects/laser-noise-cancellers/">laser noise canceller</a>, but eventually we hit a practical limit. At that point, we're left with several options, all of which boil down to filtering in one form or another.</p>
<p>Filters can be hardware or software, but their job is to pass the desired signal frequencies and reject noise at other frequencies. Of course some of the noise lands on top of our signal and so makes it through the filter anyway.</p>
<p>A low-pass filter passes frequencies below its cutoff and attenuates higher ones. If the signal is concentrated below the cutoff frequency, the filter rejects the high-frequency noise while preserving the signal (and the low-frequency noise, of course). By slowing down the measurement, for example by reducing the scan speed, the bandwidth of the signal's frequency spectrum can be reduced and the filter made correspondingly narrower.</p>
<p> A problem with this simple approach is that in most cases there's a concentration of noise at low frequencies (near DC), so filtering doesn't help as much as one might expect--in fact, it's not uncommon for the noise to get <em>worse</em> as the measurement gets slower, which is rather unintuitive. That's because there is a lower limit to the signal spectrum as well as an upper one. If we're taking 1000 measurements, each with an averaging time of a millisecond, then the signal spectrum is predominantly contained between 1 Hz and 1 kHz. A measurement that takes a second doesn't contain much signal information or noise between 0 Hz (DC) and 1 Hz. Slowing things down so that the whole scan takes a hundred seconds reduces the lower cutoff to (1/100) Hz and the upper cutoff to 10 Hz. That narrows the bandwidth, all right, but interestingly it typically makes the noise worse rather than better. Let's look at why.</p>
<p>To find the total noise, we have to add up the noise contributions at all frequencies in the filter passband. In other words, the total noise power is the integral of the noise power spectral density (PSD). The low frequency noise PSD often goes like 1/<em>f</em>, whose integral is ln(<em>f</em>). Thus if the passband is between <em>f</em><sub>1</sub> and <em>f</em><sub>2</sub>, the total noise goes as ln(<em>f</em><sub>2</sub>) - ln(<em>f</em><sub>1</sub>) = ln(<em>f</em><sub>2 </sub>/ <em>f</em><sub>1</sub>). Because the ratio <em>f</em><sub>2 </sub>/ <em>f</em><sub>1</sub> is the same in both the fast and slow measurements, the 1/<em>f</em> noise is also the same—sacrificing a factor of 100 in speed hasn't improved things at all. In fact, since things like thermal drifts rise more steeply than 1/<em>f</em>, going slower is likely to make things worse in real cases. So lowpass filtering can help, but only up to a point. In <a href="https://electrooptical.net/News/signal-to-noise-ratio-and-you-part-2/">Part 2</a>, we'll look at ways to get round this roadblock.</p>
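The ln(<em>f</em><sub>2</sub>/<em>f</em><sub>1</sub>) behavior is easy to verify numerically; here's a quick sketch that integrates a 1/<em>f</em> power spectral density over the two passbands:

```python
import numpy as np

def band_noise_power(f1, f2, n=200_001):
    """Total power of a 1/f PSD between f1 and f2, by the trapezoid rule."""
    f = np.linspace(f1, f2, n)
    psd = 1.0 / f
    return np.sum((psd[1:] + psd[:-1]) / 2 * np.diff(f))

fast = band_noise_power(1.0, 1000.0)   # 1 Hz - 1 kHz measurement
slow = band_noise_power(0.01, 10.0)    # 100x slower, same ratio f2/f1
print(fast, slow, np.log(1000.0))      # all ~6.91: no improvement at all
```

Both bands hold exactly the same 1/f noise power because only the ratio f2/f1 enters the integral.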
<!--
<p>A slightly more sophisticated approach that generally works better is to make the signal periodic in time at some frequency <em>f</em>, i.e. to move it away from DC to escape the low-frequency noise.&nbsp; (This is generally easy to do.)&nbsp; Our noise rejection filter now needs to be a bandpass centered at f, and we'll also need some means of measuring the amplitude and phase of the AC signal.&nbsp; That's more complicated, of course, but with that setup we can narrow the bandwidth as much as we like and still get the full SNR improvement.&nbsp; A lock-in amplifier is a device for making such narrow-band AC measurements conveniently.</p>
<p>An AC signal that passes through a narrow filter becomes a sine wave with some amplitude and phase: <em>g(t) = A</em> cos(2<em>ft</em> + &phi;), where the signal information is contained in slowish variations of A and &phi;, the amplitude and phase. This is familiar from radio: you can send music and speech over the air by encoding it as amplitude modulation (AM) or angle modulation. Amplitude modulation changes the size of the peaks of the sinusoidal <em>carrier</em> wave in response to the audio signal (<em>A</em> varies), while angle modulation changes the position of the peaks in time (&phi; varies). There are two common types of intentional angle modulation: the more familiar frequency modulation (FM) or phase modulation (PM), which differ only in the details of how the signal amplitude is mapped onto the carrier phase. Both types of modulation widen the carrier spectrum, forming sidebands above and below <em>f</em> that carry the signal information. It's generally preferable to talk about phase modulation, especially in discussions of noise, because in PM a flat baseband noise spectrum produces flat sidebands, whereas in FM it doesn't. From elementary trigonometry, we know that cos(<em>a+b</em>) = cos a cos b - sin a sin b, or in this case, cos(2 pi f t + &phi;) = cos(2 pi f t) cos &phi; - sin(2 pi f t) sin &phi;. Thus by measuring the amplitudes of the sine and cosine components of the signal, we can recover its amplitude and phase. Rearranging the same trigonometric identity shows us how to do this: cos a cos b = ( cos(a-b) + cos(a+b) ) / 2 and sin a sin b = ( cos(a-b) - cos(a+b) ) / 2. Thus if we multiply our signal by _local oscillator_ (LO) signals sin(2 &pi; f t) and cos(2 &amp;pi. 
f t), we get I = A cos phi [cos(2 pi f t) ][cos( 2 pi f t)] = A cos phi [cos(0) + cos(4 pi f t)], which is I = A cos &phi; + (a signal near 2f), and Q = A sin &phi; [sin( 2 &pi; f t) ] [sin( 2 &pi; f t )] = A sin &phi; [ cos(0) cos(4 &pi; f t) ], which is Q = A sin &phi; + (another signal near 2f). Lowpass filtering gets rid of the 2f components of I and Q and rejects noise exactly as our narrow bandpass filter would, with the same tradeoff of bandwidth vs. measurement speed but without the excess low-frequency noise. I and Q are the so-called in-phase and quadrature signals, which are concentrated near DC in the so-called _baseband. You can think of "quadrature" as referring to the signal shifted a quarter cycle. Thus the procedure of multiplying by the sine and cosine phases of the carrier converts the modulated carrier into a pair of baseband signals containing both the amplitude and phase information. Because of the lowpass filtering, the exact waveform of the modulated wave (sine, square, or something else) doesn't matter much--only the sinusoidal components sufficiently close to f contribute. This property of sines and cosines is called _orthogonality_. Very often only one of the two is of interest, usually I, but one can also recover A and &phi; easily: A = sqrt( I2 + Q2 ) and &phi; = atan(Q/I). (One has to worry about a few other things when computing &phi;, such as which quadrant it's in, whether you're dividing by zero, and whether it needs unwrapping to avoid ambiguities of multiples of 2 &pi;.) The multiplications also of course produce the cross terms, proportional to cos(2 pi f t) sin(2 &pi; f t) = sin(4 &pi; f t)/2 but these have no baseband component and so get filtered out as well, showing that the sine and cosine components are orthogonal even though their frequencies are the same. 
A lock-in is basically a radio that measures the phase and amplitude of its input using two multipliers, one for I and one for Q, with the sine and cosine LO signals derived from a reference frequency that you supply. Generally this reference is the same source used to generate the AC modulation of the measured signal. There are two basic kinds of lock-ins: analog, where the multipliers and filters are physical circuits, and digital, where the signal is first digitized and the multipliers and narrow filters are done numerically by software or programmable logic. Either way, the orthogonality of sines and cosines is what makes lock-ins work. A fine but important point is that accurate digitization requires that the signal first pass through an analog filter to prevent high frequency junk from appearing at lower frequencies, a phenomenon called _aliasing_. This is familiar from moire' patterns in bridge railings and fences seen from the highway, or the tendency of stagecoach wheels in old Western movies to appear to rotate slowly backwards instead of quickly forwards. If the digitizer is sampling at f_s samples per second, the antialiasing filter has to reject frequencies above f_s/2, the so-called _Nyquist frequency_. (This requirement follows from the sampling theorem.) Real-world antialiasing filters are not infinitely sharp, so they have to start rolling of sooner than that. The maximum useful signal frequency is thus a bit below Nyquist, typically by 20%-30%. . Lock-ins are of course amplifiers as well; the amplification is mostly done ahead of the multipliers, and is generally range-switched rather than continuously variable like a volume control. It's a lot easier to make the amplifier quiet and stable that way, and those things matter a great deal in a lock-in. Because the signal of interest is often very much smaller than the wideband noise, lock-ins have to have a lot of _dynamic reserve_. 
Dynamic reserve is the ratio of the maximum allowable (signal + noise) to the full-scale signal amplitude on a given range, and is often a factor of 100 to 10,000 (40 to 80 decibels). The smallness of the desired signal is why the amplifiers have to be so stable and quiet, and the multipliers and the digitizer as well. (Minor problems in the digitizer system become very objectionable for this reason--in my experience nobody gets their first digital lock-in design quite right, because they aren't paranoid enough about this.) This is more than compensated for by the massive increase in multiplication accuracy and stability afforded by computer arithmetic compared with analog multiplier chips. Thus if it's done properly, a digital lock-in is better than an analog one, other things being equal. Done badly, it can easily be much worse.

Quantization Noise
~~~~~~~~~~~~~~~~~~

Digital lock-ins also exhibit quantization noise, which requires a bit of explanation. An M-bit digitizer measuring a voltage V produces an M-bit binary fraction F = V/V<sub>ref</sub>, where V<sub>ref</sub> is the reference voltage supplied to the digitizer. Digitizers come in various resolutions, usually between 10 and 24 bits. A plot of the output code vs. input voltage thus looks like a staircase, ideally a perfectly straight staircase with perfectly equal tread widths. The analog section of the digitizer contributes noise like any normal circuit, but in addition the digitizing operation introduces _quantization noise_, the inaccuracy inherent in converting a continuously-variable voltage into one of those 2<sup>M</sup> discrete steps. This is inherently a complicated thing to model, but we're saved by Widrow's theorem, which says that as long as the signal is at least a few steps in amplitude, the digitizing operation can be accurately modelled by a noiseless digitizer acting on a signal with added uniformly-distributed (white) noise of amplitude N = V<sub>ref</sub> 2<sup>&minus;M</sup>/&radic;12. 
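Widrow's result is easy to verify numerically. A minimal sketch, assuming an ideal mid-tread quantizer with 12 bits and a 5 V reference (illustrative values, not any particular part):

```python
import numpy as np

# Numerical check of Widrow's theorem for an ideal mid-tread quantizer.
M, Vref = 12, 5.0
lsb = Vref * 2**-M
predicted_rms = lsb / np.sqrt(12)        # N = Vref * 2**-M / sqrt(12)

rng = np.random.default_rng(1)
v = rng.uniform(0.0, Vref, 200_000)      # busy signal exercising many steps
codes = np.floor(v / lsb)                # the digitizing operation
err = v - (codes + 0.5) * lsb            # quantization error
print(err.std(), predicted_rms)          # the two agree to well under 1%
```

The error is uniformly distributed over one step, and the RMS value of a uniform distribution of width lsb is lsb/&radic;12, which is where the &radic;12 comes from.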
Mathematically the digital signal behaves just like a slightly noisier version of the analog one. Interestingly, the wideband noise provides an important benefit by exercising a much wider range of digitizer steps than the signal alone, effectively smoothing out minor irregularities in the staircase. (Pseudorandom noise is sometimes added in analog and subtracted again digitally to ensure that this happens in a known way, a procedure called _dithering_.)

Sampling Rate
~~~~~~~~~~~~~

Because the antialiasing filter is not adjustable, the digitizer must be run at a sufficiently fast f<sub>s</sub> that the out-of-band components and higher harmonics of the modulation are attenuated enough that they don't reduce the accuracy of the measurement. There is thus little advantage in adjusting f<sub>s</sub>, so it's generally fixed in a given instrument. That means that even near the upper signal frequency limit the digitizer samples more than twice per cycle of the input signal, and at lower frequencies many more times than that.

Digitizers, Averaging, and Widrow's Theorem
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Lock-ins use adjustable lowpass filters on I and Q to allow the user to trade off noise rejection vs. measurement speed. These lowpass filters form an average of the sampled values of I and Q. Averaging P samples of the signal reduces the filter bandwidth to f<sub>s</sub>/(2P) and reduces the noise amplitude by a factor of 1/&radic;P, so using narrower filters will reduce noise but require a slower measurement. Traditionally, lock-ins have used 1- or 2-pole RC analog filters, which work very similarly to simple digital filters used for continuous averaging of sampled data; the stored average (analog or digital) undergoes a slow exponential decay with time and new information is added to replace it. In this way, choosing a 1-s time constant results in the output being a moving exponential average of the past 4 or 5 seconds' worth of data. 
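Both points can be sketched in a few lines: a one-pole exponential averager (the digital analogue of the 1-pole RC filter, where the stored average decays slowly as new data is blended in), and a check that averaging P samples cuts white-noise amplitude by 1/&radic;P. The `alpha` parameter and trial counts are illustrative:

```python
import numpy as np

def exp_average(samples, alpha):
    """One-pole exponential averager; alpha ~ dt/tau for dt << tau (a sketch)."""
    avg, out = 0.0, []
    for s in samples:
        avg += alpha * (s - avg)   # old average decays, new data blends in
        out.append(avg)
    return np.array(out)

# Averaging P samples of white noise reduces its RMS amplitude by 1/sqrt(P):
rng = np.random.default_rng(2)
noise = rng.standard_normal((2_000, 400))   # 2,000 trials of P = 400 samples
reduced = noise.mean(axis=1).std()
print(reduced)                              # ~ 1/sqrt(400) = 0.05
```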
The Stanford Research Systems SRS 850 Digital Lock-In Amplifier
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The SRS 850 works in the above fashion, with a few additional details. It uses an 18-bit digitizer, a fixed sampling frequency of 256 kHz, and an antialiasing filter cutoff of 108 kHz. It allows a maximum reference frequency of 102.4 kHz. Its digital filters can be adjusted for time constants from 10 &mu;s to 30 ks (8.33 hours). It forms the LO signals by computing sines and cosines digitally to an accuracy of 24 bits, the word size of its internal digital signal processor. This is the same relative precision as an IEEE-standard 32-bit floating-point number, which has a 24-bit significand (23 explicit bits plus an implicit leading 1) and an 8-bit exponent. To do this, it must advance the numerical phase by 2&pi;f<sub>ref</sub>/f<sub>s</sub> per sample. The reference frequency is continually measured and the numerical phase step and phase offset adjusted so that the positive-going zero-crossing of the cosine LO coincides with the positive-going zero-crossing of the reference. This is an example of a _digital phaselocked loop_ (DPLL).</p>
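Plugging in the SRS 850's published numbers shows how coarse the sampling gets at the top of its range:

```python
import math

# The SRS 850's numerical LO advances its phase by 2*pi*f_ref/f_s per sample.
f_s = 256_000.0     # fixed sampling rate, Hz (from the article)
f_ref = 102_400.0   # maximum reference frequency, Hz

dphi = 2 * math.pi * f_ref / f_s
print(dphi / math.pi)   # 0.8 -- i.e. only 2.5 samples per carrier cycle
```

Even at 2.5 samples per cycle the demodulation works fine, because the orthogonality argument needs only that the sampling be above Nyquist, not that the waveform look pretty.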
-->Technology: Low Noise Thermoelectric Cooler (TEC) Controllers2020-10-29T14:06:35+00:002022-01-20T13:30:56.872586+00:00Philip Hobbshttps://electrooptical.net/News/author/pcdh/https://electrooptical.net/News/technology-low-noise-thermoelectric-cooler-tec-controllers/<h3>Thermoelectric (Peltier) Coolers</h3>
<p>A thermoelectric cooler is a solid-state device made from two alumina ceramic plates with an array of metallized pillars in between. The pillars themselves aren't ceramic--they're made of alternating <em>p</em>-type and <em>n</em>-type bismuth telluride (Bi<sub>2</sub>Te<sub>3</sub>) semiconductors, alloyed with antimony telluride (<em>p</em>-type) or bismuth selenide (<em>n</em>-type), and connected in series electrically. The Peltier effect makes them electric-powered solid state heat pumps. (Thermocouples work the other way round, via the Seebeck effect, but the physics is the same.)</p>
<p>A recent <a href="https://doi.org/10.34133/2020/4361703">review paper</a> gives an interesting look at the state of the art and the many open questions in thermoelectric research. (You wouldn't think that solid-state beer fridges had such interesting physics inside, but they do.)</p>
<p><a href="https://commons.wikimedia.org/wiki/File:Peltierelement.png" title="Wikimedia Commons"><img alt="Peltier element (Wikimedia)" height="280" src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a2/Peltierelement.png/1280px-Peltierelement.png" width="509"/></a></p>
<p>(Source: <a href="https://commons.wikimedia.org/wiki/File:Peltierelement.png">Wikimedia Commons</a>. Note that real TECs have an even number of pillars so that both wires attach to the hot side, avoiding a massive heat leak through the heavy copper wire.)</p>
<p>They're easy to use: apply a current in one direction and heat moves from the top to the bottom side; switch directions and heat flows the other way. There are some finer points, of course:</p>
<ul>
<li>They need current-source biasing, because their DC resistance is low and they produce a fairly large thermocouple voltage related to the temperature difference between the hot and cold plates. That makes voltage biasing less stable.</li>
<li>They're not that efficient--due to resistive (<i>I<sup>2</sup>R</i>) heating and heat conduction along the pillars, the hot side puts out a lot more heat than the cold side takes in. (How much more depends on the temperature drop, but it's at least 3×.)</li>
<li>They're mechanically fragile, especially in the shear direction, so you have to keep the cold plate lightweight and apply a nice big compressive preload via nylon screws.</li>
<li>If you put an ordinary on/off thermostat on one, it will die very rapidly from thermal fatigue. Linear control is key.</li>
<li>Pulse-width modulation (PWM) will reduce performance, because it generates more <i>I<sup>2</sup>R</i> heating than analog linear control does. Ordinary Class-AB linear control is somewhat wasteful, because a lot of heat is dissipated in the driver amp, which we have to get rid of without warming up the box too much in the process.</li>
</ul>
<p> For instrument use, we have to keep in mind one much less well-known property of TECs:</p>
<ul>
<li>There's an astounding amount of capacitance between the TEC elements and the cold plate. </li>
</ul>
<p>I just put a piece of 1-inch self-adhesive copper tape on the cold side of a typical 30-mm Marlow TEC, and measured 67 pF from there to the wiring, about 10 pF/cm<sup>2</sup>. A PWM (Class D) driver would put a few volts at maybe 100 kHz across that, with probably 30-ns edges. The resulting charge injection spikes would be around <br/>(3 V / 30 ns)× 67 pF ~ 7 mA,<br/>and maybe much worse. There are a lot of synchronous buck regulators out there whose switching edges are considerably faster than a nanosecond, which would put the spikes up near an amp.</p>
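The back-of-envelope arithmetic is just i = C dV/dt, using the 67 pF measured above (the 200 ps edge in the second line is an assumed figure for a fast synchronous buck):

```python
# Charge-injection spike estimate: i = C * dV/dt.
C = 67e-12            # measured capacitance, cold plate to wiring, F
dV, dt = 3.0, 30e-9   # a PWM edge of 3 V in 30 ns

i_spike = C * dV / dt
print(i_spike * 1e3)        # ~6.7 mA -- the "around 7 mA" above

# A subnanosecond switching edge (200 ps assumed here) pushes it near an amp:
print(C * dV / 200e-12)     # ~1 A
```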
<p>Fast spikes that large will reliably make a mess of an ultrasensitive measurement, and that puts us in a dilemma. PWM control is too noisy, and even the usual one-or-two-section LC filter probably won't be enough for a low-noise laser or photoreceiver. On the other hand, old-timey Class-AB linear control wastes power and generates even more heat. What to do? One of EOI's building-blocks is a <em>Low Noise </em><em>Class-H TEC Driver,</em> which gives the best of both worlds.</p>
<h3>Class-H Amplifiers</h3>
<p>A Class-H amplifier is a linear amp running off a fast-responding switching power supply. The supply voltage is maintained just high enough for the linear amp to work properly, maybe 0.1 V of headroom. If the TEC needs 3 A at 1.3 V, the supply runs at 1.4 V instead of probably 5 V for a pure linear controller. That cuts the power dissipation in the linear amp from (5 V - 1.3 V) × 3 A = 11 W down to 300 mW, a saving of 97%. In addition, our design draws on decades of experience in low-noise analog electronics, and so is able to cut the remaining switching spikes down by over 100 dB, to levels that are hard to measure. Since the TEC itself is dissipating 4 W or so, this is an excellent tradeoff. (If you're interested in all this amplifier-class business, I recommend this <a href="https://circuitcellar.com/wp-content/uploads/2019/10/2013-12-015-Lacoste.pdf">Circuit Cellar article</a> by Robert Lacoste.) Our linear amps are class-AB, with a proprietary reactive-feedback topology that drops the headroom requirement to absolute rock bottom while maintaining very high spike rejection.</p>
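The dissipation comparison above works out like this (same numbers as in the text):

```python
# Linear-stage dissipation for a TEC drawing 3 A at 1.3 V, driven either
# from a fixed 5 V rail or a Class-H tracking rail with 0.1 V headroom.
I_tec = 3.0     # A
V_tec = 1.3     # V

P_linear  = (5.0 - V_tec) * I_tec   # pure linear: ~11 W wasted in the amp
P_class_h = (1.4 - V_tec) * I_tec   # Class-H: ~0.3 W
saving = 1.0 - P_class_h / P_linear

print(P_linear, P_class_h, saving)  # ~11.1 W, ~0.3 W, ~97%
```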
<p>The Low-Noise Class-H TEC driver comes in two versions, both of which supply very quiet, stable current-mode drive and millikelvin temperature stability. The simpler one is best for detection systems, which require cooling well below ambient temperature but don't need heating. Thus the driver's output works in one quadrant, with its output voltage and current both positive. It's simple, inexpensive, and takes up only 1 square inch of board space including the switching supply. (It's best if the switcher is on the other side of the ground plane--we actually use one of those subnanosecond switchers, the LMR23630, because it's small and efficient, and its noise doesn't hurt us.)</p>
<p>On the other hand, stabilized diode lasers generally work near room temperature, and so need both cooling and heating. Furthermore, lasers get turned off and on, and some are modulated, so that the thermal load on the cold plate is highly variable. This requires four-quadrant operation in general. Thus our more advanced TEC controller uses a symmetric current-conveyor topology that easily copes with whatever the laser is doing. The cost increase is only about 25%, and the board space about another half square inch, so either one can easily fit in a tight package with equally tight cooling and cost constraints. </p>
<p>Because of our engineering ethos and long experience in instrument design, we achieve this high performance and small size without using a lot of fancy parts, resulting in very low BOM cost. Any licensing cost is a small fraction of the money saved on the parts, so everybody wins.</p>
<p>We've used these building blocks in several products, from our ultraquiet laser driver to SEM cathodoluminescence detectors based on MPPCs [also called silicon photomultipliers (SiPMs)] and avalanche photodiode detectors (APD or SPAD) for biomedical systems. </p>
<p>Give us a call if you have an application we might be able to help with!</p>
<p></p>How We Work2020-01-30T14:56:30+00:002022-11-21T19:03:33.467171+00:00Philip Hobbshttps://electrooptical.net/News/author/pcdh/https://electrooptical.net/News/how-we-work/<!-- 30 Jan 2020 12:49:01 -->
<p><em>At EOI, we've been building advanced instruments for a long time. One reason for our success is our large inventory of working designs, and another is the way we go about doing it. This post walks through a typical sort of development plan for a challenging customer requirement, in the form of a hypothetical email proposal outline for a fibre-coupled noninvasive glucose sensor similar to <a href="https://electrooptical.net/News/transcutaneous-blood-glucose-a-war-story/%0A">the one we did in 2013</a>.<br/> (You can also read about a <a href="https://electrooptical.net/working-with-us/silicon-photomultiplier-cathodoluminescence-detector/"> recent project</a> that went a lot like this, except with a single prototype stage.) </em></p>
<p>Dear George:</p>
<p>I hope you and yours are doing well.</p>
<p>The blood solute project is an interesting one that will make a difference to a lot of people, and promises to be good business as well. We've enjoyed working with your team thus far--they're sharp and good to work with, so I expect it'll be a lot of fun. The project goals as outlined focus on getting a working preproduction version through an FDA animal study as soon as possible, with a close eye on on design for manufacturing.</p>
<p>We propose to develop a prototype system at time and materials (NRE), in the following stages:</p>
<p><strong>1. Photon budget.</strong> Our usual method starts with a theoretical performance calculation, which we call a 'photon budget.' That lets us know how good the measurement <em>could</em> be, which is key to knowing how well the apparatus is performing. Knowing the physical limits enables us to make intelligent tradeoffs between performance, cost, size, and schedule with minimal risk, resulting in a realistic spec for an optimal system. In our experience, the final system nearly always gets very close to the theoretically optimal performance, and does it without needing a lot of expensive parts. In this case we have our existing transcutaneous blood solute photon budget to start from, so this is not a lot of work. The photon budget includes the fiber bundle design. (We have a supplier that makes very nice custom bundles in a week or so for a decent price.)</p>
<p><strong>2. Simple proof of concept.</strong> The photon budget leads naturally to a set of performance specifications. Once those have been agreed, we generally do a preliminary proof-of-concept using hand-modified or entirely hand-wired circuits and simplified optics and mechanics. These days we have enough existing designs that our POC systems are mainly circuit boards in boxes, sometimes modified, and wired up with cables and (in this case) fiber bundles. The POC system allows us to verify the photon budget, and lets you see how the final system will perform. (If you like, we can ship you the POC so your folks can evaluate it—it'll be pretty easy to use.) This approach is both inexpensive and fast, and gives you good management control. Thus it reduces the technical and financial risk considerably without sacrificing time-to-market.</p>
<!-- &#8239; is a thin nonbreaking space, and &#8819; is greater or approximately equal -->
<p>For this application we plan to use our low noise constant power / constant current laser driver (the LC120C) and one of our nanoamp photoreceivers (the QL01/QL02), which we have in stock. The POC system would be suitable for use with tissue phantoms and informal, in-house human tests to help guide the detection algorithm development. While these are obviously not the same as a clinical trial, if the system performs well we will have good confidence in the trial outcome, and so can proceed comfortably to the next stage. If for some reason it does not do so well, it won't have consumed a lot of time or money. Based on our previous experience and our existing products we expect that this development will take about four to six weeks.</p>
<p><strong>3. Brassboard systems for animal trials.</strong> After the POC system has been approved, we would be ready to build the preproduction versions, which would include the final optomechanical design, circuit board design / fabrication, and bring-up of 1 to 5 systems for test. We would do this in cooperation with your mechanical and industrial design folks, who are in charge of the product's look, packaging, replication, and so on. (A <em>brassboard</em> is a prototype suitable for use in the field, as opposed to a <em>breadboard</em>, which is a lab system.)</p>
<p>These steps aren't quite this orthogonal in practice. The design will need to be manufacturable at the right unit price, testable, and able to fit the required form factor, all of which will influence the prototype design as well.</p>
<p><strong>EOI's Value</strong></p>
<p>As you know, our focus is high performance, low noise electronics, optics, and software, but we do a fair amount of 3D CAD and EDA in-house, so working with outside groups who specialize in those is straightforward. For the transcutaneous glucose sensor project we bring a track record of working products and the only <em><a href="https://electrooptical.net/News/transcutaneous-blood-glucose-a-war-story/%0A">reliable working example</a></em> of a transcutaneous ethanol/glucose detector that we know of. Except for a couple of patents that you've seen, and the detailed design of the hand cradle, that proof-of-concept design was based on our pre-existing calculations and other design expertise in fibre-coupled photoemission spectrometers for semiconductor inspection.</p>
<p>We have many existing designs and products that we designed for similar challenging applications.</p>
<p><strong>IP Licensing</strong></p>
<p>We are bringing three different sorts of IP that potentially apply to this project.</p>
<p>1) The optical design, front end, and instrument design know-how that allows us to consistently produce instruments that extend the state of the art while keeping the bill-of-materials cost very low. This cost/performance advantage is our key technical skill. Designs we create are heavily influenced if not directly based on many years of experience. "Building Electro-Optical Systems" has helped a generation of researchers and product designers, as we hear over and over from workers in the field.</p>
<p>2) Experience in fiber-coupled transcutaneous blood solute detectors, including detailed photon budgets, detailed fiber bundle design, and theoretical models.</p>
<p>3) Several existing products and designs that are likely an excellent fit for the final product.</p>
<p>For you as a customer, the value of this IP is higher performance, faster time-to-market, and, most critically, greatly decreased technical risk. We've done this stuff very successfully for a very long time, and we know where the potholes are.</p>
<p>We propose a two-tier license. The first (lower) tier is for items 1 and 2 above, and would apply to things we design for you that contain our background IP, irrespective of whether it embodies our existing detailed designs. The second tier would apply if our existing products (whether modified or not) are actually incorporated in your product. We're quite happy either way, but in our experience using pre-existing products is generally a big win, especially since the BOM cost reduction is more than enough to pay the royalty. Based on our existing deals, we suggest rates of 2.5% and 5% for the two tiers, but the exact terms will obviously depend on how much of each kind of IP is included, TAM, competitive landscape, and so on. We want to succeed together with you.</p>
<p>This is a great project and we'd love to be involved in making it a reality. I look forward to hearing from you.</p>
<p>Cheers</p>
<p>Phil</p>Silicon Photomultiplier (SiPM, MPPC) System for Cathodoluminescence2020-01-27T11:38:02+00:002022-01-20T13:36:29.010710+00:00Philip Hobbshttps://electrooptical.net/News/author/pcdh/https://electrooptical.net/News/silicon-photomultiplier-sipm-mppc-system-for-cathodoluminescence/<!-- 30 Jan 2020 12:20:00 -->
<p>In <a href="https://electrooptical.net/working-with-us/how-we-work/">How We Work</a>, we gave an overview of how we build instruments, from the initial feasibility calculation (or <i>photon budget</i>) to delivery of the first production units.</p>
<p>Each project is different, of course, but there are common themes. Here's a description of these steps from our most recent one at this writing (late January 2020), which is a low-cost cathodoluminescence detection system for use in scanning electron microscopes (SEMs).</p>
<h3>Photon Budget</h3>
<h4>Cathodoluminescence Principles</h4>
<p>A SEM works by scanning a tightly-focused beam of high-energy electrons (1 keV - 30 keV) across a sample, and looking at the stuff that comes out. For ordinary imaging you usually look at backscattered and secondary electrons, but there are other modes. For instance, you can get a lot of information about the sample's chemical composition by looking at the x-rays it emits. Most samples will also emit some amount of light, a process called <a href="https://en.wikipedia.org/wiki/Cathodoluminescence"><i>cathodoluminescence </i></a>.</p>
<!-- <img src="/static/media/uploads/InGaN_crystal_SEM+CL.png" alt="InGaN
crystal
imaged in SEM + CL mode (Wikimedia)" width="699" height="464" /> (Image credit:
<a href="https://en.wikipedia.org/wiki/File:InGaN_crystal_SEM%2BCL.png">
Wikipedia</a>.) -->
<div class="thumbnail"><img alt="InGaN crystal
imaged in SEM + CL mode (Wikimedia)" src="https://electrooptical.net/static/media/uploads/InGaN_crystal_SEM+CL.png" width="65%"/>
<div>(Image credit: <a href="https://en.wikipedia.org/wiki/File:InGaN_crystal_SEM%2BCL.png"> Wikipedia</a>.)</div>
</div>
<p>Of course, some samples are a great deal brighter than others. The <i>luminescent yield</i> is the average number of photons escaping from the surface of the sample per incident electron. It ranges from about 10<sup>-5</sup> for some metals to more than 100 for LED chips.</p>
<h4>Signal-To-Noise Ratio</h4>
<p>To make a halfway decent image, you need a signal-to-noise ratio (SNR) of at least 20 dB, which means at least 100 detected photons per pixel on average. The smallest vaguely useful image size is around 300×300 pixels, so a minimally acceptable image needs at least 100×300×300 ≈ 10<sup>7</sup> detected photons.</p>
<p><i>Good</i> images, ones you might want to publish, would have 10 times that many pixels and would need 10 or even 100 times more photons per pixel to reduce the visual noise. To make the system pleasant to use, you want the frame time to be at most a few seconds during setup and no more than a minute for a high quality image. Thus we need a count rate ≳10<sup>7</sup>e<sup>-</sup> / 3 s or 3 MHz for low SNR and ≳10<sup>9</sup> - 10<sup>10</sup> e<sup>-</sup> / 60 s or 17-170 MHz for high quality.</p>
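The low-SNR count-rate requirement above is straightforward arithmetic: for a shot-noise-limited measurement the power SNR equals the photon count, so 20 dB means 100 detected photons per pixel.

```python
# Count rate needed for a minimally acceptable image.
photons_per_pixel = 100      # 20 dB SNR per pixel (shot-noise limited)
pixels = 300 * 300           # smallest vaguely useful image
frame_time = 3.0             # seconds -- tolerable during setup

rate_hz = photons_per_pixel * pixels / frame_time
print(rate_hz / 1e6)         # 3.0 -- the ~3 MHz count rate in the text
```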
<!-- &#8239; is a thin nonbreaking space, and &#8819; is greater or approximately equal -->
<p>A typical beam SEM beam current is on the order of 100 nA, which is roughly 10<sup>12</sup> electrons per second. The total photon emission rate will thus be between 10<sup>10</sup> and 10<sup>14</sup> per second for our intended range of samples. Fancy cathodoluminescence systems use large ellipsoidal or parabolic mirrors to collect nearly all of the emitted light, but those are a huge pain to align, and they get in the way of the other detectors for secondary electrons and x-rays.</p>
<p>This low-cost system is intended to be inexpensive and easy to use. The client was willing to specialize it for somewhat brighter samples, those with yields of ~0.1% or higher. We therefore relied on putting the sensor as close as possible to the sample without running into anything or blocking other detectors.</p>
<p>The result was that we collect about 1% of the emitted light. With 35% peak overall efficiency (48% area efficiency and 72% detection probability), we have about 4×10<sup>7</sup> to 4×10<sup>11</sup> detection events per second. With an ordinary photodiode with a gain of 1, that's a current range of about 6 pA to 60 nA, and that's for light near the peak sensitivity wavelength of the detector. It's hard to get good results in a wide bandwidth with that sort of current, so an electron-multiplying detector would help a lot.</p>
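The whole chain from beam current to detector current can be sketched as follows. The 10<sup>-2</sup> lower yield is inferred from the 10<sup>10</sup>/s emission figure quoted above; the rest of the numbers come from the text:

```python
# Photon-budget chain from beam current to unity-gain photodiode current.
e = 1.602e-19                    # electron charge, C
electrons_per_s = 1e12           # ~100 nA beam, "roughly 10**12 electrons/s"
yield_lo, yield_hi = 1e-2, 1e2   # luminescent yield range for target samples
collection = 0.01                # ~1% of emitted light collected
efficiency = 0.35                # 48% area efficiency x 72% detection prob.

det_lo = electrons_per_s * yield_lo * collection * efficiency  # ~4e7 /s
det_hi = electrons_per_s * yield_hi * collection * efficiency  # ~4e11 /s

# With an ordinary photodiode at gain 1, each event moves one electron:
i_lo, i_hi = det_lo * e, det_hi * e
print(i_lo, i_hi)                # ~6 pA to ~60 nA, as in the text
```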
<p>Together with the client, we chose a tiled array of Hamamatsu <i>multi-pixel photon counters</i> (MPPCs), also known as <i>silicon photomultipliers</i> (SiPMs). These devices are sensitive to single photons, and have about the same overall quantum efficiency as a PMT (10-40% or so, depending on wavelength). They consist of an array of hundreds or thousands of individual avalanche photodiodes (APDs) wired in parallel, each with a series resistor to recharge it when it fires. At the right bias voltage, a single detected photon will cause one of these APD pixels to avalanche, dumping a fixed amount of charge into the external circuit. They recharge pretty fast (~20ns), which makes MPPCs useful in analogue mode as well as photon counting mode.</p>
<p>In <i>Building Electro-Optical Systems</i> I'm quite critical of avalanche photodiodes in general, because on an apples-to-apples basis they have around a million times higher dark count rates than photomultiplier tubes (PMTs). That is, a 100-μm silicon APD has about the same dark count rate as a <i>four inch</i> bialkali PMT. That's really bad for the lowest-light measurements.</p>
<p>MPPCs do have some very important advantages, though: they're much less delicate, easier to drive, longer-lived, and considerably cheaper. In this case that turned out to be a big win, because the minimum useful signal for imaging (3 MHz count rate) is more than three times the maximum dark count rate (about 900 kHz in our operating conditions). Thus for imaging, the dark count rate has only a minor effect on the system performance.</p>
<h3>Proof of Concept</h3>
<p>The POC prototype was composed of our high bandwidth voltage controlled amplifier (0.5×-64×, 50 MHz BW), low dissipation thermoelectric cooler (TEC) driver, and avalanche photodiode bias supply; a custom front end based on previous designs, with a bootstrap based on a Mini Circuits <a href="https://www.minicircuits.com/pdfs/SAV-551+.pdf">SAV-551+</a> pHEMT; a collection of power supplies out of the drawer; and of course a very nice Hamamatsu MPPC with built-in thermoelectric cooler (TEC). The whole thing was controlled with a modified version of one of our older laser driver models. The firmware and PC software were derived from these products as well.</p>
<p><a href="https://electrooptical.net/static/media/uploads/videos/MPPCphotonCounting.mpg">The result</a> was a system that could resolve individual photon detection events at high gain, and worked in room light at low gain, all with the twist of a knob. Actually there are two knobs in this version—one for the MPPC bias and one for the voltage-controlled amplifier. The production version has one knob that controls both via software, with the "knob feel" designed to mimic that of a photomultiplier. (The bias control and the VCA are actually implemented with a microcontroller. It turns out to be very hard to do a VCA in analogue that maintains low noise at low gain, at least with a reasonable number of parts.)</p>
<h3>Productizing</h3>
<p>The POC hardware design, software, and construction took a bit over three weeks' work. Calendar time from contract signing to delivery: eight weeks, including a week each for the photon budget and the client's evaluation process.</p>
<p>The client was happy with the POC system, so we agreed informally on license terms and moved forward with productizing it. This was a bit more involved than usual. The system needed to operate inside the SEM's vacuum chamber, and had to be very small in order to get the detector as close to the sample as possible so that we'd get more light. It also had to have an adjustable length so as to fit as many SEM brands as possible, and had to minimize stray magnetic fields.</p>
<p>All of this meant a fair amount of back-and-forth with the client's engineers to arrive at a workable optical/electrical/mechanical/thermal design. We wound up with a three-board solution.</p>
<p>The MPPC itself mounts on a 15-mm square board that sits on top of the TEC, along with a SMT thermistor. It connects via a Kapton flex circuit with a 300 μm pitch. This minimizes the heat leak from the wiring, which is a serious issue in cooled circuitry.</p>
<p>A very small front-end board (20 × 70 mm) also goes inside the chamber, with the front end amplifier, variable gain stages, various sensors for ambient conditions, and a small microcontroller (ARM Cortex M0+). Cooling electronics is tough <i>in vacuo</i>, so power dissipation had to be kept low. The main cooling issue is getting rid of the waste heat from the TEC, which is several watts, but that flows down the aluminum mounting bracket to the vacuum flange.</p>
<p>The third board, which mounts on the outside of the vacuum flange, has the bias generator for the MPPC, DC-DC converters, <a href="https://electrooptical.net/News/technology-low-noise-thermoelectric-cooler-tec-controllers/">low noise thermoelectric cooler controller</a>, configurable output amplifiers, and communications: serial to the front end board and USB to the user interface box.</p>
<p>All three boards worked fine on the first iteration. (A couple of resistor values needed changing, but that was it.) The result was the <a href="https://www.delmic.com/sparc-jolt-detection">Delmic JOLT system.</a></p>
<p>Overall, a very pleasant and successful project, working with some great people.</p>Noninvasive Transcutaneous Blood Glucose: A War Story2020-01-15T16:55:20+00:002022-02-02T17:52:37.845007+00:00Philip Hobbshttps://electrooptical.net/News/author/pcdh/https://electrooptical.net/News/transcutaneous-blood-glucose-a-war-story/<!-- 30 Jan 2020 13:19:23 -->
<p>Here at EOI we have three main kinds of project. One is our internal technology development projects. Some of these fail, mostly because they tend to be insanely hard, but the ones that pay off give us important new capabilities.</p>
<p>The second is research projects with customers, trying to push technological limits in fields such as biochip DNA sequencing, nanoantennas for infrared detection, ultrahigh resolution optical microscopy, hypersonic lidar, and (closer to home) infrared remote controls for consumer electronics. Those ones are a bit sporty, but succeed more often than not.</p>
<p>The third and most usual is customer work aimed at product development. These projects almost always succeed. On the rare occasions when they don't, it's generally because of client imperatives such as cancelling an already-funded project, whether for internal reasons or because the external funding went away. (Of course we aren't perfect either---I've already posted a <a href="https://electrooptical.net/static/media/uploads/Projects/Footprints/fpwaropn.pdf"> project from 20 years ago</a> where the responsibility was more nearly 50:50. Still, our record is very good.)</p>
<p>One example of a tantalizing near-miss was a transcutaneous (<em>noninvasive</em>) sensor for blood glucose and alcohol, to replace finger pricks (<em>ouch</em>) and breathalyzers. It was really sad—folks have been working on that problem for 30 years, burning through mountains of cash, and mine is the only one I know of that actually worked reliably. Here's the story.</p>
<p>The founder called me out of the blue at 3 PM on Christmas Eve, 2012. He turned out to be a charming and intelligent fellow with a lot of drive, who was almost entirely self-taught and was practically supernatural at raising money. He wanted me to build him an instrument, because that's what I do. We eventually became good friends.</p>
<p>He'd patented the general principle, which avoided the individual physiological variations that usually bedevil those sorts of measurements. The idea was to use a hand cradle with a virtual pivot(*) holding a fibre bundle against the web of the first and second fingers. The location is perfect: there are two arteries very close to the surface, so you get to measure fresh blood instead of tissue fluid, and no one has hair, fat, or calluses there to get in the way. (The finger webs are also quite tender to the touch, so if you put a small-diameter pin there as well, you can prevent the user from pushing so hard that the arteries get squashed.)</p>
<p>He had some promising data that he took himself using a Perkin-Elmer FTIR (Fourier-Transform Infrared) spectrometer and his hand cradle. He made arrangements with some folks at USC to provide him with lab space and a bit of technical help doing that. (He was a very able guy—being self-taught has great strengths as well as some profound weaknesses.) The USC statistics folks worked with him to develop AI-based detection algorithms for alcohol and glucose, which did very well, but of course his FTIR cost $100k. So he called me.</p>
<p>The project was unusual in that I didn't have my arms around the whole measurement. I designed and built the gizmo, but the founder had his USC statistics colleagues use their AI chops to build the model and extract the blood solute data, so I never knew in detail how that was done. (It wasn't anything simple such as spectral differences or ratios.)</p>
<p>I did a photon budget, which is my term for a detailed feasibility calculation emphasizing stability and SNR. That's super important, because without calculating how good the measurement <em>could</em> be, you never really know how you're doing. A photon budget prevents you from wasting time on recreational impossibilities on the one hand, or turning a silk purse back into a sow's ear on the other.</p>
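As a flavour of what the very simplest layer of a photon budget looks like, here's a sketch (with made-up numbers, not the actual instrument's figures) of the shot-noise-limited SNR you'd compute for a given photocurrent and detection bandwidth:

```python
import math

Q_E = 1.602e-19  # electron charge, coulombs

def shot_noise_snr_db(photocurrent_a: float, bandwidth_hz: float) -> float:
    """SNR (dB) of a DC photocurrent measured against its own shot noise."""
    noise_rms = math.sqrt(2 * Q_E * photocurrent_a * bandwidth_hz)  # A RMS
    return 20 * math.log10(photocurrent_a / noise_rms)

# e.g. 1 uA of photocurrent in a 1 Hz lock-in bandwidth: about 125 dB
print(shot_noise_snr_db(1e-6, 1.0))
```

If the bench measurement falls far short of a number like this, something in the signal chain is broken; if the proposed measurement can't succeed even at this limit, it's a recreational impossibility.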
<p>In this case it looked as though we could get a very good measurement fairly simply, using a tungsten source and condenser, a custom-designed split bundle of about 20 fibres (TX + RX), a conventional Czerny-Turner monochromator, a single extended-InGaAs photodiode at room temperature, and a chopper wheel plus lock-in for detection. (That passes for simple in the IR business.) The proof-of-concept (POC) system took me about six weeks start to finish, including the photon budget, optical design, designing and building the electronics, assembling the optomechanics, and writing the software.</p>
<p>It was built on a 12 x 24-inch aluminum breadboard using a combination of hacked Microbench(**) parts, JB Weld (the poor man's machine shop), a chopper wheel, and a servo from an RC airplane for moving the grating. The servo had titanium gears, so it was pretty manly for its size, and the grating cradle was also built from toy airplane parts, all courtesy of servocity.com. The electronics were hand-wired in die cast aluminum stomp boxes, dead bug style, connected with BNC cables. The chopper was a commercial unit from Thor Labs, and the back end was a console-mode C++ program running on a second-hand laptop and communicating via a LabJack data acquisition brick. The LabJack also produced the pulse-width-modulated (PWM) signal to control the servo. (This was early on, when EOI was just me. Nowadays it would have one of our MCU-based products inside, and Simon would have done some nice firmware to make it take data and communicate over USB, among other things.)</p>
<p>It all worked great, and was very amusing to watch—an advanced clinical instrument built with JB Weld and toy parts. Wouldn't the FDA have loved that?</p>
<p>We did the preliminary acceptance test by having some friends over for drinks and measuring all of our blood spectra every 15 minutes or so. Qualitatively the data looked exactly as we hoped—nice repeatable curves with the right time dependence and no big physiological variations between subjects. We did some glucose work using a strip reader for comparison, but the strips have relatively poor accuracy, so we concentrated on the alcohol measurement for that part of the demo. (Quaffing a few cool ones is much more fun than sticking pins in your fingers, incidentally.)</p>
<p>After the founder used the POC data to raise a bunch more money, we brought the proto and the Perkin-Elmer FTIR to a contract engineering house in Orange County CA that will remain nameless because they have this unfortunate tendency to sue everybody in sight. The founder kept me sort of distantly in the loop, but made a crucial mistake: he tried to save money by supervising the CE firm himself, when he didn't have the technical background.</p>
<p>The optomechanics needed redoing, obviously. The CE hired an external consultant to do most of that, and he did a very nice job overall. The folks doing the electronics, motion control, and software were a different story. They proceeded to fall into every pothole along the road, like a drunk. Ignoring both the photon budget and my working design, they replaced my front end with an ordinary op amp TIA, not realizing they were trashing the SNR by a factor of 30 (15 dB) in the process. (I managed to get that one fixed, and the guy responsible taken off the project. Unfortunately he wasn't the worst.)</p>
<p>They replaced my direct drive for the grating with a rubber belt drive, which did give nice smooth motion. I had initially suggested a <a href="https://pdfpiw.uspto.gov/.piw?Docid=04322166">sine bar</a>, which is used in most Czerny-Turner monochromators on account of its high resolution and excellent repeatability, but they ignored that too. That put them in need of more encoder precision, so they added an encoder to the motor as well as the grating shaft, and did some trick to combine the two encoder readings. Of course this scheme rapidly lost all accuracy as the belt squirmed around while moving, so that the calibration wouldn't sit still. (A metal taut-band drive would probably have worked.) Even the encoder on the grating shaft drifted like mad.</p>
<p>I went out to California to try to get to the bottom of some of this stuff. It was an uphill battle, because I had no official position in their client's organization (<em>i.e.</em> I wasn't writing the cheques), but we did manage to solve that one. The encoder's output was a PWM signal, and the data was encoded as the duty cycle, <em>i.e.</em> the ratio of the pulse width to the period (like the RC servo, only backwards). They were measuring the pulse width by itself, using a capture input of their MCU. That turned the encoder's frequency drift into an angular drift. Fortunately, once found, it was easily fixed in software. When that was done, I hit the poor encoder with cold spray and a heat gun, vastly exceeding its specified operating temperature range, but couldn't get it to drift at all. Kudos to US Digital for building solid encoders, even <a href="https://www.usdigital.com/products/encoders/absolute/kit/mae3/">those cheapish ones</a>.</p>
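A minimal sketch of the point (with hypothetical tick counts, not their firmware): take the ratio of the captured high time to the captured period, and the PWM clock frequency divides out.

```python
def angle_from_pwm(high_ticks: int, period_ticks: int) -> float:
    """Angle in degrees from a PWM encoder: the duty cycle is clock-independent."""
    return (high_ticks / period_ticks) * 360.0

def angle_from_width_only(high_ticks: int, nominal_period_ticks: int = 4096) -> float:
    """The broken version: pulse width alone, assuming a fixed tick rate."""
    return (high_ticks / nominal_period_ticks) * 360.0

# Same shaft angle, but the encoder's internal clock has drifted 1% slow,
# stretching both the width and the period by the same factor:
print(angle_from_pwm(1024, 4096), angle_from_pwm(1034, 4137))    # both ~90
print(angle_from_width_only(1024), angle_from_width_only(1034))  # ~90 vs ~90.9
```

The width-only reading turns a 1% clock drift into nearly a degree of phantom rotation, while the ratio barely moves.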
<p>The belt-drive system failed anyway, basically because the measurement was being done on the slope of the very strong IR absorption spectrum of water in the 1.4-1.7 μm range, so that small wavelength shifts caused much larger amplitude errors. That put a huge premium on wavelength accuracy. The wavelength range was narrow, meaning that the grating moved only a few degrees during a scan, so even 4096 steps/turn wasn't fine enough resolution. Once again, I told them to use the tried-and-true sine-bar drive, and once again they refused, insisting on using a worm-and-sector gear instead, with the encoder on the worm shaft to get more encoder lines per degree of grating tilt. This was another mistake.</p>
<p>What's so bad about worm gears, you ask? Well, they're fine for some things, but high-precision angular motion is not one of them, for a number of reasons. Nearly all kinds of gears use rolling friction; the gear teeth are shaped so they mesh without sliding, like train wheels on a track. This minimizes wear. Worms are the exception; they work using sliding friction, which requires a lot of lubrication. Moving back and forth through a few-degree angular range makes the grease film thin out with time, as you'd expect, and that had a serious effect on our angular accuracy.</p>
<p>I calculated that in their design, with the very small radius of the sector gear and the tight wavelength-error budget, the maximum lubricant variation we could tolerate was about 70 nanometres, roughly the diameter of a small virus. Since they were nearly finished with the prototype build for the formal clinical trial, I told them to use dry molybdenum disulphide (MoS<sub>2</sub>) for lubricant instead of grease. Being a solid, that had some hope of working.</p>
<p>They straight-up refused again, saying they couldn't get MoS<sub>2</sub>, so I sent them a link to the exact SKU on fastenal.com, after verifying that their local Fastenal had it in stock. I even sent them Mapquest directions so they could find the store. (That was a bit sarcastic, which I regret, but I was getting pretty tired of their nonsense along about then.)</p>
<p>They proceeded to ship one unit with grease and three units unlubricated. When I complained about all the fiddling they were doing, with no calculations to guide it, one cheery lad smiled and said brightly, "That's engineering!" (He was one of the better ones.) They also took the POC proto apart to use bits of it in their test setup, so that they had no comparison data, and, oh, yes, they broke the $100k FTIR and didn't tell anyone.</p>
<p>The clinical trial had to be scrubbed when the units failed the acceptance test. I attended it, but since the USC folks weren't crunching the data in real time, the failure wasn't entirely apparent till later.</p>
<p>All along, I told the founder about the problems, and he told the CE. They did fix a few things, but mostly they simply said "yes" and meant "no". Since I wasn't supervising them, they didn't keep me in the loop with what they were doing to fix the problems.</p>
<p>By that point the CE had run through a year's time and most of a million bucks, and the founder had to pull the plug. Some months later, two units arrived on my bench, each attached to an expensive National Instruments A/D box because they hadn't been able to get their data acquisition system working. Along with the boxes came hundreds of megabytes of documents and software, and an urgent request for me to get to the bottom of it all. Turned out to be a real onion problem—you peel off one layer, cry, and peel off the next. Here are a couple of the layers:</p>
<p>Layer 1: The phase of the detected signal was wandering around by ±10 degrees or so. Since the measured signal goes as the cosine of the phase, this amounted to a couple of percent error—easily enough to destroy the measurement. The control code seemed to be an ordinary proportional-integral-derivative (PID) controller using an optointerrupter on the chopper wheel, which should have been fine. I built a strobe light using an HP 3325A frequency synthesizer driving a LED, so that I could stop the motion and see the loop dynamics. (This is a standard trick in motor control.) The controller was totally broken—regardless of the settings of P, I, and D, there was no way of making the phase sit still. A gentle continuous stream of canned air would move the phase, and it would never recover—<em>i.e.</em> there was no integral term in the control law, despite what the settings would have one believe.</p>
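The symptom is easy to reproduce in a toy simulation (illustrative gains and normalized units, nothing to do with their actual code): against a constant disturbance, a proportional-only loop parks at a nonzero error, while a loop with an integral term drives the error to zero.

```python
def run_loop(kp: float, ki: float, disturbance: float = 1.0,
             steps: int = 5000, dt: float = 1e-3) -> float:
    """Simulate a first-order phase loop; return the final phase error."""
    phase, integral = 0.0, 0.0
    for _ in range(steps):
        err = -phase                          # setpoint is zero phase error
        integral += err * dt
        drive = kp * err + ki * integral
        phase += (drive + disturbance) * dt   # plant: phase rate follows drive
    return phase

print(run_loop(kp=50.0, ki=0.0))    # P-only: stuck at disturbance/kp = 0.02
print(run_loop(kp=50.0, ki=200.0))  # PI: the integrator nulls the error
```

A steady stream of canned air is exactly such a constant disturbance, which is why it exposed the missing integral term so neatly.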
<p>Layer 2: It turned out that they'd discarded my nice working analogue lock-in amplifier design (3 jellybean chips and some <em>R</em>s and <em>C</em>s) in favour of a digital lock-in, probably to allow them to re-use a previous design. They'd never built a lock-in before, and were trying to extract the (approximately trapezoidal) signal waveform by <em><strong>least-squares curve fitting</strong></em> to a sine wave, instead of multiplying by samples of the sine and averaging like normal people.</p>
<p>Everybody screws up their first digital lock-in(***), but I'd never seen one as bad as that. (For non signals-and-systems folks: least squares fitting works OK at high signal-to-noise ratio, but being nonlinear, it falls apart completely with noisy data. Multiply-and-average uses the linear orthogonality property of sines and cosines, and so works at any SNR. The fast Fourier transform works that way too.)</p>
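For the curious, here's the multiply-and-average idea in a few lines of NumPy (a toy demonstration with made-up numbers, not their code): the signal here is about 37 dB below the noise per sample, and the orthogonality of the references pulls it out anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 100_000.0, 1_000.0, 1_000_000   # sample rate, reference freq, samples
t = np.arange(n) / fs
amp, phase = 0.02, 0.3                       # weak signal: SNR ~ -37 dB per sample
sig = amp * np.cos(2 * np.pi * f0 * t + phase) + rng.normal(0.0, 1.0, n)

# Multiply by in-phase and quadrature references, then average:
ref = 2 * np.pi * f0 * t
i_out = 2.0 * np.mean(sig * np.cos(ref))     # in-phase component
q_out = -2.0 * np.mean(sig * np.sin(ref))    # quadrature component

print(np.hypot(i_out, q_out))    # recovered amplitude, close to 0.02
print(np.arctan2(q_out, i_out))  # recovered phase, close to 0.3
```

Because every step is linear, the I and Q outputs stay unbiased no matter how bad the SNR gets; a least-squares fit to the same data has no such guarantee.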
<p>I didn't get to the last onion layer, because the founder ran out of both money and friends. He never did pay me for my last month's work. A pity—I would have made those boxes do good measurements eventually.</p>
<p>A few lessons learned:</p>
<p>Stay out of Orange County.</p>
<p>Seriously, having a sharp technical person supervising, with a formal specification, design reviews, sign-offs on hardware and software, unit tests, and so on would have prevented this disaster. Considering all the effort that was wasted, doing it right might not have been any more expensive, and the system would have worked.</p>
<p>Checking the CE's references would have been a smart move too. They claimed that their contracts mostly forbade them to tell who their customers were, and the founder fell for that one. Oh, and when checking references, if you get good ones be sure to ask the names of the CE's people that worked on those projects. They'll always give out their 'A' team's customers, but you may get the 'B' or 'C' team. In our case, the guy who did the lock-in and encoder work was their CTO, so presumably the 'B' team was worse.</p>
<p>So there you have the sad story of a great project that initially succeeded and nevertheless failed in the end. I'd like to have another whack at it one of these times, because it could help a lot of people.</p>
<p>Phil Hobbs</p>
<p>(*) One where the business end slides around on a concave curved surface, like the blade assembly on some razors, so that the pivot point is outside the mechanism like the focus of a lens.</p>
<p>(**) A cage system using plates held together with 6-mm centreless-ground stainless steel rods, similar to the Thor Labs 30-mm cage system.</p>
<p>(***) Digital lock-ins are difficult because they have to pull weak signals out of very strong background noise. You have to be totally paranoid about things like slew artifacts, settling time, input voltage sag during sampling, and anything getting in on the reference voltage. Some A/D converters can't slew their internal nodes fast enough to prevent pattern-dependent errors, and many op amps have trouble handling the charge injection that occurs during the A/D's sampling interval. (It's called "kickout".) The frequency domain is pretty brutal, and as far as I can tell, nobody ever learns that except by way of at least one failure. (You also have to use the right algorithm, and curve fitting is not it.)</p>Touch Panel Displays: Low-Cost Optical Front End2017-09-07T16:42:01+00:002017-09-07T21:09:12.868027+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/touch-panel-displays-low-cost-optical-front-end/<p>In cooperation with Flatfrog Laboratories AB, Lund, Sweden. This one was interesting mostly due to the requirement for high and stable performance at an absolute rock-bottom cost.</p>Plasmonic Nano-Antennas for Thermal Infrared Pixels2017-09-07T16:41:41+00:002017-09-07T21:09:23.341902+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/plasmonic-nano-antennas-for-thermal-infrared-pixels/<p>This was a seedling design study for a DARPA program that never got funded. It leveraged POEMS and my antenna-coupled tunnel junction devices, adding a couple of novel wrinkles: metal-insulator-metal varactors and parametric readout using a 10 GHz pump frequency. 
Hopefully there will be a chance to revisit this, because it was potentially a pretty sweet solution.</p>Long-range IR transceiver2017-09-07T16:41:18+00:002017-09-07T21:09:35.242571+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/long-range-ir-transceiver/<p>For a large Far Eastern consumer electronics manufacturer to use in virtual reality games. A greatly improved transimpedance amplifier got them a factor of 10 in range (30 m vs. 3 m) for about the same amount of power.</p>Aircraft Carrier Flight Deck Optical Communications Link2017-09-07T16:40:41+00:002017-09-07T21:09:45.183180+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/aircraft-carrier-flight-deck-optical-communications-link/<p>Photon Budget and Optical Data Receiver<br/>This one was a somewhat similar application for the Navy.</p>Ad Hoc Optical Battlefield Network: Optical Data Receiver2017-09-07T16:40:00+00:002017-09-07T21:09:52.721144+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/ad-hoc-optical-battlefield-network-optical-data-receiver/<p>This was a collaboration with Chris Wieland of Della Enterprises on an Army Research contract.</p>Instrumentation: Nanowatt Photodetector2017-09-07T16:10:42+00:002017-09-07T21:10:02.784684+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/instrumentation-nanowatt-photodetector/<p>The standard problem with conventional nanowatt photoreceivers is that in order to get near the shot noise, you have to use feedback resistors so gigantic that you can't maintain decent bandwidth.<br/>This one has what I think is a completely novel photo-feedback architecture, i.e. rather than using a feedback resistor in the TIA, it uses two secondary photocurrents to cancel the input current. 
Putting the two secondary photodiodes in series makes the cancellation current 3 dB quieter than the shot noise, and a feedback system prevents them from fighting, as series-connected current sources normally would.</p>
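The bookkeeping behind that claim goes like this (a sketch of the noise arithmetic only, with an arbitrary example photocurrent, not the actual circuit analysis): the signal's shot noise and the quieter cancellation noise are independent, so their power spectral densities simply add.

```python
import math

q = 1.602e-19          # electron charge, C
i_sig = 1e-6           # example signal photocurrent, A (arbitrary)

s_signal = 2 * q * i_sig   # shot-noise PSD of the signal current, A^2/Hz
s_cancel = s_signal / 2    # series pair: cancellation noise 3 dB quieter

penalty_db = 10 * math.log10((s_signal + s_cancel) / s_signal)
print(round(penalty_db, 2))   # 1.76 dB above the bare shot-noise floor
```

The floor ends up at 1.5x the signal's own shot-noise power, rather than the 2x you'd get if the cancellation current were at full shot noise.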
<p><br/>This results in a noise floor asymptotically only 10 log(1.5) ≈ 1.76 dB above the shot noise of the signal photocurrent, instead of 3 dB for straight photocurrent feedback.</p>Downhole Interferometry: Cavity-Stabilized 1550-nm Laser2017-09-07T16:10:00+00:002017-09-07T21:10:12.941698+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/downhole-interferometry-cavity-stabilized-1550-nm-laser/<p>This was in collaboration with a start-up in New Mexico called Symphony Acoustics. Downhole measurements are notoriously difficult, and this one was no exception: building a laser that could achieve an Allan variance of 10<sup>-10</sup> at 10,000 seconds, and do it 5000 feet down a 2-inch cased drillhole. Due to the casing thickness, the maximum outer diameter of the instrument package was 38 mm, including its own casing and two concentric zones of thermal control.</p>
<p>The stabilization strategy was one I patented in about 1992: Send the beam through a fixed etalon; detect both the reflected and transmitted beams; form a linear combination C = T-αR for some convenient value 0 < α < 1; and servo the laser tuning to null out C, which can be done very accurately, without needing a high finesse cavity. The key observation is that by choosing α correctly, you can completely eliminate the coupling between AM and FM laser noise, so that besides excellent laser stability, you can also get outstandingly stable amplitude measurements by forming the combination A = T+αR. If you choose the right value of α, namely<br/>α = -(dT/dν) / (dR/dν), then dA/dν=0,<br/>so none of the FM noise of the laser gets turned into AM noise. (I'm not entirely certain that I was the first one to do this, but that was pretty early days for diode laser based instruments.) When combined with laser noise cancellation to get rid of the actual AM noise of the laser, this scheme lets you do shot-noise limited measurements inside a passive resonant cavity, which is a very useful trick.</p>
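Here's a numerical illustration of choosing α (a toy etalon model with a made-up loss term, not the patented design): compute α from the slopes of T and R at the operating point, and the FM-to-AM coupling of A = T + αR vanishes there.

```python
import math

def T_etalon(nu: float, coef_finesse: float = 20.0) -> float:
    """Airy transmission of an etalon vs normalized detuning (toy model)."""
    return 1.0 / (1.0 + coef_finesse * math.sin(math.pi * nu) ** 2)

def R_etalon(nu: float) -> float:
    """Reflection with a made-up loss term, so that alpha != 1 exactly."""
    T = T_etalon(nu)
    return 1.0 - T - 0.05 * T ** 2

def slope(f, x: float, h: float = 1e-6) -> float:
    return (f(x + h) - f(x - h)) / (2.0 * h)

nu0 = 0.15                                  # operating point on a fringe side
alpha = -slope(T_etalon, nu0) / slope(R_etalon, nu0)

def A_comb(nu: float) -> float:
    return T_etalon(nu) + alpha * R_etalon(nu)

# dA/dnu at the operating point is ~0, while dT/dnu alone is large:
print(slope(T_etalon, nu0), slope(A_comb, nu0))
```

The null only holds at the operating point, of course; away from it the slopes change, which is one more reason the servo holds the laser right there.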
<p>A modern telecom DFB laser doesn't current-tune very far. That sounds like a mere fact of life, but a lot of design effort has gone into making it so: DWDM channels are very closely spaced, and when you current-modulate one laser to send data, you don't want it to scribble all over the adjacent channels. That's excellent for telecoms, but inconvenient for laser stabilization, because the tuning range is too narrow. Thus this design needed a combination temperature- and current-tuning loop: only current tuning could achieve the required feedback loop bandwidth, and only temperature tuning could cover the required wavelength range. The breadboard prototype worked very well, in fact well enough to advance the state of the measurement art, but funding ran out before the actual downhole version could be completed. There were a few very interesting temperature-control concepts that came out of this work as well. I'd very much like to revisit it if I have the opportunity.</p>Nuclear Nonproliferation: 100-MHz Noise Cancelling Front End2017-09-07T16:09:38+00:002022-01-20T01:17:14.636087+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/nuclear-nonproliferation-100-mhz-noise-cancelling-front-end/<p>This was in cooperation with Mesa Photonics of Santa Fe NM. It's part of a DOE program, an advanced deployable solar occultation spectrometer for detecting volatile plumes from clandestine uranium enrichment.</p>
<p>Their scheme uses a really cool technique: solar heterodyne detection. It's a good illustration of the importance of a photon budget.</p>32-Channel IR Detector for Compressive Scanning Camera2017-09-07T16:09:22+00:002017-09-07T21:10:33.418652+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/32-channel-ir-detector-for-compressive-scanning-camera/<p>A follow-on to the single channel version. This one had to work at very much lower power, which required a new amplifier topology based on local feedback around a very low noise JFET. This was a very fruitful development, which has been used in a number of follow-on designs.</p>Compressive-Scan Camera: 1-Channel IR Detector Analog Front End2017-09-07T16:08:31+00:002017-09-07T21:10:41.613043+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/compressive-scan-camera-1-channel-ir-detector-analog-front-end/<p>In cooperation with InView Technology. Compressive scanning is a scheme for doing image sensing with a single-element detector, without suffering the N<sup>2</sup> speed penalty of raster scanning. It's a sort of combination of scanning and image compression—you use a digital micromirror device (DMD) to multiply the image by a series of 1-bit digital basis functions, measure the resulting photocurrent, and then invert the transform to produce a compressed image. That's not too useful in the visible, where image sensors are cheap commodity items, but in the UV and especially the shortwave IR (SWIR), image arrays are extremely expensive, so there's a need for compressive scan cameras.</p>
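The pattern-and-invert idea can be sketched in a few lines (a minimal 1-D toy using Hadamard patterns, not InView's actual algorithm; a real DMD realizes the ±1 patterns as pairs of 0/1 masks):

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

n = 16
image = np.random.default_rng(1).random(n)  # stand-in for a 1-D "scene"
H = hadamard(n)                             # each row is one DMD pattern
measurements = H @ image                    # one detector reading per pattern
recovered = (H.T @ measurements) / n        # invert: H @ H.T = n * I

print(np.allclose(recovered, image))
```

Keeping only the largest-magnitude measurements gives a compressed image, and each pattern puts roughly half the scene's light on the detector instead of the 1/N of raster scanning, which is where the speed advantage comes from.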
<p><br/>This front end had to reach the shot noise limit with high-capacitance SWIR photodiodes at very low photocurrents, which is a difficult combination.</p>Current Amplifier for Nanopore Biochips for DNA Sequencing2017-09-07T16:07:03+00:002017-09-07T21:10:51.646261+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/current-amplifier-for-nanopore-biochips-for-dna-sequencing/<p>Discussion on the topic here.</p>
<p>This one came from a major industrial research laboratory: near shot noise limited detection of 1 nA currents in 100 MHz bandwidth.<br/>This was one that I wasn't at all sure would work: it's pretty sporty trying to detect a few dozen electrons at 100 MHz in a built-up circuit. (A 100-MHz lowpass has a time-domain response about 5 ns wide, and 1 nA in 5 ns is 31 electrons.) Obviously to get the highest available signal voltage, the input-node capacitance has to be absolutely the minimum possible: less than 1 pF.</p>
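The arithmetic in that parenthesis is worth a one-line sanity check:

```python
Q_E = 1.602e-19             # electron charge, coulombs
charge = 1e-9 * 5e-9        # 1 nA flowing for ~5 ns (the lowpass response width)
print(round(charge / Q_E))  # -> 31 electrons
```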
<p><br/>Doing this required another novel transimpedance design, and the use of microwave transistors: 20 GHz GaAs pHEMTs and 40 GHz SiGe:C bipolars. Because performance verification of such a device is very difficult, I also designed an on-board calibrator that used the same sorts of devices to produce a 50-kHz triangular wave that was really really triangular: the corner showed less than 1 ns of curvature. When differentiated by a very small coupling capacitance, this produces a square wave current of about 10 nA at the input, which is a convenient calibration signal.</p>Instrumentation: Wideband Laser Noise Canceller2017-09-07T16:06:39+00:002017-09-07T21:11:02.103839+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/instrumentation-wideband-laser-noise-canceller/<p>> 60 dB of laser RIN cancellation out to > 10 MHz, about 100 times faster than current commercial devices</p>Spectroscopic-Detection Biochips Based on Photonic Waveguides2017-09-07T16:05:15+00:002017-09-07T21:11:12.508734+00:00Simon Hobbshttps://electrooptical.net/News/author/simon/https://electrooptical.net/News/spectroscopic-detection-biochips-based-on-photonic-waveguides/<p>Using <a href="https://electrooptical.net/#Poems">POEMS</a> to design waveguides, coupling structures, and optical/chemical interaction regions; consulting on microfabrication issues</p>