Linear–Logarithmic CMOS Pixel With Tunable Dynamic Range

Monica Vatteroni, Pietro Valdastri, Member, IEEE, Alvise Sartori, Member, IEEE, Arianna Menciassi, Member, IEEE, and Paolo Dario, Fellow, IEEE

Abstract—A CMOS pixel with linear–logarithmic response and programmable dynamic range (DR), based on a tunable transition point, has purposely been designed for endoscopic applications. A theoretical model of the pixel was developed and validated. A chip with a 100 × 100 pixel array and a 12-b digital output was fabricated in a 0.35-μm technology and was fully tested, thus demonstrating state-of-the-art performance in terms of DR and noise. Intraframe DR proved to be extendable to more than 110 dB through a logarithmic compression of the signal in the light irradiation power density (LIPD) range. The measured temporal noise (pixel noise) was less than 0.22% over the full range. The architecture presented limited fixed pattern noise (FPN) due to the scheme adopted, which allowed its correction over the full signal range: FPN was 0.83% (1.37%) in the linear (logarithmic) region. Although the test chip was designed mainly for endoscopic applications, the technology may also be applied to other fields, e.g., robotics, security, and industrial automation, whenever high DR is a crucial feature.

Index Terms—CMOS imager, endoscopy, logarithmic response, pixel.

Manuscript received June 21, 2010; accepted December 26, 2010. Date of publication February 14, 2011; date of current version March 23, 2011. This work was supported in part by the European Commission through the framework of the VECTOR FP6 European project EU/IST-2006-033970 (www.vector-project.com). The review of this paper was arranged by Editor J. R. Tower. M. Vatteroni, P. Valdastri, A. Menciassi, and P. Dario are with the BioRobotics Institute, Scuola Superiore Sant'Anna, 56025 Pisa, Italy (e-mail: m.vatteroni@sssup.it; p.valdastri@sssup.it; a.menciassi@sssup.it; p.dario@sssup.it). A. Sartori is with NeuriCam SpA, 38100 Trento, Italy (e-mail: sartori@neuricam.com). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TED.2011.2106787

I. INTRODUCTION

SINCE 1879 [1], vision systems have widely been used in biomedical applications, mainly for endoscopic inspection and visualization enhancement in surgery. Despite this early start, the first fiber endoscope was developed more than 70 years later, when Hopkins and Kapany published the use of a gastroscope based on coherent glass fibers in 1954 [2]. One additional milestone was reached in this field with the introduction of television, which allowed binocular vision from a convenient distance by several observers. In 1988, a digital vision system placed on the distal part of the instrument eliminated the need for optical fibers running through the entire shaft [1]. Nowadays, as a consequence of the wide diffusion of minimally invasive diagnostic and surgical techniques, endoscopic vision systems have dramatically strengthened their role in the surgical field [3], [4].

Minimally invasive surgery allows surgeons to perform procedures through small incisions, thereby offering several benefits to the patient, e.g., reduced trauma and faster recovery, compared to traditional open techniques.
Since its introduction, surgical endoscopy has played a major role in improving the safety, precision, and reliability of medical interventions. The most desirable features of an endoscopic vision system are high image resolution, low noise, bright illumination, and low working temperature. Saturated regions, in particular, must be prevented, because they limit and sometimes hamper image comprehension.

The disruptive progress achieved by the CMOS imager industry in the last few years, mainly driven by consumer electronics, has allowed all these requirements to be met simultaneously [5]. This is particularly true for saturation, which can be prevented by using high-dynamic-range (HDR) image sensors [6].

HDR image sensors allow a wide range of light irradiation power density (LIPD) to be mapped in the same picture. Techniques for obtaining HDR have widely been investigated and can be classified as interframe and intraframe techniques. In the former case, several images are combined to obtain a single HDR image [7]–[10], whereas in the latter case, the whole light dynamic range is mapped in a single image. With regard to the interframe approach, the multi-integration technique is currently the most commonly used approach at the industrial level, because it is based on a simple concept. In particular, this approach is easily implementable with a CMOS sensor.

Among existing intraframe techniques, the simplest approach is the logarithmic technique [11], [12]. The array is continuously read out, and the photodetector output is compressed with a dynamic metal–oxide–semiconductor field-effect transistor (MOSFET) load. This results in a logarithmic relationship between the LIPD input and the voltage output covering more than six decades of LIPD. The main advantages of this technique are its simple architecture and HDR. On the other hand, poor response at low light intensity and high residual fixed pattern noise (FPN) are the main drawbacks. An alternative and more recent technique for obtaining intraframe HDR is the time-to-saturation approach [13]. In this case, the output is obtained by combining a standard linear output and a signal related to the time required by the photodiode to saturate. The DR can be extended to more than 120 dB, depending on the function used to map the saturation time into the output voltage. A detailed reference list with a comparative theoretical study of the main HDR architectures is reported in [14].

Focusing on endoscopic applications, the main requirements for an HDR vision system are chip size and image quality. Consequently, the cited techniques could hardly be implemented in endoscopy. In fact, the multi-integration technique is based on the acquisition of the same scene, with different integration times, to obtain an HDR image from the superposition of the exposures. A static scene is not always guaranteed during endoscopic inspections because of possible movements (e.g., peristalsis, respiration, and tool motion) during image acquisition. With regard to intraframe techniques, the time-to-saturation pixel guarantees good image quality, at the cost of a large pixel pitch. Therefore, high resolution cannot be achieved unless a large silicon area is used. The logarithmic pixel is an intraframe approach that can be implemented in a small pitch; however, its response at low LIPD is poor.
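As a rough illustration of the compression performed by a purely logarithmic pixel, the short sketch below maps six decades of photocurrent onto a few hundred millivolts of output through the standard weak-inversion relation for a diode-connected MOSFET load. The slope factor, pre-exponential current, and bias voltage are generic assumed values, not parameters of any cited sensor.

```python
import numpy as np

# Illustrative sketch of logarithmic compression by a subthreshold MOSFET load:
# six decades of photocurrent collapse into a few hundred millivolts of output.
KT_Q = 0.0259   # thermal voltage at room temperature [V]
N = 1.3         # subthreshold slope factor (assumed)
I0 = 1e-16      # pre-exponential current of the load [A] (assumed)
V0 = 2.5        # load bias voltage [V] (assumed)

i_ph = np.logspace(-15, -9, 7)               # photocurrent sweep: 1 fA to 1 nA
v_out = V0 - N * KT_Q * np.log(i_ph / I0)    # logarithmic pixel response
for i, v in zip(i_ph, v_out):
    print(f"{i:.0e} A -> {v:.2f} V")
```

With these assumed values, the full six-decade sweep produces an output swing of roughly 0.45 V, which is why the logarithmic approach trades resolution for range.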
Poor response at low LIPD is a crucial drawback in a light- and temperature-controlled environment such as endoscopy. Based on these considerations, a novel image sensor with linear–logarithmic response was developed by merging the simple design and the HDR of the logarithmic technique at high LIPD with a linear response at low light intensity. This is not the first time that a linear–logarithmic pixel has been presented in the literature [15]. The main innovative feature of the proposed design consists of the integration of a hard reset structure that allows FPN subtraction in hardware: completely when the response is in the linear region and partially when the behavior is logarithmic. Moreover, the power responsivity curve can be adjusted through a tunable analog reference to dynamically achieve the desired tradeoff between HDR and high pixel resolution [16], [17].

To preliminarily evaluate the performance of this new pixel architecture, a model of the pixel response was developed and is presented in Section II. The new technique was implemented in a 100 × 100 pixel image sensor, fabricated in a standard 0.35-μm, 3.3-V CMOS technology. Details are given in Section III. Section IV reports the measured performance by illustrating the results of the electro-optical tests that were carried out and by comparing them to the proposed model. This section also describes the performance of endoscopic image acquisition through ex vivo experiments.

II. PIXEL ARCHITECTURE AND MODEL

The linear–logarithmic pixel that was integrated in the proposed chip is shown in Fig. 1. The core is the same as that of a linear pixel: the photodiode (D1), a first MOSFET that works as a reset switch (M1), and a second MOSFET (M5) in source-follower configuration, which shields the photosensitive node from the load of the readout chain. A global shutter (SH), which consists of a transistor (M4) driven by an external signal, completes the basic block. Note that, during preliminary testing, the sensitivity of the pixel was significantly affected by the switching of the M4 transistor. Therefore, it was excluded from the signal chain by leaving it always switched on, and it will not be considered in the following discussion.

Fig. 1. Schematic of the linear–logarithmic pixel.

HDR functionality is guaranteed by a series of transistors (M2 and M3), each with the gate connected to the drain in a diode configuration. Placing a second transistor in series with the first one doubles the gain in the logarithmic region, thus enhancing the signal-to-noise ratio (SNR) [18]. This part of the circuitry works as a voltage- and light-dependent active load and is responsible for the logarithmic compression. Such a load is active only when the voltage VPH at the photosensitive node (PH) drops below a threshold that can be tuned by the operator through an external reference voltage VLOG.

The operation of the pixel starts with a reset phase. During this phase, the reset transistor M1 is switched on, and the photosensitive node (PH) is pulled to the reference voltage VRES, which can be adjusted by the user. The following integration phase begins with the release of PH by switching M1 off. PH is now isolated, and its voltage (VPH) starts to decrease as charge carriers are drained by the photogenerated current. The higher the photogenerated current, the faster the discharge.
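As a rough quantitative illustration of this discharge, the sketch below estimates how long the node takes to fall from VRES to the load activation threshold, which the analysis that follows places at VLOG − 2VTH. All numerical values (VRES, VTH, CIN, and the currents) are illustrative assumptions, not measured chip parameters.

```python
# Sketch of the integration phase: the node starts at V_RES and discharges
# linearly until it reaches the point where the diode-connected load kicks in.
# All values are illustrative assumptions.
C_IN = 16e-15        # node capacitance [F]
V_RES = 2.8          # reset reference voltage [V]
V_TH = 0.5           # load transistor threshold voltage [V]

def time_to_transition(i_ph: float, v_log: float, i_dark: float = 52e-15) -> float:
    """Seconds of integration needed before V_PH falls to V_LOG - 2*V_TH,
    i.e., before the pixel leaves the linear region for the given V_LOG."""
    v_transition = v_log - 2.0 * V_TH
    delta_v = V_RES - v_transition
    return C_IN * delta_v / (i_ph + i_dark)

# A brighter pixel (larger photocurrent) reaches the transition sooner;
# raising V_LOG raises the transition point and shortens the linear excursion.
print(time_to_transition(i_ph=1e-12, v_log=2.3))   # ~23 ms with these assumptions
```

Whether this transition is reached within the chosen integration time determines which of the two operating regions described next applies.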
If the combination of light intensity and integration time is not enough to allow the pixel to reach the threshold at which the active load starts to work (i.e., VLOG − 2VTH, where VTH is the transistor threshold voltage), then the signal VPH at the photosensitive node will be proportional to the photogenerated current (iPH), increased by the dark current (idark), and to the integration time (tINT), as follows [6]:

$$V_{PH} = \frac{t_{INT}\,(i_{PH} + i_{dark})}{C_{IN}}, \qquad V_{PH} > V_{LOG} - 2V_{TH} \qquad (1)$$

where CIN is the capacitance at the photosensitive node, which is made up of the intrinsic photodiode capacitance and the parasitic one.

If VPH reaches the threshold tuned by VLOG, then the active load starts to work by draining part of the photogenerated current. The result is a logarithmic relation between iPH and VPH. This behavior is obtained thanks to the subthreshold operating condition of the M2 and M3 load transistors, guaranteed by the short circuit between the gate and the drain (VGS = VDS). Indeed, for VGS = VDS < VTH, the transistor should nominally be switched off. However, from an analog standpoint, the transistor is in weak inversion, and a small current of minority carriers (iD) flows in the MOSFET channel. The equation for iDS is [19]

$$i_{DS} = \frac{D_n\,kT}{\sqrt{2qN_A\Psi_S/\varepsilon_{Si}}}\left(\frac{W}{L}\right)\left(\frac{n_i^2}{N_A}\right)e^{\frac{q\Psi_S}{kT}}\left(1 - e^{-\frac{qV_{DS}}{kT}}\right) \qquad (2)$$

where ni and NA are, respectively, the intrinsic carrier concentration and the acceptor doping concentration, ΨS is the surface potential, εSi is the silicon dielectric constant, W and L are the MOSFET width and length, and Dn is the electron diffusivity. At room temperature, VDS > 3kT/q ≈ 75 mV, and the last term of this equation is around 1. Furthermore, VDS = VGS = nΨS, and all the constant parts of the formula can be collected in one single term as

$$i_0 = \frac{D_n\,kT}{\sqrt{2qN_A\Psi_S/\varepsilon_{Si}}}\left(\frac{W}{L}\right)\left(\frac{n_i^2}{N_A}\right) \qquad (3)$$

where Dn = (kT/q)μn, and μn is the electron mobility. Therefore, (2) can be simplified as

$$i_{DS} = i_0\,e^{\frac{qV_{DS}}{kT}}. \qquad (4)$$

This formula can be inverted to express the source voltage as logarithmically dependent on the iDS current. We have

$$V_S = V_D - \frac{kT}{q}\ln\!\left(\frac{i_{DS}}{i_0}\right) \qquad (5)$$

with VS = VPH, VD = VLOG, and iDS = iPH. Equation (5) represents the logarithmic relation between the photogenerated current and the voltage at the photosensitive node for a single-transistor load and for VPH ≤ VLOG − VTH. In the configuration presented in Fig. 1, the load transistors are M2 and M3, and the transfer function in the logarithmic region is

$$V_{PH} = V_{LOG} - 2\,\frac{kT}{q}\ln\!\left(\frac{i_{DS}}{i_0}\right), \qquad V_{PH} \le V_{LOG} - 2V_{TH}. \qquad (6)$$

To relate these results to the physical implementation presented in this paper, the electrical design rules of the ON Semiconductor (formerly AMI) 0.35-μm process [20] may be considered. This standard CMOS technology was chosen for the proposed sensor because it can partially be customized, thus enhancing the overall sensor performance. In particular, a nonstandard shield against the silicide implant and the complementary p-well diffusion was added at the array level [21]. To define a model that can practically be used, a series of approximations must be assumed. In particular, i0, which depends on temperature as T² [11], is considered constant over the whole temperature range. This assumption is acceptable for endoscopic applications, where the temperature is supposed to be constant. As a result, based on the data from the silicon foundry, the i0 current is on the order of 0.01 fA, and the logarithmic response may be calculated as a function of the photogenerated current.
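For illustration, the piecewise response described by (1) and (6) can be sketched numerically. The snippet below is a minimal sketch, not the authors' Excel model: CIN is set to the 16-fF value quoted in Section IV, the 52-fA current quoted there is taken here as the dark current, and VTH, VRES, and the exact i0 are assumptions. VPH is taken as the physical node voltage, i.e., VRES minus the discharge in (1).

```python
import numpy as np

# Minimal sketch of the piecewise pixel response in (1) and (6).
K_T_Q = 0.0259        # thermal voltage kT/q at room temperature [V]
C_IN = 16e-15         # capacitance at the photosensitive node [F]
I_0 = 1e-17           # subthreshold prefactor i0, on the order of 0.01 fA [A]
I_DARK = 52e-15       # dark current [A] (assumed from the value quoted in Section IV)
V_TH = 0.5            # load transistor threshold voltage [V] (assumed)

def pixel_signal(i_ph, t_int, v_log, v_res=2.8):
    """Voltage V_PH at the photosensitive node at the end of integration.

    Linear region (1): the node discharges by t_int*(i_ph + i_dark)/C_IN
    below the reset level V_RES.
    Logarithmic region (6): the diode-connected load sustains the photocurrent
    at V_LOG - 2*(kT/q)*ln(i_ph/i_0).
    """
    v_linear = v_res - t_int * (i_ph + I_DARK) / C_IN
    v_logarithmic = v_log - 2.0 * K_T_Q * np.log(i_ph / I_0)
    # For illustration the two branches are stitched by taking whichever level
    # is higher; with consistent device parameters this crossover occurs at the
    # transition point V_LOG - 2*V_TH discussed in the text.
    return np.maximum(v_linear, v_logarithmic)

# Sweep the photocurrent over several decades for TINT = 30 ms, VLOG = 2.3 V.
i_ph = np.logspace(-16, -10, 7)
print(pixel_signal(i_ph, t_int=30e-3, v_log=2.3))
```

Sweeping the photocurrent over several decades reproduces the qualitative shape of Fig. 2: a linear ramp up to the tunable transition point, followed by a logarithmic tail whose position shifts with VLOG.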
The model was developed by using Microsoft Excel 2007, and the results are shown in Fig. 2. As reported later in this paper, the model was validated on the developed prototype. Once experimentally verified, the model allows the theoretical extrapolation of several parameters that would otherwise be difficult to measure, e.g., the capacitance at the photosensitive node and the actual dark current.

Fig. 2. Simulated power responsivity for the linear–logarithmic pixel at different VLOG and TINT values. ADCOUT is the ADC digital output word as defined in Section III.

As shown in Fig. 2, the response in the logarithmic region maintains the same trend, regardless of the integration time. In particular, VPH is not affected by CIN [see (5)]. This leads us to conclude that the DR is not related to the full-well capacitance, as in linear pixels, but is potentially limited only by the voltage range of the readout channel and by secondary silicon effects. The CIN effect is only relevant in the linear region, where a small CIN value would increase the slope of the power responsivity, as highlighted in (1), thus resulting in an improved discrimination between different levels of input signal. To better fit the linear region, a dependency of the photodiode capacitance on the applied voltage was included in CIN [6]. This approach results in a better fit in the low-light region.

III. IMPLEMENTATION OF THE PIXEL IN A TEST CHIP

The linear–logarithmic technique presented in this paper was implemented in a monolithic active-pixel gray-level camera-on-a-chip sensor. The photosensitive matrix consists of a 100 × 100 pixel array based on an n-well/p-substrate photodiode, with a pixel pitch of 9.4 μm × 9.4 μm and a fill factor (FF) of 30% [6]. The sensor integrates the pixel array, a number of column data sampling (CDS) blocks, a data double sampling (DDS) block at the array level [10] with double sampling and preamplification capability, a 12-b analog-to-digital converter (ADC), and several other support blocks (see Fig. 3).

Fig. 3. Simplified block diagram of the camera.

When a row decoder addresses a row, every pixel of the selected line simultaneously transfers its output value to a CDS, which stores the information. Then, the pixels of the selected row are reset, and their reset values are transferred to the associated bit-line CDS. The CDS subtracts this value from the stored one and makes the result available at the output. This subtraction between two noncorrelated samples mainly allows pixel FPN reduction. Depending on the response, either linear or logarithmic, this correction is performed, respectively, on the whole signal chain or just on a part of it. In particular, when the pixel is in the reset condition, the level of VPH is such that the active load is not working. This means that M2 and M3 are not involved in the signal path during the acquisition of the reset value; thus, their FPN contribution cannot be subtracted. The sampling operation of the CDS block can be summarized as follows:

$$V_{SigCDSOut} = \left[(V_{ResPix} - V_{SigPix}) + V_{bCDS}\right]\times G_{CDS} \qquad (7)$$

where VSigCDSOut is the output of the CDS, VResPix and VSigPix are the reset and signal outputs of the pixel, respectively, VbCDS is an external reference voltage, and GCDS is the CDS gain. Each CDS block is then reset to generate a reference value for the following DDS block. The reset output of the CDS is

$$V_{ResCDSOut} = V_{bCDS}\times G_{CDS}. \qquad (8)$$
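As an illustration of the column-level sampling arithmetic in (7) and (8), the short sketch below expresses one CDS cycle in plain Python. The function name and the default values of VbCDS and GCDS are hypothetical, chosen only to make the snippet runnable.

```python
# Minimal sketch of the column-level double sampling in (7) and (8).
# All names and default values are illustrative assumptions.

def cds_sample(v_res_pix: float, v_sig_pix: float,
               v_b_cds: float = 1.0, g_cds: float = 1.0) -> tuple[float, float]:
    """Return (signal output, reset output) of one CDS block.

    (7): V_SigCDSOut = [(V_ResPix - V_SigPix) + V_bCDS] * G_CDS
    (8): V_ResCDSOut = V_bCDS * G_CDS
    Subtracting the pixel reset level from the stored signal level is what
    removes most of the pixel-level FPN.
    """
    v_sig_cds_out = ((v_res_pix - v_sig_pix) + v_b_cds) * g_cds
    v_res_cds_out = v_b_cds * g_cds
    return v_sig_cds_out, v_res_cds_out

# Example: a pixel whose signal level ended 0.3 V below its reset level.
print(cds_sample(v_res_pix=2.0, v_sig_pix=1.7))   # approximately (1.3, 1.0)
```

The DDS and ADC stages described next complete the chain according to (9) and (10).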
The following operation is the sequential selection of each CDS block by a column decoder. After the selection, the output value of each CDS is transferred to the DDS twice: one time for the signal and another time for the reset value. The DDS is a fully differential switched-capacitor block that performs a subtraction between the signal and the reset value from each CDS, the addition of a threshold voltage (VREF), and a multiplication by a gain factor (GAIN). Such a gain can be set by the user, ranging from 1 to 4, by changing the input capacitance of the switched-capacitor circuit. The resulting DDS output is differential, as described in the following relation:

$$V_{OutDDS\pm} = V_{CM} \pm GAIN \times \left[V_{REF} - (V_{ResCDSOut} - V_{SigCDSOut})\right] \qquad (9)$$

where VOutDDS± are the DDS outputs, and VCM is the common-mode signal. The DDS differential output signals are sent to a 12-b pipelined ADC [22] and digitized into 4096 levels. Therefore, each pixel is described by a 12-b word as follows:

$$ADC_{OUT} = \frac{V^{+}_{OutDDS} - V^{-}_{OutDDS}}{V_{REFP} - V_{REFN}}\times 2^{11} \qquad (10)$$

where VREFP and VREFN are the ADC reference voltages, and ADCOUT is the ADC digital output word.

IV. EXPERIMENTAL RESULTS

The image sensor presented in this paper was fabricated in the standard 0.35-μm mixed-signal CMOS technology from ON Semiconductor. The chip layout is shown in Fig. 4(a). The die size is 6.36 mm × 3.68 mm, including test structures and more than five 100 × 100 pixel arrays that are not described in this paper. Only the first 100 × 100 array, based on a standard layout design [see Fig. 4(b)], was considered and characterized.

Fig. 4. Complete (a) imager and (b) pixel layout.

Electro-optical tests were carried out on a dedicated electro-optical bench to characterize the imager performance. For this purpose, a custom system was developed (see Fig. 5) to interface the sensor with a personal computer (PC). The system consists of two boards. The first board (referred to as the "daughter board" in Fig. 5) is a custom board, specifically designed to interface the vision sensor with the second board (referred to as the "mother board" in Fig. 5) through a pair of 50-pin strips. The second board features a 100-kgate field-programmable gate array (FPGA; Xilinx Spartan). The code loaded on the FPGA enables imager control and interfacing of the imager with a PC-based user console.

Fig. 5. Customized test boards.

With regard to the electro-optical characterization, the overall integration time was set to 30 and 100 ms, and the VLOG reference ranged from 1 to 3.3 V to assess the power responsivity modulation. The reference signals of the sensor were set to have an offset value of 500 LSBs, thus preventing downward saturation of the readout channel. This condition guarantees an accurate sensitivity estimation, which measures the capability of the sensor to detect low LIPD. Note that the sensitivity can be identified as the LIPD value at which SNR = 1 for the given configuration of the sensor parameters [6]. As shown in Fig. 6(a), a DR of 110 dB can be achieved with an effective number of bits of 11.5 by setting TINT = 30 ms and VLOG = 2.3 V. For this specific case, 11.2 b resolve the linear region of the sensor, thus covering 2.25 decades of irradiance (45 dB). The rest of the dynamics, i.e., 65 dB, which is equivalent to more than three decades, is logarithmically mapped over 9.2 b, with a significant decrease in resolution.
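As a quick check of the dynamic-range bookkeeping above, under the usual convention that one decade of irradiance corresponds to 20 dB, a few lines suffice. The bit allocations (11.2 b and 9.2 b) are measured figures and are not derived here.

```python
# Worked check: optical decades map to decibels as 20 dB per decade of LIPD.
def decades_to_db(decades: float) -> float:
    return 20.0 * decades

def db_to_decades(db: float) -> float:
    return db / 20.0

linear_db = decades_to_db(2.25)      # 2.25 decades of the linear region -> 45 dB
log_db = 110.0 - linear_db           # remaining dynamics -> 65 dB
print(linear_db, log_db, db_to_decades(log_db))   # 45.0, 65.0, 3.25 decades
```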
To test the power responsivity modulation, VLOG was set at three different values, i.e., 1.8, 2.0, and 2.3 V. As represented in Fig. 6(a), an increase in VLOG results in a shift of the transition point from the linear to the logarithmic region in the power responsivity curve. Furthermore, when changing the VLOG value from 1.8 to 2.3 V, the resolution loss in the linear region is negligible if compared to the overall gain of one decade in the DR. Note that the 110-dB power response limit is due to the measurement setup, which cannot provide the sensor with a LIPD over 10³ W/m². This limit can theoretically be extended over 120 dB with a proper sensor setting, as detailed later in this paper. Furthermore, pixel sensitivity can be improved by increasing the integration time, similar to a standard linear active pixel sensor (APS). This feature was both modeled (see Fig. 2) and experimentally assessed by setting the integration time up to 100 ms, thus obtaining an improvement in sensitivity by a factor of 30.

Fig. 6. (a) Power responsivity and (b) SNR at different VLOG and TINT values over the whole LIPD range.

The noise performance of the sensor was extracted from the experimental results as a percentage of the signal range. The average FPN value was measured at different LIPD levels as the standard deviation of pixel values over an image acquired without a focusing lens on top of the chip. To exclude pixel noise (PN), the standard deviation was calculated over an image that results from the average of 100 acquisitions at constant environmental and setting conditions. The resulting value for FPN is 0.83% in the linear region and 1.37% in the logarithmic region [see Fig. 7(a)]. PN was measured over the whole LIPD range as the standard deviation over 100 acquisitions taken at constant environmental and setting conditions. To avoid the FPN contribution, each value was taken as the average pixel value over a whole image. The resulting value for PN is less than 0.22% of the whole signal range [see Fig. 7(b)].

Fig. 7. (a) FPN and (b) PN at different VLOG and TINT values over the whole LIPD range.

The measured SNR, defined as the signal over the PN (in decibels), is plotted in Fig. 6(b) over the entire DR. Note that variations in VLOG do not affect the SNR response. Furthermore, there is no appreciable decrease in the SNR curve for strong incident light, thus potentially allowing a maximum DR larger than 120 dB to be achieved.

Examples of ex vivo biological images of a porcine stomach, acquired with the described chip, are shown in Fig. 8. The same image was acquired with different VLOG levels to show the increasing number of details that can be obtained by tuning VLOG. To quantify this variation, the histogram of each image is included. The additional information content that can be recovered by increasing VLOG is highlighted with a circle. Note that no processing was applied to the images shown in Fig. 8. The residual FPN in the high-light-power region is 1.37% of the full signal range, i.e., four times lower than with a standard logarithmic pixel [23]. The main performance of the image sensor is summarized in Table I.
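The FPN and PN extraction just described can be condensed into a short numerical sketch: FPN as the spatial standard deviation of a temporally averaged frame, and PN as the temporal standard deviation of the frame mean, both expressed as a percentage of the signal range. The synthetic data, the array names, and the use of the 12-b code range (4096 levels) as the signal range are assumptions for illustration.

```python
import numpy as np

def fpn_and_pn(frames: np.ndarray, signal_range: float = 4096.0):
    """Estimate FPN and PN from a stack of frames (n_frames, rows, cols)
    taken at constant illumination and settings, as a % of the signal range.

    FPN: spatial standard deviation of the per-pixel mean over all frames
         (temporal averaging suppresses the pixel-noise contribution).
    PN:  temporal standard deviation of the per-frame mean
         (spatial averaging suppresses the FPN contribution).
    """
    mean_frame = frames.mean(axis=0)           # average of the acquisitions
    fpn = mean_frame.std() / signal_range * 100.0
    frame_means = frames.mean(axis=(1, 2))     # one value per acquisition
    pn = frame_means.std() / signal_range * 100.0
    return fpn, pn

# Example with synthetic data standing in for 100 flat-field acquisitions.
rng = np.random.default_rng(0)
stack = 2000 + rng.normal(0.0, 8.0, size=(100, 100, 100))
print(fpn_and_pn(stack))
```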
The power responsivity results were compared with the model presented in Section II. Parameters that cannot directly be measured on chip, e.g., the photodiode capacitance, were tuned around values calculated from the 0.35-μm ON Semiconductor process parameters (e.g., CIN = 16 fF and iD = 52 fA). Fig. 9 shows a good fit between the model and the measurements, both in the logarithmic and the linear regions. This fit was quantified as a percentage error for each LIPD step and is plotted in Fig. 10. The average percentage error over the LIPD range, excluding the saturated regions, is around 0.8% for all the considered sensor configurations, with a peak in the transition region.

TABLE I. MAIN CHARACTERISTICS AND PERFORMANCE OF THE IMAGE SENSOR DEVELOPED

Fig. 8. Comparison between ex vivo images of a porcine stomach with different dynamic ranges, from low (VLOG = 2.0 V) to high (VLOG = 2.70 V) threshold values.

Fig. 9. Modeled and experimental power responsivity of the linear–logarithmic pixel at different VLOG and TINT values over the whole LIPD range.

Fig. 10. Percentage error over the LIPD range in the fit of the measured results by the model for different VLOG and TINT values.

Once the reliability of the model output has been demonstrated, the DR that may be achieved by the sensor can further be discussed. As reported in Fig. 11, by setting VLOG = 2.3 V, the model predicts a LIPD saturation of 4 × 10⁶, thus achieving more than 120 dB in the DR. This result is further supported by the aforementioned SNR stability at high LIPD.

V. CONCLUSION

A novel linear–logarithmic pixel for intraframe HDR image sensors, featuring a 12-b digital output, has been presented. This pixel configuration was integrated in a 100 × 100 pixel array within a monochrome imager. A DR over 110 dB was experimentally measured, whereas an extension over 120 dB, not measurable because of measurement setup limitations, is expected on account of the modeling results and of the SNR stability at high LIPD. The measurement results show good PN (0.22% rms) and FPN (0.83%–1.37%) performance, both in the linear and the logarithmic regions. Furthermore, the DR and, accordingly, the resolution of the power responsivity curve can simply be adjusted by tuning the point of transition between the linear and the logarithmic domains through the setting of an analog reference signal (i.e., VLOG).

A behavioral model of the linear–logarithmic pixel was presented and compared with the experimental results. The average percentage error in the fit between experimental and theoretical data was 0.8%. Given such consistency, the model can be used to estimate the DR with different settings of the reference signals or to extrapolate specific device parameters from experimental data (e.g., the photodiode capacitance and the dark current), which would otherwise be impossible to quantify experimentally.

The sensor was tested by acquiring images from an ex vivo porcine stomach. Thanks to the possibility of reconfiguring the sensor by changing VLOG and, accordingly, the DR, the efficiency of an HDR response in comparison with a standard-DR one was proven. The main result consists of additional details in the HDR image compared to the standard-DR one. In endoscopic imaging, HDR is useful in the case of highly reflective regions, mainly in correspondence with wet mucosa.
In the case of the presented pixel, these regions are mapped into the logarithmically extended dynamic range, thus avoiding saturation of the scene and providing better detail to the user. To complete the characterization of the pixel, a spectral responsivity measurement will be performed as the next step to gather information about the spectral performance and the quantum efficiency.

The presented HDR pixel technology will be implemented in a second prototype with extended resolution. Filters will be added to obtain a color image stream, which is of paramount importance in the targeted biomedical application.

ACKNOWLEDGMENT

The authors would like to thank Carmela Cavallotti, Daniele Covi, and Luca Clementel for their valuable support and activity in the optical characterization phase.

REFERENCES

[1] G. Berci and M. Paz-Partlow, "Electronic imaging in endoscopy," Surg. Endosc., vol. 2, no. 4, pp. 227–233, Dec. 1988.
[2] H. Hopkins and N. S. Kapany, "A flexible fiberscope using static scanning," Nature, vol. 173, no. 4392, pp. 39–41, Jan. 1954.
[3] AltaSens—Medical Imaging. [Online]. Available: http://www.altasens.com
[4] OmniVision—Medical Imaging. [Online]. Available: http://www.ovt.com/applications/application.php?id=5
[5] M. Bigas, E. Cabruja, J. Forest, and J. Salvi, "Review of CMOS image sensors," Microelectron. J., vol. 37, no. 5, pp. 433–451, May 2006.
[6] A. Theuwissen, "Digital imaging: Image capturing, image sensors, technologies and applications," CEI-Europe, 2004.
[7] D. Yang, A. El Gamal, B. Fowler, and H. Tian, "A 640 × 512 CMOS image sensor with ultrawide dynamic range floating-point pixel-level ADC," IEEE J. Solid-State Circuits, vol. 34, no. 12, pp. 1821–1834, Dec. 1999.
[8] T. Lulé, B. Schneider, and M. Böhm, "Design and fabrication of a high-dynamic-range image sensor in TFA technology," IEEE J. Solid-State Circuits, vol. 34, no. 5, pp. 704–711, May 1999.
[9] B. J. Hosticka, W. Brockherde, A. Bussmann, T. Heimann, R. Jeremias, A. Kemna, C. Nitta, and O. Schrey, "CMOS imaging for automotive applications," IEEE Trans. Electron Devices, vol. 50, no. 1, pp. 173–183, Jan. 2003.
[10] J. B. Chun, H. Jung, and C. M. Kyung, "Dynamic-range widening in a CMOS image sensor through exposure control over a dual-photodiode pixel," IEEE Trans. Electron Devices, vol. 56, no. 12, pp. 3000–3008, Dec. 2009.
[11] M. Loose, K. Meier, and J. Schemmel, "A self-calibrating single-chip CMOS camera with logarithmic response," IEEE J. Solid-State Circuits, vol. 36, no. 4, pp. 586–596, Apr. 2001.
[12] S. Kavadias, B. Dierickx, D. Scheffer, A. Alaerts, D. Uwaerts, and J. Bogaerts, "A logarithmic-response CMOS image sensor with on-chip calibration," IEEE J. Solid-State Circuits, vol. 35, no. 8, pp. 1146–1152, Aug. 2000.
[13] D. Stoppa, M. Vatteroni, A. Baschirotto, A. Sartori, and A. Simoni, "A 120-dB dynamic range CMOS image sensor with programmable power responsivity," IEEE J. Solid-State Circuits, vol. 42, no. 7, pp. 1555–1563, Jul. 2007.
[14] S. Kavusi and A. El Gamal, "Quantitative study of high-dynamic-range image sensor architectures," Proc. SPIE, vol. 5301, pp. 264–275, 2004.
[15] J. Hynecek, E. C. Fox, and D. R. Dykaar, "Sensor pixel with linear and logarithmic response," U.S. Patent 6 323 479, Nov. 27, 2001.
[16] A. Sartori and M. Vatteroni, "Photo-sensitive element for electro-optical sensors operating under various illumination conditions," U.S. Patent 7 119 320, Oct. 10, 2006.
[17] A. Sartori and M.
Vatteroni, "Photo-sensitive element for electro-optical sensors," European Patent EP1533995, Feb. 13, 2008.
[18] G. Tecchiolli and A. Sartori, "Photo-sensitive element for electro-optical sensors," U.S. Patent 6 891 144, May 10, 2005.
[19] M. Loose, "A self-calibrating CMOS image sensor with logarithmic response," Ph.D. dissertation, Institut für Hochenergiephysik, Universität Heidelberg, Heidelberg, Germany, 1999.
[20] Design Rule Manual C035M-A, ON Semiconductor, Phoenix, AZ, 2002, ID DS 13337, Rev. 03.
[21] H. S. Wong, "Technology and device scaling considerations for CMOS imagers," IEEE Trans. Electron Devices, vol. 43, no. 12, pp. 2131–2142, Dec. 1996.
[22] S. Hamami, L. Fleshel, and O. Yadid-Pecht, "CMOS image sensor employing 3.3-V 12-bit 6.3-MS/s pipelined ADC," Sens. Actuators A, Phys., vol. 135, no. 1, pp. 119–125, Mar. 2007.
[23] M. Vatteroni, D. Covi, D. Stoppa, B. Crespi, and A. Sartori, "High-dynamic-range CMOS image sensors in biomedical applications," in Proc. IEEE EMBS Conf., Lyon, France, 2007, pp. 2819–2822.

Monica Vatteroni received the M.S. degree in electrical engineering from the University of Pisa, Pisa, Italy, in 2001 and the Ph.D. degree in physics from the University of Trento, Trento, Italy, in 2008. From 2002 to 2008, she was with NeuriCam, Trento, as a Pixel Engineer and an Analog Designer, where, in 2005, she became responsible for CMOS image sensor development. She is currently a Postdoctoral Fellow with the Scuola Superiore Sant'Anna, Pisa, where she is in charge of the research and development of image sensors and vision systems for biomedical applications. She is the author or a coauthor of several conference proceedings and journal publications and is the holder of three patents. Her research interests include CMOS image sensors, low-noise analog electronics, high-dynamic-range pixels, and endoscopic vision systems.

Pietro Valdastri (M'05) received the degree (with honors) in electronic engineering from the University of Pisa, Pisa, Italy, in February 2002 and the Ph.D. degree in bioengineering from the Scuola Superiore Sant'Anna, Pisa, in 2006. His Ph.D. dissertation was on multiaxial force sensing in minimally invasive robotic surgery. He is currently an Assistant Professor with the CRIM Lab, Scuola Superiore Sant'Anna. His research interests include implantable robotic systems and active capsular endoscopy. He is working on several European projects for the development of minimally invasive and wireless biomedical devices.

Alvise Sartori (M'91) received the M.A. degree in physics from the University of Oxford, Oxford, U.K., in 1978 and the Ph.D. degree in geophysics from Imperial College, London, U.K., in 1983. He then joined the Central Research Laboratory of Olivetti, where he carried out research on the modeling of fluid-dynamic systems and the design of digital CMOS integrated circuits. In 1990, he joined the Center for Information Technology (IRST) research institute, Bruno Kessler Foundation, Trento, Italy, where he was in charge of the VLSI Design Laboratory. Since 1998, he has been the President and Chief Executive Officer (CEO) of NeuriCam SpA, Trento, a company that he cofounded, which is active in the fabless production of chips and systems for computer vision.

Arianna Menciassi (M'00) received the degree (with honors) in physics from the University of Pisa, Pisa, Italy, in 1995 and the Ph.D.
degree in bioengineering, with a research program on the micromanipulation of mechanical and biological micro-objects, from the Scuola Superiore Sant'Anna, Pisa, in 1999. Her Ph.D. dissertation was on microfabricated grippers for the micromanipulation of biological and mechanical objects. She is currently a Professor of biomedical robotics with the Scuola Superiore Sant'Anna. Her main research interests include biomedical microrobotics and nanorobotics, microfabrication technologies, micromechatronics, and microsystem technologies. She is working on several European and international projects for the development of microrobotic and nanorobotic systems for medical applications.

Paolo Dario (F'02) received the Dr.Eng. degree in mechanical engineering from the University of Pisa, Pisa, Italy, in 1977. He established and teaches the mechatronics course at the School of Engineering, University of Pisa. He has been a Visiting Professor with the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, and Waseda University, Tokyo, Japan. He is currently a Professor of biomedical robotics with the Scuola Superiore Sant'Anna, Pisa. He is also the Director of the CRIM Lab, Scuola Superiore Sant'Anna, where he supervises a team of about 70 researchers and Ph.D. students. His main research interests include medical robotics, mechatronics, and microengineering, in particular, sensors and actuators for the aforementioned applications. He is the Coordinator of several national and European projects, the Editor of two books on robotics, and the author of more than 200 journal papers.

Prof. Dario is a Member of the Board of the International Foundation of Robotics Research. He is an Associate Editor of the IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, a Member of the Steering Committee of the JOURNAL OF MICROELECTROMECHANICAL SYSTEMS, and a Guest Editor of the Special Issue on Medical Robotics of the IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION. He is the President of the IEEE Robotics and Automation Society and a Cochair of the Technical Committee on Medical Robotics of the IEEE Robotics and Automation Society.