Sensors and Actuators A 172 (2011) 301–307. doi:10.1016/j.sna.2011.01.010

An FPGA-based versatile development system for endoscopic capsule design optimization

C. Cavallotti a,∗, P. Merlino b, M. Vatteroni a, P. Valdastri a, A. Abramo c, A. Menciassi a, P. Dario a

a The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa 56100, Italy
b PTLab, Agemont S.p.A., Amaro 33020, Italy
c DIEGM, Università di Udine, Udine 33100, Italy

∗ Corresponding author. E-mail address: c.cavallotti@sssup.it (C. Cavallotti).

Article history: Received 30 September 2010. Received in revised form 11 January 2011. Accepted 12 January 2011. Available online 31 January 2011.

Keywords: FPGA; Wireless Capsule Endoscopy; Sensor

Abstract

This work presents a development system, based on a Field Programmable Gate Array (FPGA), specifically designed for testing the complete electronics to be integrated in an endoscopic capsule: a camera, an image compression engine, a high-speed telemetry system, illumination and inertial sensors. Thanks to its high flexibility, several features were tested and evaluated, making it possible to identify the optimal configuration, in terms of power consumption, performance and size, to fit in a capsule. As a final result, an average frame rate of 19 frames per second (fps) over a 1.5 Mbit/s transmission channel was selected as the best choice for the development of a miniaturized endoscopic capsule prototype.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

Wireless Capsule Endoscopy (WCE) is an emerging technology that is having a major impact on the practice of endoscopy. A typical endoscopic capsule is equipped with an imaging sensor, an illumination system, an image processor, a radio-frequency transmitter and a power source that supplies energy to the whole system [1]. In the diagnosis of small-bowel pathologies, WCE appears superior to other painless imaging modalities, such as X-ray, computerized tomographic enterography and magnetic resonance enteroclysis, because it provides direct vision of gastrointestinal (GI) tracts that are otherwise difficult to reach without surgery [2]. This technology has helped doctors diagnose pathologies such as obscure GI bleeding, small-bowel tumours, Crohn's disease and celiac disease [3]. Moreover, WCE reduces the invasiveness and pain of traditional procedures, making it more acceptable to patients. Since its commercial introduction in 2002, several capsules have become available on the market [4]. Despite several enhancements [5], the main limitations still concern core parts such as the vision system.

The commercially available endoscopic capsule transmits images at a resolution of 256 × 256 8-bit pixels with a maximum frame rate of 7 fps. This frame rate is not sufficient for real-time video streaming, which is highly desirable for a correct diagnosis. These limitations must be overcome while taking into account the limited power supply and the maximum dimensions (11 mm in diameter × 31 mm in length) suitable for a swallowable device. Our aim is to develop a WCE device with real-time vision, so a frame rate of at least 15 fps must be guaranteed in order to avoid flickering images [6]. Moreover, high image quality in terms of feature perception, noise and sufficient illumination has to be assured to achieve a correct diagnosis.
As a preliminary step, we developed a versatile development system, based on an FPGA device, for testing different configurations of the sub-modules that are core parts of the capsule: a camera and an illumination system, but also a high-data-rate transmitter and a compression engine, which are crucial to reach the desired frame rate. The main feature of the proposed system is its high flexibility, which makes it possible to investigate the whole vision system chain and to highlight the critical issues. The FPGA core lets the development system adapt to different configurations, which can be tested without any physical hardware change. This system can be used as a case study for assessing the optimum configuration in terms of performance, power consumption and overall dimensions, and for solving the critical aspects before starting the design of the miniaturized version suitable for WCE applications.

2. Development system architecture

The system is composed of three units: a dedicated vision board, a main control board and a third board for debug purposes (Figs. 1 and 2). In the following sections these boards are described in detail.

Fig. 1. Development system with optics and illumination system.

2.1. Vision board

The core of the vision board is a custom Complementary Metal-Oxide Semiconductor (CMOS) image sensor, called Vector2. The chip was produced in the UMC 0.18 µm CIS (CMOS Image Sensor) technology and includes a Quarter VGA (QVGA) 320 × 240 pixel array with a Bayer Color Filter Array (CFA) on top, the complete readout channel, a 10-bit Analog-to-Digital Converter (ADC), a series of Digital-to-Analog Converters (DAC) for internal references, and digital blocks for chip control [7]. An I2C-like input is used for setting and control, while a serial Low-Voltage Differential Signaling (LVDS) output interface transmits the data. A rolling-shutter read-out is implemented in order to maximize the sensitivity of the sensor. Its high sensitivity, low power consumption and simple full-chip control make it suitable for WCE applications. Taking into account the physical dimensions of the Vector2 sensing area (1.408 mm (h) × 1.056 mm (v)) and the depth of focus from 1 mm to 10 mm dictated by the final application, a commercial positive focal lens (NT45-589, Edmund Optics, New Jersey, USA) was chosen, with a focal length of 1 mm and a diameter of 1 mm, thus achieving a field of view of 82° (h) × 61° (v). A plastic, non-reflective holder was also designed in order to fix and align the optical module in front of the chip.

The vision board allows different types of illumination systems to be connected. Four white Light-Emitting Diodes (LED) were arranged on a round-shaped Printed Circuit Board (PCB). The design of this illumination system was carried out considering a trade-off between power consumption, size and the amount of light necessary for diagnostic purposes. Taking these features into account, high-efficiency LEDs (Nesw007AT, Nichia, Tokushima, Japan) were chosen, with dimensions of 1.2 mm (h) × 2 mm (w) × 1.3 mm (t), a light intensity of 1000 mcd and a power consumption of 15 mA@3.3 V [8].
The white LED board can be replaced with a color LED board with no major hardware design changes, in order to obtain white light by color light combination [9] or to enable spectroscopic imaging, such as autofluorescence imaging [10]. In these cases, the color LEDs are controlled by independent driving circuits implemented on the FPGA, enabling precise control of the amount of light provided to the scene, as will be explained in more detail in Section 3.1.

2.2. Control board and FPGA architecture

The control board is based on an FPGA that implements the main functionalities of the whole system. FPGAs were introduced as an alternative to custom digital Integrated Circuits (ICs) for implementing an entire system on one chip while providing the user with the flexibility of reprogrammability [11]. Thanks to their high flexibility and low cost, FPGAs therefore represent the best choice for testing different solutions and selecting the optimal one for our application. A detailed benchmark analysis was carried out to choose an FPGA suitable not only for testing purposes but also for integration in a future miniaturized version of the system, which will be fitted in an endoscopic pill. As a consequence, parameters such as power consumption, physical size and overall gate count were taken into account. A compact device from SiliconBlue (iCE65L08) was chosen, measuring 4.79 mm (h) × 4.37 mm (v), with a very low power consumption (12 mA@32 MHz). These features make the iCE65L08 suitable to fit in an endoscopic pill.

Fig. 2. Top-view block diagram of the three boards. The blocks below the dashed line will be integrated in the wireless endoscopic capsule.

In order to have real-time video streaming, a frame rate of at least 15 fps is needed. However, increasing the amount of data causes a huge increase in the power consumption of the RF transmission [12], which is the main constraint in WCE. Hence, image compression is necessary to limit the transmission workload and save power.

A compression engine well suited for WCE must require low power, few logic resources and a limited amount of memory. Current JPEG compression chips require a considerable amount of hardware resources, resulting in a high power consumption (typically more than 100 mW), which is not acceptable in this application [13]. Among simple dedicated algorithms, we chose the low-complexity compression engine developed in [14], because its performance is comparable with JPEG2000 at a much lower complexity, allowing its implementation on the chosen FPGA.

The FPGA is also used to distribute the clock to the whole system. The image sensor is driven by an 8 MHz@1.8 V clock and the FPGA internal logic by a 16 MHz@1.8 V clock, both derived from an external 32 MHz@3.3 V oscillator at the FPGA input (a minimal sketch of this clock division is given at the end of this section).

As shown in Fig. 2, several logic blocks were implemented on the FPGA in order to carry out the basic tasks, such as the Vector2 configuration, image acquisition and illumination control. Moreover, a simple PC-based software tool allows the user to configure the FPGA and the vision chip and to display the acquired images on screen through the USB connection.
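As an illustration of the clock distribution described above, the following minimal VHDL sketch derives the 16 MHz logic clock and the 8 MHz sensor clock from the external 32 MHz oscillator by binary division. Entity and port names are illustrative assumptions, not those of the actual design, which may also rely on dedicated clock resources of the iCE65L08.

  library ieee;
  use ieee.std_logic_1164.all;

  entity clock_divider is
    port (
      clk_32m : in  std_logic;  -- external 32 MHz oscillator
      rst     : in  std_logic;  -- synchronous reset, active high
      clk_16m : out std_logic;  -- FPGA internal logic clock
      clk_8m  : out std_logic   -- Vector2 image sensor clock
    );
  end entity clock_divider;

  architecture rtl of clock_divider is
    signal div2, div4 : std_logic := '0';
  begin
    process (clk_32m)
    begin
      if rising_edge(clk_32m) then
        if rst = '1' then
          div2 <= '0';
          div4 <= '0';
        else
          div2 <= not div2;      -- toggles every 32 MHz cycle: 16 MHz
          if div2 = '1' then
            div4 <= not div4;    -- toggles every second cycle: 8 MHz
          end if;
        end if;
      end if;
    end process;

    clk_16m <= div2;
    clk_8m  <= div4;
  end architecture rtl;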
The Vector2 configuration task is performed by the USB Interface, the Instruction Control and I2C Master blocks. The configuration data, sent by the user through the developed software, are received by the USB Interface block, which interconnects the FPGA with the external Cypress FX2 USB controller. The Instruction Control block decodes the instructions and sends the configuration data to the I2C Master block. Finally, the I2C Master configures the Vector2 chip through the I2C bus. The Instruction Control block also sends configuration data to the LED Driver in order to control the LEDs and the amount of light provided to the scene, as explained in Section 3.1. After the configuration phase, the Vector2 Receiver decodes the data acquired by the vision chip, converting the LVDS signals to a 10-bit parallel format. The acquired frames are then stored in the external SRAM chip, which is used as a frame buffer. The stored frames can be read from the SRAM by the Memory Controller block and sent to the PC through the USB Interface block and the external USB controller.

The logic blocks implemented in this configuration are written in the VHSIC Hardware Description Language (VHDL); they use 31% of the total FPGA (2400 logic cells)(1) and can operate at a maximum frequency of about 41 MHz. The power consumption of the developed system is less than 360 mW, split as follows: 40 mW for the Vector2 chip, about 10 mW for the FPGA and 310 mW for the debug blocks, which will not be present in the final miniaturized prototype.

(1) We use logic cells as a measure of FPGA resource occupation. The reported occupation refers to the basic configuration, without the compressor, the brightness control block or the wireless transmitter block.

2.3. Debug board

The system is also equipped with a debug board, which increases its flexibility. The board carries the Cypress FX2 USB controller, which provides a high-speed connection and full configurability through its integrated 8051 microcontroller. The debug board also carries the Cypress CY7C1339G SRAM chip, used as a frame buffer for the acquisition of images from the sensor chip and to store additional data for image processing. Finally, several connectors are used to monitor each pin of the FPGA. The purpose of this board is to provide a real-time debug platform for the whole demo system. A PC-based software tool is used to set up the registers of the FPGA and of the Vector2 chip and to monitor the status of the system through the USB connection. The acquired images are stored in the SRAM, sent to the PC and shown on the screen. Finally, the USB connection is also used for the control of the illumination and the LED drivers.

3. Tests and sub-module integration

Experiments were carried out to test each sub-module to be integrated in the final pill, first separately and finally all together.

3.1. Image acquisition and brightness control

At first, images of ex vivo animal tissue were acquired using the vision board, with the optics and white illumination, in order to define the imager and illumination control settings necessary to achieve an image quality suitable for diagnostic purposes (Fig. 3(a) and (b)).

Fig. 3. Acquired images from different gastrointestinal tracts, during ex vivo tests.

A simple LED driver was implemented in the FPGA, able to set the amount of light by driving the LEDs with a Pulse Width Modulation (PWM) technique. The LED driver switches the illumination on and off during the integration time of the optical sensor, thus modulating the average current provided to the LEDs. The length of the current pulses provided to the illumination system and their number are set by a few internal registers, which can be modified in real time through the USB connection. The illumination is switched on only when the optical sensor is in its integration phase, in order to avoid flickering effects. In the case of RGB LEDs, three drivers, controlled by three different groups of registers, are implemented on the FPGA in order to drive each group of LEDs independently.
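The gating of the PWM pulses by the sensor integration window can be sketched in VHDL as follows. Port and register names are hypothetical; the duty register stands for the internal registers written over USB, and the integration window is assumed to be provided by the sensor controller.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity led_pwm_driver is
    generic (WIDTH : natural := 8);
    port (
      clk         : in  std_logic;
      rst         : in  std_logic;
      integrating : in  std_logic;                   -- high during the sensor integration phase
      duty        : in  unsigned(WIDTH-1 downto 0);  -- pulse-width register, set over USB
      led_on      : out std_logic
    );
  end entity led_pwm_driver;

  architecture rtl of led_pwm_driver is
    signal cnt : unsigned(WIDTH-1 downto 0) := (others => '0');
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if rst = '1' then
          cnt <= (others => '0');
        else
          cnt <= cnt + 1;  -- free-running PWM counter, wraps around naturally
        end if;
      end if;
    end process;

    -- The LEDs can emit only while the sensor integrates, which avoids
    -- flickering; the duty register sets the average LED current.
    led_on <= '1' when (integrating = '1') and (cnt < duty) else '0';
  end architecture rtl;

For an RGB board, three instances of such an entity with independent duty registers would drive the three LED groups, as described above.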
Since in a real application such as WCE it is not desirable to set the proper amount of light manually, we also implemented an automatic brightness control system. In modern vision systems, the brightness of the acquired images depends on several factors; among these, the most important are the lens, the sensitivity and integration time of the vision sensor, and the illumination. In our prototype the lens is chosen based on the field of view and size requirements, while the integration time of the sensor is fixed in order to achieve the desired frame rate. As a consequence, we can set the brightness of the images by controlling the amount of light provided to the scene. This is equivalent to controlling the exposure time in standard digital cameras.

Exposure control algorithms typically divide the acquired image into several blocks and compute the average luminance signal in each block. The block luminance values are then combined with different weights in order to detect backlit or frontlit scenes [15]. Since in our application only the frontlit case can occur, the LEDs being the only light source, we decided to compute the average luminance signal of a single 128 × 128 pixel block in the centre of the image (Fig. 4).

Fig. 4. Image blocks used for brightness level estimation, and brightness control architecture.

Moreover, we cannot compute the luminance values of the pixels, because of the Bayer CFA mounted on the Vector2 chip and the lack of a demosaicing block. Consequently, we decided to estimate the brightness based only on the green pixel values, because the green component contributes most to the luminance of an image [16] and in a Bayer filter green pixels are twice as numerous as red or blue ones. The brightness control block reads the pixel values recovered by the receiver and drives the LED driver block according to the estimated brightness level. Hence, the LED intensity is controlled so as to maintain the brightness within a defined interval. The LEDs are driven by the defined sequence of current pulses only during the sensor integration time, in order to minimize the power consumption. This strategy not only allows accurate control of the amount of light provided to the scene, but also simulates a global shutter: the entire sensor starts gathering light when the LEDs are turned on, while the contents of the sensor are read out when they are turned off, thus minimizing image artefacts [17].

The FPGA implementation of the proposed brightness control block uses 320 logic cells, while its impact on the maximum clock frequency is minimal.
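The estimator can be sketched in VHDL as follows, under stated assumptions: the receiver is assumed to flag green pixels and the central 128 × 128 window, the target brightness interval and the one-step-per-frame duty update are arbitrary illustrative choices, and all names are hypothetical.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity brightness_ctrl is
    port (
      clk       : in  std_logic;
      frame_end : in  std_logic;             -- pulses once per frame
      pix_valid : in  std_logic;
      pix_green : in  std_logic;             -- high for green CFA positions
      in_window : in  std_logic;             -- high inside the central 128 x 128 block
      pix_data  : in  unsigned(9 downto 0);  -- 10-bit pixel value
      duty      : out unsigned(7 downto 0)   -- to the LED PWM driver
    );
  end entity brightness_ctrl;

  architecture rtl of brightness_ctrl is
    -- The window holds 128*128/2 = 8192 green pixels, so the sum fits
    -- in 23 bits (8192 * 1023 < 2^23) and the mean is a 13-bit shift.
    signal acc      : unsigned(22 downto 0) := (others => '0');
    signal duty_reg : unsigned(7 downto 0)  := x"80";
    constant TARGET_LO : unsigned(9 downto 0) := to_unsigned(400, 10);
    constant TARGET_HI : unsigned(9 downto 0) := to_unsigned(600, 10);
  begin
    process (clk)
      variable mean : unsigned(9 downto 0);
    begin
      if rising_edge(clk) then
        if pix_valid = '1' and pix_green = '1' and in_window = '1' then
          acc <= acc + pix_data;             -- accumulate green pixels only
        end if;
        if frame_end = '1' then
          mean := acc(22 downto 13);         -- divide by 8192 with a shift
          if mean < TARGET_LO and duty_reg /= x"FF" then
            duty_reg <= duty_reg + 1;        -- too dark: increase LED duty
          elsif mean > TARGET_HI and duty_reg /= x"00" then
            duty_reg <= duty_reg - 1;        -- too bright: decrease LED duty
          end if;
          acc <= (others => '0');            -- restart for the next frame
        end if;
      end if;
    end process;

    duty <= duty_reg;
  end architecture rtl;

With a one-step update per frame, such a loop converges slowly but cannot oscillate between distant duty values; the actual block may use a different update law.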
3.2. Column pattern noise correction

CMOS imagers often suffer from Fixed Pattern Noise (FPN) [18]. FPN is a non-temporal, spatial noise caused by the non-uniformity of the transistor characteristics within the pixels and the column amplifiers, resulting from fabrication process tolerances. Pixel FPN is usually removed at the pixel level by hardware subtraction, while column FPN can be eliminated by subtracting a reference dark image from the acquired image. This simple method requires the dark image to be acquired and stored in the FPGA or in the external SRAM. Since a full frame cannot be stored inside the FPGA for lack of memory, and since in the real application an external SRAM is not desirable, we developed and tested an alternative version of the algorithm that reduces the memory requirements. Our idea is to compute the mean values of the even and odd rows of a dark image and to recursively subtract these from the acquired images. In this way, the memory requirements of the architecture are reduced to two rows only (2 × 320 × 10 bits). We compute two different mean values, for even and odd rows, because the CFA mounted on the imager uses a pattern of 2 × 2 pixels.

At first, the illumination is switched off and the FPGA acquires a dark image. Each pair of rows is accumulated in a two-row memory inside the FPGA and, at the end of the acquisition, the average values are computed. The LEDs are then switched on and, when the next image is received by the FPGA, the reference dark rows are subtracted from each acquired pair of rows. A minimal sketch of this correction is given at the end of this section.

Fig. 5(a) shows an image with the FPN, while Fig. 5(b) shows the result of the correction strategy; the resulting image is better in terms of perceived resolution.

Fig. 5. Original image and denoised image with reconstructed dark image.

In order to evaluate the effectiveness of our algorithm, we calculated the standard deviation of the original image and of the processed one. Since the fixed pattern noise is a short-range noise, we acquired a white image under uniform illumination, in order to exclude the contributions of image contrast and dark fixed pattern noise. We observed that the standard deviation of the image after processing is 20% lower than that of the original one.
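Under simplifying assumptions, the correction can be sketched in VHDL as follows. Port names are hypothetical, accumulator reset and frame sequencing are omitted, and the division by the 120 accumulated rows per parity is folded into the readout path (written as an integer divide for clarity), whereas the implemented block computes the reference means once at the end of the dark acquisition.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity fpn_correct is
    port (
      clk       : in  std_logic;
      pix_valid : in  std_logic;
      dark_mode : in  std_logic;             -- '1' while the dark frame is acquired
      col       : in  unsigned(8 downto 0);  -- column index, 0 to 319
      row_odd   : in  std_logic;             -- parity of the current row
      pix_in    : in  unsigned(9 downto 0);
      pix_out   : out unsigned(9 downto 0)
    );
  end entity fpn_correct;

  architecture rtl of fpn_correct is
    -- Two line buffers of per-column accumulators (the "two-row memory"):
    -- 120 rows * 1023 max < 2^17, so 17 bits per entry are sufficient.
    type line_t is array (0 to 319) of unsigned(16 downto 0);
    signal acc_even, acc_odd : line_t := (others => (others => '0'));
  begin
    process (clk)
      variable a    : unsigned(16 downto 0);
      variable mean : unsigned(16 downto 0);
      variable c    : natural range 0 to 319;
    begin
      if rising_edge(clk) then
        if pix_valid = '1' then
          c := to_integer(col);
          if dark_mode = '1' then
            -- Dark frame: accumulate each column, separately per row parity.
            if row_odd = '1' then
              acc_odd(c) <= acc_odd(c) + pix_in;
            else
              acc_even(c) <= acc_even(c) + pix_in;
            end if;
          else
            -- Normal frames: subtract the per-column dark mean, clamping at zero.
            if row_odd = '1' then
              a := acc_odd(c);
            else
              a := acc_even(c);
            end if;
            mean := a / 120;                 -- mean over the 120 rows of this parity
            if pix_in > mean(9 downto 0) then
              pix_out <= pix_in - mean(9 downto 0);
            else
              pix_out <= (others => '0');
            end if;
          end if;
        end if;
      end if;
    end process;
  end architecture rtl;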
3.3. Compressor implementation

In order to fulfil the frame rate requirements, an image compressor was implemented on the FPGA. Several compressors were tested [13,19], taking into account the power and size limitations that are the main constraints of WCE. For these reasons, a low-power, low-complexity lossy compressor specifically developed for capsule endoscopy was chosen [14]. The implemented compressor is based on an integer version of the Discrete Cosine Transform (DCT) and performs four operations sequentially: color transformation, image transformation, coefficient quantization and entropy coding. This configuration consumes about 77% of the FPGA resources and 25 block RAMs, and can work at frequencies of up to 39 MHz.

The results of the implementation of the chosen compressor can be seen in Fig. 6(a) and (b). The first picture shows an image acquired with an integration time of 50 ms and the LEDs switched on for 25 ms, while the second one shows the same image after the compression stage, with a ratio of about 8. As can be seen, the compressor introduces some artifacts due to the lossy nature of the compression algorithm, but the quality of the image is sufficient for diagnostic purposes. As a final remark, it should be noted that the compression ratio can be set through a proper choice of the compressor parameters, allowing the amount of data to be reduced by a factor between 8 and 20 [14].

Fig. 6. Acquired images before and after compression.

4. Conclusions and future works

An FPGA-based development system was designed in order to test a complete wireless image acquisition chain suitable for WCE. The final goal is to achieve a smooth real-time video stream with at least 15 fps and low power consumption.

The main challenge faced in integrating the system for real-time diagnosis was the trade-off between image compression and wireless video stream transmission on one side and the original data payload on the other. After an analysis of several wireless technologies [20], we decided to implement the transmitter presented in [21]. The chosen solution is based on near-field technology and offers the best performance in terms of data rate and the best efficiency in terms of power consumption vs. data rate [20], enabling a transmission of 1.5 Mbit/s with a power consumption of 2 mW@1.8 V. Since the Vector2 imager resolution is QVGA and each pixel is coded with 10 bits, the original amount of data for each frame is 320 × 240 × 10 bits = 768 kbit. Considering an average compression ratio of 10, each compressed frame occupies about 76.8 kbit, so the 1.5 Mbit/s channel sustains a minimum frame rate of 1500/76.8 ≈ 19.53 fps, with an overall power consumption of 90 mA@3.3 V and 26 mA@1.8 V.

Considering the results obtained with the development system, a miniaturized version was designed and is now under test (Fig. 7). The prototype consists of two boards connected by a permanent flexible interconnection, with a diameter of 9.9 mm in order to fit in a pill case with an inner diameter of 10 mm. Moreover, three additional flexible circuit parts allow the connection of other boards carrying components required by the system, such as a battery or wireless power supply [22] and inertial sensors [23].

Fig. 7. Miniaturized prototype.

Acknowledgements

The work described in this paper was funded by the European Commission in the framework of the VECTOR FP6 European project EU/IST-2006-033970.

References

[1] P. Swain, The future of wireless capsule endoscopy, World Journal of Gastroenterology 14 (26) (2008) 4142–4145.
[2] Z. Fireman, Y. Kopelman, New frontiers in capsule endoscopy, Journal of Gastroenterology and Hepatology 22 (8) (2007) 1174–1177.
[3] S.D. Ladas, K. Triantafyllou, C. Spada, M.E. Riccioni, J.F. Rey, Y. Niv, M. Delvaux, R. de Franchis, G. Costamagna, the ESGE Clinical Guidelines Committee, European Society of Gastrointestinal Endoscopy (ESGE): recommendation (2009) on clinical use of video capsule endoscopy to investigate small-bowel, esophageal and colonic diseases, Endoscopy 42 (2010) 220–227.
[4] J.L. Toennies, G. Tortora, M. Simi, P. Valdastri, R.J. Webster, Swallowable medical devices for diagnosis and surgery: the state of the art, Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 224 (7) (2010) 1397–1414.
[5] A. Moglia, A. Menciassi, M.O. Schurr, P. Dario, Wireless capsule endoscopy: from diagnostic devices to multipurpose robotic systems, Biomedical Microdevices 9 (2) (2007) 235–243.
[6] J.Y.C. Chen, J.E. Thropp, Review of low frame rate effects on human performance, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans 37 (6) (2007) 1063–1076.
[7] M. Vatteroni, D. Covi, C. Cavallotti, L. Clementel, P. Valdastri, A. Menciassi, P. Dario, A. Sartori, Smart optical CMOS sensor for endoluminal applications, Sensors and Actuators A: Physical 162 (2) (2010) 297–303.
[8] http://www.nichia.co.jp/.
[9] N. Narendran, N. Maliyagoda, L. Deng, R.M. Pysar, Characterizing LEDs for general illumination applications: mixed-color and phosphor-based white sources, Proc. SPIE 4445 (2001) 137.
[10] M. Kato, M. Kaise, J. Yonezawa, K. Goda, H. Toyoizumi, N. Yoshimura, Y. Yoshida, M. Kawamura, H. Tajiri, Trimodal imaging endoscopy may improve diagnostic accuracy of early gastric neoplasia: a feasibility study, Gastrointestinal Endoscopy 70 (5) (2009) 899–906.
[11] N. Sulaiman, Z.A. Obaid, M.H. Marhaban, M.N. Hamidon, Design and implementation of FPGA-based systems – a review, Australian Journal of Basic and Applied Sciences 3 (4) (2009) 3575–3596.
[12] K. Wahid, S.B. Ko, D. Teng, Efficient hardware implementation of an image compressor for wireless capsule endoscopy applications, in: IEEE International Joint Conference on Neural Networks (IJCNN 2008, IEEE World Congress on Computational Intelligence), 2008, pp. 2761–2765.
[13] D. Turgis, R. Puers, Image compression in video radio transmission for capsule endoscopy, Sensors and Actuators A: Physical 123–124 (2005) 129–136.
[14] P. Turcza, T. Zielinski, M. Duplaga, Hardware implementation aspects of new low complexity image coding algorithm for wireless capsule endoscopy, in: Computational Science – ICCS 2008, 2008, pp. 476–485.
[15] R. Ramanath, W.E. Snyder, Y. Yoo, M.S. Drew, Color image processing pipeline, IEEE Signal Processing Magazine 22 (1) (2005) 34–43.
[16] Q.K. Vuong, S.H. Yun, S. Kim, A new auto exposure system to detect high dynamic range conditions using CMOS technology, in: Third International Conference on Convergence and Hybrid Information Technology (ICCIT '08), 2008, pp. 577–580.
[17] S. Lauxtermann, A. Lee, J. Stevens, A. Joshi, Comparison of global shutter pixels for CMOS image sensors, in: Proc. 2007 International Image Sensor Workshop, Ogunquit, Maine, USA, June 7–10, 2007.
[18] A. Theuwissen, Digital Imaging: Image Capturing, Image Sensors, Technologies and Applications, CEI Europe, 2004.
[19] M.-C. Lin, L.-R. Dung, P.-K. Weng, An ultra-low-power image compressor for capsule endoscope, BioMedical Engineering OnLine 5 (1) (2006) 14.
[20] M.R. Yuce, T. Dissanayake, H.C. Keong, Wireless telemetry for electronic pill technology, in: IEEE SENSORS 2009, 2009.
[21] J. Thoné, S. Radiom, D. Turgis, R. Carta, G. Gielen, R. Puers, Design of a 2 Mbps FSK near-field transmitter for wireless capsule endoscopy, Sensors and Actuators A: Physical 156 (1) (2008) 43–48.
[22] R. Carta, J. Thoné, R. Puers, A wireless power supply system for robotic capsular endoscopes, Sensors and Actuators A: Physical 162 (2) (2010) 177–183.
[23] G. Ciuti, P. Valdastri, A. Menciassi, P. Dario, Robotic magnetic steering and locomotion of capsule endoscope for diagnostic and surgical endoluminal procedures, Robotica 28 (Special Issue 02) (2010) 199–207.

Biographies

Carmela Cavallotti received a degree in biomedical engineering (with honours) from the Campus Bio-Medico University in Rome in December 2007. She is currently a PhD student in biorobotics at the CRIM Lab of the Scuola Superiore Sant'Anna in Pisa. Her main research interests are in the field of vision systems for biomedical applications.

Pierantonio Merlino was born in Udine, Italy, in 1980.
He received the laurea degree in electrical engineering (summa cum laude) from the University of Udine, Italy, in 2005 and the PhD degree in electrical engineering from the same institution in 2009. His research interests include the study and realization of electronic systems for pervasive computing applications, and communication and power technologies for wireless/contactless applications. He is currently working on vision systems for endoscopic applications.

Monica Vatteroni was born in La Spezia, Italy, in 1975. She received an M.S. degree in electrical engineering from the University of Pisa (Italy) in 2001 and a PhD degree in physics from the University of Trento (Italy) in 2008. From 2002 to 2008 she worked for NeuriCam, Trento (Italy), as a pixel engineer and analogue designer, and in 2005 she became responsible for the development of CMOS image sensors. Presently, she works at the Scuola Superiore Sant'Anna in Pisa (Italy) as a postdoctoral fellow, where she is responsible for the research and development of image sensors and vision systems for biomedical applications. She is the author or co-author of several conference and journal publications and of three patents. Her interests include CMOS image sensors, low-noise analogue electronics, high dynamic range pixels and endoscopic vision systems.

Pietro Valdastri received a degree in electronic engineering (with honours) from the University of Pisa in February 2002. In the same year he joined the CRIM Lab of the Scuola Superiore Sant'Anna in Pisa as a PhD student. In 2006 he obtained a PhD in bioengineering from the Scuola Superiore Sant'Anna, discussing a thesis titled "Multi-Axial Force Sensing in Minimally Invasive Robotic Surgery". He is now an assistant professor at the CRIM Lab, with main research interests in the field of implantable robotic systems and active capsular endoscopy. He is currently working on several European projects for the development of minimally invasive and wireless biomedical devices.

Antonio Abramo was born in Bologna, Italy, in 1962. He received the laurea degree in electrical engineering (magna cum laude) from the University of Bologna, Italy, in 1987, and the PhD degree in electrical engineering from the same institution in 1995. His experience includes research periods with the Intel Corporation, Santa Clara (CA), USA (1992), and at the Center for Integrated Systems, Stanford University, Stanford (CA), USA (2000). Between October 1993 and December 1994 he was a resident scientist at the AT&T Bell Laboratories, Murray Hill (NJ), USA, while from 1995 to 1997 he was a post-doc at the Department of Physics, University of Modena, Italy. Antonio Abramo is co-author of about 70 scientific publications in international journals and conferences. In 2001–2002 he was appointed a member of the "Modeling and Simulation" technical sub-committee of the IEEE International Electron Device Meeting (IEDM) Conference, and in 2003 chair of the same sub-committee. Presently, he is an associate professor of electronics at the University of Udine, Italy. After about ten years of scientific activity in the field of modeling and simulation of carrier transport in electron devices, since 2001 his scientific interest has moved to the design of circuits and systems for wireless applications, the study of neural network circuits for reconfigurable platforms, the design of wearable systems, and methodologies for distributed computing in wireless sensor networks.
Arianna Menciassi received her laurea degree in physics (with honours) from the University of Pisa in 1995. In the same year, she joined the CRIM Lab of the Scuola Superiore Sant'Anna in Pisa as a PhD student in bioengineering, with a research program on the micromanipulation of mechanical and biological micro-objects. In 1999, she received her PhD degree by discussing a thesis titled "Microfabricated Grippers for Micromanipulation of Biological and Mechanical Objects". Currently she is a professor of biomedical robotics at the Scuola Superiore Sant'Anna, Pisa. Her main research interests are in the fields of biomedical micro- and nano-robotics, microfabrication technologies, micromechatronics and microsystem technologies. She is working on several European and international projects for the development of micro- and nano-robotic systems for medical applications.

Paolo Dario received his laurea degree in mechanical engineering from the University of Pisa in 1977. Currently, he is a professor of biomedical robotics at the Scuola Superiore Sant'Anna, Pisa. He also established and teaches the course on mechatronics at the School of Engineering, University of Pisa. He has been a visiting professor at the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, and at Waseda University, Tokyo, Japan. He is the director of the CRIM Lab of the Scuola Superiore Sant'Anna, where he supervises a team of about 70 researchers and PhD students. His main research interests are in the fields of medical robotics, mechatronics and microengineering, and specifically in sensors and actuators for the above applications. He is the coordinator of many national and European projects, the editor of two books on robotics and the author of more than 200 journal papers. He is a member of the Board of the International Foundation of Robotics Research. He is an associate editor of the IEEE Transactions on Robotics and Automation, a member of the Steering Committee of the Journal of Microelectromechanical Systems and a guest editor of the Special Issue on Medical Robotics of the IEEE Transactions on Robotics and Automation. He serves as president of the IEEE Robotics and Automation Society and as co-chairman of its Technical Committee on Medical Robotics.