An active-pixel sensor (APS) is an image sensor in which each picture element ("pixel") has a photodetector and an active amplifier. There are many types of active-pixel sensor integrated circuits; the complementary metal-oxide-semiconductor (CMOS) APS is the most common, used in mobile phone cameras, web cameras, most digital pocket cameras since 2010, and most digital single-lens reflex cameras (DSLRs). Such image sensors are manufactured using CMOS technology (and are hence also known as CMOS sensors), and have emerged as an alternative to charge-coupled device (CCD) image sensors.
The term 'active pixel sensor' is also used to refer to the individual pixel sensor itself, as opposed to the image sensor; in that case the image sensor is sometimes called an active pixel sensor imager, or active-pixel image sensor.
History
The term active pixel sensor was coined in 1985 by Tsutomu Nakamura, who worked on the Charge Modulation Device active pixel sensor at Olympus, and was defined more broadly by Eric Fossum in a 1993 paper.
Image sensor elements with in-pixel amplifiers were described by Noble in 1968, by Chamberlain in 1969, and by Weimer et al. in 1969, at a time when passive-pixel sensors - that is, pixel sensors without their own amplifiers or active noise-cancellation circuits - were being investigated as a solid-state alternative to vacuum-tube devices. The passive MOS pixel used only a simple switch in the pixel to read out the integrated charge of the photodiode. Pixels were arrayed in a two-dimensional structure, with an access-enable line shared by pixels in the same row and an output line shared by each column. At the end of each column was an amplifier. Passive-pixel sensors suffered from many limitations, such as high noise, slow readout, and lack of scalability. The addition of an amplifier to each pixel addressed these problems, and resulted in the creation of the active-pixel sensor. Noble in 1968 and Chamberlain in 1969 created sensor arrays with an active MOS readout amplifier per pixel, in essentially the modern three-transistor configuration. The CCD was invented in October 1969 at Bell Labs. Because the MOS process varied greatly and MOS transistors had characteristics that changed over time (threshold-voltage instability), the CCD's charge-domain operation was easier to manufacture and quickly eclipsed both passive and active MOS pixel sensors. A low-resolution, "mostly digital" N-channel MOSFET imager with intra-pixel amplification, intended for optical mouse applications, was demonstrated in 1981.
Another type of active pixel sensor is the infrared focal plane array (IRFPA), designed to operate at cryogenic temperatures in the infrared spectrum. The device consists of two chips mated together like a sandwich: one chip contains detector elements made of InGaAs or HgCdTe, and the other chip is typically made of silicon and is used to read out the photodetectors. The exact date of origin of these devices is classified, but by the mid-1980s they were in widespread use.
In the late 1980s and early 1990s, the CMOS process became a well-controlled, stable process and was the baseline process for almost all logic and microprocessors. There was a resurgence in the use of passive-pixel sensors for low-end imaging applications, and of active-pixel sensors for low-resolution, high-function applications such as retina simulation and high-energy particle detection. However, CCDs continued to offer lower temporal noise and lower fixed-pattern noise, and remained the dominant technology for consumer applications such as camcorders and for broadcast cameras, where they displaced video camera tubes.
Eric Fossum and colleagues at NASA's Jet Propulsion Laboratory devised an image sensor that used intra-pixel charge transfer along with an in-pixel amplifier to achieve true correlated double sampling (CDS) and low temporal noise, together with on-chip circuits for fixed-pattern noise reduction, and published the first extensive article predicting the emergence of APS imagers as the commercial successor to CCDs. Between 1993 and 1995, the Jet Propulsion Laboratory developed a number of prototype devices that validated the key features of the technology. Though primitive, these devices showed good image performance with high readout speed and low power consumption.
In 1995, frustrated by the slow adoption of the technology, Eric Fossum and his then-wife Dr. Sabrina Kemeny founded Photobit Corporation to commercialize it. Photobit went on to develop and commercialize APS technology for a number of applications, such as web cams, high-speed and motion-capture cameras, digital radiography, endoscopy cameras, DSLRs and camera phones. Many other small image sensor companies sprang up soon after, owing to the accessibility of the CMOS process, and all quickly adopted the active-pixel sensor approach. More recently, CMOS sensor technology has spread to medium-format photography, with Phase One being the first to launch a medium-format digital back with a CMOS sensor made by Sony.
Eric Fossum now performs research on Quanta Image Sensor (QIS) technology, a proposed fundamental change in the way images are collected in a camera, being developed at Dartmouth. In the QIS, the goal is to count every photon that strikes the image sensor, to provide a resolution of 1 billion or more specialized photoelements (called jots) per sensor, and to read out the jot bit planes hundreds or thousands of times per second, resulting in terabits per second of data.
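As a rough sanity check on that data rate, the short sketch below multiplies an assumed jot count by an assumed bit-plane readout rate; both figures are illustrative assumptions drawn from the paragraph above, not device specifications.

```python
# Rough check of the quoted data rate, using assumed (illustrative) figures:
# ~1 billion jots per sensor, each read out as a 1-bit plane ~1000 times per second.
jots_per_sensor = 1_000_000_000      # assumed jot count
bit_planes_per_second = 1000         # assumed bit-plane readout rate
bits_per_jot = 1                     # each jot reports a single binary value

raw_rate_bits_per_s = jots_per_sensor * bit_planes_per_second * bits_per_jot
print(f"Raw bit-plane data rate: {raw_rate_bits_per_s / 1e12:.1f} Tbit/s")  # ~1.0 Tbit/s
```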
Comparison with CCD
APS pixels solve the speed and scalability problems of the passive-pixel sensor. They generally consume less power than CCDs, have less image lag, and require less specialized manufacturing facilities. Unlike CCDs, APS sensors can combine the image sensor function and image processing functions within the same integrated circuit. APS sensors have found markets in many consumer applications, especially camera phones. They have also been used in other fields including digital radiography, ultra-high-speed military image acquisition, security cameras, and optical mice. Manufacturers include Aptina Imaging (an independent spinout from Micron Technology, which bought Photobit in 2001), Canon, Samsung, STMicroelectronics, Toshiba, OmniVision Technologies, Sony, and Foveon, among others. CMOS-type APS sensors are typically suited to applications in which packaging, power management, and on-chip processing are important. CMOS-type sensors are widely used, from high-end digital photography down to camera phones.
Advantages of CMOS compared with CCD
A major advantage of a CMOS sensor is that it is typically less expensive than a CCD sensor.
CMOS sensors also typically have better control of blooming (that is, of bleeding of photo-charge from an over-exposed pixel into other nearby pixels).
In a three-sensor camera system that uses separate sensors to resolve the red, green, and blue components of the image in conjunction with a beam-splitter prism, the three CMOS sensors can be identical, whereas most splitter prisms require that one of the CCD sensors be a mirror image of the other two so that the images can be read out in a compatible order. Unlike CCD sensors, CMOS sensors have the ability to reverse the addressing of the sensor elements.
Disadvantages of CMOS compared with CCD
Since a CMOS sensor typically captures one row at a time over a span of roughly 1/60th or 1/50th of a second (depending on the refresh rate), it can produce a "rolling shutter" effect, in which the image is skewed (tilted to the left or right, depending on the direction of camera or subject movement). For example, when tracking a car moving at high speed, the car will not be distorted but the background will appear tilted. A frame-transfer CCD sensor or a "global shutter" CMOS sensor does not have this problem; it instead captures the entire image at once into a frame store.
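To see why row-sequential capture skews a moving subject, here is a minimal simulation sketch (assuming NumPy and an idealized bright square moving horizontally at constant speed; all numbers are made up for illustration). The only point is that each row samples the scene a little later than the row above it, so the square's vertical edges come out sheared.

```python
import numpy as np

def rolling_shutter_capture(height=120, width=160, obj_speed_px_per_row=0.5):
    """Simulate a rolling shutter: each row is sampled at a slightly later time,
    so a horizontally moving bright square comes out sheared (skewed)."""
    frame = np.zeros((height, width))
    obj_top, obj_bottom, obj_width, obj_x0 = 30, 90, 20, 40
    for row in range(height):
        # Time advances row by row; the object has moved by the time this row is read.
        x = obj_x0 + obj_speed_px_per_row * row
        if obj_top <= row < obj_bottom:
            left = int(round(x))
            frame[row, left:left + obj_width] = 1.0
        # A global-shutter sensor would instead use the same x for every row.
    return frame

img = rolling_shutter_capture()
# The left edge of the square now varies with the row index: a diagonal (skewed) edge.
print("Left edge at top row of object:   ", np.argmax(img[30] > 0))
print("Left edge at bottom row of object:", np.argmax(img[89] > 0))
```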
The active circuitry in a CMOS pixel takes up some surface area that is not sensitive to light, reducing the photon-detection efficiency of the device (back-illuminated sensors can mitigate this problem). But a frame-transfer CCD also has roughly half its area taken up by the non-sensitive frame store, so the relative advantage depends on which types of sensors are being compared.
Architecture
Pixel
The standard CMOS APS pixel today consists of a photodetector (a pinned photodiode), a floating diffusion, a transfer gate, a reset gate, a selection gate and a source-follower readout transistor - the so-called 4T cell. The pinned photodiode was originally used in interline-transfer CCDs because of its low dark current and good blue response; when coupled with the transfer gate, it allows complete charge transfer from the pinned photodiode to the floating diffusion (which is further connected to the gate of the readout transistor), eliminating lag. The use of intra-pixel charge transfer can offer lower noise by enabling the use of correlated double sampling (CDS).

The Noble 3T pixel is still sometimes used because its fabrication requirements are less complex. The 3T pixel comprises the same elements as the 4T pixel except for the transfer gate and the photodiode. The reset transistor, Mrst, acts as a switch to reset the floating diffusion to VRST, which in this case is represented by the gate of the Msf transistor. When the reset transistor is turned on, the photodiode is effectively connected to the power supply, VRST, clearing all integrated charge. Since the reset transistor is n-type, the pixel operates in soft reset. The readout transistor, Msf, acts as a buffer (specifically, a source follower), an amplifier that allows the pixel voltage to be observed without removing the accumulated charge. Its power supply, VDD, is typically tied to the power supply of the reset transistor, VRST. The select transistor, Msel, allows a single row of the pixel array to be read by the readout electronics.

Other pixel innovations such as 5T and 6T pixels also exist. By adding extra transistors, functions such as a global shutter, as opposed to the more common rolling shutter, become possible. To increase pixel density, shared-row, four-way and eight-way shared readout, and other architectures can be employed. A variant of the 3T active pixel is the Foveon X3 sensor invented by Dick Merrill. In this device, three photodiodes are stacked on top of each other using planar fabrication techniques, each photodiode having its own 3T circuit. Each successive layer acts as a filter for the layer below it, shifting the spectrum of absorbed light in successive layers. By deconvolving the response of each layered detector, red, green, and blue signals can be reconstructed.
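To make the CDS idea concrete, here is a minimal toy model (assuming NumPy; the conversion gain, reset noise and offset values are invented for illustration): the floating diffusion is sampled once right after reset and once after the charge transfer, and subtracting the two samples cancels the reset (kTC) noise and the per-pixel offset that both samples share.

```python
import numpy as np

rng = np.random.default_rng(0)

def read_4t_pixel_with_cds(signal_electrons, conv_gain_uv_per_e=50.0,
                           reset_noise_uv=300.0, offset_uv=2000.0):
    """Toy model of correlated double sampling in a 4T pixel (illustrative only).
    Both samples of the floating diffusion share the same reset noise and fixed
    offset, so their difference ideally leaves just the photo signal."""
    ktc = rng.normal(0.0, reset_noise_uv)           # reset (kTC) noise, frozen at reset
    sample_reset = offset_uv + ktc                  # 1st sample: just after reset
    sample_signal = offset_uv + ktc - signal_electrons * conv_gain_uv_per_e  # 2nd: after transfer
    return sample_reset - sample_signal             # CDS output, in microvolts

# Same pixel, same illumination, several reads: reset noise and offset drop out.
reads = [read_4t_pixel_with_cds(1000) for _ in range(5)]
print([round(r, 1) for r in reads])  # each read is 50000 uV despite large reset noise/offset
```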
APS using thin-film transistors
For applications such as large-area digital X-ray imaging, thin-film transistors (TFTs) can also be used in an APS architecture. However, because of the larger size and lower transconductance gain of TFTs compared with CMOS transistors, it is necessary to have fewer on-pixel TFTs to maintain image resolution and quality at an acceptable level. A two-transistor APS/PPS architecture has been shown to be promising for APS using amorphous silicon TFTs. In the two-transistor APS architecture, TAMP is used as a switched amplifier integrating the functions of both Msf and Msel in the three-transistor APS. This results in a reduced transistor count per pixel, as well as increased pixel transconductance gain. Here, Cpix is the pixel storage capacitance, and it is also used to capacitively couple the "Read" addressing pulse to the gate of TAMP for ON-OFF switching. Such pixel readout circuits work best with low-capacitance photoconductor detectors such as amorphous selenium.
Array
A typical two-dimensional array of pixels is organized into rows and columns. Pixels in a given row share a reset line, so that an entire row is reset at a time. The row-select lines of each pixel in a row are tied together as well. The outputs of all pixels in a given column are tied together. Since only one row is selected at a given time, no contention for the output line occurs. Further amplifier circuitry is typically on a per-column basis.
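The addressing scheme can be summarized in a short sketch (NumPy assumed, with the pixel array modelled as a plain matrix of voltages): only one row-select line is asserted at a time, and every column line is sampled in parallel for that row.

```python
import numpy as np

def read_out_array(pixel_voltages):
    """Row-at-a-time readout of an APS array (illustrative model).
    Selecting one row places each of its pixels on the shared column lines;
    all columns are then sampled in parallel by the per-column amplifiers."""
    rows, cols = pixel_voltages.shape
    image = np.empty((rows, cols))
    for row in range(rows):                  # assert the row-select line for this row only
        column_bus = pixel_voltages[row, :]  # every pixel in the row drives its column line
        image[row, :] = column_bus           # column-parallel sampling / amplification
    return image

sensor = np.random.default_rng(1).uniform(0.0, 1.0, size=(4, 6))
assert np.allclose(read_out_array(sensor), sensor)
```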
Size
The size of the pixel sensor is often given as height and width, but also in terms of the optical format.
Design variants
Many different pixel designs have been proposed and fabricated. The standard pixel is the most common because it uses the fewest wires and the fewest, most tightly packed transistors possible for an active pixel. It is important that the active circuitry in a pixel take up as little space as possible, to leave more room for the photodetector. A high transistor count hurts the fill factor, that is, the percentage of the pixel area that is sensitive to light. Pixel size can be traded for desirable qualities such as noise reduction or reduced image lag. Noise is a measure of the accuracy with which the incident light can be measured. Lag occurs when traces of a previous frame remain in future frames, i.e. the pixel is not completely reset. The voltage noise variance in a soft-reset (gate-modulated) pixel is kT/(2C), but image lag and fixed-pattern noise may be problematic. In rms electrons, the noise is √(kTC/2)/q.
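As a worked example of these expressions (and of the kT/C hard-reset case discussed in the next section), the sketch below plugs in an assumed sense-node capacitance of 5 fF at 300 K; both numbers are illustrative choices, not values taken from the text.

```python
import math

# Worked example of the reset-noise formulas, with assumed values:
# T = 300 K and a floating-diffusion capacitance of C = 5 fF (hypothetical).
k = 1.380649e-23        # Boltzmann constant, J/K
q = 1.602176634e-19     # electron charge, C
T = 300.0               # temperature, K
C = 5e-15               # sense-node capacitance, F (assumed)

soft_var = k * T / (2 * C)             # soft reset: voltage noise variance kT/(2C)
hard_var = k * T / C                   # hard reset: kT/C (see the next section)
soft_e = math.sqrt(k * T * C / 2) / q  # soft-reset noise in rms electrons
hard_e = math.sqrt(k * T * C) / q      # hard-reset noise in rms electrons

print(f"soft reset: {math.sqrt(soft_var)*1e3:.2f} mV rms, {soft_e:.0f} e- rms")  # ~0.64 mV, ~20 e-
print(f"hard reset: {math.sqrt(hard_var)*1e3:.2f} mV rms, {hard_e:.0f} e- rms")  # ~0.91 mV, ~28 e-
```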
Hard reset
Operating the pixel via hard reset results in Johnson-Nyquist noise on the photodiode of kT/C, or √(kTC)/q rms electrons, but prevents image lag, which is sometimes a desirable trade-off. One way to use hard reset is to replace Mrst with a p-type transistor and invert the polarity of the RST signal. The presence of the p-type device reduces the fill factor, since extra space is required between p- and n-devices; it also removes the possibility of using the reset transistor as an anti-blooming overflow drain, which is a commonly exploited benefit of an n-type reset FET. Another way to achieve hard reset, with an n-type FET, is to lower the voltage VRST relative to the on-voltage of RST. This reduction may reduce the headroom, or full-well charge capacity, but it does not affect the fill factor, unless VDD is then routed on a separate wire with its original voltage.
Combination of hard and soft reset
Techniques such as flushed reset, pseudo-flash reset, and hard-to-soft reset combine soft and hard reset. The details of these methods differ, but the basic idea is the same. First, a hard reset is performed, eliminating image lag. Next, a soft reset is performed, yielding a low-noise reset without adding any lag. Pseudo-flash reset requires separating VRST from VDD, while the other two techniques add more complicated column circuitry. Specifically, pseudo-flash reset and hard-to-soft reset both add transistors between the pixel power supplies and the actual VDD. The result is lower headroom, without affecting the fill factor.
Active reset