What does interpolation to 13 MP mean? Camera interpolation: what is it and why?

A P2P camera is an IP camera containing software that allows you to identify it and connect to it remotely by a unique ID number, without using a static IP address or features such as DDNS and UPnP. P2P cameras were designed to make configuring remote access to the camera easy for ordinary, non-specialist users.

How the P2P camera works

When a P2P camera is connected to the Internet (via a router or a 3G connection), it automatically sends a request to a remote server that identifies the camera by its unique ID number. To access the camera and watch video, the user needs to install a special application from the IP camera developer on their device (computer or mobile device). In this application the user enters the camera ID (or scans the camera's QR code to avoid entering the code manually), after which they can watch live video from the camera, view the video archive on the SD card, control the pan-tilt mechanism, and use other functions. The server here acts as an intermediary, connecting the IP camera and the user's device directly.

Why P2P technology is needed

This technology is designed to simplify IP camera installation for the end user. Without it, setting up remote access to the camera requires a static IP address or special skills. With a P2P camera, an ordinary user spends no more than 10 minutes installing the camera and setting up remote viewing.

P2P cameras

P2P cameras let you get a full-fledged video surveillance system with remote access from anywhere in the world, simple to install and inexpensive. The main applications of P2P cameras are:

  • monitoring a country house and/or plot
  • monitoring the security of an apartment
  • keeping an eye on pets
  • security and surveillance of small business premises
  • monitoring patients
  • use in state and municipal institutions, and more

Companies involved in the development and production of P2P cameras

The world leader in the production of P2P cameras is Cisco.

What does "Interpolation 5.0MP" and "Interpolation 8.0MP" mean?

In the description of the DOOGEE X5 smartphone I found an interesting and, at the same time, unclear point:
Two cameras: 2.0 MP (5.0 MP interpolation) front camera; 5.0 MP (8.0 MP interpolation) rear camera with flash and autofocus.

What does "Interpolation 5.0MP" and "Interpolation 8.0MP" mean?
Really how many megapixel cameras are 2 and 5 megapixel or 5 and 8 megapixel?

Living Creature.

Means "Fiction" ... Gift Cameras are given for high-quality ... 2MP Camera programmatically gives an image 5mp ... You are trying to make a fake ... In the original DVRs are not used interpolation ...

Vladssto.

This means that the camera physically has a real resolution of 5 MP, and the smartphone has software that spreads the adjacent pixels apart and draws an extra pixel between them, with a color in between those of its neighbors; the output is a photo with a resolution of 8 MP.
It does not particularly affect quality; a photo with a higher resolution can simply be zoomed in closer to view details.

The smartphone has an 8 MP camera. What does interpolation up to 13 MP mean?

Sergey 5.

Up to 13 MP: it could be 8 MP real, like yours, or 5 MP real. The camera software interpolates the camera's output up to 13 MP without improving the image, just electronically enlarging it. Simply put, it is like a magnifying glass or binoculars: the quality does not change.

This means that the camera takes a snapshot at 8 MP, but the picture can be programmatically enlarged up to 13 MP. It is enlarged programmatically, but the image does not become higher quality; it is still essentially an 8 MP image. This is purely a manufacturer's trick, and such smartphones cost more.

Consumer

To explain it simply: when creating a photo, the smartphone's processor adds its own pixels to the active pixels of the matrix, effectively computing extra image data and stretching the picture up to 13 MP. Quality does not particularly improve from this.

Violet A.

Camera interpolation is a manufacturer's trick to artificially inflate the price of a smartphone.

If you have an 8 MP camera, that is what it can deliver; interpolation does not improve the quality of the photo, it simply enlarges the picture to 13 megapixels.

The USSR

Megapixel interpolation is a programmatic stretching of the picture. The real pixels are moved apart, and new pixels are inserted between them with a color averaged from the colors of the neighbors. Nonsense; nobody needs this self-deception. Quality does not improve.

Mastermiha.

On Chinese smartphones this is now used constantly, simply because a 13 MP camera sensor costs much more than an 8 MP one. That is why they install 8 MP, but the camera app stretches the resulting image; as a result, the quality of these 13 MP will be noticeably worse if you look at the picture at its original resolution.

In my opinion, this feature is pointless, because 8 MP is quite enough for a smartphone; 3 MP is basically enough for me. The main thing is that the camera itself is high-quality.

Azamatik.

Good day.

This means that your smartphone stretches the photo/image taken by the 8 MP camera to 13 MP. This is done by moving the real pixels apart and inserting additional ones between them.

But if you compare the quality of an image taken at a native 13 megapixels with one taken at 8 megapixels and interpolated to 13, the quality of the second will be noticeably worse.

Doubloon

This means that your camera was 8 MP and remains 8 MP, no more and no less; everything else is a marketing move, pseudo-scientific fooling of the public to sell the goods at a higher price. This function is worthless: when interpolating, photo quality is lost.

Moreljuba.

This concept means that your device's camera takes a photo at 8 MP, which can then be programmatically enlarged to 13 MP. The quality in this case is not the best: the space between pixels is simply filled in, and that's it.

Gladius74.

Interpolation is a way of finding intermediate values.

Translated into more human language, as it applies to your question, it comes down to the following:

  • the software can process (enlarge, stretch) files up to 13 MP.

Marlene

The fact is that the real camera in these phones is 8 megapixels. With the help of internal software, the images are stretched to 13 megapixels. In essence, this does not amount to a real 13 megapixels.

What is camera interpolation?

All modern smartphones have built-in cameras that can enlarge the resulting images using special algorithms. From a mathematical point of view, interpolation is a way of finding intermediate values of a quantity from an existing set of discrete values.

The effect of interpolation somewhat resembles the action of a magnifying glass. Smartphone software does not increase the clarity or sharpness of the image; it simply expands the picture to the desired size. Some smartphone manufacturers write on the packaging of their products that the built-in camera has a resolution of "up to 21 MP". Most often this refers to an interpolated image of low quality.

Types of interpolation

Nearest-neighbor method

The method is considered basic and is among the simplest algorithms. Pixel parameters are determined from a single nearest point, and as a result of the calculation the size of each pixel is simply doubled. The nearest-neighbor method does not require much computing power.
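
For illustration, here is a minimal sketch of the nearest-neighbor method in Python with NumPy (the function is our own illustration, not any camera's firmware):

    import numpy as np

    def nearest_neighbor_upscale(image: np.ndarray, factor: int) -> np.ndarray:
        """Each output pixel copies the single nearest source pixel,
        so every source pixel simply becomes a factor x factor block."""
        rows, cols = np.indices((image.shape[0] * factor, image.shape[1] * factor))
        return image[rows // factor, cols // factor]

    # A 2x2 grayscale image enlarged 2x: each pixel doubles in size.
    img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
    print(nearest_neighbor_upscale(img, 2))
    # [[  0   0  64  64]
    #  [  0   0  64  64]
    #  [128 128 255 255]
    #  [128 128 255 255]]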

Bilinear interpolation

The pixel value is determined from the four nearest points recorded by the camera. The result of the calculation is a weighted average of the 4 pixels surrounding the starting point. Bilinear interpolation smooths the transitions between the color boundaries of objects. Images obtained by this method are significantly superior in quality to pictures interpolated by the nearest-neighbor method.

Bicubic interpolation

The color value of the desired point is calculated from the parameters of the 16 nearest pixels; the points located closest receive the greatest weight in the calculation. Bicubic interpolation is actively used by the software of modern smartphones and produces a fairly high-quality image. The method requires significant CPU power and a high-resolution built-in camera.


Pros and cons

Science fiction films often show a camera capturing the face of a passerby and transmitting the digital information to a computer; the machine enlarges the image, recognizes the face, and finds the person in a database. In real life, interpolation does not add new details to an image. It simply enlarges the original picture using a mathematical algorithm, at best bringing its quality to an acceptable level.

Interpolation defects

The most frequent defects arising from image scaling are:

  • Aliasing (stair-stepping);
  • Blurring;
  • Halo effects.

All interpolation algorithms maintain a certain balance among the listed defects. Reducing stair-stepping inevitably increases blurring and halos; increasing sharpness increases halos, and so on. In addition to the listed defects, interpolation can produce various graphic "noise" that can be seen at maximum magnification: "random" pixels and textures unusual for the subject.

The built-in camera is far from the least important thing when choosing a smartphone. This parameter matters to many people, so when searching for a new smartphone many look at how many megapixels the camera claims. Knowledgeable people, however, know that megapixels are not the point. So let's look at what you should pay attention to when choosing a smartphone with a good camera.

How a smartphone shoots depends on the camera module installed in it. It looks as shown in the photo (the modules of the front and main cameras look approximately the same). It fits easily into the smartphone body and is usually attached with a ribbon cable, which makes it easy to replace in case of breakdown.

The near-monopolist on this market is Sony: its camera modules are used in the overwhelming majority of smartphones. Modules are also manufactured by OmniVision and Samsung.

The smartphone's manufacturer also matters. In fact, a lot depends on the brand: a self-respecting company will equip its device with a genuinely good camera. But let's break down, point by point, what the shooting quality of a smartphone depends on.

CPU

Surprised? It is the processor that performs the image processing when it receives data from the photo matrix. However good the matrix, a weak processor will not be able to process and convert the information it receives from it. This applies not only to recording high-resolution video at high frame rates, but also to producing high-resolution photos.

Of course, the higher the frame rate, the greater the load on the processor.

Among people knowledgeable about phones, or who consider themselves knowledgeable, there is an opinion that smartphones with American Qualcomm processors shoot better than smartphones with Taiwanese MediaTek processors. I will neither refute nor confirm this. But the fact that, as of 2016, there are no smartphones with excellent cameras on low-end Chinese Spreadtrum processors is simply a fact.

Number of megapixels

A snapshot consists of pixels (points), which the photo matrix forms during shooting. In theory, the more pixels, the better the image should be and the higher its clarity. For cameras, this parameter is given in megapixels.

Megapixel (MP, Mpx, Mpix) is a measure of photo and video resolution (the number of pixels). One megapixel is one million pixels.

Take, for example, the Fly IQ4516 Tornado Slim smartphone. It shoots photos at a maximum resolution of 3264x2448 pixels (3264 color points in width and 2448 in height). Multiply 3264 pixels by 2448 pixels and you get 7,990,272 pixels. The number is large, so it is converted to mega: 7,990,272 pixels is approximately 8 million pixels, that is, 8 megapixels.
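
The same calculation in a couple of lines of Python:

    width, height = 3264, 2448            # Fly IQ4516 Tornado Slim, max photo size
    pixels = width * height
    print(f"{pixels:,} pixels = {pixels / 1_000_000:.2f} MP")
    # 7,990,272 pixels = 7.99 MP -> about 8 megapixels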

In theory, the more pixels, the clearer the photo. But do not forget about noise, poorer performance in bad lighting, and so on.

Interpolation

Unfortunately, many Chinese smartphone manufacturers do not shy away from software upscaling. This is called interpolation: the camera can take a picture at a maximum resolution of 8 MP, and it is programmatically enlarged to 13 MP. Of course, the quality does not get any better. How do you avoid being deceived in such a case? Search the Internet for information about which camera module is used in the smartphone; the module's specifications state what resolution it actually shoots at. If you cannot find information about the module, that is already a reason to be wary. Sometimes the smartphone specifications honestly state that the camera is interpolated, for example, from 13 megapixels to 16 megapixels.
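
As a sketch of what such software upscaling amounts to, here is how an 8 MP frame could be stretched to about 13 MP with Pillow (the file names and the 4160x3120 target size are illustrative):

    from PIL import Image

    photo = Image.open("shot_8mp.jpg")              # hypothetical 3264x2448 frame

    # Stretch to ~13 MP (4160 * 3120 = 12,979,200 pixels). No detail is added:
    # the same 8 MP of information is simply spread over more pixels.
    upscaled = photo.resize((4160, 3120), resample=Image.BICUBIC)
    upscaled.save("shot_13mp_interpolated.jpg")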

Software

Do not underestimate the software that processes the digital image and ultimately presents it to us as we see it on the screen. It determines color reproduction, removes noise, provides image stabilization (when the smartphone shakes in the hand during shooting), and so on, not to mention the various shooting modes.

Camera matrix

The type of matrix (CCD or CMOS) and its size matter. It is the matrix that captures the image and passes it to the processor for processing. The camera's resolution depends on the matrix.

Aperture (lens speed)

When choosing a smartphone with a good camera, you should pay attention to this parameter. Roughly speaking, it indicates how much light reaches the matrix through the module's optics. The more, the better; the less, the more noise. The aperture is denoted by the letter f with a slash (/). The aperture value follows the slash, and the smaller it is, the better. For example, it is written as f/2.2 or f/1.9, and it is frequently listed in a smartphone's specifications.

A camera with an f/1.9 aperture will shoot better in low light than a camera with an f/2.2 aperture, since more light falls on the matrix. But stabilization, both software and optical, is important too.

Optical stabilization

Smartphones are rarely equipped with optical stabilization. As a rule, these are expensive devices with an advanced camera. Such a device could be called a camera phone.

A smartphone is shot handheld, and optical stabilization is applied so the image does not come out smeared. There is also hybrid stabilization (software plus optical). Optical stabilization is especially important at long exposures, when, due to insufficient lighting, a shot in a special mode can take 1-3 seconds.

Flash

The flash can be LED or xenon. The latter provides much better photos in the absence of light. Dual LED flashes also exist. Rarely, there may be two flashes, LED and xenon; that is the best option, implemented in the Samsung M8910 Pixon12.

As you can see, how a smartphone shoots depends on many parameters. So when choosing, it is worth paying attention in the specifications to the name of the module, the aperture, and the presence of optical stabilization. Best of all, search the Internet for reviews of the specific phone, where you can see sample shots and the author's opinion of the camera.

Image interpolation occurs in all digital photos at some stage, be it demosaicing or scaling. It happens whenever you resize an image or remap it from one pixel grid to another. Resizing is needed when you have to increase or decrease the number of pixels, whereas remapping can occur in a wide variety of cases: correcting lens distortion, changing perspective, or rotating an image.


Even when the same image is resized or remapped, the results can differ significantly depending on the interpolation algorithm. Since any interpolation is only an approximation, an image loses some quality every time interpolation is applied. This chapter is intended to give a better understanding of what influences the result, and thereby help you minimize any loss of image quality caused by interpolation.

Concept

The essence of interpolation is to use available data to estimate values at unknown points. For example, if you wanted to know the temperature at noon but measured it only at 11:00 and 13:00, you could estimate its value using linear interpolation:

If you had an additional measurement at 11:30, you might notice that the temperature rose faster before noon, and use this additional measurement for quadratic interpolation:

The more temperature measurements you have around noon, the more complex (and presumably more accurate) your interpolation algorithm can be.
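
A small numeric sketch of that temperature example, with made-up readings:

    import numpy as np

    # Measurements at 11:00 and 13:00; estimate noon by linear interpolation.
    noon_linear = np.interp(12.0, [11.0, 13.0], [20.0, 24.0])       # 22.0 °C

    # An extra 11:30 reading shows the temperature rose faster before noon;
    # fit a quadratic through all three points and re-estimate.
    a, b, c = np.polyfit([11.0, 11.5, 13.0], [20.0, 21.5, 24.0], deg=2)
    noon_quadratic = a * 12.0**2 + b * 12.0 + c                     # ~22.67 °C

    print(noon_linear, round(noon_quadratic, 2))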

An example of image resizing

Image interpolation works in two dimensions and tries to achieve the best approximation of pixel color and brightness based on the values of the surrounding pixels. The following example illustrates a scaling operation:

[Figure: plane interpolation: original image, enlarged with interpolation, enlarged without interpolation]

Unlike air temperature fluctuations with their ideal gradient, pixel values can change much more sharply from point to point. As in the temperature example, the more you know about the surrounding pixels, the better the interpolation works. That is why results deteriorate quickly as an image is stretched, and also why interpolation can never add detail to an image that is not there.

An example of image rotation

Interpolation also occurs every time you rotate an image or change its perspective. The previous example was deceptive because it is a special case in which interpolators usually work quite well. The next example shows how quickly image detail can be lost:

[Figure: image degradation: original; one 90° rotation (lossless); two 45° rotations; six 15° rotations]

A 90° rotation is lossless, since no pixel has to be placed on the boundary between two (and thus divided). Notice how much detail is lost at the first rotation, and how quality continues to drop with each subsequent one. This means you should avoid rotations as far as possible; if a crookedly framed shot requires straightening, you should not rotate it more than once.

The results above use the so-called "bicubic" algorithm and show significant quality deterioration. Notice how the overall contrast decreases due to reduced color intensity, and how a dark blue halo appears around the light blue. Results can be considerably better depending on the interpolation algorithm and the subject depicted.

Types of interpolation algorithms

Commonly used interpolation algorithms can be divided into two categories: adaptive and non-adaptive. Adaptive methods change depending on what they are interpolating (sharp edges versus smooth texture), whereas non-adaptive methods treat all pixels equally.

Non-adaptive algorithms include: nearest neighbor, bilinear, bicubic, splines, the cardinal sine function (sinc), the Lanczos method, and others. Depending on their complexity, they use from 0 to 256 (or more) adjacent pixels for interpolation. The more adjacent pixels they include, the more accurate they can be, but this is achieved at a significant increase in processing time. These algorithms can be used both for enlarging and for shrinking an image.

Adaptive algorithms include many proprietary algorithms in licensed programs such as Qimage, PhotoZoom Pro, Genuine Fractals, and others. Many of them apply different versions of their algorithms (based on per-pixel analysis) when they detect an edge, in order to minimize unsightly interpolation defects in the places where they would be most visible. These algorithms are primarily designed to maximize defect-free detail in enlarged images, so some of them are unsuitable for rotations or perspective changes.

Nearest-neighbor method

This is the most basic of all the interpolation algorithms, and it requires the least processing time because it considers only one pixel: the one nearest to the interpolation point. As a result, each pixel simply becomes bigger.

Bilinear interpolation

Bilinear interpolation considers the 2x2 square of known pixels surrounding the unknown one. The weighted average of these four pixels is used as the interpolated value. As a result, the image looks much smoother than the output of the nearest-neighbor method.

The diagram on the left shows the case where all known pixels are equal, so the interpolated value is simply their sum divided by 4.
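
A minimal sketch of this weighted average in Python (the helper below is our own illustration, not a library function):

    def bilinear(p00, p10, p01, p11, fx, fy):
        """Weighted average of the four known pixels around an unknown point.
        fx, fy in [0, 1] are the point's fractional offsets inside the 2x2
        cell; the closer a pixel, the larger its weight."""
        top = p00 * (1 - fx) + p10 * fx
        bottom = p01 * (1 - fx) + p11 * fx
        return top * (1 - fy) + bottom * fy

    # At the exact center of the cell every pixel gets weight 1/4, so the
    # result is simply the sum of the four values divided by 4.
    print(bilinear(10, 20, 30, 40, 0.5, 0.5))   # 25.0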

Bicubic interpolation

Bicubic interpolation goes one step further than bilinear by considering the 4x4 array of surrounding pixels, 16 in all. Since they lie at different distances from the unknown pixel, the nearest pixels receive greater weight in the calculation. Bicubic interpolation produces noticeably sharper images than the previous two methods and is arguably optimal in its ratio of processing time to output quality. For this reason it has become standard in many image editing programs (including Adobe Photoshop), printer drivers, and in-camera interpolation.
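
Since these three filters are standard in image libraries, they can be compared in practice with a short Pillow sketch (the file names are illustrative):

    from PIL import Image

    src = Image.open("photo.jpg")
    size = (src.width * 4, src.height * 4)   # 4x enlargement

    # Same enlargement with the three classic filters; sharpness and
    # processing cost grow from nearest neighbor to bicubic.
    for name, flt in [("nearest", Image.NEAREST),
                      ("bilinear", Image.BILINEAR),
                      ("bicubic", Image.BICUBIC)]:
        src.resize(size, resample=flt).save(f"upscaled_{name}.png")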

Higher-order interpolation: splines and sinc

There are many other interpolators that take more surrounding pixels into account and are therefore much more computationally intensive. These algorithms include splines and the cardinal sine (sinc), and they retain the most image information after interpolation. They are consequently extremely useful when an image requires several rotations or perspective changes in separate steps. However, for single enlargements or rotations such higher-order algorithms give a minor visual improvement at a significant increase in processing time. Moreover, in some cases the cardinal sine algorithm performs worse on a smooth region than bicubic interpolation.

Observed interpolation defects

All non-adaptive interpolators try to strike an optimal balance among three undesirable defects: edge halos, blurring, and aliasing.

Even the most advanced non-adaptive interpolators are always forced to increase or decrease one of these defects at the expense of the other two, so at least one of them will be noticeable. Notice how similar the edge halo is to the defect produced by sharpening with an unsharp mask, and how it increases apparent sharpness by boosting acutance.

Adaptive interpolators may or may not produce the defects described above, but they can also generate textures that are not in the image, or isolated pixels, at high magnification:

On the other hand, some "defects" of adaptive interpolators can also be considered as advantages. Since the eye expects to see in areas with a shallow texture, such as foliage, details up to the smallest details, such drawings can deceive the eye at a distance (for certain types of material).

Smoothing

Smoothing, or anti-aliasing, is a process that tries to minimize the appearance of jagged or stair-stepped diagonal edges that give text or images a rough digital look:


[Figure: jagged diagonal edges shown at 300% magnification]

Smoothing removes these steps and creates the impression of softer edges and higher resolution. It takes into account how much an ideal edge overlaps the adjacent pixels. A stepped edge is simply rounded up or down with no intermediate values, whereas a smoothed edge takes a value proportional to how much of the edge fell within each pixel:
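
A small sketch of the difference, using subpixel sampling of a vertical edge (our own illustration):

    import numpy as np

    def edge_row(width, edge_pos, samples=16):
        """One pixel row crossed by a vertical edge at x = edge_pos.
        Returns the aliased version (each pixel forced to 0 or 1) and the
        anti-aliased one (each pixel = fraction of it left of the edge)."""
        xs = (np.arange(width * samples) + 0.5) / samples   # subpixel positions
        covered = (xs < edge_pos).reshape(width, samples)
        coverage = covered.mean(axis=1)                     # 0..1 per pixel
        return np.round(coverage), coverage

    aliased, smooth = edge_row(width=4, edge_pos=1.3)
    print(aliased)   # [1. 0. 0. 0.]        -- stepped edge
    print(smooth)    # [1. 0.3125 0. 0.]    -- proportional to coverage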

An important consideration when enlarging images is to prevent excessive aliasing introduced by interpolation. Many adaptive interpolators detect edges and adjust themselves to minimize aliasing while preserving edge sharpness. Since a smoothed edge contains information about its position at higher resolution, it is quite possible that a powerful adaptive (edge-detecting) interpolator can at least partially reconstruct the edge when enlarging.

Optical and digital zoom

Many compact digital cameras can perform both optical and digital zoom. Optical zoom works by moving the zoom lens elements so that the image is magnified before the light reaches the digital sensor. Digital zoom, by contrast, degrades quality, since it performs simple interpolation of the image after it has been captured by the sensor.


[Figure: optical zoom (10x) vs digital zoom (10x)]

Even though a photo taken with digital zoom contains the same number of pixels, its detail is clearly less than with optical zoom. Digital zoom should be avoided almost entirely, except when it helps to see a distant object on your camera's LCD screen. On the other hand, if you usually shoot in JPEG and want to crop and enlarge the picture, digital zoom has the advantage that its interpolation is performed before compression defects are introduced. If you find you need digital zoom too often, buy a teleconverter, or better yet a lens with a longer focal length.

Sensors are devices that detect only grayscale (gradations of light intensity, from completely white to completely black). To let the camera distinguish colors, an array of color filters is superimposed on the silicon using a photolithography process. In sensors that use microlenses, the filters are placed between the lenses and the photodetector. In scanners that use trilinear CCDs (three adjacent CCDs responding to red, blue, and green respectively), or in high-end digital cameras that likewise use three sensors, light of a specific color is filtered onto each sensor. (Note that some multi-sensor cameras use combinations of several colors in their filters rather than the three standard ones.) But in single-sensor devices, which most consumer digital cameras are, color filter arrays (CFAs) are used to handle the different colors.

For each pixel to receive its own primary color, a filter of the corresponding color is placed above it. Before hitting a pixel, photons first pass through the filter, which transmits only light of its own color; light of other wavelengths is simply absorbed by the filter. Scientists determined that any color in the spectrum can be obtained by mixing only a few primary colors. In the RGB model there are three.

Color filter arrays are developed separately for each application, but in most digital cameras the most popular filter array is the Bayer pattern. This technology was invented in the 1970s by Kodak during research into spatial separation. In this system the filters are interspersed in a checkerboard order, and there are twice as many green filters as red or blue ones. The arrangement is such that the red and blue filters sit between the green ones.

This quantitative ratio is explained by the structure of the human eye, which is more sensitive to green light. The checkerboard order ensures the same image color no matter how you hold the camera (vertically or horizontally). When information is read from such a sensor, the colors are recorded sequentially by line: the first line would be BGBGBG, the next GRGRGR, and so on. This technology is called sequential RGB.
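
A tiny sketch of one tile of this layout (just an array of letters for illustration, not real sensor data):

    import numpy as np

    # Two sensor rows read out as described above: BGBGBG, then GRGRGR.
    bayer = np.array([list("BGBGBG"),
                      list("GRGRGR")])

    for color in "RGB":
        print(color, (bayer == color).sum())
    # R 3, G 6, B 3 -- twice as many green filters as red or blue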

In CCD cameras, the combination of all three signals takes place not in the sensor but in the image-forming device, after the signal has been converted from analog to digital form. In CMOS sensors, this combination can occur directly on the chip. In either case, the primary colors of each filter are mathematically interpolated, taking into account the colors of the adjacent filters. Note that in any image most points are a mixture of the primary colors, and only a few truly represent pure red, blue, or green.

For example, to determine the influence of neighboring pixels on the color of a central one, linear interpolation processes a 3x3 pixel matrix. Take the simplest case: three pixels, with blue, red, and blue filters, located in one row (BRB), and suppose you are trying to obtain the resulting color of the red pixel. If all the colors are equal, the color of the central pixel is computed mathematically as two parts blue to one part red. In reality, even simple linear interpolation algorithms are much more complicated: they take into account the values of all the surrounding pixels. If the interpolation is done poorly, jaggies appear at color transitions (or color artifacts arise).
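
A minimal sketch of the BRB case with made-up values, using simple channel averaging as an illustration of linear demosaicing:

    # One mosaic row: blue, red, blue samples (the BRB case from the text;
    # the values are made up).
    b_left, red_sample, b_right = 100.0, 180.0, 120.0

    # Simple linear demosaicing: the missing blue value at the central red
    # pixel is estimated as the average of its two known blue neighbors.
    blue_at_red = (b_left + b_right) / 2          # 110.0
    print({"R": red_sample, "B": blue_at_red})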

Note that the word "resolution" in the field of digital graphics is used incorrectly. Purists (or Pedants - someone like it), familiar with photography and optics, know that permission is a measure of the ability of a human eye or device to distinguish between individual lines on the grid of permissions, for example, on the ISO grid shown below. But in the computer industry is made by permission to call the number of pixels, and since it was so necessary, we will also follow this convention. After all, even the developers call permission to the number of pixels in the sensor.


Let's do the math

The size of an image file depends on the number of pixels (the resolution). The more pixels, the bigger the file. For example, an image from a standard VGA sensor (640x480, or 307,200 active pixels) will occupy about 900 kilobytes uncompressed (307,200 pixels at 3 bytes (R-G-B) = 921,600 bytes, which is about 900 kilobytes). An image from a 16 MP sensor will occupy about 48 megabytes.
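
The same arithmetic as a short Python check:

    def raw_size_bytes(width, height, bytes_per_pixel=3):
        """Uncompressed size: 3 bytes (R, G, B) per pixel."""
        return width * height * bytes_per_pixel

    print(raw_size_bytes(640, 480) / 1024)       # 900.0 -> ~900 KB for VGA
    print(16_000_000 * 3 / 1_000_000)            # 48.0  -> ~48 MB for 16 MP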

It would seem enough to count the number of pixels in the sensor to determine the size of the resulting image. However, camera manufacturers present a pile of different numbers, each time claiming that this is the true resolution of the camera.

The total number of pixels includes all the pixels physically present in the sensor, but only those that participate in forming the image are active. About five percent of all pixels do not participate in forming the image. These are either defective pixels or pixels the camera uses for other purposes; for example, they may be masked off to determine the dark-current level or to define the frame format.

Frame format is the ratio between the width and the height of the sensor. In some sensors, for example those with a 640x480 resolution, this ratio is 1.34:1, which matches the frame format of most computer monitors. This means that images created by such sensors fit the monitor screen exactly, without prior cropping. In many devices the frame format matches that of traditional 35 mm film, where the ratio is 1:1.5. This allows photographs of the standard size and shape.


Resolution interpolation

In addition to the optical resolution (the actual ability of the pixels to respond to photons), there is also resolution increased by the software and hardware using interpolation algorithms. As in color interpolation, resolution interpolation mathematically analyzes the data of neighboring pixels and creates intermediate values as a result. Such "implantation" of new data can be done quite smoothly, with the interpolated data falling midway between the real optical data. But sometimes such an operation introduces interference, artifacts, and distortions, and as a result the image quality only gets worse. Therefore many pessimists believe that resolution interpolation is not a way to improve image quality at all, but only a way to inflate file sizes. When choosing a device, pay attention to which resolution is specified. There is no reason to rejoice at a high interpolated resolution (it is marked as interpolated or enhanced).

Another image processing operation at the software level is sub-sampling. In essence, it is the inverse of interpolation. It is performed at the image processing stage, after the data has been converted from analog to digital form, and it discards the data of various pixels. In CMOS sensors this operation can be carried out on the chip itself, by temporarily disabling the readout of certain pixel rows or by reading data only from selected pixels.

Sub-sampling serves two functions. First, data compaction: storing more pictures in a memory of a given size. The smaller the number of pixels, the smaller the file size, the more pictures fit on the memory card or in the device's internal memory, and the less often you have to download photos to a computer or change memory cards.

The second function of this process is to create images of defined sizes for specific purposes. A camera with a 2 MP sensor is perfectly capable of producing a standard 8x10-inch photo. But if you try to send such a photo by e-mail, it will noticeably increase the size of the message. Sub-sampling lets you process the image so that it looks normal on your friends' monitors (if detail is not the goal) and at the same time transfers quickly enough even on machines with a slow connection.
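
A minimal NumPy sketch of sub-sampling (the frame size is illustrative):

    import numpy as np

    def subsample(image: np.ndarray, step: int = 2) -> np.ndarray:
        """Keep every `step`-th pixel in each direction and discard the rest:
        the inverse of interpolation -- data is thrown away, not invented."""
        return image[::step, ::step]

    frame = np.zeros((2448, 3264, 3), dtype=np.uint8)   # an 8 MP RGB frame
    small = subsample(frame, step=2)
    print(small.shape)   # (1224, 1632, 3) -- a quarter of the pixels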

Now that we have become familiar with the principles of how sensors work and know how an image is obtained, let us look a bit deeper and touch on more complex situations that arise in digital photography.