Many readers have surely noticed that the image in a photograph differs from what we see with our own eyes. This is partly due to how perspective is rendered at different focal lengths, which is covered in a separate article. In addition, defects may appear in the image: color halos in contrasting areas, darkening of the frame at the edges, and changes in the geometry of objects. These shortcomings can safely be attributed to the optical distortions of lenses, so that is what we will talk about in today's article.

Distortion

Distortion is a geometric aberration that makes straight lines appear curved. Do not confuse distortion with perspective distortion: in the latter case, straight parallel lines converge but remain straight. By its effect on the picture, distortion comes in two types: pincushion, where the lines bow inward (concave), and barrel, where they bow outward (convex).

Pincushion distortion, normal image and barrel distortion

Of course, in practice the image rarely takes such ugly forms as in the diagram. A more realistic example is the photo at the beginning of the article, which shows slight barrel distortion.
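
The barrel and pincushion curvature described above can be illustrated with a one-coefficient radial model. This is a sketch, not a real lens profile: the function name, the coefficient values, and the sign convention are all illustrative assumptions.

```python
# Sketch of a one-coefficient radial distortion model (illustrative, not a
# specific lens profile). A point at normalized distance r from the image
# center is mapped to r_d = r * (1 + k * r**2).
# With this convention, k < 0 pulls points toward the center ("barrel"),
# and k > 0 pushes them outward ("pincushion").

def radial_distort(r, k):
    """Return the distorted radius for an undistorted radius r."""
    return r * (1 + k * r ** 2)

# A straight line near the frame edge bows because points farther from the
# center are displaced more than points near it.
for label, k in (("barrel", -0.15), ("pincushion", 0.15)):
    mid = radial_distort(0.5, k)   # halfway to the corner
    edge = radial_distort(1.0, k)  # corner of the frame
    print(f"{label}: r=0.5 -> {mid:.3f}, r=1.0 -> {edge:.3f}")
```

The quadratic term is why the effect is barely visible near the center of the frame but obvious at the edges.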

Distortion is most visible on zoom lenses, and the greater the zoom ratio, the more noticeable it is. Usually you can observe a "barrel" at the wide-angle end and a "pincushion" at the telephoto end; between the extreme positions of the lens, the flaws in the optics become less noticeable. In addition, the amount of distortion can vary with the distance to the subject: in some cases a close subject is distorted while a distant one comes out fine.

Chromatic aberration

The second type of optical distortion we will consider is chromatic aberration, often abbreviated "CA". Chromatic aberrations are caused by the decomposition of white light into its color components, because of which the object is rendered at slightly different sizes in different colors, and as a result colored fringes appear along its edges. Often invisible at the center of the frame, they become noticeable on objects closer to the edges of the image. CA does not depend on the focal length or the aperture, but once again it appears more often and more strongly in zoom lenses. The reason is that suppressing the effect requires additional elements in the optical design, which is noticeably harder to do in a variable-focal-length lens than in a prime.

In the picture on the left, CAs are especially noticeable on the hair (purple outline) and on the window grille (turquoise).

It cannot be said that chromatic aberrations greatly spoil the picture, but on contrasting objects, especially in backlighting, they become very noticeable and quite striking.

Vignetting

The last item is vignetting: in other words, a darkening of the areas at the edges of the frame. It can usually be seen on wide-angle lenses at their widest aperture, and even then the effect is quite rare.

Do not confuse vignetting caused by flaws in the optics with vignetting produced by additional accessories. In the picture above, the edges turned out black because several fairly thick filters were screwed onto the lens; a similar effect can be produced by a long lens hood.

All optical distortions depend directly on the class and type of optics you use. Expensive professional lenses have complex optical layouts with many additional elements that minimize these undesirable effects. Cheaper lenses, especially zooms, are much more prone to these problems because of their simplified designs.

I hasten to disappoint readers: lenses completely free of the above problems simply do not exist. To one degree or another, even expensive fixed-focal-length optics still distort the picture, although this is noticeable mainly at the edges of the frame. The good news is that, for the most part, these effects do not spoil the picture very much and are quite easy to eliminate in software (we will talk about this in the next article). In addition, on cameras with a smaller-than-full-frame sensor, which includes all amateur DSLRs, the edges of the image are cropped off anyway, so with good optics the visible distortions are minimal.

TV picture quality. A completely faithful transmission of the image of the world around us would require a stereoscopic color system with very high quality parameters. So far such a system cannot be implemented, and therefore the quality of a TV image is characterized by nominal parameters: the number of lines, the number of frames, the flicker rate, the number of halftones and their distribution over the dynamic range of brightness, the color gamut, and so on, which define the nominal TV picture quality reproduced by a given system. Beyond these limitations, the correspondence of the image to the original is reduced by distortions that arise in almost every element of the TV system. Objective and subjective evaluation of the parameters and distortions of a TV system, the viewing conditions, and the processing of the results are also standardized.

Let's consider the main types of distortions and methods for their assessment.

9.1. Geometric (coordinate) distortions.

Geometrical distortions arise due to changes in the coordinates of the transmitted elements, and manifest themselves as a violation of the geometric similarity of the TV image to the original. Geometric similarity is violated mainly due to the non-identity of the raster shape and the relative speeds of horizontal and vertical scanning during image analysis and synthesis.

Raster distortions are divided into linear and nonlinear.

Figure 9.1 shows the main types of linear raster distortion: pincushion, barrel, and trapezoidal (keystone).

The assessment is made on special square or rectangular elements that form part of specialized or universal test charts, using geometric distortion coefficients; visually, it is easier to perform on test elements in the form of circles distributed over the entire image field.

Fig.9.1. Geometric distortions of the “chessboard” image due to distortions in the shape of the raster

Pincushion raster distortion arises from the mismatch between the linear velocity of the scanning beam in the central and peripheral parts of the screen, caused by projecting radially deflected electron beams onto a flat screen. At a constant angular velocity of the beam, its path length grows with distance from the center of the screen, which increases its linear velocity and therefore stretches the image at the edges of the screen (Fig. 9.1a). To combat pincushion distortion, special methods are used to correct the shape of the deflection current, slowing the beam in the peripheral part of the screen, or to change the element period, enlarging it at the center and compressing it at the edges.


Barrel distortion arises as a result of overcorrection of pincushion distortion (Fig. 9.1b).

Pincushion and barrel distortion are quantified by geometric distortion coefficients.

Keystone (trapezoidal) distortion arises when the optical or electron-beam axis is not perpendicular to the image plane (Fig. 9.1c).

Aspect-ratio distortion may occur when the ratio of the horizontal and vertical deflection currents is violated (Fig. 9.1d, e). Estimating the magnitude of this type of distortion is pointless, since it is easily corrected with the horizontal and vertical image-size controls.

Nonlinear geometric distortions (Fig. 9.2) arise from the inconstancy of the beam velocity in the vertical or horizontal direction, that is, from the nonlinearity of the vertical (Fig. 9.2a) or horizontal (Fig. 9.2b) scanning currents.

Fig.9.2. Geometric image distortions due to non-linearity of horizontal and vertical scanning

The coefficients of geometric distortion in the vertical and horizontal directions are estimated in an analogous way.

The human eye hardly notices nonlinear distortions: scan nonlinearity of up to 5% in either direction is almost imperceptible, and at 8–12% the image is still perceived as good.

Distortion is the optical curvature of the straight lines of an object, which is characteristic of wide-angle lenses.

The resulting picture will not be geometrically similar to the original, except perhaps in the middle, but the closer to the edges, the more noticeable the curvature will be. Distortion will not affect the sharpness of the image.

Types

Lens distortion in photographs can be barrel (convex) or pincushion (concave). Photographers call them much more simply: the "barrel" and the "pincushion".

Owners of telephoto lenses are more familiar with pincushion distortion; it is they who get the flatter picture.

There is also complex distortion, characterized by distortion of different types and intensities in different parts of the picture. It is difficult to fix in photo editors, because the curvature runs in "waves".

Causes

When shooting with a portrait or telephoto lens, you are unlikely to see distortion. It becomes especially noticeable when straight lines run across the entire frame, for example when shooting architecture with an ultra-wide-angle lens.

Our brain has its own notion of "correct": it knows that, for example, the walls of a building are parallel, so if they converge in a photo, the picture contradicts reality. From a technical point of view, however, this is not distortion but a natural rendering of 3D space.

Distortion occurs when different parts of the image are magnified by different amounts. For example, if you shoot tall buildings from a low angle with the camera tilted, distortion is almost inevitable, especially with a cheap zoom lens. Give preference to fixed-focal-length lenses, or, if you need a variable focal length, to high-quality (expensive) glass.

How to avoid

First, get quality lenses. Think about the purpose of the shot: sometimes a wider-angle lens can save the day. And move your feet more: step back from the subject and use the zoom if you have a high-quality zoom lens.

Secondly, in some cases the problem can be solved by using an even wider-angle lens. Wide-angle lenses are among the three most sought-after types for professional photographers, alongside portrait and telephoto lenses. A wide-angle lens changes the perspective, pushing back the edges of the frame: near objects come closer, and distant ones recede even further. This later gives you a good chance to crop the picture more freely.

Thirdly, distortion can be removed from an already taken photo, bringing it to a more harmonious and proportional look, using the simple lens-correction tool in Adobe Photoshop or any other available graphics editor. Professionals often do this in their work as well.

But the most rational approach is to buy yourself a high-quality (expensive) lens in order to avoid optical distortions in your pictures. Although, in truth, it is worth noting that distortion is not an unambiguously negative effect. If you have ever photographed with a fisheye, you know it produces a look that many people like: bright and unusual, even though it is a vivid demonstration of distortion.

If you already understand during the shoot that distortion correction will be necessary, shoot "with a margin" at the edges of the frame right away: the composition you are building now will be cropped considerably when the distortion is compensated.

But don't chase the perfect lens: it doesn't exist. With current technology it is impossible to render an object in a photo exactly as it is in reality; there will always be slight distortions. Your task when choosing optics is to pick the lens that minimizes these imperfections.

Artistic tool

If you have ever held a fisheye lens, you have already seen a vivid example of distortion; on a fisheye it is a feature that everyone knows and likes. Photos taken with a fisheye are rarely corrected. Shooting with a fisheye produces a circular picture, even though the frame remains rectangular. Both Canon and Nikon make such lenses.

Tilt-shift lenses also introduce distortion when photographing, and lovers of architectural and technical photography resort to them deliberately: this tilt-and-shift optical design allows you to control perspective.

If you begrudge the money for such a lens, you can try to achieve a similar effect in Photoshop.

Getting rid of the problem in Photoshop

So, you have concluded that the distortions in your photo are visible to the naked eye, and you are wondering how to remove distortion in Photoshop. The whole thing will take just a couple of minutes. Go to Filter -> Distort -> Lens Correction (in other versions of the program, Filter -> Lens Correction) and move the slider left and right until you get the best result.

In Lightroom you will need the Develop module and its Lens Corrections panel. If you activate the "Enable Profile Corrections" option, the program corrects distortion automatically. If it makes a slight mistake, correct it manually with the Amount -> Distortion slider. And if you like to control everything, there is Manual, a fully manual mode for correcting curvature.

There are other correction programs as well, for example DxO OpticsPro, which corrects curvature (and more) automatically.

Please note that after the unwanted effect is compensated, empty space appears at the edges of the picture; you will have to crop it, and this can have a sad effect on the composition.

In principle, if the distortion is not so striking, you can not waste time on correction.

Conclusions

  • Do not spare money on good lenses; they will allow you to shoot with minimal post-processing of your photos.
  • If you really want to photograph a subject but don't have the right glass with you, it is better to shoot with distortion than not to shoot at all; you can then correct it in a photo editor.
  • Distortion can spoil your picture or make it look unusual. Evaluate whether distortion correction in Photoshop is necessary, or whether in this particular case you have a beautiful artistic effect. Leave it as is if the photo looks original.

Let the information be available, in a form that admits discretization, in the so-called image plane. An arbitrary point of this plane is specified by the radius vector x, and the functional dependence on x is written accordingly. The functional dependencies of all other quantities given in the image plane are represented in a similar way.

Let us now assume that the information undergoes a time-invariant distortion determined by a spread function: the value of the signal at each point is "smeared" over the image plane according to the form of that function. This means that only linear distortions are considered, so the distorted signal can be written in fairly general form as expression (3.2), in which the integration is over area elements of the image plane centered at points specified by their radius vectors. A double integral appears in expression (3.2) because the image plane is two-dimensional; the infinite limits simply mean that the integration covers the whole image.

If the distortion is of so general a character that expression (3.2) cannot be specialized and simplified, successful restoration of the original function from the distorted one is rarely possible. Widely applicable restoration and reconstruction methods have been developed for spatially invariant distortions (characterized by the blur being the same at all points x), and for distortions that can be reduced to spatially invariant form in one of two ways. The first is based on a geometric transformation of the image that turns the spatially dependent distortion into a spatially invariant one. In the second, an image with spatially dependent distortion is divided into a number of fragments, within each of which the distortion can be considered spatially invariant. Both methods are discussed in detail in § 15.

Spatial invariance means that the function defining the distortion depends only on the difference of its arguments, as in expression (3.3). If function (3.3) is substituted into expression (3.2), we obtain the so-called convolution integral. Denoting the convolution operation by an asterisk used as a multiplication sign, expression (3.2), taking equality (3.3) into account, can be written in the compact form (3.4).
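
The space-invariant smearing just described can be illustrated numerically. The sketch below is a plain-Python toy: the helper `convolve2d`, the delta image, and the 3x3 averaging kernel are illustrative assumptions, not taken from the text.

```python
# Discrete 2-D convolution as a model of a space-invariant distortion:
# every pixel of the scene f is smeared by the same kernel h, giving the
# blurred image g = h * f (cf. expressions (3.2)-(3.4)).

def convolve2d(f, h):
    """Full 2-D convolution of image f with kernel h (plain Python lists)."""
    fh, fw = len(f), len(f[0])
    kh, kw = len(h), len(h[0])
    g = [[0.0] * (fw + kw - 1) for _ in range(fh + kh - 1)]
    for i in range(fh):
        for j in range(fw):
            for a in range(kh):
                for b in range(kw):
                    g[i + a][j + b] += f[i][j] * h[a][b]
    return g

# A point source (delta image) convolved with the kernel reproduces the
# kernel itself -- which is why h is called the point-spread function.
delta = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
psf = [[1 / 9.0] * 3 for _ in range(3)]
blurred = convolve2d(delta, psf)
```

The delta-image check is exactly the sense in which the kernel characterizes the whole distortion: once the response to a point source is known, the response to any scene follows by superposition.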

Even if the distortion is spatially invariant, no a priori restrictions are imposed on the form of the convolution kernel. In practice, however, certain well-defined forms of this function are often encountered, four of which are given in Table 1.1 (see Example 1 at the end of this chapter). Linear blur occurs when the photographed object moves in a straight line during the exposure (or, equivalently, when the camera is accidentally shaken while the object is stationary). The smear profile shown in Table 1.1 for linear blur indicates how the photographed object moves during the exposure (a sharp cutoff of the profile at its edges corresponds to a very fast shutter response). If the height of the profile is constant during the exposure, the linear smear is called uniform.

Another common cause of photographic distortion is defocus. In this case the kernel has a form very close to a uniform disc. (This follows from simple considerations of geometrical optics: the disc is the intersection of the image plane with the cone of rays emanating from a distant point of the camera's field, rays which would converge to a point in the image plane if the camera were in focus, i.e., if the image plane coincided with the focal plane.) When an object is viewed through a turbulent medium with a high-resolution optical system, the distortion for a short exposure (during which the state of the medium has no time to change) is often well described by a kernel having the form of a set of random impulses. For long exposures, the shape of the kernel approaches a Gaussian. Although the causes of these four types of distortion may be very different, the forms mentioned above are perhaps the most typical.
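
Three of the four classic kernel forms just described can be sketched as small discrete arrays. The function names, sizes, and parameters below are illustrative; the random-impulse kernel is omitted because it depends on the particular realization of the turbulence.

```python
import math

# Sketches of three of the classic blur kernels described above
# (illustrative discretizations, not taken from Table 1.1). All kernels
# are normalized so their weights sum to 1.

def uniform_motion_psf(length):
    """Uniform linear smear: constant profile with sharp cutoffs."""
    return [1.0 / length] * length

def defocus_psf(radius):
    """Defocus: a uniform disc (from geometrical optics)."""
    size = 2 * radius + 1
    disc = [[1.0 if (x - radius) ** 2 + (y - radius) ** 2 <= radius ** 2
             else 0.0 for x in range(size)] for y in range(size)]
    total = sum(map(sum, disc))
    return [[v / total for v in row] for row in disc]

def gaussian_psf(sigma, radius):
    """Long-exposure turbulence: approximately Gaussian."""
    size = 2 * radius + 1
    g = [[math.exp(-((x - radius) ** 2 + (y - radius) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    total = sum(map(sum, g))
    return [[v / total for v in row] for row in g]
```

Normalizing each kernel to unit sum reflects the physical assumption that blur redistributes light without creating or destroying it.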

Let us now turn to image formation by an optical system separated from the object by a distorting medium. We will be extremely brief; a detailed analysis can be found in the literature. An arbitrary point of the plane on which the radiation is incident, introduced in § 1, is characterized by its radius vector. If the radiation field at each point is simply the field that would exist at that point in the absence of distortion, modulated in amplitude and phase, the distortion is called isoplanatic. Isoplanatism is a very simple concept, but one of great practical importance, so it is worth giving another definition of it. Consider a ray emanating from an arbitrary point of the radiation source and arriving at a given point; we characterize the attenuation and delay of this ray, corresponding to the distortion, by the modulus and phase of a complex number. The condition of isoplanatism is the independence of this complex number of the source point, i.e., equality (3.5).

We emphasize that, in practice, even with isoplanatic distortion the complex number can vary greatly from point to point in the aperture. The larger the linear dimensions of the radiation source, the less likely condition (3.5) is to be satisfied for a given distorting medium. Moreover, for condition (3.5) to remain valid, the size of the "cells" of the distorting medium must exceed a certain minimum determined by the geometry of the source and medium. Thus we arrive at the concept of an isoplanatism patch, whose size bounds the largest "effective size" of the radiation source. It is convenient to express the size of the isoplanatism patch in angular measure: if the apparent angular size of the radiation source is everywhere smaller than the isoplanatism patch, the distortion is isoplanatic.

Let us denote the radiation field at an arbitrary instant of time at a point by its value, and its Fourier image accordingly (§ 6). Assume that the point lies in the pupil plane (i.e., the plane of the aperture stop) of the imaging device (e.g., a telescope, an ultrasonic transducer, a radio antenna). If the focal surface of such a device is identified with the image plane introduced in § 1, the signal there will be the "instantaneous image" generated by this device.

Let us now introduce the concept of an analytic signal: a signal that has no negative temporal frequencies. An analytic signal is necessarily complex, and its imaginary part is related to its real part by a Hilbert transform; the actually measured signal is usually taken to be the real part of the analytic signal. The simplest analytic signal is a complex exponential with constant angular frequency and constant phase, whose corresponding real signal is a cosine. In this book analytic signals will appear rarely, and therefore we will not dwell on them here (an exhaustive presentation of the theory of analytic signals is given in the literature indicated in § 1). We emphasize, however, that wherever a signal explicitly depending on time is introduced, it will be considered complex and free of negative temporal frequencies.

The properties of the "image" formed by such a device depend on the degree of spatial coherence of the radiation source. In the generated image, the degree of spatial coherence finds expression in how the time-averaged quantity (3.6) depends on the chosen pair of points, where the averaging is taken over a time interval large enough for the application under consideration. Complete coherence occurs when quantity (3.6) is nonzero for any two points at which the field values are finite. In the case of complete spatial incoherence, quantity (3.6) equals zero whenever the separation of the two points exceeds the smallest linear size of the smallest detail resolvable by the imaging device.

Note that a bar over any time function in this book always denotes time averaging.

Radiation with spatial coherence intermediate between complete and zero is almost never used, and therefore only the limiting cases of complete spatial coherence and complete spatial incoherence are considered. These extreme cases are, of course, idealizations, but in practice one or the other can be approximated: the former by the radiation of radio and microwave transmitters, ultrasonic transducers and lasers, the latter by various natural sources of radiation. It therefore makes sense to consider only these two limiting cases of coherence.

When assessing the degree of spatial coherence it is customary, for convenience, to consider individual spectral components of the image and the radiation, treating them as monochromatic. For example, the instantaneous image is considered in that form, and the ideal recorded image, which we will denote by a separate symbol, is expressed through it as in definition (3.7).

Note that the time averaging in definition (3.7) must be carried out over a large number of periods of the center frequency of the field incident on the focal surface of the imaging device. The interval of such averaging is usually a small fraction of the duration of the actual recording process (for example, a film exposure, the scanning of one element of a multi-element photodetector, or the accumulation of a sufficiently large microwave receiver signal). Note that a million periods of visible light lasts only a few nanoseconds, and over most of the microwave range the averaging interval spans more than a thousand periods. From the point of view of image processing, the difference between spatial coherence and spatial incoherence reduces to the following:
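
The "few nanoseconds" figure can be checked directly. The 550 nm wavelength chosen below is an illustrative value for visible light, not a number taken from the text.

```python
# Check of the time scales quoted above: the period of one oscillation of
# visible light (about 550 nm assumed here) and the duration of a million
# such periods.

C = 299_792_458.0  # speed of light in vacuum, m/s

def period_seconds(wavelength_m):
    """Oscillation period of light of the given wavelength."""
    return wavelength_m / C

t_million = 1_000_000 * period_seconds(550e-9)
print(f"one period: {period_seconds(550e-9):.3e} s, "
      f"a million periods: {t_million:.2e} s")
```

A million periods comes out to roughly 1.8 ns, consistent with the claim in the text.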

In this book, processing of images formed by spatially coherent fields is not considered, mainly because of the practical difficulties associated with implementing "optical" computations (§ 2). In what follows, unless otherwise stated, it is assumed that

If we neglect the noise inevitably introduced when recording images, and also consider the distortion to be ideally isoplanatic, the recorded function coincides with the function in formula (3.4). This is a consequence of the convolution theorem for Fourier images (see § 7, and also § 8, where imaging of spatially incoherent sources is considered). In accordance with condition (3.9), throughout this book, unless otherwise stated, it is assumed that

We emphasize that the image is diffraction-limited, since the diameter of the aperture (or pupil) of any imaging device is necessarily finite. If λ is the central wavelength of the radiation, the imaging device cannot resolve details of the true source distribution that subtend angles smaller than the diffraction limit. Super-resolution is possible in principle, but only on the condition that the dimensions of the resolved details in the original image significantly exceed the size of one image element.
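
For a circular aperture the diffraction limit is commonly estimated with the Rayleigh criterion, θ ≈ 1.22 λ/D. This is a standard optics result rather than a formula of this text, and the wavelength and aperture used below are illustrative.

```python
# Rayleigh criterion for a circular aperture: the smallest resolvable
# angle is about theta = 1.22 * wavelength / aperture_diameter (radians).
# Illustrative numbers: green light (550 nm) and a 100 mm aperture.

def rayleigh_limit(wavelength_m, aperture_m):
    """Approximate angular resolution limit, in radians."""
    return 1.22 * wavelength_m / aperture_m

theta = rayleigh_limit(550e-9, 0.1)
arcsec = theta * (180 / 3.141592653589793) * 3600
print(f"{theta:.3e} rad (~{arcsec:.2f} arcsec)")
```

The formula makes explicit why a larger aperture (or a shorter wavelength) resolves finer angular detail.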

The distortions discussed so far in this section can be compensated by the methods described in Chapters 3 and 6. The methods introduced in Chapters 7–9 are suitable both for compensating these distortions and for correcting geometric distortions and improving the visual quality of images (see the corresponding definitions in § 2).

Image distortions are caused not only by the propagation medium and by imperfections or incorrect adjustment of the imaging device. Sometimes they arise because conditions do not permit certain measurements, or because some essential data are missing, as in the problems considered in Chapter 4. In other cases they may be due to a measurement procedure which, although ideal in the limit, introduces distortions that make the images practically unusable without additional processing, as in the applications discussed in Chapter 5.

Lens correction helps compensate for imperfections present in almost every shot: darkening at the edges of the frame, curved straight lines, and colored fringing around objects. While these flaws are often invisible in the original photo, removing them is almost always an advantage. If you are not careful, however, lens correction can actually make your shots worse, and depending on the subject, some imperfections can even be beneficial.

Before editing

After editing

The result after getting rid of vignetting, distortion and chromatic aberration. The difference becomes even clearer when viewed in full screen mode.

Overview

The three most common lens corrections address the following issues:

Vignetting

Distortion

Chromatic aberration

  1. Vignetting. Its effect is a gradual darkening at the edges of the image.
  2. Distortion. Straight lines curve inward or outward.
  3. Chromatic aberration. This problem shows up as colored fringing around high-contrast edges.

However, lens correction software can usually only correct some types of each imperfection, so the key is to recognize them. The following sections will describe the types and causes of each defect. You'll learn when to apply correction, and how to minimize imperfections in the first place.

Most programs will work for this tutorial, but the most popular options are among others: Adobe Camera RAW, Lightroom, Aperture, DxO Optics, and PTLens.

1. Vignetting

This defect is described as a gradual decrease in light around the edges of the photo and is perhaps the most noticeable and easy to fix problem.

Internal vignetting

Physical vignetting

Note that internal vignetting is most problematic only in the top left and bottom right corners due to the subject, even though the effect is applied equally from all sides.

Defect fixed

Vignetting can be broken down into two main categories:

Physical. Often it cannot be fixed except by cropping or by manually brightening/cloning. It appears as a strong, abrupt darkening, usually only in the very corners of the picture. The causes are stacked or oversized filters, lens hoods, or other objects physically blocking light at the edges of the frame.

Internal. Usually easy to correct. It appears as a smooth, often slight darkening growing outward from the center of the image, and is caused by the internal design of the particular lens or camera. This type usually becomes most noticeable at small f-numbers (wide apertures), with zoom or wide-angle lenses, and when focusing on distant subjects. Digital SLRs with cropped sensors are usually less prone to vignetting because the dark edges are simply cropped away (unlike on full-frame models).

  • Technical note: Internal vignetting comes in two categories: optical and natural. The former can be minimized by stopping down the lens (using larger f-numbers), but the latter is independent of the lens settings. Natural vignetting is therefore unavoidable unless you use a lens with a smaller field of view or a special corrective filter that attenuates light toward the center of the image (rarely used outside large-format photography).
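
As an aside, natural vignetting in a simple lens is often approximated by the cos⁴ falloff law. This is textbook optics rather than a claim of this tutorial, and the field-of-view angles below are illustrative.

```python
import math

# Approximate natural vignetting with the cos^4 law: relative illumination
# at off-axis angle a is cos(a)**4 (a standard thin-lens approximation;
# real lens designs deviate from it). The angles below are illustrative.

def natural_falloff(angle_deg):
    """Relative corner brightness at the given off-axis angle."""
    return math.cos(math.radians(angle_deg)) ** 4

# A wide-angle lens that puts the frame corner 35 degrees off axis loses
# far more light there than a normal lens whose corner sits at 20 degrees.
for a in (0, 20, 35):
    print(f"{a:2d} deg off axis: {natural_falloff(a):.2f} of center brightness")
```

The steep fourth-power dependence is why natural vignetting matters mostly for wide-angle designs.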

Correction

Vignetting can often be corrected with the amount slider alone, although sometimes you may need to shift the correction center with the midpoint slider (rarely required). Keep in mind that the correction increases the noise at the edges, since digitally brightening the image amplifies noise and signal equally.

Vignetting adjustment sliders in Photoshop.
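
What an amount-style slider does can be sketched as a radial gain map. This is an illustrative toy model (the gain formula and numbers are assumptions), not the actual algorithm of Photoshop or any other editor.

```python
import math

# Sketch of vignetting correction as a radial gain map: each pixel is
# multiplied by a gain that grows with distance from the image center.
# Signal AND noise at the edges are amplified by the same factor, which is
# why corrected corners come out noisier.

def correction_gain(x, y, w, h, amount):
    """Gain >= 1 increasing toward the corners; amount=0 means no change."""
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    r = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)  # 0 center, 1 corner
    return 1.0 + amount * r ** 2

def correct_vignetting(img, amount):
    h, w = len(img), len(img[0])
    return [[img[y][x] * correction_gain(x, y, w, h, amount)
             for x in range(w)] for y in range(h)]

# A flat grey field darkened toward the corners by the inverse gain...
vignetted = [[100 * correction_gain(x, y, 5, 5, 0.5) ** -1
              for x in range(5)] for y in range(5)]
flat = correct_vignetting(vignetted, 0.5)   # ...comes back flat
```

Because the corner gain here is 1.5x while the center gain is 1.0x, any sensor noise in the corners would be boosted by the same 1.5x, matching the caveat above about edge noise.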

Artificial vignetting. Some photographers deliberately add vignetting to their photos to draw attention to the central subject and make the edges of the frame less sharp. However, you may need to apply the effect after the photo has been cropped (sometimes referred to as "post-crop vignetting").

2. Distortion: Barrel, pincushion, and perspective

This kind of imperfection makes straight lines look curved outward or bowed inward, and can also affect perceived depth.

Pincushion distortion

Barrel distortion

The most common categories of distortion include:

Pincushion. Straight lines appear to bend inward, toward the center of the picture. It usually appears with telephoto lenses or at the telephoto end of a zoom lens.

Barrel. Straight lines curve outward. It most often occurs with wide-angle lenses or at the wide-angle end of a zoom lens.

Perspective distortion. Parallel lines appear to converge. The cause is that the camera is not pointed perpendicular to those parallel lines; when shooting trees and architecture, this usually means the camera is not aimed at the horizon.

In landscape photography, distortion of the horizon and of trees is usually the easiest to spot. Placing the horizon along the center of the image helps minimize the appearance of all three types of distortion.

Blue dot - camera direction; red lines are converging parallel lines.

  • Technical note: Perspective distortion is not exactly true distortion due to the fact that it is a natural characteristic of 3D vision. We see it with our own eyes, but our brain knows the correct location of objects in 3D space and therefore does not perceive the lines as converging. If you want to learn more, check out our tutorials on wide angle lenses and using tilt-shift lenses to control perspective.

Correction

Fortunately, each of the above types can be corrected. However, this should only be done when necessary, for example with subjects containing straight lines or strong geometry. Architectural photography is the most sensitive case, while landscape photography is usually far more forgiving.

Distortion Adjustment Sliders in Photoshop

Editing software usually has sliders for pincushion and barrel distortion correction, as well as horizontal/vertical perspective correction. Be sure to use the grid overlay feature (if available) so that you can judge the correction accurately.

Drawbacks

Correcting distortion usually requires cropping the curved edges of the frame, which can affect the composition. It also redistributes the image resolution: removing pincushion distortion sharpens the edges slightly (at the expense of the center), while removing barrel distortion sharpens the center (at the expense of the edges). With a wide-angle lens, keep in mind that barrel correction stretches the edges, further softening what is often already the softest part of the frame.
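The sliders described above can be thought of as a radial remapping: every output pixel looks up a source pixel whose distance from the center has been rescaled. A minimal numpy sketch follows; the one-term radial model, the sign convention, and the function name are illustrative assumptions (real tools use more coefficients and smooth interpolation):

```python
import numpy as np

def undistort(img, k1):
    """Correct radial distortion with a one-term radial model.

    For every output pixel we sample the source at
        r_src = r * (1 + k1 * r**2),
    where r is normalized to the half-width/half-height of the frame.
    k1 < 0 undoes barrel distortion, k1 > 0 undoes pincushion
    (sign conventions vary between tools). Nearest-neighbour
    sampling keeps the sketch short.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    y, x = np.mgrid[0:h, 0:w]
    xn, yn = (x - cx) / cx, (y - cy) / cy   # normalized coordinates
    scale = 1 + k1 * (xn**2 + yn**2)
    xs = np.clip(np.round(xn * scale * cx + cx), 0, w - 1).astype(int)
    ys = np.clip(np.round(yn * scale * cy + cy), 0, h - 1).astype(int)
    return img[ys, xs]
```

The clipping at the frame border is what produces the curved, unusable edges mentioned above, which is why a crop normally follows the correction.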

3. Chromatic aberration

Chromatic aberration (CA) appears as unsightly colored fringing around high-contrast edges. Unlike the other two drawbacks, chromatic aberration is usually only noticeable at high magnification on a computer or in a large print.

Photo before correction

Before and after with 100% zoom

The above correction is effective because the CA was mostly of the easily removed lateral type.

Types and Causes

Chromatic aberration is perhaps the most varied and complex defect, and its appearance depends heavily on the subject. Fortunately, CA is easier to understand when divided into at least three phenomena:

Lateral.

Axial.

Blooming.

  • Technical note: Pure lateral CA occurs when the color components of an image are captured at different relative sizes (but all of them are sharply in focus). With axial CA, the color components appear at the same relative size, but some of them are out of focus. Blooming, on the other hand, occurs when both problems appear on a small scale at the sensor's microlenses, rather than across the entire image at the camera's lens.

Lateral. The easiest type to correct. It appears as opposing two-tone fringes extending radially from the center of the image and growing toward the edges. The most common color combination is cyan/magenta, along with a potential blue/yellow component.

Axial. Correctable only partially, or only at the cost of side effects. It appears as a single-color glow around contrasting edges and varies less with position in the picture. The glow is often purplish, and its color and extent can sometimes be changed by shifting focus slightly forward or backward.

Blooming. Usually correctable. A phenomenon unique to digital sensors, in which excess light from clipped highlights spills over at the sensor level, producing a color fringe that is usually blue or magenta. It is most visible around harsh, clipped specular highlights on high-resolution compact cameras. A classic example is the edges of treetops and foliage against a bright white sky.

All images contain some combination of the above types, although their relative prominence can vary greatly with the content of the image and the lens. Lateral and axial CA are more common with inexpensive lenses, while blooming is typical of older compact cameras; all aberrations, however, are more noticeable at higher resolutions.

  • Technical note: Although axial CA and blooming are usually distributed evenly around all edges, they may not appear equally in every direction, depending on the color and brightness of a particular edge. Because of this, they can often be confused with lateral CA. Lateral and axial CA are sometimes also called transverse and longitudinal CA, respectively.
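Since lateral CA amounts to the color channels being captured at slightly different magnifications, correcting it amounts to rescaling the red and blue planes about the image center until their edges line up with green. A minimal numpy sketch, with function names and the nearest-neighbour resampling as illustrative assumptions (real converters interpolate smoothly and may use radius-dependent scaling):

```python
import numpy as np

def scale_channel(ch, s):
    """Resample one channel about the image center with magnification s
    (s is very close to 1, e.g. 0.999-1.001; nearest-neighbour)."""
    h, w = ch.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    y, x = np.mgrid[0:h, 0:w]
    xs = np.clip(np.round((x - cx) / s + cx), 0, w - 1).astype(int)
    ys = np.clip(np.round((y - cy) / s + cy), 0, h - 1).astype(int)
    return ch[ys, xs]

def fix_lateral_ca(rgb, red_scale=1.0, blue_scale=1.0):
    """Shrink or grow the red and blue planes so their edges align with
    green -- the digital counterpart of the two fringe sliders."""
    out = rgb.copy()
    out[..., 0] = scale_channel(rgb[..., 0], red_scale)
    out[..., 2] = scale_channel(rgb[..., 2], blue_scale)
    return out
```

Because the channel misalignment grows radially, the effect of even a tiny scale factor is largest in the corners, which is why lateral CA is judged there.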

Correction

Reducing chromatic aberration can make a huge difference to image sharpness and quality, especially near the edges of the frame. However, only some CA components can be removed. The trick is to recognize each component and apply the right tool to it without degrading the others. For example, reducing axial CA in one area (by misusing the tools intended for lateral CA) will make other areas worse.

Chromatic Aberration Correction Sliders in Photoshop

Start with high-contrast edges near a corner of the photo, viewing it full screen at 100-400% zoom so you can see how effective the correction is. It is usually best to begin with the lateral CA, using the red/cyan and then the blue/yellow sliders, since it is the easiest to remove. Whatever remains afterward is a combination of axial CA and blooming, which you can clean up with the Defringe tool in Photoshop. Whatever settings you start with, experimentation is the key to the best results.

This crop is taken from the top left of the sunset shot above.

However, don't expect miracles; almost always some proportion of blooming and axial CA will remain. This is especially true with bright lights at night, stars, and direct reflections on metal or water.

Axial CA and blooming

Defects reduced (but still present)

Automatic Lens Correction Profiles

Modern RAW converters often include a lens correction feature with preset parameters for a huge number of camera and lens combinations. When available, it can save a lot of time. Adobe Camera RAW (ACR), Lightroom, Aperture, DxO Optics, and PTLens all offer it in their latest versions.

Don't be afraid to use settings other than the 100% default (full correction). Some photographers, for example, prefer to keep a little vignetting and distortion while fully correcting chromatic aberration. In the case of CA, though, the best results are usually achieved manually.

If lens correction is part of your post-processing workflow, the order of operations can affect the result. Noise reduction is usually more effective before CA removal, while sharpening should be done afterward, since it can interfere with CA cleanup. If you work in RAW software, though, you don't have to worry much about the order: all corrections are applied intelligently.