the hulk wrote in post #18245777
I always do that, but it's not enough to reduce the halo effect in this case.
I would like some kind of intelligent tool, a mix of Lightroom's CA/Color Defringe and the cloning tool. All the colored pixels the defringe tool gets rid of should be replaced by the cloning tool with nearby pixel colors in a smooth blend.
That would be good, but which side should it clone from? Both? Light to dark? Dark to light? Should it clone a strip that is the width of the "line", or should it just take the very last "good" colour and extend that across? The answer is usually all of the above, and often all in the same image. It is computationally very difficult when dealing with complex two-dimensional data arrays, as you have in the case of a photograph.

It's not too difficult to compute the steps required to draw a single-pixel-wide line at any arbitrary angle on the grid, although you may need a very long line to resolve very small differences in angle. It is also pretty trivial to do a two-dimensional edge-finding routine, and of course we already know where we have changed the colour. For lines that run along the axes of the array grid it is not too bad: it is quite easy to check that there is no other "line" in close proximity to the pixels that need cloning (if you are going for the full-width clone option) and then simply copy x pixel values across. This would be the same whether cloning from a single side or from both.

The difficulty is when the "line" of the CA is not parallel to the grid axes of the image array. It becomes quite hard to calculate the angle from the data, especially with only short runs, and you need to know the correct angle of the line that was corrected, because you must interpolate the data correctly to make it look as though you cloned across orthogonally from the line.

That's just for the parts of the CA that are straight lines, and of course that is not always the case; often you will get CA around edges that are actually complex curves. At that point you need to compute your interpolated pixels for cloning based on a line orthogonal to the tangent of the curve at each point. That again is not too hard where you calculated the curve in the first place, since a calculation that delivers up a curve will just as easily give us the tangent at any point along it. Calculating the curve from the data, though, is quite tricky. And remember that would be just for one edge of it: most cases of CA are more than a single pixel wide, so you would have to treat the curve as a two-dimensional object, which adds issues, since the chord length on the inside of the curved line will be shorter than the chord on the outside.
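To make the edge-angle part concrete, here is a minimal sketch, in C, of the classic Sobel gradient trick: the local gradient points straight across an edge, so its angle gives you both the direction the "line" runs and the orthogonal direction you would clone along. This is my own illustration, not anything Lightroom does; the image layout (8-bit greyscale, row-major) and the function name are assumptions.

```c
#include <math.h>

/* Local gradient direction of an 8-bit greyscale, row-major image at
 * (x, y), using the 3x3 Sobel kernels.  The gradient points across an
 * edge, i.e. along the direction you would clone replacement pixels
 * from; the edge itself runs at 90 degrees to the returned angle.
 * Assumes 1 <= x < w-1 and 1 <= y < h-1 so the kernels fit. */
static double gradient_angle(const unsigned char *img, int w, int x, int y)
{
    double p[3][3];
    for (int j = -1; j <= 1; ++j)           /* fetch 3x3 neighbourhood */
        for (int i = -1; i <= 1; ++i)
            p[j + 1][i + 1] = img[(y + j) * w + (x + i)];

    /* Sobel responses: gx reacts to vertical edges, gy to horizontal. */
    double gx = (p[0][2] + 2 * p[1][2] + p[2][2])
              - (p[0][0] + 2 * p[1][0] + p[2][0]);
    double gy = (p[2][0] + 2 * p[2][1] + p[2][2])
              - (p[0][0] + 2 * p[0][1] + p[0][2]);

    return atan2(gy, gx);   /* radians, all quadrants handled */
}
```

In practice you would only trust that angle where the gradient magnitude sqrt(gx*gx + gy*gy) is above some threshold; on flat or noisy areas atan2 returns an essentially random direction, which is exactly the "short runs" problem above.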
The problem is that computers are actually very poor at seeing patterns in data that humans spot instantly, even when programmed by humans! Humans are so good at seeing "patterns" that we see them even when they don't really exist. So for a human, spotting the corrected areas of CA and knowing where to clone the data in from is a really simple visual task, and one we are very adept at, since spotting irregularities in our field of view has been keeping us alive for millennia. Those who didn't have those skills were weeded out of the gene pool by the simple expedient of becoming dinner for a saber-toothed tiger, or some other large, or indeed small, predator. So not only can we easily see the problem, we can also very easily see where we need to pick up the data to fix it, something that is very difficult to program a computer to do. The catch, of course, is that for us humans it takes such a long time to do the job by hand. I suppose that as Artificial Intelligence programming improves this will become something a computer can manage.

It is a bit like catching a ball in the hand, something most humans are capable of at an instinctive level: we see the ball and can almost instantly estimate the flight path, which we then update based on feedback from our eyes as unseen effects such as the wind change that path, and then we control our arm to put our hand in just the right place to catch the ball out of the air. Even a supercomputer, on the other hand, is going to struggle to calculate the flight path before the ball gets to where it is going, let alone use the feedback from a vision system to make real-time updates and predict a path that allows interception by a hand-sized object. This is extraordinarily difficult computationally, since it requires solving multiple coupled second-order differential equations simultaneously and in real time. Although it may not seem so on a basic level, the CA problem is of a similar level of difficulty for an AI system.
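Just to put a number on the ball-catching example, here is a toy C sketch of the brute-force part of the job: integrating the coupled equations of motion (gravity plus air drag) step by step to predict where the ball lands. All the constants are invented purely for illustration, and a real tracker would refit the state from camera data on every frame rather than run once.

```c
#include <math.h>
#include <stdio.h>

/* Toy 2D ball-flight prediction: Euler-integrate
 *   x'' = -k*v*vx,   y'' = -g - k*v*vy
 * (quadratic air drag) until the ball reaches the ground. */
int main(void)
{
    double x = 0.0, y = 2.0;      /* position, metres (made up)  */
    double vx = 12.0, vy = 8.0;   /* velocity, m/s (made up)     */
    const double g = 9.81;        /* gravity, m/s^2              */
    const double k = 0.05;        /* lumped drag coefficient     */
    const double dt = 0.001;      /* time step, seconds          */

    while (y > 0.0) {
        double v = sqrt(vx * vx + vy * vy);
        vx += -k * v * vx * dt;          /* drag slows both axes */
        vy += (-g - k * v * vy) * dt;    /* gravity plus drag    */
        x  += vx * dt;
        y  += vy * dt;
    }
    printf("predicted landing at x = %.2f m\n", x);
    return 0;
}
```

The human brain does the equivalent of this, plus the visual tracking and the arm control, without any conscious effort at all.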
Although it's now getting on for 25 years ago, I have had a little experience of this sort of thing. When I did my Electronics Engineering degree, the lecturer who taught us C Programming for Engineers had machine vision as his field of study, specifically in cell biology, so most of our coursework was writing image-processing filters. I remember doing edge detection and a couple of different interpolation algorithms, although nothing as advanced as Bicubic. But this was back in 92/93, and IIRC Photoshop was only on about version 1 or 2 then, and I don't think it had Bicubic yet either. Fractals were the really new big thing at the time, and the talk was that they might even be useful for interpolation. So I guess AI, which has been the next big thing in computing for even longer, might actually solve this problem, if they ever get AI systems actually working, that is.
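For what it's worth, the interpolation exercises from that era would have looked something like this bilinear resampler; a reconstruction of the flavour of that coursework from memory, not the original code.

```c
/* Bilinear interpolation: sample an 8-bit greyscale, row-major image
 * at a fractional coordinate (fx, fy) by blending the four surrounding
 * pixels, weighted by proximity.  Bicubic does the same idea but fits
 * a cubic through a 4x4 neighbourhood for a smoother result.
 * Assumes 0 <= fx < w and 0 <= fy < h. */
static double sample_bilinear(const unsigned char *img, int w, int h,
                              double fx, double fy)
{
    int x0 = (int)fx, y0 = (int)fy;         /* top-left of the 2x2 cell */
    int x1 = x0 + 1 < w ? x0 + 1 : x0;      /* clamp at the right edge  */
    int y1 = y0 + 1 < h ? y0 + 1 : y0;      /* clamp at the bottom edge */
    double tx = fx - x0, ty = fy - y0;      /* fractional offsets       */

    double top = (1 - tx) * img[y0 * w + x0] + tx * img[y0 * w + x1];
    double bot = (1 - tx) * img[y1 * w + x0] + tx * img[y1 * w + x1];
    return (1 - ty) * top + ty * bot;
}
```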