And now for something completely different.

Sometimes, when I try to describe to others what I do as a color scientist, I am asked if I can fix their photos. Usually it is to make their printer output look more like their monitor, but a few years ago a friend asked how to correct the underwater pictures he takes while scuba diving. It turns out that this is an unsolved problem in color science. It was intriguing enough that I spent some time studying it and worked out a solution for a simple geometry, using my friend’s images as test cases.

Five years later I decided to submit a summary of this work to the international Color Imaging Conference, held this year in Lillehammer, Norway. Remarkably, it was accepted, and doubly remarkably, it was runner-up for the coveted “Cactus Award” for interactive poster papers.

Here are the posters I presented for this work.

Questions?  Don’t hesitate to ask.

Underwater Color Correction.2017.poster.pg1.8x15-1

Underwater Color Correction.2017.poster.pg2.8x15-1


Eddington epilogue

I was lucky to have ended up at this observing location with such excellent weather. When planning to view total eclipses, I am advised to arrange for other activities as well; the eclipse itself is subject to fickle viewing conditions (my one prior total solar eclipse effort was thwarted, but the travel experience was rewarding nevertheless).

While I was mesmerized by the experience and NOT taking pictures, my script-driven camera captured more than just background stars for measuring gravitational deflections. Here are my three favorites.

3472-3479.hdr-1.bw

My HDR composite spanning 14 stops of exposure and showing some of the structure in the corona.

_MG_3524.C3

A shot at third contact showing some prominences and the emergence of two “beads”.

_MG_3527.diamondRing.crop

The end of totality, marked by the “diamond ring”, left a lasting impression: a signature of the unique event we had just witnessed.

 


Discussion

It is a bit disappointing to be unable to show a clear gravitational signal, even with all of the successful exposures that were taken, but I recognized the difficulty of this measurement early on. In addition to the variables I anticipated, there are some additional uncertainties that I now recognize.

Here is my updated list of confounding variables:

  1. Lens distortion. This seems to be the largest one, measuring many arcseconds by the time one reaches the edge of the frame. There are two ways to combat it: take the before and during images with the exact same center and orientation on the sky (which I found impractical with my equipment), or calibrate the lens. I did the latter, but found that this too was sensitive to overall gain/magnification assumptions.
  2. Atmospheric seeing. The variation in positions due to the stars’ twinkling is about 2 arcseconds. I attempted to mitigate this by averaging multiple exposures, but more were needed than I obtained.
  3. Centering and orientation. I did the best I could to position and orient the reference stars to the coordinates of the sun’s center during mid-eclipse, but the rigid transform technique requires an adequate collection of uniformly distributed stars. With only five or six, there may have been biases in this calibration on the order of arcseconds.
  4. Temperature variation. My reference images were taken during Minnesota spring and summer evenings, usually light-jacket weather. The eclipse pictures were taken at midday in Idaho. Although we noticed a cool-down during the eclipse, only a few of us felt the need to put on our fleece. Silicon has a thermal sensitivity of 2.5 ppm per kelvin, which could account for some of the error. Overall, however, this is small compared to the uncertainties displayed, since even a 10-kelvin difference would produce an error of only about 0.02 arcseconds per thousand of radial distance. There may be other artifacts of temperature change, however; see the next item regarding focus.
  5. Focus variations. As one focuses the image on the detector, there is a geometric gain involved. I measured the travel on my telescope focuser for one turn of the fine-focus knob, and noted that best focus could be determined to within 1/8 turn of that knob. That works out to 0.3 mm, which, at the focal plane of a 480 mm lens, is about 0.6 arcseconds per thousand, a significant amount! (A small arithmetic sketch of this and the temperature term follows this list.)
  6. Algorithm sensitivities. The before images were taken at night, sometimes with a partial moon providing background illumination of the sky. The during images were taken in the presence of the corona, which adds a strong offset to the background level, one with a directional gradient as well. It is possible that the first stage of processing, starPos.m, was influenced by this difference. I do not have any estimates of its sensitivity.
  7. Published star positions. I used reference star locations from Stellarium, which uses the latest publicly available database of star locations. I used J2000 epoch numbers out of habit, but perhaps I should have used current date coordinates. This would only affect the errors comparing my observed positions with the published ones, not the errors between observed “before” and “during” star positions.
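As a quick sanity check on items 4 and 5, here is a small MATLAB arithmetic sketch (just the numbers quoted above, with the 10-kelvin temperature difference as the assumed worst case), expressing each effect as arcseconds of radial error per thousand arcseconds of radial distance:

% Error-budget sketch for the temperature and focus terms above, expressed as
% arcseconds of radial error per 1000 arcseconds of radial distance.

% 4. Thermal expansion of the silicon sensor
alphaSi = 2.5e-6;                             % sensitivity, per kelvin
deltaT  = 10;                                 % assumed temperature difference, K
thermalErr = alphaSi * deltaT * 1000          % a few hundredths of an arcsec: negligible

% 5. Focus (geometric gain) uncertainty
focalLength = 480;                            % telescope focal length, mm
focusRepeat = 0.3;                            % focus repeatability, mm (1/8 turn)
focusErr = focusRepeat / focalLength * 1000   % ~0.6 arcsec per thousand: significant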

 

While I am not surprised at the failure to find the gravitational deflection signal, I am disappointed I did not get a bit closer. Regardless, it has been a wonderful project to undertake. I learned much and re-learned more. I hope the descriptions of the process have been enlightening. If you have read this far, perhaps you have found the narration worthwhile or even enjoyable. Best wishes and clear skies to all future solar eclipse observers!

 


Analyzing Eclipse Day Results

I was able to obtain 35 photos during totality that were candidates for locating stars in the field. The exposures ranged from 1/60 to 2 seconds, but after applying the detection procedure starPos.m it became clear that only the longest exposures, 1 and 2 seconds, would yield detected stars. The inner regions of the corona were just too bright and irregular for the algorithm to find any stars.

This left 15 images to work with. The camera orientation was good in that there were many candidate reference stars in the frame. Here is the mapping of a mid-eclipse exposure (3496):

analysis.1

 

Here is a map of the located stars. The color codes indicate the channels (red, green, or blue) in which each star was found; white indicates a star found in all channels. The number of located stars for this image is: 6 red, 9 green, 5 blue.

analysis.2

 

The detected stars are then mapped to our virtual camera view and correlated against our list of reference stars (imagePos.m). They are then more precisely aligned to the center of the sun. The lens distortion is removed at this stage (radialAlign.m).

The uncorrected lens positions look like this:

analysis.3

Lens corrected:

analysis.4

The errors are still rather large, but by collecting the statistics from all of the “during” frames, we can see how they land with respect to their published reference positions. The standard deviations are consistent with atmospheric seeing, but the differences in average positions indicate other sources of error.

analysis.5

 

If we average all of the “before” images I took and see how they compare to the published star locations, we get a similarly wide-ranging plot; the variances are again consistent with the seeing (but the average errors are not):

analysis.6

 

If we take the difference between these two data sets, we should see the gravitational deflection signal we are looking for. Unfortunately, it is lost in the noise.

analysis.7
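The differencing step itself is conceptually simple. Here is a minimal MATLAB sketch of the idea (the variable names are hypothetical; the actual bookkeeping lives in my analysis scripts):

% Sketch of the before/during comparison (hypothetical variable names).
% rBefore, rDuring: radial distances (arcsec) from the sun's center for each
% matched reference star; one row per star, one column per exposure.
meanBefore = mean(rBefore, 2, 'omitnan');       % average over "before" exposures
meanDuring = mean(rDuring, 2, 'omitnan');       % average over "during" exposures
deflection = meanDuring - meanBefore;           % positive = star appears pushed outward

% Standard error of each star's difference, to judge significance
seBefore = std(rBefore, 0, 2, 'omitnan') ./ sqrt(sum(~isnan(rBefore), 2));
seDuring = std(rDuring, 0, 2, 'omitnan') ./ sqrt(sum(~isnan(rDuring), 2));
seDiff   = hypot(seBefore, seDuring);

errorbar(meanBefore, deflection, seDiff, 'o');
xlabel('Radial distance from the sun''s center (arcsec)');
ylabel('During minus before (arcsec)');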

 

I can make a plot similar to Eddington’s that shows the average measured deflection of these reference stars, but I will not claim that it demonstrates gravitational deflection.

analysis.8

Comparing Before and During images, step EE-4

This was written prior to eclipse day as I was contemplating how to compare the two image sets.  I include it here to keep the thought sequence intact.

 

When we apply steps EE-1, 2, and 3 to both the before images and the during images, we will have a set of radial distances to compare. In the best of conditions, the distances will be very close to each other. There will be measurement noise and it is unlikely that the subpixel difference we are looking for will be immediately obvious from the measurement of any single reference star.

There are some tests we can make ahead of time to see what to expect. For example, we can compare images taken on different (before) dates to see if they show consistent positions of the reference stars. We can compare images taken in a single session to measure the effects of atmospheric seeing and other factors.

To this end, I have made various images of this field of the sky over the months preceding the eclipse. They are among the least interesting of the astrophotos I have ever taken, since they show a single bright star (Regulus) and not much more. There are no deep sky objects of interest in this particular patch: no galaxies, no nebulas, no star clusters, no Milky Way field. However, there are stars that can be detected, even in metropolitan light pollution, that match up with the Stellarium reference star database.

I took exposures for two consistency tests. The first is multiple exposures of the exact same scene, with no changes in any settings. Ideally, the images would be identical; if not, the differences would be a measure of the dynamic atmospheric distortions, or perhaps the mechanical vibrations of my camera-telescope-tripod setup.

The second test changes the camera angle. The center of view remains approximately the same, but the camera is rotated to 45 and 90 degrees. Each change requires a re-focus. If my frame centering and lens radial corrections are accurate, the detected stars should map to the same locations.

Even if my lens corrections were imperfect, the first test (multiple exposures with no changes) should pass. Perhaps the lens corrections did not place the detected stars at their exact reference locations, but at least they should all fall in the same place. Comparing them against their references might show an error, but comparing them against each other, the differences should vanish.

The second test, comparing images of the same field at different angles (portrait vs. landscape, etc.), is a simulation of pictures taken at different times. It is a “best case” test: everything is the same except that the camera was rotated and refocused. In the real-world case of the “during” image, not only is the angle different, but the telescope alignment will have a slightly different center, the angle in the sky (and its atmospheric refraction) will be different, the temperature, elevation, and air pressure will be different, and many other uncontrolled (and unknown) variables will differ from those of the before image. This test tells us the best we can expect from comparing before and during images.

Here are the results of the first test, where a second image is compared to the first, all else being equal. The stars should be detected at the same exact positions, and yet they aren’t.

compareBeforeAndDuring

The differences between the positions of the same stars in two successive exposures. There are approximately eight stars detected (in green, fewer in the other channels). This is an indication of the variations introduced by the atmospheric seeing.

 

The standard deviation of this comparison was 2.3 arcseconds. Comparisons of other successive frames yielded standard deviations of 2.1 and 1.6 arcseconds. It appears that the consistency of star positions as detected by my equipment is about 2 arcseconds, which is consistent with reports of the atmospheric seeing in Minnesota.
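For the record, one way to summarize such a comparison in MATLAB (hypothetical variable names; my scripts may tabulate it slightly differently):

% Sketch: scatter between two successive exposures of the same field.
% posA, posB: N-by-2 matched star positions (arcsec) in frames A and B.
d = posB - posA;                       % per-star displacement between frames
sigmaXY = std(d)                       % scatter in x and y separately (arcsec)
rmsDisp = sqrt(mean(sum(d.^2, 2)))     % RMS displacement magnitude (arcsec)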

I was curious about how to characterize “seeing” and found these interesting links (there is always too much to explore and investigate to the depth I would like):

Astronomical Seeing Part 2: Seeing Measurement Methods
https://www.handprint.com/ASTRO/seeing2.html

Lucky Exposures: Diffraction Limited Astronomical Imaging Through the Atmosphere
http://www.mrao.cam.ac.uk/projects/OAS/publications/fulltext/rnt_thesis.pdf

 

There is another domain of relevant knowledge: how to determine whether one distribution of observations is the same as, or different from, another. I will be comparing position measurements from the “before” condition to the “during” condition, when the sun’s gravity will have a possible influence. How can we tell if the measurements come from truly different conditions, rather than reflecting the normal variations caused by noise? Here are some links I explored to try to answer this question (a small example follows them):

Are Two Distributions Different?
http://www.aip.de/groups/soe/local/numres/bookcpdf/c14-3.pdf

Goodness of Fit Tests
http://www.mathwave.com/articles/goodness_of_fit.html

Tests of Significance
http://www.stat.yale.edu/Courses/1997-98/101/sigtest.htm
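As a concrete example of the kind of test those references describe, the two-sample Kolmogorov-Smirnov test is available in MATLAB’s Statistics Toolbox as kstest2. A minimal sketch with hypothetical variable names:

% Hypothetical example: are the "before" and "during" radial measurements for
% one reference star drawn from the same distribution?
% rBefore, rDuring: vectors of that star's radial distances (arcsec).
[h, p] = kstest2(rBefore, rDuring);    % two-sample Kolmogorov-Smirnov test
if h == 1
    fprintf('Distributions differ (p = %.3g)\n', p);
else
    fprintf('No significant difference detected (p = %.3g)\n', p);
end
% A simpler alternative is a two-sample t-test on the means: ttest2(rBefore, rDuring)

With only a handful of stars and a sub-arcsecond signal buried in roughly 2 arcseconds of seeing, any such test will demand many more samples than a casual comparison would suggest.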

 

 


Corona tangent

I encountered a report about the predicted corona and wondered how it would compare to what we actually saw. I do not know the orientation of either the simulation or my image (where is the solar north pole?). Are these similar?

Update 20170906: I still don’t know the absolute orientations of either image, but I found a correlation in the field lines that seem to stream directly out from the Sun. See if this is a visual match:

coronalForecastOverlay.2up


Eclipse Day!

The good weather held and we had zero clouds and negligible smoke for the eclipse. Temperatures were climbing in the bright morning sun, but stalled and then dropped during the partial phases of the eclipse, enough that we donned our fleece jackets again! Here are some shots of the event.


Step EE-3: Finding the best fit

We now have a way to transform the stars detected in an image into our virtual camera reference frame, but the previous step was just the “rough alignment” based on two bright stars. This is vulnerable to errors in how those two star positions were identified, especially if they were bright enough to saturate the detector, or were influenced by poor atmospheric seeing conditions.

More importantly, we are attempting to measure a tiny change in radial distance from the center of the sun. If we make a small error in where that center point is, by even a fraction of a pixel, it will affect all of our distance measurements to the stars revealed during the eclipse.

This means that it is less important to align the star positions than it is to get their angles with respect to the sun correct. If the angles are correct, then we can measure the radial distance from the image center and be confident, when we compare the “before” distances to the “during” distances, that we are measuring from the same center point and are not being fooled by some offset from the actual center. This is particularly important in the before images. In the during image we can, in principle, locate the center of the sun (though this too has uncertainties, since we are seeing the moving moon’s edge, not the sun’s). The collection of observed stars is an unambiguous pointer to the sun’s position at the moment of maximal eclipse.

To this end, the output of step EE-3 is the small additional offset and rotation that places the observed stars in best angular alignment with known angles of the reference stars. This small adjustment will be applied to all of the detected stars to obtain their final position in the reference image plane. Their radial distance from the sun is then easily computed.

Matlab script radialAlign.m was created to do this task. It takes the results from step EE-2, the rough alignment and transform to the reference frame. The output is a set of star positions—the reference star positions in the reference frame, and the slight deviations from those positions as detected in the image.  In an ideal world, the deviations would be zero in the “before” image, and would show some gravitational deflection in the “during” image.
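To give a flavor of what that adjustment involves, here is a minimal sketch of the idea, not the actual radialAlign.m (fminsearch is a stock MATLAB routine, and the variable names are hypothetical):

% Sketch: solve for the small shift (dx, dy) and rotation (theta) that best
% align the detected star positions with the reference positions. obs and ref
% are N-by-2 arrays of [x y] positions in arcseconds; the sun is at (0,0).
p0 = [0 0 0];                                   % start from "no adjustment"
pBest = fminsearch(@(p) angularCost(p, obs, ref), p0);

function c = angularCost(p, obs, ref)
    % Apply the trial rotation and offset, then penalize errors in position
    % angle about the sun, since those are what corrupt the radial measurement.
    R    = [cos(p(3)) -sin(p(3)); sin(p(3)) cos(p(3))];
    adj  = (R * obs')' + p(1:2);
    dAng = atan2(adj(:,2), adj(:,1)) - atan2(ref(:,2), ref(:,1));
    dAng = atan2(sin(dAng), cos(dAng));         % wrap to [-pi, pi]
    rRef = hypot(ref(:,1), ref(:,2));           % radial distance of each reference star
    c    = sum((rRef .* dAng).^2);              % tangential misalignment, arcsec^2
end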


Step EE-2A: A side process to calibrate (radial) lens distortion

radialFitTestErrors

Unexpected errors while looking for the best angular fit between the imaged stars and their reference locations.

 

My early efforts to map the imaged stars onto their virtual camera positions showed unexpected errors. They were close, but displayed error amounts that were inconsistent with a simple scale or rotation mismatch.

It seemed possible that the errors came about because of the radial distortion of the lens, which one reference suggested might be as much as 0.05%, a huge amount for this measurement.

To calibrate, a frame that included more stars (obtained by increasing the ISO) was compared with the target (reference) locations; the radial errors were plotted, and a correction curve was computed to best fit them.

radialCalFig2

Reference stars and detected stars. The correlated pairs are used for evaluating radial distortion.

 

There are reference stars that do not have a corresponding star in the image, but there are many more detected stars without a reference coordinate. These may be spurious noise, or faint stars that were not listed in the reference star database (displayed by Stellarium).

radialCalFig7

The correlated pairs of reference and detected image stars with an indication of the position discrepancies. Near-center stars are too close, and farther stars are too far.

 

The errors range over +/- 15 arcseconds, a few pixels in the image, but ten times the amount of deflection signal we are trying to detect.

To improve on this, we need to perform the rigid transform that finds the image center and angle that best aligns the detected stars with their reference locations.

 

radialCal6

The radial errors, plotted as a function of distance from frame center. A polynomial fit is made to approximate them.

 

A radial correction function can then be obtained by finding a polynomial fit to the residual errors. This means that a function with powers of the radius (r0, r1, r2, r3, etc.) is used to approximate the distortion introduced by the lens. Knowing a star’s distance from the center, we can compute how the lens has shifted its position toward or away from the center, and then correct it.

I experimented with several forms of the polynomial function and found that the best behaved was among the simplest. It did NOT include an offset term (r0), and the linear term (r1, equivalent to magnification) was minimized by adjusting the overall base image magnification (arcseconds per pixel). This leaves the linear term representing the slight differences in magnification between the red, green, and blue channels, something that apochromatic lenses strive to match but cannot be perfect at. The remaining error was well represented by the r2 term.
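In MATLAB terms, the fit just described can be sketched roughly as follows (a simplified illustration for one color channel, not the actual calibration script; variable names are hypothetical):

% Sketch of the radial-distortion fit for one color channel.
% r:  radial distance of each matched star from the frame center (arcsec)
% dr: measured radial error, detected minus reference position (arcsec)
A = [r(:), r(:).^2];                 % no constant (r^0) term, by design
c = A \ dr(:);                       % least squares: dr ~ c(1)*r + c(2)*r^2

% Correcting a detected star found at radius rStar:
rCorrected = rStar - (c(1)*rStar + c(2)*rStar.^2);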

Here are the results averaged from multiple image frames of the eclipse target area:

Camera base magnification to reference frame (arcseconds per pixel):
scale = 2.8443;     % average of EOS 6D calibration frames taken 20170601

This value is used in imagePos.m (step EE-2) as the scale factor to bring star positions into the reference frame where one increment (pixel) equals one arcsecond.

Linear (r1) coefficients (× 10^-3):
    R    0.7731
    G    0.5185
    B    1.0245

Squared (r2) coefficients (× 10^-3):
    R    1.1969
    G    1.2564
    B    1.1745

The linear coefficients are the differential magnifications between the red, green, and blue channels of the image. Green has a small (1/2 arcsecond per thousand) magnification difference from the base magnification; red is a little higher, and the blue correction is about twice that of the green channel. A star would show a green fringe toward the image center and a blue fringe on the outside. This captures the general chromatic aberration of the Televue-85 telescope I am using. It is a highly corrected lens that offers beautiful visual views; only a high-resolution camera can quantify its color mismatch.

The squared coefficients indicate a general behavior of lenses and how ideal they are in projecting their field of view onto the perfect geometric projection we are using as our reference frame. These corrections are also quite small, but over enough radial distance contribute a position error that builds up to many arcseconds.

The lens correction coefficients (the linear and squared terms just described) are applied in step 3, radialAlign.m, which provides the final set of detected star positions, angularly centered and corrected for radial lens distortion. There are still some errors from the reference positions, but they are much reduced.

The remaining errors, plotted against radial distance from the center, are limited to around +/- 5 arcseconds and do not show any significant trend. If they did, we could identify it and apply additional corrections to remove it.

 

radialCal9

The residual errors after using the polynomial correction.

 

The radial correction brings the RMS errors to within 2 arcseconds (red) to 4 arcseconds (blue). I’m not sure what the source of this residual error is, but it is of the same magnitude as typical Minnesota astronomical seeing; it may just be the twinkling motion of the stars. We could check this by comparing two separate exposures to see if the errors are correlated. If they are, then it is NOT due to atmospheric turbulence. If they are uncorrelated, then we need to find some way to overcome this variation, because it is larger than the signal we are looking for.

 

 


Step EE-2: Transform to reference coordinates

Stellarium.20170821.totality

The field of view for my EOS60Da APSC sensor (inner rectangle) and the EOS6D full-frame sensor (middle rectangle). The map overlay is the view from Idaho Falls on the day of the eclipse as displayed by Stellarium (and inverted to negative view).

 

Once we know where things are on the image, we need to correlate them to their celestial locations (right ascension and declination, or their equivalent). To do this, we need two landmarks in the image, one for position, and the other for rotation. We will position the image so the first landmark aligns with its celestial coordinate, and then rotate the image until the second matches up. This will provide a “rough alignment” so that the next step (point set registration) will have a good starting point for matching all of the stars in the field of view.

The reference coordinate system could, in principle, be any particular view of the sky. Using the pixel coordinates of the “before” image is a reasonable choice (as would be the “during” image). This would have the advantage of not needing to transform that image at all; it is already in the reference system, and only the “during” image would need to be converted.

But to keep the mapping easy to think about, and to make use of multiple before pictures, I chose the reference coordinate system to be that of a virtual camera aimed at the sun’s center and oriented “up” at the location where I will be taking the “during” photos (near Idaho Falls, ID).

To perform this transformation from image coordinates, we need the projected celestial coordinates of our landmark stars. One star that is certain to be bright enough to record is Regulus, “the heart of the lion”, alpha-Leo, the brightest star in the constellation Leo, in which the sun will be residing during this eclipse (hence the astrological sign of those born during most of August, including me).

For the second landmark, the next brightest star in the field is nu-Leo, considerably dimmer. Its visual brightness is magnitude 5, compared to Regulus at 1.3, roughly 30 times dimmer.

To gain a sense of star positions, it is helpful to use a sky map simulator. I discovered a free, open source program that does a very nice job of simulating the view of the night sky and lets me make maps of the area I am exploring. It is Stellarium (http://www.stellarium.org), and with it I have been able to identify and locate the candidate stars that will be near the sun during the eclipse.

Using Stellarium, I can identify the reference locations for my landmark stars:

Regulus:   RA 10h 8m 22.01s,   Dec 11° 58’ 2.9”
Nu-Leo:    RA 9h 58m 13.35s,   Dec 12° 26’ 41.1”

These are the brightest, but there are a few others that may be useful in case one or both of these are not in the image.

I need to map these angles onto the image plane of the virtual camera in Idaho that is aimed at the sun. This is a basic calculation in computer graphics, a field that I accidentally have some experience in.
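For the curious, the mapping is the standard tangent-plane (gnomonic) projection. Here is a minimal MATLAB sketch, assuming the virtual camera is aimed at the sun’s RA/Dec at mid-eclipse and leaving out the final rotation to the local “up” direction:

% Sketch: gnomonic (tangent-plane) projection of a star's RA/Dec onto the
% virtual camera plane centered on the sun, with 1 unit = 1 arcsecond.
% All angles are in radians. The extra rotation to the local "up" direction
% at the observing site is omitted here.
function [x, y] = toVirtualCamera(raStar, decStar, raSun, decSun)
    dRA = raStar - raSun;
    D   = sin(decStar).*sin(decSun) + cos(decStar).*cos(decSun).*cos(dRA);
    xi  = cos(decStar).*sin(dRA) ./ D;
    eta = (sin(decStar).*cos(decSun) - cos(decStar).*sin(decSun).*cos(dRA)) ./ D;
    arcsecPerRad = (180/pi) * 3600;
    x = xi  * arcsecPerRad;     % toward increasing RA (the sign convention is a choice)
    y = eta * arcsecPerRad;     % toward increasing declination
end

% Example, using the Stellarium coordinates for Regulus listed above:
% raRegulus  = (10 + 8/60 + 22.01/3600) * 15 * pi/180;   % hours -> degrees -> radians
% decRegulus = (11 + 58/60 + 2.9/3600) * pi/180;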

Now that we have a reference frame established, we need to transform our before and during images into it. This too is a standard operation in computer graphics. All that is needed is the transformation matrix, a set of six numbers. Given two reference star locations, the six numbers can be calculated and applied to all the other stars in the image to find out where they land in the reference frame.
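Here is a minimal sketch of that calculation (hypothetical variable names), using complex arithmetic to keep it compact; the two matched stars fix the offset, rotation, and scale, and those fill in the six entries of the 2-by-3 matrix:

% Sketch: compute the transform to the reference frame from two landmark stars.
% (x1,y1), (x2,y2): landmark positions in the image (pixels)
% (u1,v1), (u2,v2): the same stars' positions in the reference frame (arcsec)
p1 = complex(x1, y1);   p2 = complex(x2, y2);   % image positions
q1 = complex(u1, v1);   q2 = complex(u2, v2);   % reference positions

a = (q2 - q1) / (p2 - p1);      % encodes the rotation and scale
b = q1 - a*p1;                  % encodes the translation

% The six numbers, written as a 2-by-3 transform matrix:
T = [real(a) -imag(a) real(b);
     imag(a)  real(a) imag(b)];

% Mapping any detected star (x, y) into the reference frame:
q    = a*complex(x, y) + b;
xref = real(q);
yref = imag(q);

This is only the rough alignment; the finer fit over all of the matched stars comes in step EE-3.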

The virtual camera can have any number of pixels we wish. I decided that it would be convenient if one pixel represented one arc-second in the sky.  The image frame from this virtual camera looks like this; the reference stars are plotted, and the alignment stars are marked. The stars detected from a one-second exposure of this part of the sky have been mapped into this reference frame.

imageViewHeise

A test “before” image transformed to the reference virtual camera view at eclipse time in Idaho. The location where the sun will be is indicated, centered at coordinate (0,0).
