~600 images at 4 seconds exposure each, without a tracker. This is my first ever (successful) attempt at any object in the night sky. I made a post a few weeks ago asking for a camera setup, and I went and bought a Panasonic Lumix G85 with the 45-150mm f/4-5.6 lens I had in mind before that. I know this is not the ideal setup, but I am still very proud of this image. I would appreciate any feedback on how to improve my skills, though.
For an untracked shot this doesn't look bad at all. To improve, I would do the following:
- in Siril: remove green noise, then photometric colour calibration
- reduce exposure length to 1s (see the rough calculation after this list)
- work on background extraction (maybe try GraXpert)
- if you are not already doing so: take calibration frames, especially flats
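On the exposure-length point above, here is a minimal sketch of the common "500 rule" of thumb for untracked exposures. The rule itself and the 2x Micro Four Thirds crop factor are my assumptions for illustration, not anything stated in the thread:

```python
# Rough "500 rule" estimate of the longest untracked exposure before stars
# visibly trail. The rule constant and the 2.0x crop factor of the Lumix G85
# (Micro Four Thirds) are assumed values for illustration.
def max_untracked_exposure_s(focal_length_mm: float,
                             crop_factor: float = 2.0,
                             rule_constant: float = 500.0) -> float:
    """Approximate longest exposure (in seconds) before stars start to trail."""
    return rule_constant / (focal_length_mm * crop_factor)

print(max_untracked_exposure_s(150))  # ~1.7 s at the long end of the 45-150 mm
print(max_untracked_exposure_s(45))   # ~5.6 s at the wide end
```

At 150 mm that lands in the 1-2 s range, which is why 4 s exposures trail noticeably.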
Are you interested in sharing the raw stacked file? I use a (paid) deconvolution tool called BlurXterminator and I wonder if it can handle such extreme star shapes. If it works I will of course send you the file.
Thanks for the feedback. You can try it with your software. Here is the file: https://drive.proton.me/urls/M8309JQDYR#rc9F6A9RYyCk
But I read some negative comments online about how it uses Hubble images to calibrate images. Here is a forum post: https://forum.astronomie.de/threads/blurxterminator-nein-danke.336225/ It is in German, but I’m sure you will understand it. Let me know what you think.
Edit: Since the software is probably trained on images from better telescopes/observatories, it is most likely using their images to calibrate mine. I have no direct problem with that, but I wouldn’t consider the result to be my own anymore. Personally, I don’t like using AI to enhance or fix my stuff.
But I’d still like to see the results if it works.
Thanks for sharing! The software worked better than expected on your image!
During my research before purchasing the program I also stumbled over the forum post you linked. However, I found it very misleading, as the software does not generate details learned from other images but only works with data already present in your image. Since it is a deconvolution tool, results can deviate slightly from the true appearance, but that has little to do with AI being used here. I needed a whole semester at university to truly understand the maths behind it. My biggest problem is that the software isn’t open source, so one can’t look into all the details, but there are already people working on open alternatives.
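For anyone curious what "deconvolution" means here, below is a minimal sketch using scikit-image's classical Richardson-Lucy implementation. This is not BlurXterminator's actual code; the toy star field and the Gaussian PSF are made up purely for illustration:

```python
# Minimal Richardson-Lucy deconvolution demo on a synthetic star field.
# Classical deconvolution only redistributes flux already present in the data;
# it does not pull in detail from other images.
import numpy as np
from scipy.signal import convolve2d
from skimage import restoration

rng = np.random.default_rng(0)

# Toy "star field": a few point sources on a dark background.
sky = np.zeros((128, 128))
for y, x in [(30, 40), (64, 64), (90, 100)]:
    sky[y, x] = 1.0

# Assume a Gaussian point-spread function as a stand-in for lens blur/seeing.
size, sigma = 15, 2.5
ax = np.arange(size) - size // 2
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
psf /= psf.sum()

# Blur the scene and add a little noise to mimic a real exposure.
blurred = convolve2d(sky, psf, mode="same") + 0.001 * rng.standard_normal(sky.shape)

# Recover a sharper estimate of the original point sources.
deconvolved = restoration.richardson_lucy(np.clip(blurred, 0, None), psf, num_iter=30)
```

The point of the sketch is the same one made above: the deconvolution step sharpens what was already recorded rather than inventing detail.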
But this is already a very specific problem; don’t forget that the biggest difference for good quality comes from the data itself. I wish you the best of luck on your journey!
Oh, and I forgot to mention one other piece of advice: look for the darkest location to shoot from that you can access. Lightpollutionmap really helps with finding such places.
Edit: unstretched file with only background extraction and deconvolution: https://drive.proton.me/urls/QD4870ZMF4#uyVBYxKgWxgb
This looks amazing, thanks. And thanks for the advice, I’ll try it sometime. With the software I have the same problems as you do; I’d also prefer it if it were open source. And since I don’t own it, I can’t confirm or deny anything from the forum post. Sorry.
Those star trails look longer than you would get from 4-second exposures, so I suspect you could do even better with more careful processing. Good start, though!
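For a rough sense of scale, here is a back-of-the-envelope check of how long a 4 s untracked trail should be. The ~3.8 µm pixel pitch of the G85, the 150 mm focal length, and ignoring the target's declination are all my assumptions:

```python
# Back-of-the-envelope estimate of the star-trail length in a single 4 s
# untracked frame. Pixel pitch (~3.8 µm assumed for the G85) and 150 mm focal
# length are illustrative values; the drift rate ignores declination.
PIXEL_PITCH_UM = 3.8
FOCAL_LENGTH_MM = 150.0
EXPOSURE_S = 4.0
SIDEREAL_RATE_ARCSEC_PER_S = 15.04  # apparent sky motion near the celestial equator

pixel_scale = 206.265 * PIXEL_PITCH_UM / FOCAL_LENGTH_MM  # arcsec per pixel
trail_px = SIDEREAL_RATE_ARCSEC_PER_S * EXPOSURE_S / pixel_scale

print(f"{pixel_scale:.1f} arcsec/px -> trail of about {trail_px:.0f} px per frame")
```

That works out to roughly a dozen pixels per frame, so trails much longer than that would point to something other than the exposure time itself, such as alignment during stacking.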
But the star trails are similarly sized in the raw files. I used Siril to stack and had to set the roundness value to around 0.3 for it to register the stars. I saw a YouTube fix for that. The guy also set the Relax PSF setting to true; maybe that made them not perfectly aligned?
Then it’s a problem with how you’re taking the images. Maybe the camera or tripod is physically moving during the exposure?
Big up. I’ve been thinking about getting a camera and tracking mount just to photograph Andromeda.