It's called post-processing. Generally it combines multiple frames, or pulls in other data from the camera.
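For a rough idea of what that multi-frame combination looks like, here's a minimal sketch in Python. It assumes the burst frames are already aligned; real pipelines also register the frames and reject outliers, so treat this as an illustration, not any phone's actual algorithm:

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned frames to reduce sensor noise.

    frames: list of HxWx3 uint8 arrays from a burst capture.
    Averaging N frames cuts random noise by roughly sqrt(N),
    which is why night modes fire off a burst of shots.
    """
    acc = np.mean([f.astype(np.float32) for f in frames], axis=0)
    return np.clip(acc, 0.0, 255.0).astype(np.uint8)
```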
In theory, if this were true, you should be able to take a light, place it where the moon would be, point the camera at it, and the phone should replace it with a moon. But it doesn't.
I'm familiar with post-processing.
To be clear, the assumption is that the algorithms the phone uses to decide you were trying to photograph the moon are "smart" enough to identify the shot as a night sky focused on the moon, rather than on a light bulb. I'm not sure how you'd set up a light of the right brightness at infinity focus to test this, though.
ETA: I've never seen this post-processing happen so starkly with anything other than a photo of the moon, so it sticks out pretty hard. And I take a lot of photos at work of things that are tough to capture clearly.
I had to dig through my phone to find these photos when I got home from work, but they were taken literally seconds apart with my phone at full optical + digital zoom. The detail in the second is absolutely absurd, while the first is what I typically see on screen when I've tried to replicate it since.
I reckon it's just processing a raw capture. The only difference is exposure, which, yeah, phones can easily post-process.
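An exposure tweak like that really is trivial in software. Here's a sketch of a simple gain-plus-gamma adjustment; the values are made up for illustration, not anything a phone actually uses:

```python
import numpy as np

def adjust_exposure(img, gain=1.5, gamma=0.8):
    """Brighten an image: linear gain, then gamma to lift shadows.

    img: HxWx3 uint8 array. gain and gamma are arbitrary
    illustrative values; a real pipeline meters the scene.
    """
    x = img.astype(np.float32) / 255.0
    x = np.clip(x * gain, 0.0, 1.0) ** gamma
    return (x * 255.0).astype(np.uint8)
```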
There are literally tests of Samsung phones replacing a blurred JPEG of the moon (the data is lost; there is no way to recreate it) with detailed images of the moon once it is "identified" as the moon.
https://www.reddit.com/r/Android/comments/11nzrb0/samsung_space_zoom_moon_shots_are_fake_and_here/
It isn't post-processing; it's image generation merged with the source image.
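The linked test is easy to reproduce yourself. Here's a sketch of the setup using Pillow (file names are hypothetical): downscale and blur a moon photo so the crater detail is provably destroyed, display it full-screen in a dark room, and photograph it with the phone's zoom. Any sharp craters in the result were synthesized, not captured:

```python
from PIL import Image, ImageFilter

# Destroy the fine detail in a reference moon photo. After the
# downscale and heavy Gaussian blur, crater-level detail is gone
# from the pixels and cannot be recovered by any sharpening.
src = Image.open("moon.jpg")                # hypothetical input file
small = src.resize((170, 170))              # tiny, as in the linked test
blurred = small.filter(ImageFilter.GaussianBlur(radius=3))
blurred.save("moon_blurred.png")

# Display moon_blurred.png full-screen and photograph it. If the
# phone's shot shows detail absent from moon_blurred.png, that
# detail was generated, not recorded.
```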