Come on, she said it was average.
Mycology
Your mom may be an expert, but in this I think she's biased.
And I thought my joke was lacking creativity...
Trump's lawyers entered the chat...
Baby sized? That's huge for a mushroom!
What's a focus stack? Artificially increasing the depth of field to span the subject?
Not artificially, but by taking many photos at differing focus depths and computationally stitching them together. Various cameras have the built-in ability to automatically take many pictures in sequence with the focus distance varied between shots (which in itself is called focus bracketing). And there is software especially designed to then compute a single photo with increased depth of field from such a stack.
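If you're curious what that combination step looks like, here is a minimal sketch in Python (OpenCV + NumPy). It is not what any particular stacking program does internally, just the naive pick-the-sharpest-pixel approach, and it assumes the bracketed frames are already aligned; the file names are only placeholders.

```python
# Naive focus stacking: per pixel, keep the value from whichever frame has
# the highest local sharpness (absolute Laplacian response).
import cv2
import numpy as np

def focus_stack(paths):
    frames = [cv2.imread(p) for p in paths]                  # bracketed shots, assumed aligned
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    # Sharpness map per frame: slight blur to suppress noise, then Laplacian magnitude.
    sharpness = np.stack([
        np.abs(cv2.Laplacian(cv2.GaussianBlur(g, (3, 3), 0), cv2.CV_64F))
        for g in grays
    ])
    best = np.argmax(sharpness, axis=0)                      # index of sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return np.stack(frames)[best, rows, cols]                # composite with extended depth of field

# Hypothetical usage:
# cv2.imwrite("stacked.jpg", focus_stack(["shot_01.jpg", "shot_02.jpg", "shot_03.jpg"]))
```

Real stacking software also aligns the frames and blends across the seams, but the core idea is this per-pixel selection.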
That's a valid technique, and yet I'd call it artificial because there is no lens that produces this image. Heck, I'd even call the simple trick of using a white sheet to add soft light from the underside artificial.
I understand where you are coming from, but I think that - perhaps without realizing it - you are using a definition of 'artificial' that practically categorizes all photography as 'artificial'.
For example: camera sensors, and also the older types of photographic film, usually do not discriminate color directly. Techniques are used to combine multiple layers of color data together in order to replicate colors as we see them. Inside a modern digital camera, for example, this is generally done by using a color filter grid on top of a monochromatic pixel array and then applying an algorithm to smooth out the color data. Many camera users today may be completely oblivious to this kind of processing, but the camera is making a lot of different choices for them and performing different kinds of processing. There are also other built-in features for removing aliasing, correcting optical aberrations, color correction, etc...
Old school photographers would also need to combine filters or films of different materials to create color renditions. It is just that, today, the camera does it for us. But photography involves a lot of these 'artificial' methods to capture an image.
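To make that concrete, here is a toy version of that interpolation step, assuming an RGGB Bayer layout. Real camera firmware uses far more sophisticated, edge-aware algorithms, so treat this only as an illustration of the idea.

```python
# Toy demosaic of an RGGB Bayer mosaic by weighted neighbour averaging.
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw):
    h, w = raw.shape
    # Boolean masks marking which sensor sites sit under which color filter.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def fill(mask):
        # Estimate the missing values of one color plane from known neighbours.
        plane = np.where(mask, raw, 0.0)
        kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
        weights = convolve(mask.astype(float), kernel, mode='mirror')
        return convolve(plane, kernel, mode='mirror') / weights

    # Result: an H x W x 3 image, before white balance and colorspace conversion.
    return np.dstack([fill(m) for m in (r_mask, g_mask, b_mask)])
```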
Focus stacking is a technique in which one expands the range of an optical system by capturing multiple focal slices and combining them computationally to recreate a larger depth of field. This is technical photography.
So, when you call this 'artificial', well... The act of projecting an image onto some kind of film and then somehow preserving that image, either directly on the film or as a digital representation, is an artificial process. All photography is artificial.
Processes such as colorspace conversion, which a well-calibrated camera applies by default, are there to make its images closer to the natural ideal of what an image should be based on human vision. It's worse in some ways (dynamic range) and better in others (telephoto resolution). I get why you wouldn't consider focus stacking artificial, because it's effectively simulating what our brain does automatically, but it's intentional photo manipulation, like the old technique of dodging and burning.
It is not about colorspace conversion. Most color cameras today use a Bayer filter: https://en.wikipedia.org/wiki/Bayer_filter . The camera captures 3 almost-overlapping images, one green, one blue, one red. Using data from these three images, it calculates the red, green, and blue values for each pixel. This combines a physical technique (the Bayer filtering) with digital software algorithms to produce the final image.
In focus stacking, one generates a set of overlapping images while scanning the focal plane. Software is then used to combine the in-focus slices to produce an image that is in focus over a wider depth of field. So, again, we combine a physical process (movement of the focal plane) with a digital processing method.
In the first case you have a technique that has been implemented at the hardware level by camera sensor engineers. The second is a technique that is implemented at the photographer level. I see both techniques as equally 'artificial'. In the first case the filters scan through colors. In the second case the focal plane is scanned. In the first case the people who developed the camera firmware did the work of automated processing, in the second case the photographer needs to do the processing themselves.
I don't mean to debate your definition, I just wanted to jump in and share my perspective.
Yes, I know how Bayer filters work. And there is almost always some colorspace conversion because the color filters are not ideal, not to mention other math like WB. There's a reason you need to specify colorspace when exporting RAW to JPEG.
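Roughly the kind of math I mean, as a sketch. Every number below is an invented placeholder, since the real white-balance gains and correction matrix are camera-specific and come from the RAW metadata and calibration:

```python
# Sketch of the math between RAW and JPEG: white-balance gains, a 3x3
# camera-to-sRGB matrix, then the sRGB transfer ("gamma") curve.
# The gains and matrix are made-up placeholders, not any real camera's data.
import numpy as np

wb_gains = np.array([2.1, 1.0, 1.6])            # hypothetical R, G, B multipliers
cam_to_srgb = np.array([[ 1.6, -0.4, -0.2],     # hypothetical correction matrix;
                        [-0.3,  1.5, -0.2],     # each row sums to 1 so neutral
                        [ 0.0, -0.6,  1.6]])    # grey stays neutral

def develop(raw_rgb):                            # raw_rgb: H x W x 3, linear, 0..1
    balanced = np.clip(raw_rgb * wb_gains, 0, 1)
    linear = np.clip(balanced @ cam_to_srgb.T, 0, 1)
    # Standard sRGB encoding, applied when exporting to an 8-bit JPEG.
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)
```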
Ah, alright! My reason for describing the details of the process was primarily to emphasize the parallels along the processing chain between different techniques.
I am curious about how you draw the line between 'artificial' and 'not artificial', hope you don't mind me asking.
- Is a black-and-white image produced by a camera without the color filter artificial?
- Is a landscape photograph generated by sticking multiple images together artificial?
- Is a long exposure image artificial?
- What about placing a monochrome camera on a tripod, taking three different photos - one with a green, one with a blue, and one with a red filter - and then creating a color image using these three images as input?
Interesting examples, we're really splitting hairs here. Maybe you'll catch me contradicting myself with my idea of "artificial" as "deliberately going outside what is normally possible with the technology or challenging the realist nature of the medium" and I'll learn a lesson!
- Is the filter omitted for technical/practical reasons (security camera) or intentionally for artistic purposes? I'd say no and yes, respectively – the latter is specifically altering the equipment for a desired effect.
- Most likely I'd say yes because it's using a technique to increase FOV. The view from any given point is spherical and one needs to introduce some perspective to map it onto a cylinder or prism. That's not necessarily bad, but it's an intentional way to bypass technical limits.
- It's been known since the dawn of photography that longer exposures collect more light so the technique is part of the medium. So even if it's done to create an effect that one couldn't see with the naked eye, such as the sun "scanning" the sky arc by arc every day from solstice to solstice, I'd say it's not artificial... unless it's stacking exposures such as in planetary astrophotography.
- Artificial. A very manual and deliberate process to push the equipment's limitations.
Interesting examples, we’re really splitting hairs here.
Haha, maybe 😜 I did some reflection about why the term 'artificial' in the context of photography made me want to jump into the conversation in the first place. I think that the reason is that the term 'artificial' implies that there is a boundary between what corresponds to a 'natural' photograph and an 'artificial' photograph.
Thanks for responding to those examples and giving a definition, I think I now better understand what you mean when you say 'artificial'. I was interpreting it from a universal point of view of 'natural/artificial', but I see now that you meant it in the sense of the camera's nature. So, if one simply takes a photo with the camera, it is 'natural' in the sense that the camera's nature was enough to capture that image. When a human uses a technique that creates an image that cannot be captured by the camera itself, then it is 'artificial'.
No need to continue discussing the semantics of 'artificial', I think we both know what each other means now 😄
Still, always happy to chat more about these things, as I enjoy talking about techniques. I am actually considering getting a monochrome industrial camera to create some color images manually. I already have filters from UV to the near-IR. Like what I mentioned in example 4. I am curious about whether I can capture noticeably better luminance throughout by using the filters manually. I'm also keeping an eye out for an affordable camera with this sensor type: https://www.sony-semicon.com/en/products/is/industry/multispectral.html ...
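Something like this is what I have in mind for combining the three filtered exposures (just a sketch; the file names and the grey-patch location are placeholders):

```python
# Tri-color photography: three monochrome exposures shot through R, G and B
# filters, scaled so a grey reference comes out neutral, stacked into color.
# File names and the grey-patch location are hypothetical placeholders.
import cv2
import numpy as np

def tricolor(r_path, g_path, b_path, grey_patch=(slice(100, 150), slice(100, 150))):
    channels = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(float)
                for p in (r_path, g_path, b_path)]
    # Crude white balance: equalize the mean of the grey patch across channels,
    # compensating for differing filter transmission and sensor response.
    means = [c[grey_patch].mean() for c in channels]
    target = float(np.mean(means))
    balanced = [c * (target / m) for c, m in zip(channels, means)]
    rgb = np.clip(np.dstack(balanced), 0, 255).astype(np.uint8)
    return rgb   # channel order R, G, B (flip to BGR before cv2.imwrite)
```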
I get why the Bayer filter exists but I'm not really fond of RGGB being effectively the only one available. Why not RGBW with some interesting wavelength response of the white subpixel?
Same with LCDs. It wouldn't take much change in the manufacturing process to create a WWW or YWB 1080p LCD that has less or no color but passes way more light, allowing less backlight or even a reflective mode, while still being driven with conventional electronics. These could be used in public transport signage etc. In some cases, a monochrome LCD with RGB backlight could also come in handy.
Also not really related but it infuriates me that Samsung turned the Bayer filter 45°, halved the pixel count and patented it as an OLED pattern so nobody can make similar displays.
Why not RGBW with some interesting wavelength response of the white subpixel?
Hmm, I'm not really sure. A monochrome pixel would be much more sensitive, but without a neutral density filter it might saturate when the RGB pixels are well exposed. With a neutral density filter, I think it could better resolve the variation of light intensity across very small features.
Same with LCDs. It wouldn't take much change in the manufacturing process to create a WWW or YWB 1080p LCD that has less or no color but passes way more light, allowing less backlight or even a reflective mode, while still being driven with conventional electronics
So, would the WWW be a monochrome LCD? Wouldn't these be similar to the ones sometimes used in small electronic displays like this one:
I am not sure of what the YWB would do.
These could be used in public transport signage etc. In some cases, a monochrome LCD with RGB backlight could also come in handy.
I am also interested in the use of the 'E-Ink' displays for public signage in well-illuminated places. I found a few examples online:
Also not really related but it infuriates me that Samsung turned the Bayer filter 45°, halved the pixel count and patented it as an OLED pattern so nobody can make similar displays.
I am not familiar with this... I looked it up and I think it is this? https://en.wikipedia.org/wiki/PenTile_matrix_family
I'll look into it. Interesting!
Large e-inks are unfortunately still quite expensive.
Yes, a WWW display is monochrome, tripling its light throughput. A YWB display is capable of color on the blue-yellow axis (although the color cannot be both bright and saturated) and has double the light throughput of RGB. What you're showing is a passive STN display; I'm after an active matrix (TFT or IPS). To save on driver development, there will still be subpixels, just without color – exactly the same as the normal RGB model except with clear gel instead of RGB for the color mask. Thanks to the subpixel antialiasing that most OSs do by default, the extra horizontal resolution will not be wasted, at least with text.
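Those factors are easy to sanity-check if you assume ideal filters where each primary passes exactly a third of the backlight and yellow passes red plus green:

```python
# Back-of-the-envelope throughput check. Assumes ideal filters: each primary
# passes 1/3 of the white backlight, yellow (red + green) passes 2/3,
# and a clear "white" subpixel passes everything.
from fractions import Fraction as F

transmission = {'R': F(1, 3), 'G': F(1, 3), 'B': F(1, 3), 'Y': F(2, 3), 'W': F(1)}

def panel_throughput(subpixels):
    # Average transmission over the subpixels that make up one pixel.
    return sum(transmission[s] for s in subpixels) / len(subpixels)

print(panel_throughput('WWW') / panel_throughput('RGB'))   # 3
print(panel_throughput('YWB') / panel_throughput('RGB'))   # 2
```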
BTW YWB's color gamut looks like this:
This might seem awful but remember that this is an extension of the YB color gamut (where the white component is 0) towards the top right (added white of course, which doubles the brightness of monochrome text):
If the factory can satisfy this, you can use any two colors (saturated ones that add up to white are recommended):
As for the OLED, I mean this pattern:
Maybe this one is not Samsung's patent, but either way, they sought to ban the import of their patented pixel patterns to the US, effectively banning all but large-volume shipments of OLEDs (because customs can't check for pixel patterns whenever a US repair shop orders a spare).
Some of these terms I am not yet familiar with, so I will need to do some reading. I'll save this comment and come back during the week. It seems like you are very knowledgeable about display technologies! Very cool
In short: (S)TN is almost always monochrome and is what's in a calculator or Game Boy: just glass, electrodes, and liquid crystal. It's cheap and customizable, but it doesn't scale well to high resolutions and the contrast is poor. TFT (and the later IPS) is almost always color and uses a thin-film transistor on each subpixel to hold its state between updates, simplifying driving while maintaining contrast at high resolutions.