In older versions of Windows, if you had a video playing, took a screenshot, and pasted that screenshot into Paint, you could sometimes see the video continue to play inside Paint. What kind of sorcery enabled this to happen? A few of you will realise instantly why this used to happen: render surfaces. Back in at least the Windows 9x days, playing video involved drawing solid green where you wanted the video to go (the video player window), rendering the video pixels to a surface shared with the graphics card, and then having the graphics card replace said green pixels with the video pixels from the shared surface.
This approach has a whole array of benefits, not least of which is that it allowed you to render the video on a thread separate from the main user interface, so that if the main interface was sluggish or locked up, the video would keep rendering properly. You could also create two shared surfaces and alternate between them, rendering one frame while the other was displayed, thereby eliminating tearing. Knowing this, it should be obvious what's going on with the screenshot and Paint story.
Now, when you load the image into Paint or any other image viewer, Windows sends those green pixels to the video card, but if the media player is still running, then its overlay is still active, and if you put Paint in the same place that the media player window is, then the green pixels in Paint get changed into the pixels of the active video. The video card doesn’t know that the pixels came from Paint. Its job is to look for green pixels in a certain region of the screen and change them into the pixels from the shared surface.
If you move the Paint window to another position where it doesn’t overlap the media player, or if the media player isn’t playing a video, you will see the bitmap’s true nature: It’s just a bunch of green pixels.
↫ Raymond Chen at The Old New Thing
I’ve never had this particular oddity happen, but I do have vague memories of video player windows rendering tons of green artifacts whenever something went wrong with the video player, the file it was trying to play, or whatever else. I guess those green artifacts had the same cause. In modern operating systems, graphics rendering of the UI is done entirely on the GPU, with only the final composition being sent to your display.
As such, the green screen effect no longer occurs.
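To make the mechanism concrete, here's a minimal sketch of what the overlay keying amounted to, written as an ordinary CPU loop for clarity. The names and the exact key colour are made up for illustration; the real work happened per pixel in the display hardware during scan-out, restricted to the overlay rectangle, not in software:

```c
/* Minimal sketch of colour-key overlay mixing. Hypothetical names;
 * real hardware did this per pixel on scan-out, limited to the
 * overlay rectangle the player registered. */
#include <stdint.h>
#include <stdio.h>

#define KEY_COLOR 0x0000FF00u /* the "magic" green the player window draws */

/* Decide which pixel actually goes to the monitor. */
static uint32_t scan_out_pixel(uint32_t framebuffer_px, /* what GDI/Paint drew */
                               uint32_t video_px)       /* decoded video frame */
{
    /* Key-coloured pixels are swapped for video pixels; everything else
     * passes through untouched. A screenshot copies framebuffer_px,
     * which is still solid green -- which is why the pasted bitmap gets
     * keyed all over again while the overlay is active. */
    return (framebuffer_px == KEY_COLOR) ? video_px : framebuffer_px;
}

int main(void)
{
    uint32_t framebuffer[4] = { 0x00FFFFFFu, KEY_COLOR, KEY_COLOR, 0x00000000u };
    uint32_t video[4]       = { 0x00123456u, 0x00123456u, 0x00ABCDEFu, 0x00ABCDEFu };

    for (int i = 0; i < 4; i++)
        printf("%08X\n", scan_out_pixel(framebuffer[i], video[i]));
    return 0;
}
```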

I ran into this about a week ago when I forgot to turn off video overlay in WinAMP 2.95 when taking a screenshot of my ThinkPad T410’s “Windows XP pretending to be 98 SE” desktop.
Fun little surprise.
Thom Holwerda,
I don’t remember if it had to be green; I vaguely remember something using pink, but I could be misremembering. The effect would have happened regardless of software rendering or GPU rendering (DirectX or OpenGL). The reason for this wasn’t the source of the pixels, but the fact that a separate bitmask/alpha channel wasn’t used. Instead, the hardware was designed to look at the pixel value and replace matching colors with the decoded video. This is similar to how transparent GIF files map a specific color index to be transparent – transparency takes up one of the colors, only here it’s implemented in hardware.
Back in the day I didn’t have a TV, only a computer with a Matrox card. Screenshots didn’t work (they would capture the “transparency” colors instead). If you wanted to take a screenshot of the actual video, you had to disable hardware acceleration. Or, as I found on Matrox cards, playing a second instance of the video file would result in it being played via the software renderer while the original kept playing using hardware acceleration.
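To make the GIF analogy concrete, here's roughly what palette-index transparency looks like when done in software (a hypothetical sketch, not actual decoder code):

```c
/* Rough software analogue of the same trick: GIF-style palette
 * transparency, where one palette index is sacrificed to mean
 * "transparent" rather than naming an actual colour. */
#include <stdint.h>
#include <stddef.h>

#define TRANSPARENT_INDEX 0 /* this palette slot no longer names a colour */

/* Composite an indexed image over an existing background, the way
 * a GIF decoder honours the transparent index. */
static void composite_indexed(const uint8_t *indices,
                              const uint32_t palette[256],
                              uint32_t *dest, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (indices[i] != TRANSPARENT_INDEX)
            dest[i] = palette[indices[i]];
        /* matching pixels are left alone, so the background "shows
         * through" -- exactly what the video hardware did, except it
         * keyed on a colour value instead of a palette index */
    }
}

int main(void)
{
    uint8_t  img[4]     = { 3, 0, 0, 7 }; /* index 0 = transparent */
    uint32_t pal[256]   = { 0 };
    uint32_t screen[4]  = { 0x00CCCCCCu, 0x00CCCCCCu, 0x00CCCCCCu, 0x00CCCCCCu };

    pal[3] = 0x00FF0000u;
    pal[7] = 0x000000FFu;
    composite_indexed(img, pal, screen, 4);
    return 0;
}
```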
Alfman,
Before this, the “3d accelerators” were literally separate cards that took in VGA, looked for those pink/green/whatever pixels, and replaced them with the game output.
All of these are just basic, primitive forms of texture rendering, except there is only a single, rectangular surface to paint on.
Today this is part of the native GPU pipeline: the video is still rendered similarly, by a separate decoder, but no pink/green pixels are needed.
(Except when taking a screenshot, DRM-protected video would still render as black.)
*nod* I had a Voodoo 2 and it used a pass-through cable to chain up to the 2D card externally.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
I have a Voodoo 2 card somewhere. I see a lot of people talking about it as though it was using green-screen, but I don’t think the original Voodoo cards were ever capable of chroma-keyed on-screen overlays. It was always full-screen, all-or-nothing output. The VGA input was just used to switch between normal VGA output and 3dfx-rendered output.
Chroma-keying on a VGA input would have been more problematic because it would have to happen in the analog domain. This would have lost even more colors, and the 3dfx would have become a slave to the VGA card’s output resolution and timing signals, which I don’t think was ever the case.
If you attached two monitors, you’d see the normal DOS or Windows desktop on the VGA card and the 3D-rendered game on the Voodoo, with no green-screen present. I actually found someone demonstrating this here…
“3Dfx Voodoo2 on a Modern PC! (FAQs Answered)”
https://youtu.be/91sEpXHnCOk?t=267
Voodoo3 cards had proper 2D & 3D support without needing a separate 2D video card, and worked more like the video cards of today, without these funny loopback cables.
Good point. I imagine chroma-keying in the days of separate accelerator cards was achieved by tying them together internally using the feature connector.
*nod* I no longer have my Voodoo 2, but I still have my Voodoo 3 3000 PCI, though I think it might need its VBIOS reflashed. (Last time I tried it, it wouldn’t POST and I assumed at the time that it was a compatibility bug in the PCI v1.0 motherboard I was trying it in, given that it was an older PC than what I originally used it with. I’ve since learned that it’s common for Voodoo cards to need VBIOS reflashing when they get this old.)
The Voodoo 1 and 2 were pass-through cards, while the Voodoo Banshee, 3, and beyond did 2D as well.
This is a natural result of specialization and generalization cycles.
The early 3D accelerators were highly specialized. They would run a (very restricted) subset of the OpenGL API, and handle only that, nothing else.
They later got merged with 2D, hence no longer needing that VGA pass-through or other shenanigans.
Same with video acceleration, ethernet checksum calculation, encryption, shaders, GPU kernels, “neural cores”, …
These ebbs and flows create these weird artifacts of their time.