I'm trying to make enemies in Witcheye flash red when they get hit, and it's a lot harder than you'd think. But to explain why, we're going to have to take a bit of a detour, into the fascinating history of computer graphics.
[cue Moog-heavy theme song]
Long, long ago, when I was a wee lad programming little games on my parents' IBM PCjr, computer graphics were a pretty simple business. Your screen ran at a resolution of probably 320x200, meaning that 60 times a second, the computer would draw 64000 pixels... which sounds like a lot, but is pretty small potatoes compared to today's computers, with resolutions of 2560x1440 or higher.
Because they were "only" handling 64000 pixels per frame, computers would handle all their graphics on the CPU. In other words, the same chip that was handling all the rest of the program information would handle all of the graphics, too. And it was nice: you had very direct control of any graphics you were creating. If you wanted to draw a red pixel in the middle of the screen, you'd say:
PSET (160, 100), 4
(or something like that... it's been a while!)
...and you'd get a red pixel (color 4 in the PCjr's 16-color palette) at coordinates 160, 100. Like I said: nice.
It was also not so nice, in that even at resolutions that look tiny by today's standards, pushing graphics this way was slowwwwwww. CPUs weren't really made for graphics per se. They could do the math just fine, but you could do a lot better if you had a specialized chip to handle the visuals while the CPU took care of everything else.
This is what console video game systems had. The Ricoh 2A03 CPU in the Nintendo Entertainment System (essentially a modified MOS 6502) wasn't as powerful as the Intel 8088 CPU in the PCjr, but the NES could play games that were vastly more graphically intensive than the PCjr could, with multiple animated sprites and scrolling backgrounds. That's because the NES had a dedicated graphics chip called the PPU--picture processing unit. The NES's CPU could concentrate on handling the game logic, while the PPU did all the drawing.
This is a big reason that "console games" and "PC games" were separate things for a long time--the hardware dictated a different style of play. (One platform that straddled the line was the Amiga, which, unusually for a computer, had a dedicated graphics chip, and as a result had a lot of flashy, graphically intensive games more like what you'd see on consoles.) Nowadays, though--and really since the late '90s--the distinction has been blurred, because PCs now have dedicated graphics chips of their own: GPUs. (You can probably guess the acronym there.) As a result, a high-powered PC can push as many polygons as your PS4, and games now routinely get released across platforms in almost identical versions.
With increased specialization, though, come some limitations. Graphics on the good ol' NES work in very specific ways. Backgrounds are constructed out of 8x8 tiles with four colors each. Foreground objects are drawn with four-color sprites (really three colors plus transparency). The color palettes and other variables are rigidly constrained. You really can't just draw a pixel on the screen the way you could on the PCjr.
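To make that concrete: each NES tile is stored as 16 bytes split into two "bitplanes," and every pixel's two-bit color index is assembled from one bit in each plane. Here's a little Python sketch of the decoding--the tile bytes below are invented for illustration, but the format itself is the real one:

    # Decode one NES tile: 16 bytes = two 8-byte bitplanes.
    # Each pixel gets a 2-bit color index (0-3); a palette maps indices to colors.
    tile = bytes([
        0b00111100, 0b01000010, 0b10000001, 0b10000001,  # plane 0 (low bits), rows 0-7
        0b10000001, 0b10000001, 0b01000010, 0b00111100,
        0b00000000, 0b00111100, 0b01111110, 0b01111110,  # plane 1 (high bits), rows 0-7
        0b01111110, 0b01111110, 0b00111100, 0b00000000,
    ])

    for row in range(8):
        lo, hi = tile[row], tile[row + 8]
        pixels = [(((hi >> (7 - col)) & 1) << 1) | ((lo >> (7 - col)) & 1)
                  for col in range(8)]
        print(pixels)  # row 0 prints [0, 0, 1, 1, 1, 1, 0, 0]

Four index values per pixel, four colors per palette: that's the whole budget.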
Likewise, your control of graphics on a modern computer is less direct than on an old 1980s clunker. That's because GPUs are different from CPUs at the most basic structural level.
A CPU is a generalist; it's made to do just about anything. A GPU, on the other hand, is made to draw graphics--to calculate the shapes of polygons and to blast pixels onto the screen as quickly as possible. While a CPU can be put to the world's most complicated tasks, it can basically only do one thing at a time. (Well, nowadays CPUs can do a few things at a time, but never mind.) A GPU is designed to handle millions of pixels, sixty or more times per second. It can do a lot of things at once, but they're relatively simple things.
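If that's too abstract, here's the difference in miniature, in Python. (numpy isn't a GPU, and this is just an analogy--but the whole-array style on the last line captures the flavor of how GPUs operate.)

    import numpy as np

    pixels = np.random.rand(320 * 200)  # one brightness value per pixel

    # CPU-style: one worker, one pixel at a time.
    brighter_loop = [min(p * 1.5, 1.0) for p in pixels]

    # GPU-style: the same simple operation applied to every pixel "at once".
    brighter_parallel = np.minimum(pixels * 1.5, 1.0)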
The distinction here is kind of like the difference between one adult and 5000 kindergartners. The adult can file a tax return, but it will take him a lot longer to paint a fence than it would take the kids. The kindergartners can paint a fence in no time--but getting them to do it requires pretty specific instructions. Likewise, GPU programs--called "shaders"--are very different from regular programs, and are often talked about with a certain degree of fear. ("Black magic," etc.)
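To give you a taste, here's roughly the shape of a fragment shader, sketched in Python for readability. A real one would be written in a shader language like GLSL and run by the GPU once per pixel, for every pixel of the sprite simultaneously. And to be clear: this is a generic illustration of the idea, not Witcheye's actual code--the flash_amount parameter is a hypothetical value the game would push toward 1.0 when an enemy gets hit.

    # A toy "fragment shader": the tiny function the GPU would run for every
    # pixel of the sprite at the same time.
    def fragment(sprite_pixel, flash_amount):
        r, g, b, a = sprite_pixel
        # Blend each color channel toward solid red; leave transparency alone.
        mix = lambda c, target: c + (target - c) * flash_amount
        return (mix(r, 1.0), mix(g, 0.0), mix(b, 0.0), a)

    print(fragment((0.2, 0.8, 0.4, 1.0), 0.5))  # ~(0.6, 0.4, 0.2, 1.0): halfway to red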
This brings us back to where we started: I'm trying to make enemies in Witcheye flash red. I'll be back next week with an explanation of how I did it.