Merging the Real and Virtual Worlds
For the last few weeks I've been messing around on a project with renowned digital media artist Max Kazemzadeh of the College of Visual Arts and Design (COVAD) at UNT to merge the real and virtual worlds. I've written an app that takes input from a webcam using DirectShow and combines it with animation using DirectX. The final output is run through a pixel shader. The video above shows a character walking from left to right across the screen along a line defined by the video input, in this case the edge of a magenta cut-out on a green background.
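To give a flavour of the idea, here's a rough sketch of how you might turn a magenta-on-green frame into a "walk line" for the character. This isn't the code from the app; the pixel struct, thresholds, and function names are all my illustrative assumptions. It just scans each column from the top of the frame and records where the magenta cut-out starts:

```cpp
#include <cstdint>
#include <vector>

// A single RGB pixel from the camera frame.
struct Pixel { uint8_t r, g, b; };

// Rough chroma test: magenta has strong red and blue, weak green.
// The 128 threshold is an assumption for illustration, not a tuned value.
static bool isMagenta(const Pixel& p) {
    return p.r > 128 && p.b > 128 && p.g < 128;
}

// For each column, scan from the top of the frame down and record the
// first row that looks magenta. The per-column heights form the line
// the character walks along. `frame` is width*height pixels, row-major.
std::vector<int> extractWalkLine(const std::vector<Pixel>& frame,
                                 int width, int height) {
    std::vector<int> line(width, height); // default: no magenta in column
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            if (isMagenta(frame[y * width + x])) {
                line[x] = y;
                break;
            }
        }
    }
    return line;
}
```

In the real thing you'd want some smoothing and noise rejection, since camera chroma is never that clean, but the principle is the same.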
There are various toolsets around for doing this sort of thing, but I wanted the challenge of making one myself. I'm kind of obsessive that way. I want to know how it works under the hood.
The challenge is not just getting it done, but getting it to run at a decent frame rate, keeping in mind that the video camera delivers input at about 24fps, while the graphics card can in principle render at 60fps when tied to the vertical retrace. The result runs at about 30fps on my old 2GHz Toshiba laptop (shown) and upwards of 50fps on Metalwolf, my quad-core heat-generating monster uber-desktop, currently down for repairs after melting its graphics card.
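The key to keeping the frame rate up is not letting the slow camera stall the fast renderer: the render loop should only upload a new texture when the camera actually has a new frame, and otherwise reuse the last one. Here's a toy simulation of that decoupling, assuming a ~24fps camera against a 60fps vsync-locked loop; the struct and timings are mine, not from the app:

```cpp
#include <cstdint>

// Simulated camera source delivering frames at a fixed interval.
// Timestamps are in milliseconds; names are illustrative only.
struct FrameSource {
    int64_t intervalMs;       // ~42 ms for a 24 fps camera
    int64_t lastDelivery = 0; // time of the last delivered frame

    // Returns true when a new camera frame is ready at time `nowMs`;
    // otherwise the renderer keeps using the previously uploaded frame.
    bool newFrameReady(int64_t nowMs) {
        if (nowMs - lastDelivery >= intervalMs) {
            lastDelivery = nowMs;
            return true;
        }
        return false;
    }
};

// Count how many of `renderFrames` vsync ticks (spaced renderIntervalMs
// apart) actually trigger a texture upload from the camera.
int countUploads(FrameSource& cam, int renderFrames, int64_t renderIntervalMs) {
    int uploads = 0;
    for (int i = 0; i < renderFrames; ++i) {
        int64_t now = i * renderIntervalMs;
        if (cam.newFrameReady(now)) ++uploads;
    }
    return uploads;
}
```

Run 60 render ticks at 16ms against a 42ms camera and only about a third of them involve a fresh upload; the rest just redraw the scene with the last frame, which is where the headroom for the animation and shader work comes from.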