Tuesday, November 13, 2007


[video]


I made some major progress on Zack today. Firstly, I fixed the vertical jitter. The camera runs asynchronously, and I had made the mistake of zeroing out the height array at the start of each video frame before computing new height values. Naturally, the game's rendering thread sometimes saw a zeroed-out array, depending on where the video thread got interrupted.
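The fix can be sketched roughly like this (a minimal double-buffer sketch; `HeightMap`, `kColumns`, and all the names are my assumptions, not the actual code): the video thread clears and fills a private back buffer, then swaps it into view only once it is complete, so the render thread can never observe a half-cleared array.

```cpp
#include <array>
#include <mutex>

// Hypothetical sketch, not the original code. Heights are computed into
// a scratch buffer and only published to the renderer when complete.
constexpr int kColumns = 320;            // assumed video width in columns

struct HeightMap {
    std::array<float, kColumns> front{}; // what the render thread reads
    std::array<float, kColumns> back{};  // what the video thread writes
    std::mutex swapLock;

    // Video thread: fill 'back' from a new frame, then publish it.
    template <typename ComputeFn>
    void update(ComputeFn compute) {
        back.fill(0.0f);                 // safe: renderer never sees this
        for (int x = 0; x < kColumns; ++x)
            back[x] = compute(x);
        std::lock_guard<std::mutex> guard(swapLock);
        front.swap(back);                // publish the finished heights
    }

    // Render thread: read from a consistent, fully computed buffer.
    float heightAt(int x) {
        std::lock_guard<std::mutex> guard(swapLock);
        return front[x];
    }
};
```

The zeroing still happens, but only on the buffer the renderer isn't looking at, which is what removes the jitter.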


I also made a first cut at some physics. Notice the new background setup; I can now pull the video camera further back.

Friday, November 09, 2007


[video]


I've been working on getting Zack, my virtual character, to turn more smoothly using an animation sequence from the artist. That involved turning a bunch of the code into real code, as opposed to code held together with chewing gum and barbed wire, as in the last video I posted. I also found that the edge recognition works better with a black background. I think that with another day's work it should be a solid enough foundation on which to build a cool demo.
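A column-wise edge scan of the kind I'm describing might look something like this (a hypothetical sketch, not my actual code; `edgeHeights` and the threshold value are assumptions): against a black background, the first pixel brighter than a threshold in each column is the silhouette edge, which is exactly why a dark backdrop makes recognition easier.

```cpp
#include <vector>
#include <cstdint>

// Hypothetical sketch: scan each column of a grayscale frame from the
// top and record the row of the first pixel brighter than 'threshold'.
// Returns 'height' for columns where nothing bright was found.
std::vector<int> edgeHeights(const std::vector<uint8_t>& gray,
                             int width, int height,
                             uint8_t threshold = 40) {
    std::vector<int> edge(width, height); // 'height' means no edge found
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            if (gray[y * width + x] > threshold) {
                edge[x] = y;              // topmost bright pixel in column
                break;
            }
        }
    }
    return edge;
}
```

With a noisy or bright background, stray pixels above the real silhouette trip the threshold and the detected edge jumps around; a black backdrop keeps everything below the threshold except the subject.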

Wednesday, November 07, 2007

[video]

Interacting with a Virtual Character

I now have a virtual character who will interact with you in a simplistic way. He doesn't like heights, so he walks back and forth between drop-offs. You can use your hand to help him from place to place, as you can see in the video. There are still some height glitches, visible when I carry him from the left side of the pit to the right side, so I'm not claiming it's perfect, but it's still pretty cool.
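The "doesn't like heights" behaviour could be sketched like this (names and thresholds are my assumptions, not the actual code): walk in one direction along the height profile, and turn around whenever the ground ahead drops by more than a tolerable step.

```cpp
#include <vector>

// Hypothetical sketch of a drop-off-avoiding walker: 'ground' holds the
// terrain height per column (e.g. derived from the video edge), and the
// character reverses direction at drops bigger than kMaxDrop.
struct Walker {
    int x = 0;
    int dir = +1;                        // +1 = walking right, -1 = left
    static constexpr float kMaxDrop = 10.0f;

    void step(const std::vector<float>& ground) {
        int next = x + dir;
        bool offEdge = next < 0 || next >= static_cast<int>(ground.size());
        bool dropOff = !offEdge && (ground[x] - ground[next]) > kMaxDrop;
        if (offEdge || dropOff)
            dir = -dir;                  // turn away from the drop
        else
            x = next;
    }
};
```

Helping him with your hand then amounts to raising the terrain height under the video edge until the drop ahead shrinks below the threshold.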

[image]
My Fan Club

Yesterday I had 4 (count them, 4) 120mm fans installed in Metalwolf. It looks impressive: almost the entire left wall of the case (the left side of the image above) is now made up of fans. So far today (touch wood) it seems to be running a lot cooler. Fingers crossed!

Monday, November 05, 2007

[video]

Merging the Real and Virtual Worlds

For the last few weeks I've been messing around on a project with renowned digital media artist Max Kazemzadeh of the College of Visual Arts and Design (COVAD) at UNT to merge the real and virtual worlds. I've written an app that takes input from a webcam using DirectShow and combines it with animation using DirectX; the final output is run through a pixel shader. The video above shows a character who walks from left to right across the screen along a line defined by the video input, in this case the edge of a magenta cut-out on a green background.

There are various toolsets around for doing this sort of thing, but I wanted the challenge of making one myself. I'm kind of obsessive that way. I want to know how it works under the hood.

The challenge is not just getting it done, but getting it to run at a decent frame rate, keeping in mind that the video camera delivers input at about 24fps, while the graphics card can in principle render at 60fps when tied to the vertical retrace. The result runs at about 30fps on my old 2GHz Toshiba laptop (shown) and upwards of 50fps on Metalwolf, my quad-core heat-generating monster uber-desktop, currently down for repairs after melting its graphics card.
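The rate mismatch works out because the render loop doesn't have to redo the video processing on every pass. A sketch of that decoupling (hypothetical names, not my actual code): the capture thread bumps a counter at ~24fps, and the ~60fps render loop reprocesses the video-derived data only when the counter has changed, otherwise it just re-renders the last result.

```cpp
#include <atomic>

// Hypothetical sketch: a lock-free gate between a ~24fps capture thread
// and a faster render loop. The render thread checks whether a new frame
// arrived since its last pass and skips the video work if not.
struct FrameGate {
    std::atomic<unsigned> produced{0};   // bumped by the capture thread
    unsigned consumed = 0;               // render thread's private copy

    void frameArrived() {
        produced.fetch_add(1, std::memory_order_release);
    }

    // Render thread: true when the video input should be reprocessed.
    bool newFrame() {
        unsigned p = produced.load(std::memory_order_acquire);
        if (p == consumed) return false;
        consumed = p;
        return true;
    }
};
```

This way the expensive per-frame video work happens at most 24 times a second, while the rendering itself is free to run as fast as the card allows.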