I’ve been working with some test harnesses for our Croquet worlds. It’s been a real pain working outside of Croquet: getting things to happen across multiple platforms. Moving data around. It’s all so much easier in a virtual space that automatically replicates everything.
Anyway, we finally got it working enough that there are several machines in Qwaq’s Palo Alto office that are all running around as robots in a virtual world, doing various user activities to see what breaks. Being (still!) in Wisconsin, I have to peek at these machines remotely. I’m currently using Virtual Network Computing (VNC), but there’s also Windows Remote Desktop (RDP). These programs basically scrape the screen at some level and send the pictures to me. So when these robots are buzzing around in-world, I get a screen repaint, and then another, and then another. And that’s just one machine. If I want to monitor what they’re all doing, I have to have a VNC window open for each, scraping and repainting away. Yuck. If only there were a better way….
Well, these robots are all in a dedicated “Organization” called “Tests”. All I have to do is sign in to Tests with the normal Qwaq Forums client directly from my own home machine. I take a bird’s-eye position in the sky (with one press of the “End” key) and can watch all the robots do their thing in real time. There’s even a panel that lists all the (robot) users in the org and tells me if they are in a different space in the Organization, and puts up a little exclamation mark if they are being unresponsive. And of course, the graphics are all smooth.
How cool is that?
This is a nice illustration of the difference between the two approaches for distance collaboration:
VNC: I run a dumb client to see a low-frame-rate view of the whole desktop of each of the N individual remote machines. I have to manage N 2D windows, each showing what that machine shows. I can’t change my observational perspective, nor participate in the test itself. There are N pairwise traffic connections. Each observed robot has to run additional software (a VNC server), and I get access to the whole machine on which it runs.
Virtual worlds: I run a smart client with one low-traffic connection that uses my graphics card to give a high interactive frame rate. I can independently choose my observational viewpoint and can interact with the robot participants. There’s nothing else needed on the robot machines, and I don’t get access to those machines – just access to the shared/common virtual world.
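The traffic difference between the two approaches can be sketched with a back-of-envelope calculation. The numbers below are purely illustrative assumptions (real VNC compresses and sends only changed tiles, and real Croquet event rates vary); the point is the shape of the scaling: screen scraping streams pixels per machine, while a replicated world sends small events over one connection and lets my own graphics card render the frames.

```python
# Illustrative bandwidth sketch: screen scraping vs. a replicated virtual
# world. All figures are hypothetical, chosen only to show the contrast.

def vnc_traffic_bytes_per_sec(n_machines, width, height, bytes_per_pixel, fps):
    """Screen scraping: each of N machines streams its framebuffer to me,
    so traffic grows with machine count, resolution, and frame rate."""
    return n_machines * width * height * bytes_per_pixel * fps

def replicated_world_traffic_bytes_per_sec(events_per_sec, bytes_per_event):
    """Replicated world: one connection carrying small input/state events;
    frames are rendered locally, so traffic is independent of frame rate."""
    return events_per_sec * bytes_per_event

# Hypothetical scenario: 5 robot machines at 1024x768, 3 bytes/pixel,
# scraped at a sluggish 2 fps...
vnc = vnc_traffic_bytes_per_sec(5, 1024, 768, 3, 2)

# ...versus one connection carrying an assumed 100 events/sec of 64 bytes.
world = replicated_world_traffic_bytes_per_sec(100, 64)

print(f"screen scraping: ~{vnc / 1e6:.1f} MB/s")   # ~23.6 MB/s
print(f"replicated world: ~{world / 1e3:.1f} KB/s")  # ~6.4 KB/s
```

Even with generous compression on the VNC side, the scraped approach pays per pixel, per frame, per machine, while the replicated world pays only per event, which is why the latter stays smooth no matter how many robots I watch.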