David Smith made this video a year ago, showing how you could have:
- virtual world objects automatically populated by real world objects;
- scripted behavior for those interactive objects that:
  - gives realtime display of real world data associated with those objects;
  - allows you to control the associated real world objects (like Swayze in “Ghost”);
- all while functioning in a standard virtual world in which the participants can communicate with voice/video/text/gesture and spontaneously share apps, etc.
I don’t know why I failed to post this when it first came out. I think maybe I wanted to see how it would play out. Everything shown was written in the widely used Python scripting language, in a way that is added to the system by end-user/programmers rather than being built into the system by the original developers. Would anyone actually do that? Would anyone use in-world computer screens to interact with external real world programs?
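To give a flavor of what such an end-user script might look like, here is a minimal, purely hypothetical Python sketch. The `Panel` class and `read_sensor` function are stand-ins I invented for illustration; the actual system's API was not shown in detail, so this only sketches the pattern of mirroring a real-world reading onto an in-world display.

```python
# Hypothetical sketch of an end-user script that mirrors a real-world
# sensor reading onto an in-world display panel. Panel and read_sensor
# are invented stand-ins, not the actual system's API.

class Panel:
    """Stand-in for an in-world display object."""
    def __init__(self, label):
        self.label = label
        self.text = ""

    def show(self, text):
        # In the real system this would update the 3D panel's surface.
        self.text = text


def read_sensor():
    """Stand-in for a real-world data source (e.g. a pump's flow rate)."""
    return 42.0


def update_panel(panel, reader):
    """Poll the data source and push the value onto the panel."""
    value = reader()
    panel.show(f"{panel.label}: {value:.1f}")
    return value


panel = Panel("Flow rate")
update_panel(panel, read_sensor)
print(panel.text)  # -> Flow rate: 42.0
```

In a live deployment the polling would presumably run on a timer and the display update would be replicated to all participants, but those mechanics depend on the platform and are not shown here.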
Well, the panels and the programming interface have had a year to mature, and we now have multiple government agencies and multiple big oil companies using it for their own operation centers. I’ve never seen most of it and can’t show it, so this remains the only video I can show of an operation center. There are some nice descriptions of portions of the Navy’s sub training environment, but no video. The stuff that is written and public can give you a feel for what isn’t.