Transparent Computing

In What Is It About Immersive 3D?, I claim that being immersed among the application components allows and encourages us to mix and match bits and pieces of different applications. That is, we’re getting rid of the idea of having separate “applications” on a computer.

I forgot to mention the other aspect of immersive 3D: that we want to get rid of the computer. Well, actually, that we want to make using each application object feel like a real-world object, not a computer thingie. The direct-manipulation feel makes it easier to work with stuff, and the lack of indirect abstractions and symbols makes it easier to understand.

A few examples below the fold.

Recently, we were trying to figure out when dragging one component onto another should cause the first to be made a subcomponent of the other. We like doing this when we’re “building” stuff, but think it would be confusing if this happened all the time. But how do we provide this capability for people who aren’t using some “power user” builder mode? We think the answer is to stop thinking like a software developer and start thinking like a kindergartener: we’ll provide a glue stick and let the user cover a component to make it sticky.
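A minimal sketch of how that glue-stick rule might behave. Everything here is hypothetical illustration, not Croquet’s actual API: dropping one component onto another re-parents it only when the user has already rubbed the glue stick over the target.

```python
# Hypothetical sketch of the "glue stick" idea: dropping a component onto
# another re-parents it only when the target has been made sticky.
# The names (Component, apply_glue, drop_onto) are invented for illustration.

class Component:
    def __init__(self, name):
        self.name = name
        self.sticky = False      # set by the user's glue stick, not a builder mode
        self.parent = None
        self.children = []

    def apply_glue(self):
        """The user covers this component with the glue stick."""
        self.sticky = True

    def drop_onto(self, target):
        """Drop self onto target; attach only if target is sticky."""
        if target.sticky:
            if self.parent:
                self.parent.children.remove(self)
            self.parent = target
            target.children.append(self)
            return True          # became a subcomponent
        return False             # just overlaps; no structural change


table = Component("table")
lamp = Component("lamp")

lamp.drop_onto(table)            # nothing happens; the table isn't sticky
table.apply_glue()
lamp.drop_onto(table)            # now the lamp sticks and becomes a child
```

The point of the design is that attachment is a property of the target the user deliberately prepared, not a hidden mode the user has to remember being in.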

Imagine an Internet meeting in which a bunch of people are presenting and discussing ideas about, say, a new product. We want everyone to see and hear each other directly, rather than having, say, silent and motionless avatars in one window and some disconnected 3D text chat in another. A conference room setting should work. Someone should be able to put their PowerPoint presentation on a virtual screen, or maybe the engineer puts up the AutoCAD model on a huge virtual monitor.

OK, so far, that’s just closing travel distance and giving us a well-outfitted conference room for zero cost. Plus you can leave everything up and come back a month or a year later, because there’s no need to clear the whiteboard for the next group coming in.

But I want to do more than just reproduce the real world more cheaply and efficiently. I want to do things that I can’t do in the real world. So instead of showing the AutoCAD model on a flat monitor on the virtual conference room wall, I want to show the 3D model as a sort of hologram on the conference table. But most of the attendees don’t know how to use AutoCAD, and I want them to be able to manipulate the model. Someone might say, “Well, what if we made this part longer?” The avatar grabs a part of the model and stretches it, with all the other parts adjusting to match. “No,” says someone else. “It should have 3 wheels.” And they add the wheels. We want it to be as easy for casual users to manipulate stuff as in the real world, only more so.
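That “all the other parts adjusting to match” behavior is essentially parametric modeling: the casual user edits one dimension and the dependent parts recompute. A toy sketch, with part names and ratios invented purely for illustration:

```python
# Toy parametric model: a user "stretches" one dimension and the
# dependent parts recompute, instead of each part being edited by hand
# in a CAD tool. All names and ratios are invented for illustration.

class Cart:
    def __init__(self, length_m):
        self.length_m = length_m

    @property
    def wheelbase_m(self):
        # the wheelbase stays a fixed fraction of the overall length
        return round(0.6 * self.length_m, 2)

    @property
    def wheel_count(self):
        # long carts automatically get a third axle
        return 4 if self.length_m < 5.0 else 6


cart = Cart(length_m=3.0)
print(cart.wheelbase_m, cart.wheel_count)   # 1.8 4

cart.length_m = 6.0    # someone grabs the model and stretches it...
print(cart.wheelbase_m, cart.wheel_count)   # ...and the rest adjusts: 3.6 6
```

A real CAD kernel does this with a full constraint solver, of course; the sketch just shows why a non-AutoCAD-user could safely grab and stretch the hologram.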

There are already some pretty cool projects around in which multiple video cameras capture a live scene and make a dynamically changing, textured 3D model of it. So instead of cartoon avatars or flat-faced monitor-heads at this conference, we can have video-realistic people. We can mix live 3D video with computer-generated stuff such that you can’t tell (and don’t need to tell) the difference. Except that the computer-generated stuff can be manipulated.

I think this fits with David Smith’s or Julian Lombardi’s ideas of enhancing human performance.

About Stearns

Howard Stearns works at High Fidelity, Inc., creating the metaverse. Mr. Stearns has a quarter century of experience in systems engineering, applications consulting, and management of advanced software technologies. He was the technical lead of the University of Wisconsin's Croquet project, an ambitious project convened by computing pioneer Alan Kay to transform collaboration through 3D graphics and real-time, persistent shared spaces. The CAD integration products Mr. Stearns created for expert system pioneer ICAD set the market standard through IPO and acquisition by Oracle. The embedded systems he wrote helped transform the industrial diamond market. In the early 2000s, Mr. Stearns was named Technology Strategist for Curl, the only startup founded by WWW pioneer Tim Berners-Lee. An expert on programming languages and operating systems, Mr. Stearns created the Eclipse commercial Common Lisp programming implementation. Mr. Stearns has two degrees from M.I.T., and has directed family businesses in early childhood education and publishing.

4 Comments

  1. This comment is mostly orthogonal to your subject, but something in the discussion reminded me of a feature in the old SunOS windowing system that I liked a lot.

    Each window had a picture of a push pin on it. If you put the push pin “in”, the window stayed where it was; else it could be moved or closed. Also, you could set it so child windows either did or did not close when their parent window closed.

    There were a bunch of other nifty features of that UI that I can’t now recall. I only recall that I miss them.

  2. One feature of Open Look that I liked was mouse-over-to-focus, but I understand that it’s now considered to be in poor taste. Not sure why. It might be that people want more deliberate gestures when switching between applications. If so, I’m not sure that the rule really transfers to Brie.

  3. Bit of a random question here, I have been lurking around the opencroquet site, and reading your blogs for a couple of months.

    Do you have any examples of the aforementioned projects for creating 3D models from multiple cameras?

    I have seen one or two (one at CITRIS, I think, which looks cool), but nothing that I can get my hands on.

  4. I don’t think HP’s Coliseum is available. As I understand it, it works with the latest high speed, multi-processor PCs, but isn’t ready for the current mass market. (If the following link doesn’t work, go to hpl.hp.com and type coliseum: http://search.hp.com/query….)

    Also, the official home of Croquet is http://croquetproject.org, although both URLs currently redirect to the same place.

Comments are closed