Search Results for: direct manipulation

Digital Clutch

We’re getting some practical experience using Croquet away from the confines of the lab and out on the wide-open Internet. One problem we found is that our “outgoing” bandwidth (from each machine to the others) is often limited, e.g., by consumer Internet Service Providers. At my home, if I try to send more than about 30 KBytes/sec, my ISP kicks in a sort of governor that transmits the bits more slowly to hold my upload speed at that cap.

When this happens, it takes longer for the bits to reach the Croquet router that timestamps and redistributes them to all the participating machines. No participating machine, including our own, will act on a message until it comes back from the router. So when messages take longer to reach the router, they get timestamped for execution farther and farther from when they were sent. If we keep getting throttled, we fall further and further behind. It doesn’t take long before you do something and it seems like nothing ever happens in response. So you really don’t want to get your upload speed clamped.
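
To make the snowballing concrete, here is a back-of-the-envelope sketch, with made-up rates and my own function names rather than anything from Croquet, of how a clamped upload turns into ever-growing echo delay:

    # Hypothetical illustration, not Croquet code: offer data faster than the
    # link drains it, and every new message waits behind a growing backlog
    # before the router can even timestamp it.

    def echo_delay_per_second(offered_kBps, cap_kBps, seconds=5):
        """Queueing delay (seconds) seen by the newest message, per second of use."""
        backlog_kB = 0.0
        delays = []
        for _ in range(seconds):
            backlog_kB = max(backlog_kB + offered_kBps - cap_kBps, 0.0)
            delays.append(backlog_kB / cap_kBps)  # time to drain what's ahead of us
        return delays

    # Offering 40 KB/s into a 30 KB/s cap: the delay grows by a third of a
    # second every second, which is exactly "falling further and further behind".
    print(echo_delay_per_second(offered_kBps=40, cap_kBps=30))
    # -> [0.33, 0.67, 1.0, 1.33, 1.67] (approximately)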

Whenever we move the mouse around, the mouse position is sent along with a bunch of other stuff. When we send voice or video, much more data goes out. This is fine on a high-speed Local Area Network, but not so good in the real world. We can and should send a lot less data. But how efficient is efficient enough? With different networks, there isn’t a single target number. The limits could even vary with the time of day or with other traffic.

We’ve had some good preliminary results with a rather elegant solution.
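
The mechanism itself is in the full post, but to picture what a clutch could mean here, this hypothetical sketch shows a feedback loop that eases off the send rate when the router echo lags and re-engages as the lag clears. The class, rates, and thresholds below are all my own invention, not the actual design:

    # Hypothetical clutch-like feedback loop, not the actual Croquet mechanism:
    # slip (cut the send rate) when the echo lags, re-engage (creep back up)
    # once the network keeps up again.

    class Clutch:
        def __init__(self, rate_kBps=30.0, floor_kBps=2.0, ceiling_kBps=60.0):
            self.rate_kBps = rate_kBps
            self.floor_kBps, self.ceiling_kBps = floor_kBps, ceiling_kBps

        def adjust(self, echo_delay_s, target_s=0.1):
            if echo_delay_s > target_s:   # slipping: the ISP is throttling us
                self.rate_kBps = max(self.floor_kBps, self.rate_kBps * 0.5)
            else:                         # engaged: probe for more headroom
                self.rate_kBps = min(self.ceiling_kBps, self.rate_kBps + 1.0)
            return self.rate_kBps

Halving on congestion and creeping back up is the same additive-increase/multiplicative-decrease idea TCP uses; a loop like this finds whatever rate the network will bear without being told a target number.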

Continue reading

Low res or no res?

I sometimes get asked about Croquet for computing devices with lower graphics capability, such as today’s phone/PDA/iPods. I think the train of thought is that there’s so much in Croquet that could be valuable independently of the immersive 3D environment, so shouldn’t that part be available on lesser machines?

I feel it is only worthwhile to initially build Croquet – all of Croquet and only one Croquet – on machines with the best commonly available graphics capability and also on those with no visual capability whatsoever!

Continue reading

Transparent Computing

In What Is It About Immersive 3D?, I claim that being immersed among the application components allows and encourages us to mix and match bits and pieces of different applications. That is, we’re getting rid of the idea of having separate “applications” on a computer.

I forgot to mention the other aspect of immersive 3D: that we want to get rid of the computer. Well, actually, that we want to make using each application object feel like using a real-world object, not a computer thingie. The direct-manipulation feel makes it easier to work with stuff, and the lack of indirect abstractions and symbols makes it easier to understand.

A few examples below the fold.

Continue reading

integration with document-oriented applications

How do we integrate Croquet with the Web? How do we integrate with legacy applications in general?

We interact with computers now through a document model developed by Alan Kay’s Xerox PARC team a long time ago. (Xerox: The Document Company.) It is as if we have our head bent over our desktop, looking at a piece of paper. We slide other pieces of paper in and out below the face of our bowed head. In Croquet, Kay’s team today lets us lift our head up off the desk and look at the world around us, including our coworkers. But just as the 3D world has paper within it, shouldn’t the Croquet world have document-based software within it? Yes!

Continue reading

component programming

Marshall McLuhan said that the interesting thing about a medium is what it makes the user become in order to use it.

What does Croquet make people become? Rick McGeer, a Croquet advocate at HP, says that using Croquet makes us become programmers.

What is programming? The classic definition is in terms of computational processes, but object-oriented programming seems to take a different view. And Croquet’s TeaTime architecture describes objects in terms of a mapping between message histories. I’m not finding “process” to be a satisfying answer.
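
As a toy rendering of that last idea (my illustration, not TeaTime source), an object’s state can be treated as nothing more than a function of the timestamped messages it has received:

    # Toy sketch of "an object is a mapping over message histories": state is
    # never the primary truth; it is computed by replaying the messages.
    # My illustration, not TeaTime code.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Message:
        timestamp: float
        selector: str
        args: tuple

    def replay(history):
        """Fold an ordered message history into the state it determines."""
        state = {"x": 0}
        for msg in sorted(history, key=lambda m: m.timestamp):
            if msg.selector == "moveBy":
                state["x"] += msg.args[0]
        return state

    history = [Message(1.0, "moveBy", (3,)), Message(2.0, "moveBy", (-1,))]
    print(replay(history))  # {'x': 2}

Two machines holding the same message history must compute the same state, which is why, as in Digital Clutch above, no machine acts on a message until the router has timestamped and returned it.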

Continue reading

components

The computer spreadsheet doesn’t get enough credit among computer programmers. I think that, more than any other single concept, the spreadsheet (VisiCalc, 1-2-3, Excel) was the killer app for the personal computer. As a programmer, I have tended to think first of formulae and calculation mechanisms when I think of spreadsheets, but the UI and development style are perhaps more significant. For each individual cell, you can look at the value, the formula, or the formatting, and change each through a menu. You can incrementally build up quite a complex application all on your own, never leaving the very environment you use to view the results. Why doesn’t all software work this way, only better? That’s what I’m working on.
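
To make that development style concrete, here is a minimal, hypothetical spreadsheet core; the names and API are illustrative only. Each cell is an inspectable value-or-formula, and an application accretes one cell at a time, inside the same environment that displays the results:

    # Minimal, hypothetical spreadsheet core: a cell holds either a constant
    # or a formula, and values are recomputed on demand from the formulas.

    class Sheet:
        def __init__(self):
            self.formulas = {}  # cell name -> constant, or function of the sheet

        def set(self, name, formula):
            self.formulas[name] = formula

        def value(self, name):
            f = self.formulas[name]
            return f(self) if callable(f) else f

    sheet = Sheet()
    sheet.set("A1", 10)
    sheet.set("A2", 32)
    sheet.set("A3", lambda s: s.value("A1") + s.value("A2"))  # like =A1+A2
    print(sheet.value("A3"))  # 42; edit A1 and A3 follows, with no separate build step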

Continue reading

Where do we fit?


There are many ways to categorize work in Virtual Reality: by feature set, market, etc. Here are some of the dimensions along which I view the fun in VR, and where High Fidelity fits in. To a first-order approximation, these axes are independent of each other. It gets more interesting to see how they are related, but first, the basics.

Scope of Worlds: How do you count VRs?

As I look at existing and developing products for virtual worlds, I see different kinds of cyberspaces. They can be designed around one conceptual place with consistent behavior, as exemplified by social environments like Second Life or large-scale MMORPGs like World of Warcraft. Or they can replicate a single small identical meeting space or game environment, or some finite set of company-defined spaces. Like my previous work with Croquet, High Fidelity is a “metaverse” — a network in which anyone can put up a server and define their own space with their own behavior, much like a Web site. People can go from one space to another (like links on the Web), taking their identity with them. Each domain can handle miles of virtual territory and be served from one or more computers, depending on load. Domains could be “side by side” to form larger contiguous spaces, although no one has done so yet.

Scope of Market: How do you sell VRs?

Many see early consumer VR as a sustaining technology for games, with the usual game categories of FPS, platformers, builders, etc. The general idea is that gaming is a large market that companies know how to reach. Successful games create a feeling of immersion for their devotees, and VR will deepen that experience.

An emerging area is in providing experiences, such as a concert, trip, or you-are-there story.

Others see immersion and direct-manipulation UI as providing unique opportunities for teaching, learning, and training, or for meetings and events. (The latter might be for playing or for working.)

Some make tools and platforms with which developers can create such products.

High Fidelity makes an open source platform, and will host services for goods, discovery, and identity.

Scope of Technology: How do you make and populate VRs?

Software:

By now it looks like most development environments provide game-like 3D graphics, some form of scripting (writing behavior as programs), and built-in physics (objects fall and bounce and block other objects). Some (including High Fidelity and even some self-contained packaged games) let users add and edit objects and behaviors while within the world itself. Often these can then be saved and shared with other users.

A major technical differentiator is whether the environments are for one person or for many, and how those people can interact. For example, some allow small numbers of people on the Internet to see each other in the same “space” and talk by voice, but without physical interaction between their avatars. Others allow one user to, e.g., throw an object to a single other user, but only on a local network in the same physical building. High Fidelity already allows scores of users to interact over the Internet, with voice and physics.

Hardware:

Even low-end Head Mounted Displays (HMDs) can tell which way your head is turned, and the apps rotate your point of view to match. Often these use your existing phone, and the incremental cost beyond that is below $100. However, they don’t update fast enough to keep the image aligned with your head as it moves, resulting in nausea for most folks. The screen is divided in two for a separate viewpoint on each eye, giving a 3-dimensional result, but at only half the resolution of your phone. High-end HMDs have more resolution, a refresh rate of at least 75Hz, and optical tracking of your head position (in addition to rotation) as you move around a small physical space. Currently, high-end HMDs connect to a computer with clunky wires (because wireless isn’t fast enough), and are expected to cost several hundred dollars. High Fidelity works with conventional 2D computer displays, 3D computer displays, and head-mounted displays, but we’re really focusing the experience on high-end HMDs. We’re open source, and work with the major HMD protocols.
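
The nausea point is just arithmetic: the on-screen misalignment during a head turn is roughly angular velocity times motion-to-photon latency. A back-of-the-envelope sketch, with latency figures assumed only for illustration:

    # Back-of-the-envelope: image misalignment during a head turn is roughly
    # angular velocity x motion-to-photon latency. Latency numbers are assumed.

    def misalignment_deg(head_deg_per_s, motion_to_photon_ms):
        return head_deg_per_s * motion_to_photon_ms / 1000.0

    # A casual 100 deg/s head turn:
    print(misalignment_deg(100, 60))  # ~60 ms phone pipeline -> 6.0 degrees off
    print(misalignment_deg(100, 13))  # ~13 ms (one 75 Hz frame) -> 1.3 degrees off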

A mouse provides just two degrees of freedom of motion: x and y, plus the buttons on the mouse and keyboard. Game controllers usually have two x-y sticks and several buttons. The state of the art for VR is high-frequency sensors that track either hand controllers in 3-position + 3-rotation Degrees Of Freedom (for each hand), or each individual finger. Currently, the finger sensors are not very precise. High Fidelity works with all of these, but see the next section.
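
For a concrete picture of what those sensors report on each update (the field names are illustrative, not any particular SDK’s):

    # Illustrative input payloads; field names are hypothetical, not an SDK's.
    from dataclasses import dataclass

    @dataclass
    class MouseSample:       # 2 degrees of freedom, plus buttons
        dx: float
        dy: float

    @dataclass
    class SixDofSample:      # 3 position + 3 rotation, per hand, at high frequency
        x: float; y: float; z: float           # position, meters
        pitch: float; yaw: float; roll: float  # rotation, degrees

    # Two hand controllers plus a tracked HMD: 3 * 6 = 18 degrees of freedom.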

Capture:

One way to populate a VR scene is to construct 3D models and animated characters using the modeling tools that have been developed for Computer Aided Design and Computer Generated movies. This provides a lot of control, but requires specialized skills and talent.

There are now pretty good camera/software apps for capturing real scenes in 3D and bringing them into VR as set dressing. Some use multiple cameras to capture all directions at once in a moving scene, producing what are called “cinematic VR” experiences. Others sweep a single camera around a still scene and stitch the result together into a 3D model. Related tools are being developed for capturing people.

Scope of Object Scale: What do you do in VRs?

The controllers used by most games work best for coarse motion control of an avatar — you can move through a building, between platforms, or guide a vehicle. It is very difficult to manipulate small objects. Outside of games, this building-scale scope is well-suited to see-but-don’t-touch virtual museums.

In the early virtual world Second Life, users can and do construct elaborate buildings, vehicles, furniture, tools, and other artifacts, but it is very difficult to do so with a mouse and keyboard. At High Fidelity, we find that with two 6-DoF high-resolution/high-frequency hand controllers, manipulating human-scaled objects becomes a delight. Oculus’ “toy box” demonstration and HTC’s upcoming Modbox show the kinds of things that can be done.

Interdimensional conflicts:

These different axes play against each other in some subtle ways. For example:

  • Low-end HMDs would seem to be an obvious extension of the game market, but the resolution and stereo make some game graphics techniques worse in VR than on desktops or consoles. The typical game emphasis on avatar movement may accentuate nausea.
  • High-end hand controllers make the physics of small objects a delightful first-person experience, but it’s profoundly better still when you can play Jenga with someone else over the Internet, which some software designs preclude. (See October ’15 video at 47 seconds.)
  • Game controllers provide only enough input to show avatar motion using artist-produced animations. But 18 DoF (two hand controllers plus a high-end HMD) provide enough information to compute realistic skeletal motion for the whole body, even enough to infer the motion of the hips and knees, which have no sensors on them. (See October ’15 video at 35 seconds.) This is called whole-body Inverse Kinematics. High Fidelity can smoothly combine artist animation (for walking with a hand-controller joystick) with whole-body IK (for waving with your hand controller while you are walking); a sketch follows this list.
  • These new capabilities allow for things that people simply couldn’t do at all before, as well as simplifying things that they could not do easily. This opens up uncharted territory and untested markets.
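
On that third point, here is a hypothetical sketch of the animation/IK blend (my illustration, not High Fidelity source): joints that the sensors constrain follow the IK solution, while untracked joints keep the artist’s animation.

    # Hypothetical sketch, not High Fidelity source: blend artist-made joint
    # angles with IK-derived ones, weighting IK only on sensor-driven joints.
    # A real system would slerp quaternions rather than mix scalar angles.

    def blend_pose(animation_pose, ik_pose, ik_weight):
        """Blend two {joint: angle} poses by per-joint IK weight (0..1)."""
        return {
            joint: (1 - ik_weight.get(joint, 0.0)) * animation_pose[joint]
                   + ik_weight.get(joint, 0.0) * ik_pose[joint]
            for joint in animation_pose
        }

    walk = {"shoulder": 10.0, "elbow": 20.0, "hip": 5.0}   # artist animation
    ik   = {"shoulder": 40.0, "elbow": 70.0, "hip": 5.0}   # solved from sensors
    # Waving with the hand controller while the legs keep the walk cycle:
    print(blend_pose(walk, ik, {"shoulder": 1.0, "elbow": 1.0}))
    # -> {'shoulder': 40.0, 'elbow': 70.0, 'hip': 5.0}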

Notice that all of the interactions just mentioned depend on the Scope of Object Scale (above), which isn’t often discussed in the media. It will be interesting to see how the different dimensions of VR play against each other to produce a consistent experience that really works for people.

A word about Augmented Reality: I think augmented reality — in which some form of VR is layered over our real physical experience — will be much bigger than VR itself. However, I feel that there’s so much still to be worked out for VR along the above dimensions and more, that it becomes difficult to make testable statements about AR. Stay tuned…