I sometimes get asked about Croquet for computing devices with lower graphics capability, such as today’s phone/PDA/iPods. I think the train of thought is that there’s so much in Croquet that could be valuable independently of the immersive 3D environment, so shouldn’t that part be available on lesser machines?
I feel it is only worthwhile to initially build Croquet – all of Croquet and only one Croquet – on machines with the best commonly available graphics capability and also on those with no visual capability whatsoever!
There’s a lot in Croquet that has nothing to do with graphics: the shared simulation model of collaboration, edge-network scalability, and transparent persistence. Surely we could build applications that made use of these alone. However, I feel that the combination of these capabilities gives rise to a whole that is greater than the sum of its parts, and that this is best embodied in a spatially oriented user interface. Even small, cheap devices will soon have good graphics capabilities. I call these “nextPods.” Hardware capabilities tend to develop faster than our collective ability to develop appropriate user-interface models that make effective use of them, so let’s aim high and let the hardware catch up.
Besides, community is crucial in the adoption of a new disruptive technology. I don’t want to create a low-res Croquet network that doesn’t play well with the high-res Croquet network.
While nextPods will someday have better graphics, many users will always be visually impaired. But the beautiful thing about a really “good idea” is that it is useful outside the limits of its original implementation. Genius is in the generalities, not the details. A spatially oriented, direct-manipulation model of the world is A Good Thing™ regardless of whether it’s shown visually. Here are three levels at which that could be realized:
How did people play computer games before there were video games? When I was a kid, I used to play a game called “adventure” (aka “Zork” or “Wumpus”).
“You are in a maze of twisty passages all different. There is a wall in front of you.”
> Look right.
“There is a wall in front of you.”
> Look left.
“There is a passage there.”
> Turn left. Walk.
“There is a mailbox here.”
> Look inside.
“There is a map here!”
I’m undoubtedly misreporting the details. (I was pretty bad at the game.) But the point is that all the high-level objects can be reported to the user audibly, as in a descriptive video service. (Our Brie objects already have recordable labels.) The user can then focus their attention on one of them, and the whole process can repeat. This is an aural version of the Zoom User Interface. It all works because there are human-centered objects to be manipulated, not arbitrary computer stuff like “files,” “windows” and “applications.” Creating an immersive 3D user interface happens to create just the right sort of model. We should take a cue from literature and “serious games” and create learning environments, toys, and other applications as narratives, so this fits right in.
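To make the aural Zoom User Interface idea concrete, here is a minimal sketch of the describe–focus–repeat loop, assuming each object carries a spoken label as Brie objects do. The names `Obj`, `speak`, and `aural_zoom` are hypothetical illustrations, not Croquet or Brie API:

```python
# Hypothetical sketch of an aural Zoom User Interface: each object has a
# recordable label; the interface speaks the labels of the objects in the
# current focus, the user chooses one, and the loop repeats a level deeper.
from dataclasses import dataclass, field

@dataclass
class Obj:
    label: str                       # spoken description of the object
    children: list = field(default_factory=list)

def speak(text):
    print(text)                      # stand-in for audio output

def aural_zoom(focus, choose):
    """Describe the current focus, descend into the chosen child,
    and return the object the user finally settles on."""
    while focus.children:
        speak(f"You are at: {focus.label}")
        for i, child in enumerate(focus.children):
            speak(f"  {i}: {child.label}")
        focus = focus.children[choose(focus)]
    speak(f"You have reached: {focus.label}")
    return focus

# A tiny "maze" in the spirit of the adventure transcript above.
mailbox = Obj("a mailbox", [Obj("a map")])
room = Obj("a maze of twisty passages", [Obj("a wall"), mailbox])
found = aural_zoom(room, choose=lambda f: len(f.children) - 1)
```

In a real system, `choose` would be driven by the user's attention (voice, keys, or pointer) rather than a fixed rule, and `speak` would play the recorded label.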
Now, a good musician can hit an arbitrary note at the end of the piano keyboard without looking, and many visually impaired people are very good at acting on their own internal map of the space around them. If the space is properly registered and oriented, many should be able to use a mouse if hovering over an object emitted a consistent and unique tone (and perhaps a spoken label). This saves time versus waiting through a spoken descriptive menu of choices. This is actually already built into Brie: the tones are stereo-located and distance-attenuated to the objects that produce them. We did it to provide a richer and more natural experience for all users, but I suspect that it’s A Good Thing™ for the visually impaired as well.
Finally, we can use the spatial orientation to help the user find things. As in any population, some blind folks have motor-coordination difficulties, and it wouldn’t do to have them waving a mouse around trying to hit an object. While we currently play sounds when the mouse enters or leaves an object, we could also play an attenuated sound as the mouse approaches an object. It would get louder as you got closer, as a sort of “warmer… colder… warmer… getting hot… hot… ding” guide. Each object in the current field of attention could produce a chord. A scan of the room would produce a series of chords: each space is a composition!
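A minimal sketch of how the proximity and stereo cues described above might be computed, assuming simple linear attenuation and constant-power panning. None of this is actual Brie code; the function names and parameters are made up for illustration:

```python
import math

def proximity_gain(cursor, obj, radius=200.0):
    """Volume rises from 0 to 1 as the cursor nears the object:
    the 'warmer... colder... warmer' guide described above."""
    dx, dy = obj[0] - cursor[0], obj[1] - cursor[1]
    distance = math.hypot(dx, dy)
    return max(0.0, 1.0 - distance / radius)

def stereo_pan(listener_x, obj_x, width=400.0):
    """Constant-power pan: returns (left, right) gains so the tone
    appears to come from the object's direction."""
    # pan in [-1, 1]: negative = left of the listener, positive = right
    pan = max(-1.0, min(1.0, (obj_x - listener_x) / width))
    angle = (pan + 1.0) * math.pi / 4.0      # 0 .. pi/2
    return math.cos(angle), math.sin(angle)

gain = proximity_gain(cursor=(100, 100), obj=(100, 150))  # 50 px away
left, right = stereo_pan(listener_x=0, obj_x=400)         # object far right
```

Feeding `proximity_gain` into a tone’s volume as the mouse moves gives the “warmer… colder” effect, while the pan gains place the tone to the object’s left or right; constant-power panning keeps the perceived loudness steady as the tone sweeps across the stereo field.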
Clearly, the aural zoom interface is useful to all of us: in the car, as we walk down the street with our nextPod mic/earbud in, or as we take a bath. It’s really no different than a personal-computer interface from any ordinary science fiction story. I do think this is worth doing, particularly as it can work for all Croquet spaces and thus allows more participation in a single world-wide Croquet network rather than segregating folks into low-res and high-res networks.