I was taught that science is all about managing complexity by creating abstractions over different domains. A common layman’s mistake is to anecdotally observe or hear that something is true at some level, somewhere, and assume that this fact or definition applies throughout every discussion. For example:
One hears that computers are “programmed in binary,” or that they “understand binary,” but in fact, programmers don’t write in binary. Programmers work at a higher level of abstraction than binary encoding.
One hears that computers use “digital circuits” that are simply “on” or “off,” but in fact, the physics of each electronic component is continuously variable. Device physics is at a lower level of abstraction than digital electronics.
So, what’s a server and what’s peer-to-peer? It depends on what’s being discussed.
Early virtual earth Web map systems had a database of images. From a Web browser you indicated where you wanted to look, and the corresponding picture was sent to your browser. Nothing was “computed” in the sense of modeling or simulating the relationship between things, and nothing was “rendered” in the sense of generating rather than simply recalling a pre-generated image. I think this fits most people’s intuitive idea of being server-based. But how does the information get to your computer? It goes through one or more network-traffic computers called routers. On the back-end, the data storage and retrieval might be implemented in a number of ways, including distributed databases. It is only when we deliberately ignore such considerations that we can isolate the function of “connect (somehow) here and get me all of the result that I want, which I will simply display.”
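That “connect (somehow) and recall” model can be sketched in a few lines. This is purely illustrative; the names (`tile_db`, `get_tile`) and the 0.01-degree grid are my inventions, not any real map service’s API:

```python
# A minimal sketch of the "recall, don't render" model: the server holds a
# database of pre-generated images keyed by location, and a request simply
# looks one up. Nothing is computed or rendered on demand.

# Pre-generated imagery, indexed by (latitude, longitude, zoom level).
tile_db = {
    (48.85, 2.35, 10): b"<jpeg bytes for Paris at zoom 10>",
    (40.71, -74.00, 10): b"<jpeg bytes for New York at zoom 10>",
}

def get_tile(lat, lon, zoom):
    """Return the stored image nearest the request, or None if we have
    nothing there. The server only recalls what it already has."""
    # Snap the request to the database's grid (here: 0.01-degree cells).
    key = (round(lat, 2), round(lon, 2), zoom)
    return tile_db.get(key)
```

Everything interesting (routers in between, distributed storage behind the lookup) is deliberately hidden below this one function call.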
Now, suppose you want to zoom in to different places or scroll around. You could imagine having such images generated for you on the fly, on the “server”, but that would be quite slow. Partly there’s a problem with loading the server, and partly there’s a problem with getting the request to the server and the result back fast enough so that it feels interactive. And so providers have given each user a player of some sort (as a separate program or a browser “plug-in”). The server still feeds the player images at different locations or resolutions, and the player uses these to render the transitions smoothly. I would say that this kind of virtual environment is doing the rendering at the user’s machine, rather than at the server. Is this still server-based? Sure, but not quite in the same way as the first example. Distributing some of the processing to the user’s machine allows a greater level of capability.
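The player’s job can be sketched like this, assuming (my assumption, not any particular product’s design) that the server supplies images only at whole-number zoom levels and the player blends between the two nearest cached levels each frame:

```python
import math

def frame(zoom, cache):
    """Pick the two cached images bracketing a fractional zoom, plus the
    blend fraction between them. Rendering the in-between frames happens
    locally, so only a cache miss requires a server round trip."""
    lo, hi = math.floor(zoom), math.ceil(zoom)
    return cache[lo], cache[hi], zoom - lo
```

The point is where the work lands: the server sends a handful of discrete images; the continuous-feeling motion is computed on the user’s machine.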
Three-dimensional virtual worlds also do the rendering at the user’s machine. Just as with the previous pan-and-zoom example, the possibilities cannot be pre-generated. Furthermore, thanks to the gaming industry, modern graphics hardware and drivers make it easy to offload 3D rendering to the graphics card, so that the user’s machine can produce the visual scene without slowing down its central processor.
However, 3D virtual worlds that are shared in real time among simultaneous users introduce another element. There is an ever-changing model of where everything is, and what everything is doing. One way to handle this is to have everyone send information about what they are doing to a central “server”, which integrates the information and stores it in some sort of live database. The user machines can get updates of what they need from there. There are quite a few ways to do this: in some, the integration (a.k.a. simulation) server and the database are one and the same; others distribute the database. To my way of thinking, these have a centralized or server(s)-based simulation, although the rendering is still distributed to the user machines.
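Here is a sketch of that centralized-simulation pattern. The class and method names are hypothetical; the version counter is just one common way to let clients ask for only what changed:

```python
# Every user machine reports what its objects are doing to one server,
# which integrates the reports into a live database of "where everything
# is"; user machines then pull updates and render locally.

class SimulationServer:
    def __init__(self):
        self.world = {}      # live database: object id -> (version, state)
        self.version = 0     # increments on every accepted update

    def submit(self, object_id, state):
        """A user machine reports what one of its objects is doing."""
        self.version += 1
        self.world[object_id] = (self.version, state)

    def updates_since(self, known_version):
        """A user machine asks only for what changed since it last looked."""
        return {oid: state
                for oid, (v, state) in self.world.items()
                if v > known_version}
```

The simulation is centralized in this box, while rendering stays out at the edges.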
Croquet moves not only the rendering to the user’s machine, but also the simulation. It does this through a magical trick that I’ve described elsewhere. But this trick, called Simplified Tea Time, emphasizes something we were able to ignore in the three previous paragraphs: there are messages that have to get routed to and from the user’s machine. For individual users of the most server-based virtual earth Web system, the user’s requests to pan and zoom need only go to the server. For a shared virtual world like Croquet, a user’s movement must get routed to each other user. In Simplified Tea Time, this is done by having each user of a given space send their messages to the same router, which then distributes the messages to all the users in that space. Is that server-based? I don’t know. Such a router is doing no more computation than the box with the blinking lights that connects your home computer and the Internet and through which all your traffic goes (including Croquet and all your P2P traffic). Is that blinking box a server? It is a bottleneck of sorts, however, and so is a Croquet router. (Croquet architect David Reed is trying to create a “Full” Tea Time architecture that would allow for distributed routers made of end-user machines.)
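The router’s role as described above can be sketched in a few lines. This is my illustration of the fan-out idea, not Croquet’s actual implementation; note that the message goes back to the sender too, since every replica should execute the same ordered stream:

```python
# A Simplified-Tea-Time-style router does no simulation at all: it only
# forwards each incoming message, in arrival order, to every user
# currently in the space -- including the original sender.

class SpaceRouter:
    def __init__(self):
        self.inboxes = {}            # user id -> messages delivered so far

    def join(self, user_id):
        self.inboxes[user_id] = []

    def route(self, message):
        # No computation on the payload: just fan it out to everyone.
        for inbox in self.inboxes.values():
            inbox.append(message)
```

Like the blinking box at home, it touches every byte yet computes nothing about the world; the simulation itself runs, replicated, on the user machines.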
And what of additional services?
- A virtual world system (including Croquet) might require a login, and that might be handled as a service on some server. Does the virtual world have an economy of virtual dollars, game points, karma, experience rating, or student-score to be administered in some more centralized or more distributed way?
- There are many static pictures, textures, models, documents, persisted worlds not-in-current-use, and other resources that might be used in a virtual world system, and these might come from a file server or database (distributed or otherwise), or a peer-to-peer network.
- Some world systems (including Croquet) allow you to interact with a Web page in-world, and of course that uses the same server-based WWW that everyone uses out-of-world. But on top of that there may be a streaming-media distribution system to funnel the Web page interactions to the external WWW server and to get the page data back for display. Such systems may be more or less server-centric (e.g., based on VNC or other distributed display technologies, or mesh-p2p, or some sort of tree-distribution in between the two). Such media channels may be used for other on-demand streaming media, VoIP, telephony or video.
- If application sharing is available, it might be hosted on dedicated servers or provided by participants, and then interacted with via any of the above media technologies.
These questions might each be answered by their own myriad layers of more or less distributed technologies, and each service might have a different set of answers. If the answer to one or more of these (such as embedded Web browsing) is “server-based”, does that mean the virtual world as a whole is to be labeled “server-based”? Of course not.
Overmind emergent.
John, I think one of the things necessary for an emergent system is that components at one level are highly robust and SELF-reconfigurable to support changes in the next higher level. The complex web of technologies I describe is quite robust and adaptable by (really really good) engineers. However, my instant reaction is that we’re nowhere near the self-recombinant abilities required for the fears you are voicing. Rest easy. This week.