Our Philip Rosedale gave a talk this week at the Silicon Valley Virtual Reality Conference. The talk was about what the Metaverse will be, roughly comprising the following points. Each point was illustrated with what we have so far in the Alpha version of High Fidelity today. There are a couple of bugs, but it’s pretty cool to be able to illustrate the future with your laptop and an ad-hoc network on your cell phone. It’ll blow you away.
The Metaverse subsumes the Web — includes it, but with personal presence and a shared experience.
The Metaverse has user-generated content, like the Web. Moreover, it’s editable while you’re in it, and persistent. This is a consequence of being a shared experience, unlike the Web.
A successful metaverse is likely to be all open source, and use open content standards.
Different spaces link to each other, like hyperlinks on the Web.
Everyone runs their own servers, with typable names.
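To make those last two ideas concrete, here is a small sketch of how a typed address might name a user-run server and a position inside the space it hosts, much like a path in a web URL. The `metaverse://` scheme and the coordinate syntax are hypothetical, invented for this illustration; this is not High Fidelity’s actual addressing format.

```javascript
// Hypothetical address scheme, for illustration only -- not High Fidelity's
// actual format. An address names someone's server and, optionally, a
// position inside the space that server hosts.
function parseAddress(address) {
  const match = /^metaverse:\/\/([^\/]+)(?:\/(-?\d+),(-?\d+),(-?\d+))?$/.exec(address);
  if (!match) throw new Error(`unparsable address: ${address}`);
  const [, server, x, y, z] = match;
  return {
    server, // the typable name of a user-run server
    position: x === undefined
      ? { x: 0, y: 0, z: 0 } // default spawn point
      : { x: Number(x), y: Number(y), z: Number(z) },
  };
}

console.log(parseAddress("metaverse://alice"));
console.log(parseAddress("metaverse://alice/12,0,-4"));
```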
The internet now supports low latency, and the Metaverse takes advantage of it: low-latency audio, matched by capture of lip sync, facial expressions, and body movement.
The Metaverse will be huge: huge spaces, with lots of simultaneous, rich, interactive content. The apps are written and shared by the participants themselves, in standard, approachable languages like JavaScript.
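To give a feel for how approachable that could be, here is a minimal sketch of the kind of script a participant might write and share. The `Entities` and `Script` objects are hypothetical stand-ins, stubbed out so the file runs on its own under Node.js; they are not High Fidelity’s actual Alpha scripting API.

```javascript
// Minimal sketch of a participant-written world script. `Entities` and
// `Script` are hypothetical stand-ins, stubbed so this runs under Node.js --
// they are not High Fidelity's actual scripting API.

// --- stubbed host objects, for illustration only ---
const store = new Map();
let nextId = 1;
const Entities = {
  addEntity(props) {            // create an object in the shared space
    const id = nextId++;
    store.set(id, { ...props });
    return id;
  },
  editEntity(id, props) {       // change it while everyone is present
    Object.assign(store.get(id), props);
  },
  getProperties(id) {
    return { ...store.get(id) };
  },
};
const updateHandlers = [];
const Script = {
  update: { connect: (fn) => updateHandlers.push(fn) }, // per-frame callback
};

// --- the kind of script a participant might write and share ---
const box = Entities.addEntity({
  type: "Box",
  position: { x: 0, y: 1, z: 0 },
  rotationY: 0,
});

Script.update.connect((deltaTime) => {
  // Spin the box a quarter turn per second.
  const { rotationY } = Entities.getProperties(box);
  Entities.editEntity(box, { rotationY: rotationY + 90 * deltaTime });
});

// Drive a few fake frames so running the sketch shows the effect.
for (let frame = 0; frame < 3; frame++) {
  updateHandlers.forEach((fn) => fn(1 / 60));
}
console.log(Entities.getProperties(box));
```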
The Metaverse will change education. Online videos have great content, but the Metaverse has the content AND the student AND the teacher, and the students and teachers can actually look at each other. The teacher/student gaze is a crucial aspect of learning.
The Metaverse scales by using participating machines. There are 1000 times more desktops on the Web than all the servers in the cloud.
The talk starts at about 2:42 into the stream: