Makers’ Mash-Up

As the nascent VR industry gears up for The Year of VR, the press and pundits are wrestling with how things will break out. Several head-mounted display manufacturers will release their first products early this year, and they are initially positioning them as extensions of the established games market. The idea is that manufacturers need new content for people to play on their boxes, and game studios need new gizmos on which to establish markets for their content. The Oculus will initially ship with a traditional game controller. The Vive will provide hand-sensor wands that allow finer manipulation. Both are thinking in terms of studio-produced games.

The studio/manufacturer model is well understood, and it is huge: bigger than the motion picture industry. The pundits are applying that framework as they wonder about the chicken-and-egg problem of content and market each requiring the other to come first. Most discussion takes for granted a belief that the hardware product market enables, and requires, a studio to invest in lengthy development of story, art, and behavior, followed by release and sale to individuals.

But I wonder how quickly we will move beyond the studio/manufacturer model.

I’m imagining a makers’ mash-up in which people spontaneously create their own games all the time…

  • a place where people could wield their Minecraft hammers in one hand, and their Fruit Ninja swords in the other.
  • a place that would allow people to teleport from sandbox to sandbox, and bring items and behaviors from one to another.
  • a place where people make memories by interacting with the amazing people they meet.

I think there’s good reason to believe this will happen as soon as the technology enables it.

Second Life is an existence proof that this can work. Launched more than a dozen years ago, its roughly 1M monthly users have generated several billion dollars of user-created virtual goods. I think SL’s growth is maxed out on its ancient architecture, but how long will it take any one of the VR hardware/game economies to reach that scale?

Ronald Coase’s Nobel Prize-winning work on the economics of the firm says, loosely, that companies form and grow when organizing an activity inside the firm carries lower transaction costs than contracting for it in the open market. If people can freely combine costume, set, props, music, and behaviors, and are happy with the result, the economic driver for the studio system disappears.

I think the mash-up market will explode when people can easily and inexpensively create items that they can offer for free or for reputation. We’ve seen this with the early Internet, Web, and mobile content, and offline from Freecycle to Burning Man.

High Fidelity’s technical foundation is pretty close to making this happen at a self-sustaining scale. There are many design choices that tend to promote or restrict this, and I’ve described some in the “Interdimensional conflicts” section at the end of “Where We Fit”. The key architectural aspects for a makers’ mash-up are multi-user operation, fine manipulation, automatic full-body animation, scriptable objects that interact through a common physics system shared by all objects, teleporting to new places with the same avatar and objects, and scalability that can respond to unplanned loads. A sketch of what the scriptable-objects piece might look like follows below.
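To make “scriptable objects that interact through a common physics system” concrete, here is a minimal sketch in the style of High Fidelity’s JavaScript entity scripting. Treat it as an illustration under assumptions: the globals (Entities, MyAvatar, Vec3) are supplied by High Fidelity’s script engine, the URLs are placeholders, and the exact property names reflect my reading of the public entity API rather than a definitive reference.

```javascript
// Spawn a grabbable, physical object in front of the avatar. Because it is
// just entity data (appearance, physics, behavior), anyone can create,
// copy, or recombine it without a studio build pipeline.
var hammerID = Entities.addEntity({
    type: "Model",
    name: "mashup-hammer",
    modelURL: "https://example.com/assets/hammer.fbx",       // placeholder asset
    position: Vec3.sum(MyAvatar.position, { x: 0, y: 0.5, z: -1 }),
    dimensions: { x: 0.1, y: 0.4, z: 0.1 },
    dynamic: true,                      // hand it to the shared physics engine
    gravity: { x: 0, y: -9.8, z: 0 },   // the same gravity every object sees
    script: "https://example.com/scripts/hammerBehavior.js"  // attached behavior
});
```

The design point is that look (modelURL), physics (dynamic, gravity), and behavior (script) are all plain data on an entity, so a Minecraft-style hammer and a Fruit Ninja-style sword can coexist in one simulation and travel with you from sandbox to sandbox.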

What You Need To Know To Understand The FCC National Broadband Report.

The FCC is required by Congress to do lots of reports. Of these, the one that gets the most attention is the annual report on broadband deployment under Section 706 of the 1996 Telecommunications Act (47 U.S.C. § 1302). Sure enough, with the latest report announced as up for a vote at the FCC’s January open meeting, we can see the usual suspects gathering to complain that the FCC has “rigged the game” or “moved the goal posts” or whatever sports metaphor comes to mind to accuse the FCC of diddling the numbers for the express purpose of reaching a negative finding, i.e., that “advanced telecommunications capability” (generally defined as wicked fast broadband) is not being deployed in a timely fashion to all Americans.

As usual, to really understand what the FCC is doing, and whether or not they are actually doing the job Congress directed, it helps to have some background on the now-20-year-old story of “Section 706,” what the heck this report is supposed to do, and why we are here. At a minimum, it helps to read the bloody statute before accusing the FCC of a put-up job.

The short version is that, because between 1998 and 2008 the FCC left the definition of “broadband” untouched at 200 kbps, Congress directed the FCC in the Broadband Data Improvement Act of 2008 (BDIA) (signed by President Bush, btw) to actually do some work, raise the numbers to reflect changing needs, and take into account international comparisons so as to keep us competitive with the world and stuff. This is why, contrary to what some folks seem to think, the EU’s goal of 100% subscription at 30 Mbps down or better by 2020 matters much more than the minimum speed needed to get Netflix.

Also, the idea that the FCC needs a negative finding to regulate broadband flies in the face of reality. Under the Verizon v. FCC decision finding that Section 706 is an independent source of FCC authority to regulate broadband, the FCC gets to regulate under Section 706(a) (general duty to encourage broadband deployment) without making a negative finding under Section 706(b) (requirement to do annual report on whether broadband is being deployed to all Americans in a “reasonable and timely manner”).

So why does the FCC do this report every year if they already have regulatory authority over broadband? Because Congress told them to do a real report every year. This is what I mean about reading the actual statute first before making ridiculous claims about FCC motivation. Happily, for those who don’t have several years of law school and aren’t old enough to have actually lived through this professionally, you have this delightful blog to give you the Thug Notes version.

More below . . . .