Tales of the Sausage Factory:
If McConnell Trusted His Own Party, He’d Follow the “Bork Precedent” and Hold A Vote.

There are a lot of interesting questions about the possibility that the President will appoint Judge Sri Srinivasan to replace the late Justice Antonin Scalia on the Supreme Court. For example, what happens if the D.C. Circuit has not yet voted out the net neutrality case? If Srinivasan is nominated and confirmed, would he be able to participate in an appeal of the net neutrality case? I, however, do not propose to answer either of those questions here.

 

No, I’m going to take a moment to urge Republicans to do the right thing and follow the Bork precedent of which they make so much — have a vote and reject a nominee you don’t like. That’s what the Constitution says ought to happen, and it’s a perfectly legitimate thing to do.  The meaning of “with advice and consent of the Senate” has changed a bunch over the years, but it is clearly intended as a restraint and means of forcing cooperation between the Senate and Executive, as discussed by Hamilton in Federalist No. 76.  (Hamilton thought the power to reject appointments would be little used. Unfortunately, George Washington was right about the corrupting influence of party factionalism.)

 

So why are Senator Mitch McConnell (R-KY) and Senator Chuck Grassley (R-IA), the Chairman of the Judiciary Committee, refusing to hold even a hearing on the as-yet-unnamed Obama appointee? Fear. They cannot trust their own party to toe the line, especially the 8 Republican Senators facing difficult re-election fights in swing states.

 

While the check on the President is the need for advice and consent of the Senate, the check on the Senate is that they do their work openly, with each member accountable to their state. If Republicans really believe that “the people deserve to decide,” they would vote to reject the nominee and let “the people decide” if they approve of how their Senator voted. But of course, that would mean letting the people actually talk to their Senators while considering the vote, and potentially voting against those Republican Senators who disappointed their independent and swing-Democratic voters.

 

So the GOP elite leadership have conspired once again to take matters out of the hands of the people. Not by following the Bork precedent, which got a floor vote. Not even by filibustering the nominee, as the combined Republican/Dixiecrat alliance did for Abe Fortas. No, the GOP leadership have so little trust in their own party, and the voters, that they will not even let the matter come to the floor.

 

More below . . .

Continue reading

Inventing the Future:
The Bird Is the Word!

Believe it or not, there’s some great engineering and sportsmanship behind this:

[Embedded video: bird-is-the-word]

We’re trying to create low-latency, high-fidelity, large-scale multi-user experiences with high-frequency sensors and displays. It’s at the edge of what’s possible. The high-end Head Mounted Displays that are coming out are pretty good, but even they have been dicey so far. The hand controllers have truly sucked, and we’ve been basing everything on the faith that they will get better. But even our optimism had waned on the optical recognition of the LeapMotion hand sensor. We made it work as well as we could, and then left it to bit-rot.

But yesterday LeapMotion came out with a new version of their API library, compatible with the existing hardware. Brad is the engineer shown above, and no one knows better than he does how hard it is to make this stuff work. He was so sure that the new software would not “just work” that he offered a bottle of Macallan 18-year-old scotch to anyone who could do so. Like the hardworking bee that doesn’t know it can’t possibly fly, our community leader, Chris, is not an engineer and just hooked it up.

[Image: chris-baring-magic-leap]

In just minutes he made this video to show Brad, who works from a different office.

True to his word, Brad immediately went online and ordered the scotch, to be sent to Chris. Brad then dug out his old Leap hardware from the drawer next to his CueCat and made the more articulate version above.

We sent it to a few folks, including Stanford’s VR lab, which promptly tweeted it with the caption, “One small step for Mankind? Today we saw 1st networked avatar with fingers”.

So now we have avatars with full-body IK driven by 18-degree-of-freedom sensors, plus optical tracking of each finger, facial features, and eye gaze, all networked in real time to scores of simultaneous users, with physics. In truth, we still have a lot of work to do before anyone can just plug this stuff in and have it work, but it’s pretty clear now that this is going to happen!
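
For the technically curious, here is roughly what that adds up to per frame. The sketch below is something I’m making up purely for illustration (it is not our actual wire format or API): the HMD and two hand controllers contribute three 6-DoF poses, which is where the 18 degrees of freedom come from, and the finger, face, and gaze data ride along in the same update.

```typescript
// Illustrative only: a hypothetical per-frame avatar state, not High Fidelity's
// actual wire format. Three 6-DoF poses = 18 degrees of freedom.

interface Pose {
  position: [number, number, number];          // meters
  rotation: [number, number, number, number];  // quaternion (x, y, z, w)
}

interface AvatarFrame {
  head: Pose;                        // 6 DoF from the HMD
  leftHand: Pose;                    // 6 DoF from the left hand controller
  rightHand: Pose;                   // 6 DoF from the right hand controller
  fingerJointAngles: number[];       // optical finger tracking
  faceBlendshapes: number[];         // facial features
  gazeDirection: [number, number];   // eye gaze (yaw, pitch)
  timestampMs: number;               // lets receivers interpolate between frames
}

// Each client broadcasts its frame; receivers reconstruct elbows, hips, and
// knees with inverse kinematics, since those joints have no sensors on them.
function encodeFrame(frame: AvatarFrame): string {
  return JSON.stringify(frame);  // a real system would use a compact binary encoding
}
```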

One more dig: Apple has long said that they can only make things work at the leading edge by making the hardware and software together, non-interoperable with anything else. Oculus has said that networked physics is too hard, and open platforms are too hard. Apple and Oculus make really great stuff that we love. We make only open source software, and we work with all the hardware, on Windows, Mac, and Linux.

Inventing the Future:
Makers’ Mash-Up

As the nascent VR industry gears up for The Year of VR, the press and pundits are wrestling with how things will break out. There are several Head Mounted Display manufacturers that will release their first products early this year, and they are initially positioning them as extensions of the established games market. The idea is that manufacturers need new content for people to play on their boxes, and game studios need new gizmos on which to establish markets for their content. The Oculus will initially ship with a traditional game controller. The Vive will provide hand-sensor wands that allow finer manipulation. They’re both thinking in terms of studio-produced games.

The studio/manufacturer model is well understood and huge — bigger than the motion picture industry. The pundits are applying that framework as they wonder about the chicken-and-egg problem of content and market each requiring the other to come first. Most discussion takes for granted a belief that the hardware product market enables and requires a studio to invest in lengthy development of story, art, and behavior, followed by release and sale to individuals.

But I wonder how quickly we will move beyond the studio/manufacturer model.

I’m imagining a makers’ mash-up in which people spontaneously create their own games all the time…

  • a place where people could wield their Minecraft hammers in one hand, and their Fruit Ninja swords in the other.
  • a place that would allow people to teleport from sandbox to sandbox, and bring items and behaviors from one to another.
  • a place where people make memories by interacting with the amazing people they meet.

I think there’s good reason to believe this will happen as soon as the technology will enable it.

Second Life is an existence proof that this can work. Launched more than a dozen years ago, it has roughly 1M monthly users who have generated several billion dollars of user-created virtual goods. I think SL’s growth is maxed out on its ancient architecture, but how long will it take any one of the VR hardware/game economies to reach that scale?

Ronald Coase’s Nobel-prize-winning work on the economics of the firm says, loosely, that companies form and grow when growing reduces their transaction costs. If people can freely combine costume, set, props, music, and behaviors, and are happy with the result, the economic driver for the studio system disappears.

I think the mash-up market will explode when people can easily and inexpensively create items that they can offer for free or for reputation. We’ve seen this with the early Internet, Web, and mobile content, and offline from Freecycle to Burning Man.

High Fidelity’s technical foundation is pretty close to making this happen at a self-sustaining scale. There are many design choices that tend to promote or restrict this, and I’ve described some in the “Interdimensional conflicts” section at the end of “Where We Fit”. Some of the key architectural aspects for a makers’ mash-up are multi-user operation, fine manipulation, automatic full-body animation, scriptable objects that can interact with a set of common physics for all objects, teleporting to new places with the same avatar and objects, and scalability that can respond to unplanned loads.

Tales of the Sausage Factory:
What You Need To Know To Understand The FCC National Broadband Report.

The FCC is required by Congress to do lots of reports. Of these, the one that gets the most attention is the annual report on broadband deployment under Section 706 of the 1996 Telecommunications Act (47 U.S.C. § 1302). Sure enough, with the latest report announced as up for a vote at the FCC’s January open meeting, we can see the usual suspects gathering to complain that the FCC has “rigged the game” or “moved the goal post” or whatever sports metaphor comes to mind to accuse the FCC of diddling the numbers for the express purpose of coming up with a negative finding, i.e., that “advanced telecommunications capability” (generally defined as wicked fast broadband) is not being deployed in a timely fashion to all Americans.

 

As usual, to really understand what the FCC is doing, and whether or not they are actually doing the job Congress directed, it helps to have some background on the now 20-year-old story of “Section 706,” what the heck this report is supposed to do, and why we are here. At a minimum, it helps to read the bloody statute before accusing the FCC of a put-up job.

 

The short version of this is that, because between 1998 and 2008 the FCC left the definition of “broadband” untouched at 200 kbps, Congress directed the FCC in the Broadband Data Improvement Act of 2008 (BDIA) (signed by President Bush, btw) to actually do some work, raise the numbers to reflect changing needs, and take into account international comparisons so as to keep us competitive with the world and stuff. This is why, contrary to what some folks seem to think, it is much more relevant that the EU has set a goal of 100% subscription at 30 Mbps down or better by 2020 than what the minimum speed to get Netflix is.

 

Also, the idea that the FCC needs a negative finding to regulate broadband flies in the face of reality. Under the Verizon v. FCC decision finding that Section 706 is an independent source of FCC authority to regulate broadband, the FCC gets to regulate under Section 706(a) (general duty to encourage broadband deployment) without making a negative finding under Section 706(b) (requirement to do annual report on whether broadband is being deployed to all Americans in a “reasonable and timely manner”).

 

So why does the FCC do this report every year if they already have regulatory authority over broadband? Because Congress told them to do a real report every year. This is what I mean about reading the actual statute first before making ridiculous claims about FCC motivation. Happily, for those who don’t have several years of law school and aren’t old enough to have actually lived through this professionally, you have this delightful blog to give you the Thug Notes version.

 

More below . . . .

Continue reading

Inventing the Future:
Where do we fit?

[Image: computer-dimensions]

There are many ways to categorize work in Virtual Reality: by feature set, market, etc. Here are some of the dimensions by which I view the fun in VR, and where High Fidelity fits in. To a first-order approximation, these axes are independent of each other. It gets more interesting to see how they are related, but first, the basics.

Scope of Worlds: How do you count VRs?

As I look at existing and developing products for virtual worlds, I see different kinds of cyberspaces. They can be designed around one conceptual place with consistent behavior, as exemplified by social environments like Second Life or large-scale MMORPGs like World of Warcraft. Or they can replicate a single small identical meeting space or game environment, or some finite set of company-defined spaces. Like my previous work with Croquet, High Fidelity is a “metaverse” — a network in which anyone can put up a server and define their own space with their own behavior, much like a Web site. People can go from one space to another (like links on the Web), taking their identity with them. Each domain can handle miles of virtual territory and be served from one or more computers depending on load. Domains could be “side by side” to form larger contiguous spaces, although no one has done so yet.
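
To make the Web analogy a little more concrete, here is a tiny sketch of that model. Every name and type in it is invented for illustration rather than taken from High Fidelity’s actual protocol: a domain is just a server anyone can stand up, and going somewhere else means connecting the same identity to a different server, much like following a link.

```typescript
// Purely illustrative sketch of the web-like metaverse model described above;
// not High Fidelity's actual protocol or API.

interface Identity {
  username: string;    // travels with the user from domain to domain
  avatarUrl: string;   // the avatar model comes along too
}

interface Domain {
  address: string;     // e.g. "vr.example.com", analogous to a web host
  servers: string[];   // one or more machines, depending on load
}

// "Teleporting" is just the same identity connecting to a different server,
// much like following a link to another web site.
function teleport(user: Identity, destination: Domain): string {
  return `${user.username} connected to ${destination.address} carrying avatar ${user.avatarUrl}`;
}

const me: Identity = { username: "howard", avatarUrl: "https://example.com/my-avatar" };
console.log(teleport(me, { address: "sandbox.example.com", servers: ["node-1"] }));
```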

Scope of Market: How do you sell VRs?

Many see early consumer VR as a sustaining technology for games, with the usual game categories of FPS, platformers, builders, etc. The general idea is that gaming is a large market that companies know how to reach. Successful games create a feeling of immersion for their devotees, and VR will deepen that experience.

An emerging area is in providing experiences, such as a concert, trip, or you-are-there story.

Others see immersion and direct-manipulation UI as providing unique opportunities for teaching, learning, and training, or for meetings and events. (The latter might be for playing or for working.)

Some make tools and platforms with which developers can create such products.

High Fidelity makes an open source platform, and will host services for goods, discovery, and identity.

Scope of Technology: How do you make and populate VRs?

Software:

By now it looks like most development environments provide game-like 3D graphics, some form of scripting (writing behavior as programs), and built-in physics (objects fall and bounce and block other objects). Some (including High Fidelity and even some self-contained packaged games) let users add and edit objects and behaviors while within the world itself. Often these can then be saved and shared with other users.
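
As a toy illustration of what “scripting plus built-in physics plus in-world editing” amounts to, here is a sketch of a user adding an object with behavior attached. The API is invented for the example and is not High Fidelity’s actual scripting interface (High Fidelity’s own scripting happens to be JavaScript-based).

```typescript
// Toy sketch of "scriptable objects plus built-in physics"; the API names
// are invented for illustration and are not High Fidelity's actual interface.

interface EntityProperties {
  shape: "box" | "sphere";
  position: { x: number; y: number; z: number };
  dynamic: boolean;                     // participates in the shared physics simulation
  script?: (self: Entity) => void;      // behavior attached while in-world
}

interface Entity {
  id: string;
  properties: EntityProperties;
}

// A minimal in-memory "world": users add objects and attach behavior, and the
// result could then be saved and shared with other users.
class World {
  private entities: Entity[] = [];

  addEntity(props: EntityProperties): Entity {
    const entity: Entity = { id: String(this.entities.length), properties: props };
    this.entities.push(entity);
    props.script?.(entity);             // run the attached behavior on creation
    return entity;
  }
}

const world = new World();
world.addEntity({
  shape: "box",
  position: { x: 0, y: 1, z: 0 },
  dynamic: true,                        // it will fall, bounce, and block other objects
  script: (self) => console.log(`bouncy box ${self.id} created`),
});
```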

A major technical differentiator is whether or not the environments are for one person or many, and how they can interact. For example, some allow small numbers of people on the Internet to see each other in the same “space” and talk by voice, but without physical interaction between their avatars. Others allow one user to, e.g., throw an object to a single other user, but only on a local network in the same physical building. High Fidelity already allows scores of users to interact over the Internet, with voice and physics.

Hardware:

Even low-end Head Mounted Displays can tell which way your head is turned, and the apps rotate your point of view to match. Often these use your existing phone, and the incremental cost beyond that is below $100. However, they don’t update fast enough to keep the image aligned with your head as it moves, resulting in nausea for most folks. The screen is divided in two for separate viewpoints on each eye, giving a 3-dimensional result, but at only half the resolution of your phone. High-end HMDs have more resolution, a refresh rate of at least 75Hz, and optical tracking of your head position (in addition to rotation) as you move around a small physical space. Currently, high-end HMDs connect to a computer with clunky wires (because wireless isn’t fast enough), and are expected to cost several hundred dollars. High Fidelity works with conventional 2D computer displays, 3D computer displays, and head-mounted displays, but we’re really focusing the experience on high-end HMDs. We’re open source, and work with the major HMD protocols.

A mouse provides just 2 degrees of freedom of motion: x and y, plus the buttons on the mouse and computer. Game controllers usually have two x-y controllers and several buttons. The state of the art for VR is high-frequency sensors that track either hand controllers in 3-position + 3-rotation Degrees Of Freedom (for each hand), or each individual finger. Currently, the finger sensors are not very precise. High Fidelity works with all of these, but see the next section.

Capture:

One way to populate a VR scene is to construct 3D models and animated characters using the modeling tools that have been developed for Computer Aided Design and Computer Generated movies. This provides a lot of control, but requires specialized skills and talent.

There are now pretty good camera/software apps for capturing real scenes in 3D and bringing them into VR as set-dressing. Some use multiple cameras to capture all directions at once in a moving scene, to produce what are called “cinematic VR” experiences. Others sweep a single camera around a still scene and stitch together the result as a 3D model. There are related tools being developed for capturing people.

Scope of Object Scale: What do you do in VRs?

The controllers used by most games work best for coarse motion control of an avatar — you can move through a building, between platforms, or guide a vehicle. It is very difficult to manipulate small objects. Outside of games, this building-scale scope is well-suited to see-but-don’t-touch virtual museums.

In the early virtual world Second Life, users can and do construct elaborate buildings, vehicles, furniture, tools, and other artifacts, but it is very difficult to do so with a mouse and keyboard. At High Fidelity, we find that by using two 6-DoF high-resolution/high-frequency hand controllers, manipulating more human-scaled objects becomes a delight. Oculus‘ “toy box” demonstration and HTC’s upcoming Modbox show the kinds of things that can be done.

Interdimensional conflicts:

These different axes play against each other in some subtle ways. For example:

  • Low-end HMDs would seem to be an obvious extension of the game market, but the resolution and stereo make some game graphics techniques worse in VR than on desktop or consoles. The typical game emphasis on avatar movement may accentuate nausea.
  • High-end hand controllers make the physics of small objects a delightful first-person experience, but it’s profoundly better still when you can play Jenga with someone else on the Internet, which depends on software design limitations. (See October ’15 video at 47 seconds.)
  • Game controllers provide only enough input to show avatar motion using artist-produced animations. But 18 DoF (two hand controllers plus a high-end HMD) provide enough info to compute realistic skeletal motions of the whole body, even producing enough info to infer the motion of the hips and knees that do not have sensors on them. (See October ’15 video at 35 seconds.) This is called whole-body Inverse Kinematics. High Fidelity can smoothly combine artist animation (for walking with a hand-controller joystick) with whole-body IK (for waving with your hand controller while you are walking); a rough blending sketch follows this list.
  • These new capabilities allow for things that people just couldn’t do at all before, as well as simplifying things that they could not do easily. This opens up uncharted territory and untested markets.
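
Here is the rough idea behind that animation/IK blend, as promised above. It is a minimal sketch written for illustration, with made-up joint names and weights rather than our actual implementation: joints that the sensors and IK solver constrain directly lean mostly on the IK result, while everything else follows the artist’s walk cycle, blended per joint.

```typescript
// Minimal, illustrative blend of an artist-made walk animation with
// sensor-driven whole-body IK; not High Fidelity's actual implementation.

type Quaternion = [number, number, number, number];

interface SkeletonPose {
  [joint: string]: Quaternion;  // e.g. "hips", "spine", "leftForeArm", "head"
}

// Spherical linear interpolation between two joint rotations.
function slerp(a: Quaternion, b: Quaternion, t: number): Quaternion {
  let dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
  const sign = dot < 0 ? -1 : 1;   // take the shorter arc
  dot *= sign;
  if (dot > 0.9995) {              // nearly identical: lerp and renormalize
    const out = a.map((v, i) => v + t * (sign * b[i] - v));
    const len = Math.hypot(out[0], out[1], out[2], out[3]);
    return [out[0] / len, out[1] / len, out[2] / len, out[3] / len];
  }
  const theta = Math.acos(dot);
  const s1 = Math.sin((1 - t) * theta) / Math.sin(theta);
  const s2 = Math.sin(t * theta) / Math.sin(theta);
  return [0, 1, 2, 3].map(i => s1 * a[i] + s2 * sign * b[i]) as Quaternion;
}

// Per-joint weight: 0 = pure artist animation, 1 = pure sensor-driven IK.
function blendPose(
  animation: SkeletonPose,
  ik: SkeletonPose,
  ikWeight: { [joint: string]: number },
): SkeletonPose {
  const result: SkeletonPose = {};
  for (const joint of Object.keys(animation)) {
    const w = ikWeight[joint] ?? 0;
    result[joint] = ik[joint] ? slerp(animation[joint], ik[joint], w) : animation[joint];
  }
  return result;
}
```

In practice the per-joint weights would also ramp up and down over time, so a wave can start and finish while the walk cycle keeps running underneath.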

Notice that all the mentioned interactions depend on The Scope of Object Scale (above), which isn’t often discussed in the media. It will be interesting to see how the different dimensions of VR play against each other to produce a consistent experience that really works for people.


 

A word about Augmented Reality: I think augmented reality — in which some form of VR is layered over our real physical experience — will be much bigger than VR itself. However, I feel that there’s so much still to be worked out for VR along the above dimensions and more, that it becomes difficult to make testable statements about AR. Stay tuned…

Inventing the Future:
The Big Show

We do a lot of live demos. Our VR software is in alpha and we sometimes run into bugs, but we pride ourselves on showing the true state of things instead of slides and videos. It’s often a build with radical changes created just a few moments before. And yet the most common trouble is the unpredictable network connectivity one finds at demo venues. Our folks have done quite a few demos tethered to a cell phone.

But this demo from Sony is in a whole other class. Ouch. I’m sure we’ll all have great hand controllers within a year or so, but right now, hand controller hardware really sucks.

By the way, I think their IK looks great, and I especially like the jump. But notice that the avatars are stuck on a pedestal instead of moving around. I haven’t seen anyone combine IK with artist-designed animations the way we have.

Tales of the Sausage Factory:
My Amazingly Short (For Me) Quickie Reaction To Oral Argument

So, I suppose you’re wondering how oral argument went. Since we have less than an hour before Shabbos, I will give you all my short version. You can download the recording from the D.C. Circuit here: Part I (wireline), Part II (wireless, First Amendment, Forbearance).

 

As always, the usual disclaimers apply. It is always perilous to try to guess from oral argument how things are going to go. Judges may ask a lot of questions to explore options, or they may let one judge pursue a line of inquiry while hanging back. And there are lots of issues that never get discussed that are part of the appeal and will get decided based on the written record. Or the judges may be leaning one way, but when they start drafting and hash things out further they change their mind.

 

Taking all that into account, here are my impressions based on sitting in the front row listening and watching the judges and attending to all the nuances, as filtered through my obvious bias in wanting to see the FCC affirmed.

 

More below . . . .

Continue reading

Tales of the Sausage Factory:
Net Neutrality: Tomorrow Is The Judgement Day (Well, Oral Argument).

So here we are. One day more until oral argument on the FCC’s February 2015 decision to reclassify broadband as a Title II telecom service and impose real net neutrality rules. We definitely heard the people sing — 4 million of them sang the songs of very angry broadband subscribers to get us where we are today. But will we see a new beginning? Or will it be every cable company that will be king? Will Judges Tatel and Srinivasan and Senior Judge Williams nip net neutrality in the bud? Or will we finally meet again in freedom in the valley of the Lord?

 

You can read my blog post on the Public Knowledge blog for a summary of the last 15 years of classification/declassification fights, rulemakings, and other high drama. You can read my colleague Kate Forscey’s excellent discussion of the legal issues in this blog post here. This blog post is for all the geeky Tales of the Sausage Factory type factoids you need to know to really enjoy this upcoming round of legal fun and games and impress your friends with your mastery of such details. Things like: How do you get in to the court to watch? What opinions have the judges on the panel written that give us a clue? What fun little things should you watch for during argument to try to read the tea leaves? I answer these and other fun questions below . . .

Continue reading

Tales of the Sausage Factory:
A Reflection on the First Thanksgiving.

It is fashionable now to reduce the 250-year American experience between the European settlers and the Native Americans to simply one of oppression and displacement. Or, as one friend put it: “I’m Thankful that a bunch of European religious fanatics came over and displaced the native population.”

But it wasn’t like that at the First Thanksgiving, or for about 35 years thereafter. In failing to appreciate the efforts of English settlers and Wampanoag tribes in the region to live together in peace in the first three decades of English migration to Plymouth, we ignore both that a better world was possible — and that we have the capacity to build a better world today . . .

 

Continue reading

Tales of the Sausage Factory:
In Memoriam: Wally Bowen — Internet Pioneer, Community Activist, and A Hell of A Good Guy.

Last week, we lost a true leader for rural communities, a true champion of social justice in communications policy, and a personal friend and inspiration.  Wally Bowen, founder of Mountain Area Information Network, died of ALS (aka “Lou Gehrig’s Disease”) on November 17 at the age of 63.

You can read his official obituary here. As always, such things give you the what and the where, but no real sense of what made Wally such an amazing person. I don’t have a lot of personal heroes, but Wally was one. Simply put, he gave the work I do meaning.

It’s almost Thanksgiving, and I am truly thankful for the time we had with Wally on Earth, even if I am sorry that it ended too soon. I elaborate below . . .

Continue reading