Is Net Neutrality (And Everything Else) Not Dead Yet or Pining For the Fjords? Contemplating Trump’s Telecom Policy.

The election of Donald Trump has prompted great speculation over the direction of telecom policy in the near future. Not surprisingly, everyone assumes that the primary Republican goal will be to completely roll back net neutrality and just about every other rule or policy adopted by the Wheeler FCC — perhaps even eliminating the FCC altogether or scaling back its authority to virtual non-existence. Why not? In addition to controlling the White House, Republicans have majorities in the Senate and the House. Jeff Eisenach, the head of Trump’s FCC transition team (now called “Landing Teams”), has been one of the harshest critics of the FCC under both Wheeler and Genachowski. So it is unsurprising to see a spate of articles and blog posts on the upcoming death of net neutrality, broadband privacy, and unlicensed spectrum.

 

As it happens, I have now been through two transitions where the party with the White House has controlled Congress. In neither case have things worked out as expected. Oh, I’m not going to pretend that everything will be hunky-dory in the land of telecom (at least not from my perspective). But having won things during the Bush years (expanding unlicensed spectrum, for example), and lost things in the Obama years (net neutrality in 2010), I am not prepared to lie down and die, either.

 

Telecom policy — and particularly net neutrality, Title II and privacy — now exists in an unusual, quantum state that can best be defined with reference to Monty Python. On the one hand, I will assert that net neutrality is not dead yet. On the other hand, it may be that I am simply fooling myself that net neutrality is merely pining for the fjords when, in fact, it is deceased, passed on, has rung down the curtain and joined the choir invisible.

 

I give my reasons for coming down on the “not dead yet” side — although we will need to work our butts off to keep from getting clopped on the head and thrown into the dead cart. I expect the usual folks will call me delusional. However, as I have said a great deal over the years: “If I am delusional, I find it a very functional delusion.”

 

More below . . . .


The George Washington Pledge: “To Bigotry No Sanction, To Persecution No Assistance.”

I’m starting what I call the George Washington Pledge.

 

THE GEORGE WASHINGTON PLEDGE

“I pledge to give to bigotry no sanction, to persecution no assistance. I pledge to work toward a world where everyone may sit under their own vine and fig tree, and there shall be none to make them afraid. A world that scatters light and not darkness in our paths, and makes us all in our several vocations useful here, and in due time and way everlastingly happy.”

 

Where did that come from, what does it have to do with George Washington, and don’t I know that George Washington was a bigot who kept slaves? To answer the last question first, yes. I know that it is one of the great and cruel tragedies of history that George Washington himself, while expressing these concepts, was committing the ultimate bigotry and persecution by holding slaves and asserting that those of African descent were not fully human. Nevertheless, while this pledge made by the First President of the United States has never been fulfilled, it is time we committed to making it true.

 

We live now in a time when it is the duty of those of us committed to the success of the American Experiment in self-rule to remember the promises and values which the founders of our country made the foundation of governance. Whatever their past success, whatever the sincerity of those who wrote the words, it falls on us to do our part to make these foundational values real. To quote the words of our first President: “If we have wisdom to make the best use of the advantages with which we are now favored, we cannot fail, under the just administration of a good Government, to become a great and a happy people.”

 

So where do the words of the George Washington Pledge come from? And what do I mean when I commit myself to it? See below . . .

Are Police Jamming Cell Phones At Standing Rock Protest? The FCC Should Investigate.

Given the lack of coverage in mainstream media, you might not have heard about the ongoing protest against the construction of the Dakota Access Pipeline immediately upstream from the Standing Rock Sioux reservation, aka #NoDAPL. You can find some good statistics on the pipeline and the number of arrests associated with the protest here. Setting aside my personal feelings about democracy, freedom to peacefully protest, and how the Sioux concerns seem rather justified in light of the Alabama pipeline explosion, this has now raised an interesting communications issue that only an FCC investigation can solve. Are police jamming or illegally spying on communications at the protest and the associated Sacred Stone Camp?

 

Over the last week, I have seen a number of communications from the protest about jamming, particularly in the period immediately before and during the Thursday effort by police to force protesters off the land owned by Dakota Access Pipeline. In addition, this article in Wired documents why tribal leaders connected with the tribal telecom provider, Standing Rock Telecom, think they are being jammed. I’ve had folks ask to speak to me using encrypted channels for fear that law enforcement will use illegal monitoring of wireless communications. As this article notes, there are a number of telltale signs that law enforcement in the area have deployed IMSI catchers, aka Stingrays, to monitor communications by protesters. However, as I explain below, proving such allegations — particularly about jamming — is extremely difficult to do unless you are the FCC.

 

Which is why the FCC needs to send an enforcement team to Standing Rock to check things out. Given the enormous public interest at stake in protecting the free flow of communications from peaceful protests, and the enormous public interest in continuing live coverage of the protests, the FCC should move quickly to resolve these concerns. If law enforcement in the area are illegally jamming communications, or illegally intercepting and tracking cell phone use, the FCC needs to expose this quickly and stop it. If law enforcement are innocent of such conduct, only an FCC investigation on the scene can effectively clear them. In either case, the public deserves to know — and to have confidence in the Rule of Law with regard to electronic communications.

 

More below . . . .


Discovery Part 2: How

 “The Universe is made of stories, not of atoms.” -Muriel Rukeyser

Last time, I discussed why we offer suggested locations to teleport to, and the 5 W’s of the user interaction for suggestions. This time I’ll discuss how we do that, with some level of technical specificity.

Nouns and Stories

Each suggestion is a compact “story” of the form: User => Action => Object. Facebook calls these “User Stories” (perhaps after product-management language), and linguists would call them “SVO” sentences. For example, Howard => snapshot => Playa, which we display as “Howard took a snapshot in Playa”. In this case Playa is a Place in-world. The Story also records the specific position/orientation where the picture was taken, and the picture itself. Each story has a unique page generated from this information, giving the picture, description, a link to the location in-world, and the usual buttons to share it on external social media. Because of the metadata on the page, the story will be displayed in external media with the picture and text, and clicking on the story within an external feed will bring readers to this page.
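Concretely, such a Story can be modeled as a small record. Here is a Python sketch; the field and method names are my own invention for illustration, not High Fidelity's actual schema:

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class Story:
    """A compact User => Action => Object record, plus snapshot metadata."""
    user: str                                   # creator, e.g. "Howard"
    action: str                                 # e.g. "snapshot"
    obj: str                                    # a Place, User, or Thing, e.g. "Playa"
    position: tuple = (0.0, 0.0, 0.0)           # where the picture was taken
    orientation: tuple = (0.0, 0.0, 0.0, 1.0)   # facing direction, as a quaternion
    image_url: str = ""
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def headline(self) -> str:
        # Map each action to a display form, e.g. "Howard took a snapshot in Playa".
        verbs = {"snapshot": "took a snapshot in"}
        return f"{self.user} {verbs.get(self.action, self.action)} {self.obj}"

story = Story(user="Howard", action="snapshot", obj="Playa")
print(story.headline())  # Howard took a snapshot in Playa
```

The unique `id` is the internal identifier by which the story's page (and anything else) looks it up.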

[Image: snapshot-howard-playa]

The User is, of course, the creator of the story. The user has a page, too, which shows some of their user stories, and any picture and description they have chosen to share publicly. If allowed, there’s a link to go directly to that user, wherever they are.

[Image: user-card-howard]

For our current snapshot and concurrency user stories, the Object is the public Place by which the user entered. More generally, it could be any User, Place, or (e.g., marketplace) Thing. These also get their own pages.

[Image: place-card-playa]

The “feed” is then simply an in-world list of such Stories.

Control

Just as any computer on a network can be reached by a raw IP address or by a name registered with ICANN, a High Fidelity user may create places at an IP address or even a free temporary place name, or they can register a non-temporary name. Places are shown as suggestions to a user only when they are explicitly named places, with no entry restrictions, matching the user’s protocol version. (When we eventually have individual user feeds, we could consider a place to be shareable to a particular user if that logged-in user can enter, rather than only those with no restrictions.)

Snapshots are shown only when explicitly shared by a logged-in user, in a shareable place.

[Image: snapshot-review]

Scale

At metaverse scale, there could be trillions of people, places, things, and stories about them. That’s tough to implement, and tough for users to make use of the firehose of info. But right now there aren’t that many, and we don’t want to fracture our initial pioneer community into isolated feeds-of-one. So we are designing for scale, but building iteratively, initially using our existing database services and infrastructure. Let’s look first at the design for scale:

First, all the atoms of this universe – people, places, things, and stories – are each small bags of properties that are always looked up by a unique internal identifier. (The system needs to know that identifier, but users don’t.) We will be able to store them as “JSON documents” in a “big file system” or Distributed Hash Table. This means they can be individually read or written quickly and without “locking” other documents, even when there are trillions of such small “documents” spread over many machines (in a data center or even distributed on participating user machines). We don’t traverse or search through these documents. Instead, every reason we have for looking one up is covered by something else that directly has that document’s identifier.
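As a rough illustration of that lookup model, here is a toy in-memory stand-in in Python (not High Fidelity's actual storage code; a real deployment would shard the documents across many machines):

```python
import json
import uuid

class DocumentStore:
    """Toy stand-in for the 'big file system' / Distributed Hash Table
    described above: every atom is a small JSON document, read or written
    whole, and always looked up directly by its unique identifier."""

    def __init__(self):
        self._docs = {}  # identifier -> serialized JSON document

    def put(self, properties, doc_id=None):
        doc_id = doc_id or str(uuid.uuid4())
        # Writing touches only this one document -- no global locking.
        self._docs[doc_id] = json.dumps(properties)
        return doc_id

    def get(self, doc_id):
        # No traversal or search; every read is a direct key lookup.
        return json.loads(self._docs[doc_id])

store = DocumentStore()
place_id = store.put({"type": "Place", "name": "Playa"})
print(store.get(place_id)["name"])  # Playa
```

The point of the sketch is the access pattern: nothing ever scans the collection, so documents can live anywhere and be read or written independently.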

(There are a few small exceptions to the idea that we don’t have to lock any document other than the one being looked up. For example, if we want to record that one user is “following” another, that has to be done quite carefully to ensure that a chain of people can all decide to follow the next at the same time.)

There are also lists of document identifiers that can be very long.  For example, a global feed of all Stories would have to find each Story one or more at a time, in some order. (Think of an “infinitely scrollable” list of Stories.) One efficient way to do that is to have the requesting client grab a more manageable “page” of perhaps 100 identifiers, and then look up the documents for however many of those fit on the current display. As the user scrolls, more are looked up. When the user exhausts that set of identifiers, the next set is fetched. Thus such “long paged lists” can be implemented as a JSON document that contains an ordered array of a number of other document identifiers, plus the identifier for the next “page”. Again, each fetch just requires one more document retrieval, looked up directly by identifier. The global feed object is just a document that points to the identifier of the “page” that is currently first.  Individual feeds, pre-filtered interest lists, and other features can be implemented as similar long paged lists.
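The paging scheme can be sketched as follows (a Python toy; the document shapes and names are mine, not High Fidelity's actual format):

```python
# Toy documents, each looked up directly by identifier (no searching):
docs = {
    "feed":   {"first_page": "page-1"},
    "page-1": {"ids": ["s1", "s2"], "next": "page-2"},
    "page-2": {"ids": ["s3"], "next": None},
    "s1": {"text": "Howard took a snapshot in Playa"},
    "s2": {"text": "3 people are hanging out in Stacks"},
    "s3": {"text": "Ada took a snapshot in Mexico"},
}

def scroll_feed(feed_id):
    """Walk a 'long paged list': each page holds an ordered array of
    story identifiers plus the identifier of the next page, so every
    step is a single direct document fetch."""
    page_id = docs[feed_id]["first_page"]
    while page_id is not None:
        page = docs[page_id]
        for story_id in page["ids"]:
            yield docs[story_id]   # one direct lookup per story
        page_id = page["next"]     # fetch the next page only on demand

print([s["text"] for s in scroll_feed("feed")])
```

Because `scroll_feed` is a generator, a client only pays for the stories the user actually scrolls to.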

However, at current scale, we don’t need any of that yet. For the support of other aspects of High Fidelity, we currently have a conventional single-machine Rails Web server, connected to a conventional Postgres relational database. The Users, Places, and Stories are each represented as a data table, indexed by identifier.  The feed is a sorted query of Stories.

We expect to be able to go for quite some time with this setup, using conventional scaling techniques of bigger machines, distributed databases, and so forth.  For example, we could go to individual feeds as soon as there are enough users for a global feed to be overwhelming, and enough of your online friends to have High Fidelity such that a personal feed is interesting. This can be done within the current architecture, and would allow a larger volume of Stories to be simultaneously added, retrieved, scored, and sorted quickly.  Note, though, that we would really like all users to be offered suggestions — even when they choose to remain anonymous by not logging in, or don’t yet have enough experience to choose who or what to follow. Thus a global feed will still have to work.

Scoring

We don’t simply list each Story with the most recent ones first. If there’s a lot of activity someplace, we want to clearly show that up front without a lot of scrolling, or a lot of reading of place or user names. For example, a cluster of snapshots in the feed can often make it quite clear what kind of activity is happening, but we want the ordering mechanism to work across mixes of Stories that haven’t even been conceived of yet.

Our ordering doesn’t have to be perfect – there is no “Right Answer”. Our only duty here is to be interesting. We keep the list current by giving each Story a score, which decays over time. The feed shows Stories with the highest scores first. Because the scores decay over time, the feed will generally have newer items first, unless the story score started off quite high, or something bumped the score higher since creation. For example, if someone shares a Story in Facebook, we could bump up the score of the Story — although we don’t do that yet.

Although we don’t display ordered lists of Users or Places, we do keep scores for them. These scores are used in computing the scores of Stories.  For example, a snapshot has a higher initial score if it is taken in a high scoring Place, or by a high scoring User. This gives stories an effect like Google’s page ranking, in which pages with lots of links to them are listed before more obscure pages.

To keep it simple, each item only gets one score. While you and I might eventually have distinct feeds that list different sets of items, an item that appears in your list and my list still just has one score rather than a score-for-you and different score-for-me. (Again, we want this to work for billions of users on trillions of stories.)

To compute a time-decayed score, we store a score number and the timestamp at which it was last updated.  When we read an individual score (e.g., from a Place or User in order to determine the initial score of a snapshot taken in that Place by that User), we update the score and timestamp.  This fits our scaling goals because only a small finite number of scores are updated at a time. For example, when the score of a Place changes, we do not go back and update the scores of all the thousands or millions of Stories associated with that Place. The tricky part is in sorting the Stories by score, because sorting is very expensive on big sets of items. Eventually, when we maintain our “long paged lists” as described above, we will re-sort only the top few pages when a new Story is created. (It doesn’t really matter if a Story appears on several pages, and we can have the client filter out the small number of duplicates as a user scrolls to new pages of stories.) For now, though, in our Rails implementation, a new snapshot causes us to update the time-decayed score for each snapshot in order, starting from what was highest scoring. Once a story’s score falls below a certain threshold, we stop updating.  Therefore, we’re only ever updating the scores of a few days’ worth of activity.
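Here is a minimal sketch of that lazy-decay bookkeeping (Python, with invented names; for clarity it uses a single half-life, whereas the actual rules described below use a two-part decay):

```python
HALF_LIFE_HOURS = 72.0  # e.g., a three-day half-life

class DecayedScore:
    """Store a score plus the time it was last updated; apply decay
    lazily, only when the score is read or bumped. Changing one item's
    score therefore never requires touching related items."""

    def __init__(self, score=0.0, now=0.0):
        self.score = score
        self.stamp = now  # hours; a real system would use wall-clock time

    def current(self, now):
        # Decay by half for every elapsed half-life, then re-stamp.
        elapsed = now - self.stamp
        self.score *= 0.5 ** (elapsed / HALF_LIFE_HOURS)
        self.stamp = now
        return self.score

    def bump(self, amount, now):
        self.current(now)   # decay up to the present, then add
        self.score += amount
        return self.score

place = DecayedScore(score=4.0, now=0.0)
print(place.current(now=72.0))  # 2.0 -- exactly one half-life later
```

Note that no background job ever sweeps over scores; decay is computed on demand from `(score, stamp)`.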

Here are our actual scoring rules at the time I write this. There’s every chance that the rules will be different by the time you read this, and like most crowd-curation sites on the Web, we don’t particularly plan to update the details publicly. But I do want to present this as a specific example of the kinds of things that affect the ordering.

  • We only show Stories in a score-ordered list. (The Feed.) However, we do score Users and Places, because their scores are used for snapshots. We do this based on the opt-outable “activity” reporting:
    • Moving in the last 10 seconds bumps the User’s score by 0.02.
    • Entering a place bumps the Place’s score by 0.2.
  • Snapshot Stories have an initial score that is the decayed average of the User’s and Place’s scores – but a minimum of 1.
  • Concurrency Stories get reset whenever anyone enters or leaves, to a value of nUsersRemaining/2 + 1.
  • All scores have a half-life of 3 days on the part of the score up to 2, and 2 hours for the portion over 2. Thus a flurry of activity might spike a user or place score for a few hours, and then settle into the “normal high” of 2.  This “anti-windup” behavior allows things to settle into normal pretty quickly, while still recognizing flash mob activity.

 

For example, under these rules, you need to move for about 3 minutes 20 seconds per day to keep your score nominally high (2.0).  More activity will help the snapshots you create during the activity, but only for a while, and snapshots the next day will only get the nominally high effect.

As another example of current rules, an event with 25 people will bump a place score by 5:

  • If it started at 2, it will back down to 4.5 in two hours, 2.5 in six hours, and back to 2 in 10 hours.
  • If it started at 0, it will be at 3.5 in two hours, and then decay roughly as above.
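Those numbers fall out of a small sketch of the two-part decay rule (Python; my own reconstruction from the rules above, not High Fidelity's code):

```python
BASE_CAP = 2.0          # the "normal high"
SLOW_HALF_LIFE = 72.0   # hours (3 days), for the part of the score up to 2
FAST_HALF_LIFE = 2.0    # hours, for the portion over 2

def decayed(score, hours):
    """Two-part 'anti-windup' decay: spikes above 2 fade quickly,
    while the baseline below 2 fades slowly."""
    base = min(score, BASE_CAP)
    spike = max(score - BASE_CAP, 0.0)
    return (base * 0.5 ** (hours / SLOW_HALF_LIFE)
            + spike * 0.5 ** (hours / FAST_HALF_LIFE))

# A 25-person event bumps a place score from 2 to 7:
print(round(decayed(7.0, 2), 1))   # 4.5 after two hours
print(round(decayed(7.0, 6), 1))   # 2.5 after six hours
print(round(decayed(7.0, 10), 1))  # 2.0 after ten hours
```

The spike of 5 halves every two hours while the base of 2 barely moves, which is exactly the "settle back to the normal high" behavior described.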

Search

We currently search by filtering on the client, starting from the 100 highest-scoring results that we receive from the server. Each typed word must appear exactly (except for case) within a word of the description or other metadata (such as the word ‘concurrency’ for a concurrency story). There is no autocorrect or autocomplete, nor pluralization or stemming. So, typing “stacks concurrency” will show only the concurrency story for the place named stacks. “howard.stearns snapshot” will show only snapshots taken by me.
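That matching rule can be sketched as follows (Python; I am assuming "appears within a word" means a case-insensitive substring of some metadata word, and the field names are invented):

```python
def matches(query, metadata_words):
    """Every typed word must appear exactly (ignoring case) within some
    metadata word -- no autocorrect, autocomplete, or stemming."""
    words = [w.lower() for w in metadata_words]
    return all(any(term in word for word in words)
               for term in query.lower().split())

stories = [
    {"words": ["stacks", "concurrency"]},
    {"words": ["howard.stearns", "snapshot", "playa"]},
]
hits = [s for s in stories if matches("stacks concurrency", s["words"])]
print(len(hits))  # 1 -- only the concurrency story for the place named stacks
```

Because there is no stemming, a query like "snapshots" would not match a story whose metadata only contains "snapshot".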

When the volume of data gets large enough, I expect we’ll add server-side searching, with tags.

Conclusion

We feel that by using the “wisdom of crowds” to score and order suggestions of what to do, we can:

  • Make it easy to find things you are interested in
  • Make it easy to share things you like
  • Allow you to affirm others’ activities and have yours affirmed
  • Connect people quickly
  • Create atomic assets that can be shared across various media

In doing so, we hope to create a better experience by bringing users to the great people and content that are already there, and encourage more great content development.

Discovery Part 1: The Issue

“What is there to do here?”

[Image: banner]

High Fidelity is a VR platform.

It’s pretty clear how to market a video game. It’s a little bit harder to connect users to a new VR chat room, conferencing app, or Learning Management System. We’re not making any of these, but rather a platform on which such apps can be made by third-party individuals and companies. Once someone has our browser-like “Interface” software, they can connect to any app or experience on our platform — if they know where to go.

The tech press is full of stories about The Chicken and Egg problem: adoption requires interesting content, but content development follows adoption. Verticals such as gaming make that problem a little more focused, but games still require massive up-front investment in technology, content, and marketing. We are instead betting on user-generated content in the early market, and, as with the Internet in general, we expect user-generated content to be the big story in mainstream adoption as well. This makes it that much more important to hook users up with interesting people, places, and things in-world. The early Web used human-curated directories for news, financial info, games, and so forth, but we’re not quite sure which categories are going to have the most interesting initial experiences. Neither are our users!

We want an easy way for users to find interesting people, places, and things to explore, which doesn’t require High Fidelity Inc. to pick and decide what’s hot. We also want an easy way for creators to let others know about their content, without having to go through us or a third party to market it.

Crowd Curation

One powerful model that has emerged for recognizing interest in user-generated content is crowd curation: a strong signal is produced by real user activity, and used to drive suggestions.

The signal can be explicit endorsement (likes, pins, tweets, and links) or implicit actions (achievements, or funnel actions such as visiting or building). The signals are weighted in favor of the users you value most: friends, strangers with lots of “karma”, or sites with lots of links to them.

There are various ways in which this information is then fed back to users. Facebook and Twitter provide a feed of interesting activity. Google offers suggestions as you type, and more suggestions after you press return. Amazon tells you at checkout that people who bought X also bought Y. However, the underlying crowd curation concept is roughly the same, and it has accelerated early growth (Twitter, Zynga), and ultimately provided enormous value to large communities and their users (Google, Facebook, eBay).

Of course, these systems don’t crawl High Fidelity virtual worlds, so we need to make our own crowd curation system, or find a way to expose aspects of our virtual world to the Web, or both. But more importantly, what do we want to share?

For Real

So, what should we suggest to our users? Ultimately, we want to suggest anything that will give a great experience: people to meet or catch up with, places to experience, and things from the marketplace to use wherever you are. But in these early days, your friends are not likely to have gear or to be online at any given moment. Places are under construction and without reputation. The marketplace is just forming.

Initially then, we’re starting with just two kinds of suggestions:

 

  1. Taking an in-world snapshot is something that any user can do from any place, and it puts participants onto the road to being content creators. The picture can be taken with the click of a button and requires no typing, which can be hard to do in an HMD. We automatically add the artist username and the place name as description. It often gives a pretty good idea of what’s happening, and we’ve arranged for clicking on the picture to transport you to the very place it was taken, facing the same way, so that you can experience it, too. Finally, it creates a nice visual artifact that you can take home and share outside of High Fidelity.

[Image: snapshot-card]

 

  2. Even without necessarily knowing another High Fidelity user, it’s definitely more fun to go to places where people are. Even though we’re just in beta, there are always a few places that have people gathered, but they’re not always the same places. It’s hard for a person to know where to look. So we’re making suggestions out of public places, ordered by the number of people currently in them. No need for anyone to do or make anything on this one, as we pick up concurrency numbers automatically from those domains that share this info. (Anyone who makes a domain can control access to it.)

[Image: concurrency-card]

 

These suggestions appear when you press the “Go To” button, which also brings up an address bar where you can directly enter a known person or place, or search for suggestions (just like a Web browser’s address bar). I can imagine someday offering information about related content in various situations, or a real time messaging and ticker widget for those who want to keep tabs on the latest happenings, but primarily we just want to allow people to “pull” suggestions when they are specifically looking for something to do.

In short, when a person presses the “Go To” button, they get a scrollable list of suggestions that give a visual sense of what has been happening recently, which offers people the chance to visit.

[Image: feed]

Suggestions are available to both anonymous and logged in users: we don’t want to require a login to use High Fidelity. However, we would like to offer personalized feeds in the future based on your (optional) login. We also don’t share anything that you have not explicitly shared, and such sharing links to your (self-selected, non-real-world) username.

Sharing and searching are not restricted to our system. Every suggestion has a public Web page that can be shared on Facebook or Twitter, or (soon) searched on Google and other search engines. Clicking the picture or link on that page in a browser brings you to that same place in-world if you have Interface installed, just as if you had clicked on the suggestion within Interface. We feel this will make it easier for content creators to promote their places, snapshots, and eventually, marketplace items. We hope to create a “virtuous circle”, in which search and sharing brings people in through external networks that are much bigger than ours, introduces them to more content, and makes it easy for them to further make and share.

Does It Matter?

In the two weeks before we introduced an early form of this, a bit more than a third of our beta users were within 10 meters of another user in-world on any given day (excluding planned gatherings). Then we introduced a prototype of the concurrency suggestions (“N people are hanging out in some place name”), and over the next two weeks, nearly half of each day’s users were near another at some point in their day. Since then, we’ve done other things to increase average concurrency, and we’re now near 100%.

I don’t have good historical data on snapshots, and the new data is quite volatile. Our private alpha “random image thread” averaged a healthy five entries a day for more than two years, including entries with no pictures and entries with multiple pictures. Now, on days when something interesting is happening, we get 20 or 30 explicitly shared pictures, with most days generating three to eight.

Next: How we do that

Dude – Who brought the ‘script’s to the party?

[Image: party]

This week, some of our early adopters got together for a party in virtual reality. One amazing thing is how High Fidelity mixes artist-created animations with sensor-driven head-and-hand motion. This is done automatically, without the user having to switch between modes. Notice the fellow in the skipper’s cap walking. His body, and particularly his legs, are being driven by an artist-created animation, which in turn is being automatically selected and paced to match either his real body’s physical translation, or game-controller-like inputs. Meanwhile, his head and arms are being driven by the relative position and rotation of his HMD and the controller sticks in his hands(*).

So, dancing is allowed.

But the system is also open to realtime customization by the participants. Some of the folks at the party spontaneously created drinks and hats and such during the party and brought them in-world for everyone to use. A speaker in the virtual room was held by a tiny fairy that lip-sync’d to the music. One person brought a script that altered gravity, allowing people to dance on the ceiling.

[Image: dancing]


*: Alas, if the user is physically seated at a desk, they tend to hold their hand controllers out in front of them. You can see that with the purple-haired avatar.

FCC Tells You About Your Phone Transition — Y’all Might Want To Pay Attention.

I’ve been writing about the “shut down of the phone system” (and the shift to a new one) since 2012. The FCC adopted a final set of rules to govern how this process will work last July. Because this is a big deal, and because the telecoms are likely to try to move ahead on this quickly, the FCC is having an educational event on Monday, September 26. You can find the agenda here.

 

For communities, this may seem a long way off. But I feel I really need to evangelize to people here the difference between a process that is done right and a royal unholy screw up that brings down critical communication services. This is not something ILECs can just do by themselves without working with the community — even where they want to just roll in and get the work done. Doing this right, and without triggering a massive local dust-up and push-back a la Fire Island, is going to take serious coordinated effort and consultation between the phone companies and the local communities.

 

Yes, astoundingly, this is one of those times when everyone (at least at the beginning), has incentive to come to the table and at least try to work together. No, it’s not going to be all happy dances and unicorns and rainbows. Companies still want to avoid spending money, local residents like their current system that they understand just fine, and local governments are going to be wondering how the heck they pay for replacement equipment and services. But the FCC has put together a reasonable framework to push parties to resolve these issues with enough oversight to keep any player that participates in good faith from getting squashed or stalled indefinitely.

 

So, all you folks who might want to get in on this — show up. You can either be there in person or watch the livestream. Monday, September 26, between 1-2 p.m. For the agenda, click here.

 

Stay tuned . . .

Cleveland and the Return Of Broadband Redlining.

I am the last person to deny anyone a good snarky gloat. So while I don’t entirely agree with AT&T’s policy blog post taking a jab at reports of Google Fiber stumbling in deployment, I don’t deny they’re entitled to a good snarky blog post. (Google, I point out, denies any disappointment or plans to slow down.) As AT&T put it: “Broadband investment is not for the feint hearted.”

 

But the irony faeries love to make sport. The following week the National Digital Inclusion Alliance (NDIA) had a blog post of their own. Using the publicly available data from the FCC’s Form 477 Report, NDIA showed that in Cleveland’s poorest neighborhoods (which are also predominantly African American), AT&T does not offer wireline broadband better than 1.5 mbps DSL – about the same speed and quality as when they first deployed DSL in the neighborhood. This contrasts with AT&T’s announcement last month that it will now make its gigabit broadband service available in downtown Cleveland and certain other neighborhoods.

 

Put more clearly, if you live in the right neighborhood in Cleveland, AT&T will offer you broadband access literally 1,000 times faster than what is available in other neighborhoods in Cleveland. Unsurprisingly for anyone familiar with the history of redlining, the neighborhoods with crappy broadband availability are primarily poor and primarily African American. Mind you, I don’t think AT&T is deliberately trying to be racist about this. They are participating in the HUD program to bring broadband to low-income housing, for example.

 

There are two important, but rather different issues here — one immediate to AT&T, one much more broadly with regard to policy. NDIA created the maps to demonstrate that a significant number of people who qualify for the $5 broadband for those on SNAP support that AT&T committed to provide as a condition of its acquisition of DIRECTV can’t get it because the advertised broadband in their neighborhood is soooo crappy that they fall outside the merger condition (the merger requires AT&T to make it available in areas where they advertise availability of 3 mbps). Based on this article from CNN Money, it looks like AT&T is doing the smart thing and voluntarily offering the discount to those on SNAP who don’t have access to even 3 mbps AT&T DSL.

 

The more important issue is the return of redlining on a massive scale. Thanks to improvements the FCC has made over the years in the annual mandatory broadband provider reporting form (Form 477), we can now construct maps like this for neighborhoods all over the country, and not just from AT&T. As I argued repeatedly when telcos, cable cos and Silicon Valley joined forces to enact “franchise reform” deregulation in 2005-07 that eliminated pre-existing anti-redlining requirements – profit maximizing firms are gonna act to maximize profit. They are not going to spend money upgrading facilities if they don’t consider it a good investment.
Again, I want to make clear that there is nothing intrinsically bad or good about AT&T. Getting mad at companies for behaving in highly predictable ways based on market incentives is like getting mad at cats for eating birds in your backyard. And while I have no doubt we will see the usual deflections that range from “but Google-“ to “mobile gives these neighborhoods what they need” (although has anyone done any actual, systematic surveys of whether we have sufficient towers and backhaul in these neighborhoods to provide speed and quality comparable to VDSL or cable?) to “just wait for 5G,” the digital inequality continues. I humbly suggest that, after 10 years of waiting and blaming others, perhaps we need a new policy approach.
More below . . .


Feeding Content

Our latest High Fidelity Beta release builds on June’s proof of concept, which suggested three visitable places above the address bar. Now we’re extending that with a snapshot feed. This should assist people in finding new and exciting content, and seeing what’s going on across public domains.
Just The Basics:

I. There is now a snapshot button in the toolbar: It works in HMD, and removes all HUD UI elements from the fixed aspect-ratio picture. If you are logged in to a shareable place, you also get an option to share the snapshot to a public feed. (Try doing View->Mirror and taking a selfie!)

[Screenshot: snapshot review]

II. The “Go To” address bar now offers a scrollable set of suggestions that can be places or snapshots: The two buttons to the right of the address bar switch between the two sets, and typing filters them. Clicking on a place takes you to that named place, but clicking on a snapshot opens another window with more info. You can then visit the place where that snapshot was taken by clicking on the picture, explore the other snapshots taken by that person or in that place, or share the picture to Facebook if you choose. If your friends follow your share to the picture on the Web, they can click on the picture to jump to the same place – if they have Interface installed.

[Screenshot: snapshot feed]

(None of this has anything to do with our old Alpha Forums picture feed, which isn’t public or scalable, nor are there changes to the old control-s behavior.)
Where We’re Headed:

There’s a lot more we can do with this, but we wanted to release what we have now and find out what’s important to you.

  1. We’re thinking about other activity and media you might like to share and see in the feed, such as joining a group or downloading from the marketplace.
  2. How might we use the “wisdom of crowds” to score and order the suggestions, based on real activity that people find useful?
  3. The community is quite small right now, and often your real-world or social media friends do not have HMDs yet. So for now there’s just one shared public feed of snapshots. As we grow, we’ll be looking at scaling our infrastructure, and with it, more personalized sharing options.
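To make the “wisdom of crowds” question concrete, one purely illustrative way to score and order suggestions from real activity is to reward visits and re-shares while decaying older posts. Everything here — the field names, the weights, the decay — is a hypothetical sketch, not High Fidelity’s actual design:

```python
# Illustrative crowd-activity ranking for feed suggestions.
# All field names and weights are hypothetical assumptions.

def score(suggestion, now_hours):
    visits = suggestion["visits"]   # times people visited via this suggestion
    shares = suggestion["shares"]   # times the snapshot was re-shared
    age_hours = now_hours - suggestion["posted_hours"]
    # Reward activity (shares weighted double), and divide by an
    # age factor so the feed favors fresh content.
    return (visits + 2.0 * shares) / (1.0 + age_hours / 24.0)

def rank(suggestions, now_hours):
    """Order suggestions best-first by crowd-activity score."""
    return sorted(suggestions, key=lambda s: score(s, now_hours), reverse=True)
```

The interesting design questions are exactly the ones the list above raises: which activities count as signal, and how fast old content should fade.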

As we move forward:

  • We don’t want to require a login to use High Fidelity or to enjoy the suggestions made by the feed. We do require a login to share, and we’d like to offer personalized feeds in the future based on your (optional) login.
  • We don’t want to require connecting your High Fidelity account to any social media, but we do want to allow you to do so.
  • We don’t want to share anything without you telling us that it is ok to do so.

Can Obama Stop The Stalling On Clinton Appointees? Or: “It’s Raining Progressives, Hallelujah!”

As we end 2016, we have an unusually large number of vacancies in both the executive branch and the judiciary.  As anyone not living under a rock knows, that’s no accident. Getting Obama appointments approved by the Senate was always a hard slog, and became virtually impossible after the Republicans took over the Senate in 2015.  This doesn’t merely impact the waning days of the Obama Administration. If Clinton wins the White House, it means that the Administration will start with a large number of important holes. Even if the Democrats also retake the Senate, it will take months to bring the Executive branch up to functioning, never mind the judiciary. If Clinton wins and Republicans keep the Senate, we are looking at continuing gridlock and dysfunction until at least 2018 and possibly beyond.
In my own little neck of the policy woods, this plays out over the confirmation of Federal Communications Commissioner Jessica Rosenworcel (D). Rosenworcel’s term expired in 2015. Under 47 U.S.C. 154(c), Rosenworcel can serve until the end of this session of Congress. That ends no later than Noon, January 3, 2017, according to the 20th Amendment (whether it ends before that, when Congress adjourns its legislative session but remains in pro forma session, is something we’ll debate later). Assuming Rosenworcel does not get a reconfirmation vote (although I remind everyone that Commissioner Jonathan Adelstein was in a similar situation in 2004 and he got confirmed in a lame duck session), that would drop the Commission down to 2-2 until such time as the President (whoever he or she will be) manages to get a replacement nominated and confirmed by the Senate. Given the current Commission, this would make it extremely difficult to get anything done — potentially for months following the election. It would also force Chairman Tom Wheeler to remain on the Commission (whether he wants to or not) for some time.
From the Republican perspective, however, this has advantages. If Clinton wins, it means that the FCC is stuck in neutral for weeks, possibly months. Since Republicans generally do not like Wheeler’s policies, that’s just fine. By contrast, if Trump wins, Republicans will have an immediate majority if Wheeler follows the usual custom and steps down at Noon January 20. So even though Republicans promised to confirm Rosenworcel back in 2014 when the Ds allowed Commissioner Mike O’Rielly (R) to get his reconfirmation vote, they have plenty of reasons to break their promise and hold Rosenworcel up anyway. Not that Senate Republicans have anything against Rosenworcel, mind you. It’s just (dysfunctional) business.
Again, it’s important to remind everyone who obsesses about communications that this is not unique to Rosenworcel. From Merrick Garland (remember him?) on down, we have tons of vacancies just sitting there without even the virtue of a bad excuse beyond “well, we’d rather the government not function if someone on the other side is running it.” While I keep hoping this will change, I don’t expect either political party to have a change of heart around this following the next election.
Fortunately, I have a plan so cunning you can stick a tail on it and call it a weasel. On the plus side, if I can get the President to go along with it, it will not only keep things working between Noon on January 3 and Noon on January 20, it will also give the Republicans incredible incentive to move Clinton’s nominations as quickly as possible. On the downside, it’s not entirely clear this is Constitutional. I think it is, based on the scanty available case law (mostly NLRB v. Noel Canning). But, as with test cases generally, I can’t guarantee it. Still, like the idea of preventing a U.S. default on its debt with a trillion dollar platinum coin, it can’t hurt to think about it.
For the details of what I call “Operation Midnight At Noon” (throwback to the Midnight Judges), see below . . .
