More and more, I’m feeling like a volunteer for the “Mark Sanford in 2012 Committee” finding out what “hiking the Appalachian Trail” really means. I have been a huge supporter of this program from the beginning. Even though I have had some concerns along the way, I have tried to keep the faith.
But the more I see about how this will get implemented, and the more deeply I delve into the details, the more I worry that a potentially great program, one capable of fundamentally altering our broadband future for the better, is turning into something so ridiculously screwed up that we will actually lose ground on both future funding and future policy.
The thing that finally broke my willingness to believe was this eyewitness report I got from my brother and business partner, Shmuel Feld, who attended the first NOFA Workshop held Tuesday, July 7 here in DC. A representative from RUS was explaining how applicants must fully document “unserved” and “underserved” at the census block level — but without access to any carrier data because carriers regard this as proprietary. Then, assuming the application survives to the NTIA/RUS “due diligence” round, the agency will invite broadband access providers in the area to submit confidential information to demonstrate that the area designated by the Applicant is not underserved or unserved. The applicant will have no opportunity to rebut any evidence submitted against the Application. From my brother’s report, this prompted the following exchange:
From Audience: If we, the people, do not know where the (BB) structures are or what the penetration numbers are and the big companies are not sharing these numbers or can deny them in the second round (when it is convenient) under the due diligence investigation, then how will we find out all of the information necessary for the application?
(Direct quote of RUS guy): Well that’s quite a challenge, isn’t it?
The RUS guy’s next line was a suggestion along the lines of “boots on the ground and canvassing a county,” but I could not hear him clearly because of the (I am serious) laughter.
OK, let me explain something to anyone from RUS or NTIA reading this. Giving Applicants an impossible task is not a “challenge.” It is a recipe for failure and a sign that you — NTIA and RUS — have screwed up big time.
I explore what I think is happening, and how it might still get fixed in time to save both the broadband stimulus package and the future of BB policy for the rest of the Obama Administration, below . . . .
So Why So Glum?
Bluntly, the more the NTIA/RUS folk talk, especially in response to questions about how the ranking criteria will be implemented, the clearer it becomes that there is a phenomenal disconnect between what officials say they want and what the NOFA ranking criteria and documentary requirements actually reward. Officials and the text of the NOFA still speak earnestly and sincerely about bringing in new providers and well-designed, dynamic programs that combine provision of network services, computers, training, and community involvement to genuinely stimulate new ways of deploying and employing broadband. But if you dig into the ranking criteria and the documentation requirements, what the system actually rewards is the most boring, conservative, stove-piped traditional applications by the usual suspects.
Nothing is so emblematic of how thoroughly wrong this has gone as the little vignette I reported above. If you want data from local providers, why have them come in AFTER the applications come in? It smacks of the “I’m captured and I don’t even know it” mentality of too many staffers I’ve encountered over the years. If you wanted to ensure accuracy and help applicants, you would require providers to proactively submit data before people filed applications — both to make needed info available and to spare applicants the expense and effort of putting in an application that does not meet the criteria. The only reason to do it this way is if you think you need to be “fair” to incumbents and “give them a chance to respond,” even though this is supposed to be about getting the best programs, not about protecting incumbents. But just as the mainstream media has come to view “fair” as meaning “let the two most divergent views contradict each other without making any effort to see what is actually true,” too many staff in DC now view “fair” as “give incumbents every opportunity to protect themselves and respond, even if the entire purpose of the program is to fundamentally alter the existing industry dynamic.”
So How Did This Happen?
There’s an episode of the old MASH TV series in which Father Mulcahy spends much time growing corn on the cob, dreaming about how wonderful it will be to have fresh corn instead of the awful creamed corn served by the army. The day at last arrives, and Father Mulcahy eagerly goes to the head of the line to get his fresh corn on the cob. The cook gives him a heaping spoonful of fresh creamed corn.
Mulcahy is outraged: “You creamed my corn! You . . . ninny!”
Pvt Igor Straminsky: “Fine! Next time you can eat it off the cob for all I care!” (stamps away angrily in a huff)
Alas, the staff charged with actually implementing all the neat-o policy stuff from the statute, as worked out by the political appointees, have, through lack of imagination, administrative habit, narrowness of vision, and the other things that make it so hard to get stuff done in DC even when the stars properly align on policy, done unto us as the fictional cook in MASH did unto Father Mulcahy. Our corn has gotten thoroughly creamed.
I’ll insert here that this isn’t true of all staff everywhere, of course, and it’s not (necessarily) a sign that they are beholden to their corporate masters a la public choice theory. It has much more to do with the fact that change in any environment is hard, and for the last 20 years (or even longer) the Federal civil service has had it pounded into it how to run things and construct things. Anyone inclined to do things otherwise has long since departed. To make matters worse, the open disdain of the Bush Administration for federal employees (a natural corollary to the notion that all things private sector are perfect and all things done by government are screwed up) decimated the ranks of the federal bureaucracy and put in place new barriers and red tape to actually doing anything. And now we dump on them more money than has ever been in a federal program before, push them to meet politically intense deadlines on a massively ambitious program, and expect them to suddenly become dynamic change makers indifferent to the industry incumbents they have been trained and instructed to think of as “clients.” No surprise we got creamed corn.
That said, let us consider the most objectionable features of the NOFA, the ones that produce the most profound disconnects. Examination demonstrates that they fall rather neatly into some traditional staff mindsets that run so deep the career folks who survived 8 years of the Bush Administration don’t even realize how ingrained their programming runs. They fall into two essential modes of thinking: the conventional way of doing business and administrative convenience. In this case, the habits reinforce each other to produce a result completely at odds with the policy set by the statute and the political appointees.
The Underserved/Unserved Criteria and the Documentation Requirements
These reflect the obsession of the last forty or so years of federal programs to ensure that the money does not go to those who do not “deserve” it and to avoid “competition with the private sector,” because obviously the private sector can provide these services better and it’s not fair for the bad old federal government to compete with the private sector because it would drive the private sector providers out — despite being so bad and incompetent. No, don’t think about it too hard — it just makes your head hurt.
So despite the fact that we have five equal criteria, the NOFA requires that any “infrastructure grant” must serve unserved or underserved areas. But it’s not enough to serve the unserved and underserved as well as do other neat stuff. To ensure that money doesn’t go to projects that would benefit those who “don’t need help,” the criteria for unserved and underserved are rigidly defined and must be thoroughly documented at a granular level — with incumbents given the right to show that the application would serve those who “don’t deserve it” and thus to “protect themselves” from the government “unfairly picking winners and losers.”
Seen in that light, the definitions and documentation requirements — and giving incumbents a “reply right” — make perfect sense. The fact that (a) that is the exact opposite of what the statute intended, and (b) it makes it impossible to achieve the policies and goals given in the text of the NOFA (you will notice the “reply right” in the “due diligence” phase is buried somewhere around page 70) is perhaps unfortunate, but ultimately acceptable as a cost of fulfilling the overarching purpose of federal program implementation as pounded into those implementing the program.
Artificial Stovepipes
The next thing that cuts against any sort of innovative mixed program that does everything the NOFA text and the policy folks say they want to see is the artificial division between “public computing,” “innovative adoption programs,” and “infrastructure” — made worse by the artificial division between “middle mile” and “last mile.” Heck, we don’t even have a real definition for “middle mile.” It basically means “anything not last mile.” So what gives? Wouldn’t the strongest application, given the goals of the statute and the NOFA, have elements of all of these: a project that combined public computing, innovative adoption, last mile connectivity, and some backhaul/middle mile construction for sustainability?
The answer lies in administrative convenience and faulty goal protection. On the administrative convenience side, the statute requires no less than $200 million for public computing and no less than $250 million for innovative adoption programs. Well, if you are administering the program, wouldn’t it be easiest to ensure compliance by making these entirely separate pots of money? Never mind that the stated policy is to encourage projects that cut across artificial stovepipes. The natural thing, the conventional thing, and the easiest thing is to have the applicants sort themselves into neat little stovepipes for accounting purposes.
Then there’s goal protection; specifically, the obsession noted above with making sure that only the “worthy” (under our rigorous criteria for unserved and underserved) get “federal aid.” (That this is stimulus, not a federal aid program, and therefore has an entire set of design issues utterly different from those of an aid program, is not something those charged with program design and implementation seem to understand.) So the NOFA creates an artificial (and, under the statute, unnecessary) distinction between last mile and middle mile — apparently for the sole purpose of making sure folks “already served by the private sector” don’t get served in a way that could compete with the private sector. Pulling fiber to a library and using that as a hub for last-mile wireless would “compete” with local DSL and cable providers even if it also provided affordable broadband to others. So it’s a “last mile” project and must serve only the unserved or underserved. But a middle mile project that touches a single unserved or underserved area is no competition to local last mile providers — and is likely to be built by a local private sector company that needs to expand its own backhaul — so that’s OK.
The Role of the States
I wrote a while ago about why I thought giving states the power to rank projects was a bad idea. It absolutely kills any projects on Native American reservations (because no state is going to rank a local tribe’s needs over the needs of its own citizens), ensures that incumbents will once again be able to divert money away from potentially competing projects, and makes projects that cover multiple states just about impossible. Again, it rewards the most conventional projects and drives out the most innovative and cross-cutting ones that the actual NOFA text and the policy folks say they want.
Cumulative Impact
The cumulative impact of all this doesn’t show up until you actually put the pieces together. Like the market dynamic that produced the current financial meltdown, each decision at each level was utterly rational for the individual actor. The harm it did to the overall goals of the program was not apparent, particularly to folks at the staff level inculcated in particular ways of thinking and folks at the policy level who had never handled implementation before and did not understand how the implementation problems would play out.
Still, you may ask, why didn’t one of these really smart people at the top put all the pieces together and figure it all out? Again, people on the outside need to consider how massive and complex this project is. Herding cats is a breeze compared to getting all the working parts here to move together smoothly toward the common goal. This is more like trying to get elephants and tigers to do the macarena together while hecklers on the sideline keep chanting waltz music. Now add to this that the people who should have had the responsibility for this, head of NTIA Larry Strickling and head of RUS Jonathan Adelstein, were not even confirmed before this project was essentially done.
For those who would still insist that some individual should somehow have managed to put it all together in time, I relate the following tale: Once, there was an FCC Chairman who took his responsibility as head of the FCC very seriously. He knew that every decision that came out of the FCC, from the most routine staff level action to the simplest motion from General Counsel’s office would be his responsibility. As a result, he insisted on reading and approving everything. He insisted on purging any staff he did not personally know and trust and replacing them with reliable people he could trust to carry out his directions. After all, as Chair of the FCC, he was responsible for everything — and by God he was going to take that responsibility seriously and not excuse things by delegating to staff and claiming ignorance.
And the name of that FCC Chairman? Kevin Martin. And, of course, he is remembered as one of the most effective and beloved FCC Chairman in history — because he exerted exactly the sort of iron control and ultimate review that folks assume someone should have done here.
OK, So Now We Know What Happened. What Are The Consequences?
The problem is if this bombs because people don’t apply and we get more of the ‘same old, same old’ from the usual suspects we are screwed on many levels going forward. In the first place, every single incumbent will argue that the “failure” of the program demonstrates (a) that government spending in this area (except as subsidies to incumbents) is a waste and a terrible idea, and (b) that it is all the fault of those horrible net neutrality and interconnection conditions. Like the failure of the Clinton healthcare reform initiative in the mid-1990s, a catastrophic failure of this program may so thoroughly poison the debate over whether or not to have a national broadband industrial policy and the impact of network neutrality and interconnection that it will be another decade before we can make substantive progress.
That’s a worst case scenario, I readily admit. But it does indicate how high I think the stakes are here. While I think we are still likely to get some decent applications, the range of possible screw ups and the ability to misrepresent the results here as relevant to other aspects of the broadband policy debate make the risks substantial.
Craig Settles has written a good piece on whether folks should try to apply or not. As you can see, rather than a program that new applicants with innovative programs can eagerly embrace, it poses some hard questions. A number of urban folk I’ve talked to are extremely discouraged and feel they wasted significant resources based on encouraging language in the statute and from policy folks that the criteria and documentation requirements utterly undercut. OTOH, the workshops keep playing to full houses, so we may yet see some positive response.
What Can Be Done To Fix Things?
First, of course, we need to convince the folks in charge of BTOP and BIP (the RUS program) there is a problem. Happily, I am not the only one expressing bitter disappointment. I think folks on the inside are getting a clue.
To really fix things, they should hire me to become the Philosopher King with an unlimited portfolio. My rates are eminently reasonable. But, to quote Arlo Guthrie’s Alice’s Restaurant: “It wasn’t likely and we didn’t expect it.” More seriously, I have volunteered to do the peer review (the fact that it is unpaid does not offend me, as it has some others; this is fairly common for federal grant programs). We’ll see if they bite.
Unfortunately, there is a real limit to what NTIA or RUS can do in the short term. You can’t just pull back a NOFA and start over; it’s a huge process. Worse, the window for submitting applications (those for less than a million dollars, on paper rather than electronically) has already opened. I’m not sure you can issue an official clarification at this point, given that people have at least theoretically started to submit applications based on the published, unclarified criteria. Besides that, the ferociously fast deadlines are going to make it very difficult for potential applicants who gave up after the NOFA got published to shift gears.
What NTIA can do is get the second NOFA out for public comment ASAP. At a minimum, it should move to issue a second NOI on how to do the next NOFA, even if it doesn’t have proposed text of its own yet, so that upset folks have a formal way to file constructive comments and compile a real record on why they did or did not apply in the first round. This will not only produce a NOFA much better calculated to attract the kind of applications the folks running the program keep saying they actually want, it will create a real record about what prevented potential applicants from applying, so that we can make our policy pronouncements on the basis of actual data rather than observed effects and guesses. Heck, I might even be wrong, which is something I would want to know, because creating bad policy based on ignorant guesswork sucks rocks.
Bottom line: I agree that the outcome of the first round NOFA is very disappointing. But I am unwilling to join the mob scene at the Tent of Meeting to demand why Obama/Moses dragged us into the desert and not to the Land of Milk and Broadband. I also think this is fixable, at least for round two. And I also think we had better fix it, or we will be living with the negative outcome for a long time to come.
Stay tuned . . . .
I hear echoes of the same telco-captured disasters of regulation I’ve observed since I graduated university and which you yourself have chronicled for some time. This isn’t the first time this exact sequence of events has transpired in order to improve the lot of the “underserved”. Universal service anyone? After a decade of graft, barriers to competition, and wasted cycles of “let’s help only those who need it!”, how do low-income consumers get their phone service now? Pre-paid cellular, which has been blessedly free of the USF nonsense since its inception as a product.
When does this sort of regulatory dysfunction prompt the conclusion that the entire project of regulating telecommunications is a flawed one, the only fix for which is to scrap it entirely? We create scarcity where none naturally exists, and congratulate ourselves when we somewhat fix some of the resulting problems. When consumers decide they want something else, we regulate that out of existence. When Congress throws money over the wall for any purpose, the agencies blindly pass it out to incumbents. Any change that comes down the pike is invariably implemented so as to kill small businesses.
We need to stop judging our efforts on intentions and start judging on results. The incumbents (telcos, cablecos, broadcasters, etc.) are far more skilled at regulatory jujitsu than we are or their more customer-focused competition was. The only way to win this game is to stop playing. We’ve had an FCC for 75 years now, and it has been an obvious drag on innovation that entire time. Let’s try a decade without the FCC and its related agencies in the various executive departments. It’s not like any consumers will miss them!
Jess:
Some things fail, some succeed. Part 15 has worked out quite well. The cable program access rules have done a lot of good, even if they could use improvement. Number portability works pretty well.
By contrast, we also have a fair number of cases where doing nothing has resulted in extremely bad and anticompetitive results. The deregulation of the special access market is, I believe, an example of such a case.
I’m all for folks trying to find a way to solve competition and consumer protection problems without recourse to regulation. If it works, great. I’ll be out there applauding and enjoying my market-based competition. It’s what I hope will happen by expanding unlicensed spectrum access. But until that happy day arrives, we struggle with the tools we have and keep pushing to maximize the chances of good results while reducing the chances of bad results. I see many serious problems with the NOFA. I am hopeful we will get them fixed and end up with a better world as a result.
To do this, we must learn from the past and do better in the next round — including trying to shift the game away from where incumbents have the advantage. But despair, simply giving up on regulatory processes because we frequently suffer setbacks, is not a winning strategy. Abandon regulatory advocacy and oversight and you guarantee the incumbents will take it over and use it to keep out competitors.
Harold, this is very discouraging.
Like many others, I had high hopes that a multi-billion stimulus package would break open the market for broadband in dramatic ways.
The idea of requiring applicants to collect (proprietary!) census-block level data and then giving incumbents a 20-day right of reply turns “underserved” into a candidate for Orwellism of the year.
But I’m equally disappointed in Jess’s comment, as well. The knee-jerk response to regulatory capture is always to say, see regulation doesn’t work, best leave all decisions to the omniscient, invisible hand of the marketplace. I thank you for the gracious, informative response to the commenter, but I don’t understand how anyone could argue for a non-regulated telecom marketplace after the recent disasters emerging from the unregulated financial marketplace.
The forces that drive regulatory capture are as powerful and as predictable as gravity. But the iron law of gravity doesn’t mean that the only way to travel from Seattle to Chicago is by stagecoach. Airplanes fly because they were carefully designed to overcome gravity.
We need a broadband policy that’s been designed from the ground up to overcome precisely the kind of institutional and mental roadblocks you’ve outlined. It may be too late for this round of stimulus funding, but documenting the mistakes, pointing out the alternatives, and pushing for better implementation, both in the next round of funding and in the long-run creation of future broadband and media policy, must be an urgent priority.