Howard Stearns works at High Fidelity, Inc., creating the metaverse. Mr. Stearns has a quarter century of experience in systems engineering, applications consulting, and management of advanced software technologies. He was the technical lead of the University of Wisconsin's Croquet project, an ambitious effort convened by computing pioneer Alan Kay to transform collaboration through 3D graphics and real-time, persistent shared spaces. The CAD integration products Mr. Stearns created for expert system pioneer ICAD set the market standard through IPO and acquisition by Oracle. The embedded systems he wrote helped transform the industrial diamond market. In the early 2000s, Mr. Stearns was named Technology Strategist for Curl, the only startup founded by WWW pioneer Tim Berners-Lee. An expert on programming languages and operating systems, Mr. Stearns created the Eclipse commercial Common Lisp programming implementation. Mr. Stearns has two degrees from M.I.T., and has directed family businesses in early childhood education and publishing.

Brothers

I’ve admitted that I didn’t immediately get the point of the One Laptop Per Child project, but now I’m very excited about the ideas behind this non-profit effort to build a $100 mesh-network computer to be owned by children in the developing world. This essay captures a lot of what I feel and wonder about it, including some fears of dystopian unexpected consequences.

Continue reading

Evocative Performance vs. Information Transmission

An interesting thing happens when a medium has enough bandwidth to be a “rich medium.” It crosses a threshold from merely being an informational medium to being an evocative medium.

Consider radio, which was initially used to carry Morse code over the wireless tracts between ships at sea and shore. The entire communications payload of a message could be perfectly preserved in notating the discrete dots and dashes. Like digital media, the informational content was completely preserved regardless of whether it was carried by radio, telegraph, or paper. But when radio started carrying voice, there was communication payload that could not be completely preserved in other media. The human voice conveys more subtlety than mere words.

Thus far, the Internet has been mostly informational. We do use it to transmit individual sound and video presentational work, but the Internet platforms in these situations are merely the road on which these travel rather than the medium itself. (My kids say they are listening to a song or watching a video, rather than that they are using the Internet or that they are on-line. The medium is the music and video.)

So, what happens when an Internet platform supports voice and video, both live and prerecorded, and allows individual works to be combined and recombined and annotated and added to, and for the whole process to be observed? Do “sites” become evocative? Do presentations on them become a performance art? Do we lose veracity or perspicuity as the focus shifts to how things are said rather than what is said? Here’s a radio performance musing on some of this and more.

I think maybe this is the point where the medium becomes the message. If a technology doesn’t matter because everything is preserved in other forms, then the technology isn’t really a distinct medium in McLuhan’s sense.

What's the Matter?

My daughter has been studying “matter” in science. This is the unit that discusses physical changes between phases (arrangements of molecules) versus chemical changes between compounds (arrangements of atoms). It also discusses electrons, protons, and neutrons.

She wasn’t getting it. It was all just so many meaningless words, and symbolic coding isn’t her forte. Not everyone learns the same way, and everyone can benefit from working with the same material presented in different ways. In dealing with this, it is necessary to use not just different words, but entirely different inputs, which are processed by different parts of the brain. My daughter thinks very geometrically, so we were able to construct a series of visual scenes portraying the material. Napoleon is credited with saying, “A picture is worth a thousand words,” and Minard’s famous map of Napoleon’s disastrous invasion of Russia shows that a diagram can sometimes be worth a thousand pictures.

In the phase-change scene, we drew a bucket of water in the middle, a tea-pot on one side, and a tray of ice cubes on the other. We drew labeled arrows from solid to liquid and liquid to gas, and back again. (“Melting,” “Evaporation,” “Condensation,” and “Freezing.”)

We also drew the classic old 1950’s nuclear energy picture, with angry-faced (negative) electrons in an elliptical orbit around smiling (positive) protons and neutral neutrons.

It worked.

Continue reading

The Shared Experience

<%image(20061217-Shared_TV.png|588|407|Shared Experience prototype: two people and TV feed.)%>

This is a picture of a three-way iChat. My friend Preston Austin travels quite a bit with his business. Here we see Preston in the bottom display, cleaning bicycle parts in Chapel Hill, North Carolina. His wife is folding laundry in Madison, Wisconsin. A third computer has a TV tuner attached, providing a live feed from “Sex and the City.” Preston and his wife have watched movies “together” this way several times. He reports that the experience allowed for much richer interaction than long video calls alone, and certainly better than watching TV separately.

Preston has been emphasizing to me the value of the shared experience almost since the moment I met him. When he first told me about shared virtual movie theaters, I didn’t really get it. But now I see my kids gathered around the TV or the computer running a DVD, and talking to each other about what they’re seeing. Or they’re on the phone discussing the same TV show that they and their friends are separately and simultaneously watching.

I think the principle here is that every(*) experience can be enriched by sharing it. Regardless of where the solitary activity is in the range from passive to active, the activity becomes more active when shared. This has value for education, training, and entertainment.

Continue reading

Making a Living in Languages (Redux) part 9: How Do You Make Money?

Last time: “Killer Apps,” in which I claimed that it was possible to engineer an application that had good characteristics for success within its chosen market, rather than just having to count on “build it and they will come.”
Now: What are the ways that revenue can be produced from a Killer App on an open-source platform?

[This is an excerpt from a Lisp conference talk I gave in 2002.]

Continue reading

Making a Living in Languages (Redux) part 8: Killer Apps

Last time: “Give ‘Em What They Want,” in which I said that having a desirable application “from the beginning” is necessary to promote a platform.
Now: Sounds good, but how do we go about creating such a scenario? We engineer it!

[This is an excerpt from a Lisp conference talk I gave in 2002.]

Continue reading

Making a Living in Languages (Redux) part 7: Give ‘Em What They Want

Last time: “Can’t Make a Killing From Platforms Without Killing the Community,” in which I said that those who develop a platform rarely recoup their cost directly, and so they might look to reduce their cost through open-source efforts.
Now: How do you create demand for a platform?

[This is an excerpt from a Lisp conference talk I gave in 2002.]

Continue reading

Making a Living in Languages (Redux) part 6: Can’t Make a Killing From Platforms Without Killing the Community

Last time: “Platforms – The New Application-Centric Product Positioning,” in which I encouraged thinking about platform communities rather than language technologies or standards.
Now: How does a single vendor create a platform community?

[This is an excerpt from a Lisp conference talk I gave in 2002.]

Continue reading

Making a Living in Languages (Redux) part 5: Platforms – The New Application-Centric Product Positioning

Last time: “The Old Language-Centric Product Categories,” in which I said that the old model for vendors was to specialize in one of language engines, libraries, or developer’s tools.
Now: What would happen if we generalized this approach?

[This is an excerpt from a Lisp conference talk I gave in 2002.]

Continue reading