Wheel Reinvention and Geek Culture

In mainstream North American society, “reinventing the wheel” (investing effort in something that has already been done) is often seen as a net negative. “Don’t waste your time.” “It’s all been done.” “No good can come of it.”

In geek culture, the mainstream stigma on wheel reinvention still has some influence. But many people do spend time revisiting problems which have already been solved. In this sense, geek culture is close to scientific culture. Not everything you do is completely new. You need to attempt things several times to make sure there isn’t something you missed. Like scientists, geeks (especially the engineering types) need to redo what others have done before them so they can “evolve.” Geeks are typically more impatient in their quest for “progress” than most scientists working in basic research, but the connection is there.

Reasons for wheel reinvention abound. The need to practice before you can perform. The burden of supporting a deprecated approach. The restrictions placed on so-called “intellectual property.” Some people’s sheer lack of inspiration. The (in)famous NIH (“Not Invented Here”) syndrome. The fact that, as Larry Wall says, “there’s more than one way to do it.”

Was thinking about this because of a web forum in which I participate. Although numerous web forum platforms exist as part of “Content Management Systems,” several of them free of charge, the site’s developer created his own content management system, including forum support.

Overall, it looks like any other web forum. Pretty much the same features. The format tags are somewhat non-standard and the “look-and-feel” is specific to the site, but users probably see it as essentially the same as any other forum they visit. In fact, I doubt that most users think about the forum implementation on a regular basis.

This particular forum was created at a time when free-of-charge Content Management Systems were relatively rare.  The site itself was apparently not meant to become very big. The web developer probably put together the forum platform (platforum?) as an afterthought since he mostly wanted to bring people to his main site.

Thing is, though, the forums on that particular site seem to be the most active part of the site. In the past, the developer has even referred to this situation as a problem. He would rather have his traffic go to the main pages on the site than to the forums. Several “bridges” exist between the forums and the main site but the two seem rather independent of one another. Maybe the traffic issue has been solved in the meantime but the forums remain quite active.

My perception is that the reasons for the forums’ success include some “social” dimensions (the forum readership) and technical dimensions (the “reinvented” forum platform). None of these factors alone could explain the forums’ success but, taken together, they make it easy to understand why the forums are so well-attended.

In social terms, these forums reach something of a niche market which happens to be expanding. The niche itself is rather geeky, both in its passion for a product category and in its troubleshooting approach to life. Forum readers and participants are often looking for answers to specific questions. The signal-to-noise ratio in most of the site’s forums seems, on average, particularly high. Most moderation happens seamlessly, through the community. While not completely invisible, the site’s staff is rarely seen in most forum threads. Different forums, addressing different categories of issues, attract different groups of people even though some issues cross over from one forum to another.

The forum users’ aggregate knowledge on the site’s main topic is so impressive as to make the site look like the “one-stop shop” for any issue related to that topic. At the same time, some approaches to the topic are typically favored by the site’s members, and alternative sites have sprung up in part to counterbalance a perceived bias on that specific site. A sense of community has been built among some members of several of the forums and the whole forum section of the site feels like a very congenial place.

None of this seems very surprising for any successful web forum. All of the social dynamics on the site (including in the non-forum sections) reinforce the idea that a site’s success “is all about the people.”

But there’s a very simple feature of the site’s forum platform which seems rather significant: thread following through email. Not unique to this site and not that expertly implemented, IMHO. But very efficient, in context.

At the end of every post is a checkbox for email notification. It’s off by default, so the email notification is “opt-in,” as people tend to call this. There isn’t an option to “watch” a thread without posting in it (that is, only people who write messages in that specific thread can be notified directly when a new message appears). When a new message appears in a specific thread, everyone who has checked the notification checkbox for a message in that thread receives a message at the email address they registered with the site. That notification includes some information about the new forum post (author’s username, post title, thread title, thread URL, post URL) but not the message’s content. The site never sends any other mail to its users. Private mail happens offsite, as users can register public email addresses and/or personal homepages/websites in their profiles.
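For the sake of illustration, here is a minimal sketch of how such an opt-in, per-thread notification scheme could be wired up. Everything in it (names, data structures, the print-instead-of-SMTP shortcut) is hypothetical; I have no idea how the site’s developer actually implemented it.

```python
# Hypothetical sketch of the opt-in, per-thread email notification
# described above. All names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str              # poster's username
    title: str               # post title
    notify_me: bool = False  # the opt-in checkbox, off by default

@dataclass
class Thread:
    title: str
    url: str
    posts: list = field(default_factory=list)
    # username -> registered email, filled in only as people post
    subscribers: dict = field(default_factory=dict)

def send_notification(email, thread, post, post_url):
    # The notification carries metadata only, never the message body.
    body = (
        f"New post by {post.author}: {post.title}\n"
        f"In thread: {thread.title}\n"
        f"Thread: {thread.url}\n"
        f"Post: {post_url}\n"
    )
    print(f"To: {email}\n{body}")  # stand-in for an actual SMTP call

def add_post(thread, post, author_email, post_url):
    """Append a post and notify everyone who opted in earlier.

    Only people who have already posted in this thread (and checked
    the box) can be notified; there is no way to "watch" a thread
    without posting in it.
    """
    # Notify existing subscribers, except the new post's author.
    for username, email in thread.subscribers.items():
        if username != post.author:
            send_notification(email, thread, post, post_url)
    thread.posts.append(post)
    # Honor the checkbox: subscribe (or unsubscribe) the author.
    if post.notify_me:
        thread.subscribers[post.author] = author_email
    else:
        thread.subscribers.pop(post.author, None)

# Example flow:
t = Thread(title="Wheel reinvention", url="http://example.com/t/1")
add_post(t, Post("alice", "First!", notify_me=True), "alice@example.com",
         "http://example.com/t/1#p1")
add_post(t, Post("bob", "Re: First!"), "bob@example.com",
         "http://example.com/t/1#p2")  # alice gets an email; bob opted out
```

The design choice worth noting is that the notification carries only metadata, never the message body, so the email pulls readers back to the thread instead of replacing it.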

There are a number of things I don’t particularly enjoy about the way this email notification system works. The point is, though, it works pretty well. If I were to design a mail notification system, I would probably not do it the same way. But chances are that, as a result, my forums would be less successful than that site’s forums are (from an outsider’s perspective).

Now, what does all this have to do with my original point, you ask? Simple: sometimes reinventing the wheel is the best strategy.


Free As In Beer: The Case for No-Cost Software

To summarize the situation:

  1. Most of the software for which I paid a fee, I don’t really use.
  2. Most of the software I really use, I haven’t paid a dime for.
  3. I really like no-cost software.
  4. You might want to call me “cheap” but, if you’re developing “consumer software,” you may need to pay attention to the way people like me think about software.

No, I’m not talking about piracy. Piracy is wrong on a very practical level (not to mention legal and moral issues). Piracy and anti-piracy protection are in a dynamic that I don’t particularly enjoy. In some ways, forms of piracy are “ruining it for everyone.” So this isn’t about pirated software.

I’m not talking about “Free/Libre/Open Source Software” (FLOSS) either. I tend to relate to some of the views held by advocates of “Free as in Speech” or “Open” development, but I’ve had issues with FLOSS projects in the past. I will gladly support FLOSS in my own ways but, to be honest, I ended up losing interest in some of the most promising projects out there. Not saying they’re not worth it. After all, I do rely on many of those projects. But in talking about “no-cost software,” I’m not talking about Free, Libre, or Open Source development. At least, not directly.

Basically, I was thinking about the complex equation which, for any computer user, determines the cash value of a software application. Most of the time, this equation is somehow skewed. And I end up frustrated when I pay for software and almost giddy when I find good no-cost software.

An old but representative example of my cost-software frustration: QuickTime Pro. I paid for it a number of years ago, in preparation for a fieldwork trip. It seemed like a reasonable thing to do, especially given the fact that I was going to manipulate media files. When QuickTime was updated, my license stopped working. I was basically never able to use the QuickTime Pro features. And while it’s not a huge amount of money, the frustration of having paid for something I really didn’t need left me surprisingly bitter. It was a bad decision at that time so I’m now less likely to buy software unless I really need it and I really know how I will use it.

There’s an interesting exception to my frustration with cost-software: OmniOutliner (OO). I paid for it and have used it extensively for years. When I was “forced” to switch to Windows XP, OO was possibly the piece of software I missed the most from Mac OS X. And as soon as I was able to come back to the Mac, it was one of the first applications I installed. But, and this is probably an important indicator, I don’t really use it anymore. Not because it lacks features I found elsewhere, but because I’ve had to adapt my workflow to OO-less conditions. I still wish there were an excellent cross-platform outliner for my needs. And, no, Microsoft OneNote isn’t it.

Now, I may not be a typical user. If the term weren’t so self-aggrandizing, I’d probably call myself a “Power User.” And, as I keep saying, I am not a coder. Therefore, I’m neither the prototypical “end user” nor the stereotypical “code monkey.” I’m just someone spending inordinate amounts of time in front of computers.

One dimension of my computer behavior which probably does put me in a special niche is that I tend to like trying out new things. Even more specifically, I tend to get overly enthusiastic about computer technology to then become disillusioned by said technology. Call me a “dreamer,” if you will. Call me “naïve.” Actually, “you can call me anything you want.” Just don’t call me to sell me things. 😉

Speaking of pressure sales. In a way, if I had truckloads of money, I might be a good target for software sales. But I’d be the most demanding user ever. I’d require things to work exactly like I expect them to work. I’d be exactly what I never am in real life: a dictator.

So I’m better off as a user of no-cost software.

I still end up making feature requests, on occasion. Especially with Open Source and other open development projects. Some developers might think I’m just complaining as I’m not contributing to the code base or offering solutions to a specific usage problem. Eh.

Going back to no-cost software. The advantage isn’t really that we, users, spend less money on the software distribution itself. It’s that we don’t really need to select the perfect software solution. We can just make do with what we have. Which is a huge “value-add proposition” in terms of computer technology, as counter-intuitive as this may sound to some people.

To break down a few no-cost options.

  • Software that came with your computer. With an Eee PC, iPhone, XO, or Mac, it’s actually an important part of the complete computing experience. Sure, there are always ways to expand the software offering. But the included software may become a big part of the deal. After all, the possibilities are already endless. Especially if you have ubiquitous Internet access.
  • Software which comes through a volume license agreement. This often works for Microsoft software, at least at large educational institutions. Even if you don’t like it so much, you end up using Microsoft Office because you have it on your computer for free and it does most of the things you want to do.
  • Software coming with a plan or paid service. Including software given by ISPs. These tend not to be “worth it.” Yet the principle (or “business model,” depending on which end of the deal you’re on) isn’t so silly. You already pay for a plan of some kind, you might as well get everything you need from that plan. Nobody (not even AT&T) has done it yet in such a way that it would be to everyone’s advantage. But it’s worth a thought.
  • “Webware” and other online applications. Call it “cloud computing” if you will (it was a buzzphrase, a few days ago). And it changes a lot of things. Not only does it simplify things like backup and migration, but it often makes for a seamless computer experience. When it works really well, the browser effectively disappears and you just work in a comfortable environment where everything you need (content, tools) is “just there.” This category is growing rather rapidly at this point but many tech enthusiasts were predicting its success a number of years ago. Typical forecasting, I guess.
  • Light/demo versions. These are actually less common than they once were, especially in terms of feature differentiation. Sure, you may still play the first few levels of a game in demo version, and some “express” or “lite” versions of software are still distributed for free as teasers for more complete software. But, like the shareware model, demo and light software seem to have become a much less prominent part of the typical computer user’s life than they were just a few years ago.
  • Software coming from online services. I’m mostly thinking about Skype, but it’s a software category which would include any program with a desktop component (a “download”) and an online component, typically involving some kind of individual account (free or paid). Part subscription model, part “Webware companion.” Most of Google’s software would qualify (SketchUp, Google Earth…). If the associated “retail software” were free, I wouldn’t hesitate to put WoW in this category.
  • Actual “freeware.” Much freeware could be included in other categories but there’s still an idea of a “freebie,” in software terms. Sometimes, said freeware is distributed in view of getting people’s attention. Sometimes the freeware is just the result of a developer “scratching her/his own itch.” Sometimes it comes from lapsed shareware or even lapsed commercial software. Sometimes it’s “donationware” disguised as freeware. But, if only because there’s a “freeware” category in most software catalogs, this type of no-cost software needs to be mentioned.
  • “Free/Libre/Open Source Software.” Sure, I said earlier this was not what I was really talking about. But that was then and this is now. 😉 Besides, some of the most useful pieces of software I use do come from Free Software or Open Source. Mozilla Firefox is probably the best example. But there are many other worthy programs out there, including BibDesk, TeXShop, and FreeCiv. Though, to be honest, Firefox and Flock are probably the ones I use the most.
  • Pirated software (aka “warez”). While software piracy can technically let some users avoid the cost of purchasing a piece of software, the concept is directly tied with commercial software licenses. (It’s probably not piracy if the software distribution is meant to be open.) Sure, pirates “subvert” the licensing system for commercial software. But the software category isn’t “no-cost.” To me, there’s even a kind of “transaction cost” involved in the piracy. So even if the legal and ethical issues weren’t enough to exclude pirated software from my list of no-cost software options, the very practicalities of piracy put pirated software in the costly column, not in the “no-cost” one.

With all but the last category, I end up with most (but not all) of the software solutions I need. In fact, there are ways in which I’m better served now with no-cost software than I have ever been with paid software. I should probably make a list of these, at some point, but I don’t feel like it.

I mostly felt like assessing my needs, as a computer user. And though there always are many things I wish I could do but currently can’t, I must admit that I don’t really see the need to pay for much software.

Still… What I feel I need, here, is the “ultimate device.” It could be handheld. But I’m mostly thinking about a way to get ideas into a computer-friendly format. A broad set of issues about a very basic thing.

The spark for this blog entry was a reflection about dictation software. Not only have I been interested in speech technology for quite a while but I still bet that speech (recognition/dictation and “text-to-speech”) can become the killer app. I just think that speech hasn’t “come true.” It’s there, some people use it, the societal acceptance for it is likely (given cellphone penetration most anywhere). But its moment hasn’t yet come.

No-cost “text-to-speech” (TTS) software solutions do exist but are rather impractical. In the mid-1990s, I spent fifteen months doing speech analysis for a TTS research project in Switzerland. One of the best periods in my life. Yet my enthusiasm for current TTS systems has been dampened. I wish I could be passionate about TTS and other speech technology again. Maybe the reason I’m not is that we don’t have a “voice desktop” yet. But, for this voice desktop (voicetop?) to happen, we need high-quality, continuous speech recognition. IOW, we need a “personal dictation device.” So, my latest 2008 prediction: we will get a voice device (smartphone?) which adapts to our voices and does very efficient and very accurate transcription of our speech. (A correlated prediction: people will complain about speech technology for a while before getting used to the continuous stream of public soliloquy.)
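To be fair about no-cost TTS being within reach: here is about the simplest possible illustration, using the voices that already ship with a Mac (espeak is a free rough equivalent on other systems). This is just a sketch of the built-in voice, of course, not the “voice desktop” I have in mind.

```python
# Minimal no-cost TTS demonstration, calling the 'say' command that
# ships with Mac OS X (espeak is a free equivalent on Linux).
# It only shows that basic TTS costs nothing; it is nowhere near a
# full "voice desktop."
import subprocess

subprocess.run(["say", "The voice desktop has not arrived yet."])
```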

Dictation software is typically quite costly and complicated. Most users don’t see a need for dictation software, so they don’t see a need for speech technology in computing. Though I keep thinking that speech could improve my computing life, I’ve never purchased a speech processing package. Like OCR (which is also dominated by Nuance, these days), it seems to be the kind of thing which could be useful to everyone but ends up being limited to “vertical markets.” (As it so happens, I did end up buying an OCR program at some point and kept hoping my life would improve as a result of being able to transform hardcopies into searchable files. But I almost never used it, so my frustration with cost-software continues.)

Ah, well…

Lydon at His Best: Comeback Edition

Already posted a blog entry about Radio Open Source (ROS) host Christopher Lydon being at his best when he gives guests a lot of room.

I’ve also been overtly critical of Lydon, in the past. Nothing personal. ROS is a show that gets me thinking and I tend to think critically. I still could have voiced my opinions in a softer manner but blogging, like other forms of online communication, often makes it too easy to use inflammatory language.

At one point, I even posted a remarkably arrogant entry about my perception of what ROS should do.

But, funny thing: what the show has become is pretty much what I had in mind. Not in format. But in spirit. And it works quite well for me.

Lydon posted a detailed entry (apparently co-authored by ROS producer Mary McGrath) on the thought process involved in building the new ROS show:

Open Source » Blog Archive » As We Were Saying…

Despite the “peacock terms” used, the blog entry seems to imply a “leaner/meaner” ROS which gives Lydon much room to do his best work. Since it started again a few weeks ago, the show has been focusing on topics and issues particularly dear to Lydon, including Jazz, American cultural identity, U.S. politics, and Transcendentalism (those four are linked, of course). It’s much less of a radio show and much more of an actual podcast as we have come to understand them in the four years since Lydon and Dave Winer “have done the first podcast in human history.” In other words, Lydon, a (former) NYT journalist, has been able to adapt to podcasting, which he invented.

What is perhaps most counter-intuitive in Lydon’s adaptation is that he went from a typical “live radio talk show” format with guests and callers to a “conversation” show without callers, all the way to very focused shows with extended interviews of varying lengths. Which means that there’s in fact less of the “listener’s voice” in the show than there ever was. In fact, there seem to be far fewer comments about ROS episodes than there were before. Yet the show is more “podcasty.”

How?

Well, for one thing, there doesn’t seem to be as strict a release schedule as there would be on a radio show. While most podcasters say that regularity in episode releases is the key to a successful podcast, it seems to me that the scheduling flexibility afforded podcasts and blogs is a major part of their appeal. You don’t release something just because you have to. You release it because it’s as ready as you want it to be.

Then there’s the flexibility in length. Not that the variability is so great. Most episodes released since the comeback are between 30 and 45 minutes. Statistically significant, but not extreme variability in podcasting terms. The difference is more about what a rigid duration requirement does to a conversation. From simple conversational cues, it’s quite easy to spot which podcasts are live broadcasts, which are edited shows, and which are free-form. Won’t do a rundown right now but it would make for an interesting little paper.

The other dimension of the new ROS which makes it more podcasty is that it’s now clearly a Lydon show. He’s really doing his thing. With support from other people, but with his own passions in mind. He’s having fun. He’s being himself. And despite everything I’ve written about him as a host, I quite enjoy the honesty of a show centered on Lydon’s passions. As counter-intuitive as this might sound given the peacock terms used in the show’s blog, it makes for a less-arrogant show. Sure, it’s still involved in American nationalism/exceptionalism. But it’s now the representation of a specific series of voices, not a show pretending to represent everything and everyone.

So, in brief, I like it.

And, yes, it’s among the podcasts which make me think.

Product Differentiation

A chart meant to show the merits of Participatory Culture’s Miro video platform over a competing product from Joost.

Miro vs. Joost – Head to Head Comparison

Interesting approach. After the obligatory feature and content comparisons, the chart gives details about the organizational structure of Participatory Culture (creators of Miro) and Joost N.V. (creators of the Joost program, with links to Skype and Kazaa). Even the “technological” comparison focuses on Miro’s openness.

It seems that social and cultural factors may be important for folks at Participatory Culture. Seems like members of their target audience should care about enterprise size (11 people for Miro vs. 100 for Joost) and about connections to the Open Source community (Participatory Culture is claiming that Joost uses Open Source without making its own source code available). However, this chart makes it a bit difficult to distinguish statements about these factors from typical marketing hype.