Why I Need an iPad

I’m one of those who feel the iPad is the right tool for the job.

This is mostly meant as a reply to this blogthread. But it’s also more generally about my personal reaction to Apple’s iPad announcement.

Some background.

I’m an ethnographer and a teacher. I read a fair deal, write a lot of notes, and work in a variety of contexts. These days, I tend to spend a good amount of time in cafés and other public places where I like to work without being too isolated. I also commute using public transit, listen to lots of podcasts, and create my own. I’m also very aural.

I’ve used a number of PDAs, over the years, from a Newton MessagePad 130 (1997) to a variety of PalmOS devices (until 2008). In fact, some people readily associated me with PDA use.

As soon as I learnt about the iPod touch, I needed one. As soon as I heard about the SafariPad, I wanted one. I’ve been an intense ‘touch user since the iPhone OS 2.0 release and I’m a happy camper.

(A major reason I never bought an iPhone, apart from price, is that it requires a contract.)

In my experience, the ‘touch is the most appropriate device for all sorts of activities which are either part of another activity (reading during a commute) or are simply too short in duration to constitute an actual “computer session.” You don’t “sit down to work at your ‘touch” the way you might sit in front of a laptop or desktop screen. This works great for “looking up stuff” or “checking email.” It also makes a lot of sense during commutes in crowded buses or metros.

In those cases, the iPod touch is almost ideal. Ubiquitous access to the Internet would be nice, but that’s not a deal-breaker. Alternative text-input methods would help in some cases, but I do end up being about as fast on my ‘touch as I was with Graffiti on PalmOS.

For other tasks, I have a Mac mini. Sure, it’s limited. But it does the job. In fact, I have no intention of switching to another desktop and I even have an eMachines collecting dust (it’s too noisy to make a good server).

What I miss, though, is a laptop. I used an iBook G3 for several years and loved it. A little while later, I was able to share a MacBook with somebody else and it was a wonderful experience. I even got to play with the OLPC XO for a few weeks. That one was not so pleasant an experience but it did give me a taste for netbooks. And it made me think about other types of iPhone-like devices. Especially in educational contexts. (As I mentioned, I’m a teacher.)

I’ve been laptop-less for a while, now. And though my ‘touch replaces it in many contexts, there are still times when I’d really need a laptop. And these have to do with what I might call “mobile sessions.”

For instance: liveblogging a conference or meeting. I’ve used my ‘touch for this very purpose on a good number of occasions. But it gets rather uncomfortable, after a while, and it’s not very fast. A laptop is better for this, with a keyboard and a larger form factor. But the iPad will be even better because of lower risks of RSI. A related example: just imagine TweetDeck on iPad.

Possibly my favourite example of a context in which the iPad will be ideal: presentations. Even before learning about the prospect of getting iWork on a tablet, presentations were a context in which I really missed a laptop.

Sure, in most cases, these days, there’s a computer (usually a desktop running XP) hooked to a projector. You just need to download your presentation file from Slideshare, show it from Prezi, or transfer it through USB. No biggie.

But it’s not the extra steps which change everything. It’s the uncertainty. Even if it’s often unfounded, I usually get worried that something might just not work, along the way. The slides might not show the same way as you see them because something is missing on that computer or that computer is simply using a different version of the presentation software. In fact, that software is typically Microsoft PowerPoint which, while convenient, fits much less in my workflow than does Apple Keynote.

The other big thing about presentations is the “presenter mode,” allowing you to get more content than (or different content from) what the audience sees. In most contexts where I’ve used someone else’s computer to do a presentation, the projector was mirroring the computer’s screen, not using it as a different space. PowerPoint has this convenient “presenter view” but very rarely did I see it as an available option on “the computer in the room.” I wish I could use my ‘touch to drive presentations, which I could do if I installed software on that “computer in the room.” But it’s not something that is likely to happen, in most cases.

A MacBook solves all of these problems, and it’s an obvious use for laptops. But how, then, is the iPad better? Basically because of the interface. Switching slides on a laptop isn’t hard, but it’s more awkward than we realize. Even before watching the demo of Keynote on the iPad, I could simply imagine the actual pleasure of flipping through slides using a touch interface. The fit is “natural.”

I sincerely think that Keynote on the iPad will change a number of things, for me. Including the way I teach.

Then, there’s reading.

Now, I’m not one of those people who just can’t read on a computer screen. In fact, I even grade assignments directly from the screen. But I must admit that online reading hasn’t been ideal, for me. I’ve read full books as PDF files or dedicated formats on PalmOS, but it wasn’t so much fun, in terms of the reading process. And I’ve used my ‘touch to read things through Stanza or ReadItLater. But it doesn’t work so well for longer reading sessions. Even in terms of holding the ‘touch, it’s not so obvious. And, what’s funny, even a laptop isn’t that ideal, for me, as a reading device. In a sense, this is when the keyboard “gets in the way.”

Sure, I could get a Kindle. But I’m not a big fan of dedicated devices and, at least on paper, I find the Kindle a bit limited for my needs. Especially in terms of sources. I’d like to be able to use documents in a variety of formats and put them in a reading list, for extended reading sessions. No, not “curled up in bed.” But maybe lying on a sofa without external lighting. Given my experience with the ‘touch, the iPad is very likely the ideal device for this.

Then, there’s the overall “multi-touch device” thing. People have already been quite creative with the small touchscreen on iPhones and ‘touches; I can just imagine what may be done with a larger screen. Lots has been said about differences in “screen real estate” between laptop and desktop screens. We all know it can make a big difference in terms of what you can display at the same time. In some cases, two screens aren’t even a luxury, for instance when you code and display a page at the same time (LaTeX, CSS…). Certainly, the same qualitative difference applies to multitouch devices. Probably even more so, since the display is also used for input. What Han found missing in the iPhone’s multitouch was the ability to use both hands. With the iPad, Han’s vision is finding its space.

Oh, sure, the iPad is very restricted. For instance, it’s easy to imagine how much more useful it’d be if it did support multitasking with third-party apps. And a front-facing camera is something I was expecting in the first iPhone. It would just make so much sense that a friend seems very disappointed by this lack of videoconferencing potential. But we’re probably talking about predetermined expectations, here. We’re comparing the iPad with something we had in mind.

Then, there’s the issue of the competition. Tablets have been released and some multitouch tablets have recently been announced. What makes the iPad better than these? Well, we could all get into the same OS wars as have been happening with laptops and desktops. In my case, the investment in applications, files, and expertise that I have made in a Mac ecosystem made my XP years relatively uncomfortable and made me appreciate returning to the Mac. My iPod touch fits right in that context. Oh, sure, I could use it with a Windows machine, which is in fact what I did for the first several months. But the relationship between the iPhone OS and Mac OS X is such that using devices in those two systems is much more efficient, in terms of my own workflow, than I could get while using XP and iPhone OS. There are some technical dimensions to this, such as the integration between iCal and the iPhone OS Calendar, or even the filesystem. But I’m actually thinking more about the cognitive dimensions of recognizing some of the same interface elements. “Look and feel” isn’t just about shiny and “purty.” It’s about interactions between a human brain, a complex sensorimotor apparatus, and a machine. Things go more quickly when you don’t have to think too much about where some tools are, as you’re working.

So my reasons for wanting an iPad aren’t about being dazzled by a revolutionary device. They are about the right tool for the job.

Personal Devices

Personal devices after multitouch smartphones? Some random thoughts.

Still thinking about touch devices, such as the iPod touch and the rumoured “Apple Tablet.”

Thinking out loud. Rambling even more crazily than usual.

Something important about those devices is the need for a real “Personal Digital Assistant.” I put PDAs as a keyword for my previous post because I do use the iPod touch like I was using my PalmOS and even NewtonOS devices. But there’s more to it than that, especially if you think about cloud computing and speech technologies.

I mentioned speech recognition in that previous post. SR tends to be a pipedream of the computing world. Despite all the hopes put into realtime dictation, it still hasn’t taken off in a big way. One reason might be that it’s still somewhat cumbersome to use, in current incarnations. Another reason is that it’s relatively expensive as a standalone product which requires some getting used to. But I get the impression that another set of reasons has to do with the fact that it fits best on a personal device. Partly because it needs to be trained. But also because voice itself is a personal thing.

Cloud computing also takes a new meaning with a truly personal device. It’s no surprise that there are so many offerings with some sort of cloud computing feature in the App Store. Not only do Apple’s touch devices have limited file storage space but the notion of accessing your files in the cloud goes well with a personal device.

So, what’s the optimal personal device? I’d say that Apple’s touch devices are getting close to it but that there’s room for improvement.

Some perspective…

Originally, the PC was supposed to be a “personal” computer. But the distinction was mostly with mainframes. PCs may be owned by a given person, but they’re not so tied to that person, especially given the fact that they’re often used in a single context (office or home, say). A given desktop PC can be important in someone’s life, but it’s not always present like a personal device should be. What’s funny is that “personal computers” became somewhat more “personal” with the ‘Net and networking in general. Each computer had a name, etc. But those machines remained somewhat impersonal. In many cases, even when there are multiple profiles on the same machine, it’s not so safe to assume who the current user of the machine is at any given point.

On paper, the laptop could have been that “personal device” I’m thinking about. People may share a desktop computer but they usually don’t share their laptop, unless it’s mostly used like a desktop computer. The laptop being relatively easy to carry, it’s common for people to bring one back and forth between different sites: work, home, café, school… Sounds tautological, as this is what laptops are supposed to be. But the point I’m thinking about is that these are still distinct sites where some sort of desk or table is usually available. People may use laptops on their actual laps, but the form factor is still closer to a portable desktop computer than to the kind of personal device I have in mind.

Then, we can go all the way to “wearable computing.” There’s been some hype about wearable computers but it has yet to really be part of our daily lives. Partly for technical reasons but partly because it may not really be what people need.

The original PDAs (especially those on NewtonOS and PalmOS) were getting closer to what people might need, as personal devices. The term “personal digital assistant” seemed to encapsulate what was needed. But, for several reasons, PDAs have been having a hard time. Maybe there wasn’t a killer app for PDAs, outside of “vertical markets.” Maybe the stylus was the problem. Maybe the screen size and bulk of the device weren’t getting to the exact points where people needed them. I was still using a PalmOS device in mid-2008 and it felt like I was among the last PDA users.

One point was that PDAs had been replaced by “smartphones.” After a certain point, most devices running PalmOS were actually phones. RIM’s Blackberry succeeded in a certain niche (let’s use the vague term “professionals”) and is even beginning to expand out of it. And devices using other OSes have had their importance. It may not have been the revolution some readers of Pen Computing might have expected, but the smartphone has been a more successful “personal device” than the original PDAs.

It’s easy to broaden our focus from smartphones and think about cellphones in general. If the 3.3B figure can be trusted, cellphones may already be outnumbering desktop and laptop computers by 3:1. And cellphones really are personal. You bring them everywhere; you don’t need any kind of surface to use them; phone communication actually does seem to be a killer app, even after all this time; there are cellphones in just about any price range; cellphone carriers outside of Canada and the US are offering plans which are relatively reasonable; despite some variation, cellphones are rather similar from one manufacturer to the next… In short, cellphones already were personal devices, even before the smartphone category really emerged.

What did smartphones add? Basically, a few PDA/PIM features and some form of Internet access or, at least, some form of email. “Whoa! Impressive!”

Actually, some PIM features were already available on most cellphones and Internet access from a smartphone is in continuity with SMS and data on regular cellphones.

What did Apple’s touch devices add which was so compelling? Maybe not so much, apart from the multitouch interface, a few games, and integration with desktop/laptop computers. Even then, most of these changes were an evolution over the basic smartphone concept. Still, it seems to have worked as a way to open up personal devices to some new dimensions. People now use the iPhone (or some other multitouch smartphone which came out after the iPhone) as a single device to do all sorts of things. Around the world, multitouch smartphones are still much further from being ubiquitous than are cellphones in general. But we could say that these devices have brought the personal device idea to a new phase. At least, one can say that they’re much more exciting than the other personal computing devices.

But what’s next for personal devices?

Any set of buzzphrases. Cloud computing, speech recognition, social media…

These things can all come together, now. The “cloud” is mostly ready and personal devices make cloud computing more interesting because they’re “always-on,” are almost-wearable, have batteries lasting just about long enough, already serve to keep some important personal data, and are usually single-user.

Speech recognition could go well with those voice-enabled personal devices. For one thing, they already have sound input. And, by this time, people are used to seeing others “talk to themselves” as cellphones are so common. Plus, voice recognition is already understood as a kind of security feature. And, despite their popularity, these devices could use a further killer app, especially in terms of text entry and processing. Some of these devices already have voice control and it’s not so much of a stretch to imagine them having what’s needed for continuous speech recognition.

In terms of getting things onto the device, I’m also thinking about such editing features as a universal rich-text editor (à la TinyMCE), predictive text, macros, better access to calendar/contact data, ubiquitous Web history, multiple pasteboards, data detectors, Automator-like processing, etc. All sorts of things which should come from OS-level features.

“Social media” may seem like too broad a category. In many ways, those devices already take part in social networking, user-generated content, and microblogging, to name a few areas of social media. But what about a unified personal profile based on the device instead of the usual authentication method? Yes, all sorts of security issues. But aren’t people unconcerned about security in the case of social media? Twitter accounts are being hacked left and right yet Twitter doesn’t seem to suffer much. And there could be added security features on a personal device which is meant to really integrate social media. Some current personal devices already work well as a way to keep login credentials to multiple sites. The next step, there, would be to integrate all those social media services into the device itself. We may be waiting for OpenSocial, OpenID, OAuth, Facebook Connect, Google Connect, and all sorts of APIs to bring us to an easier “social media workflow.” But a personal device could simplify the “social media workflow” even further, with just a few OS-based tweaks.

Unlike my previous post, I’m not holding my breath for some specific event which will bring us the ultimate personal device. After all, this is just a new version of my ultimate handheld device blogpost. But, this time, I was focusing on what it means for a device to be “personal.” It’s even more of a drafty draft than my blogposts usually have been ever since I decided to really RERO.

So be it.

Free As In Beer: The Case for No-Cost Software

To summarize the situation:

  1. Most of the software for which I paid a fee, I don’t really use.
  2. Most of the software I really use, I haven’t paid a dime for.
  3. I really like no-cost software.
  4. You might want to call me “cheap” but, if you’re developing “consumer software,” you may need to pay attention to the way people like me think about software.

No, I’m not talking about piracy. Piracy is wrong on a very practical level (not to mention legal and moral issues). Piracy and anti-piracy protection are in a dynamic that I don’t particularly enjoy. In some ways, forms of piracy are “ruining it for everyone.” So this isn’t about pirated software.

I’m not talking about “Free/Libre/Open Source Software” (FLOSS) either. I tend to relate to some of the views held by advocates of “Free as in Speech” or “Open” developments but I’ve had issues with FLOSS projects, in the past. I will gladly support FLOSS in my own ways but, to be honest, I ended up losing interest in some of the most promising projects out there. Not saying they’re not worth it. After all, I do rely on many of those projects. But in talking about “no-cost software,” I’m not talking about Free, Libre, or Open Source development. At least, not directly.

Basically, I was thinking about the complex equation which, for any computer user, determines the cash value of a software application. Most of the time, this equation is somehow skewed. And I end up frustrated when I pay for software and almost giddy when I find good no-cost software.

An old but representative example of my cost-software frustration: QuickTime Pro. I paid for it a number of years ago, in preparation for a fieldwork trip. It seemed like a reasonable thing to do, especially given the fact that I was going to manipulate media files. When QuickTime was updated, my license stopped working. I was basically never able to use the QuickTime Pro features. And while it’s not a huge amount of money, the frustration of having paid for something I really didn’t need left me surprisingly bitter. It was a bad decision at that time so I’m now less likely to buy software unless I really need it and I really know how I will use it.

There’s an interesting exception to my frustration with cost-software: OmniOutliner (OO). I paid for it and have used it extensively for years. When I was “forced” to switch to Windows XP, OO was possibly the piece of software I missed the most from Mac OS X. And as soon as I was able to come back to the Mac, it’s one of the first applications I installed. But, and this is probably an important indicator, I don’t really use it anymore. Not because it lacks features I found elsewhere. But because I’ve had to adapt my workflow to OO-less conditions. I still wish there were an excellent cross-platform outliner for my needs. And, no, Microsoft OneNote isn’t it.

Now, I may not be a typical user. If the term weren’t so self-aggrandizing, I’d probably call myself a “Power User.” And, as I keep saying, I am not a coder. Therefore, I’m neither the prototypical “end user” nor the stereotypical “code monkey.” I’m just someone spending inordinate amounts of time in front of computers.

One dimension of my computer behavior which probably does put me in a special niche is that I tend to like trying out new things. Even more specifically, I tend to get overly enthusiastic about computer technology to then become disillusioned by said technology. Call me a “dreamer,” if you will. Call me “naïve.” Actually, “you can call me anything you want.” Just don’t call me to sell me things. 😉

Speaking of pressure sales. In a way, if I had truckloads of money, I might be a good target for software sales. But I’d be the most demanding user ever. I’d require things to work exactly like I expect them to work. I’d be exactly what I never am in real life: a dictator.

So I’m better off as a user of no-cost software.

I still end up making feature requests, on occasion. Especially with Open Source and other open development projects. Some developers might think I’m just complaining as I’m not contributing to the code base or offering solutions to a specific usage problem. Eh.

Going back to no-cost software. The advantage isn’t really that we, users, spend less money on the software distribution itself. It’s that we don’t really need to select the perfect software solution. We can just make do with what we have. Which is a huge “value-add proposition” in terms of computer technology, as counter-intuitive as this may sound to some people.

To break down a few no-cost options.

  • Software that came with your computer. With an Eee PC, iPhone, XO, or Mac, it’s actually an important part of the complete computing experience. Sure, there are always ways to expand the software offering. But the included software may become a big part of the deal. After all, the possibilities are already endless. Especially if you have ubiquitous Internet access.
  • Software which comes through a volume license agreement. This often works for Microsoft software, at least at large educational institutions. Even if you don’t like it so much, you end up using Microsoft Office because you have it on your computer for free and it does most of the things you want to do.
  • Software coming with a plan or paid service. Including software given by ISPs. These tend not to be “worth it.” Yet the principle (or “business model,” depending on which end of the deal you’re on) isn’t so silly. You already pay for a plan of some kind, you might as well get everything you need from that plan. Nobody (not even AT&T) has done it yet in such a way that it would be to everyone’s advantage. But it’s worth a thought.
  • “Webware” and other online applications. Call it “cloud computing” if you will (it was a buzzphrase, a few days ago). And it changes a lot of things. Not only does it simplify things like backup and migration, but it often makes for a seamless computer experience. When it works really well, the browser effectively disappears and you just work in a comfortable environment where everything you need (content, tools) is “just there.” This category is growing rather rapidly at this point but many tech enthusiasts were predicting its success a number of years ago. Typical forecasting, I guess.
  • Light/demo versions. These are actually less common than they once were, especially in terms of feature differentiation. Sure, you may still play the first few levels of a game in demo version and some “express” or “lite” versions of software are still distributed for free as teaser versions of more complete software. But, like the shareware model, demo and light software may seem to have become much less prominent a part of the typical computer user’s life than just a few years ago.
  • Software coming from online services. I’m mostly thinking about Skype but it’s a software category which would include any program with a desktop component (a “download”) and an online component, typically involving some kind of individual account (free or paid). Part subscription model, part “Webware companion.” Most of Google’s software would qualify (Sketchup, Google Earth…). If the associated “retail software” were free, I wouldn’t hesitate to put WoW in this category.
  • Actual “freeware.” Much freeware could be included in other categories but there’s still an idea of a “freebie,” in software terms. Sometimes, said freeware is distributed in view of getting people’s attention. Sometimes the freeware is just the result of a developer “scratching her/his own itch.” Sometimes it comes from lapsed shareware or even lapsed commercial software. Sometimes it’s “donationware” disguised as freeware. But, if only because there’s a “freeware” category in most software catalogs, this type of no-cost software needs to be mentioned.
  • “Free/Libre/Open Source Software.” Sure, I said earlier this was not what I was really talking about. But that was then and this is now. 😉 Besides, some of the most useful pieces of software I use do come from Free Software or Open Source. Mozilla Firefox is probably the best example. But there are many other worthy programs out there, including BibDesk, TeXShop, and FreeCiv. Though, to be honest, Firefox and Flock are probably the ones I use the most.
  • Pirated software (aka “warez”). While software piracy can technically let some users avoid the cost of purchasing a piece of software, the concept is directly tied with commercial software licenses. (It’s probably not piracy if the software distribution is meant to be open.) Sure, pirates “subvert” the licensing system for commercial software. But the software category isn’t “no-cost.” To me, there’s even a kind of “transaction cost” involved in the piracy. So even if the legal and ethical issues weren’t enough to exclude pirated software from my list of no-cost software options, the very practicalities of piracy put pirated software in the costly column, not in the “no-cost” one.

With all but the last category, I end up with most (but not all) of the software solutions I need. In fact, there are ways in which I’m better served now with no-cost software than I have ever been with paid software. I should probably make a list of these, at some point, but I don’t feel like it.

I mostly felt like assessing my needs, as a computer user. And though there always are many things I wish I could do but currently can’t, I must admit that I don’t really see the need to pay for much software.

Still… What I feel I need, here, is the “ultimate device.” It could be handheld. But I’m mostly thinking about a way to get ideas into a computer-friendly format. A broad set of issues about a very basic thing.

The spark for this blog entry was a reflection about dictation software. Not only have I been interested in speech technology for quite a while but I still bet that speech (recognition/dictation and “text-to-speech”) can become the killer app. I just think that speech hasn’t “come true.” It’s there, some people use it, the societal acceptance for it is likely (given cellphone penetration most anywhere). But its moment hasn’t yet come.

No-cost “text-to-speech” (TTS) software solutions do exist but are rather impractical. In the mid-1990s, I spent fifteen months doing speech analysis for a TTS research project in Switzerland. One of the best periods in my life. Yet, my enthusiasm for current TTS systems has been dampened. I wish I could be passionate about TTS and other speech technology again. Maybe the reason I’m not is that we don’t have a “voice desktop,” yet. But, for this voice desktop (voicetop?) to happen, we need high quality, continuous speech recognition. IOW, we need a “personal dictation device.” So, my latest 2008 prediction: we will get a voice device (smartphone?) which adapts to our voices and does very efficient and very accurate transcription of our speech. (A correlated prediction: people will complain about speech technology for a while before getting used to the continuous stream of public soliloquy.)

Dictation software is typically quite costly and complicated. Most users don’t see a need for dictation software so they don’t see a need for speech technology in computing. Though I keep thinking that speech could improve my computing life, I’ve never purchased a speech processing package. Like OCR (which is also dominated by Nuance, these days), it seems to be the kind of thing which could be useful to everyone but ends up being limited to “vertical markets.” (As it so happens, I did end up buying an OCR program at some point and kept hoping my life would improve as a result of being able to transform hardcopies into searchable files. But I almost never used OCR, so my frustration with cost-software continues.)

Ah, well…

iPhone Wishlist

Yeah, everybody’s been talking about the iPhone. It’s last week’s story but it can still generate a fair bit of coverage. People are already thinking about the next models.

Apple has most of the technology to build what would be my dream handheld device but the iPhone isn’t it. Yet.

My wishful thinking for what could in fact be the coolest handheld ever. Of course, the device should have the most often discussed features which the iPhone currently misses (Flash, MMS, chat…). But I’m going much further, here.

  • Good quality audio recording (as with the recording add-ons for the iPod 5G).
  • Disk space (say, 80GB).
  • VoIP support (Skype or other, but as compatible as possible).
  • Video camera which can face the user (for videoconference).
  • Full voice interface: speech recognition and text-to-speech for dialing, commands, and text.
  • Handwriting recognition.
  • Stylus support.
  • Data transfer over Bluetooth.
  • TextEdit.
  • Adaptive technology for word recognition.
  • Not tied to cellular provider contract.
  • UMA Cell-to-WiFi (unlicensed mobile access).
  • GPS.
  • iLife support.
  • Sync with Mac OS X and Windows.
  • Truly international cellular coverage.
  • Outliner.
  • iWork support.
  • Disk mode.
  • Multilingual support.
  • Use as home account on Mac OS X “host.”
  • Front Row.
  • USB and Bluetooth printing.
  • Battery packs with standard batteries.

The key point here isn’t that the iPhone should be a mix between an iPod and a MacBook. I’m mostly thinking about the fact that the “Personal” part of the “PC” and “PDA” concepts has not come to fruition yet. Sure, your PC account has your preferences and some personal data. Your PDA contains your contacts and to-do lists. But you still end up with personal data in different places. Hence the need for Web apps. As we all know, web apps are quite useful but there’s still room for standalone applications, especially on a handheld. It wouldn’t take much for the iPhone to be the ideal tool to serve as a “universal home” where a user can edit and output files. To a musician or podcaster, it could become the ideal portable studio.

But where the logical step needs to be taken is in “personalization.” Apparently, the iPhone’s predictive keyboard doesn’t even learn from the user’s input. Since the iPhone is meant to be used by a single individual, it seems quite strange that it does not, minimally, adapt to typed input. Yet, with a device already containing a headset, it seems to me that speech technologies could be ideal. Full-text continuous speech recognition already exists and what it requires is exactly what the iPhone could provide: adaptation to a user’s voice and speech patterns. Though it may be awkward for people to use a voice interface in public, cellphones have created a whole group of people who seem to be talking to themselves. 😉
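
To make “adapt to typed input” concrete, here’s a minimal sketch (in TypeScript, and assuming nothing about Apple’s actual keyboard code) of the kind of adaptation I have in mind: a predictor that counts the user’s own word pairs and ranks suggestions accordingly. The point is how little machinery single-user adaptation actually takes.

    // A toy adaptive predictor: it counts which words this user types after
    // which, so suggestions drift toward her/his own vocabulary over time.
    class AdaptivePredictor {
      private bigrams = new Map<string, Map<string, number>>();

      // Call this on every word the user actually commits.
      learn(previous: string, word: string): void {
        const counts = this.bigrams.get(previous) ?? new Map<string, number>();
        counts.set(word, (counts.get(word) ?? 0) + 1);
        this.bigrams.set(previous, counts);
      }

      // Rank candidate completions by how often this user typed them
      // after the previous word.
      suggest(previous: string, prefix: string, max = 3): string[] {
        const counts = this.bigrams.get(previous);
        if (!counts) return [];
        return [...counts.entries()]
          .filter(([word]) => word.startsWith(prefix))
          .sort((a, b) => b[1] - a[1])
          .slice(0, max)
          .map(([word]) => word);
      }
    }

    // After "speech recognition" has been typed twice and "speech research"
    // once, typing "re" after "speech" suggests "recognition" first.
    const p = new AdaptivePredictor();
    p.learn("speech", "recognition");
    p.learn("speech", "recognition");
    p.learn("speech", "research");
    console.log(p.suggest("speech", "re")); // ["recognition", "research"]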

Though very different from speech recognition, text-to-speech could integrate really well with a voice-driven device. Sharing the same “dictionaries” across all applications on the same device, the TTS and SR features could be trained very specifically to a given user. While screens have been important on computers for quite a while, voice-activated computers have been prominent in science-fiction for probably as long. The most common tasks done on computers (writing messages, making appointments, entering data, querying databases…) could all be done quite effectively through a voice interface. And the iPhone could easily serve as a voice interface for other computers.

Yes, I’m nightdreaming. It’s a good way to get some rest.

What Radio Open Source Should Do

I probably think too much. In this case, about a podcast and radio show which has been with me for as long as I started listening to podcasts: Radio Open Source on Public Radio International. The show is hosted by Christopher Lydon and is produced in Cambridge, MA, in collaboration with WGBH Boston. The ROS staff is a full team working on not only the show and the podcast version but on a full-fledged blog (using a WordPress install, hosted by Contegix) with something of a listener community.

I recently decided not to listen to ROS anymore. Nothing personal, it just wasn’t doing it for me anymore. But I’ve spent enough time listening to the show and thinking about it that I even have suggestions about what they should do.

At the risk of sounding opinionated, I’m posting these comments and suggestions. In my mind, honesty is always the best policy. Of course, nothing personal about the excellent work of the ROS team.

Executive summary of my suggestion: a weekly spinoff produced by the same team, as an actual podcast, possibly as a summary of highlights. Other shows do something similar on different radio stations and it fits the podcasting model. Because time-shifting is of the essence with podcasts, a rebroadcast version (instead of a live show) would make a lot of sense. Obviously, it would imply more work for the team as a whole but I sincerely think it would be worth it.

ROS has been one of the first podcasts to which I subscribed and it might be the one that I have maintained in my podcatcher for the longest time. The reason is that several episodes have inspired me in different ways. My perception is that the teamwork “behind the scenes” makes for a large part of the success of the show.

Now, I don’t know anything about the inner workings of the ROS team. But I do get the impression that some important changes are imminent. The two people who left in the last few months, the grant they received, their successful fundraiser, as well as some perceivable changes in the way the show is handled tell me that ROS may be looking for new directions. I’m just an ethnographer and not a media specialist but here are some of my (honest) critical observations.

First, some things which I find quite good about the show (or some reasons I was listening to the show).

  • In-depth discussions. As Siva Vaidhyanathan mentioned on multiple occasions, ROS is one of the few shows in the U.S. during which people can really spend an hour debating a single issue. While intriguing, Siva’s comparison with Canadian shows does seem appropriate according to my own experience with CBC and Radio-Canada. Things I’ve heard in Western Europe and West Africa would also fit this pattern. A show like ROS is somewhat more like The New Yorker than like The New York Times. (Not that these are innocent choices, of course.)
  • Research. A lot of care has been put in preparing for each show and, well, “it shows.” The “behind the scenes” team is obviously doing a great job. I include in this the capacity for the show to entice fascinating guests to come on the show. It takes diplomacy, care, and insight.
  • Podcasting. ROS was one of the first “public radio” shows to be available as a podcast and it’s possibly one of the radio shows for which the podcasting process is the most appropriate. Ease of subscribing, relatively few problems downloading shows, etc.
  • Show notes. Because the show uses a blog format for all of its episodes, it makes for excellent show notes, very convenient and easy to find. Easy to blog. Good trackback.
  • The “Community.” Though it can be troublesome at times, the fact that the show has a number of fans who act as regular commentators on the blog entries has been an intriguing feature of the show. On occasion, there is a sense that listeners can have some impact on the way the show is structured. Few shows on public radio do this and it’s a feature that makes the show, erm, let’s say “podworthy.” (Apologies to those who hate the “pod-” prefix. At least, you got my drift, right?)

On the other hand, there are things with ROS that have kept putting me off, especially as a podcast. A few of those “pet peeves.”

  • “Now the News.” While it’s perfectly natural for a radio show to have to break for news or ads, the disruption is quite annoying on a podcast. The pacing of the show as a whole becomes completely dominated by the breaks. What’s more, the podcast version makes very obvious the fact that discussions started before the break rarely if ever get any resolution after the break. A rebroadcast would allow for seamless editing. In fact, some television shows offer exclusive online content as a way to avoid this problem. Or, more accurately, some television shows use this concept as a way to entice watchers to visit their websites. Neat strategy, powerful concept.
  • Length. While the length of the show (a radio “hour”) allows for in-depth discussions, the usual pacing of the show often implies a rather high level of repetition. One gets the impression that the early part of the show contains most of the “good tidbits” one needs to understand what will be discussed later. I often listen to the first part of the show (before the first break) and end up skipping the rest of the show. This could be alleviated with a “best of ROS” podcast. In fact, it’s much less of an issue when the listener knows what to expect.
  • Host. Nothing personal. Chris Lydon is certainly a fabulous person and I would feel bad to say anything personal about him even though, to make a point, I have used a provocative title in the past which specifically criticised him. (My point was more about the show as a whole.) In fact, Lydon can be very good as a radio host, as I described in the past. Thing is, Lydon’s interviewing style seems to me more appropriate for a typical radio show than for a podcast. Obviously, he is quite knowledgeable about a wide array of subjects, enabling him to relate to his guests. Also, he surely has a “good name” in U.S. journalistic milieus. But, to be perfectly honest, I sometimes feel that his respect for guests and other participants (blog commentators and callers when ROS still had them) is quite selective. In my observation, Lydon also tends to do what Larry King described on the Colbert Report as an “I-show” (host talking about her/his own experience, often preventing a guest from following a thought). It can be endearing on some radio shows but it seems inappropriate for a podcast. What makes this interviewing style even more awkward is the fact that the show is frequently billed as a “conversation.” In conversation analysis, Lydon’s interviews would merit a lot of discussion.
  • Leading questions. While many questions asked on the show do help guests get into interesting issues, many of them sound like “leading” questions. Maybe not to the “how long have you been beating your wife?” extreme, but it does seem that the show is trying to get something specific out of each guest. Appropriate for journalism but awkward for what is billed as a “conversation.” In fact, many “questions” asked on the show are phrased as affirmative utterances instead of actual questions.
  • Old School Journalism. It may sound like harsh criticism but what I hear from ROS often makes me think that they still believe that some sources are more worthy than others by mere virtue of being “a trusted source.” I’ve been quite critical of what I think of as “groupthink.” Often characterised by the fact that everybody listens, reads, or watches the same sources of information. In Quebec, it’s often Radio-Canada’s television shows. In the U.S., it typically implies that everyone reads the New York Times and thinks of it as their main “source of information.” IMHO, the ROS-NYT connection is a strong one. To me, critical thinking implies a mistrust of specific sources and an ability to process information regardless of the source. I do understand that the NYT is, to several people, the “paper of record” but the very notion of “paper of record” seems outdated in this so-called “Information Age.” In fact, as an outsider, I often find the NYT even more amenable to critical challenge than some other sources. This impression I got even before the scandals which have been plaguing the NYT. In other words, the NYT is the best example of Old School Journalism. Podcasting is going away from Old School Journalism so a podcast version of ROS should go away from NYT groupthink. Lydon’s NYT background is relevant here but what I describe goes much beyond that print newspaper.
  • The “Wolfpack.” The community around ROS is fascinating. If I had more time, I might want to spend more time “in” it. Every commentator on the show’s entries has interesting things to say and the comments are sometimes more insightful than the show itself. Yet, as contradictory as it may sound, the ROS “fanbase” makes the show less approachable to new listeners. This one is a common feature of open networks with something of a history but it’s heightened by the way the community is handled in the show. It sometimes seems as though some “frequent contributors” are appreciated more than others. The very fact that some people are mentioned as “frequent contributors to the show” makes the “community” sound more like a clique than like an open forum. While Brendan often brought in some questions from the real-time blog commentators, these questions rarely led to real two-way conversations. The overall effect is more like a typical radio talk show than like a community-oriented podcast.
  • Show suggestions. Perhaps because suggestions submitted to the show are quite numerous, very few of these suggestions have been discussed extensively. The “pitch a show idea of your own” concept is helpful but the end-result is that commentators need to prepare a pitch which might be picked up by a member of the ROS team to be pitched during the team’s meeting. The process is thus convoluted, non-transparent, non-democratic, and cumbersome. To be perfectly honest, it sounds as if it were “lipservice” to the audience instead of being a way to have listeners be part of the show. As a semi-disclaimer, I did pitch several ideas. The one idea of mine which was picked up was completely transformed from my original idea. Nothing wrong with that but it doesn’t make the process feel transparent or open. While a digg-like system for voting on suggestions might be a bit too extreme for a show on public radio, I find myself dreaming of the ROS team working on shows pitched by listeners.
  • Time-sensitiveness. Because the show is broadcast and podcast four days a week, the production cycle is particularly tight. In this context, commentators need to post on an entry in a timely fashion to “get the chance to be heard.” Perfectly normal, but not that podfriendly. It seems that the most dedicated listeners are those who listen to the show live while posting comments on the episode’s blog entry. This alienates the actual podcasting audience. Time-shifting is at the very basis of podcasting and many shows had to adapt to this reality (say, for a contest or to get feedback). The time-sensitive nature of ROS strengthens the idea that it’s a radio show which happens to be podcast, contrary to their claims. A weekly podcast would alleviate this problem.
  • Gender bias. Though I didn’t really count, it seems to me that a much larger proportion of men than women are interviewed as guests on the show. It even seems that women are only interviewed when the show focuses specifically on gender. Women are then interviewed as women instead of being guests who happen to be women. This is especially flagrant when compared to podcasts and radio shows outside of the U.S. mainstream media. Maybe I’m too gender-conscious but a gender-balanced show often produces a dynamic which is, I would dare say, “friendlier.”
  • U.S. focus. While it makes sense that a show produced in Cambridge, MA should focus on the U.S., I naively thought that the ‘I’ in PRI implied a global reach. Many ROS episodes have discussed “international affairs” yet the focus is on “what does it mean for the U.S.” This approach is quite far from what I have heard in West Africa, Western Europe, and Canada.

Phew!

Yes, that’s a lot.

Overall, I still enjoyed many things about the show while I was listening to it. I was often compelled to post a blog entry about something I heard on the show which, in itself, is a useful thing about a podcast. But the current format of the show is clearly not what I expect a podcast to be.

Now what? Well, my dream would be a podcast on disparate subjects with the team and clout of ROS but with podcasting in mind, from beginning to end. I imagine the schedule to be more of a weekly wrap-up than a live daily show. As a podcast listener, I tend to prefer weekly shows. In some cases, podcasts serve as a way to incite listeners to listen to the whole show. Makes a lot of sense.

That podcast could include a summary of what was said in the live comments. It could also have guest hosts. And exclusive content. And it could become an excellent place to get insight about a number of things. And I’d listen to it. Carefully.

Some “pie in the sky” wishes.

  • Full transcripts. Yes, it takes time and effort, but it brings audio to the blogosphere more than anything else could. Different transcribing services are available for podcasts and members of the team could make this more efficient.
  • Categorised feeds. The sadly missed DailySonic podcast had excellent customisation features. If a mainstream radio station could do it, ROS would be a good candidate for categorised feeds.
  • Voting mechanism. Since Slashdot and Digg, voting has probably been the most common form of online participation by people who care about media. Voting on features would make the “pitching” process more than simply finding the right “hook” to make the show relevant. Results are always intriguing in those cases.
  • Community guests. People do want to get involved and the ROS community is fascinating. Bringing some members on the podcast could do a lot to give a voice to actual people. The only attempt I remember on ROS was with a kind of answering machine system. Nothing was played on the show. (What I left was arguably not that fascinating but I was surprised nothing came out of it.)
  • Guest hosts. Not to go too Bakhtin on y’all, but multiple voices in the same discussion make for interesting stories. Being a guest host could prove how difficult it is to be a host.
  • Field assignments. With a wide community of listeners, it could be interesting to have audio from people in other parts of the world, apart from phone interviews. Even an occasional one-minute segment would go a long way to give people exposure to realities outside the United States.
  • Social bookmarking. Someone recently posted advice for a book club. With social bookmarking features, book recommendations could be part of a wider scheme.
  • Enhanced audio. While the MP3 version is really necessary, podcasts using enhanced features such as chapters and embedded images can be extremely useful, especially for owners of recent iPod/iPhone models.
  • Links. ROS is not the only radio show and links are what makes podcasts alive, especially when one is linked to another. In a way, podcasts become an alternate universe through those links.

Ok, I’m getting too far astray from my original ideas about ROS. It must mean that I should leave it at that.

I do sincerely hope that ROS will take an interesting turn. I’ll be watching from my blog aggregator and I might join the ROS community again.

In the meantime, I’ll focus on other podcasts.

Googely Voice

Neat new service.

GOOG-411 offers free directory assistance – Lifehacker

Not available in Montreal, but quite useful. Apparently better than Free-411.

The speech recognition and speech synthesis are quite good. In fact, when I was working in speech, such a service was pretty much the main example we used for the need for speech research. With the prominence of cellphones in many different parts of the world, I still think that speech is a field in which technological advancements can have very interesting effects.

Why Podcasting Doesn’t Work (Reason #23)

Was listening to the latest episode of Scientific American’s ScienceTalk podcast (for January 3, 2007). As is often the case with some of my favourite podcasts, I wanted to blog about specific issues mentioned in this episode.

Here are the complete “show notes” for this episode (22:31 running time).

In this episode, journalist Chip Walter, author of Thumbs, Toes and Tears, takes us on a tour of the physical traits that are unique to humans, with special attention to crying, the subject of his article in the current issue of Scientific American MIND. The University of Cambridge’s Gordon Smith discusses the alarming lack of any randomized, controlled trials to determine the efficacy of parachutes. Plus we’ll test your knowledge about some recent science in the news. Websites mentioned on this episode include http://www.sciammind.com; http://www.chipwalter.com; http://www.bmj.com.

AFAICT, there’s a direct link to the relevant MP3 file (which may be downloaded with a default name of “podcast.mp3” through a browser’s “save link as” feature), an embedded media player to listen to the episode, some links to subscribe to the podcast through RSS, My Yahoo, or iTunes, and a menu to browse for episodes by month. Kind of neat.

But what’s wrong with this picture?

In my mind, a few things. And these are pretty common for podcasts.

First, there are no clickable links in the show notes. Sure, anybody can copy/paste the URLs in a browser but there’s something just slightly frustrating about having to do that instead of just clicking on a link directly. In fact, these links are quite generic and would still require that people look for information themselves, instead of pinpointing exactly which scientific articles were featured in the podcast. What’s worse, the Chip Walter article discussed in the podcast isn’t currently found on the main page for the current issue of Scientific American’s Mind. To add insult to injury, the URL for that article is the mnemo-friendly:

http://www.sciammind.com/article.cfm?&articleID=33F8609A-E7F2-99DF-3F12706DF3E30E29

Catchy! 😉

These are common issues with show notes and are easily solved. I should just write SciAm to comment on this. But there are deeper issues.
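
To give an idea of how easily solved: here’s a rough sketch (hypothetical code, nothing SciAm actually uses) of turning plain-text URLs in show notes into clickable links before publishing.

    // Naive URL auto-linker for show notes. A production version would need
    // to handle trailing punctuation and other edge cases.
    function linkify(notes: string): string {
      return notes.replace(
        /https?:\/\/[^\s<]+/g,
        (url) => `<a href="${url}">${url}</a>`
      );
    }

    console.log(linkify("Websites mentioned include http://www.sciammind.com"));
    // Websites mentioned include
    // <a href="http://www.sciammind.com">http://www.sciammind.com</a>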

One reason blogging caught on so well is that it’s very easy to link and quote from one blog to another. In fact, most blogging platforms have bookmarklets and other tools to make it easy to create a blog entry by selecting text and/or images from any web page, clicking on the bookmarklet, adding a few comments, and pressing the “Publish” button. In a matter of seconds, you can have your blog entry ready. If the URL to the original text is static, readers of your blog are able to click on a link accompanying the quote to put it in context. In effect, those blog entries are merely tagging web content. But the implications are deeper. You’re associating something of yourself with that content. You’re applying some basic rules of attribution by providing enough information to identify the source of an idea. You’re making it easy for readers to follow streams of thought. If the original is a trackback-/ping-enabled blog system, you’re telling the original author that you’re referring to her piece. You’re creating new content that can, in itself, serve as the basis for something new. You might even create a pseudo-community of like-minded people. All with a few clicks and types.
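
For illustration, here’s roughly what such a bookmarklet boils down to, written out in TypeScript. The “example-blog.com/new-post” endpoint is made up, but real blogging platforms expose similar pre-filled entry forms.

    // Sketch of a "blog this" bookmarklet: grab the selected text and hand it,
    // along with its source URL and title, to a (hypothetical) new-post form.
    const quote = window.getSelection()?.toString() ?? "";
    const newPost = new URL("https://example-blog.com/new-post"); // made-up endpoint
    newPost.searchParams.set("quote", quote);           // the selected excerpt
    newPost.searchParams.set("source", location.href);  // link back to the original
    newPost.searchParams.set("title", document.title);  // suggested entry title
    window.open(newPost.toString());                    // opens a pre-filled entry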

Compare with the typical (audio) podcast episode. You listen to it while commuting or while doing some other low-attention activity. You suddenly want to talk about what you heard. Go out and reach someone. You do have a few options. You can go and look at the show notes if they exist and use the same bookmarklet procedure to create a blog entry. Or you can simply tell someone “hey, check out the latest ScienceTalk, from SciAm, it’s got some neat things about common sense and human choking.” If the podcast has a forum, you can go in the forum and post something to listeners of that podcast. If the show notes are in blog form, you may post comments for those who read the show notes. And you could do all sorts of things with the audio recording that you have, including bookmark it (depending on the device you use to listen to audio files). But all of these are quite limited.

You can’t copy/paste an excerpt from the episode. You can’t link to a specific segment of that episode. You can’t realistically expect most of your blog readers to access the whole podcast just to get the original tidbit. Blog readers may not easily process the original information further. In short, podcasts aren’t easily bloggable.

Podcast episodes are often big enough that it’s not convenient to keep them on your computer or media device. Though it is possible to bookmark audio and video files, there’s no standard method to keep and categorize these bookmarks. Many podcasts make it very hard to find a specific episode. Some podcasts in fact make all but the most recent episodes unavailable for download. Few devices make it convenient to just skim over a podcast. Though speed listening seems to be very effective (like speed reading) at making informative content stick in someone’s head, few solutions exist for speed listening to podcasts. A podcast’s RSS entry may contain a useful summary but there’s no way to scale up or down the amount of information we get about different podcast segments like we can do with text-based RSS feeds in, say, Safari 2.0 and up. Audio files can’t easily be indexed, searched, or automatically summarized. Most data mining procedures don’t work with audio files. Few formats allow for direct linking from the audio file to other online content and those formats that do allow for such linking aren’t ubiquitous. Responding to a podcast with a podcast (or audio/video comment) is doable but is more time-consuming than written reactions to written content. Editing audio/video content is more involving than, say, proofreading a forum comment before sending it. Relatively few people respond in writing to blogs and forums and it’s quite likely that the proportion of people who would feel comfortable responding to podcasts with audio/video recordings is much smaller than the proportion of blog/forum commenters.

And, of course, video podcasts (a big trend in podcasting) aren’t better than audio podcasts on any of these fronts.

Speech recognition technology and podcast-transcription services like podzinger may make some of these issues moot but they’re all far from perfect, often quite costly, and are certainly not in widespread use. A few podcasts (well, at least one) with very dedicated listeners have listeners effectively transcribe the complete verbal content of every podcast episode and this content can work as blog-ammo. But chances that such a practice may become common are slim to none.

Altogether, podcasting is more about passive watching/listening than about active engagement in widespread dialogue. Similar to what our good old friend (and compatriot) McLuhan described as “hot,” instead of “cool,” media. (Always found the distinction counter-intuitive myself, but it fits, to a degree…)

Having said all of this, I recently embarked on my first real podcasting endeavor, having my lectures be distributed in podcast form, within the Moodle course management system. Lecturecasts have been on my mind for a while. So this is an opportunity for me to see, as a limited experiment, whether it can appropriately be integrated in my teaching.

As it turns out, I don’t have much to do to make the lecturecasts possible. Concordia University has a service to set it all up for me. They give me a wireless lapel microphone, record that signal, put the MP3 file on one of their servers, and add that file in Moodle as a tagged podcast episode (Moodle handles the RSS and other technical issues). Neat!
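
For the curious, “Moodle handles the RSS” mostly means generating feed entries with an enclosure element, which is what podcatchers look for. A rough sketch in TypeScript (field names illustrative, not Moodle’s actual code):

    // Build a podcast RSS <item> for one lecture recording.
    interface Episode {
      title: string;   // e.g. "Lecture 5"
      mp3Url: string;  // where the server stores the MP3
      bytes: number;   // file size, required by the enclosure element
      pubDate: string; // RFC 822 date expected by RSS readers
    }

    function rssItem(e: Episode): string {
      return [
        "<item>",
        `  <title>${e.title}</title>`,
        `  <pubDate>${e.pubDate}</pubDate>`,
        `  <enclosure url="${e.mp3Url}" length="${e.bytes}" type="audio/mpeg"/>`,
        "</item>",
      ].join("\n");
    }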

Moodle itself makes most of the process quite easy. And because the podcasts are integrated within the broader course management structure, it might be possible to alleviate some of the previously-mentioned issues. In this case, the podcast is a complementary/supplementary component of the complete course. It might help students revise the content, spark discussions, invite reflections about the necessity of note-taking, enable neat montages, etc. Or it might have negative impacts on classroom attendance, send the message that note-taking isn’t important, put too much of the spotlight on my performance (or lack thereof) as a speaker, etc.

Still, I like the fact that I can try this out in the limited context of my own classes.