No Office Export in Keynote/Numbers for iPad?

Sounds like iWork for iPad will export to Word but not to PowerPoint or Excel.

To be honest, I’m getting even more excited about the iPad. Not that we get that much more info about it, but:

For one thing, the Pages for iPad webpage is explicitly stating Word support:

Attach them to an email as Pages files for Mac, Microsoft Word files, or PDF documents.

Maybe this is because Steve Jobs himself promised it to Walt Mossberg?
Thing is, the equivalent pages about Keynote for iPad and about Numbers for iPad aren’t so explicit:

The presentations you create in Keynote on your iPad can be exported as Keynote files for Mac or PDF documents

and…

To share your work, export your spreadsheet as a Numbers file for Mac or PDF document

Not a huge issue, but it seems strange that Apple would have such an “export to Microsoft Office” feature on only one of the three “iWork for iPad” apps. Now, the differences in the way exports are described may not mean that Keynote won’t be able to export to Microsoft PowerPoint or that Numbers won’t be able to export to Microsoft Excel. After all, these texts may have been written at different times. But it does sound like PowerPoint and Excel will be import-only, on the iPad.

Which, again, may not be that big an issue. Maybe iWork.com will work well enough for people’s needs. And some other cloud-based tools do support Keynote. (Though Google Docs and Zoho Show don’t.)

The reason I care is simple: I do share most of my presentation files, either with students (as resources on Moodle) or with the whole wide world (through Slideshare). My desktop outliner of choice, OmniOutliner, exports to Keynote and Microsoft Word. My ideal workflow would be to send, in parallel, presentation files to Keynote for display while on stage and to PowerPoint for sharing. The Word version could also be useful for sharing.

Speaking of presenting “slides” on stage, I’m also hoping that the “iPad Dock Connector to VGA Adapter” will support “presenter mode” at some point (though it doesn’t seem to be the case, right now). I also dream of a way to control an iPad presentation with some kind of remote. In fact, it’s not too hard to imagine it as an iPod touch app (maybe made by Appiction, down in ATX).

To be clear: my “presentation files” aren’t really about presenting so much as they are a way to package and organize items. Yes, I use bullet points. No, I don’t try to make the presentation sexy. My presentation files are acting like cue cards and like whiteboard snapshots. During a class, I use the “slides” as a way to keep track of where I planned the discussion to go. I can skip around, but it’s easier for me to get at least some students focused on what’s important (the actual depth of the discussion) because they know the structure (as “slides”) will be available online. Since I also podcast my lectures, it means that they can go back to all the material.

I also use “slides” to capture things we build in class, such as lists of themes from the readings or potential exam questions.  Again, the “whiteboard” idea. I don’t typically do the same thing during a one-time talk (say, at an unconference). But I still want to share my “slides,” at some point.

So, in all of these situations, I need a file format for “slides.” I really wish there were a format which could work directly out of the browser and could be converted back and forth with other formats (especially Keynote, OpenOffice, and PowerPoint). I don’t need anything fancy. I don’t even care about transitions, animations, or even inserting pictures. But, despite some friends’ attempts at making me use open solutions, I end up having to use presentation files.

Unfortunately, at this point, PowerPoint is the de facto standard for presentation files. So I need it, somehow. Not that I really need PowerPoint itself. But it’s still the only format I can use to share “slides.”

So, if Keynote for iPad doesn’t export directly to PowerPoint, it means that I’ll have to find another way to make my workflow fit.

Ah, well…

I Hate Books

I want books dead. For social reasons.

In a way, this is a followup to a discussion happening on Facebook after something I posted (available publicly on Twitter): “(Alexandre) wishes physical books a quick and painfree death. / aime la connaissance.”

As I expected, the reactions I received were from friends who were aghast: how dare I dismiss physical books? Have I no shame?

Apparently, no, not in this case.

And while I posted it as a quip, it’s the result of a rather long reflection. It’s not that I’m suddenly anti-books. It’s that I stopped buying several of the “pro-book” arguments a while ago.

Sure, sure. Books are the textbook case of a technology which needs no improvement. eBooks can’t replace the experience of doing this or that with a book. But that’s what folkloristics defines as a functional shift. Like woven baskets which became objects of nostalgia, books are being maintained as the model for a very specific attitude toward knowledge construction, based on monolithic authored texts vetted by gatekeepers and sold as access to information.

An important point, here, is that I’m not really thinking about fiction. I used to read two novel-length works a week (collections of short stories, plays…), for a period of about 10 years (ages 13 to 23). So, during that period, I probably read about 1,000 novels, ranging from Proust’s Recherche to Baricco’s Novecento and the five books of Rabelais’s Pantagruel series. This was after having read a fair deal of adolescent and young adult fiction. By today’s standards, I might be considered fairly well-read.

My life has changed a lot, since that time. I didn’t exactly stop reading fiction but my move through graduate school eventually shifted my reading time from fiction to academic texts. And I started writing more and more, online and offline.

At the same time, the Web had also been making me shift from pointed longform texts to copious amounts of shortform text. Much more polyvocal than what Bakhtin himself would have imagined.

(I’ve also been shifting from French to English, during that time. But that’s almost another story. Or it’s another part of the story which can remain in the backdrop without being addressed directly at this point. Ask, if you’re curious.)

The increase in my writing activity is, itself, a shift in the way I think, act, talk… and get feedback. See, the fact that I talk and write a lot, in a variety of circumstances, also means that I get a lot of people to play along. There’s still a risk of groupthink, in specific contexts, but one couldn’t say I keep getting things from the same perspective. In fact, the very Facebook conversation which sparked this blogpost is an example, as the people responding there come from relatively distant backgrounds (though there are similarities) and were not specifically queried about this. Their reactions have a very specific value, to me. Sure, it comes in the form of writing. But it’s giving me even more of something I used to find in writing: insight. The stuff you can’t get through Google.

So, back to books.

I dislike physical books. I wish I didn’t have to use them to read what I want to read. I do have a much easier time with short reading sessions on a computer screen than with what would turn into rather long periods of time holding a book in my hands.

Physical books just don’t do it for me, anymore. The printing press is, like, soooo 1454!

Yes, books had “a good run.” No, nothing replaces them. That’s not the way it works. Movies didn’t replace theater, television didn’t replace radio, automobiles didn’t replace horses, photographs didn’t replace paintings, books didn’t replace orality. In fact, technology doesn’t do much by itself. But social contexts recontextualize tools. If we take technology to be the set of both tools and the knowledge surrounding them, technology mostly changes through social processes: tool repertoires and the corresponding knowledge shift in social contexts, not by their mere existence. Gutenberg’s Bible was a “game-changer” for social as well as technical reasons.

And I do insist on orality. Journalists and other “communication is transmission of information” followers of Shannon & Weaver tend to portray writing as the annihilation of orality. How long after the invention of writing was the Homeric oral tradition transferred to the written medium? Didn’t Albert Lord show the vitality of the epic well into the 20th Century? Isn’t a lot of our knowledge constructed through oral means? Is Internet writing that far, conceptually, from orality? Is literacy a simple on/off switch?

Not only did I maintain an interest in orality through the most book-focused moments of my life but I probably care more about orality now than I ever did. So I simply cannot accept the idea that books have simply replaced the human voice. It doesn’t add up.

My guess is that books won’t simply disappear either. There should still be a use for “coffee table books” and books as gifts or collectables. Records haven’t disappeared completely and CDs still have a few more days in dedicated stores. But, in general, we’re moving away from the “support medium” for “content” and more toward actual knowledge management in socially significant contexts.

In these contexts, books often make little sense. Reading books is passive, while these contexts are (hyper- and inter-)active.

Case in point (and the reason I felt compelled to post that Facebook/Twitter quip)…

I hear about a “just released” French book during a Swiss podcast. Of course, it took a while to write and publish. So, by the time I heard about it, there was no way to participate in the construction of knowledge which led to it. It was already “set in stone” as an “opus.”

Looked for it at various bookstores. One bookstore could eventually order it. It’d take weeks and be quite costly (for something I was mostly curious about, not something I depended on for anything really important).

I eventually found it in the catalogue at BANQ and reserved it. It wasn’t on the shelves yet, so I had to wait until it was: from November to February. I eventually got a message that I had a couple of days to pick up my reservation, but I wasn’t able to go, so it went back on the “just released” shelves. I had the full call number, but books in that section aren’t shelved in call number sequence. I spent several minutes looking back and forth between eight shelves, eventually finding out that there were four more shelves in the “humanities and social sciences” section. The book I was looking for was on one of those shelves.

So, I was able to borrow it.

Phew!

In the metro, I browse through it. Given my academic reflex, I look for the back matter first. No bibliography, no index, a ToC with rather obscure titles (at random: «Taylor toujours à l’œuvre»/”Taylor still at work,” which I’m assuming to be a reference to continuing taylorism). The book is written by two separate dudes but there’s no clear indication of who wrote what. There’s a preface (by somebody else) but no “acknowledgments” section, so it’s hard to see who’s in their network. Footnotes include full URLs to rather broad sites as well as “discussion with <an author’s name>.” The back cover starts off with references to French popular culture (including something about “RER D,” which would be difficult to search). Information about both authors fits in less than 40 words (including a list of publication titles).

The book itself is in fairly large print and weighs almost a pound (422 g, to be exact) for 327 pages (including front and back matter). Each page holds about 50 characters per line and about 30 lines per page. So, about half a million characters, or 3,500 tweets (including spaces). At 5+1 characters per word, about 80,000 words (I have a 7,500-word blogpost, written in an afternoon). At about 250 words per minute, about five hours of reading. This book is listed at 19€ (about 27CAD).
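For what it’s worth, that back-of-envelope arithmetic is easy to reproduce. A quick sketch (page count and price come from the book itself; characters per line, lines per page, word length, and reading speed are rough assumptions):

```python
# Rough book-size estimates. Only the page count is measured;
# everything else is an assumed ballpark figure.
pages = 327
chars = pages * 50 * 30       # ~50 chars/line, ~30 lines/page
tweets = chars / 140          # 140-character tweets
words = chars / 6             # 5 letters + 1 space per word
hours = words / 250 / 60      # at ~250 words per minute

print(f"{chars:,} chars ≈ {tweets:,.0f} tweets ≈ {words:,.0f} words ≈ {hours:.1f} h")
```

Which lands on roughly half a million characters, 80,000 words, and five-and-a-half hours of straight reading.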
There’s no direct way to do any “postprocessing” with the text: no speech synthesis for the visually impaired, no concordance analysis, no machine translation; even a simple search for occurrences of “Sarkozy” is impossible. Not to mention sharing quotes with students or annotating in an easy-to-retrieve fashion (à la Diigo).

Like any book, it’s impossible to read in the dark, and I actually have a hard time finding a spot where I can read with appropriate lighting.

Flipping through the book, I get the impression that there are some valuable things to spark discussions, but also a whole lot of redundancy with frequent discussions on the topic (the Future of Journalism, or #FoJ). My guesstimate is that, out of five hours of reading, I’d get at most 20 pieces of insight that I’d have no way to find elsewhere. Comparable books to which I listened as audiobooks, recently, had much less. In other words, I’d have at most 20 tweets’ worth of things to say from the book. Almost a 200:1 compression.

Direct discussion with the authors could produce much more insight. The radio interviews with these authors already contained a few hints of insight, which predisposed me to look for more. But, so many months later, without the streams of thought which animated me at the time, I end up with something much less valuable than what I wanted to get, back in November.

Bottom line: books aren’t necessarily “broken” as a tool. They just don’t fit my life anymore.

Why I Need an iPad

I’m one of those who feel the iPad is the right tool for the job.

This is mostly meant as a reply to this blogthread. But it’s also more generally about my personal reaction to Apple’s iPad announcement.

Some background.

I’m an ethnographer and a teacher. I read a fair deal, write a lot of notes, and work in a variety of contexts. These days, I tend to spend a good amount of time in cafés and other public places where I like to work without being too isolated. I also commute using public transit, listen to lots of podcasts, and create my own. I’m also very aural.

I’ve used a number of PDAs, over the years, from a Newton MessagePad 130 (1997) to a variety of PalmOS devices (until 2008). In fact, some people readily associated me with PDA use.

As soon as I learnt about the iPod touch, I needed one. As soon as I heard about the SafariPad, I wanted one. I’ve been an intense ‘touch user since the iPhone OS 2.0 release and I’m a happy camper.

(A major reason I never bought an iPhone, apart from price, is that it requires a contract.)

In my experience, the ‘touch is the most appropriate device for all sorts of activities which are either part of another activity (reading during a commute) or simply too short in duration to constitute an actual “computer session.” You don’t “sit down to work at your ‘touch” the way you might sit in front of a laptop or desktop screen. This works great for “looking up stuff” or “checking email.” It also makes a lot of sense during commutes in crowded buses or metros.

In those cases, the iPod touch is almost ideal. Ubiquitous access to Internet would be nice, but that’s not a deal-breaker. Alternative text-input methods would help in some cases, but I do end up being about as fast on my ‘touch as I was with Graffiti on PalmOS.

For other tasks, I have a Mac mini. Sure, it’s limited. But it does the job. In fact, I have no intention of switching to another desktop, and I even have an eMachines collecting dust (it’s too noisy to make a good server).

What I miss, though, is a laptop. I used an iBook G3 for several years and loved it. Later, for a little while, I was able to share a MacBook with somebody else and it was a wonderful experience. I even got to play with the OLPC XO for a few weeks. That one was not so pleasant an experience, but it did give me a taste for netbooks. And it made me think about other types of iPhone-like devices, especially in educational contexts. (As I mentioned, I’m a teacher.)

I’ve been laptop-less for a while, now. And though my ‘touch replaces it in many contexts, there are still times when I’d really need a laptop. And these have to do with what I might call “mobile sessions.”

For instance: liveblogging a conference or meeting. I’ve used my ‘touch for this very purpose on a good number of occasions. But it gets rather uncomfortable, after a while, and it’s not very fast. A laptop is better for this, with a keyboard and a larger form factor. But the iPad will be even better because of lower risks of RSI. A related example: just imagine TweetDeck on iPad.

Possibly my favourite example of a context in which the iPad will be ideal: presentations. Even before learning about the prospect of getting iWork on a tablet, presentations were a context in which I really missed a laptop.

Sure, in most cases, these days, there’s a computer (usually a desktop running XP) hooked to a projector. You just need to download your presentation file from Slideshare, show it from Prezi, or transfer it through USB. No biggie.

But it’s not the extra steps which change everything. It’s the uncertainty. Even if it’s often unfounded, I usually get worried that something might just not work along the way. The slides might not show the same way you see them because something is missing on that computer, or because that computer is simply using a different version of the presentation software. In fact, that software is typically Microsoft PowerPoint which, while convenient, fits much less well in my workflow than Apple Keynote does.

The other big thing about presentations is the “presenter mode,” allowing you to get more content than (or different content from) what the audience sees. In most contexts where I’ve used someone else’s computer to do a presentation, the projector was mirroring the computer’s screen, not using it as a different space. PowerPoint has this convenient “presenter view” but very rarely did I see it as an available option on “the computer in the room.” I wish I could use my ‘touch to drive presentations, which I could do if I installed software on that “computer in the room.” But it’s not something that is likely to happen, in most cases.

A MacBook solves all of these problems, and it’s an obvious use for laptops. But how, then, is the iPad better? Basically because of the interface. Switching slides on a laptop isn’t hard, but it’s more awkward than we realize. Even before watching the demo of Keynote on the iPad, I could simply imagine the actual pleasure of flipping through slides using a touch interface. The fit is “natural.”

I sincerely think that Keynote on the iPad will change a number of things, for me. Including the way I teach.

Then, there’s reading.

Now, I’m not one of those people who just can’t read on a computer screen. In fact, I even grade assignments directly from the screen. But I must admit that online reading hasn’t been ideal, for me. I’ve read full books as PDF files or dedicated formats on PalmOS, but it wasn’t so much fun, in terms of the reading process. And I’ve used my ‘touch to read things through Stanza or ReadItLater. But it doesn’t work so well for longer reading sessions. Even in terms of holding the ‘touch, it’s not so obvious. And, what’s funny, even a laptop isn’t that ideal, for me, as a reading device. In a sense, this is when the keyboard “gets in the way.”

Sure, I could get a Kindle. I’m not a big fan of dedicated devices and, at least on paper, I find the Kindle a bit limited for my needs. Especially in terms of sources. I’d like to be able to use documents in a variety of formats and put them in a reading list, for extended reading sessions. No, not “curled up in bed.” But maybe lying down in a sofa without external lighting. Given my experience with the ‘touch, the iPad is very likely the ideal device for this.

Then, there’s the overall “multi-touch device” thing. People have already been quite creative with the small touchscreen on iPhones and ‘touches; I can just imagine what may be done with a larger screen. Lots has been said about differences in “screen real estate” between laptop and desktop screens. We all know it can make a big difference in terms of what you can display at the same time. In some cases, two screens aren’t even a luxury, for instance when you code and display a page at the same time (LaTeX, CSS…). Certainly, the same qualitative difference applies to multitouch devices. Probably even more so, since the display is also used for input. What Han found missing in the iPhone’s multitouch was the ability to use both hands. With the iPad, Han’s vision is finding its space.

Oh, sure, the iPad is very restricted. For instance, it’s easy to imagine how much more useful it’d be if it did support multitasking with third-party apps. And a front-facing camera is something I was expecting in the first iPhone. It would just make so much sense that a friend seems very disappointed by this lack of videoconferencing potential. But we’re probably talking about predetermined expectations, here. We’re comparing the iPad with something we had in mind.

Then, there’s the issue of the competition. Tablets have been released and some multitouch tablets have recently been announced. What makes the iPad better than these? Well, we could all get into the same OS wars as have been happening with laptops and desktops. In my case, the investment in applications, files, and expertise that I have made in the Mac ecosystem rendered my XP years relatively uncomfortable and made me appreciate returning to the Mac. My iPod touch fits right in that context. Oh, sure, I could use it with a Windows machine, which is in fact what I did for the first several months. But the relationship between the iPhone OS and Mac OS X is such that using devices in those two systems is much more efficient, in terms of my own workflow, than I could get while using XP and iPhone OS. There are some technical dimensions to this, such as the integration between iCal and the iPhone OS Calendar, or even the filesystem. But I’m actually thinking more about the cognitive dimensions of recognizing some of the same interface elements. “Look and feel” isn’t just about shiny and “purty.” It’s about interactions between a human brain, a complex sensorimotor apparatus, and a machine. Things go more quickly when you don’t have to think too much about where some tools are, as you’re working.

So my reasons for wanting an iPad aren’t about being dazzled by a revolutionary device. They are about the right tool for the job.

Homeroasting and Coffee Geekness

I bought the i-Roast 2 homeroaster: I’m one happy (but crazy) coffee geek.

I’m a coffee geek. By which I mean that I have a geeky attitude to coffee. I’m passionate about the crafts and arts of coffee making, I seek coffee-related knowledge wherever I can find it, I can talk about coffee until people’s eyes glaze over (which happens more quickly than I’d guess possible), and I even dream about coffee gadgets. I’m not a typical gadget freak, as far as geek culture goes, but coffee is one area where I may invest in some gadgetry.

Perhaps my most visible acts of coffee geekery came in the form of updates I posted through diverse platforms about my home coffee brewing experiences. Did it from February to July. These posts contained cryptic details about diverse measurements, including water temperature and index of refraction. It probably contributed to people’s awareness of my coffee geek identity, which itself has been the source of fun things like a friend bringing me back coffee from Ethiopia.

But I digress, a bit. This is both about coffee geekness in general and about homeroasting in particular.

See, I bought myself this Hearthware i-Roast 2 dedicated homeroasting device. And I’m dreaming about coffee again.

Been homeroasting since December 2002, when I moved to Moncton, New Brunswick and was lucky enough to get in touch with Terry Montague of Down East Coffee.

Though I had been wishing to homeroast for a while before that and had become an intense coffee-lover fifteen years prior to contacting him, Terry is the one who enabled me to start roasting green coffee beans at home. He procured me a popcorn popper, sourced me some quality green beans, and gave me some advice. And off I went.

Homeroasting is remarkably easy. And it makes a huge difference in one’s appreciation of coffee. People in the coffee industry, especially baristas and professional roasters, tend to talk about the “channel” going from the farmer to the “consumer.” In some ways, homeroasting gets the coffee-lover a few steps closer to the farmer, both by eliminating a few intermediaries in the channel and by making coffee into much less of a commodity. Once you’ve spent some time smelling the fumes emanated by different coffee varietals and looking carefully at individual beans, you can’t help but get a deeper appreciation for the farmer’s and even the picker’s work. When you roast 150g or less at a time, every coffee bean seems much more valuable. Further, as you experiment with different beans and roast profiles, you get to experience coffee in all of its splendour.

A popcorn popper may sound like a crude way to roast coffee. And it might be. Naysayers may be right in their appraisal of poppers as a coffee roasting method. You’re restricted in different ways and it seems impossible to produce exquisite coffee. But having roasted with a popper for seven years, I can say that my poppers gave me some of my most memorable coffee experiences. Including some of the most pleasant ones, like this organic Sumatra from Theta Ridge Coffee that I roasted in my campus apartment at IUSB and brewed using my beloved Brikka.

Over the years, I’ve roasted a large variety of coffee beans. I typically buy a pound each of three or four varietals and experiment with them for a while.

Mostly because I’ve been moving around quite a bit, I’ve been buying green coffee beans from a rather large variety of places. I try to buy them locally, as much as possible (those beans have travelled far enough and I’ve had enough problems with courier companies). But I did participate in a few mail orders or got beans shipped to me for some reason or another. Sourcing green coffee beans has almost been part of my routine in those different places where I’ve been living since 2002: Moncton, Montreal, Fredericton, South Bend, Northampton, Brockton, Cambridge, and Austin. Off the top of my head, I’ve sourced beans from:

  1. Down East
  2. Toi, moi & café
  3. Brûlerie Saint-Denis
  4. Brûlerie des quatre vents
  5. Terra
  6. Theta Ridge
  7. Dean’s Beans
  8. Green Beanery
  9. Cuvée
  10. Fair Bean
  11. Sweet Maria’s
  12. Evergreen Coffee
  13. Mon café vert
  14. Café-Vrac
  15. Roastmasters
  16. Santropol

And probably a few other places, including this one place in Ethiopia where my friend Erin bought some.

So, over the years, I got beans from a rather large array of places and from a wide range of regional varietals.

I rapidly started blending freshly-roasted beans. Typically, I would start a blend by roasting three batches in a row. I would taste some as “single origin” (coffee made from a single bean varietal, usually from the same farm or estate), shortly after roasting. But, typically, I would mix my batches of freshly roasted coffee to produce a main blend. I would then add fresh batches after a few days to fine-tune the blend to satisfy my needs and enhance my “palate” (my ability to pick up different flavours and aromas).

Once the quantity of green beans in a particular bag would fall below an amount I can reasonably roast as a full batch (minimum around 100g), I would put those green beans in a pre-roast blend, typically in a specially-marked ziplock bag. Roasting this blend would usually be a way for me to add some complexity to my roasted blends.

And complexity I got. Lots of diverse flavours and aromas. Different things to “write home about.”

But I was obviously limited in what I could do with my poppers. The only real controls that I had in homeroasting, apart from blending, were bean quantity and roasting time. Ambient temperature was clearly a factor, but not one over which I was able to exercise much control, especially since I frequently ended up roasting outside, so as not to inconvenience people with fumes, noise, and chaff. The few homeroast batches which didn’t work probably failed because of low ambient temperature.

One reason I stuck with poppers for so long was that I had heard that dedicated roasters weren’t that durable. I’ve probably used three or four different hot air popcorn poppers, over the years. Eventually, they just stop working, when you use them for coffee beans. As I’d buy them at garage sales and Salvation Army stores for 3-4$, replacing them didn’t feel like such a financially difficult thing to do, though finding them could occasionally be a challenge. Money was also an issue. Though homeroasting was important for me, I wasn’t ready to pay around 200$ for an entry-level dedicated roaster. I was thinking about saving money for a Behmor 1600, which offers several advantages over other roasters. But I finally gave in and bought my i-Roast as a kind of holiday gift to myself.

One broad reason is that my financial situation has improved since I started a kind of partial professional reorientation (PPR). I have a blogpost in mind about this PPR, and I’ll probably write it soon. But this post isn’t about my PPR.

Although, the series of events which led to my purchase does relate to my PPR, somehow.

See, the beans I (indirectly) got from Roastmasters came from a friend who bought a Behmor to roast cocoa beans. The green coffee beans came with the roaster but my friend didn’t want to roast coffee in his brand new Behmor, to avoid the risk of coffee oils and flavours getting into his chocolate. My friend asked me to roast some of these beans for his housemates (he’s not that intensely into coffee, himself). When I went to drop some homeroasted coffee by the Station C co-working space where he spends some of his time, my friend was discussing a project with Duncan Moore, whom I had met a few times but with whom I had had few interactions. The three of us had what we considered a very fruitful yet very short conversation. Later on, I got to do a small but fun project with Duncan. And I decided to invest that money into coffee.

A homeroaster seemed like the most appropriate investment. The Behmor was still out of reach but the i-Roast seemed like a reasonable purchase. Especially if I could buy it used.

But I was also thinking about buying it new, as long as I could get it quickly. It took me several years to make a decision about this purchase but, once I made it, I wanted something as close to “instant gratification” as possible. In some ways, the i-Roast was my equivalent to Little Mrs Sommers’s “pair of silk stockings.”

At the time, Mon café vert seemed like the only place where I could buy a new i-Roast. I tried several times to reach them, to no avail. As I was in the Mile-End when I decided to make that purchase, I went to Caffè in Gamba, both to use the WiFi signal and to check if, by any chance, they might have started selling roasters. They hadn’t, of course; homeroasting isn’t mainstream enough. But, as I was there, I saw the Hario Ceramic Coffee Mill Skerton, a “hand-cranked” coffee grinder about which I had read some rather positive reviews.

For the past few years, I had been using a Bodum Antigua conical burr electric coffee grinder. This grinder was doing the job, but maybe because of “wear and tear,” it started taking a lot longer to grind a small amount of coffee. The grind took so long, at some points, that the grounds were warm to the touch and it seemed like the grinder’s motor was itself heating.

So I started dreaming about the Baratza Vario, a kind of prosumer electric grinder which seemed like the ideal machine for someone who uses diverse coffee making methods. The Vario is rather expensive and seemed like overkill, for my current coffee setup. But I was lusting over it and, yes, dreaming about it.

One day, maybe, I’ll be able to afford a Vario.

In the meantime, and more reasonably, I had been thinking about “Turkish-style mills.” A friend lent me a box-type manual mill at some point and I did find it produced a nice grind, but it wasn’t that convenient for me, partly because the coffee drops into a small drawer which rapidly gets full. A handmill seemed somehow more convenient and there are some generic models which are sold in different parts of the World, especially in the Arab World. So I got the impression that I might be able to find handmills locally and started looking for them all over the place, enquiring at diverse stores and asking friends who have used those mills in the past. Of course, they can be purchased online. But they end up being relatively expensive and my experience with the box mill wasn’t so positive as to convince me to spend that much money on one.

The Skerton was another story. It was much more convenient than a box-type manual mill. And, at Gamba, it was inexpensive enough for me to purchase it on the spot. I don’t tend to do this very often so I did feel strange about such an impulse purchase. But I certainly don’t regret it.

Especially since it complements my other purchases.

So, back to the i-Roast.

Over the years, I had been looking for the i-Roast and Behmor at most of the obvious sites where one might buy used devices like these: eBay, Craigslist, Kijiji… As a matter of fact, I had seen an i-Roast on one of these, but I was still hesitating. Not exactly sure why, but it probably had to do with the fact that these homeroasters aren’t necessarily that durable and I couldn’t see how old this particular i-Roast was.

I eventually called to find out, after making my decision to get an i-Roast. Turns out it was still under warranty, in great condition, and being sold by a very interesting (and clearly trustworthy) alto singer who happens to sing with a friend of mine who is also a local beer homebrewer. The same day I bought the roaster, I went to the cocoa-roasting friend’s place and saw a Behmor for the first time. And I tasted some really nice homemade chocolate. And met other interesting people, including a couple that I saw again while taking the bus after purchasing the roaster.

The series of coincidences in that whole situation left me with a sense of awe. Not out of some strange superstition or other folk belief. But different things were all neatly packaged in a way that most of my life isn’t. Nothing weird about this. The packaging is easy to explain and mostly comes from my own perception. Still, the effect remains: it all fits.

And the i-Roast 2 itself fits, too.

It’s clearly not the ultimate coffee geek’s ideal roaster. But I get the impression it could become so. In fact, one reason I hesitated to buy the i-Roast 2 is that I was wondering if Hearthware might be coming out with the i-Roast 3, in the not-so-distant future.

I’m guessing that Hearthware might be getting ready to release a new roaster. I’m working from unreliable information, but it’s still an educated guess. So, apparently…

I can just imagine what the i-Roast 3 might be. As I’m wont to do, I have a number of crazy ideas.

One “killer feature” actually relates both to the differences between the i-Roast and i-Roast 2 as well as to the geek factor behind homeroasting: roast profiles as computer files. Yes, I know, it sounds crazy. And, somehow, it’s quite unlikely that Hearthware would add such a feature on an entry-level machine. But I seriously think it’d make the roaster much closer to a roasting geek’s ultimate machine.

For one thing, programming a roast profile on the i-Roast is notoriously awkward. Sure, you get used to it. But it’s clearly suboptimal. And one major improvement of the i-Roast 2 over the original i-Roast is that the original version didn’t maintain profiles if you unplugged it. The next step, in my mind, would be to have some way to transfer a profile from a computer to the roaster, say via a slot for SD cards or even a USB port.
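To make the profile-transfer idea concrete, here’s a sketch of what such a shareable file might look like. The format is entirely my invention (the field names, stage counts, and values are hypothetical), but the structure mirrors what the i-Roast lets you program by hand: a target temperature held for a set duration, stage by stage.

```python
import json

# Hypothetical, invented format for a shareable roast profile.
# Each stage mirrors what the i-Roast lets you program manually:
# a target temperature (°F) held for a set duration (seconds).
profile = {
    "name": "City+ for wet-processed Ethiopian",
    "roaster": "i-Roast 2",
    "stages": [
        {"temp_f": 350, "duration_s": 120},
        {"temp_f": 400, "duration_s": 180},
        {"temp_f": 455, "duration_s": 150},
    ],
    "cooling_s": 240,
}

# Saving it as JSON is all it would take to post it on a forum or
# copy it onto an SD card, if the roaster could read one.
with open("city_plus.iroast.json", "w") as f:
    json.dump(profile, f, indent=2)

total = sum(s["duration_s"] for s in profile["stages"])
print(f"Total roast time: {total // 60} min {total % 60} s")
```

A plain-text format like this would also make profiles easy to diff and discuss, which is half the point of sharing them.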

What this would open isn’t only the convenience of saving profiles, but actually a way to share them with fellow homeroasters. Since a lot in geek culture has to do with sharing information, a neat effect could come out of shareable roast profiles. In fact, when I looked for example roast profiles, I found forum threads, guides, and incredibly elaborate experiments. Eventually, it might be possible to exchange roasting profiles relating to coffee beans from the same shipment and compare roasting. Given the well-known effects of getting a group of people using online tools to share information, this could greatly improve the state of homeroasting and even make it break out of the very small niche in which it currently sits.

Of course, there are many problems with that approach, from things as trivial as voltage differences to bigger issues such as noise levels.

But I’m still dreaming about such things.

In fact, I go a few steps further. A roaster which could somehow connect to a computer might also be used to track data about temperature and voltage. In my own experiments with the i-Roast 2, I’ve been logging temperatures at 15 second intervals along with information about roast profile, quantity of beans, etc. It may sound extreme but it already helped me achieve a result I wanted to achieve. And it’d be precisely the kind of information I would like to share with other homeroasters, eventually building a community of practice.
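That logging habit can be sketched in a few lines, assuming the readings are transcribed by hand (the i-Roast 2 has no data port); the bean and batch details below are invented examples.

```python
import csv
from datetime import date

# A minimal sketch of the kind of roast log I keep: one temperature
# reading every 15 seconds, plus batch metadata. Since the roaster
# has no data connection, readings are transcribed manually.
metadata = {
    "date": date.today().isoformat(),
    "beans": "Roastmasters green sample",  # invented example batch
    "weight_g": 120,
    "profile": "city_plus",
}

readings_f = [248, 305, 341, 367, 388, 404, 419, 432, 443, 451]

with open("roast_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for key, value in metadata.items():  # metadata block at the top
        writer.writerow([f"# {key}", value])
    writer.writerow(["elapsed_s", "temp_f"])
    for i, temp in enumerate(readings_f):
        writer.writerow([i * 15, temp])

# Rate of rise (°F/min), the figure roasting geeks actually watch:
ror = [(b - a) * 4 for a, b in zip(readings_f, readings_f[1:])]
print("Rate of rise:", ror)
```

Logs in a format like this CSV would be trivially shareable and comparable across batches, which is exactly what a community of practice would need.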

Nothing but geekness, of course. Shall the geek inherit the Earth?

Groupthink in Action

Seems like I’m witnessing a clear groupthink phenomenon.

An interesting situation which, I would argue, is representative of Groupthink.

As a brief summary of the situation: a subgroup within a larger group is discussing the possibility of changing the larger group’s structure. In that larger group, similar discussions have been quite frequent, in the past. In effect, the smaller group is moving toward enacting a decision based on perceived consensus as to “the way to go.”

No bad intention on anyone’s part and the situation is far from tragic. But my clear impression is that groupthink is involved. I belong to the larger group but I feel little vested interest in what might happen with it.

An important point about this situation is that the smaller group seems to be acting as if the decision had already been made, after careful consideration. Through the history of the larger group, prior discussions on the same topic have been frequent. Through these discussions, clear consensus has never been reached. At the same time, some options have gained momentum in the recent past, mostly based (in my observation) on accumulated frustration with the status quo and some reflection on the effectiveness of activities done by subgroups within the larger group. Members of that larger group (including participants in the smaller group) are quite weary of rehashing the same issues, and the “rallying cry” within the subgroup has to do with “moving on.” Within the smaller group, prior discussions are described as if they had been enough to explore all the options. Weariness throughout the group seems to create a sense of urgency, even though the group as a whole could hardly be described as being involved in time-critical activities.

Nothing personal about anyone involved and it’s possible that I’m off on this one. Where some of those involved would probably disagree is in terms of the current stage in the decision making process (i.e., they may see themselves as having gone through the process of making the primary decision, the rest is a matter of detail). I actually feel strange talking about this situation because it may seem like I’m doing the group a disservice. The reason I think it isn’t the case is that I have already voiced my concerns about groupthink to those who are involved in the smaller group. The reason I feel the urge to blog about this situation is that, as a social scientist, I take it as my duty to look at issues such as group dynamics. Simply put, I started thinking about it as a kind of “case study.”

Yes, I’m a social science geek. And proud of it, too!

Thing is, I have a hard time not noticing a rather clear groupthink pattern. Especially when I think about a few points in Janis’s description of groupthink.

Antecedent Conditions
  • Insulation of the group
  • High group cohesiveness
  • Directive leadership
  • Lack of norms requiring methodical procedures
  • Homogeneity of members’ social background and ideology
  • High stress from external threats with low hope of a better solution than the one offered by the leader(s)

Symptoms
  • Illusion of invulnerability
  • Unquestioned belief in the inherent morality of the group
  • Collective rationalization of group’s decisions
  • Shared stereotypes of outgroup, particularly opponents
  • Self-censorship; members withhold criticisms
  • Illusion of unanimity (see false consensus effect)
  • Direct pressure on dissenters to conform
  • Self-appointed “mindguards” protect the group from negative information

Decisions Affected
  • Incomplete survey of alternatives
  • Incomplete survey of objectives
  • Failure to examine risks of preferred choice
  • Failure to re-appraise initially rejected alternatives
  • Poor information search
  • Selective bias in processing information at hand (see also confirmation bias)
  • Failure to work out contingency plans

A PDF version, with some key issues highlighted.

Point by point…

Observable

Antecedent Conditions of Groupthink

Insulation of the group

A small subgroup was created based on (relatively informal) prior expression of opinion in favour of some broad changes in the structure of the larger group.

Lack of norms requiring methodical procedures

Methodical procedures about assessing the situation are either put aside or explicitly rejected.
Those methodical procedures which are accepted have to do with implementing the group’s primary decision, not with the decision making process.

Symptoms Indicative of Groupthink

Illusion of unanimity (see false consensus effect)

Agreement is stated as a fact, possibly based on private conversations outside of the small group.

Direct pressure on dissenters to conform

A call to look at alternatives is constructed as a dissenting voice.
Pressure to conform is couched in terms of “moving on.”

Symptoms of Decisions Affected by Groupthink

Incomplete survey of alternatives

Apart from the status quo, no alternative has been discussed.
When one alternative model is proposed, it’s reduced to a “side” in opposition to the assumed consensus.

Incomplete survey of objectives

Broad objectives are assumed to be common and left undiscussed.
Discussion of objectives is pushed back as being irrelevant at this stage.

Failure to examine risks of preferred choice

Comments about possible risks (including the danger of affecting the dynamics of the existing broader group) are left undiscussed or dismissed as “par for the course.”

Failure to re-appraise initially rejected alternatives

Any alternative is conceived as having been tried in the past, with the strong implication that it isn’t worth revisiting.

Poor information search

Information collected concerns ways to make sure that the primary option considered will work.

Failure to work out contingency plans

Comments about the possible failure of the plan, and effects on the wider group are met with “so be it.”

Less Obvious

Antecedent Conditions of Groupthink

High group cohesiveness

The smaller group is highly cohesive but so is the broader group.

Directive leadership

Several members of the smaller group are taking positions of leadership, but there’s no direct coercion from that leadership.

Positions of authority are asserted, in a subtle way, but this authority is somewhat indirect.

Homogeneity of members’ social background and ideology

As with cohesiveness, homogeneity of social background can be used to describe the broader group as well as the smaller one.

High stress from external threats with low hope of a better solution than the one offered by the leader(s)

External “threats” are mostly subtle but there’s a clear notion that the primary option considered may be met with some opposition by a proportion of the larger group.

Symptoms Indicative of Groupthink

Illusion of invulnerability

While “invulnerability” would be an exaggeration, there’s a clear sense that members of the smaller group have a strong position within the larger group.

Unquestioned belief in the inherent morality of the group

Discussions don’t necessarily have a moral undertone, but the smaller group’s goals seem self-evident in the context or, at least, not really worth careful discussion.

Collective rationalization of group’s decisions

Since attempts to discuss the group’s assumed consensus are labelled as coming from a dissenting voice, the group’s primary decision is reified through countering individual points made about this decision.

Shared stereotypes of outgroup, particularly opponents

The smaller group’s primary “outgroup” is in fact the broader group (described in rather simple terms), not a distinct set of opponents.
The assumption is that, within the larger group, positions about the core issue are already set.

Self-censorship; members withhold criticisms

Self-censorship is particularly hard to observe or assess but the group’s dynamics tends to construct criticism as “nitpicking,” making it difficult to share comments.

Self-appointed “mindguards” protect the group from negative information

As with leadership, the process of shielding the smaller group from negative information is mostly organic, not located in a single individual.
Because the smaller group is already set apart from the larger group, protection from external information is built into the system, to an extent.

Symptoms of Decisions Affected by Groupthink

Selective bias in processing information at hand (see also confirmation bias)

Information brought into the discussion is treated as either reinforcing the group’s alleged consensus or taken to be easy to counter.
Examples from cases showing clear similarities are dismissed (“we have no interest in knowing what others have done”) and distant cases are used to demonstrate that the approach is sound (“there are groups in other contexts which work, so we can use the same approach”).

Personal Devices

Personal devices after multitouch smartphones? Some random thoughts.

Still thinking about touch devices, such as the iPod touch and the rumoured “Apple Tablet.”

Thinking out loud. Rambling even more crazily than usual.

Something important about those devices is the need for a real “Personal Digital Assistant.” I put PDAs as a keyword for my previous post because I do use the iPod touch like I was using my PalmOS and even NewtonOS devices. But there’s more to it than that, especially if you think about cloud computing and speech technologies.
I mentioned speech recognition in that previous post. SR tends to be a pipedream of the computing world. Despite all the hopes put into realtime dictation, it still hasn’t taken off in a big way. One reason might be that it’s still somewhat cumbersome to use, in current incarnations. Another reason is that it’s relatively expensive as a standalone product which requires some getting used to. But I get the impression that another set of reasons has to do with the fact that it fits best on a personal device. Partly because it needs to be trained. But also because voice itself is a personal thing.

Cloud computing also takes on a new meaning with a truly personal device. It’s no surprise that there are so many offerings with some sort of cloud computing feature in the App Store. Not only do Apple’s touch devices have limited file storage space, but the notion of accessing your files in the cloud goes well with a personal device.
So, what’s the optimal personal device? I’d say that Apple’s touch devices are getting close to it but that there’s room for improvement.

Some perspective…

Originally, the PC was supposed to be a “personal” computer. But the distinction was mostly with mainframes. PCs may be owned by a given person, but they’re not so tied to that person, especially given the fact that they’re often used in a single context (office or home, say). A given desktop PC can be important in someone’s life, but it’s not always present like a personal device should be. What’s funny is that “personal computers” became somewhat more “personal” with the ‘Net and networking in general. Each computer had a name, etc. But those machines remained somewhat impersonal. In many cases, even when there are multiple profiles on the same machine, it’s not so safe to assume who the current user of the machine is at any given point.

On paper, the laptop could have been that “personal device” I’m thinking about. People may share a desktop computer but they usually don’t share their laptop, unless it’s mostly used like a desktop computer. The laptop being relatively easy to carry, it’s common for people to bring one back and forth between different sites: work, home, café, school… Sounds tautological, as this is what laptops are supposed to be. But the point I’m thinking about is that these are still distinct sites where some sort of desk or table is usually available. People may use laptops on their actual laps, but the form factor is still closer to a portable desktop computer than to the kind of personal device I have in mind.

Then, we can go all the way to “wearable computing.” There’s been some hype about wearable computers but it has yet to really be part of our daily lives. Partly for technical reasons but partly because it may not really be what people need.

The original PDAs (especially those on NewtonOS and PalmOS) were getting closer to what people might need, as personal devices. The term “personal digital assistant” seemed to encapsulate what was needed. But, for several reasons, PDAs have been having a hard time. Maybe there wasn’t a killer app for PDAs, outside of “vertical markets.” Maybe the stylus was the problem. Maybe the screen size and bulk of the device weren’t getting to the exact points where people needed them. I was still using a PalmOS device in mid-2008 and it felt like I was among the last PDA users.
One point was that PDAs had been replaced by “smartphones.” After a certain point, most devices running PalmOS were actually phones. RIM’s BlackBerry succeeded in a certain niche (let’s use the vague term “professionals”) and is even beginning to expand out of it. And devices using other OSes have had their importance. It may not have been the revolution some readers of Pen Computing might have expected, but the smartphone has been a more successful “personal device” than the original PDAs.

It’s easy to broaden our focus from smartphones and think about cellphones in general. If the 3.3B figure can be trusted, cellphones may already be outnumbering desktop and laptop computers by 3:1. And cellphones really are personal. You bring them everywhere; you don’t need any kind of surface to use them; phone communication actually does seem to be a killer app, even after all this time; there are cellphones in just about any price range; cellphone carriers outside of Canada and the US are offering plans which are relatively reasonable; despite some variation, cellphones are rather similar from one manufacturer to the next… In short, cellphones already were personal devices, even before the smartphone category really emerged.

What did smartphones add? Basically, a few PDA/PIM features and some form of Internet access or, at least, some form of email. “Whoa! Impressive!”

Actually, some PIM features were already available on most cellphones and Internet access from a smartphone is in continuity with SMS and data on regular cellphones.

What did Apple’s touch devices add which was so compelling? Maybe not so much, apart from the multitouch interface, a few games, and integration with desktop/laptop computers. Even then, most of these changes were an evolution over the basic smartphone concept. Still, it seems to have worked as a way to open up personal devices to some new dimensions. People now use the iPhone (or some other multitouch smartphone which came out after the iPhone) as a single device to do all sorts of things. Around the World, multitouch smartphones are still much further from being ubiquitous than are cellphones in general. But we could say that these devices have brought the personal device idea to a new phase. At least, one can say that they’re much more exciting than the other personal computing devices.

But what’s next for personal devices?

Any set of buzzphrases. Cloud computing, speech recognition, social media…

These things can all come together, now. The “cloud” is mostly ready and personal devices make cloud computing more interesting because they’re “always-on,” are almost-wearable, have batteries lasting just about long enough, already serve to keep some important personal data, and are usually single-user.

Speech recognition could go well with those voice-enabled personal devices. For one thing, they already have sound input. And, by this time, people are used to seeing others “talk to themselves” as cellphones are so common. Plus, voice recognition is already understood as a kind of security feature. And, despite their popularity, these devices could use a further killer app, especially in terms of text entry and processing. Some of these devices already have voice control and it’s not so much of a stretch to imagine them having what’s needed for continuous speech recognition.

In terms of getting things onto the device, I’m also thinking about such editing features as a universal rich-text editor (à la TinyMCE), predictive text, macros, better access to calendar/contact data, ubiquitous Web history, multiple pasteboards, data detectors, Automator-like processing, etc. All sorts of things which should come from OS-level features.

“Social media” may seem like too broad a category. In many ways, those devices already take part in social networking, user-generated content, and microblogging, to name a few areas of social media. But what about a unified personal profile based on the device instead of the usual authentication method? Yes, it raises all sorts of security issues. But aren’t people rather unconcerned about security in the case of social media? Twitter accounts are being hacked left and right, yet Twitter doesn’t seem to suffer much. And there could be added security features on a personal device which is meant to really integrate social media. Some current personal devices already work well as a way to keep login credentials to multiple sites. The next step, there, would be to integrate all those social media services into the device itself. We may be waiting for OpenSocial, OpenID, OAuth, Facebook Connect, Google Connect, and all sorts of APIs to bring us to an easier “social media workflow.” But a personal device could simplify the “social media workflow” even further, with just a few OS-based tweaks.

Unlike my previous post, I’m not holding my breath for some specific event which will bring us the ultimate personal device. After all, this is just a new version of my ultimate handheld device blogpost. But, this time, I was focusing on what it means for a device to be “personal.” It’s even more of a drafty draft than my blogposts usually are, ever since I decided to really RERO.

So be it.

Speculating on Apple’s Touch Strategy

I want a new touch device.

This is mere speculation on my part, based on some rumours.

I’m quite sure that Apple will come up with a video-enabled iPod touch on September 9, along with iTunes 9 (which should have a few new “social networking” features). This part is pretty clear from most rumour sites.

AppleInsider | Sources: Apple to unveil new iPod lineup on September 9.

Progressively, Apple will be adopting a new approach to marketing its touch devices. Away from the “poor person’s iPhone” and into the “tiny but capable computer” domain. Because the 9/9 event is supposed to be about music, one might guess that there will be a cool new feature or two relating to music. Maybe lyrics display, karaoke mode, or whatever else. Something which will simultaneously be added to the iPhone but would remind people that the iPod touch is part of the iPod family. Apple has already been marketing the iPod touch as a gaming platform, so it’s not a radical shift. But I’d say the strategy is to make Apple’s touch devices increasingly attractive, without cannibalizing sales in the MacBook family.

Now, I really don’t expect Apple to even announce the so-called “Tablet Mac” in September. I’m not even that convinced that the other devices Apple is preparing for expansion of its touch devices lineup will be that close to the “tablet” idea. But it seems rather clear, to me, that Apple should eventually come up with other devices in this category. Many rumours point to the same basic notion, that Apple is getting something together which will have a bigger touchscreen than the iPhone or iPod touch. But it’s hard to tell how this device will fit, in the grand scheme of things.

It’s rather obvious that it won’t be a rebirth of the eMate the same way that the iPod touch wasn’t a rebirth of the MessagePad. But it would make some sense for Apple to target some educational/learning markets, again, with an easy-to-use device. And I’m not just saying this because the rumoured “Tablet Mac” makes me think about the XOXO. Besides, the iPod touch is already being marketed to educational markets through the yearly “Back to school” program which (surprise!) ends on the day before the September press conference.

I’ve been using an iPod touch (1st Generation) for more than a year, now, and I’ve been loving almost every minute of it. Most of the time, I don’t feel the need for a laptop, though I occasionally wish I could buy a cheap one, just for some longer writing sessions in cafés. In fact, a friend recently posted information about some Dell Latitude D600 laptops going for a very low price. That’d be enough for me at this point. Really, my iPod touch suffices for a lot of things.

Sadly, my iPod touch seems to have died, recently, after catching some moisture. If I can’t revive it and if the 2nd Generation iPod touch I bought through Kijiji never materializes, I might end up buying a 3rd Generation iPod touch on September 9, right before I start teaching again. If I can get my hands on a working iPod touch at a good price before that, I may save the money in preparation for an early 2010 release of a new touch device from Apple.

Not that I’m not looking at alternatives. But I’d rather use a device which shares enough with the iPod touch that I could migrate easily, synchronize with iTunes, and keep what I got from the App Store.

There’s a number of things I’d like to get from a new touch device. First among them is a better text entry/input method. Some of the others could be third-party apps and services. For instance, a full-featured sharing app. Or true podcast synchronization with media annotation support (à la Revver or Soundcloud). Or an elaborate, fully-integrated logbook with timestamps, Twitter support, and outlining. Or even a high-quality reference/bibliography manager (think RefWorks/Zotero/Endnote). But getting text into such a device without a hardware keyboard is the main challenge. I keep thinking about all sorts of methods, including MessagEase and Dasher as well as continuous speech recognition (dictation). Apple’s surely thinking about those issues. After all, they have some handwriting recognition systems that they aren’t really putting to any significant use.

Something else which would be quite useful is support for videoconferencing. Before the iPhone came out, I thought Apple may be coming out with iChat Mobile. Though a friend announced the iPhone to me by making reference to this, the position of the camera at the back of the device and the fact that the original iPhone’s camera only supported still pictures (with the official firmware) made this dream die out, for me. But a “Tablet Mac” with an iSight-like camera and some form of iChat would make a lot of sense, as a communication device. Especially since iChat already supports such things as screen-sharing and slides. Besides, if Apple does indeed move in the direction of some social networking features, a touch device with an expanded Address Book could take a whole new dimension through just a few small tweaks.

This last part I’m not so optimistic about. Apple may know that social networking is important, at this point in the game, but it seems to approach it with about the same heart as it approached online services with eWorld, .Mac, and MobileMe. Of course, they have the tools needed to make online services work in a “social networking” context. But it’s possible that their vision is clouded by their corporate culture and some remnants of the NIH problem.

Ah, well…

Sharing Tool Wishlist

My personal (potentially crazy) wishlist for a tool to share online content (links/bookmarks).

The following is an edited version of a wishlist I had been keeping on the side. The main idea is to define what would be, in my mind, the “ultimate social bookmarking system.” Which, obviously, goes way beyond social bookmarking. In a way, I even conceive of it as the ultimate tool for sharing online content. Yes, it’s that ambitious. Will it ever exist? Probably not. Should it exist? I personally think so. But I may be alone in this. Surely, you’ll tell me that I am indeed alone, which is fine. As long as you share your own wishlist items.

The trigger for my posting this is that someone contacted me, asking for what I’d like in a social bookmarking system. I find this person’s move quite remarkable, as a thoughtful strategy. Not only because this person contacted me directly (almost flattering), but because such a request reveals an approach to listening and responding to people’s needs that I find lacking in some software development circles.

This person’s message served as a prompt for my blogging this, but I’ve been meaning to blog this for a while. In fact, my guess is that I created a first version of this wishlist in 2007 after having it on my mind for a while before that. As such, it represents a type of “diachronic” or “longitudinal” view of social bookmarking and the way it works in the broader scheme of social media.

Which also means that I wrote this before I heard about Google Wave. In fact, I’m still unclear about Google Wave and I’ll need to blog about that. Not that I expect Wave to fulfill all the needs I set up for a sharing tool, but I get the impression that Google is finally putting some cards on the table.

The main part of this post is in outline form. I often think through outlines, especially with such a type of notes. I fully realize that it may not be that clear, as a structure, for other people to understand. Some of these bullet points cover a much broader issue than what they look like. But the overall idea might be fairly obvious to grasp, even if it may sound crazy to other people.

I’m posting this to the benefit of anyone who may wish to build the killer app for social media. Of course, it’s just one man’s opinion. But it’s my entitled opinion.

Concepts

What do we share online?

  • “Link”
  • “Page”
  • Identified content
  • Text
    • Narrative
    • Contact information
    • Event description
  • Contact information
  • Event invitation
  • Image
  • Recording
  • Structured content
  • Snippet
  • Access to semi-private content
  • Site’s entry point

Selective sharing

Private
  • Archiving
  • Cloud access
Individually shared
  • “Check this out”
  • Access to address book
  • Password protection
  • Specialization/expertise
  • Friendship
Group shared
  • Shared interests (SIG)
  • Collaboration (task-based)
Shared through network
  • Define identity in network
  • Semi-public
Public
  • Publishing
  • Processed
  • Reading lists

Notetaking

  • Active reading
  • Anchoring text
  • Ad hoc list of bookmarks
  • “Empty URL”
    • Create container/page
    • Personal notes

Todos

  • To read
  • To blog
  • To share
  • To update
  • Projects
    • GTD
    • Contexts
  • Add to calendar (recognized as event)

Outlining/Mindmapping

  • Manage lists of links
  • Prioritize
  • Easily group

Social aspects of sharing

  • Gift economy
  • Personal interaction
  • Trust
  • Hype
  • Value
  • Customized

Cloud computing

  • Webware
  • “Online disk”
  • Without download
  • Touch devices
  • Edit online

Personal streaming

  • Activities through pages
  • Logging
  • Flesh out personal profile

Tagging

  • “Folksonomy”
  • Enables non-hierarchical structure
  • Semantic fields
  • Related tags
  • Can include hierarchy
  • Tagclouds define concept map

Required Features

Crossplatform, crossbrowser

  • Browser-specific tools
  • Bookmarklets
  • Complete access through cloud
Keyboard shortcuts
  • Quick add (to account)
  • Vote
  • Bookmark all tabs (à la Flock)
  • Quick tags

Related pages

Recommended
  • Based on social graph
  • Based on tags
  • Based on content
  • Based on popularity
  • Pointing to this page

Quickly enter links

  • Add in place (while editing)
  • Similar to “spell as you type”
  • Incremental search
  • Add full link (title, URL, text, metadata)

Archiving

  • Prevent linkrot
  • Prepare for post-processing (offline reading, blogging…)
  • Enable bulk processing
  • Maintain version history
  • Internet Archive

Automatic processing

  • Tags
  • Summary
  • Wordcount
  • Reading time
  • Language(s)
  • Page structure analysis
  • Geotagging
  • Vote

Thread following

  • Blog comments
  • Forum comments
  • Trackbacks
  • Pings

Exporting

All
  • Archiving
  • Prepare for import
  • Maintain hierarchy
Selected
  • Tag
  • Category
  • Recently used
  • Shared
  • Site homepage
  • Blogroll
  • Blogs
Formats
  • Other services
  • HTML
  • RSS
  • OPML
  • Widget
Features
  • Comments
  • Tags
  • Statistics
  • Content

Offline processing

  • Browser-based
  • Device-based
  • Offline archiving
  • Include content
  • Synchronization

Microblogging support

  • Laconi.ca/Identi.ca
  • Twitter
  • Ping.fm
  • Jaiku

Fixed/Static URL

  • Prevent linkrot
  • Maintain list for same page
  • Short URLs
  • Automatically generated
  • Expansion on mouseover
  • Statistics

Authentication

  • Use of resources
  • Identify
  • Privacy
  • Unnecessary for basic processing
  • Sticks (no need to login frequently)
  • Access to contacts and social graph
  • Multiple accounts
    • Personal/professional
    • Contexts
    • Group accounts
  • Premium accounts
    • Server space
    • Usage statistics
    • Promotion
  • Support
    • OpenID
      • As group login
    • Google Accounts
    • Facebook Connect
    • OAuth

Integration

  • Web history
  • Notebook
  • Blogging platform
  • Blog editor
  • Microblogging platform
  • Logbook
  • General purpose content editor
  • Toolbar
  • URL shortening
  • Address book
  • Social graph
  • Personal profile
  • Browser
    • Bookmarks
    • History
    • Autocomplete
  • Analytics
  • Email
  • Search
    • Online
    • Offline

Related Tools

  • Diigo
  • WebCitation
  • Ping.fm
  • BackType
  • Facebook share
  • Blog This
  • Link This
  • Share this
  • Digg
  • Plum
  • Spurl
  • CoComments
  • MyBlogLog
  • TwtVite
  • Twistory
  • Windows Live Writer
  • Magnolia
  • StumbleUpon
  • Delicious
  • Google Reader
  • Yahoo Pipes
  • Google Notebook
  • Zoho Notebook
  • Google Browser Sync
  • YouTube
  • Flock
  • Zotero

Relevant Blogposts

A Glocal Network of City-States?

Can we even think about a glocal network of city-states?

This one should probably be in a fictive mode, maybe even in a science-fiction genre. In fact, I’m reconnecting with literature after a long hiatus and now would be an interesting time to start writing fiction. But I’ll still start this as one of those “rambling” blogposts that I tend to build or which tend to come to me.

The reason this should be fiction is that it might sound exceedingly naïve, especially for a social scientist. I tend to “throw ideas out there” and see what sticks to other ideas, but this broad idea about which I’ve been thinking for a while may sound rather crazy, quaint, unsophisticated.

See, while my academic background is rather solid, I don’t have formal training in political science. In fact, I’ve frequently avoided several academic activities related to political science as a discipline. Or to journalism as a discipline. Part of my reluctance to involve myself in academic activities related to political science stems from my reaction to journalism. The connection may not seem obvious to everyone, but I see political science as a discipline in the same frame, and participating in the same worldview, as what I find problematic in journalism.

The simplest way to contextualize this connection is the (“modern”) notion of the “Nation-State.” That context involves me personally. As an anthropologist, as a post-modernist, as a “dual citizen” of two countries, as a folklorist, as a North American with a relatively salient European background, as a “citizen of the World,” and as a member of a community which has switched in part from a “nationalist” movement to other notions of statehood. Simply put: I sincerely think that the notion of a “Nation-State” is outdated and that it will (whether it should or not) give way to other social constructs.

A candidate to replace the conceptual apparatus of the “Nation-State” is both global and local, both post-modern and ancient: a glocal network of city-states (GNoCS).

Yes, I know, it sounds awkward. No, I’m not saying that things would necessarily be better in a post-national world. And I have no idea when this shift from the “nation-states” frame to a network of city-states may happen. But I sincerely think that it could happen. And that it could happen rather quickly.

Not that the shift would be so radical as to obliterate the notion of “nation-state” overnight. In this case, I’m closer to Foucault’s épistémè than to Kuhn’s paradigm. After all, while the “Democratic Nation-State” model is global, former social structures are still present around the Globe and the very notion of a “Nation-State” takes different values in different parts of the world. What I envision has less to do with the linear view of history than with a perspective in which different currents of social change interact with one another over time, evoking shifts in polarity for those who hold a binary perspective on social issues.

I started “working on” this post four months ago. I was just taking some notes in a blog draft, in view of a blogpost, instead of simply keeping general notes, as I tend to do. This post remained on my mind and I’ve been accumulating different threads which can connect to my basic idea. I now realize that this blogpost will be more of a placeholder for further thinking than a “milestone” in my reflection on the topic. My reluctance to publish this blog entry had as much to do with an idiosyncratic sense of prudence as with time-management or any other issue. In other words, I was wary of sticking my neck out. Which might explain why this post is so personal as compared to most of my posts in English.

As uninformed as I may seem about the minutiae of national-era political science, I happen to think that there’s a lot of groupthink involved in the way several people describe political systems. For instance, there’s a strong tendency for certain people, journalists especially, to “count countries.” With relatively few exceptions (especially those which have to do with specific international institutions like the United Nations or the “G20”), the number of countries involved in an event has only superficial significance. Demographic discrepancies between these national entities, not to mention a certain degree of diversity in their social structures or even government apparatus, make “counting countries” quite misleading, especially when the issue has to do with, say, social dynamics or geography. It sounds at times like people have a vague “political map of the World” in their heads and that this image preempts other approaches to global diversity. This may sound like a defensive stance on my part, as I try to position myself as “perhaps crazy but not more than others are.” But the issue goes deeper. In fact, it seems that “countries” are so ingrained in some people’s minds and political borders are so obvious that local and regional issues are perceived as micro-versions of what happens at the “national level.” This image doesn’t seem so strange when we talk about partisan politics but it appears quite inappropriate when we talk about a broad range of other subjects, from epidemiology to climate change, from online communication to geology, from language to religion.

An initial spark in my thinking about several of these issues came during Beverly Stoeltje’s interdisciplinary Ph.D. seminar on nationalism at Indiana University Bloomington, back in 2000. Not only was this seminar edifying on many levels, but it represented a kind of epiphany moment in my reflections not only on nationalism itself (with related issues of patriotism, colonialism, and citizenship) but on a range of social issues and changes.

My initial “realization” was on the significance of the shift from Groulx-style French-Canadian nationalism to what Lévesque called «souveraineté-association» (“sovereignty-association”) and which served as the basis for the Quebec sovereignty movement.

While this all connects to well-known issues in political science and while it may (again) sound exceedingly naïve, I mean it in a very specific way which, I think, many people who discuss Quebec’s political history may rarely visit. As with other shifts about which I think, I don’t envision the one from French-Canadian nationalism (FCN) to Quebec sovereignty movement (QSM) to be radical or complete. But it was significant and broad-reaching.

Regardless of Lévesque’s personal view on nationalism (a relatively recent television series on his life had it that he became anti-nationalist after a visit to concentration camps), the very idea that there may exist a social movement oriented toward sovereignty outside of the nationalist logic seems quite important to me personally. The fact that this movement may only be represented in partisan politics as nationalism complicates the issue and may explain a certain confusion in terms of the range of Quebec’s current social movements. In other words, the fact that anti-nationalists are consistently lumped together with nationalists in the public (and journalistic) eye makes it difficult to discuss post-nationalism in this part of the Globe.

But Quebec’s history is only central to my thinking because I was born in Montreal and grew up through the Quiet Revolution. My reflections on a post-national shift are hopefully broader than historical events in a tiny part of the Globe.

In fact, my initial attempt at drafting this blogpost came after I attended a talk by Satoshi Ikeda entitled The Global Financial Crisis and the End of Neoliberalism. (November 27, 2008, Concordia University, SGW H-1125-12; found thanks to Twistory). My main idea at this point was that part of the solution to global problems was local.

But I was also thinking about The Internet.

Contrary to what technological determinists tend to say, the ‘Net isn’t changing things as much as it is part of a broad set of changes. In other words, the global communication network we now know as the Internet is embedded in historical contexts, not the ultimate cause of History. At the risk of replacing technological determinism with social determinism, one might point out that the ‘Net existed (both technologically and institutionally) long before its use became widespread. Those of us who observed a large influx of people online during the early to mid-1990s might even think that social changes were more significant in making the ‘Net what it is today than any “immanent” feature of the network as it was in, say, 1991.

Still, my thinking about the ‘Net has to do with the post-national shift. The ‘Net won’t cause the shift to new social and political structures. But it’s likely to “play a part” in that shift, to be prominently placed as we move into a post-national reality.

There’s a number of practical and legal issues with a wide range of online activities which make it clear that the ‘Net fits more in a global structure than in an “international” one. Examples I have in mind include issues of copyright, broadcast rights, “national content,” and access to information, not to mention the online setting for some grassroots movements and the notion of “Internet citizenry.” In all of these cases, “Globalization” expands much beyond trade and currency-based economy.

Then, there’s the notion of “glocalization.” Every time I use the term “glocal,” I point out how “ugly” it is. The term hasn’t gained any currency (AFAICT) but I keep thinking that the concept can generate something interesting. What I personally have in mind is a movement away from national structures into both a globally connected world and a more local significance. The whole “Think Global, Act Local” idea (which I mostly encountered through the motto “Think Global, Drink Local”). “Despite” the ‘Net, location still matters. But many people are also global-looking.

All of this is part of the setup for some of my reflections on a GNoCS. A kind of prelude/prologue. While my basic idea is very much a “pie in the sky,” I do have more precise notions about what the future may look like and the conditions in which some social changes might happen. At this point, I realize that these thoughts will be part of future blogposts, including some which might be closer to science-fiction than to this type of semi- (or pseudo-) scholarly rambling.

But I might still flesh out a few notes.

Demographically, cities may matter more now than ever as the majority of the Globe’s population is urban. At least, the continued urbanization trend may fit well with a city-focused post-national model.

Some metropolitan areas have become so large as to connect with one another, constituting a kind of urban continuum. Contrary to boundaries between “nation-states,” divisions between cities can be quite blurry. In fact, the same location can be connected to dispersed centres of activity and people living in the same place can participate in more than one local sphere. Rotterdam-Amsterdam, Tokyo-Kyoto, Boston-NYC…

Somewhat counterintuitively, urban areas tend to act as sources of solutions to problems in the natural environment. For instance, some mayors have taken a lead on environmental initiatives, not waiting for their national governments. And such issues as public transportation represent core competencies for municipal governments.

While transborder political entities like the European Union (EU), the African Union (AU), and the North American Free-Trade Agreement (NAFTA) are enmeshed in the national logic, they fit well with notions of globalized decentralization. As the mayor of a small Swiss town was saying on the occasion of Switzerland’s official 700th anniversary, we can think about «l’Europe des régions» (“Europe of regions”), beyond national borders.

Speaking of Switzerland, the confederacy/confederation model fits rather well with a network structure, perhaps more than with the idea of a “nation-state.” It also seems to go well with some forms of participatory democracy (as opposed to representative democracy). Not that Switzerland or any other confederation/confederacy works as a participatory democracy. But these notions can help situate this GNoCS.

While relatively rare and unimportant “on the World Stage,” micro-states and micro-nations represent interesting cases in view of post-nationalist entities. For one thing, they may help dispel the belief that any political structure apart from the “nation-state” is a “reversion” to feudalism or even (Greek) Antiquity. The very existence of those entities which are “the exceptions to the rule” makes it possible to “think outside of the national box.”

Demographically at the opposite end of the spectrum from microstates and micronations, the notion of a China-India union (or even a collaboration between China, India, Brazil, and Russia) may sound crazy in the current state of national politics but it would go well with a restructuring of the Globe, especially if this “New World Order” goes beyond currency-based trade.

Speaking of currency, the notion of the International Monetary Fund having its own currency is quite striking as a sign of a major shift from the “nation-state” logic. Of course, the IMF is embedded in “national” structures, but it can shift the focus away from “individual countries.”

The very notion of “democracy” has been on many lips, over the years. Now may be the time to pay more than lipservice to a notion of “Global Democracy,” which would transcend national boundaries (and give equal rights to all people across the Globe). Chances are that representative democracy may still dominate but a network structure connecting a large number of localized entities can also fit in other systems including participatory democracy, consensus culture, republicanism, and even the models of relatively egalitarian systems that some cultural anthropologists have been constructing over the years.

I still have all sorts of notes about examples and issues related to this notion of a GNoCS. But that will do for now.

Social Networks and Microblogging

Event-based microblogging and the social dimensions of online social networks.

Microblogging (Laconica, Twitter, etc.) is still a hot topic. For instance, during the past few episodes of This Week in Tech, comments were made about the preponderance of Twitter as a discussion theme: microblogging is so prominent on that show that some people complain that there’s too much talk about Twitter. Given the centrality of Leo Laporte’s podcast in geek culture (among Anglos, at least), such comments are significant.

The context for the latest comments about TWiT coverage of Twitter had to do with Twitter’s financials: during this financial crisis, Twitter is given funding without even asking for it. While it may seem surprising at first, given the fact that Twitter hasn’t publicized a business plan and doesn’t appear to be profitable at this time, such funding says a lot about the perceived potential of microblogging.

Along with social networking, microblogging is even discussed in mainstream media. For instance, Médialogues (a media critique on Swiss national radio) recently had a segment about both Facebook and Twitter. Just yesterday, Comedy Central’s The Daily Show with Jon Stewart made fun of compulsive twittering and mainstream media coverage of Twitter (original, Canadian access).

Clearly, microblogging is getting some mindshare.

What the future holds for microblogging is clearly uncertain. Anything can happen. My guess is that microblogging will remain important for a while (at least a few years) but that it will transform itself rather radically. Chances are that other platforms will have microblogging features (something Facebook can do with status updates and something Automattic has been trying to do with some WordPress themes). In these troubled times, Montreal startup Identi.ca received some funding to continue developing its open microblogging platform.  Jaiku, bought by Google last year, is going open source, which may be good news for microblogging in general. Twitter itself might maintain its “marketshare” or other players may take over. There’s already a large number of third-party tools and services making use of Twitter, from Mahalo Answers to Remember the Milk, Twistory to TweetDeck.

Together, these all point to the current importance of microblogging and the potential for further development in that sphere. None of this means that microblogging is “The Next Big Thing.” But it’s reasonable to expect that microblogging will continue to grow in use.

(For those who are trying to grok microblogging, Common Craft’s Twitter in Plain English video is among the best-known descriptions of Twitter and it seems like an efficient way to “get the idea.”)

One thing which is rarely mentioned about microblogging is the prominent social structure supporting it. Like “Social Networking Systems” (LinkedIn, Facebook, Ning, MySpace…), microblogging makes it possible for people to “connect” to one another (as contacts/acquaintances/friends). Like blogs, microblogging platforms make it possible to link to somebody else’s material and get notifications for some of these links (a bit like pings and trackbacks). Like blogrolls, microblogging systems allow for lists of “favourite authors.” Unlike Social Networking Systems but similar to blogrolls, microblogging systems allow for asymmetrical relations, unreciprocated links: if I like somebody’s microblogging updates, I can subscribe to those (by “following” that person) and publicly show my appreciation of that person’s work, regardless of whether or not this microblogger likes my own updates.

There’s something strangely powerful there because it taps the power of social networks while avoiding tricky issues of reciprocity, “confidentiality,” and “intimacy.”

From the end user’s perspective, microblogging contacts may be easier to establish than contacts through Facebook or Orkut. From a social science perspective, microblogging links seem to approximate some of the fluidity found in social networks, without adding much complexity in the description of the relationships. Subscribing to someone’s updates gives me the role of “follower” with regards to that person. Conversely, those I follow receive the role of “following” (“followee” would seem logical, given the common “-er”/”-ee” pattern). The following and follower roles are complementary but each is sufficient by itself as a useful social link.

Typically, a microblogging system like Twitter or Identi.ca qualifies two-way connections as “friendship” while one-way connections could be labelled as “fandom” (if Andrew follows Betty’s updates but Betty doesn’t follow Andrew’s, Andrew is perceived as one of Betty’s “fans”). Profiles on microblogging systems are relatively simple and public, allowing for low-involvement online “presence.” As long as updates are kept public, anybody can connect to anybody else without even needing an introduction. In fact, because microblogging systems send notifications to users when they get new followers (through email and/or SMS), subscribing to someone’s updates is often akin to introducing yourself to that person.
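The friendship/fandom distinction boils down to whether a directed “follows” edge is reciprocated. A minimal sketch in Python (a toy model for illustration, not any real service’s API):

```python
# Each contact is a directed edge; "friendship" vs. "fandom" is simply
# a property of whether the reverse edge exists.
follows = {
    ("Andrew", "Betty"),  # Andrew subscribes to Betty's updates
    ("Betty", "Carol"),
    ("Carol", "Betty"),
}

def relationship(a, b, edges):
    """Classify the link from a to b, as microblogging systems do."""
    forward = (a, b) in edges
    backward = (b, a) in edges
    if forward and backward:
        return "friendship"   # two-way connection
    if forward:
        return "fandom"       # a is one of b's "fans"
    return "none"

print(relationship("Andrew", "Betty", follows))  # fandom
print(relationship("Betty", "Carol", follows))   # friendship
```

Note that no confirmation step appears anywhere: either direction of the edge can be added unilaterally, which is exactly what sets this model apart from SixDegrees-style reciprocal connections.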

Reciprocating is the object of relatively intense social pressure. A microblogger whose follower:following ratio is far from 1:1 may be regarded as either a snob (follower:following much higher than 1:1) or as something of a microblogging failure (follower:following much lower than 1:1). As in any social context, perceived snobbery may be associated with sophistication but it also carries opprobrium. Perry Belcher made a video about what he calls “Twitter Snobs” and some French bloggers have elaborated on that concept. (Some are now claiming their right to be Twitter Snobs.) Low follower:following ratios can result from breach of etiquette (for instance, ostentatious self-promotion carried beyond the accepted limit) or even non-human status (many microblogging accounts are associated with “bots” producing automated content).
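As a back-of-the-envelope illustration of that ratio reading, here is a toy classifier; the 2:1 thresholds are invented for this sketch, since actual perceptions of snobbery are far fuzzier:

```python
def ratio_reading(followers, following, slack=2.0):
    """Toy classification of a microblogger by follower:following ratio.
    The `slack` threshold is arbitrary, chosen only for the example."""
    if following == 0:
        return "snob"              # follows no one at all
    ratio = followers / following
    if ratio > slack:
        return "snob"              # far more followers than followed
    if ratio < 1 / slack:
        return "failure"           # follows many, followed by few
    return "reciprocal"            # roughly 1:1

print(ratio_reading(5000, 100))    # snob
print(ratio_reading(40, 800))      # failure
print(ratio_reading(300, 280))     # reciprocal
```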

The result of the pressure for reciprocation is that contacts are reciprocated regardless of personal relations. Some users even set up ways to automatically follow everyone who follows them. Despite being tricky, these methods sidestep the personal connection issue. Contrary to Social Networking Systems (and despite the term “friend” used for reciprocated contacts), following someone on a microblogging service implies little in terms of friendship.

One reason I personally find this fascinating is that specifying personal connections has been an important part of the development of social networks online. For instance, the long-defunct SixDegrees.com (one of the earliest Social Networking Systems to appear online) required users to specify the precise nature of their relationship to the users with whom they were connected. Details escape me but I distinctly remember that acquaintances, colleagues, and friends were distinguished. If I remember correctly, only one such personal connection was allowed for any pair of users and this connection had to be confirmed before the two users were linked through the system. Facebook’s method of accounting for personal connections is somewhat more sophisticated despite the fact that all contacts are labelled as “friends,” regardless of the nature of the connection. The uniform use of the term “friend” has been decried by many public commentators on Facebook (including in the United States, where “friend” is often applied to any person with whom one is simply on friendly terms).

In this context, the flexibility with which microblogging contacts are made merits consideration: by allowing unidirectional contacts, microblogging platforms may have solved a tricky social network problem. And while the strength of the connection between two microbloggers is left unacknowledged, there are several methods to assess it (for instance through replies and republished updates).

Social contacts are the very basis of social media. In this case, microblogging represents a step towards both simplified and complexified social contacts.

Which leads me to the theme which prompted me to start this blogpost: event-based microblogging.

I posted the following blog entry (in French) about event-based microblogging, back in November.

Microblogue d’événement (“Event Microblog”)

I haven’t received any direct feedback on it and the topic seems to have had little echo in the social media sphere.

During the last PodMtl meeting on February 18, I tried to throw my event-based microblogging idea into the ring. This generated a rather lengthy discussion between a friend and myself. (Because I don’t want to put words in the mouth of this friend, who happens to be relatively high-profile, I won’t mention this friend’s name.) This friend voiced several objections to my main idea and I got to think about this basic notion a bit further. At the risk of sounding exceedingly opinionated, I must say that my friend’s objections actually strengthened my conviction that my “event microblog” idea makes a lot of sense.

The basic idea is quite simple: microblogging instances tied to specific events. There are technical issues in terms of hosting and such but I’m mostly thinking about associating microblogs and events.

What I had in mind during the PodMtl discussion has to do with grouping features, which are often requested by Twitter users (including by Perry Belcher who called out Twitter Snobs). And while I do insist on events as a basis for those instances (like groups), some of the same logic applies to specific interests. However, given the time-sensitivity of microblogging, I still think that events are more significant in this context than interests, however defined.

In the PodMtl discussion, I frequently referred to BarCamp-like events (in part because my friend and interlocutor had participated in a number of such events). The same concept applies to any event, including one which is just unfolding (say, the assassination of Guinea-Bissau’s president or the bombings in Mumbai).

Microblogging users are expected to think about “hashtags,” those textual labels preceded with the ‘#’ symbol which are meant to categorize microblogging updates. But hashtags are problematic on several levels.

  • They require preliminary agreement among multiple microbloggers, a tricky proposition in any social media. “Let’s use #Bissau09. Everybody agrees with that?” It can get ugly and, even if it doesn’t, the process is awkward (especially for new users).
  • Even if agreement has been reached, there might be discrepancies in the way hashtags are typed. “Was it #TwestivalMtl or #TwestivalMontreal, I forgot.”
  • In terms of language economy, it’s unsurprising that the same hashtag would be used for different things. Is “#pcmtl” about Podcamp Montreal, about personal computers in Montreal, about PCM Transcoding Library…?
  • Hashtags are frequently misunderstood by many microbloggers. Just this week, a tweep of mine (a “peep” on Twitter) asked about them after having been on Twitter for months.
  • While there are multiple ways to track hashtags (including through SMS, in some regions), there is no way to further specify the tracked updates (for instance, by user).
  • The distinction between a hashtag and a keyword is too subtle to be really useful. Twitter Search, for instance, lumps the two together.
  • Hashtags take time to type. Even if microbloggers aren’t necessarily typing frantically, the time taken to type all those hashtags seems counterproductive and may even distract microbloggers.
  • Repetitively typing the same string is a very specific kind of task which seems to go against the microblogging ethos, if not the cognitive processes associated with microblogging.
  • The number of characters in a hashtag decreases the amount of text in every update. When all you have is 140 characters at a time, the thirteen characters in “#TwestivalMtl” constitute almost 10% of your update.
  • If the same hashtag is used by a large number of people, the visual effect can be that this hashtag is actually dominating the microblogging stream. Since there currently isn’t a way to ignore updates containing a certain hashtag, this effect may even discourage people from using a microblogging service.
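The character-count objection above is easy to quantify. A quick sketch of the overhead a repeated hashtag imposes on a 140-character update (counting the space before the tag):

```python
# Overhead of repeating a hashtag in every 140-character update.
LIMIT = 140

def hashtag_overhead(tag):
    """Return (characters used, share of a full update) for a hashtag."""
    cost = len(tag) + 1              # the tag plus the space before it
    return cost, cost / LIMIT

cost, share = hashtag_overhead("#TwestivalMtl")
print(f"{cost} characters, {share:.0%} of every update")
# → 14 characters, 10% of every update
```

Repeated across dozens of updates during an event, that tenth of every message is space the event-specific instance would give back to its participants.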

There are multiple solutions to these issues, of course. Some of them are surely discussed among developers of microblogging systems. And my notion of event-specific microblogs isn’t geared toward solving these issues. But I do think separate instances make more sense than hashtags, especially in terms of specific events.

My friend’s objections to my event microblogging idea had something to do with visibility. It seems that this friend wants all updates to be visible, regardless of the context. While I don’t disagree with this, I would claim that it would still be useful to “opt out” of certain discussions when people we follow are involved. If I know that Sean is participating in a PHP conference and that most of his updates will be about PHP for a period of time, I would enjoy the possibility of hiding PHP-related updates for a specific period of time. The reason I talk about this specific case is simple: a friend of mine has manifested some frustration about the large number of updates made by participants in Podcamp Montreal (myself included). Partly in reaction to this, he stopped following me on Twitter and only resumed following me after Podcamp Montreal had ended. In this case, my friend could have hidden Podcamp Montreal updates and still have received other updates from the same microbloggers.

To a certain extent, event-specific instances are a bit similar to “rooms” in MMORPGs and other forms of real-time many-to-many text-based communication such as the nostalgia-inducing Internet Relay Chat. Despite Dave Winer’s strong claim to the contrary (and attempt at defining microblogging away from IRC), a microblogging instance could, in fact, act as a de facto chatroom when such a structure is needed, taking advantage of the work done in microblogging over the past year (which seems to have advanced more rapidly than work on chatrooms has during the past fifteen years). Instead of setting up an IRC channel, a Web-based chatroom, or even a session on MSN Messenger, users could use their microblogging platform of choice and either decide to follow all updates related to a given event or simply not “opt out” of following those updates (depending on their preferences). Updates related to multiple events are visible simultaneously (which isn’t really the case with IRC or chatrooms) and there could be ways to make event-specific updates more prominent. In fact, there would be easy ways to keep real-time statistics of those updates and get a bird’s eye view of those conversations.

And there’s a point about event-specific microblogging which is likely to both displease “alpha geeks” and convince corporate users: updates about some events could be “protected,” in the sense that they would not appear in the public stream in real time. The simplest case for this could be a company-wide meeting during which a backchannel is allowed and even expected “within the walls” of the event. The “nothing should leave this room” attitude seems contradictory to social media in general, but many cases can be made for “confidential microblogging.” Microblogged conversations can easily be archived and these archives could be made public at a later date. Event-specific microblogging allows for some control over the “permeability” of the boundaries surrounding the event. “But why would people use microblogging instead of simply talking to one another?,” you ask. Several quick answers: participants aren’t all in the same room, vocal communication is mostly single-channel, large groups of people are unlikely to communicate efficiently through oral means only, several things are more efficiently done through writing, and written updates are easier to track and archive…
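One way to picture such “confidential microblogging” is an embargo rule: event updates stay within the event’s instance in real time and only join the public archive after a release date. A hypothetical sketch (the field names here are invented for illustration, not drawn from any existing platform):

```python
def publicly_archived(updates, embargoes, now):
    """Return the updates visible in the public stream at time `now`.

    `embargoes` maps an event tag to the moment its archive becomes
    public; updates without an event tag are public right away.
    Event participants would see everything inside the instance;
    this function models only the outside, public view.
    """
    public = []
    for update in updates:
        release = embargoes.get(update.get("event"))
        if release is None or now >= release:
            public.append(update)
    return public
```

Before the release date, the event’s updates simply never reach the public stream; after it, the archive appears in full, which matches the “archived now, published later” scenario described above.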

There are many other things I’d like to say about event-based microblogging but this post is already long. There’s one thing I want to explain, which connects back to the social network dimension of microblogging.

Events can be simplistically conceived as social contexts which bring people together. (Yes, duh!) Participants in a given event constitute a “community of experience,” regardless of the personal connections between them. They may be strangers, enemies, relatives, acquaintances, friends, etc. But they all share something. “Participation,” in this case, can be relatively passive, and the difference between key participants (say, volunteers and lecturers in a conference) and attendees is relatively moot, at a certain level of analysis. The key, here, is the set of connections between people at the event.

These connections are a very powerful component of social networks. We typically meet people through “events,” albeit informal ones. Some events are explicitly meant to connect people who have something in common. In some circles, “networking” refers to something like this. The temporal dimension of social connections is an important one. By analogy to philosophy of language, the “first meeting” (and the set of “first impressions”) constitute the “baptism” of the personal (or social) connection. In social media especially, the nature of social connections tends to be monovalent enough that this “baptism event” gains special significance.

The online construction of social networks relies on a finite number of dimensions, including personal characteristics described in a profile, indirect connections (FOAF), shared interests, textual content, geographical location, and participation in certain activities. Depending on a variety of personal factors, people may be quite inclusive or rather exclusive, based on those dimensions. “I follow back everyone who lives in Austin” or “Only people I have met in person can belong to my inner circle.” The sophistication with which online personal connections are negotiated, along such dimensions, is a thing of beauty. In view of this sophistication, tools used in social media seem relatively crude and underdeveloped.

Going back to the (un)conference concept, the usefulness of having access to a list of all participants in a given event seems quite obvious. In an open event like BarCamp, it could greatly facilitate the event’s logistics. In a closed event with paid access, it could be linked to registration (despite geek resistance, closed events serve a purpose; one could even imagine events where attendance is free but the microblogging backchannel incurs a cost). In some events, everybody would be visible to everybody else. In others, there could be a sort of access control list (ACL) for diverse types of participants. In some cases, people could be allowed to “lurk” without being seen, while in others radical transparency could be enforced. For public events with all participants visible, lists of participants could be archived and used for several purposes (such as assessing which sessions in a conference are more popular or “tracking” event regulars).
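The access-control idea can be sketched as a small lookup: each event assigns participants a role, and a per-event policy decides who appears on the participant list. The role names below are purely illustrative:

```python
# Roles whose holders appear on the participant list by default;
# "lurker" is deliberately absent. All names here are hypothetical.
VISIBLE_ROLES = {"organizer", "speaker", "attendee"}

def participant_list(event):
    """Return the participant names visible to other participants.

    `event` is a dict with a "transparent" flag and a "participants"
    mapping of name -> role. Radical transparency shows everyone,
    lurkers included; otherwise lurkers stay hidden.
    """
    if event.get("transparent"):
        return sorted(event["participants"])
    return sorted(name for name, role in event["participants"].items()
                  if role in VISIBLE_ROLES)
```

A real platform would need finer distinctions (who may see whom, not just a single public list), but even this crude version captures the difference between a “lurking allowed” event and a radically transparent one.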

One reason I keep thinking about event-specific microblogging is that I occasionally use microblogging like others use business cards. In a geek crowd, I may ask for someone’s Twitter username in order to establish a connection with that person. Typically, I will start following that person on Twitter and find opportunities to communicate with that person later on. Given the possibility for one-way relationships, it establishes a social connection without requiring personal involvement. In fact, that person may easily ignore me without the danger of a face threat.

If there were event-specific instances from microblogging platforms, we could manage connections and profiles in a more sophisticated way. For instance, someone could use a barebones profile for contacts made during an impersonal event and a full-fledged profile for contacts made during a more “intimate” event. After noticing a friend using an event-specific business card with an event-specific email address, I got to think that this event microblogging idea might serve as a way to fill a social need.

 

More than most of my other blogposts, I expect comments on this one. Objections are obviously welcome, especially if they’re made thoughtfully (as my PodMtl friend made his). Suggestions would be especially useful. Or even questions about diverse points that I haven’t addressed (several of which I can already think of).

So…

 

What do you think of this idea of event-based microblogging? Would you use a microblogging instance linked to an event, say at an unconference? Can you think of fun features an event-based microblogging instance could have? If you think about similar ideas you’ve seen proposed online, care to share some links?

 

Thanks in advance!

My Year in Social Media

In some ways, this post is a belated follow-up to my last blogpost about some of my blog statistics:

Almost 30k « Disparate.

In the two years since I published that post, I’ve received over 100 000 visits on this blog and I’ve diversified my social media activities.

Altogether, 2008 has been an important year, for me, in terms of social media. I began the year in Austin, TX and moved back to Quebec in late April. Many things have happened in my personal life and several of them have been tied to my social media activities.

The most important part of my social media life, through 2008 as through any year, is the contact I have with diverse people. I’ve met a rather large number of people in 2008 and some of these people have become quite important in my life. In fact, there are people I have met in 2008 whose impact on my life makes it feel as though we have been friends for quite a while. Many of these contacts have happened through social media or, at least, they have been mediated online. As a “people person,” a social butterfly, a humanist, and a social scientist, I care more about these people I’ve met than about the tools I’ve used.

Obviously, most of the contacts I’ve had through the year were with people I already knew. And my relationship with many of these people has changed quite significantly through the year. As is obvious for anyone who knows me, 2008 has been an important year in my personal life. A period of transition. My guess is that 2009 will be even more important, personally.

But this post is about my social media activities. Especially about (micro)blogging and about social networking, in my case. I also did a couple of things in terms of podcasting and online video, but my main activities online tend to be textual. This might change a bit in 2009, but probably not much. I expect 2009 to be an “incremental evolution” in terms of my social media activities. In fact, I mostly want to intensify my involvement in social media spheres, in continuity with what I’ve been doing in 2008.

So it’s the perfect occasion to think back about 2008.

Perhaps my main highlight of 2008 in terms of social media is Twitter. You can say I’m a late adopter to Twitter. I’ve known about it since it came out and I had probably joined a while before, but I really started using it in preparation for SXSWi and BarCampAustin, in early March of this year. As I wanted to integrate myself into Austin’s geek scene and Twitter clearly had some importance in that scene, I thought I’d “play along.” Also, I didn’t have a badge for SXSWi but I knew I could learn about off-festival events through Twitter. And Twitter has become rather important, for me.

For one thing, it allows me to make a distinction between actual blogposts and short thoughts. I’ve probably been posting fewer blog entries since I became active on Twitter and my blogposts are probably longer, on average, than they were before. In a way, I feel it enhances my blogging experience.

Twitter also allows me to “take notes in public,” a practice I find surprisingly useful. For instance, when I go to some kind of presentation (academic or otherwise) I use Twitter to record my thoughts on both the event and the content. This practice is my version of “liveblogging” and I enjoy it. On several occasions, these liveblogging sessions have been rather helpful. Some “tweeps” (Twitter+peeps) dislike this kind of liveblogging practice and claim that “Twitter isn’t meant for this,” but I’ve had more positive experiences through liveblogging on Twitter than negative ones.

The device which makes all of this liveblogging possible, for me, is the iPod touch I received from a friend in June of this year. It has had important implications for my online life and, to a certain extent, the ‘touch has become my primary computer. The iTunes App Store, which opened its doors in July, has changed the game for me as I was able to get a number of dedicated applications, some of which I use several times a day. I’ve blogged about several things related to the iPod touch and the whole process has changed my perspective on social media in general. Of course, an iPhone would be an even more useful tool for me: SMS, GPS, camera, and ubiquitous Internet are all useful features in connection to social media. But, for now, the iPod touch does the trick. Especially through Twitter and Facebook.

One tool I started using quite frequently through the year is Ping.fm. I use it to post to: Twitter, Identi.ca, Facebook, LinkedIn, Brightkite, Jaiku, FriendFeed, Blogger, and WordPress.com (on another blog). I receive the most feedback on Facebook and Twitter but I occasionally get feedback through the other services (including through Pownce, which was recently sold). One thing I notice through this cross-posting practice is that, on these different services, the same activity has a range of implications. For instance, while I’m mostly active on Twitter, I actually get more out of Facebook postings (status updates, posted items, etc.). And reactions on different services tend to be rather different, as the relationships I have with people who provide that feedback tend to range from indirect acquaintance to “best friend forever.” Given my social science background, I find these differences quite interesting to think about.

One thing I’ve noticed on Twitter is that my “ranking among tweeps” has increased very significantly. On Twinfluence, my rank has gone as high as the 86th percentile (though it recently went down to the 79th percentile) while, on Twitter Grader, my “Twitter grade” is now at a rather unbelievable 98.1%. I don’t tend to care much about “measures of influence” but I find these ratings quite interesting. One reason is that they rely on relatively sophisticated concepts from social sciences. Another reason is that I’m intrigued by what causes increases in my ranking on those services. In this case, I think the measures give me way too much credit at this point but I also think that my “influence” is found outside of Twitter.

One “sphere of influence” which remained important for me through 2008 is Facebook. While Facebook had a more central role in my life through 2007, it now represents a stable part of my social media involvement. One thing which tends to happen is that first contacts happen through Twitter (I often use it as the equivalent of a business card during events) and Facebook represents a second step in the relationship. In a way, this distinction foregrounds the obvious concept of “intimacy” in social media. Twitter is public, ties are weak. Facebook is intimate, ties are stronger. On the other hand, there seems to be much more clustering among my tweeps than among my Facebook contacts, in part because my connection to local geek scenes in Austin and Montreal happens primarily through Twitter.

Through Facebook I was able to organize a fun little brunch with a few friends from elementary school. Though this brunch may not have been the most important event of 2008, for me, I’ve learnt a lot about the power of social media through contacting these friends, meeting them, and thinking about the whole affair.

In a way, Twitter and Facebook have helped me expand my social media activities in diverse directions. But most of the important events in my social media life in 2008 have been happening offline. Several of these events were unconferences and informal events happening around conferences.

My two favourite events of the year, in terms of social media, were BarCampAustin and PodCamp Montreal. Participating in (and observing) both events has had some rather profound implications in my social media life. These two unconferences were somewhat different but both were probably as useful, to me. One regret I have is that it’s unlikely that I’ll be able to attend BarCampAustinIV now that I’ve left Austin.

Other events have happened throughout 2008 which I find important in terms of social media. These include regular meetings like Yulblog, Yulbiz, and PodMtl. There are many other events which aren’t necessarily tied to social media but that I find interesting from a social media perspective. The recent Infopresse360 conference on innovation (with Malcolm Gladwell as keynote speaker) and a rather large number of informal meetups with people I’ve known through social media would qualify.

Despite the diversification of my social media life through 2008, blogging remains my most important social media activity. I now consider myself a full-fledged blogger and I think that my blog is representative of something about me.

Simply put, I’m proud to be a blogger. 

In 2008, a few things have happened through my blog which, I think, are rather significant. One is that someone who found me through Google contacted me directly about a contract in private-sector ethnography. As I’m currently going through professional reorientation, I take this contract to be rather significant. It’s actually possible that the Google result this person noticed wasn’t directly about my blog (the ranking of my diverse online profiles tends to shift around fairly regularly) but I still associate online profiles with blogging.

A set of blog-related occurrences which I find significant has to do with the fact that my blog has been at the centre of a number of discussions with diverse people including podcasters and other social media people. My guess is that some of these discussions may lead to some interesting things for me in 2009.

Through 2008, this blog has become more anthropological. For several reasons, I wish to maintain it as a disparate blog, a blog about disparate topics. But it still participates in my gaining some recognition as an anthroblogger. One reason is that anthrobloggers are now more closely connected than before. Recently, anthroblogger Daniel Lende sent a call for nominations for the best of the anthro blogosphere, which he then posted as both a “round up” and a series of prizes. Before that, Savage Minds had organized an “awards ceremony” for an academic conference. And, perhaps the most important dimension of my own blog being recognized in the anthroblogosphere, I have been discussing a number of things with Concordia-based anthrobloggers Owen Wiltshire and Maximilian Forte.

Still, anthropology isn’t the most prominent topic on this blog. In fact, my anthro-related posts tend to receive relatively little attention, outside of discussions with colleagues.

Since I conceive of this post as a follow-up on posts about statistics, I’ve gone through some of my stats here on Disparate. Upgrades to WordPress.com also allow me to get a more detailed picture of what has been happening on this blog.

Through 2008, I’ve received over 55 131 hits on this blog, about 11% more than in 2007, for an average of 151 hits a day (I actually thought it was more, but there are some days during which I receive relatively few hits, especially during weekends). The month I received the most hits was February 2007, with 5 967 hits, but February and March 2008 were relatively close. The day I received the most hits was October 28, 2008, with 310 hits. This was the day after Myriade opened.

These numbers aren’t so significant. For one thing, hits don’t imply that people have read anything on my blog. Since all of my blogs are ad-free, I haven’t tried to increase traffic to this blog. But it’s still interesting to notice a few things.

The most obvious thing is that hits to rather silly posts are much more frequent than hits to posts I actually care about.

For instance, my six blogposts with the most hits:

Facebook Celebs and Fakes: 5 782
emachines Power Supply: 4 800
Recording at 44.1 kHz, 16b with iPod 5G?: 2 834
Blogspot v. WordPress.com, Blogger v. Wo: 2 571
GERD and Stress: 2 377
University Rankings and Diversity: 2 219

And for 2008:

Facebook Celebs and Fakes: 3 984
emachines Power Supply: 2 265
AT&T Yahoo Pro DSL to Belkin WiFi: 1 527
GERD and Stress: 1 430
Blogspot v. WordPress.com, Blogger v. Wo: 1 151
University Rankings and Diversity: 995

I wrote the Facebook post very quickly in July 2007. It was a quick reaction to something I had heard. Obviously, the post’s title is the single reason for its popularity. I get an average of 11 hits a day on that post, for 4 001 hits in 2008. If I wanted to increase traffic, I’d post as many of these as possible.

The emachines post is my first post on this new blog (but I did import posts from my previous blog), back in January 2006. It seems to have helped a few people and gets regular traffic (six hits a day, in 2008). It’s not my most thoughtful post but it has its place. It’s still funny to notice that traffic to this blogpost increases even though one would assume it’s less relevant.

Rather unsurprisingly, my post about then-upcoming recording capabilities on the iPod 5G, from March 2006, is getting very few hits. But, for a while, it did get a number of hits (six a day in 2006) and I was a bit puzzled by that.

The AT&T post is my most popular post written in 2008. It was a simple troubleshooting session, like the aforementioned emachines post. These posts might be useful for some people and I occasionally get feedback from people about them. Another practical post regularly getting a few hits is about an inflatable mattress with built-in pump which came without clear instructions.

My post about blogging platforms was in fact a repost of a comment I made on somebody else’s blog entry (though the original seems to be lost). From what I can see, it was most popular from June 2007 through May 2008. Since it was first posted, WordPress.com has been updated quite a bit and Blogger/Blogspot seems to have pretty much stalled. My comment/blogpost on the issue is fairly straightforward and it has put me in touch with some other bloggers.

The other two blogposts getting the most hits in 2008 are closer to things about which I care. Both entries were written in mid-2006 and are still relevant. The rankings post is short on content, but it serves as an “anchor” for some things I like to discuss in terms of educational institutions. The GERD post is among my most personal posts on this blog, especially in English. It’s one of the posts for which I received the most feedback. My perspective on the issue hasn’t changed much in the meantime.

Privilege: Library Edition

When I came out against privilege, over a month ago, I wasn’t thinking about libraries. But, last week, while running some errands at three local libraries (within an hour), I got to think about library privileges.

During that day, I first started thinking about library privileges because I was renewing my CREPUQ card at Concordia. With that card, graduate students and faculty members at a university in Quebec are able to get library privileges at other universities, a nice “perk” that we have. While renewing my card, I was told (or, more probably, reminded) that the card now gives me borrowing privileges at any university library in Canada through CURBA (Canadian University Reciprocal Borrowing Agreement).

My gut reaction: “Aw-sum!” (I was having a fun day).

It got me thinking about what it means to be an academic in Canada. Because I’ve also spent part of my still short academic career in the United States, I tend to compare the Canadian academe to US academic contexts. And while there are some impressive academic consortia in the US, I don’t think that any of them may offer as wide a set of library privileges as this one. If my count is accurate, there are 77 institutions involved in CURBA. University systems and consortia in the US typically include somewhere between ten and thirty institutions, usually within the same state or region. Even if members of both the “UC System” and “CalState” have similar borrowing privileges, it would only mean 33 institutions, less than half of CURBA (though the population of California is about 20% more than that of Canada as a whole). Some important university consortia through which I’ve had some privileges were the CIC (Committee on Institutional Cooperation), a group of twelve Midwestern universities, and the BLC (Boston Library Consortium), a group of twenty universities in New England. Even with full borrowing privileges in all three groups of university libraries, an academic would only have access to library material from 65 institutions.

Of course, the number of institutions isn’t that relevant if the libraries themselves have few books. But my guess is that the average size of a Canadian university’s library collection is quite comparable to its US equivalents, including in such well-endowed institutions as those in the aforementioned consortia and university systems. What’s more, I would guess that there might be a broader range of references across Canadian universities than in any region of the US. Not to mention that BANQ (Quebec’s national library and archives) are part of CURBA and that their collections overlap very little with a typical university library.

So, I was thinking about access to an extremely wide range of references given to graduate students and faculty members throughout Canada. We get this very nice perk, this impressive privilege, and we pretty much take it for granted.

Which eventually got me to think about my problem with privilege. Privilege implies a type of hierarchy with which I tend to be uneasy. Even (or especially) when I benefit from a top position. “That’s all great for us but what about other people?”

In this case, there are obvious “Others” like undergraduate students at Canadian institutions, Canadian non-academics, and scholars at non-Canadian institutions. These are very disparate groups but they are all denied something.

Canadian undergrads are the most direct “victims”: they participate in Canada’s academe, like graduate students and faculty members, yet their access to resources is severely limited by comparison to those of us with CURBA privileges. Something about this strikes me as rather unfair. Don’t undergrads need access as much as we do? Is there really such a wide gap between someone working on an honours thesis at the end of a bachelor’s degree and someone starting work on a master’s thesis that the latter requires much wider access than the former? Of course, the main rationale behind this discrepancy in access to library material probably has to do with sheer numbers: there are many undergraduate students “fighting for the same resources” and there are relatively few graduate students and faculty members who need access to the same resources. Or something like that. It makes sense but it’s still a point of tension, as any matter of privilege.

The second set of “victims” includes Canadians who happen to not be affiliated directly with an academic institution. While it may seem that their need for academic resources is more limited than that of students, many people in this category have a more unquenchable “thirst for knowledge” than many an academic. In fact, there are people in this category who could probably do a lot of academically-relevant work “if only they had access.” I mostly mean people who have an academic background of some sort but who are currently unaffiliated with formal institutions. But the “broader public” counts, especially when a specific topic becomes relevant to them. These are people who take advantage of public libraries but, as mentioned in the BANQ case, public and university libraries don’t tend to overlap much. For instance, it’s quite unlikely that someone without academic library privileges would have been able to borrow Visual Information Processing (Chase, William 1973), a proceedings book that I used as a source for a recent blogpost on expertise. Of course, “the public” is usually allowed to browse books in most university libraries in North America (apart from Harvard). But, depending on other practical factors, borrowing books can be much more efficient than browsing them in a library. I tend to hear from diverse people who would enjoy some kind of academic status for this very reason: library privileges matter.

A third category of “victims” of CURBA privileges is non-Canadian academics. Since most of them may only contribute indirectly to Canadian society, why should they have access to Canadian resources? Like any social context, the national academe defines insiders and outsiders. While academics are typically inclusive, this type of restriction seems to make sense. Yet many academics outside of Canada could benefit from access to resources broadly available to Canadian academics. In some cases, there are special agreements to allow outside scholars to get temporary access to local, regional, or national resources. Rather frequently, these agreements come with special funding, the outside academic being a special visitor, sometimes with even better access than some local academics. I have very limited knowledge of these agreements (apart from infrequent discussions with colleagues who benefitted from them) but my sense is that they are costly, cumbersome, and restrictive. Access to local resources is even more exclusive a privilege in this case than in the CURBA case.

Which brings me to my main point about the issue: we all need open access.

When I originally thought about how impressive CURBA privileges were, I was thinking through the logic of the physical library. In a physical library, resources are scarce, access to resources needs to be controlled, and library privileges have a high value. In fact, it costs an impressive amount of money to run a physical library. The money universities invest in their libraries is relatively “inelastic” and must figure quite prominently in their budgets. The “return” on that investment seems to me a bit hard to measure: is it a competitive advantage, does a better-endowed library make a university more cost-effective, do university libraries ever “recoup” any portion of the amounts spent?

Contrast all of this with a “virtual” library. My guess is that an online collection of texts costs less to maintain than a physical library by any possible measure. Because digital data may be copied at will, the notion of “scarcity” makes little sense online. Distributing millions of copies of a digital text doesn’t make the original text unavailable to anyone. As long as the distribution system is designed properly, the “transaction costs” in distributing a text of any length are probably much less than those associated with borrowing a book.  And the differences between “browsing” and “borrowing,” which do appear significant with physical books, seem irrelevant with digital texts.

These are all well-known points about online distribution. And they all seem to lead to the same conclusion: “information wants to be free.” Not “free as in beer.” Maybe not even “free as in speech.” But “free as in unchained.”

Open access to academic resources is still a hot topic. Though I do consider myself an advocate of “OA” (the “Open Access movement”), what I mean here isn’t so much about OA as opposed to TA (“toll-access”) in the case of academic journals. Physical copies of periodicals may usually not be borrowed, regardless of library privileges, and online resources are typically excluded from borrowing agreements between institutions. The connection between OA and my perspective on library privileges is that I think the same solution could solve both issues.

I’ve been thinking about a “global library” for a while. Like others, the Library of Alexandria serves as a model but texts would be online. It sounds utopian but my main notion, there, is that “library privileges” would be granted to anyone. Not only senior scholars at accredited academic institutions. Anyone. Of course, the burden of maintaining that global library would also be shared by anyone.

There are many related models, apart from the Library of Alexandria: French «Encyclopédistes» through the Enlightenment, public libraries, national libraries (including the Library of Congress), Tim Berners-Lee’s original “World Wide Web” concept, Brewster Kahle’s Internet Archive, Google Books, etc. Though these models differ, they all point to the same basic idea: a “universal” collection with the potential for “universal” access. In historical perspective, this core notion of a “universal library” seems relatively stable.

Of course, there are many obstacles to a “global” or “universal” library. Including issues having to do with conflicts between social groups across the Globe or the current state of so-called “intellectual property.” These are all very tricky and I don’t think they can be solved in any number of blogposts. The main thing I’ve been thinking about, in this case, is the implications of a global library in terms of privileges.

Come to think of it, it’s possible that much of the resistance to a global library has to do with privilege: unlike me, some people enjoy privilege.

My Problem With Journalism

I hate having an axe to grind. Really, I do. “It’s unlike me.” When I catch myself grinding an axe, I “get on my own case.” I can be quite harsh with my own self.

But I’ve been trained to voice my concerns. And I’ve been perceiving an important social problem for a while.

So I “can’t keep quiet about it.”

If everything goes really well, posting this blog entry might be liberating enough that I will no longer have any axe to grind. Even if it doesn’t go as well as I hope, it’ll be useful to keep this post around so that people can understand my position.

Because I don’t necessarily want people to agree with me. I mostly want them to understand “where I come from.”

So, here goes:

Journalism may have outlived its usefulness.

Like several other “-isms” (including nationalism, colonialism, imperialism, and racism) journalism is counterproductive in the current state of society.

This isn’t an ethical stance, though there are ethical positions which go with it. It’s a statement about the anachronistic nature of journalism. As per functional analysis, everything in society needs a function if it is to be maintained. What has been known as journalism is now taking on new functions. Eventually, “journalism as we know it” should, logically, make way for new forms.

What these new forms might be, I won’t elaborate in this post. I have multiple ideas, especially given well-publicised interests in social media. But this post isn’t about “the future of journalism.”

It’s about the end of journalism.

Or, at least, my looking forward to the end of journalism.

Now, I’m not saying that journalists are bad people and that they should just lose their jobs. I do think that those who were trained as journalists need to retool themselves, but this post isn’t about that either.

It’s about an axe I’ve been grinding.

See, I can admit it: I’ve been making some rather negative comments about diverse behaviours and statements by media people. It has even become a habit of mine to allow myself to comment on something a journalist has said, if I feel that there is an issue.

Yes, I know: journalists are people too, they deserve my respect.

And I do respect them, the same way I respect every human being. I just won’t give them the satisfaction of my putting them on a pedestal. In my mind, journalists are people: just like anybody else. They deserve no special treatment. And several of them have been arrogant enough that I can’t help turning their arrogance back to them.

Still, it’s not about journalists as people. It’s about journalism “as an occupation.” And as a system. An outdated system.

Speaking of dates, some context…

I was born in 1972 and, originally, I was quite taken by journalism.

By age twelve, I was pretty much a news junkie. Seriously! I was “consuming” a lot of media at that point. And I was “into” media. Mostly television and radio, with some print mixed in, as well as lots of literary work for context: this is when I first read French and Russian authors from the late 19th and early 20th centuries.

I kept thinking about what was happening in The World. Back in 1984, the Cold War was a major issue. To a French-Canadian tween, this mostly meant thinking about the fact that there were (allegedly) US and USSR “bombs pointed at us,” for reasons beyond our direct control.

“Caring about The World” also meant thinking about all sorts of problems happening across The Globe. Especially poverty, hunger, diseases, and wars. I distinctly remember caring about the famine in Ethiopia. And when We Are the World started playing everywhere, I felt like something was finally happening.

This was one of my first steps toward cynicism. And I’m happy it occurred at age twelve because it allowed me to eventually “snap out of it.” Oh, sure, I can still be a cynic on occasion. But my cynicism is contextual. I’m not sure things would have been as happiness-inducing for me if it hadn’t been for that early start in cynicism.

Because, you see, The World quite rapidly lost interest in the plight of Ethiopians. I distinctly remember asking myself, after the media frenzy died out, what had happened to Ethiopians in the meantime. I’m sure there was some report at the time claiming that the famine was over and that the situation was “back to normal.” But I didn’t hear anything about it, and I was looking. As a twelve-year-old French-Canadian with no access to a modem, I had no direct access to information about the situation in Ethiopia.

Ethiopia remained a symbol, to me, of an issue to be solved. It’s not the direct cause of my later becoming an africanist. But, come to think of it, there might be a connection, deeper down than I had been looking.

So, by the end of the Ethiopian famine of 1984-85, I was “losing my faith in” journalism.

I clearly haven’t gained a new faith in journalism. And it all makes me feel quite good, actually. I simply don’t need that kind of faith. I was already training myself to be a critical thinker. Sounds self-serving? Well, sorry. I’m just being honest. What’s a blog if the author isn’t honest and genuine?

Flash forward to 1991, when I started formal training in anthropology. The feeling was exhilarating. I finally felt like I belonged. My statement at the time was to the effect that “I wasn’t meant for anthropology: anthropology was meant for me!” And I was learning quite a bit about/from The World. At that point, it already did mean “The Whole Wide World,” even though my knowledge of that World was fairly limited. And it was a haven of critical thinking.

Ideal, I tell you. Moan all you want, it felt like the ideal place at the ideal time.

And, during the summer of 1993, it all happened: I learnt about the existence of the “Internet.” And it changed my life. Seriously, the ‘Net did have a large part to play in important changes in my life.

That event, my discovery of the ’Net, also has a connection to journalism. The person who described the Internet to me was Kevin Tuite, one of my linguistic anthropology teachers at Université de Montréal. As far as I can remember, Kevin was mostly describing Usenet. But the potential for “relatively unmediated communication” was already a big selling point. Kevin talked about the fact that members of the Caucasian diaspora were able to use the Internet to discuss, with their relatives and friends back in the Caucasus, issues pertaining to the newly independent republics after the fall of the USSR. All this while media coverage was sketchy at best (it sounded like journalism was still having a hard time coping with the new realities).

As you can imagine, I was more than intrigued and I applied for an account as soon as possible. In the meantime, I bought a 2400 baud modem, joined some local BBSes, and got to chat about the Internet with several friends, some of whom already had accounts. I got my first email account just before the semester started, in August 1993. I can still see traces of that account, but only since April 1994 (I guess I wasn’t using my address in my signature before then). I’ve been an enthusiastic user of diverse Internet-based means of communication ever since.

But coming back to journalism, specifically…

Journalism missed the switch.

For the past fifteen years, I’ve been amazed at how clueless members of mainstream media institutions have been about “the power of the Internet.” Fifteen years ago, during Wired Magazine’s first year as a print magazine, some friends and I were already commenting on the fact that print journalists should look at what was coming. Eventually, they would need to adapt. “The Internet changes everything,” I thought.

No, I didn’t mean that the Internet would cause any of the significant changes that we have been seeing around us. I tend to be against technological determinism (and other McLuhanesque tendencies). Not that I prefer sociological determinism, yet I can’t help but think that, from ARPAnet to the current state of the Internet, most of the important changes have been primarily social: if the Internet became something, it’s because people made it so, not because of some inexorable technological development.

My enthusiastic perspective on the Internet was largely motivated by the notion that it would allow people to go beyond the model from the journalism era. Honestly, I could see the end of “journalism as we knew it.” And I’m surprised, fifteen years later, that journalism has been among the slowest institutions to adapt.

In a sense, my main problem with journalism is that it maintains a very stratified structure which gives too much weight to the credibility of specific individuals. Editors and journalists, who are part of the “medium” in the old models of communication, have taken on a gatekeeping role despite the fact that they rarely are much more proficient thinkers than people who read them. “Gatekeepers” even constitute a “textbook case” in sociology, especially in conflict theory. Though I can easily perceive how “constructed” that gatekeeping model may be, I can easily relate to what it entails in terms of journalism.

There’s a type of arrogance embedded in journalistic self-perception: “we’re journalists/editors so we know better than you; you need us to process information for you.” Regardless of how much I may disagree with some of his words and actions, I take solace in the fact that Murdoch, a key figure in today’s mainstream media, spoke directly to this arrogance. Of course, he might have been pandering. But the very fact that he can pay lip-service to journalistic arrogance is, in my mind, quite helpful.

I think the days of fully stratified gatekeeping (a “top-down approach” to information filtering) are over. Now that information is easily available and that knowledge is constructed socially, any “filtering” method can be distributed. I’m not really thinking of a “cream rises to the top” model. An analogy with water sources going through multiple layers of mountain rock would be more appropriate to a Swiss citizen such as myself. But the model I have in mind is more about what Bakhtin called “polyvocality” and what has become an ethical position on “giving voice to the other.” Journalism has taken voice away from people. I have in mind a distributed mode of knowledge construction which gives everyone enough voice to have long-distance effects.

At the risk of sounding too abstract (it’s actually very clear in my mind, but it requires a long description), it’s a blend of ideas like: the social butterfly effect, a post-encyclopedic world, and cultural awareness. All of these, in my mind, contribute to this heightened form of critical thinking away from which I feel journalism has led us.

The social butterfly effect is fairly easy to understand, especially now that social networks are so prominent. Basically, it’s the “butterfly effect” from chaos theory applied to social networks. In this context, a “social butterfly” is a node in multiple networks of varying degrees of density and clustering. Because such a “social butterfly” can bring things (ideas, especially) from one such network to another, I argue that her or his ultimate influence (in aggregate) is larger than that of someone who sits at the core of a highly clustered network. Yes, it’s related to “weak ties” and other network classics. But it’s a bit more specific, at least in my mind. In terms of journalism, the social butterfly effect implies that the way knowledge is constructed need not come from a singular source or channel.
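To make the bridging intuition a bit more concrete, here is a toy sketch in plain Python. All the names (the clusters, the “butterfly” node, the helper functions) are made up for illustration; the point is only that an idea can travel between dense clusters solely through the bridging node, so its aggregate reach exceeds that of any member of a single clique.

```python
from collections import deque

def build_graph(clusters, bridge=None):
    """Each cluster is fully connected internally; `bridge`, if given,
    is additionally connected to every member of every cluster."""
    graph = {}
    for members in clusters:
        group = set(members) | ({bridge} if bridge else set())
        for person in group:
            graph.setdefault(person, set()).update(group - {person})
    return graph

def reachable(graph, start):
    """Everyone an idea starting at `start` can eventually reach (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in graph[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Three dense clusters of mutual acquaintances (hypothetical people).
clusters = [["a1", "a2", "a3"], ["b1", "b2", "b3"], ["c1", "c2", "c3"]]

# With a "social butterfly" bridging all three, an idea travels everywhere.
with_bridge = build_graph(clusters, bridge="butterfly")
assert len(reachable(with_bridge, "a1")) == 10  # all 9 people + the butterfly

# Without it, the idea stays inside its own dense cluster.
without = build_graph(clusters)
assert len(reachable(without, "a1")) == 3
```

The “core of a highly clustered network” position corresponds to any `a`/`b`/`c` node: densely connected locally, but with no reach beyond its clique.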

The “encyclopedic world” I have in mind is that of our good friends from the French Enlightenment: Diderot and the gang. At that time, there was a notion that the sum of all knowledge could be contained in the Encyclopédie. Of course, I’m simplifying. But such a notion is still discussed fairly frequently. The world in which we now live has clearly challenged this encyclopedic notion of exhaustiveness. Sure, certain people hold on to that notion. But it’s not taken for granted as “uncontroversial.” Actually, those who hold on to it tend to respond rather positively to the journalistic perspective on human events. As should be obvious, I think the days of that encyclopedic worldview are numbered and that “journalism as we know it” will die at the same time. Though it seems to be built on an “encyclopedia” frame, Wikipedia clearly benefits from a distributed model of knowledge management. In this sense, Wikipedia is less anachronistic than Britannica. Wikipedia also tends to be more insightful than Britannica.

The cultural awareness point may sound like an ethnographer’s pipe dream. But I perceive a clear connection between Globalization and a certain form of cultural awareness in information and knowledge management. This is probably where the Global Voices model can come in. One of the most useful representations of that model comes from Chris Lydon’s Open Source conversation with Solana Larsen and Ethan Zuckerman. Simply put, I feel that this model challenges journalism’s ethnocentrism.

Obviously, I have many other things to say about journalism (as well as about its correlate, nationalism).

But I do feel liberated already. So I’ll leave it at that.

The Renaissance of Coffee in Montreal

I recently published a very long post about Montreal’s coffee scene. Probably because of its length, that post doesn’t seem to have had the effects I hoped for. So I’ve decided to republish it, section by section. This post is the last section of that long one. It’s a kind of summary of the current state of Montreal’s coffee scene, with an eye to its future. You can consult the introduction, which contains links to the other sections, for broader context.

In my humble opinion, the arrival of the Third Wave in Montreal now lets us explore coffee in all its splendour. In a way, it was the missing piece of the puzzle.

In my previous post, I neglected to compare Italian-style coffee with Quebec-style coffee (beyond the importance of the allongé). That’s partly because the differences are a bit difficult to explain. Let’s just say there is a certain diversity of flavours within the “Quebec-style” dimension of Montreal’s coffee scene. Despite some common traits, Montreal’s various cafés have never been very homogeneous in taste. The resemblances came mostly from the use of a few local roasting houses rather than from any conceptual unity about how to make coffee. In fact, I’ve often had the impression that the diversity of tastes offered by different Montreal cafés declined over the past fifteen years, and I consider this process of near-standardization (which was never carried to completion) a harmful aspect of that period in the history of coffee in Montreal. The scene’s new developments give me hope that its diversity is growing again after that period of “consolidation.”

Moreover, it’s not without pride that I think of the fact that the big “foreign” café chains have had trouble establishing themselves in Montreal. If Montreal only got its first Starbucks location after several other North American cities, and if Second Cup quickly had to close one of its Montreal locations, it’s partly because Montreal’s coffee scene was very much alive well before the chains arrived. Indeed, several chains developed locally before spreading outside Montreal. The result is that there are probably, right now, as many if not more café-chain locations in Montreal as in any other large city, but a significant proportion of those cafés originated in Montreal. While the existence of local café chains has no correlation with the average quality of the coffee people drink in a given region (I even tend to believe there is an inverse correlation between the number of chains and average coffee quality), the “Montreal conception” of coffee seems to me revealed by the difficulties faced by outside chains.

In fact, one characteristic of Montreal’s coffee scene is that its diversity is tied to the diversity of the population. Not only linguistic, cultural, ethnic, and social diversity, but diversity in tastes and perspectives. Human diversity in Montreal evokes the image of the “mixed salad”: a harmonious blend in which the elements remain distinct. Some will say that this is what any big city is like, integrated in that way. Others will say that Montreal is less well integrated than this or that other big city. But the portrait I’m trying to paint is neither prettier nor more original than that of another city. It is simply typical.

Beyond the “Quebec-style,” “Italian-style,” and “third-wave” cafés I’ve described, Montreal has several cafés tied to various communities. Yes, I’m thinking of cafés linked to cultural communities, like a Guatemalan café or a Lebanese café. But also of cafés tied to particular social groups or to religious communities. In terms of taste, the coffee served in these various places may not be so distinctive. But the coffee experience takes on a specific meaning in each of them.

And while I’ve talked almost exclusively about coffee-related businesses, I think a lot about the, shall we say, “domestic” dimension of coffee.

In my view, the population of the Montreal region has the potential for a real enthusiasm for quality coffee. Even if they don’t always have very deep knowledge of coffee, and even if they drink coffee of lesser quality, many Montrealers seem very interested in coffee. Some of them believe they know coffee well enough that they don’t want to discover other aspects of it. But discussions about the taste of coffee are commonplace among people from various backgrounds, if only in the choice of certain cafés.

Obviously, these discussions happen elsewhere too, and coffee has often helped me integrate into the social networks of cities where I’ve lived. But what I believe is fairly particular to Montreal is that there doesn’t seem to be a “dominant ideology” of coffee. Some coffee enthusiasts (and some coffee professionals) are very dogmatic, even doctrinaire. But I perceive no idea about coffee that everyone truly takes for granted. There are Tim Hortons and Starbucks locations in Montreal but, unlike other parts of the continent, no single café seems to command consensus.

On the other hand, there is a kind of small oligarchy. A few coffee roasting and distribution houses seem to hold a good share of the market. I’m thinking mainly of Union, Brossard, and Van Houtte (which also has a café chain and was, at one time, held up as an example of financial success). As far as I know, these three companies are local. On the global scale, the coffee world’s oligarchy is made up of Nestlé, Sara Lee, Kraft, and Procter & Gamble. I can easily imagine that these multinationals are as successful in Montreal as elsewhere in the world, but I find it interesting to think about the relative weight of a few local chains.

Speaking of local chains, I believe certain local businesses can play a decisive role in the “Renaissance of coffee in Montreal.” I’m thinking especially of Carlo Granito’s Café Terra, Sevan Istanboulian’s Café Mystique and Toi, Moi & Café, Sévanne Kordahi’s Café Rico, and the La Maison verte co-op in Notre-Dame-de-Grâce. These choices may seem overly personal, even arbitrary. But each element seems to me representative of Montreal’s coffee scene. Carlo Granito, for instance, recently took part in Radio-Canada’s program Samedi et rien d’autre alongside Philippe Mollé (audio from 14:30 to 32:30). Sevan Istanboulian is a certified World Barista Championship judge and distributes his coffees to strategic locations. Sévanne Kordahi has focused her activities on specific areas, and her cafés are much appreciated by student groups (thanks in part to a student discount). And I recently learned that La Maison verte serves Café Femenino, which highlights one of the most important ethical dimensions of the coffee world.

To come back to the “common mortal,” the coffee lover. Beyond local specificity, I believe a coffee scene is built through a dynamic between individuals, a series of “little things that end up making a difference.” And it’s this dynamic that makes me confident.

The community of coffee enthusiasts in Montreal is, all told, fairly small, but very much alive. And I count myself among its ranks.

Some of us have taken part in various events together, such as tastings and brewing sessions. Discussions about coffee are multiplying among us, and we run into each other fairly regularly in one or another of Montreal’s coffee meccas. Other dimensions of the culinary world are represented among us too, from craft beer to veganism, by way of chocolate and tea. These links may seem obvious, but it’s mostly because each of us belongs to different networks that the community seems rich to me. Talking together, we come to discuss many other culinary arts beyond coffee, which strengthens the links between coffee and the rest of the culinary world. And talking about coffee with our other friends, we create a ripple effect, since we take part in distinct circles. It’s actually a fairly effective illustration of what I keep calling the “social butterfly effect”: the beating of its wings reverberates through various environments. If there isn’t too much friction, the shockwave from our community may well be felt throughout Montreal’s entire coffee scene.

To come full circle (before going to bed), I should point out that our little group of enthusiasts has recently made Café Myriade its meeting place of choice.

Montreal-Style Coffee

Montreal is on its way to (once again) becoming a coffee destination. Better still, the “Renaissance of coffee in Montreal” may well have beneficial consequences for the entire culinary scene of Quebec’s metropolis.

This thesis may seem personal, and I don’t intend to put it forward dogmatically. But by mingling with Montreal’s coffee milieu, I’ve accumulated a number of impressions that I’d be happy to share. There’s even some “magical thinking” in all this, in the sense that it seems easier to rebuild Montreal’s coffee scene if we have a fairly accurate idea of what constitutes Montreal’s specificity.

Event Microblogging

An edited version of a message I just sent to my friend Martin Lessard.

The immediate context is a discussion we had about my use of Twitter, the main microblogging platform. During any sort of event (a conference, a meeting, etc.), I use Twitter to blog in real time, to liveblog.

Unlike some people, I think the use of microblogging can be adapted to each user’s needs. In fact, that’s an aspect of technology I find admirable: the possibility of using tools for purposes other than those they were designed for. That’s where technology, in the proper sense, goes beyond the tool. In my material culture course, I call this “unintended uses,” a very simple concept with many implications for the social links in the chain running from a tool’s conception and construction to its use and social “impact.”

So, here’s my edited message.
I’ve been thinking quite a bit about this question of tweets (“messages” on Twitter) seen as intrusive. So here are a few ideas.

I get a lot out of blogging in real time through Twitter. Really, I see it as taking notes in public. I should say that note-taking is second nature to me. It’s how I structure my thinking. Mostly with outliners, but it also works in linear form.

In that respect, I do a bit like those journalists on Twitter who use microblogging as a notebook. Andy Carvin is my favourite example. He tweets faster than I do, and his tweets are as useful as a newspaper article. My approach is closer to “active reading” and critical thinking, but it’s somewhat the same idea. In my case, it even lets me replace a blog post with a series of tweets.

The advantage of real-time note-taking revealed itself, among other occasions, during a presentation by Johannes Fabian, an emeritus anthropologist who spent a very full week in Montreal last month. I was liveblogging his first presentation, on Twitter. Across from me were two anthropologists from Concordia (Maximilian Forte and Owen Wiltshire), whom I know among other things as bloggers. Both were taking notes, and one of them was recording the session. In my tweets, I tried not to summarize too much of what Fabian was saying; instead, I took notes on my own reactions, shared my observations of the audience, and reflected on implications of the ideas presented. After the presentation, Maximilian asked me whether I was going to blog about it. I could tell him in all honesty that it was already done. And Owen, a former student of mine who now works on academic publishing and blogging, now has access to my complete notes, with a timeline.
A powerful note-taking method!

The advantage of the public aspect is, first, that I can get “comments” in real time. I don’t get as many as I’d like, but comments are still what I’m after. Microblogging gets me more comments than my main blog, right here on WordPress. Facebook gets me more comments than either, but that’s another story.

In some cases, liveblogging gives rise to a genuine parallel conversation. My favourite example is probably an interaction I had with John Milles at the end of Isabelle Lopez’s session at PodCamp Montréal (#pcmtl08). We were talking about Internet culture, and I was suggesting that there was “an” Internet culture (the way one can say there is “a” Christian culture, say). Milles, who didn’t know I was an anthropologist, then tweeted me about the classic anthropological notion of culture (monolithic, spatially bounded, timeless…). I was able to point him to the “crisis of representation” in anthropology since 1986, with Clifford and Marcus’s Writing Culture. He later sent me references from the legal literature.

Of course, this is the idea of the “backchannel” applied to the ’Net. It works very effectively for events like SXSW and BarCamp, since everyone tweets at the same time. But it can work for other events, if the practice becomes more common.

More on this later.

I believe that real-time blogging during events raises the visibility of the event itself. It would work better if I put “hashtags” on every tweet. (Hashtags are text labels preceded by the ‘#’ sign, which make it possible to identify “messages.”) The problem is that it’s not really practical to type hashtags continually, at least on an iPod touch. In any case, that type of redundancy seems of little use.

More on this later.

Obviously, microblogging this much raises my own visibility a bit. Lately, I’ve started thinking about ways to “sell” myself. It’s a bit difficult for me, because I’m not used to selling myself and I see humility as a virtue. But it seems necessary, and I’m looking for ways to sell myself while remaining myself. Twitter lets me showcase myself in a context that makes this practice entirely appropriate (in my view).

In fact, I started using Twitter as a networking method while I was in Austin. It was a few days before SXSW and I wanted to make myself known locally. I’ve kept a few things from that period, including contacts on Twitter.

My method was very simple: I started “following” everyone who followed @BarCampAustin. That made for a good crowd and let me see what was going on. It also let me go observe events organized by SXSW people like Gary Vaynerchuk and Scott Beale. For an ethnographer, there’s nothing like seeing Kevin Rose with his “entourage” or learning that Dr. Tiki originally hails from Laval. 😉

Among the microblogging features I find particularly interesting are the ‘@’ and ‘#’ notations. Neither is all that practical on an iPod touch, at least with the apps we have. But the basic concept is very interesting. The ‘@’ is a bit like a ping or trackback, useful for getting someone else’s attention (this notation allows direct replies to messages). It’s quite a powerful principle and it helps a lot in liveblogging (Muriel Ide and Martin Lessard used this method to reach me during WebCom/-Camp).

More on this later.

In my view, among geeks, this practice of event microblogging is intensifying. It’s even taking a prominent place, giving microblogging that status journalists have so much trouble grasping. When something happens, the microblog is there to cover the event.

Which brings me to that “later.” It’s quite simple, really: microblog instances for events. Mainly for events planned in advance, but it could also be an ad hoc structure along the lines of Erik Hersman’s Ushahidi.

Evan Prodromou’s Laconica is perfectly suited to the role I have in mind, but it could run on any platform. I quite like Identi.ca, the largest Laconica instance. That said, I find Twitter easier to use, partly because there are Twitter clients for the iPod touch (including some with location support).

Imagine a PodCamp-style (un)conference. The same principle applies to online events (of the “WebConference” type), but face-to-face gatherings have particular advantages when it comes to microblogging. Especially if we think of serendipity, the use of multiple communication channels (cognitively less costly in a context of copresence), the ease of small-group conversations, and “non-verbal language.”

So, each event gets its own microblog instance. It costs practically nothing to run and it can really add value to the event.

Everyone registered for the event gets a microblog account specific to that event’s instance (or can use a Laconica account from another instance and register on the new one). By default, everyone “follows” everyone (everyone is subscribed to see all messages). Each person’s identifier appears on their conference nametag. Each presenter is also linked to their identifier. Each user’s profile can be copied from another profile or created specifically for the event. Photo portraits are preferred, but avatars are also allowed. Everything sent through the instance is archived and catalogued. If there are ways to specify precise positions in space (maybe even with an RFID tag that can be disabled), that positioning is recorded in the instance. That way, people can find each other more easily for semi-private discussions. It would also be easy to include a way to book meetings or note details of conversations, to recall everything later. Nice possible integrations with Google Calendar, for instance.
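As a thought experiment, the per-event instance could be sketched as a tiny data model. Everything here (the `EventInstance` and `Notice` classes, their fields and methods) is hypothetical, not an actual Laconica or Twitter feature; it only illustrates the two defaults described above: registration implies that everyone sees everything, and every message is archived.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Notice:
    """One microblog message within the event's instance."""
    author: str
    text: str
    sent_at: datetime

@dataclass
class EventInstance:
    """Hypothetical per-event microblog instance: every registrant
    implicitly follows everyone else, and all notices are archived."""
    name: str
    members: set = field(default_factory=set)
    archive: list = field(default_factory=list)

    def register(self, handle: str) -> None:
        self.members.add(handle)

    def post(self, author: str, text: str) -> Notice:
        if author not in self.members:
            raise ValueError(f"{author} is not registered for {self.name}")
        notice = Notice(author, text, datetime.now(timezone.utc))
        self.archive.append(notice)  # everything sent is archived and catalogued
        return notice

    def timeline(self) -> list:
        """Everyone sees all messages: the full archive, oldest first."""
        return list(self.archive)

# Hypothetical usage for a PodCamp-style event.
pcmtl = EventInstance("PodCamp Montréal")
pcmtl.register("@alex")
pcmtl.register("@evan")
pcmtl.post("@alex", "Liveblogging the first session.")
assert len(pcmtl.timeline()) == 1
```

Once the event ends, the `archive` list is exactly what you would come back to when reworking notes into posts or reports.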

Since the instance’s membership list is bounded, an app could make ‘@’ notation easier: incremental search, address book, auto-completion… Presenters’ @-handles are implied during their talks, so there’s no need to type their full names to quote them. With multi-person conversations things get slightly more complicated, but a short list still works for a panel, and other methods can handle larger groups. Moderators could even use this to manage the queue of interventions. (Now that’s candy! Imagine what it would do at L’Université autrement!)
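The incremental-search idea could be sketched as a simple prefix match over the instance’s member list. Handles and names below are made up for illustration:

```python
# Minimal sketch of incremental @-handle completion over an event's
# member list. Handles and display names are invented examples.
MEMBERS = {
    "evanp": "Evan Prodromou",
    "enkerli": "Alexandre Enkerli",
    "sylvaing": "Sylvain Grenier",
}

def complete(prefix):
    """Return @-handles whose handle or display name starts with the prefix."""
    prefix = prefix.lower()
    return sorted(
        handle for handle, name in MEMBERS.items()
        if handle.startswith(prefix) or name.lower().startswith(prefix)
    )

print(complete("e"))   # → ['enkerli', 'evanp']
```

With a few hundred attendees at most, even this naive linear scan is instant; the win comes from the list being closed, unlike Twitter’s global namespace.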

As Evan Prodromou discussed at PodCamp Montréal, “microcasting” is gaining momentum. With a microblogging instance tied to an event, files could be distributed internally: presentation files (PowerPoint or otherwise), media files, links, etc. Presenters can prepare everything in advance and push their material at the right moment. It could even replace some uses of PowerPoint altogether!

Instead of having to type event hashtags (#pcmtl08), you simply send your messages to the event’s instance. People who aren’t attending aren’t flooded with irrelevant messages. And nobody has to unfollow someone just because they’re attending such an event (as happened with #pcmtl08).

Once the event is over, the instance can be put to whatever use we want. We can come back to it, for instance to consult the full list of participants. We can rework our notes into blogposts or even reports. Or we can simply set it all aside.

Otherwise, it would work much like Twitter use at SXSWi (including the Lacy incident, which I find fascinating) or at any other typically geeky event. In some cases, people’s tweets are displayed on screens around the presenters.

With a dedicated instance, things are simpler to manage. There’s also little risk of the instance going down, as was so often the case with Twitter over a fairly long period.

These are just ideas tossed in the air, and I’m not attached to any specific detail. But I believe there’s a real need here, and that it helps to bring several of these things together on a single platform. Come to think of it, this could have interesting implications for conference management, for online meetings, for media coverage of news events, etc. Some might even imagine business models that include microblogging as added value. (Different account types, the option of attending conferences for free without an account on the instance…)

What do you think?

Crazy App Idea: Happy Meter

I keep getting ideas for apps I’d like to see on Apple’s App Store for iPod touch and iPhone. This one may sound a bit weird but I think it could be fun. An app where you can record your mood and optionally broadcast it to friends. It could become rather sophisticated, actually. And I think it can have interesting consequences.

The idea mostly comes from Philippe Lemay, a psychologist friend of mine and fellow PDA fan. Haven’t talked to him in a while but I was just thinking about something he did, a number of years ago (in the mid-1990s). As part of an academic project, Philippe helped develop a PDA-based research program whereby subjects would record different things about their state of mind at intervals during the day. Apart from the neatness of the data gathering technique, this whole concept stayed with me. As a non-psychologist, I personally get the strong impression that recording your moods frequently during the day can actually be a very useful thing to do in terms of mental health.

And I really like the PDA angle. Since I think of the App Store as transforming Apple’s touch devices into full-fledged PDAs, the connection is rather strong between Philippe’s work at that time and the current state of App Store development.

Since that project of Philippe’s, a number of things have been going on which might help refine the “happy meter” concept.

One is that “lifecasting” became rather big, especially among certain groups of Netizens (typically younger people, but also many members of geek culture). Though the lifecasting concept applies mostly to video streams, there are connections with many other trends in online culture. The connection with vidcasting specifically (and podcasting generally) is rather obvious. But there are other connections. For instance, with mo-, photo-, or microblogging. Or even with all the “mood” apps on Facebook.

Speaking of Facebook as a platform, I think it meshes especially well with touch devices.

So, “happy meter” could be part of a broader app which does other things: updating Facebook status, posting tweets, broadcasting location, sending personal blogposts, listing scores in a Brain Age type game, etc.

Yet I think the “happy meter” could be useful on its own, as a way to track your own mood. “Turns out, my mood was improving pretty quickly on that day.” “Sounds like I didn’t let things affect me too much despite all sorts of things I was going through.”

As a mood-tracker, the “happy meter” should be extremely efficient. Because it’s easy, I’m thinking of sliders. One main slider for general mood and different sliders for different moods and emotions. It would also be possible to extend the “entry form” on occasion, when the user wants to record more data about their mental state.

Of course, everything would be saved automatically and “sent to the cloud” on occasion. There could be a way to selectively broadcast some slider values. The app could conceivably send reminders to the user to update their mood at regular intervals. It could even serve as a “break reminder” feature. Though there are limitations on OSX iPhone in terms of interapplication communication, it’d be even neater if the app were able to record other things happening on the touch device at the same time, such as music which is playing or some apps which have been used.
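The slider-based “happy meter” described above boils down to a very small data model. Everything here (field names, the 0–100 scale, the opt-in `broadcast` flag) is invented for illustration:

```python
import time

# Hypothetical mood log: one main slider plus optional named sliders,
# all on an invented 0-100 scale. Entries stay local until the user
# opts in to broadcasting a given one.
class HappyMeter:
    def __init__(self):
        self.entries = []          # local log, "sent to the cloud" later

    def record(self, mood, broadcast=False, **sliders):
        entry = {
            "timestamp": time.time(),
            "mood": mood,           # main slider, 0-100
            "sliders": sliders,     # e.g. energy=70, stress=20
            "broadcast": broadcast, # opt-in only, never opt-out
        }
        self.entries.append(entry)
        return entry

    def average_mood(self):
        return sum(e["mood"] for e in self.entries) / len(self.entries)

meter = HappyMeter()
meter.record(65, energy=70)
meter.record(80, broadcast=True, stress=20)
print(meter.average_mood())  # → 72.5
```

The design choice worth noting is that broadcasting is a per-entry flag defaulting to off, matching the “no opt-out” privacy stance discussed below.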

Now, very obviously, there are lots of privacy issues involved. But what social networking services have taught us is that users can have pretty sophisticated notions of privacy management, if they’re given the chance. For instance, adept Facebook users may seem to indiscriminately post just about everything about themselves but are often very clear about what they want to “let out,” in context. So, clearly, every type of broadcasting should be controlled by the user. No opt-out here.

I know this all sounds crazy. And it all might be a very bad idea. But the thing about letting my mind wander is that it helps me remain happy.

Handhelds for the Rest of Us?

Ok, it probably shouldn’t become part of my habits but this is another repost of a blog comment motivated by the OLPC XO.

This time, it’s a reply to Niti Bhan’s enthusiastic blogpost about the eeePC: Perspective 2.0: The little eeePC that could has become the real “iPod” of personal computing

This time, I’m heavily editing my comments. So it’s less of a repost than a new blogpost. In some ways, it’s partly a follow-up to my “Ultimate Handheld Device” post (which ended up focusing on spatial positioning).

Given the OLPC context, the angle here is, hopefully, a culturally aware version of “a handheld device for the rest of us.”

Here goes…

I think there’s room in the World for a device category more similar to handhelds than to subnotebooks. Let’s call it “handhelds for the rest of us” (HftRoU). Something between a cellphone, a portable gaming console, a portable media player, and a personal digital assistant. Handheld devices exist which cover most of these features/applications, but I’m mostly using this categorization to think about the future of handhelds in a globalised World.

The “new” device category could serve as the inspiration for a follow-up to the OLPC project. One thing about which I keep thinking, in relation to the “OLPC” project, is that the ‘L’ part was too restrictive. Sure, laptops can be great tools for students, especially if these students are used to (or need to be trained in) working with and typing long-form text. But I don’t think that laptops represent the most “disruptive technology” around. If we think about their global penetration and widespread impact, cellphones are much closer to the leapfrog effect about which we all have been writing.

So, why not just talk about a cellphone or smartphone? Well, I’m trying to think both more broadly and more specifically. Cellphones are already helping people empower themselves. The next step might be to add selected features which bring them closer to the OLPC dream. Also, since cellphones are widely distributed already, I think it’s important to think about devices which may complement cellphones. I have some ideas about non-handheld tools which could make cellphones even more relevant in people’s lives. But they will have to wait for another blogpost.

So, to put it simply, “handhelds for the rest of us” (HftRoU) are somewhere between the OLPC XO-1 and Apple’s original iPhone, in terms of features. In terms of prices, I dream that it could be closer to that of basic cellphones which are in the hands of so many people across the globe. I don’t know what that price may be but I heard things which sounded like a third of the price the OLPC originally had in mind (so, a sixth of the current price). Sure, it may take a while before such a low cost can be reached. But I actually don’t think we’re in a hurry.

I guess I’m just thinking of the electronics (and global) version of the Ford Model T. With more solidarity in mind. And cultural awareness.

Google’s Open Handset Alliance (OHA) may produce something more appropriate to “global contexts” than Apple’s iPhone. In comparison with Apple’s iPhone, devices developed by the OHA could be better adapted to the cultural, climatic, and economic conditions of those people who don’t have easy access to the kind of computers “we” take for granted. At the very least, the OHA has good representation on at least three continents and, like the old OLPC project, the OHA is officially dedicated to openness.

I actually care fairly little about which teams will develop devices in this category. In fact, I hope that new manufacturers will spring up in some local communities and that major manufacturers will pay attention.

I don’t care about who does it, I’m mostly interested in what the devices will make possible. Learning, broadly speaking. Communicating, in different ways. Empowering themselves, generally.

One thing I have in mind, and which deviates from the OLPC mission, is that there should be appropriate handheld devices for all age-ranges. I do understand the focus on 6-12 year-olds the old OLPC had. But I don’t think it’s very productive to only sell devices to that age-range. Especially not in those parts of the world (i.e., almost anywhere) where generation gaps don’t imply that children are isolated from adults. In fact, as an anthropologist, I react rather strongly to the thought that children should be the exclusive target of a project meant to empower people. But I digress, as always.

I don’t tend to be a feature-freak but I have been thinking about the main features the prototypical device in this category should have. It’s not a rigid set of guidelines. It’s just a way to think out loud about technology’s integration in human life.

The OS and GUI, which seem like major advantages of the eeePC, could certainly be of the mobile/handheld type instead of the desktop/laptop type. The usual suspects: Symbian, NewtonOS, Android, Zune, PalmOS, Cocoa Touch, embedded Linux, Playstation Portable, WindowsCE, and Nintendo DS. At a certain level of abstraction, there are so many commonalities between all of these that it doesn’t seem very efficient to invent a completely new GUI/OS “paradigm,” like OLPC’s Sugar was apparently trying to do.

The HftRoU require some form of networking or wireless connectivity feature. WiFi (802.11*), GSM, UMTS, WiMAX, Bluetooth… Doesn’t need to be extremely fast, but it should be flexible and it absolutely cannot be cost-prohibitive. IP might make much more sense than, say, SMS/MMS, but a lot can be done with any kind of data transmission between devices. XO-style mesh networking could be a very interesting option. As VoIP has proven, voice can efficiently be transmitted as data so “voice networks” aren’t necessary.

My sense is that a multitouch interface with an accelerometer would be extremely effective. Yes, I’m thinking of Apple’s Touch devices and MacBooks, as well as the Microsoft Surface and Jeff Han’s Perceptive Pixel. One thing all of these have shown is how “intuitive” it can be to interact with a machine using gestures. Haptic feedback could also be useful but I’m not convinced it’s “there yet.”

I’m really not sure a keyboard is very important. In fact, I think that keyboard-focused laptops and tablets are the wrong basis for thinking about “handhelds for the rest of us.” Bear in mind that I’m not thinking about devices for would-be office workers or even programmers. I’m thinking about the broadest user base you can imagine. “The Rest of Us” in the sense of, those not already using computers very directly. And that user base isn’t that invested in (or committed to) touch-typing. Even people who are very literate don’t tend to be extremely efficient typists. If we think about global literacy rates, typing might be one thing which needs to be leapfrogged. After all, a cellphone keypad can be quite effective in some hands and there are several other ways to input text, especially if typing isn’t too ingrained in you. Furthermore, keyboards aren’t that convenient in multilingual contexts (i.e., in most parts of the world). I say: avoid the keyboard altogether, make it available as an option, or use a virtual one. People will complain. But it’s a necessary step.

If the device is to be used for voice communication, some audio support is absolutely required. Even if voice communication isn’t part of it (and I’m not completely convinced it’s the one required feature), audio is very useful, IMHO (I’m an aural guy). In some parts of the world, speakers are much favoured over headphones or headsets. But I personally wish that at least some HftRoU could have external audio inputs/outputs. Maybe through USB or an iPod-style connector.

A voice interface would be fabulous, but there still seem to be technical issues with both speech recognition and speech synthesis. I used to work in that field and I keep dreaming, like Bill Gates and others do, that speech will finally take the world by storm. But maybe the time still hasn’t come.

It’s hard to tell what size the screen should be. There probably needs to be a range of devices with varying screen sizes. Apple’s Touch devices prove that you don’t need a very large screen to have an immersive experience. Maybe some HftRoU screens should in fact be larger than that of an iPhone or iPod touch. Especially if people are to read or write long-form text on them. Maybe the eeePC had it right. Especially if the devices’ form factor is more like a big handheld than like a small subnotebook (i.e., slimmer than an eeePC). One reason form factor matters, in my mind, is that it could make the devices “disappear.” That, and the difference between having a device on you (in your pocket) and carrying a bag with a device in it. Form factor was a big issue with my Newton MessagePad 130. As the OLPC XO showed, cost and power consumption are also important issues regarding screen size. I’d vote for a range of screens between 3.5 inch (iPhone) and 8.9 inch (eeePC 900) with a rather high resolution. A multitouch version of the XO’s screen could be a major contribution.

In terms of both audio and screen features, some consideration should be given to adaptive technologies. Most of us take for granted that “almost anyone” can hear and see. We usually don’t perceive major issues in the fact that “personal computing” typically focuses on visual and auditory stimuli. But if these devices truly are “for the rest of us,” they could help empower visually- or hearing-impaired individuals, who are often marginalized. This is especially relevant in the logic of humanitarianism.

HftRoU need as much autonomy from a power source as possible, both in terms of the number of hours devices can be operated without being connected to a power source and in terms of flexibility in power sources. Power management is a major technological issue with portable, handheld, and mobile devices. Engineers are hard at work, trying to find as many solutions to this issue as they can. This was, obviously, a major area of research for the OLPC. But I’m not even sure the solutions they have found are the only relevant ones for what I imagine HftRoU to be.

GPS could have interesting uses, but doesn’t seem very cost-effective. Other “wireless positioning systems” (à la Skyhook) might represent a more rational option. Still, I think positioning systems are one of the next big things. Not only for navigation or for location-based targeting. But for a set of “unintended uses” which are the hallmark of truly disruptive technology. I still remember an article (probably in the venerable Wired magazine) about the use of GPS/GIS for research into climate change. Such “unintended uses” are, in my mind, much closer to the constructionist ideal than the OLPC XO’s unified design can ever get.

Though a camera seems to be a given in any portable or mobile device (even the OLPC XO has one), I’m not yet that clear on how important it really is. Sure, people like taking pictures or filming things. Yes, pictures taken through cellphones have had a lasting impact on social and cultural events. But I still get the feeling that the main reason cameras are included on so many devices is for impulse buying, not as a feature to be used so frequently by all users. Also, standalone cameras probably have a rather high level of penetration already and it might be best not to duplicate this type of feature. But, of course, a camera could easily be a differentiating factor between two devices in the same category. I don’t think that cameras should be absent from HftRoU. I just think it’s possible to have “killer apps” without cameras. Again, I’m biased.

Apart from networking/connectivity uses, Bluetooth seems like a luxury. Sure, it can be neat. But I don’t feel it adds that much functionality to HftRoU. Yet again, I could be proven wrong. Especially if networking and other inter-device communication are combined. At some abstract level, there isn’t that much difference between exchanging data across a network and controlling a device with another device.

Yes, I do realize I pretty much described an iPod touch (or an iPhone without camera, Bluetooth, or cellphone fees). I’ve been lusting over an iPod touch since September and it does colour my approach. I sincerely think the iPod touch could serve as an inspiration for a new device type. But, again, I care very little about which company makes that device. I don’t even care about how open the operating system is.

As long as our minds are open.

Free As In Beer: The Case for No-Cost Software

To summarize the situation:

  1. Most of the software for which I paid a fee, I don’t really use.
  2. Most of the software I really use, I haven’t paid a dime for.
  3. I really like no-cost software.
  4. You might want to call me “cheap” but, if you’re developing “consumer software,” you may need to pay attention to the way people like me think about software.

No, I’m not talking about piracy. Piracy is wrong on a very practical level (not to mention legal and moral issues). Piracy and anti-piracy protection are in a dynamic that I don’t particularly enjoy. In some ways, forms of piracy are “ruining it for everyone.” So this isn’t about pirated software.

I’m not talking about “Free/Libre/Open Source Software” (FLOSS) either. I tend to relate to some of the views held by advocates of “Free as in Speech” or “Open” developments but I’ve had issues with FLOSS projects, in the past. I will gladly support FLOSS in my own ways but, to be honest, I ended up losing interest in some of the most promising projects out there. Not saying they’re not worth it. After all, I do rely on many of those projects. But in talking about “no-cost software,” I’m not talking about Free, Libre, or Open Source development. At least, not directly.

Basically, I was thinking about the complex equation which, for any computer user, determines the cash value of a software application. Most of the time, this equation is somehow skewed. And I end up frustrated when I pay for software and almost giddy when I find good no-cost software.

An old but representative example of my cost-software frustration: QuickTime Pro. I paid for it a number of years ago, in preparation for a fieldwork trip. It seemed like a reasonable thing to do, especially given the fact that I was going to manipulate media files. When QuickTime was updated, my license stopped working. I was basically never able to use the QuickTime Pro features. And while it’s not a huge amount of money, the frustration of having paid for something I really didn’t need left me surprisingly bitter. It was a bad decision at that time so I’m now less likely to buy software unless I really need it and I really know how I will use it.

There’s an interesting exception to my frustration with cost-software: OmniOutliner (OO). I paid for it and have used it extensively for years. When I was “forced” to switch to Windows XP, OO was possibly the piece of software I missed the most from Mac OS X. And as soon as I was able to come back to the Mac, it’s one of the first applications I installed. But, and this is probably an important indicator, I don’t really use it anymore. Not because it lacks features I found elsewhere. But because I’ve had to adapt my workflow to OO-less conditions. I still wish there were an excellent cross-platform outliner for my needs. And, no, Microsoft OneNote isn’t it.

Now, I may not be a typical user. If the term weren’t so self-aggrandizing, I’d probably call myself a “Power User.” And, as I keep saying, I am not a coder. Therefore, I’m neither the prototypical “end user” nor the stereotypical “code monkey.” I’m just someone spending inordinate amounts of time in front of computers.

One dimension of my computer behavior which probably does put me in a special niche is that I tend to like trying out new things. Even more specifically, I tend to get overly enthusiastic about computer technology to then become disillusioned by said technology. Call me a “dreamer,” if you will. Call me “naïve.” Actually, “you can call me anything you want.” Just don’t call me to sell me things. 😉

Speaking of pressure sales. In a way, if I had truckloads of money, I might be a good target for software sales. But I’d be the most demanding user ever. I’d require things to work exactly like I expect them to work. I’d be exactly what I never am in real life: a dictator.

So I’m better off as a user of no-cost software.

I still end up making feature requests, on occasion. Especially with Open Source and other open development projects. Some developers might think I’m just complaining as I’m not contributing to the code base or offering solutions to a specific usage problem. Eh.

Going back to no-cost software. The advantage isn’t really that we, users, spend less money on the software distribution itself. It’s that we don’t really need to select the perfect software solution. We can just make do with what we have. Which is a huge “value-add proposition” in terms of computer technology, as counter-intuitive as this may sound to some people.

To break down a few no-cost options.

  • Software that came with your computer. With an Eee PC, iPhone, XO, or Mac, it’s actually an important part of the complete computing experience. Sure, there are always ways to expand the software offering. But the included software may become a big part of the deal. After all, the possibilities are already endless. Especially if you have ubiquitous Internet access.
  • Software which comes through a volume license agreement. This often works for Microsoft software, at least at large educational institutions. Even if you don’t like it so much, you end up using Microsoft Office because you have it on your computer for free and it does most of the things you want to do.
  • Software coming with a plan or paid service. Including software given by ISPs. These tend not to be “worth it.” Yet the principle (or “business model,” depending on which end of the deal you’re on) isn’t so silly. You already pay for a plan of some kind, you might as well get everything you need from that plan. Nobody (not even AT&T) has done it yet in such a way that it would be to everyone’s advantage. But it’s worth a thought.
  • “Webware” and other online applications. Call it “cloud computing” if you will (it was a buzzphrase, a few days ago). And it changes a lot of things. Not only does it simplify things like backup and migration, but it often makes for a seamless computer experience. When it works really well, the browser effectively disappears and you just work in a comfortable environment where everything you need (content, tools) is “just there.” This category is growing rather rapidly at this point but many tech enthusiasts were predicting its success a number of years ago. Typical forecasting, I guess.
  • Light/demo versions. These are actually less common than they once were, especially in terms of feature differentiation. Sure, you may still play the first few levels of a game in demo version and some “express” or “lite” versions of software are still distributed for free as teaser versions of more complete software. But, like the shareware model, demo and light software may seem to have become much less prominent a part of the typical computer user’s life than just a few years ago.
  • Software coming from online services. I’m mostly thinking about Skype but it’s a software category which would include any program with a desktop component (a “download”) and an online component, typically involving some kind of individual account (free or paid). Part subscription model, part “Webware companion.” Most of Google’s software would qualify (Sketchup, Google Earth…). If the associated “retail software” were free, I wouldn’t hesitate to put WoW in this category.
  • Actual “freeware.” Much freeware could be included in other categories but there’s still an idea of a “freebie,” in software terms. Sometimes, said freeware is distributed in view of getting people’s attention. Sometimes the freeware is just the result of a developer “scratching her/his own itch.” Sometimes it comes from lapsed shareware or even lapsed commercial software. Sometimes it’s “donationware” disguised as freeware. But, if only because there’s a “freeware” category in most software catalogs, this type of no-cost software needs to be mentioned.
  • “Free/Libre/Open Source Software.” Sure, I said earlier this was not what I was really talking about. But that was then and this is now. 😉 Besides, some of the most useful pieces of software I use do come from Free Software or Open Source. Mozilla Firefox is probably the best example. But there are many other worthy programs out there, including BibDesk, TeXShop, and FreeCiv. Though, to be honest, Firefox and Flock are probably the ones I use the most.
  • Pirated software (aka “warez”). While software piracy can technically let some users avoid the cost of purchasing a piece of software, the concept is directly tied with commercial software licenses. (It’s probably not piracy if the software distribution is meant to be open.) Sure, pirates “subvert” the licensing system for commercial software. But the software category isn’t “no-cost.” To me, there’s even a kind of “transaction cost” involved in the piracy. So even if the legal and ethical issues weren’t enough to exclude pirated software from my list of no-cost software options, the very practicalities of piracy put pirated software in the costly column, not in the “no-cost” one.

With all but the last category, I end up with most (but not all) of the software solutions I need. In fact, there are ways in which I’m better served now with no-cost software than I have ever been with paid software. I should probably make a list of these, at some point, but I don’t feel like it.

I mostly felt like assessing my needs, as a computer user. And though there always are many things I wish I could do but currently can’t, I must admit that I don’t really see the need to pay for much software.

Still… What I feel I need, here, is the “ultimate device.” It could be handheld. But I’m mostly thinking about a way to get ideas into a computer-friendly format. A broad set of issues about a very basic thing.

The spark for this blog entry was a reflection about dictation software. Not only have I been interested in speech technology for quite a while but I still bet that speech (recognition/dictation and “text-to-speech”) can become the killer app. I just think that speech hasn’t “come true.” It’s there, some people use it, the societal acceptance for it is likely (given cellphone penetration most anywhere). But its moment hasn’t yet come.

No-cost “text-to-speech” (TTS) software solutions do exist but are rather impractical. In the mid-1990s, I spent fifteen months doing speech analysis for a TTS research project in Switzerland. One of the best periods in my life. Yet, my enthusiasm for current TTS systems has been dampened. I wish I could be passionate about TTS and other speech technology again. Maybe the reason I’m not is that we don’t have a “voice desktop,” yet. But, for this voice desktop (voicetop?) to happen, we need high quality, continuous speech recognition. IOW, we need a “personal dictation device.” So, my latest 2008 prediction: we will get a voice device (smartphone?) which adapts to our voices and does very efficient and very accurate transcription of our speech. (A correlated prediction: people will complain about speech technology for a while before getting used to the continuous stream of public soliloquy.)

Dictation software is typically quite costly and complicated. Most users don’t see a need for dictation software, so they don’t see a need for speech technology in computing. Though I keep thinking that speech could improve my computing life, I’ve never purchased a speech processing package. Like OCR (which is also dominated by Nuance, these days), it seems to be the kind of thing which could be useful to everyone but ends up being limited to “vertical markets.” (As it so happens, I did end up buying an OCR program at some point and kept hoping my life would improve as the result of being able to transform hardcopies into searchable files. But I almost never used OCR, so my frustration with cost-software continues.)

Ah, well…

Crazy Predictions: Amazon Kindle

 

Yeah, I tend to get overly enthusiastic about new devices. And so does a large part of the “tech press.” But, once in a while, a device comes along which pretty much everyone predicts will fail. So, recently, I’ve been thinking about playing devil’s advocate with those predictions. Basically, stating that some device which seems to be doomed from the start (“a dud,” “another DOA product”) will in fact succeed. Kind of a creative exercise.

Case in point, Amazon’s just-released Kindle eBook reader: Amazon.com: Kindle: Amazon’s New Wireless Reading Device: Kindle Store

The consensus opinion seems to be that it’s “too little, too late” or that the product doesn’t meet its set goals. In other words, a big “hype factor” (hyperbolic language surrounding its release) for something which isn’t that revolutionary. Tech enthusiasts aren’t impressed. But they do get to think, yet again, about books from a technological standpoint.

I happen to think that the Kindle will likely fail. But if it does eventually succeed, what will I need to rethink?

  1. Screen readability trumps everything else.
    • I tend to read a lot of things (including student assignments) on computer screens. But many people keep saying that they can’t read from a computer screen for a very long period of time. If E Ink is in fact so much more readable than a computer screen that it makes a real difference, maybe the Kindle is one of those things you adopt once you try them.
  2. The hardcover’s form factor can work.
    • Looks like the Kindle is too big to fit in a pocket. “Conventional wisdom” (and experience with Newton MessagePad devices) says that handheld devices should fit in pockets. So, if the Kindle works, it means that the form factor isn’t an issue. And, in this case, there’d be some logic to it. Compared to a hardcover book, the Kindle is relatively small. And it’s incredibly small when compared to the number of books it could replace. I tend not to like hardcovers because of their form factor but having a single hardcover to replace any number of books and magazines could make me change my mind.
  3. There’s room for single-function devices.
    • One thing already being discussed about the Kindle is that multipurpose devices (say, Apple’s iPhone) can serve the “book-reading function” to a certain extent. If that is the case, then people are unlikely to spend as much on a device which only does one thing as on a device which can do a number of things. Yet “book-reading” is among the trickiest things computer-based technology can do, and a case is often made for a device which “does one thing and does it well.”
  4. Free wireless access is a “killer app,” and Sprint’s EVDO (used by the Kindle) could suffice, for now.
    • I tend to think a lot about free wireless connectivity, these days. In my mind, the stage seems to be set for the true “wireless revolution.” So I imagine convenient devices which do all sorts of neat things thanks to ubiquitous wireless access, either from cellphone networks or from computer networks. In fact, I keep imagining some kind of “cross-technology mesh network device” which could get connectivity through WiFi/WiMax and/or cellphone 3G, and redistribute it to other devices. Partly the model used for the OLPC’s XO, but brought to an even broader concept. Speeds are sufficient at this point for simple use and there could be ways to alleviate some bandwidth problems.
  5. People are willing to pay for restricted content.
    • I’m a proponent of Open Access and I really think openness is the direction where most Internet-manageable content is headed. But it’s quite possible that people are so passionate about some compelling content that they will be willing to pay for access to it, regardless of what else is available. In other words, if people really want to read some specific books, they are going to pay for the privilege of reading them when they want. That’s probably why some public libraries charge fees on best-sellers. I still don’t understand why people would need to pay to access blog content, but maybe paying for blog content will make blogs more “important.”
  6. Not needing a computer is a cool feature.
    • Some people simply don’t have computers, others only have access to public computers, yet others would prefer to leave computer use as a part of their work life. It’s quite likely that, as a standalone device, the Kindle could win the hearts of many people who would otherwise not buy any portable device. In fact, I kind of wish that other handheld devices were less reliant on computers. For instance, even MP3 players with wireless capabilities usually need to be connected to computers on occasion (though Microsoft’s new Zune firmware does eliminate the need for a computer to synchronise podcasts). The difference can be huge in terms of “peace of mind.” Forgot to add new content to your device? Easy, you can fetch it from anywhere.
  7. Battery life matters.
    • At this point, most handheld devices have pretty decent battery life in that you only have to recharge the batteries once a day. But, if the Kindle really does get 30 hours of battery life, it could have an excellent “peace of mind” factor. Forgot to plug in your device, last night? That’s ok, you still have a long time to go before the battery is drained. When you’re travelling for a few days, this could be really useful as it’s often annoying to have to recharge your devices on a regular basis. There’s also something to be said about non-volatile memory (that’s one reason I miss my Newton MessagePad).
  8. Design style need not be flashy.
    • The Kindle looks rather “clunky” from pictures but it seems that part of this might be on purpose. The device isn’t meant as a fashion statement. It’s supposed to be as “classy” as a book. Not sure the actual device really looks “classy” in anybody’s view but there’s something to be said about devices which “look serious.”
  9. People don’t need colour after all.
    • Grayscale displays have been replaced by colour displays in most handheld devices, including MP3 players and PDAs. But maybe colour isn’t that important for most people.
  10. Jeff Bezos is a neat fellow.
    • Maybe the current incarnation of the Kindle is just a way to test the waters, and Bezos has a broader strategy to take over not only the book world but also the whole “online content” world with the Kindle. So, maybe the next Kindle will do audio and/or video. And maybe, just maybe, it could become a full-fledged “Internet appliance.”

So… Just for fun, I’m predicting that the Kindle will be a huge success.