WordPress as Content Directory: Getting Somewhere

Using WordPress to build content directories and databases.

{I tend to ramble a bit. If you just want a step-by-step tutorial, you can skip to here.}

Woohoo!

I feel like I’ve reached a milestone in a project I’ve had in mind, ever since I learnt about Custom Post Types in WordPress 3.0: Using WordPress as a content directory.

The concept may not be so obvious to anyone else, but it’s very clear to me. And probably much clearer for anyone who has any level of WordPress skills (I’m still a kind of WP newbie).

Basically, I’d like to set something up through WordPress to make it easy to create, review, and publish entries in content databases. WordPress is now a Content Management System and the type of “content management” I’d like to enable has to do with something of a directory system.

Why WordPress? Almost glad you asked.

These days, several of the projects on which I work revolve around WordPress. By pure coincidence. Or because WordPress is “teh awsum.” No idea how representative my sample is. But I got to work on WordPress for (among other things): an academic association, an adult learners’ week, an institute for citizenship and social change, and some of my own learning-related projects.

There are people out there arguing about the relative value of WordPress and other Content Management Systems. Sometimes, WordPress may fall short of people’s expectations. Sometimes, the pro-WordPress rhetoric is strong enough to sound like fanboism. But the matter goes beyond marketshare, opinions, and preferences.

In my case, WordPress just happens to be a rather central part of my life, these days. To me, it’s both a question of WordPress being “the right tool for the job” and the work I end up doing being appropriate for WordPress treatment. More than a simple causality (“I use WordPress because of the projects I do” or “I do these projects because I use WordPress”), it’s a complex interaction which involves diverse tools, my skillset, my social networks, and my interests.

Of course, WordPress isn’t perfect, nor is it ideal for every situation. There are cases in which it might make much more sense to use another tool (Twitter, TikiWiki, Facebook, Moodle, Tumblr, Drupal…). And there are several things I wish WordPress did more elegantly (such as integrating all dimensions in a single tool). But I frequently end up with WordPress.

Here are some things I like about WordPress:

This last one is where the choice of WordPress for content directories starts making the most sense. Not only is it easy for me to use and build on WordPress but the learning curves are such that it’s easy for me to teach WordPress to others.

A nice example is the post editing interface (same in the software and service). It’s powerful, flexible, and robust, but it’s also very easy to use. It takes a few minutes to learn and is quite sufficient to do a lot of work.

This is exactly what gets me to the core idea for my content directories.

I emailed the following description to the digital content editor for the academic organization for which I want to create such content directories:

You know the post editing interface? What if instead of editing posts, someone could edit other types of contents, like syllabi, calls for papers, and teaching resources? What if fields were pretty much like the form I had created for [a committee]? What if submissions could be made by people with a specific role? What if submissions could then be reviewed by other people, with another role? What if display of these items were standardised?

Not exactly sure how clear my vision was in her head, but it’s very clear for me. And it came from different things I’ve seen about custom post types in WordPress 3.0.

For instance, the following post has been quite inspiring:

I almost had a drift-off moment.

But I wasn’t able to wrap my head around all the necessary elements. I perused and read a number of things about custom post types, I tried a few things. But I always got stuck at some point.

Recently, a valuable piece of the puzzle was provided by Kyle Jones (whose blog I follow because of his work on WordPress/BuddyPress in learning, a focus I share).

Setting up a Staff Directory using WordPress Custom Post Types and Plugins | The Corkboard.

As I discussed in the comments to this post, it contained almost everything I needed to make this work. But the two problems Jones mentioned were major hurdles, for me.

After reading that post, though, I decided to investigate further. I eventually got some material which helped me a bit, but it still wasn’t sufficient. Until tonight, I kept running into obstacles which made the process quite difficult.

Then, while trying to solve a problem I was having with Jones’s code, I stumbled upon the following:

Rock-Solid WordPress 3.0 Themes using Custom Post Types | Blancer.com Tutorials and projects.

This post was useful enough that I created a shortlink for it, so I could have it on my iPad and follow along: http://bit.ly/RockSolidCustomWP

By itself, it might not have been sufficient for me to really understand the whole process. And, following that tutorial, I replaced the first bits of code with use of the neat plugins mentioned by Jones in his own tutorial: More Types, More Taxonomies, and More Fields.

I’ve now played with this a few times, so I can provide an actual tutorial. This time, I’m doing the whole thing “from scratch” and writing down all the steps.

This is with the WordPress 3.0 blogging software installed on a Bluehost account. (The WordPress.com blogging service doesn’t support custom post types.) I use the default Twenty Ten theme as a parent theme.

Since I use WordPress Multisite, I’m creating a new test blog (in Super Admin->Sites, “Add New”). Of course, this isn’t required, but it helps me make sure the process is reproducible.

Since I’ve already installed the three “More Plugins” (but they’re not “network activated”), I go to the Plugins menu to activate each of them.

I can now create the new “Product” type, based on that Blancer tutorial. To do so, I go to the “More Types” Settings menu, I click on “Add New Post Type,” and I fill in the following information: post type names (singular and plural) and the thumbnail feature. Other options are set by default.

I also set the “Permalink base” in Advanced settings. Not sure it’s required but it seems to make sense.

I click on the “Save” button at the bottom of the page (forgot to do this, the last time).
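
For the record, my understanding is that More Types is essentially a front-end for WordPress’s own register_post_type() function. If I were to do the same thing in code (in a child theme’s functions.php, say), it might look roughly like this. This is only a sketch, not the plugin’s actual code, and the function name and labels are my own:

<?php
// Sketch: registering the "Product" type in code instead of through More Types.
add_action( 'init', 'product_dir_register_type' );

function product_dir_register_type() {
	register_post_type( 'product', array(
		'labels' => array(
			'name'          => 'Products',
			'singular_name' => 'Product',
		),
		'public'   => true,
		'supports' => array( 'title', 'editor', 'excerpt', 'thumbnail' ),
		'rewrite'  => array( 'slug' => 'product' ), // the "Permalink base" from the Advanced settings
	) );
}
?>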

I then go to the “More Fields” settings menu to create a custom box for the post editing interface.

I add the box title and change the “Use with post types” options (no use in having this in posts).

(Didn’t forget to click “save,” this time!)

I can now add the “Price” field. To do so, I need to click on the “Edit” link next to the “Product Options” box I just created and then click “Add New Field.”

I add the “Field title” and “Custom field key”:

I set the “Field type” to Number.

I also set the slug for this field.
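
If I understand correctly, the “Price” field created through More Fields ends up as a regular custom field (post meta) with the “price” key, which is what the theme code further down reads. A bare-bones, plugin-less version of that box might look something like this (a sketch; the function names are mine, and a real version would add a nonce and capability check):

<?php
// Sketch: a minimal "Product Options" box with a "price" custom field,
// without the More Fields plugin.
add_action( 'add_meta_boxes', 'product_dir_price_box' );
add_action( 'save_post', 'product_dir_price_save' );

function product_dir_price_box() {
	add_meta_box( 'product_options', 'Product Options', 'product_dir_price_field', 'product' );
}

function product_dir_price_field( $post ) {
	$price = get_post_meta( $post->ID, 'price', true );
	echo '<label>Price <input type="text" name="price" value="' . esc_attr( $price ) . '" /></label>';
}

function product_dir_price_save( $post_id ) {
	// A real version would verify a nonce and check user capabilities here.
	if ( isset( $_POST['price'] ) ) {
		update_post_meta( $post_id, 'price', $_POST['price'] );
	}
}
?>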

I then go to the “More Taxonomies” settings menu to add a new product classification.

I click “Add New Taxonomy,” and fill in taxonomy names, allow permalinks, add slug, and show tag cloud.

I also specify that this taxonomy is only used for the “Product” type.

(Save!)
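
Same idea here: More Taxonomies appears to be a wrapper around register_taxonomy(). A code equivalent might look roughly like this (again a sketch, with my own labels; “catalog” matches the slug used in the custom columns further down):

<?php
// Sketch: registering the "Catalog" classification in code instead of through More Taxonomies.
add_action( 'init', 'product_dir_register_taxonomy' );

function product_dir_register_taxonomy() {
	register_taxonomy( 'catalog', 'product', array(
		'labels' => array(
			'name'          => 'Catalogs',
			'singular_name' => 'Catalog',
		),
		'hierarchical'  => true,  // lets terms have parents, as in the "Add new" box
		'show_tagcloud' => true,
		'rewrite'       => array( 'slug' => 'catalog' ),
	) );
}
?>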

Now, the rest is more directly taken from the Blancer tutorial. But instead of copy-paste, I added the files directly to a Twenty Ten child theme. The files are available in this archive.

Here’s the style.css code:

/*
Theme Name: Product Directory
Theme URI: http://enkerli.com/
Description: A product directory child theme based on Kyle Jones, Blancer, and Twenty Ten
Author: Alexandre Enkerli
Version: 0.1
Template: twentyten
*/

@import url("../twentyten/style.css");

The code for functions.php:

<?php
/**
 * ProductDir functions and definitions
 *
 * @package WordPress
 * @subpackage Product_Directory
 * @since Product Directory 0.1
 */

/* Custom Columns */
add_filter("manage_edit-product_columns", "prod_edit_columns");
add_action("manage_posts_custom_column", "prod_custom_columns");

function prod_edit_columns($columns){
		$columns = array(
			"cb" => "<input type=\"checkbox\" />",
			"title" => "Product Title",
			"description" => "Description",
			"price" => "Price",
			"catalog" => "Catalog",
		);

		return $columns;
}

function prod_custom_columns($column){
		global $post;
		switch ($column)
		{
			case "description":
				the_excerpt();
				break;
			case "price":
				$custom = get_post_custom();
				echo $custom["price"][0];
				break;
			case "catalog":
				echo get_the_term_list($post->ID, 'catalog', '', ', ','');
				break;
		}
}
?>

And the code in single-product.php:

<?php
/**
 * Template Name: Product - Single
 * The Template for displaying all single products.
 *
 * @package WordPress
 * @subpackage Product_Dir
 * @since Product Directory 1.0
 */

get_header(); ?>
<div id="container">
<div id="content">
<?php the_post(); ?>

<?php
	$custom = get_post_custom($post->ID);
	$price = "$" . $custom["price"][0];
?>
<div id="post-<?php the_ID(); ?>">
<h1 class="entry-title"><?php the_title(); ?> - <?=$price?></h1>
<div class="entry-meta">
<div class="entry-content">
<div style="width: 30%; float: left;">
			<?php the_post_thumbnail( array(100,100) ); ?>
			<?php the_content(); ?></div>
<div style="width: 10%; float: right;">
			Price
			<?=$price?></div>
</div>
</div>
</div>
</div><!-- #content -->
</div><!-- #container -->

<?php get_footer(); ?>

That’s it!

Well, almost..

One thing is that I have to activate my new child theme.

So, I go to the “Themes” Super Admin menu and enable the Product Directory theme (this step isn’t needed with single-site WordPress).

I then activate the theme in Appearance->Themes (in my case, on the second page).

One thing I’ve learnt the hard way is that the permalink structure may not work if I don’t go and “nudge it.” So I go to the “Permalinks” Settings menu:

And I click on “Save Changes” without changing anything. (I know, it’s counterintuitive. And it’s even possible that it could work without this step. But I spent enough time scratching my head about this one that I find it important.)
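
My (possibly imperfect) understanding is that this “nudge” simply makes WordPress flush and rebuild its rewrite rules so the new “product” permalinks are recognized. For what it’s worth, the code equivalent seems to be a single call, best made once (say, on theme or plugin activation) rather than on every page load:

<?php
// Sketch: programmatic equivalent of re-saving the Permalinks settings.
// Should run once, after the "product" type and "catalog" taxonomy are registered.
flush_rewrite_rules();
?>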

Now, I’m done. I can create new product posts by clicking on the “Add New” Products menu.

I can then fill in the product details, using the main WYSIWYG box as a description, the “price” field as a price, the “featured image” as the product image, and a taxonomy as a classification (by clicking “Add new” for any tag I want to add, and choosing a parent for some of them).

Now, in the product management interface (available in Products->Products), I can see the proper columns.

Here’s what the product page looks like:

And I’ve accomplished my mission.

The whole process can be achieved rather quickly, once you know what you’re doing. As I’ve been told (by the ever-so-helpful Justin Tadlock of Theme Hybrid fame, among other things), it’s important to get the data down first. While I agree with the statement and its implications, I needed to understand how to build these things from start to finish.

In fact, getting the data right is made relatively easy by my background as an ethnographer with a strong interest in cognitive anthropology, ethnosemantics, folk taxonomies (aka “folksonomies”), ethnography of communication, and ethnoscience. In other words, “getting the data” is part of my expertise.

The more technical aspects, however, were a bit difficult. I understood most of the principles and I could trace several puzzle pieces, but there’s a fair deal I didn’t know or hadn’t done myself. Putting together bits and pieces from diverse tutorials and posts didn’t work so well because it wasn’t always clear what went where or what had to remain unchanged in the code. I struggled with many details, such as the fact that Kyle Jones’s code for custom columns wasn’t working: first because it was incorrectly copied, then because I was using it on a post type which was “officially” based on pages (instead of posts). Having forgotten the part about “touching” the Permalinks settings, I was unable to get a satisfying output using Jones’s explanations (the fact that he doesn’t use titles didn’t really help me, in this specific case). So it was much harder for me to figure out how to do this than it now is for me to build content directories.

I still have some technical issues to face. Some are near-essential, such as a way to create archive templates for custom post types. Other issues have to do with features I’d like my content directories to have, such as clearly defined roles (the “More Plugins” support roles, but I still need to find out how to define them in WordPress). Yet other issues are likely to come up as I start building content directories, install them in specific contexts, teach people how to use them, observe how they’re being used and, most importantly, get feedback about their use.
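
For the archive issue, one workaround I’ve been considering (just a sketch, based on my current understanding of WP_Query; the template name and markup are my own) is a page template in the child theme which loops through all “product” posts:

<?php
/**
 * Template Name: Product Archive (sketch)
 * A child-theme page template listing all "product" posts.
 */
get_header(); ?>
<div id="container">
<div id="content">

<?php
// Query every published "product"; a real version might paginate instead.
$products = new WP_Query( array( 'post_type' => 'product', 'posts_per_page' => -1 ) );
while ( $products->have_posts() ) : $products->the_post(); ?>
	<h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
	<?php the_excerpt(); ?>
<?php endwhile; wp_reset_postdata(); ?>

</div><!-- #content -->
</div><!-- #container -->
<?php get_footer(); ?>

Assigning that template to a blank page would at least give the products a browsable listing until I find a cleaner way.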

But I’m past a certain point in my self-learning journey. I’ve built my confidence (an important but often dismissed component of gaining expertise and experience). I found proper resources. I understood what components were minimally necessary or required. I succeeded in implementing the system and testing it. And I’ve written enough about the whole process that things are even clearer for me.

And, who knows, I may get feedback, questions, or advice..

Getting Started with Sorbet, Yogurt, Juice

Got a sorbet maker yesterday and I made my first batch today.

Had sent a message on Chowhound:

Store-Bought Simple Syrup in Sorbet? – Home Cooking – Chowhound.

Received some useful answers but didn’t notice them until after I made my sorbet.

Here’s my follow-up:

Hadn’t noticed replies were added (thought I’d receive notifications). Thanks a lot for all the useful advice!

And… it did work.

My lemon-ginger sorbet was a bit soft on its way out of the machine, but the flavour profile is exactly what I wanted.

I used almost a liter of this store-bought syrup with more than a half-liter of a ginger-lemon concoction I made (lemon juice and food-processed peeled ginger), all of it blended together. The resulting liquid was more than the 1.5 quarts my sorbet maker can hold, so I reserved a portion to mix with a syrup made from ginger peel infused in a brown sugar and water solution. I also made a simple syrup to which I added a good quantity of lemon zest. These two syrups I pressure-cooked and will use in later batches.

Judging the amount of sugar may be tricky but, in this case, I decided to go by taste. It’s not sweet enough according to some who tried it but it’s exactly what I wanted. Having this simple syrup on hand (chilled) was quite helpful, as I could adjust directly by adding syrup to the mix.

One thing is for sure, I’ll be doing an apple-ginger sorbet soon. The ginger syrup I made just cries out “apple sauce sorbet.” Especially the solids (which I didn’t keep in the syrup). I might even add some homemade hard cider that I like.

As for consistency, it’s not even a problem but I get the impression that the sorbet will get firmer as it spends time in the freezer. It’s been there for almost two hours already and I should be able to leave it there for another two hours before I bring it out (actually, traveling with it). The machine’s book mentions two hours in the freezer for a firmer consistency and I’ve seen several mentions of “ripening” so it sounds like it’d make sense to do this.

Also, the lemon-ginger mixture I used wasn’t chilled, prior to use. It may have had an impact on the firmness, I guess…

As a first attempt at sorbet-making, it’s quite convincing. I’ve had a few food-related hobbies, in the recent past, and sorbet-making might easily take some space among them, especially if results are this satisfying without effort. I was homebrewing beer until recently (and will probably try beer sorbets, as I’ve tasted some nice ones made by friends and I have a lot of leftover beer from the time I was still brewing). By comparison to homebrewing, sorbet-making seems to be a (proverbial) “piece of cake.”

Dunno if such a long tirade violates any Chow forum rule but I just wanted to share my first experience.

Thanks again for all your help!

Alex

Among links given in this short thread was the sorbet section of the French Cooking guide on About.com. I’ve already had good experiences with About’s BBQ Guide. So my preliminary impression of these sorbet recipes is rather positive. And, in fact, they’re quite inspiring.

A sampling:

  • Apple and Calvados
  • Beet
  • Cardamom/Pear
  • Pomegranate/Cranberry
  • Spiced Apple Cider

It’s very clear that, with sorbet, the only limit is your imagination. I’ll certainly make some savoury sorbets, including this spiced tomato one. Got a fairly large number of ideas for interesting combinations. But, perhaps unsurprisingly, it seems that pretty much everything has been tried. Hibiscus flowers, hard cider, horchata, teas…

And using premade syrups is probably a good strategy, especially if mixed with fresh purées, possibly made with frozen fruit. Being able to adjust sweetness is a nice advantage. Although, one comment in that Chowhound thread mentioned a kind of “homemade hydrometer” technique (clean egg floating in the liquid…), the notion apparently being that the quantity of sugar is important in and of itself (for texture and such) and you can adjust flavour with other ingredients, including acids.

One reason I like sorbets so much is that I’m lactose intolerant. More specifically: I discovered fairly recently that I was lactose intolerant. So I’m not completely weaned from ice cream. It’s not the ideal time to start making sorbet as the weather isn’t that warm, at this point. But I’ve never had an objection to sorbet in cold weather.

As happened with other hobbies, I’ve been having some rather crazy ideas. And chances are that this won’t be my last sorbet making machine.

Nor will it only be a sorbet machine. While I have no desire to make ice cream in it, it’s already planned as a “froyo” maker and a “frozen soy-based dessert” maker. In some cases, I actually preferred frozen yogurt and frozen tofu to ice cream (maybe because my body was telling me to avoid lactose). And I’m getting a yogurt maker soon, which will be involved in all sorts of yogurt-based experiments from “yogurt cheese” (lebneh) to soy-milk “yogurt” and even whey-enhanced food (from the byproduct from lebneh). So, surely, frozen yogurt will be involved.

And I didn’t mention my juicer, here (though I did mention it elsewhere). Not too long ago, I was using a juicer on a semi-regular basis and remember how nice the results were, sometimes using unlikely combinations (cucumber/pineapple being a friend’s favourite which was relatively convincing). A juicer will also be useful in preparing sorbets, I would guess. Sure, it’s probably a good idea to have a thicker base than juice for a firm sorbet, but I might actually add banana, guava, or fig to some sorbets. Besides, the solids left behind by the juice extraction can be made into interesting things too and possibly added back to the sorbet base. I can easily imagine how it’d work with apples and some vegetables.

An advantage of all of this is that it’ll directly increase the quantity of fruits and vegetables I consume. Juices are satisfying and can be made into soups (which I also like). Yogurt itself I find quite appropriate in my diet. And there surely are ways to have low-sugar sorbet. These are all things I enjoy on their own. And they’re all extremely easy to make (I’ve already made yogurt and juice, so I don’t foresee any big surprise). And they all fit in a lactose-free (or, at least, low-lactose) diet.

Food is fun.

Actively Reading: Organic Ideas for Startups

Annotations on Paul Graham’s Organic Startup Ideas.

Been using Diigo as a way to annotate online texts. In this case, I was as interested in the tone as in the text itself. At the same time, I kept thinking about things which seem to be missing from Diigo.

One thing I like about this text is its tone. There’s an honesty, an ingenuousness that I find rare in this type of writing.

  • startup ideas
    • The background is important, in terms of the type of ideas about which we’re constructing something.
  • what do you wish someone would make for you?
    • My own itch has to do with Diigo, actually. There’s a lot I wish Diigo would make for me. I may be perceived as an annoyance, but I think my wishlist may lead to something bigger and possibly quite successful.
    • The difference between this question and the “scratch your own itch” principle seems significant, and this distinction may have some implications in terms of success: we’re already talking about others, not just running ideas in our own head.
  • grow organically
    • The core topic of the piece, put in a comparative context. The comparison isn’t the one people tend to make and one may argue about the examples used. But the concept of organic ideas is fascinating and inspiring.
  • you decide, from afar,
    • What we call, in anthropology, the “armchair” approach. Also known as “backbenching.” For this to work, you need to have a deep knowledge of the situation, which is part of the point in this piece. Nice that it’s not demonizing this position but putting it in context.
  • Apple
    was the first type
    • One might argue that it was a hybrid case. Although, it does sound like the very beginnings of Apple weren’t about “thinking from afar.”
  • class of users other than you
    • Since developers are part of a very specific “class” of people, this isn’t an insignificant way to phrase it.
  • They still rely on this principle today, incidentally.
    The iPhone is the phone Steve Jobs wants.
    • Apple tends to be perceived in a different light. According to many people, it’s the “textbook example” of a company where decisions are made without concerns for what people need. “Steve Jobs uses a top-down approach,” “They don’t even use focus groups,” “They don’t let me use their tools the way I want to use them.” But we’re not talking about the same distinction between top-down and bottom-up. Though “organic ideas” seem to imply that it’s a grassroots/bottom-up phenomenon, the core distinction isn’t about the origin of the ideas (from the “top,” in both cases) but on the reasoning behind these ideas.
  • We didn’t need this software ourselves.
    • Sounds partly like a disclaimer but this approach is quite common and “there’s nothing wrong with it.”
  • comparatively old
    • Age and life experience make for an interesting angle. It’s not that this strategy needs people of a specific age to work. It’s that there’s a connection between one’s experience and the way things may pan out.
  • There is no sharp line between the two types of ideas,
    • Those in the “engineering worldview” might go nuts, at this point. I can hear the claims of “hand waving.” But we’re talking about something complex, here, not a merely complicated problem.
  • Apple type
    • One thing to note in the three examples here: they’re all made by pairs of guys. Jobs and Woz, Gates and Allen, Page and Brin. In many cases, the formula might be that one guy (or gal, one wishes) comes up with ideas knowing that the other can implement them. Again, it’s about getting somebody else to build it for you, not about scratching your own itch.
  • Bill Gates was writing something he would use
    • Again, Gates may not be the most obvious example, since he’s mostly known for another approach. It’s not inaccurate to say he was solving his own problem, at the time, but it may not be that convincing as an example.
  • Larry and Sergey when they wrote the first versions of Google.
    • Although, the inception of the original ideas was academic in context. They weren’t solving a search problem or thinking about monetization. They were discovering the power of CitationRank.
  • generally preferable
    • Nicely relativistic.
  • It takes experience
    to predict what other people will want.
    • And possibly a lot more. Interesting that he doesn’t mention empirical data.
  • young founders
    • They sound like a fascinating group to observe. They do wonders when they open up to others, but they seem to have a tendency to impose their worldviews.
  • I’d encourage you to focus initially on organic ideas
    • Now, this advice sounds more like the “scratch your own itch” advocacy. But there’s a key difference in that it’s stated as part of a broader process. It’s more of a “walk before you run” or “do your homework” piece of advice, not a “you can’t come up with good ideas if you just think about how people will use your tool.”
  • missing or broken
    • It can cover a lot, but it’s couched in terms of the typical “problem-solving” approach at the centre of the engineering worldview. Since we’re talking about developing tools, it makes sense. But there could be a broader version, admitting for dreams, inspiration, aspiration. Not necessarily of the “what would make you happy?” kind, although there’s a lot to be said about happiness and imagination. You’re brainstorming, here.
  • immediate answers
    • Which might imply that there’s a second step. If you keep asking yourself the same question, you may be able to get a very large number of ideas. The second step could be to prioritize them but I prefer “outlining” as a process: you shuffle things together and you group some ideas to get one which covers several. What’s common between your need for a simpler way to code on the Altair and your values? Why do you care so much about algorithms instead of human encoding?
  • You may need to stand outside yourself a bit to see brokenness
    • Ah, yes! “Taking a step back,” “distancing yourself,” “seeing the forest for the trees”… A core dimension of the ethnographic approach and the need for a back-and-forth between “inside” and “outside.” There’s a reflexive component in this “being an outsider to yourself.” It’s not only psychological, it’s a way to get into the social, which can lead to broader success if it’s indeed not just about scratching your own itch.
  • get used to it and take it for granted
    • That’s enculturation, to you. When you do things a certain way simply because “we’ve always done them that way,” you may not create these organic ideas. But it’s a fine way to do your work. Asking yourself important questions about what’s wrong with your situation works well in terms of getting new ideas. But, sometimes, you need to get some work done.
  • a Facebook
    • Yet another recontextualized example. Zuckerberg wasn’t trying to solve that specific brokenness, as far as we know. But Facebook became part of what it is when Zuck began scratching that itch.
  • organic startup ideas usually don’t
    seem like startup ideas at first
    • Which gets us to the pivotal importance of working with others. Per this article, VCs and “angel investors,” probably. But, in some of the cases cited, those we tend to forget, like Paul Allen, Narendra, and the Winklevosses.
  • end up making
    something of value to a lot of people
    • Trial and error, it’s an iterative process. So you must recognize errors quickly and not invest too much effort in a specific brokenness. Part of this requires maturity.
  • something
    other people dismiss as a toy
    • The passage on which Gruber focused and an interesting tidbit. Not that central, come to think of it. But it’s important to note that people’s dismissive attitude may be misled, that “toys” may hide tools, that it’s probably a good idea not to take all feedback to heart…
  • At this point, when someone comes to us with
    something that users like but that we could envision forum trolls
    dismissing as a toy, it makes us especially likely to invest.
  • the best source of organic ones
    • Especially to investors. Potentially self-serving… in a useful way.
  • they’re at the forefront of technology
    • That part I would dispute, actually. Unless we talk about a specific subgroup of young founders and a specific set of tools. Young founders tend to be oblivious to a large field in technology, including social tools.
  • they’re in a position to discover
    valuable types of fixable brokenness first
    • The focus on fixable brokenness makes sense if we’re thinking exclusively through the engineering worldview, but it’s at the centre of some failures like the Google Buzz launch.
  • you still have to work hard
    • Of the “inspiration shouldn’t make us forget perspiration” kind. Makes for a more thoughtful approach than the frequent “all you need to do…” claims.
  • I’d encourage anyone
    starting a startup to become one of its users, however unnatural it
    seems.
    • Not merely an argument for dogfooding. It’s deeper than that. Googloids probably use Google tools but they didn’t actually become users. They’re beta testers with a strong background in troubleshooting. Not the best way to figure out what users really want or how the tool will ultimately fail.
  • It’s hard to compete directly with open source software
    • Open Source as competition isn’t new as a concept, but it takes time to seep in.
  • there has to be some part
    you can charge for
    • The breach through which old-school “business models” enter with little attention paid to everything else. To the extent that much of the whole piece might crumble from pressure built up by the “beancounter” worldview. Good thing he acknowledges it.

No Office Export in Keynote/Numbers for iPad?

Sounds like iWork for iPad will export to Word but not to PowerPoint or Excel.

To be honest, I’m getting even more excited about the iPad. Not that we get that much more info about it, but:

For one thing, the Pages for iPad webpage is explicitly stating Word support:

Attach them to an email as Pages files for Mac, Microsoft Word files, or PDF documents.

Maybe this is because Steve Jobs himself promised it to Walt Mossberg?
Thing is, the equivalent pages about Keynote for iPad and about Numbers for iPad aren’t so explicit:

The presentations you create in Keynote on your iPad can be exported as Keynote files for Mac or PDF documents

and…

To share your work, export your spreadsheet as a Numbers file for Mac or PDF document

Not a huge issue, but it seems strange that Apple would have such an “export to Microsoft Office” feature on only one of the three “iWork for iPad” apps. Now, the differences in the way exports are described may not mean that Keynote won’t be able to export to Microsoft PowerPoint or that Numbers won’t be able to export to Microsoft Excel. After all, these texts may have been written at different times. But it does sound like PowerPoint and Excel will be import-only, on the iPad.

Which, again, may not be that big an issue. Maybe iWork.com will work well enough for people’s needs. And some other cloud-based tools do support Keynote. (Though Google Docs and Zoho Show don’t.)

The reason I care is simple: I do share most of my presentation files. Either to students (as resources on Moodle) or to the whole wide world (through Slideshare). My desktop outliner of choice, OmniOutliner, exports to Keynote and Microsoft Word. My ideal workflow would be to send, in parallel, presentation files to Keynote for display while on stage and to PowerPoint for sharing. The Word version could also be useful for sharing.

Speaking of presenting “slides” on stage, I’m also hoping that the “iPad Dock Connector to VGA Adapter” will support “presenter mode” at some point (though it doesn’t seem to be the case, right now). I also dream of a way to control an iPad presentation with some kind of remote. In fact, it’s not too hard to imagine it as an iPod touch app (maybe made by Appiction, down in ATX).

To be clear: my “presentation files” aren’t really about presenting so much as they are a way to package and organize items. Yes, I use bullet points. No, I don’t try to make the presentation sexy. My presentation files are acting like cue cards and like whiteboard snapshots. During a class, I use the “slides” as a way to keep track of where I planned the discussion to go. I can skip around, but it’s easier for me to get at least some students focused on what’s important (the actual depth of the discussion) because they know the structure (as “slides”) will be available online. Since I also podcast my lectures, it means that they can go back to all the material.

I also use “slides” to capture things we build in class, such as lists of themes from the readings or potential exam questions.  Again, the “whiteboard” idea. I don’t typically do the same thing during a one-time talk (say, at an unconference). But I still want to share my “slides,” at some point.

So, in all of these situations, I need a file format for “slides.” I really wish there were a format which could work directly out of the browser and could be converted back and forth with other formats (especially Keynote, OpenOffice, and PowerPoint). I don’t need anything fancy. I don’t even care about transitions, animations, or even inserting pictures. But, despite some friends’ attempts at making me use open solutions, I end up having to use presentation files.

Unfortunately, at this point, PowerPoint is the de facto standard for presentation files. So I need it, somehow. Not that I really need PowerPoint itself. But it’s still the only format I can use to share “slides.”

So, if Keynote for iPad doesn’t export directly to PowerPoint, it means that I’ll have to find another way to make my workflow fit.

Ah, well…

Why I Need an iPad

I’m one of those who feel the iPad is the right tool for the job.

This is mostly meant as a reply to this blogthread. But it’s also more generally about my personal reaction to Apple’s iPad announcement.

Some background.

I’m an ethnographer and a teacher. I read a fair deal, write a lot of notes, and work in a variety of contexts. These days, I tend to spend a good amount of time in cafés and other public places where I like to work without being too isolated. I also commute using public transit, listen to lots of podcasts, and create my own. I’m also very aural.

I’ve used a number of PDAs, over the years, from a Newton MessagePad 130 (1997) to a variety of PalmOS devices (until 2008). In fact, some people readily associated me with PDA use.

As soon as I learnt about the iPod touch, I needed one. As soon as I heard about the SafariPad, I wanted one. I’ve been an intense ‘touch user since the iPhone OS 2.0 release and I’m a happy camper.

(A major reason I never bought an iPhone, apart from price, is that it requires a contract.)

In my experience, the ‘touch is the most appropriate device for all sorts of activities which are either part of another activity (reading during a commute) or are simply too short in duration to constitute an actual “computer session.” You don’t “sit down to work at your ‘touch” the way you might sit in front of a laptop or desktop screen. This works great for “looking up stuff” or “checking email.” It also makes a lot of sense during commutes in crowded buses or metros.

In those cases, the iPod touch is almost ideal. Ubiquitous access to Internet would be nice, but that’s not a deal-breaker. Alternative text-input methods would help in some cases, but I do end up being about as fast on my ‘touch as I was with Graffiti on PalmOS.

For other tasks, I have a Mac mini. Sure, it’s limited. But it does the job. In fact, I have no intention of switching for another desktop and I even have an eMachines collecting dust (it’s too noisy to make a good server).

What I miss, though, is a laptop. I used an iBook G3 for several years and loved it. A little while later, I was able to share a MacBook with somebody else, and it was a wonderful experience. I even got to play with the OLPC XO for a few weeks. That one was not so pleasant an experience but it did give me a taste for netbooks. And it made me think about other types of iPhone-like devices. Especially in educational contexts. (As I mentioned, I’m a teacher.)

I’ve been laptop-less for a while, now. And though my ‘touch replaces it in many contexts, there are still times when I’d really need a laptop. And these have to do with what I might call “mobile sessions.”

For instance: liveblogging a conference or meeting. I’ve used my ‘touch for this very purpose on a good number of occasions. But it gets rather uncomfortable, after a while, and it’s not very fast. A laptop is better for this, with a keyboard and a larger form factor. But the iPad will be even better because of lower risks of RSI. A related example: just imagine TweetDeck on iPad.

Possibly my favourite example of a context in which the iPad will be ideal: presentations. Even before learning about the prospect of getting iWork on a tablet, presentations were a context in which I really missed a laptop.

Sure, in most cases, these days, there’s a computer (usually a desktop running XP) hooked to a projector. You just need to download your presentation file from Slideshare, show it from Prezi, or transfer it through USB. No biggie.

But it’s not the extra steps which change everything. It’s the uncertainty. Even if it’s often unfounded, I usually get worried that something might just not work, along the way. The slides might not show up the same way you see them because something is missing on that computer, or that computer is simply using a different version of the presentation software. In fact, that software is typically Microsoft PowerPoint which, while convenient, fits much less in my workflow than does Apple Keynote.

The other big thing about presentations is the “presenter mode,” allowing you to get more content than (or different content from) what the audience sees. In most contexts where I’ve used someone else’s computer to do a presentation, the projector was mirroring the computer’s screen, not using it as a different space. PowerPoint has this convenient “presenter view” but very rarely did I see it as an available option on “the computer in the room.” I wish I could use my ‘touch to drive presentations, which I could do if I installed software on that “computer in the room.” But it’s not something that is likely to happen, in most cases.

A MacBook solves all of these problems, and it’s an obvious use for laptops. But how, then, is the iPad better? Basically because of interface. Switching slides on a laptop isn’t hard, but it’s more awkward than we realize. Even before watching the demo of Keynote on the iPad, I could simply imagine the actual pleasure of flipping through slides using a touch interface. The fit is “natural.”

I sincerely think that Keynote on the iPad will change a number of things, for me. Including the way I teach.

Then, there’s reading.

Now, I’m not one of those people who just can’t read on a computer screen. In fact, I even grade assignments directly from the screen. But I must admit that online reading hasn’t been ideal, for me. I’ve read full books as PDF files or dedicated formats on PalmOS, but it wasn’t so much fun, in terms of the reading process. And I’ve used my ‘touch to read things through Stanza or ReadItLater. But it doesn’t work so well for longer reading sessions. Even in terms of holding the ‘touch, it’s not so obvious. And, what’s funny, even a laptop isn’t that ideal, for me, as a reading device. In a sense, this is when the keyboard “gets in the way.”

Sure, I could get a Kindle. I’m not a big fan of dedicated devices and, at least on paper, I find the Kindle a bit limited for my needs. Especially in terms of sources. I’d like to be able to use documents in a variety of formats and put them in a reading list, for extended reading sessions. No, not “curled up in bed.” But maybe lying down in a sofa without external lighting. Given my experience with the ‘touch, the iPad is very likely the ideal device for this.

Then, there’s the overall “multi-touch device” thing. People have already been quite creative with the small touchscreen on iPhones and ‘touches; I can just imagine what may be done with a larger screen. Lots has been said about differences in “screen real estate” in laptop or desktop screens. We all know it can make a big difference in terms of what you can display at the same time. In some cases, two screens aren’t even a luxury, for instance when you code and display a page at the same time (LaTeX, CSS…). Certainly, the same qualitative difference applies to multitouch devices. Probably even more so, since the display is also used for input. What Han found missing in the iPhone’s multitouch was the ability to use both hands. With the iPad, Han’s vision is finding its space.

Oh, sure, the iPad is very restricted. For instance, it’s easy to imagine how much more useful it’d be if it did support multitasking with third-party apps. And a front-facing camera is something I was expecting in the first iPhone. It would just make so much sense that a friend seems very disappointed by this lack of videoconferencing potential. But we’re probably talking about predetermined expectations, here. We’re comparing the iPad with something we had in mind.

Then, there’s the issue of the competition. Tablets have been released and some multitouch tablets have recently been announced. What makes the iPad better than these? Well, we could all get in the same OS wars as have been happening with laptops and desktops. In my case, the investment in applications, files, and expertise that I have made in a Mac ecosystem rendered my XP years relatively uncomfortable and made me appreciate returning to the Mac. My iPod touch fits right in that context. Oh, sure, I could use it with a Windows machine, which is in fact what I did for the first several months. But the relationship between the iPhone OS and Mac OS X is such that using devices in those two systems is much more efficient, in terms of my own workflow, than I could get while using XP and iPhone OS. There are some technical dimensions to this, such as the integration between iCal and the iPhone OS Calendar, or even the filesystem. But I’m actually thinking more about the cognitive dimensions of recognizing some of the same interface elements. “Look and feel” isn’t just about shiny and “purty.” It’s about interactions between a human brain, a complex sensorimotor apparatus, and a machine. Things go more quickly when you don’t have to think too much about where some tools are, as you’re working.

So my reasons for wanting an iPad aren’t about being dazzled by a revolutionary device. They are about the right tool for the job.

Homeroasting and Coffee Geekness

I bought the i-Roast 2 homeroaster: I’m one happy (but crazy) coffee geek.

I’m a coffee geek. By which I mean that I have a geeky attitude to coffee. I’m passionate about the crafts and arts of coffee making, I seek coffee-related knowledge wherever I can find it, I can talk about coffee until people’s eyes glaze over (which happens more quickly than I’d guess possible), and I even dream about coffee gadgets. I’m not a typical gadget freak, as far as geek culture goes, but coffee is one area where I may invest in some gadgetry.

Perhaps my most visible acts of coffee geekery came in the form of updates I posted through diverse platforms about my home coffee brewing experiences. Did it from February to July. These posts contained cryptic details about diverse measurements, including water temperature and index of refraction. It probably contributed to people’s awareness of my coffee geek identity, which itself has been the source of fun things like a friend bringing me back coffee from Ethiopia.

But I digress, a bit. This is both about coffee geekness in general and about homeroasting in particular.

See, I bought myself this Hearthware i-Roast 2 dedicated homeroasting device. And I’m dreaming about coffee again.

Been homeroasting since December 2002, when I moved to Moncton, New Brunswick, and was lucky enough to get in touch with Terry Montague of Down East Coffee.

Though I had been wishing to homeroast for a while before that and had become an intense coffee-lover fifteen years prior to contacting him, Terry is the one who enabled me to start roasting green coffee beans at home. He procured me a popcorn popper, sourced me some quality green beans, gave me some advice. And off I was.

Homeroasting is remarkably easy. And it makes a huge difference in one’s appreciation of coffee. People in the coffee industry, especially baristas and professional roasters, tend to talk about the “channel” going from the farmer to the “consumer.” In some ways, homeroasting gets the coffee-lover a few steps closer to the farmer, both by eliminating a few intermediaries in the channel and by making coffee into much less of a commodity. Once you’ve spent some time smelling the fumes emanated by different coffee varietals and looking carefully at individual beans, you can’t help but get a deeper appreciation for the farmer’s and even the picker’s work. When you roast 150g or less at a time, every coffee bean seems much more valuable. Further, as you experiment with different beans and roast profiles, you get to experience coffee in all of its splendour.

A popcorn popper may sound like a crude way to roast coffee. And it might be. Naysayers may be right in their appraisal of poppers as a coffee roasting method. You’re restricted in different ways and it seems impossible to produce exquisite coffee. But having roasted with a popper for seven years, I can say that my poppers gave me some of my most memorable coffee experiences. Including some of the most pleasant ones, like this organic Sumatra from Theta Ridge Coffee that I roasted in my campus apartment at IUSB and brewed using my beloved Brikka.

Over the years, I’ve roasted a large variety of coffee beans. I typically buy a pound each of three or four varietals and experiment with them for a while.

Mostly because I’ve been moving around quite a bit, I’ve been buying green coffee beans from a rather large variety of places. I try to buy them locally, as much as possible (those beans have travelled far enough and I’ve had enough problems with courier companies). But I did participate in a few mail orders or got beans shipped to me for some reason or another. Sourcing green coffee beans has almost been part of my routine in those different places where I’ve been living since 2002: Moncton, Montreal, Fredericton, South Bend, Northampton, Brockton, Cambridge, and Austin. Off the top of my head, I’ve sourced beans from:

  1. Down East
  2. Toi, moi & café
  3. Brûlerie Saint-Denis
  4. Brûlerie des quatre vents
  5. Terra
  6. Theta Ridge
  7. Dean’s Beans
  8. Green Beanery
  9. Cuvée
  10. Fair Bean
  11. Sweet Maria’s
  12. Evergreen Coffee
  13. Mon café vert
  14. Café-Vrac
  15. Roastmasters
  16. Santropol

And probably a few other places, including this one place in Ethiopia where my friend Erin bought some.

So, over the years, I got beans from a rather large array of places and from a wide range of regional varietals.

I rapidly started blending freshly-roasted beans. Typically, I would start a blend by roasting three batches in a row. I would taste some as “single origin” (coffee made from a single bean varietal, usually from the same farm or estate), shortly after roasting. But, typically, I would mix my batches of freshly roasted coffee to produce a main blend. I would then add fresh batches after a few days to fine-tune the blend to satisfy my needs and enhance my “palate” (my ability to pick up different flavours and aromas).

Once the quantity of green beans in a particular bag would fall below an amount I can reasonably roast as a full batch (minimum around 100g), I would put those green beans in a pre-roast blend, typically in a specially-marked ziplock bag. Roasting this blend would usually be a way for me to add some complexity to my roasted blends.

And complexity I got. Lots of diverse flavours and aromas. Different things to “write home about.”

But I was obviously limited in what I could do with my poppers. The only real controls that I had in homeroasting, apart from blending, were the bean quantity and roasting time. Ambient temperature was clearly a factor, but not one over which I was able to exercise much control. Especially since I frequently ended up roasting outside, so as not to inconvenience people with fumes, noise, and chaff. The few homeroast batches which didn’t work probably failed because of low ambient temperature.

One reason I stuck with poppers for so long was that I had heard that dedicated roasters weren’t that durable. I’ve probably used three or four different hot air popcorn poppers, over the years. Eventually, they just stop working, when you use them for coffee beans. As I’d buy them at garage sales and Salvation Army stores for 3-4$, replacing them didn’t feel like such a financially difficult thing to do, though finding them could occasionally be a challenge. Money was also an issue. Though homeroasting was important for me, I wasn’t ready to pay around 200$ for an entry-level dedicated roaster. I was thinking about saving money for a Behmor 1600, which offers several advantages over other roasters. But I finally gave in and bought my i-Roast as a kind of holiday gift to myself.

One broad reason is that my financial situation has improved since I started a kind of partial professional reorientation (PPR). I have a blogpost in mind about this PPR, and I’ll probably write it soon. But this post isn’t about my PPR.

Although, the series of events which led to my purchase does relate to my PPR, somehow.

See, the beans I (indirectly) got from Roastmasters came from a friend who bought a Behmor to roast cocoa beans. The green coffee beans came with the roaster but my friend didn’t want to roast coffee in his brand new Behmor, to avoid the risk of coffee oils and flavours getting into his chocolate. My friend asked me to roast some of these beans for his housemates (he’s not that intensely into coffee, himself). When I went to drop some homeroasted coffee by the Station C co-working space where he spends some of his time, my friend was discussing a project with Duncan Moore, whom I had met a few times but with whom I had had few interactions. The three of us had what we considered a very fruitful yet very short conversation. Later on, I got to do a small but fun project with Duncan. And I decided to invest that money into coffee.

A homeroaster seemed like the most appropriate investment. The Behmor was still out of reach but the i-Roast seemed like a reasonable purchase. Especially if I could buy it used.

But I was also thinking about buying it new, as long as I could get it quickly. It took me several years to make a decision about this purchase but, once I made it, I wanted something as close to “instant gratification” as possible. In some ways, the i-Roast was my equivalent to Little Mrs. Sommers’s “pair of silk stockings.”

At the time, Mon café vert seemed like the only place where I could buy a new i-Roast. I tried several times to reach them, to no avail. As I was in the Mile-End when I decided to make that purchase, I went to Caffè in Gamba, both to use the WiFi signal and to check if, by any chance, they might not have started selling roasters. They didn’t, of course; homeroasting isn’t mainstream enough. But, as I was there, I saw the Hario Ceramic Coffee Mill Skerton, a “hand-cranked” coffee grinder about which I had read some rather positive reviews.

For the past few years, I had been using a Bodum Antigua conical burr electric coffee grinder. This grinder was doing the job, but maybe because of “wear and tear,” it started taking a lot longer to grind a small amount of coffee. The grind took so long, at some points, that the grounds were warm to the touch and it seemed like the grinder’s motor was itself heating.

So I started dreaming about the Baratza Vario, a kind of prosumer electric grinder which seemed like the ideal machine for someone who uses diverse coffee making methods. The Vario is rather expensive and seemed like overkill, for my current coffee setup. But I was lusting over it and, yes, dreaming about it.

One day, maybe, I’ll be able to afford a Vario.

In the meantime, and more reasonably, I had been thinking about “Turkish-style mills.” A friend lent me a box-type manual mill at some point and I did find it produced a nice grind, but it wasn’t that convenient for me, partly because the coffee drops into a small drawer which rapidly gets full. A handmill seemed somehow more convenient and there are some generic models which are sold in different parts of the World, especially in the Arab World. So I got the impression that I might be able to find handmills locally and started looking for them all over the place, enquiring at diverse stores and asking friends who have used those mills in the past. Of course, they can be purchased online. But they end up being relatively expensive and my manual experience wasn’t so positive as to convince me to spend so much money on one.

The Skerton was another story. It was much more convenient than a box-type manual mill. And, at Gamba, it was inexpensive enough for me to purchase it on the spot. I don’t tend to do this very often so I did feel strange about such an impulse purchase. But I certainly don’t regret it.

Especially since it complements my other purchases.

So, going to the i-Roast.

Over the years, I had been looking for the i-Roast and Behmor at most of the obvious sites where one might buy used devices like these. eBay, Craig’s List, Kijiji… As a matter of fact, I had seen an i-Roast on one of these, but I was still hesitating. Not exactly sure why, but it probably had to do with the fact that these homeroasters aren’t necessarily that durable and I couldn’t see how old this particular i-Roast was.

I eventually called to find out, after making my decision to get an i-Roast. Turns out that it’s still under warranty, is in great condition, and was being sold by a very interesting (and clearly trustworthy) alto singer who happens to sing with a friend of mine who is also a local beer homebrewer. The same day I bought the roaster, I went to the cocoa-roasting friend’s place and saw a Behmor for the first time. And I tasted some really nice homemade chocolate. And met other interesting people including a couple that I saw, again, while taking the bus after purchasing the roaster.

The series of coincidences in that whole situation left me with a sense of awe. Not out of some strange superstition or other folk belief. But different things are all neatly packaged in a way that most of my life isn’t. Nothing weird about this. The packaging is easy to explain and mostly comes from my own perception. Still, the effect is there: it all fits.

And the i-Roast 2 itself fits, too.

It’s clearly not the ultimate coffee geek’s ideal roaster. But I get the impression it could become so. In fact, one reason I hesitated to buy the i-Roast 2 is that I was wondering if Hearthware might be coming out with the i-Roast 3, in the not-so-distant future.

I’m guessing that Hearthware might be getting ready to release a new roaster. I’m using unreliable information, but it’s still an educated guess. So, apparently…

I could just imagine what the i-Roast 3 might be. As I’m prone to do, I have a number of crazy ideas.

One “killer feature” actually relates both to the differences between the i-Roast and i-Roast 2 as well as to the geek factor behind homeroasting: roast profiles as computer files. Yes, I know, it sounds crazy. And, somehow, it’s quite unlikely that Hearthware would add such a feature on an entry-level machine. But I seriously think it’d make the roaster much closer to a roasting geek’s ultimate machine.

For one thing, programming a roast profile on the i-Roast is notoriously awkward. Sure, you get used to it. But it’s clearly suboptimal. And one major improvement of the i-Roast 2 over the original i-Roast is that the original version didn’t maintain profiles if you unplugged it. The next step, in my mind, would be to have some way to transfer a profile from a computer to the roaster, say via a slot for SD cards or even a USB port.

What this would open isn’t only the convenience of saving profiles, but actually a way to share them with fellow homeroasters. Since a lot in geek culture has to do with sharing information, a neat effect could come out of shareable roast profiles. In fact, when I looked for example roast profiles, I found forum threads, guides, and incredibly elaborate experiments. Eventually, it might be possible to exchange roasting profiles relating to coffee beans from the same shipment and compare roasting. Given the well-known effects of getting a group of people using online tools to share information, this could greatly improve the state of homeroasting and even make it break out of the very small niche in which it currently sits.
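
Just to make that dream a bit more concrete, here’s a minimal sketch of what a shareable roast profile file could look like. Everything here is hypothetical (the field names, the stage structure, the JSON format itself); it’s not anything Hearthware actually supports, just the kind of small, easily shared file I have in mind.

```python
import json

# Hypothetical roast profile: a few timed stages, each with a target temperature.
# Field names and values are made up for illustration; they match no real i-Roast format.
profile = {
    "name": "City+ for a Yirgacheffe",
    "batch_grams": 150,
    "stages": [
        {"minutes": 3, "fahrenheit": 350},   # drying phase
        {"minutes": 4, "fahrenheit": 400},   # ramp toward first crack
        {"minutes": 2, "fahrenheit": 430},   # development
    ],
    "cooling_minutes": 4,
}

# Saving and re-loading the profile is all it would take to share it.
with open("yirgacheffe_cityplus.json", "w") as f:
    json.dump(profile, f, indent=2)

with open("yirgacheffe_cityplus.json") as f:
    shared = json.load(f)

total = sum(s["minutes"] for s in shared["stages"]) + shared["cooling_minutes"]
print(f"{shared['name']}: {total} minutes total")
```

From there, comparing notes on beans from the same shipment would mostly be a matter of trading a few of these files around.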

Of course, there are many problems with that approach, including things as trivial as voltage differences as well as bigger issues such as noise levels.

But I’m still dreaming about such things.

In fact, I go a few steps further. A roaster which could somehow connect to a computer might also be used to track data about temperature and voltage. In my own experiments with the i-Roast 2, I’ve been logging temperatures at 15 second intervals along with information about roast profile, quantity of beans, etc. It may sound extreme but it already helped me achieve a result I wanted to achieve. And it’d be precisely the kind of information I would like to share with other homeroasters, eventually building a community of practice.
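
Since the i-Roast 2 has no data port, my logging is done by hand; but the spirit of it is roughly the following sketch, where the file name, columns, and roast label are placeholders of my own. A reading gets typed in every 15 seconds or so and appended to a CSV file along with the roast identifier.

```python
import csv
import time

ROAST_ID = "iroast2-batch-012"    # placeholder label for this particular roast
INTERVAL_SECONDS = 15             # how often I jot down a reading

with open("roast_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    if f.tell() == 0:             # write the header only for a brand new file
        writer.writerow(["roast_id", "elapsed_s", "temp_f"])
    start = time.time()
    while True:
        reading = input("Temperature (blank line to stop): ").strip()
        if not reading:
            break
        writer.writerow([ROAST_ID, round(time.time() - start), reading])
        time.sleep(INTERVAL_SECONDS)   # rough pacing; typing time makes it approximate
```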

Nothing but geekness, of course. Shall the geek inherit the Earth?

Groupthink in Action

Seems like I’m witnessing a clear groupthink phenomenon.

An interesting situation which, I would argue, is representative of Groupthink.

As a brief summary of the situation: a subgroup within a larger group is discussing the possibility of changing the larger group’s structure. In that larger group, similar discussions have been quite frequent, in the past. In effect, the smaller group is moving toward enacting a decision based on perceived consensus as to “the way to go.”

No bad intention on anyone’s part and the situation is far from tragic. But my clear impression is that groupthink is involved. I belong to the larger group but I feel little vested interest in what might happen with it.

An important point about this situation is that the smaller group seems to be acting as if the decision had already been made, after careful consideration. Through the history of the larger group, prior discussions on the same topic have been frequent. Through these discussions, clear consensus has never been reached. At the same time, some options have been gaining momentum in the recent past, based mostly (in my observation) on accumulated frustration with the status quo and some reflection on the effectiveness of activities done by subgroups within the larger group. Members of that larger group (including participants in the smaller group) are quite weary of rehashing the same issues and the “rallying cry” within the subgroup has to do with “moving on.” Within the smaller group, prior discussions are described as if they had been enough to explore all the options. Weariness throughout the group as a whole seems to create a sense of urgency, even though the group as a whole could hardly be described as being involved in time-critical activities.

Nothing personal about anyone involved and it’s possible that I’m off on this one. Where some of those involved would probably disagree is in terms of the current stage in the decision making process (i.e., they may see themselves as having gone through the process of making the primary decision, the rest is a matter of detail). I actually feel strange talking about this situation because it may seem like I’m doing the group a disservice. The reason I think it isn’t the case is that I have already voiced my concerns about groupthink to those who are involved in the smaller group. The reason I feel the urge to blog about this situation is that, as a social scientist, I take it as my duty to look at issues such as group dynamics. Simply put, I started thinking about it as a kind of “case study.”

Yes, I’m a social science geek. And proud of it, too!

Thing is, I have a hard time not noticing a rather clear groupthink pattern. Especially when I think about a few points in Janis's description of groupthink.

| Antecedent Conditions | Symptoms | Decisions Affected |
| --- | --- | --- |
| Insulation of the group | Illusion of invulnerability | Incomplete survey of alternatives |
| High group cohesiveness | Unquestioned belief in the inherent morality of the group | Incomplete survey of objectives |
| Directive leadership | Collective rationalization of group's decisions | Failure to examine risks of preferred choice |
| Lack of norms requiring methodical procedures | Shared stereotypes of outgroup, particularly opponents | Failure to re-appraise initially rejected alternatives |
| Homogeneity of members' social background and ideology | Self-censorship; members withhold criticisms | Poor information search |
| High stress from external threats with low hope of a better solution than the one offered by the leader(s) | Illusion of unanimity (see false consensus effect) | Selective bias in processing information at hand (see also confirmation bias) |
|  | Direct pressure on dissenters to conform | Failure to work out contingency plans |
|  | Self-appointed "mindguards" protect the group from negative information |  |

A PDF version, with some key issues highlighted.

Point by point…

Observable

Antecedent Conditions of Groupthink

Insulation of the group

A small subgroup was created based on (relatively informal) prior expression of opinion in favour of some broad changes in the structure of the larger group.

Lack of norms requiring methodical procedures

Methodical procedures about assessing the situation are either put aside or explicitly rejected.
Those methodical procedures which are accepted have to do with implementing the group’s primary decision, not with the decision making process.

Symptoms Indicative of Groupthink

Illusion of unanimity (see false consensus effect)

Agreement is stated as a fact, possibly based on private conversations outside of the small group.

Direct pressure on dissenters to conform

A call to look at alternatives is constructed as a dissenting voice.
Pressure to conform is couched in terms of “moving on.”

Symptoms of Decisions Affected by Groupthink

Incomplete survey of alternatives

Apart from the status quo, no alternative has been discussed.
When one alternative model is proposed, it’s reduced to a “side” in opposition to the assessed consensus.

Incomplete survey of objectives

Broad objectives are assumed to be common, left undiscussed.
Discussion of objectives is pushed back as being irrelevant at this stage.

Failure to examine risks of preferred choice

Comments about possible risks (including the danger of affecting the dynamics of the existing broader group) are left undiscussed or dismissed as “par for the course.”

Failure to re-appraise initially rejected alternatives

Any alternative is conceived as having been tried in the past, with the strong implication that it isn't worth revisiting.

Poor information search

Information collected concerns ways to make sure that the primary option considered will work.

Failure to work out contingency plans

Comments about the possible failure of the plan, and its effects on the wider group, are met with "so be it."

Less Obvious

Antecedent Conditions of Groupthink

High group cohesiveness

The smaller group is highly cohesive but so is the broader group.

Directive leadership

Several members of the smaller group are taking positions of leadership, but there’s no direct coercion from that leadership.

Positions of authority are asserted, in a subtle way, but this authority is somewhat indirect.

Homogeneity of members’ social background and ideology

As with cohesiveness, homogeneity of social background can be used to describe the broader group as well as the smaller one.

High stress from external threats with low hope of a better solution than the one offered by the leader(s)

External “threats” are mostly subtle but there’s a clear notion that the primary option considered may be met with some opposition by a proportion of the larger group.

Symptoms Indicative of Groupthink

Illusion of invulnerability

While “invulnerability” would be an exaggeration, there’s a clear sense that members of the smaller group have a strong position within the larger group.

Unquestioned belief in the inherent morality of the group

Discussions don’t necessarily have a moral undertone, but the smaller group’s goals seem self-evident in the context or, at least, not really worth careful discussion.

Collective rationalization of group’s decisions

Since attempts to discuss the group’s assumed consensus are labelled as coming from a dissenting voice, the group’s primary decision is reified through countering individual points made about this decision.

Shared stereotypes of outgroup, particularly opponents

The smaller group’s primary “outgroup” is in fact the broader group, described in rather simple terms, not a distinct group of people.
The assumption is that, within the larger group, positions about the core issue are already set.

Self-censorship; members withhold criticisms

Self-censorship is particularly hard to observe or assess, but the group's dynamics tend to construct criticism as "nitpicking," making it difficult to share comments.

Self-appointed “mindguards” protect the group from negative information

As with leadership, the process of shielding the smaller group from negative information is mostly organic, not located in a single individual.
Because the smaller group is already set apart from the larger group, protection from external information is built into the system, to an extent.

Symptoms of Decisions Affected by Groupthink

Selective bias in processing information at hand (see also confirmation bias)

Information brought into the discussion is treated as either reinforcing the group’s alleged consensus or taken to be easy to counter.
Examples from cases showing clear similarities are dismissed (“we have no interest in knowing what others have done”) and distant cases are used to demonstrate that the approach is sound (“there are groups in other contexts which work, so we can use the same approach”).

Personal Devices

Personal devices after multitouch smartphones? Some random thoughts.

Still thinking about touch devices, such as the iPod touch and the rumoured “Apple Tablet.”

Thinking out loud. Rambling even more crazily than usual.

Something important about those devices is the need for a real “Personal Digital Assistant.” I put PDAs as a keyword for my previous post because I do use the iPod touch like I was using my PalmOS and even NewtonOS devices. But there’s more to it than that, especially if you think about cloud computing and speech technologies.
I mentioned speech recognition in that previous post. SR tends to be a pipedream of the computing world. Despite all the hopes put into realtime dictation, it still hasn't taken off in a big way. One reason might be that it's still somewhat cumbersome to use, in current incarnations. Another reason is that it's relatively expensive as a standalone product which requires some getting used to. But I get the impression that another set of reasons has to do with the fact that it's really a better fit for a personal device. Partly because it needs to be trained. But also because voice itself is a personal thing.

Cloud computing also takes on a new meaning with a truly personal device. It's no surprise that there are so many offerings with some sort of cloud computing feature in the App Store. Not only do Apple's touch devices have limited file storage space but the notion of accessing your files in the cloud goes well with a personal device.
So, what’s the optimal personal device? I’d say that Apple’s touch devices are getting close to it but that there’s room for improvement.

Some perspective…

Originally, the PC was supposed to be a “personal” computer. But the distinction was mostly with mainframes. PCs may be owned by a given person, but they’re not so tied to that person, especially given the fact that they’re often used in a single context (office or home, say). A given desktop PC can be important in someone’s life, but it’s not always present like a personal device should be. What’s funny is that “personal computers” became somewhat more “personal” with the ‘Net and networking in general. Each computer had a name, etc. But those machines remained somewhat impersonal. In many cases, even when there are multiple profiles on the same machine, it’s not so safe to assume who the current user of the machine is at any given point.

On paper, the laptop could have been that “personal device” I’m thinking about. People may share a desktop computer but they usually don’t share their laptop, unless it’s mostly used like a desktop computer. The laptop being relatively easy to carry, it’s common for people to bring one back and forth between different sites: work, home, café, school… Sounds tautological, as this is what laptops are supposed to be. But the point I’m thinking about is that these are still distinct sites where some sort of desk or table is usually available. People may use laptops on their actual laps, but the form factor is still closer to a portable desktop computer than to the kind of personal device I have in mind.

Then, we can go all the way to "wearable computing." There's been some hype about wearable computers, but they have yet to really become part of our daily lives. Partly for technical reasons but partly because they may not really be what people need.

The original PDAs (especially those on NewtonOS and PalmOS) were getting closer to what people might need, as personal devices. The term "personal digital assistant" seemed to encapsulate what was needed. But, for several reasons, PDAs have been having a hard time. Maybe there wasn't a killer app for PDAs, outside of "vertical markets." Maybe the stylus was the problem. Maybe the screen size and bulk of the devices never hit the exact sweet spot people needed. I was still using a PalmOS device in mid-2008 and it felt like I was among the last PDA users.
One point was that PDAs had been replaced by "smartphones." After a certain point, most devices running PalmOS were actually phones. RIM's BlackBerry succeeded in a certain niche (let's use the vague term "professionals") and is even beginning to expand out of it. And devices using other OSes have had their importance. It may not have been the revolution some readers of Pen Computing might have expected, but the smartphone has been a more successful "personal device" than the original PDAs.

It’s easy to broaden our focus from smartphones and think about cellphones in general. If the 3.3B figure can be trusted, cellphones may already be outnumbering desktop and laptop computers by 3:1. And cellphones really are personal. You bring them everywhere; you don’t need any kind of surface to use them; phone communication actually does seem to be a killer app, even after all this time; there are cellphones in just about any price range; cellphone carriers outside of Canada and the US are offering plans which are relatively reasonable; despite some variation, cellphones are rather similar from one manufacturer to the next… In short, cellphones already were personal devices, even before the smartphone category really emerged.

What did smartphones add? Basically, a few PDA/PIM features and some form of Internet access or, at least, some form of email. “Whoa! Impressive!”

Actually, some PIM features were already available on most cellphones and Internet access from a smartphone is in continuity with SMS and data on regular cellphones.

What did Apple’s touch devices add which was so compelling? Maybe not so much, apart from the multitouch interface, a few games, and integration with desktop/laptop computers. Even then, most of these changes were an evolution over the basic smartphone concept. Still, it seems to have worked as a way to open up personal devices to some new dimensions. People now use the iPhone (or some other multitouch smartphone which came out after the iPhone) as a single device to do all sorts of things. Around the World, multitouch smartphones are still much further from being ubiquitous than are cellphones in general. But we could say that these devices have brought the personal device idea to a new phase. At least, one can say that they’re much more exciting than the other personal computing devices.

But what’s next for personal devices?

Any set of buzzphrases. Cloud computing, speech recognition, social media…

These things can all come together, now. The “cloud” is mostly ready and personal devices make cloud computing more interesting because they’re “always-on,” are almost-wearable, have batteries lasting just about long enough, already serve to keep some important personal data, and are usually single-user.

Speech recognition could go well with those voice-enabled personal devices. For one thing, they already have sound input. And, by this time, people are used to seeing others “talk to themselves” as cellphones are so common. Plus, voice recognition is already understood as a kind of security feature. And, despite their popularity, these devices could use a further killer app, especially in terms of text entry and processing. Some of these devices already have voice control and it’s not so much of a stretch to imagine them having what’s needed for continuous speech recognition.

In terms of getting things onto the device, I’m also thinking about such editing features as a universal rich-text editor (à la TinyMCE), predictive text, macros, better access to calendar/contact data, ubiquitous Web history, multiple pasteboards, data detectors, Automator-like processing, etc. All sorts of things which should come from OS-level features.
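
To illustrate just one item from that list: a "data detector" is essentially pattern matching over plain text. Here's a toy version, with patterns simplified far beyond what a real implementation would need; it's only meant to show the kind of OS-level service I'm wishing for.

```python
import re

# Toy "data detectors": deliberately simplistic patterns, for illustration only.
DETECTORS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "url": re.compile(r"https?://\S+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "time": re.compile(r"\b\d{1,2}:\d{2}\s?(?:am|pm)?\b", re.IGNORECASE),
}

def detect(text):
    """Return every match for every detector, keyed by detector name."""
    return {name: pattern.findall(text) for name, pattern in DETECTORS.items()}

note = "Meet at 7:30 pm, RSVP to someone@example.org or 514-555-0123, details: http://example.org/event"
print(detect(note))
```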

"Social media" may seem like too broad a category. In many ways, those devices already take part in social networking, user-generated content, and microblogging, to name a few areas of social media. But what about a unified personal profile based on the device instead of the usual authentication method? Yes, all sorts of security issues. But aren't people unconcerned about security in the case of social media? Twitter accounts are being hacked left and right yet Twitter doesn't seem to suffer much. And there could be added security features on a personal device which is meant to really integrate social media. Some current personal devices already work well as a way to keep login credentials to multiple sites. The next step, there, would be to integrate all those social media services into the device itself. We may be waiting for OpenSocial, OpenID, OAuth, Facebook Connect, Google Connect, and all sorts of APIs to bring us to an easier "social media workflow." But a personal device could simplify the "social media workflow" even further, with just a few OS-based tweaks.

Unlike my previous post, I'm not holding my breath for some specific event which will bring us the ultimate personal device. After all, this is just a new version of my ultimate handheld device blogpost. But, this time, I was focusing on what it means for a device to be "personal." It's even more of a drafty draft than my blogposts have usually been since I decided to really RERO.

So be it.

Sharing Tool Wishlist

My personal (potentially crazy) wishlist for a tool to share online content (links/bookmarks).

The following is an edited version of a wishlist I had been keeping on the side. The main idea is to define what would be, in my mind, the “ultimate social bookmarking system.” Which, obviously, goes way beyond social bookmarking. In a way, I even conceive of it as the ultimate tool for sharing online content. Yes, it’s that ambitious. Will it ever exist? Probably not. Should it exist? I personally think so. But I may be alone in this. Surely, you’ll tell me that I am indeed alone, which is fine. As long as you share your own wishlist items.

The trigger for my posting this is that someone contacted me, asking for what I’d like in a social bookmarking system. I find this person’s move quite remarkable, as a thoughtful strategy. Not only because this person contacted me directly (almost flattering), but because such a request reveals an approach to listening and responding to people’s needs that I find lacking in some software development circles.

This person’s message served as a prompt for my blogging this, but I’ve been meaning to blog this for a while. In fact, my guess is that I created a first version of this wishlist in 2007 after having it on my mind for a while before that. As such, it represents a type of “diachronic” or “longitudinal” view of social bookmarking and the way it works in the broader scheme of social media.

Which also means that I wrote this before I heard about Google Wave. In fact, I’m still unclear about Google Wave and I’ll need to blog about that. Not that I expect Wave to fulfill all the needs I set up for a sharing tool, but I get the impression that Google is finally putting some cards on the table.

The main part of this post is in outline form. I often think through outlines, especially with such a type of notes. I fully realize that it may not be that clear, as a structure, for other people to understand. Some of these bullet points cover a much broader issue than what they look like. But the overall idea might be fairly obvious to grasp, even if it may sound crazy to other people.

I’m posting this to the benefit of anyone who may wish to build the killer app for social media. Of course, it’s just one man’s opinion. But it’s my entitled opinion.

Concepts

What do we share online?

  • “Link”
  • “Page”
  • Identified content
  • Text
    • Narrative
    • Contact information
    • Event description
  • Contact information
  • Event invitation
  • Image
  • Recording
  • Structured content
  • Snippet
  • Access to semi-private content
  • Site’s entry point

Selective sharing

Private
  • Archiving
  • Cloud access
Individually shared
  • “Check this out”
  • Access to address book
  • Password protection
  • Specialization/expertise
  • Friendship
Group shared
  • Shared interests (SIG)
  • Collaboration (task-based)
Shared through network
  • Define identity in network
  • Semi-public
Public
  • Publishing
  • Processed
  • Reading lists
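
Reading those two lists together (what gets shared, and with whom), the underlying data model seems fairly small. Here's a minimal sketch of what I have in mind; the class and field names are mine alone, not anyone's actual API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Scope(Enum):
    """Sharing scopes, roughly following the list above."""
    PRIVATE = auto()      # archiving, cloud access for myself
    INDIVIDUAL = auto()   # "check this out," sent to specific people
    GROUP = auto()        # shared-interest group or task-based collaboration
    NETWORK = auto()      # semi-public, visible to my network
    PUBLIC = auto()       # published, processed, reading lists

@dataclass
class SharedItem:
    url: str
    kind: str                     # "link", "page", "snippet", "image", "event"...
    title: str = ""
    note: str = ""                # personal annotation
    tags: list = field(default_factory=list)
    scope: Scope = Scope.PRIVATE
    recipients: list = field(default_factory=list)   # used for INDIVIDUAL/GROUP scopes

item = SharedItem(
    url="http://example.org/cfp",
    kind="page",
    title="Call for papers",
    tags=["academic", "toblog"],
    scope=Scope.GROUP,
    recipients=["reading-group"],
)
print(item.scope.name, item.tags)
```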

Notetaking

  • Active reading
  • Anchoring text
  • Ad hoc list of bookmarks
  • “Empty URL”
    • Create container/page
    • Personal notes

Todos

  • To read
  • To blog
  • To share
  • To update
  • Projects
    • GTD
    • Contexts
  • Add to calendar (recognized as event)

Outlining/Mindmapping

  • Manage lists of links
  • Prioritize
  • Easily group

Social aspects of sharing

  • Gift economy
  • Personal interaction
  • Trust
  • Hype
  • Value
  • Customized

Cloud computing

  • Webware
  • “Online disk”
  • Without download
  • Touch devices
  • Edit online

Personal streaming

  • Activities through pages
  • Logging
  • Flesh out personal profile

Tagging

  • “Folksonomy”
  • Enables non-hierarchical structure
  • Semantic fields
  • Related tags
  • Can include hierarchy
  • Tagclouds define concept map
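
"Related tags" and "tagclouds define concept map" are the items I find most interesting, technically. One rough way to get there, sketched below with made-up bookmarks, is simple co-occurrence counting: tags that keep showing up together probably belong to the same semantic field.

```python
from collections import Counter
from itertools import combinations

# Tag sets from a handful of made-up bookmarks.
bookmarks = [
    {"coffee", "homeroasting", "geek"},
    {"coffee", "espresso", "montreal"},
    {"homeroasting", "geek", "diy"},
    {"wordpress", "geek", "blogging"},
]

# Count how often each pair of tags appears on the same bookmark.
cooccurrence = Counter()
for tags in bookmarks:
    for pair in combinations(sorted(tags), 2):
        cooccurrence[pair] += 1

def related(tag, n=3):
    """Tags that most often co-occur with the given tag."""
    scores = Counter()
    for (a, b), count in cooccurrence.items():
        if tag == a:
            scores[b] += count
        elif tag == b:
            scores[a] += count
    return [t for t, _ in scores.most_common(n)]

print(related("geek"))   # e.g. ['homeroasting', ...]
```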

Required Features

Crossplatform, crossbrowser

  • Browser-specific tools
  • Bookmarklets
  • Complete access through cloud
Keyboard shortcuts
  • Quick add (to account)
  • Vote
  • Bookmark all tabs (à la Flock)
  • Quick tags

Related pages

Recommended
  • Based on social graph
  • Based on tags
  • Based on content
  • Based on popularity
  • Pointing to this page

Quickly enter links

  • Add in place (while editing)
  • Similar to “spell as you type”
  • Incremental search
  • Add full link (title, URL, text, metadata)

Archiving

  • Prevent linkrot
  • Prepare for post-processing (offline reading, blogging…)
  • Enable bulk processing
  • Maintain version history
  • Internet Archive

Automatic processing

  • Tags
  • Summary
  • Wordcount
  • Reading time
  • Language(s)
  • Page structure analysis
  • Geotagging
  • Vote
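
Most of those "automatic processing" items are cheap to compute once a page's text has been archived. A rough sketch of the easy ones (wordcount, reading time, a naïve summary, candidate tags), assuming the text has already been extracted from the page:

```python
import re
from collections import Counter

WORDS_PER_MINUTE = 200   # rough average silent-reading speed

def auto_process(text):
    """Cheap, automatic metadata for an archived page's extracted text."""
    words = re.findall(r"\w+", text)
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    long_words = Counter(w.lower() for w in words if len(w) > 6)
    return {
        "wordcount": len(words),
        "reading_minutes": max(1, round(len(words) / WORDS_PER_MINUTE)),
        "summary": sentences[0] if sentences else "",        # naïve: first sentence
        "candidate_tags": [w for w, _ in long_words.most_common(5)],
    }

print(auto_process("Microblogging is still a hot topic. Event-based microblogging might be next."))
```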

Thread following

  • Blog comments
  • Forum comments
  • Trackbacks
  • Pings

Exporting

All
  • Archiving
  • Prepare for import
  • Maintain hierarchy
Selected
  • Tag
  • Category
  • Recently used
  • Shared
  • Site homepage
  • Blogroll
  • Blogs
Formats
  • Other services
  • HTML
  • RSS
  • OPML
  • Widget
Features
  • Comments
  • Tags
  • Statistics
  • Content
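
Of those formats, OPML is the one I think about most, since it can preserve hierarchy. Below is a bare-bones sketch of exporting a handful of bookmarks as OPML; the helper and the sample data are mine, and the `outline`-with-`url` layout is just one common convention, not an official schema for bookmarks.

```python
import xml.etree.ElementTree as ET

def bookmarks_to_opml(title, bookmarks):
    """Serialize a flat list of bookmarks as OPML, one <outline> per link."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for bm in bookmarks:
        ET.SubElement(body, "outline", text=bm["title"], type="link", url=bm["url"])
    return ET.tostring(opml, encoding="unicode")

print(bookmarks_to_opml("shared:homeroasting", [
    {"title": "Roasting basics", "url": "http://example.org/roasting"},
    {"title": "Profile experiments", "url": "http://example.org/profiles"},
]))
```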

Offline processing

  • Browser-based
  • Device based
  • Offline archiving
  • Include content
  • Synchronization

Microblogging support

  • Laconi.ca/Identi.ca
  • Twitter
  • Ping.fm
  • Jaiku

Fixed/Static URL

  • Prevent linkrot
  • Maintain list for same page
  • Short URLs
  • Automatically generated
  • Expansion on mouseover
  • Statistics

Authentication

  • Use of resources
  • Identify
  • Privacy
  • Unnecessary for basic processing
  • Sticks (no need to login frequently)
  • Access to contacts and social graph
  • Multiple accounts
    • Personal/professional
    • Contexts
    • Group accounts
  • Premium accounts
    • Server space
    • Usage statistics
    • Promotion
  • Support
    • OpenID
      • As group login
    • Google Accounts
    • Facebook Connect
    • OAuth

Integration

  • Web history
  • Notebook
  • Blogging platform
  • Blog editor
  • Microblogging platform
  • Logbook
  • General purpose content editor
  • Toolbar
  • URL shortening
  • Address book
  • Social graph
  • Personal profile
  • Browser
    • Bookmarks
    • History
    • Autocomplete
  • Analytics
  • Email
  • Search
    • Online
    • Offline

Related Tools

  • Diigo
  • WebCitation
  • Ping.fm
  • BackType
  • Facebook share
  • Blog This
  • Link This
  • Share this
  • Digg
  • Plum
  • Spurl
  • CoComments
  • MyBlogLog
  • TwtVite
  • Twistory
  • Windows Live Writer
  • Magnolia
  • Stumble Upon
  • Delicious
  • Google Reader
  • Yahoo Pipes
  • Google Notebook
  • Zoho Notebook
  • Google Browser Sync
  • YouTube
  • Flock
  • Zotero

Relevant Blogposts

Social Networks and Microblogging

Event-based microblogging and the social dimensions of online social networks.

Microblogging (Laconica, Twitter, etc.) is still a hot topic. For instance, during the past few episodes of This Week in Tech, comments were made about the preponderance of Twitter as a discussion theme: microblogging is so prominent on that show that some people complain that there’s too much talk about Twitter. Given the centrality of Leo Laporte’s podcast in geek culture (among Anglos, at least), such comments are significant.

The context for the latest comments about TWiT coverage of Twitter had to do with Twitter's financials: during this financial crisis, Twitter is given funding without even asking for it. It may seem surprising at first, given that Twitter hasn't publicized a business plan and doesn't appear to be profitable at this time, but it does say something about the mindshare the service has gained.

Along with social networking, microblogging is even discussed in mainstream media. For instance, Médialogues (a media critique on Swiss national radio) recently had a segment about both Facebook and Twitter. Just yesterday, Comedy Central’s The Daily Show with Jon Stewart made fun of compulsive twittering and mainstream media coverage of Twitter (original, Canadian access).

Clearly, microblogging is getting some mindshare.

What the future holds for microblogging is clearly uncertain. Anything can happen. My guess is that microblogging will remain important for a while (at least a few years) but that it will transform itself rather radically. Chances are that other platforms will have microblogging features (something Facebook can do with status updates and something Automattic has been trying to do with some WordPress themes). In these troubled times, Montreal startup Identi.ca received some funding to continue developing its open microblogging platform.  Jaiku, bought by Google last year, is going open source, which may be good news for microblogging in general. Twitter itself might maintain its “marketshare” or other players may take over. There’s already a large number of third-party tools and services making use of Twitter, from Mahalo Answers to Remember the Milk, Twistory to TweetDeck.

Together, these all point to the current importance of microblogging and the potential for further development in that sphere. None of this means that microblogging is “The Next Big Thing.” But it’s reasonable to expect that microblogging will continue to grow in use.

(For those who are trying to grok microblogging, Common Craft's Twitter in Plain English video is among the best-known descriptions of Twitter and it seems like an efficient way to "get the idea.")

One thing which is rarely mentioned about microblogging is the prominent social structure supporting it. Like "Social Networking Systems" (LinkedIn, Facebook, Ning, MySpace…), microblogging makes it possible for people to "connect" to one another (as contacts/acquaintances/friends). Like blogs, microblogging platforms make it possible to link to somebody else's material and get notifications for some of these links (a bit like pings and trackbacks). Like blogrolls, microblogging systems allow for lists of "favourite authors." Unlike Social Networking Systems but similar to blogrolls, microblogging allows for asymmetrical relations, unreciprocated links: if I like somebody's microblogging updates, I can subscribe to those (by "following" that person) and publicly show my appreciation of that person's work, regardless of whether or not this microblogger likes my own updates.

There’s something strangely powerful there because it taps the power of social networks while avoiding tricky issues of reciprocity, “confidentiality,” and “intimacy.”

From the end user’s perspective, microblogging contacts may be easier to establish than contacts through Facebook or Orkut. From a social science perspective, microblogging links seem to approximate some of the fluidity found in social networks, without adding much complexity in the description of the relationships. Subscribing to someone’s updates gives me the role of “follower” with regards to that person. Conversely, those I follow receive the role of “following” (“followee” would seem logical, given the common “-er”/”-ee” pattern). The following and follower roles are complementary but each is sufficient by itself as a useful social link.

Typically, a microblogging system like Twitter or Identi.ca qualifies two-way connections as “friendship” while one-way connections could be labelled as “fandom” (if Andrew follows Betty’s updates but Betty doesn’t follow Andrew’s, Andrew is perceived as one of Betty’s “fans”). Profiles on microblogging systems are relatively simple and public, allowing for low-involvement online “presence.” As long as updates are kept public, anybody can connect to anybody else without even needing an introduction. In fact, because microblogging systems send notifications to users when they get new followers (through email and/or SMS), subscribing to someone’s update is often akin to introducing yourself to that person. 

Reciprocating is the object of relatively intense social pressure. A microblogger whose follower:following ratio is far from 1:1 may be regarded as either a snob (follower:following much higher than 1:1) or as something of a microblogging failure (follower:following much lower than 1:1). As in any social context, perceived snobbery may be associated with sophistication but it also carries opprobrium. Perry Belcher  made a video about what he calls “Twitter Snobs” and some French bloggers have elaborated on that concept. (Some are now claiming their right to be Twitter Snobs.) Low follower:following ratios can result from breach of etiquette (for instance, ostentatious self-promotion carried beyond the accepted limit) or even non-human status (many microblogging accounts are associated to “bots” producing automated content).
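
The arithmetic behind that social judgment is trivial, which is probably why it gets applied so readily. A throwaway sketch, with thresholds picked arbitrarily just to make the point:

```python
def ratio_reputation(followers, following):
    """Caricature of the follower:following judgment described above."""
    if following == 0:
        return "snob"                     # follows nobody at all
    ratio = followers / following
    if ratio > 2:                         # arbitrary threshold
        return "snob"
    if ratio < 0.5:                       # arbitrary threshold
        return "struggling"
    return "reciprocating"

print(ratio_reputation(followers=1200, following=150))   # 'snob'
print(ratio_reputation(followers=80, following=90))      # 'reciprocating'
```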

The result of the pressure for reciprocation is that contacts are reciprocated regardless of personal relations.  Some users even set up ways to automatically follow everyone who follows them. Despite being tricky, these methods escape the personal connection issue. Contrary to Social Networking Systems (and despite the term “friend” used for reciprocated contacts), following someone on a microblogging service implies little in terms of friendship.

One reason I personally find this fascinating is that specifying personal connections has been an important part of the development of social networks online. For instance, long-defunct SixDegrees.com (one of the earliest Social Networking Systems to appear online) required users to specify the precise nature of their relationship to the users with whom they were connected. Details escape me but I distinctly remember that acquaintances, colleagues, and friends were distinguished. If I remember correctly, only one such personal connection was allowed for any pair of users and this connection had to be confirmed before the two users were linked through the system. Facebook's method of accounting for personal connections is somewhat more sophisticated despite the fact that all contacts are labelled as "friends" regardless of the nature of the connection. The uniform use of the term "friend" has been decried by many public commentators of Facebook (including in the United States where "friend" is often applied to any person with whom one is simply on friendly terms).

In this context, the flexibility with which microblogging contacts are made merits consideration: by allowing unidirectional contacts, microblogging platforms may have solved a tricky social network problem. And while the strength of the connection between two microbloggers is left unacknowledged, there are several methods to assess it (for instance through replies and republished updates).

Social contacts are the very basis of social media. In this case, microblogging represents a step towards both simplified and complexified social contacts.

Which leads me to the theme which prompted me to start this blogpost: event-based microblogging.

I posted the following blog entry (in French) about event-based microblogging, back in November.

Microblogue d’événement

I haven't received any direct feedback on it and the topic seems to have had little echo in the social media sphere.

During the last PodMtl meeting on February 18, I tried to throw my event-based microblogging idea in the ring. This generated a rather lengthy discussion between a friend and myself. (Because I don't want to put words in this friend's mouth, who happens to be relatively high-profile, I won't mention this friend's name.) This friend voiced several objections to my main idea and I got to think about this basic notion a bit further. At the risk of sounding exceedingly opinionated, I must say that my friend's objections actually reinforced my impression that my "event microblog" idea makes a lot of sense.

The basic idea is quite simple: microblogging instances tied to specific events. There are technical issues in terms of hosting and such but I’m mostly thinking about associating microblogs and events.

What I had in mind during the PodMtl discussion has to do with grouping features, which are often requested by Twitter users (including by Perry Belcher who called out Twitter Snobs). And while I do insist on events as a basis for those instances (like groups), some of the same logic applies to specific interests. However, given the time-sensitivity of microblogging, I still think that events are more significant in this context than interests, however defined.

In the PodMtl discussion, I frequently referred to BarCamp-like events (in part because my friend and interlocutor had participated in a number of such events). The same concept applies to any event, including one which is just unfolding (say, the assassination of Guinea-Bissau's president or the bombings in Mumbai).

Microblogging users are expected to think about “hashtags,” those textual labels preceded with the ‘#’ symbol which are meant to categorize microblogging updates. But hashtags are problematic on several levels.

  • They require preliminary agreement among multiple microbloggers, a tricky proposition in any social media. “Let’s use #Bissau09. Everybody agrees with that?” It can get ugly and, even if it doesn’t, the process is awkward (especially for new users).
  • Even if agreement has been reached, there might be discrepancies in the way hashtags are typed. “Was it #TwestivalMtl or #TwestivalMontreal, I forgot.”
  • In terms of language economy, it’s unsurprising that the same hashtag would be used for different things. Is “#pcmtl” about Podcamp Montreal, about personal computers in Montreal, about PCM Transcoding Library…?
  • Hashtags are frequently misunderstood by many microbloggers. Just this week, a tweep of mine (a “peep” on Twitter) asked about them after having been on Twitter for months.
  • While there are multiple ways to track hashtags (including through SMS, in some regions), there is no way to further specify the tracked updates (for instance, by user).
  • The distinction between a hashtag and a keyword is too subtle to be really useful. Twitter Search, for instance, lumps the two together.
  • Hashtags take time to type. Even if microbloggers aren’t necessarily typing frantically, the time taken to type all those hashtags seems counterproductive and may even distract microbloggers.
  • Repetitively typing the same string is a very specific kind of task which seems to go against the microblogging ethos, if not the cognitive processes associated with microblogging.
  • The number of characters in a hashtag decreases the amount of text available in every update (a rough sketch of this overhead follows the list). When all you have is 140 characters at a time, the thirteen characters in "#TwestivalMtl" constitute almost 10% of your update.
  • If the same hashtag is used by a large number of people, the visual effect can be that this hashtag is actually dominating the microblogging stream. Since there currently isn’t a way to ignore updates containing a certain hashtag, this effect may even discourage people from using a microblogging service.
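
Since a couple of those points are about mechanics rather than taste, here's a quick sketch of that overhead: pulling hashtags out of an update and measuring how much of the 140 characters they eat up. The regex is deliberately simplistic.

```python
import re

HASHTAG = re.compile(r"#\w+")
UPDATE_LIMIT = 140   # characters available per update on Twitter-style services

def hashtag_overhead(update):
    """Return the hashtags in an update and the share of the limit they consume."""
    tags = HASHTAG.findall(update)
    used = sum(len(t) for t in tags)
    return tags, used / UPDATE_LIMIT

tags, share = hashtag_overhead("Great session on community radio #TwestivalMtl #pcmtl")
print(tags, f"{share:.0%} of the limit spent on hashtags")
```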

There are multiple solutions to these issues, of course. Some of them are surely discussed among developers of microblogging systems. And my notion of event-specific microblogs isn’t geared toward solving these issues. But I do think separate instances make more sense than hashtags, especially in terms of specific events.

My friend's objections to my event microblogging idea had something to do with visibility. It seems that this friend wants all updates to be visible, regardless of the context. While I don't disagree with this, I would claim that it would still be useful to "opt out" of certain discussions when people we follow are involved. If I know that Sean is participating in a PHP conference and that most of his updates will be about PHP for a period of time, I would enjoy being able to hide PHP-related updates for a specific period of time. The reason I talk about this specific case is simple: a friend of mine expressed some frustration about the large number of updates made by participants in Podcamp Montreal (myself included). Partly in reaction to this, he stopped following me on Twitter and only resumed following me after Podcamp Montreal had ended. In this case, my friend could have hidden Podcamp Montreal updates and still have received other updates from the same microbloggers.
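
That "hide these updates for a while" feature is easy to picture in code. A minimal sketch, assuming each update carried an explicit event label (which is exactly what hashtags only approximate) and a timestamp; the names and sample data are mine:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Update:
    author: str
    text: str
    event: Optional[str]     # an explicit event instance, not a mere hashtag
    posted: datetime

def visible(updates, muted_event, start, end):
    """Keep every update except those tied to a muted event during a given window."""
    return [u for u in updates
            if not (u.event == muted_event and start <= u.posted <= end)]

timeline = [
    Update("sean", "Slides are up", "phpconf", datetime(2009, 2, 18, 10, 5)),
    Update("sean", "Lunch, anyone?", None, datetime(2009, 2, 18, 12, 0)),
]
filtered = visible(timeline, "phpconf",
                   datetime(2009, 2, 18, 9, 0), datetime(2009, 2, 18, 18, 0))
print([u.text for u in filtered])   # ['Lunch, anyone?']
```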

To a certain extent, event-specific instances are a bit similar to "rooms" in MMORPGs and other forms of real-time many-to-many text-based communication, such as the nostalgia-inducing Internet Relay Chat. Despite Dave Winer's strong claim to the contrary (and attempt at defining microblogging away from IRC), a microblogging instance could, in fact, act as a de facto chatroom when such a structure is needed, taking advantage of the work done on microblogging over the past year (which seems to have advanced more rapidly than work on chatrooms has during the past fifteen years). Instead of setting up an IRC channel, a Web-based chatroom, or even a session on MSN Messenger, users could use their microblogging platform of choice and either decide to follow all updates related to a given event or simply not "opt out" of following those updates (depending on their preferences). Updates related to multiple events would be visible simultaneously (which isn't really the case with IRC or chatrooms) and there could be ways to make event-specific updates more prominent. In fact, there would be easy ways to keep real-time statistics on those updates and get a bird's eye view of those conversations.

And there's a point about event-specific microblogging which is likely to both displease "alpha geeks" and convince corporate users: updates about some events could be "protected" in the sense that they would not appear in the public stream in realtime. The simplest case for this could be a company-wide meeting during which a backchannel is allowed and even expected "within the walls" of the event. The "nothing should leave this room" attitude seems contradictory to social media in general, but many cases can be made for "confidential microblogging." Microblogged conversations can easily be archived and these archives could be made public at a later date. Event-specific microblogging allows for some control of the "permeability" of the boundaries surrounding the event. "But why would people use microblogging instead of simply talking to one another?," you ask. Several quick answers: participants aren't in the same room, vocal communication is mostly single-channel, large groups of people are unlikely to communicate efficiently through oral means only, several things are more efficiently done through writing, written updates are easier to track and archive…

There are many other things I’d like to say about event-based microblogging but this post is already long. There’s one thing I want to explain, which connects back to the social network dimension of microblogging.

Events can be simplistically conceived as social contexts which bring people together. (Yes, duh!) Participants in a given event constitute a "community of experience" regardless of the personal connections between them. They may be strangers, enemies, relatives, acquaintances, friends, etc. But they all share something. "Participation," in this case, can be relatively passive and the difference between key participants (say, volunteers and lecturers in a conference) and attendees is relatively moot, at a certain level of analysis. The key, here, is the set of connections between people at the event.

These connections are a very powerful component of social networks. We typically meet people through “events,” albeit informal ones. Some events are explicitly meant to connect people who have something in common. In some circles, “networking” refers to something like this. The temporal dimension of social connections is an important one. By analogy to philosophy of language, the “first meeting” (and the set of “first impressions”) constitute the “baptism” of the personal (or social) connection. In social media especially, the nature of social connections tends to be monovalent enough that this “baptism event” gains special significance.

The online construction of social networks relies on a finite number of dimensions, including personal characteristics described in a profile, indirect connections (FOAF), shared interests, textual content, geographical location, and participation in certain activities. Depending on a variety of personal factors, people may be quite inclusive or rather exclusive, based on those dimensions. “I follow back everyone who lives in Austin” or “Only people I have met in person can belong to my inner circle.” The sophistication with which online personal connections are negotiated, along such dimensions, is a thing of beauty. In view of this sophistication, tools used in social media seem relatively crude and underdeveloped.

Going back to the (un)conference concept, the usefulness of having access to a list of all participants in a given event seems quite obvious. In an open event like BarCamp, it could greatly facilitate the event's logistics. In a closed event with paid access, it could be linked to registration (despite geek resistance, closed events serve a purpose; one could even imagine events where attendance is free but the microblogging backchannel incurs a cost). In some events, everybody would be visible to everybody else. In others, there could be a sort of ACL for diverse types of participants. In some cases, people could be allowed to "lurk" without being seen while in others radical transparency could be enforced. For public events with all participants visible, lists of participants could be archived and used for several purposes (such as assessing which sessions in a conference are more popular or "tracking" event regulars).

One reason I keep thinking about event-specific microblogging is that I occasionally use microblogging like others use business cards. In a geek crowd, I may ask for someone’s Twitter username in order to establish a connection with that person. Typically, I will start following that person on Twitter and find opportunities to communicate with that person later on. Given the possibility for one-way relationships, it establishes a social connection without requiring personal involvement. In fact, that person may easily ignore me without the danger of a face threat.

If there were event-specific instances from microblogging platforms, we could manage connections and profiles in a more sophisticated way. For instance, someone could use a barebones profile for contacts made during an impersonal event and a full-fledged profile for contacts made during a more “intimate” event. After noticing a friend using an event-specific business card with an event-specific email address, I got to think that this event microblogging idea might serve as a way to fill a social need.


More than most of my other blogposts, I expect comments on this one. Objections are obviously welcomed, especially if they’re made thoughtfully (like my PodMtl friend made them). Suggestions would be especially useful. Or even questions about diverse points that I haven’t addressed (several of which I can already think about).

So…


What do you think of this idea of event-based microblogging? Would you use a microblogging instance linked to an event, say at an unconference? Can you think of fun features an event-based microblogging instance could have? If you think about similar ideas you’ve seen proposed online, care to share some links?


Thanks in advance!

Privilege: Library Edition

When I came out against privilege, over a month ago, I wasn’t thinking about libraries. But, last week, while running some errands at three local libraries (within an hour), I got to think about library privileges.

During that day, I first started thinking about library privileges because I was renewing my CREPUQ card at Concordia. With that card, graduate students and faculty members at a university in Quebec are able to get library privileges at other universities, a nice “perk” that we have. While renewing my card, I was told (or, more probably, reminded) that the card now gives me borrowing privileges at any university library in Canada through CURBA (Canadian University Reciprocal Borrowing Agreement).

My gut reaction: “Aw-sum!” (I was having a fun day).

It got me thinking about what it means to be an academic in Canada. Because I've also spent part of my still short academic career in the United States, I tend to compare the Canadian academe to US academic contexts. And while there are some impressive academic consortia in the US, I don't think that any of them may offer as wide a set of library privileges as this one. If my count is accurate, there are 77 institutions involved in CURBA. University systems and consortia in the US typically include somewhere between ten and thirty institutions, usually within the same state or region. Even if members of both the "UC System" and "CalState" have similar borrowing privileges, it would only mean 33 institutions, less than half of CURBA (though the population of California is about 20% more than that of Canada as a whole). Some important university consortia through which I've had some privileges were the CIC (Committee on Institutional Cooperation), a group of twelve Midwestern universities, and the BLC (Boston Library Consortium), a group of twenty universities in New England. Even with full borrowing privileges in all three groups of university libraries, an academic would only have access to library material from 65 institutions.

Of course, the number of institutions isn’t that relevant if the libraries themselves have few books. But my guess is that the average size of a Canadian university’s library collection is quite comparable to its US equivalents, including in such well-endowed institutions as those in the aforementioned consortia and university systems. What’s more, I would guess that there might be a broader range of references across Canadian universities than in any region of the US. Not to mention that BANQ (Quebec’s national library and archives) are part of CURBA and that their collections overlap very little with a typical university library.

So, I was thinking about access to an extremely wide range of references given to graduate students and faculty members throughout Canada. We get this very nice perk, this impressive privilege, and we pretty much take it for granted.

Which eventually got me to think about my problem with privilege. Privilege implies a type of hierarchy with which I tend to be uneasy. Even (or especially) when I benefit from a top position. “That’s all great for us but what about other people?”

In this case, there are obvious “Others” like undergraduate students at Canadian institutions,  Canadian non-academics, and scholars at non-Canadian institutions. These are very disparate groups but they are all denied something.

Canadian undergrads are the most direct "victims": they participate in Canada's academe, like graduate students and faculty members, yet their access to resources is severely limited by comparison to those of us with CURBA privileges. Something about this strikes me as rather unfair. Don't undergrads need access as much as we do? Is there really such a wide gap between someone working on an honours thesis at the end of a bachelor's degree and someone starting work on a master's thesis that the latter requires much wider access than the former? Of course, the main rationale behind this discrepancy in access to library material probably has to do with sheer numbers: there are many undergraduate students "fighting for the same resources" and there are relatively few graduate students and faculty members who need access to the same resources. Or something like that. It makes sense but it's still a point of tension, as any matter of privilege.

The second set of "victims" includes Canadians who happen to not be affiliated directly with an academic institution. While it may seem that their need for academic resources is more limited than that of students, many people in this category have a more unquenchable "thirst for knowledge" than many an academic. In fact, there are people in this category who could probably do a lot of academically-relevant work "if only they had access." I mostly mean people who have an academic background of some sort but who are currently unaffiliated with formal institutions. But the "broader public" counts, especially when a specific topic becomes relevant to them. These are people who take advantage of public libraries but, as mentioned in the BANQ case, public and university libraries don't tend to overlap much. For instance, it's quite unlikely that someone without academic library privileges would have been able to borrow Visual Information Processing (Chase, William 1973), a proceedings book that I used as a source for a recent blogpost on expertise. Of course, "the public" is usually allowed to browse books in most university libraries in North America (apart from Harvard). But, depending on other practical factors, borrowing books can be much more efficient than browsing them in a library. I tend to hear from diverse people who would enjoy some kind of academic status for this very reason: library privileges matter.

A third category of "victims" of CURBA privileges are non-Canadian academics. Since most of them may only contribute indirectly to Canadian society, why should they have access to Canadian resources? As in any social context, the national academe defines insiders and outsiders. While academics are typically inclusive, this type of restriction seems to make sense. Yet many academics outside of Canada could benefit from access to resources broadly available to Canadian academics. In some cases, there are special agreements to allow outside scholars to get temporary access to local, regional, or national resources. Rather frequently, these agreements come with special funding, the outside academic being a special visitor, sometimes with even better access than some local academics. I have very limited knowledge of these agreements (apart from infrequent discussions with colleagues who benefitted from them) but my sense is that they are costly, cumbersome, and restrictive. Access to local resources is even more exclusive a privilege in this case than in the CURBA case.

Which brings me to my main point about the issue: we all need open access.

When I originally thought about how impressive CURBA privileges were, I was thinking through the logic of the physical library. In a physical library, resources are scarce, access to resources needs to be controlled, and library privileges have a high value. In fact, it costs an impressive amount of money to run a physical library. The money universities invest in their libraries is relatively "inelastic" and must figure quite prominently in their budgets. The "return" on that investment seems to me a bit hard to measure: is it a competitive advantage, does a better-endowed library make a university more cost-effective, do university libraries ever "recoup" any portion of the amounts spent?

Contrast all of this with a “virtual” library. My guess is that an online collection of texts costs less to maintain than a physical library by any possible measure. Because digital data may be copied at will, the notion of “scarcity” makes little sense online. Distributing millions of copies of a digital text doesn’t make the original text unavailable to anyone. As long as the distribution system is designed properly, the “transaction costs” in distributing a text of any length are probably much less than those associated with borrowing a book.  And the differences between “browsing” and “borrowing,” which do appear significant with physical books, seem irrelevant with digital texts.

These are all well-known points about online distribution. And they all seem to lead to the same conclusion: “information wants to be free.” Not “free as in beer.” Maybe not even “free as in speech.” But “free as in unchained.”

Open access to academic resources is still a hot topic. Though I do consider myself an advocate of “OA” (the “Open Access movement”), what I mean here isn’t so much about OA as opposed to TA (“toll-access”) in the case of academic journals. Physical copies of periodicals may usually not be borrowed, regardless of library privileges, and online resources are typically excluded from borrowing agreements between institutions. The connection between OA and my perspective on library privileges is that I think the same solution could solve both issues.

I’ve been thinking about a “global library” for a while. Like others, the Library of Alexandria serves as a model but texts would be online. It sounds utopian but my main notion, there, is that “library privileges” would be granted to anyone. Not only senior scholars at accredited academic institutions. Anyone. Of course, the burden of maintaining that global library would also be shared by anyone.

There are many related models, apart from the Library of Alexandria: French «Encyclopédistes» through the Enlightenment, public libraries, national libraries (including the Library of Congress), Tim Berners-Lee's original "World Wide Web" concept, Brewster Kahle's Internet Archive, Google Books, etc. Though these models differ, they all point to the same basic idea: a "universal" collection with the potential for "universal" access. In historical perspective, this core notion of a "universal library" seems relatively stable.

Of course, there are many obstacles to a “global” or “universal” library. Including issues having to do with conflicts between social groups across the Globe or the current state of so-called “intellectual property.” These are all very tricky and I don’t think they can be solved in any number of blogposts. The main thing I’ve been thinking about, in this case, is the implications of a global library in terms of privileges.

Come to think of it, it’s possible that much of the resistance to a global library has to do with privilege: unlike me, some people enjoy privilege.

Event Microblogging

Edited version of a message I just sent to my friend Martin Lessard.

The immediate context is a discussion we had about my use of Twitter, the main microblogging platform. During just about any event (conference, meeting, etc.), I use Twitter to blog in real time, to liveblog.

Unlike some people, I think microblogging can be adapted to each user’s needs. In fact, that’s an aspect of technology I find admirable: the possibility of using tools for purposes other than the ones they were designed for. That’s where technology, in the strict sense, goes beyond the tool. In my material culture course, I call this “unintended uses,” a very simple concept with many implications for the social links in the chain running from a tool’s conception and construction to its use and its social “impact.”

So, here’s my edited message.
I’ve been thinking quite a bit about this question of tweets (“messages” on Twitter) seen as untimely or intrusive. So here are a few ideas.

I get a lot out of blogging in real time through Twitter. Really, I see it as taking notes in public. I should say that note-taking is second nature to me. It’s how I structure my thinking. Mostly with “outliners,” but it also works in linear form.

In this respect, I’m doing a bit like those journalists on Twitter who use microblogging as a notepad. Andy Carvin is my favourite example. He tweets faster than I do and his tweets are as useful as a newspaper article. My approach is closer to “active reading” and critical thinking, but it’s much the same idea. In my case, it even lets me replace a blogpost with a series of tweets.

The advantage of real-time note-taking revealed itself, among other occasions, during a presentation by Johannes Fabian, an emeritus anthropologist who spent a busy week in Montreal last month. I was liveblogging his first presentation, on Twitter. Across from me were two anthropologists from Concordia (Maximilian Forte and Owen Wiltshire) whom I know, among other things, as bloggers. Both were taking notes and one of them was recording the session. In my tweets, I tried not to summarize too much of what Fabian was saying; instead, I took notes on my own reactions, shared my observations of the audience, and thought through implications of the ideas being presented. After the presentation, Maximilian asked me whether I was going to blog about it. I could tell him, in all honesty, that it was already done. And Owen, a former student of mine who now works on academic publishing and blogging, now has access to my complete notes, timeline included.
A powerful note-taking method!

The advantage of the public aspect is, first, that I can get “comments” in real time. I don’t get as many as I’d like, but comments remain what I’m after. Microblogging gets me more comments than my main blog, right here on WordPress. Facebook gets me more comments than either, but that’s another story.

In some cases, liveblogging gives rise to a genuine parallel conversation. My favourite example is probably the interaction I had with John Milles at the end of Isabelle Lopez’s session at PodCamp Montréal (#pcmtl08). We were talking about Internet culture and I was suggesting that there is “one” Internet culture (the way one might say there is “one” Christian culture, say). Milles, who didn’t know I was an anthropologist, then tweeted me about the classic anthropological notion of culture (monolithic, bounded in space, timeless…). I was then able to point him toward the “crisis of representation” in anthropology since 1986 and Clifford and Marcus’s Writing Culture. He later sent me references from the legal literature.

Of course, this is the “backchannel” idea applied to the ’Net. It works very effectively for events like SXSW and BarCamp since everyone tweets at the same time. But it can work for other events, if the practice becomes more common.

More on this later.

I believe that real-time blogging during events increases the visibility of the event itself. It would work better if I put “hashtags” on every tweet. (Hashtags are textual labels preceded by the ‘#’ notation, which make it possible to identify “messages.”) The problem is that it isn’t really practical to type hashtags constantly, at least on an iPod touch. In any case, that kind of redundancy seems of little use.

More on this later.

Obviously, microblogging this much increases my own visibility a bit. These days, I’m starting to think about ways to “sell” myself. It’s a bit difficult for me because I’m not used to selling myself and I see humility as a virtue. But it seems necessary and I’m looking for ways to sell myself while remaining myself. Twitter lets me put myself forward in a context that makes this practice entirely appropriate (in my opinion).

In fact, I started using Twitter as a networking method while I was in Austin. It was a few days before SXSW and I wanted to make myself known locally. I’ve kept some things from that period, including contacts on Twitter.

My method was quite simple: I started “following” everyone who was following @BarCampAustin. That made for a good bunch of people and it let me see what was going on. It also let me observe events organized by SXSW folks like Gary Vaynerchuk and Scott Beale. For an ethnographer, there’s nothing like seeing Kevin Rose with his “entourage” or learning that Dr. Tiki is originally from Laval. 😉

Among the microblogging features I find particularly interesting are the ‘@’ and ‘#’ notations. Neither is all that practical on an iPod touch, at least with the apps we have. But the basic concept is very interesting. The ‘@’ is somewhat equivalent to a ping or trackback, useful for catching someone else’s attention (this notation allows direct replies to messages). It’s quite a powerful principle and it helps a lot in liveblogging (Muriel Ide and Martin Lessard used this method to reach me during WebCom/-Camp).

More on this later.

In my view, among geeks, this practice of event microblogging is intensifying. It even takes on a dominant role, giving microblogging that status journalists have so much trouble grasping. When something happens, microblogging is there to cover the event.

Which brings me to that “later.” Quite simple, really: microblogging instances for events. Mostly for events planned in advance, but it could also be an ad hoc structure à la Erik Hersman’s Ushahidi.

Evan Prodromou’s Laconica is perfectly suited to the function I have in mind, but it could be on any platform. I quite like Identi.ca, which is the largest Laconica instance. That said, I find Twitter easier to use, partly because there are Twitter clients for the iPod touch (including some with location support).

Imagine a PodCamp-style (un)conference. The same principle applies to online events (“WebConference”-type gatherings), but face-to-face meetings benefit from microblogging in specific ways. Especially if we think of serendipity, of using several communication channels at once (cognitively less costly in a context of co-presence), of the ease of small-group conversations, and of “non-verbal language.”

So, each event gets its own microblogging instance. It costs practically nothing to run and it can add real value to the event.

Everyone registered for the event gets a microblogging account specific to that event’s instance (or can use a Laconica account from another instance and sign up on the new one). By default, everyone “follows” everyone (everyone is subscribed to see all messages). Each conference nametag shows the person’s handle. Each presenter is also linked to their handle. Each user’s profile can be copied over from another profile or created specifically for the event. Photo portraits are preferred, but avatars are also allowed. Everything sent through the instance is archived and catalogued. If there are ways to specify precise positions in space (maybe even with an RFID tag that can be turned off), that positioning is recorded in the instance. That way, it’s easier to find each other for semi-private chats. It would also be easy to include a way to set up meetings or jot down details of conversations, to jog one’s memory later. Nice integrations possible with Google Calendar, for instance.
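
Just to make the “everyone follows everyone by default” idea concrete, here is a minimal sketch in Python. All the names (EventInstance, register…) are made up for illustration and don’t correspond to Laconica’s actual code or API:

```python
from dataclasses import dataclass, field


@dataclass
class Attendee:
    handle: str                                    # the identifier printed on the nametag
    display_name: str
    following: set = field(default_factory=set)


@dataclass
class EventInstance:
    name: str
    attendees: dict = field(default_factory=dict)

    def register(self, handle: str, display_name: str) -> Attendee:
        """Create an account on this event's instance and apply the follow-everyone default."""
        newcomer = Attendee(handle, display_name)
        for member in self.attendees.values():
            newcomer.following.add(member.handle)  # the newcomer follows everyone already there
            member.following.add(newcomer.handle)  # and everyone already there follows the newcomer
        self.attendees[newcomer.handle] = newcomer
        return newcomer


# A PodCamp-style event gets its own instance; handles below are placeholders.
pcmtl = EventInstance("pcmtl08")
pcmtl.register("presenter_one", "First Presenter")
pcmtl.register("attendee_one", "First Attendee")
```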

Since the instance’s member list is limited, an app could make the ‘@’ notation easier: incremental search, address book, auto-completion… Presenters’ @ handles are implied during their presentations, so there’s no need to type their full names to cite them. With multi-person conversations, it gets slightly more complicated, but we can still have a short list for a panel, or other methods for something broader. Moderators could even use it to manage the queue of interventions. (That one’s candy! Imagine what it would do for L’Université autrement!)
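
And since the member list is short and known in advance, the auto-completion part could be as simple as prefix matching against registered handles. Another small sketch, again with made-up names and placeholder handles:

```python
def complete_handle(typed: str, members: list) -> list:
    """Incremental search: list the registered handles matching what has been typed so far."""
    prefix = typed.lstrip("@").lower()
    return sorted(handle for handle in members if handle.lower().startswith(prefix))


# With a short, known member list, a couple of characters are enough to cite someone.
members = ["presenter_one", "presenter_two", "attendee_one"]
print(complete_handle("@pre", members))  # ['presenter_one', 'presenter_two']
```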

As Evan Prodromou discussed at PodCamp Montréal, the whole question of “microcasting” is gaining momentum. With a microblogging instance tied to an event, we could have internal file distribution: presentation files (Powerpoint or otherwise), media files, links, etc. Presenters can prepare everything in advance and send their material at the right moment. At a stretch, it could even replace some uses of Powerpoint!

Instead of having to type event hashtags (#pcmtl08), you just send your messages on the specific instance. People who aren’t attending the event aren’t flooded with unwanted messages. No need to unfollow someone who’s attending such an event (as happened with #pcmtl08).

Once the event is over, you can do what you want with the instance. You can come back to it, for instance to consult the complete list of participants. You can rework your notes into blogposts or even reports. Or you can set it all aside.

For the rest, it would be like the use of Twitter at SXSWi (including the Lacy affair, which I find fascinating) or any other typical geek event. In some cases, people send tweets directly to screens around the presenters.

With a dedicated instance, things are simpler to manage. What’s more, there’s little risk of the instance going down, as was often the case with Twitter for quite a long stretch.

This is a series of ideas thrown out there, and I’m not attached to any specific detail. But I believe there’s a real need, and that it helps to bring several things together on a single platform. I hadn’t thought much about it, but it could also have interesting effects for conference management, for online meetings, for news coverage of current events, etc. Some might even think up business models that include microblogging as added value. (Different account types, the possibility of attending some conferences for free without an account on the instance…)

What do you think?

Why Is PRI’s The World Having Social Media Issues?

Some raw notes on why PRI’s The World (especially “The World Tech Podcast” or WTP) is having issues with social media. It may sound bad, for many reasons. But I won’t adapt the tone.

No offense intended.

Thing is, I don’t really care about WTP, The World, or even the major media outlets behind them (PRI, BBC, Discovery).

Reason for those notes: WTP host Clark Boyd mentioned that their social media strategy wasn’t working as well as they expected. Seemed like a nice opportunity to think about social media failures from mainstream media outlets.

My list of reasons is not exhaustive and it’s not really in order of importance.

Social media works best when people contribute widely. In other words, a podcaster (or blogger, etc.) who contributes to somebody else’s podcast (blog, etc.) is likely to attract the kind of mindshare afforded social media outlets. Case in point, I learnt about WTP through Erik Hersman because Afrigadget was able to post WTP content. A more efficient strategy is to actually go and contribute to other people’s social media.

The easiest way to do it is to link to other people, especially other blogs. Embedding a YouTube video can have some effects but a good ol’ trackback is so much more effective. In terms of attention economy, the currency is, well, attention: you need to pay attention to others!
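
For what it’s worth, sending a trackback is a very small thing, technically. Here is a rough sketch following the old TrackBack convention (a simple form-encoded POST); the URLs in the usage example are placeholders, and any real blog exposes its own trackback endpoint:

```python
import urllib.parse
import urllib.request


def send_trackback(trackback_url: str, post_url: str, title: str, excerpt: str, blog_name: str) -> str:
    """POST the standard TrackBack fields to another blog's trackback endpoint."""
    payload = urllib.parse.urlencode({
        "url": post_url,
        "title": title,
        "excerpt": excerpt,
        "blog_name": blog_name,
    }).encode("utf-8")
    request = urllib.request.Request(
        trackback_url,
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded; charset=utf-8"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")  # the endpoint answers with a small XML status


# Hypothetical usage: ping a post you just linked to from your own entry.
# send_trackback("http://example.com/blog/trackback/123",
#                "http://myblog.example.com/2008/09/my-entry",
#                "My entry title", "A short excerpt…", "My Blog")
```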

Clark Boyd says WTP isn’t opposed to interacting with listeners. Nice… Yet, there hasn’t been any significant move toward interaction with listeners. Not even “letters to the editor” which could be read on the radio programme. No button to leave audio feedback. Listeners who feel they’re recognized as being interesting are likely to go the social media route.

While it’s a technology podcast, WTP is formatted as a straightforward radio news bulletin. “Stories” are strung together in a seamless fashion, most reports follow a very standard BBC format, there are very few “conversations” with non-journalists (interviews don’t count as conversations)… Such shows tend not to attract the same crowd as typical social media formats do. So WTP probably attracts a radio crowd and radio crowds aren’t necessarily that engaged in social media. Unless there’s a compelling reason to engage, but that’s not the issue I want to address.

What’s probably the saddest part is that The World ostensibly has a sort of global mission. Of course, they’re limited by language. But their coverage is even more Anglo-American than it needs to be. A far cry from Global Voices (and even GV tends to be somewhat Anglophone-centric).

The fact that WTP is part of The World (which is itself produced/supported by PRI, BBC, and Discovery) is an issue, in terms of social media. Especially given the fact that WTP-specific information is difficult to find. WTP is probably the one part of The World which is savvy to social media so the difficulty of finding WTP is made even more noticeable by the lack of a dedicated website.

WTP does have its own blog. But here’s how it shows up:

Discovery News: Etherized.

The main URL given for this blog? <tinyurl.com/wtpblog> Slightly better than <http://tinyurl.com/6g3me9> (which also points to the same place). But very forgettable. No branding, no notion of an autonomous entity, little personality.

Speaking of personality, the main show’s name sounds problematic: The World. Not the most unique name in the world! 😉 On WTP, correspondents and host often use “the world” to refer to their main show. Not only is it confusing but it tends to sound extremely pretentious. And pretension is among the trickiest attitudes in social media.

A strange dimension of WTP’s online presence is that it isn’t integrated. For instance, their main blog doesn’t seem to have direct links to its Twitter and Facebook profiles. As we say in geek circles: FAIL!

To make matters worse, WTP is considering pulling its Facebook page. As Facebook pages require zero maintenance and may help listeners associate themselves with the show, I have no idea why they would do such a thing. I’m actually having a very hard time finding that page, which might explain why it has had zero growth in the recent past. (Those who found it originally probably had friends who were adding it. Viral marketing works in bursts.) WTP host Clark Boyd doesn’t seem to have a public profile on Facebook. Facebook searches for WTP and “The World Tech Podcast” don’t return obvious results. Oh! There you go. I found the link to that Facebook page: <http://www.new.facebook.com/home.php#/group.php?gid=2411818715&ref=ts>. Yes, the link they give is directly to the new version of Facebook. Yes, it has extra characters. No, it’s not linked in an obvious fashion.

That link was hidden in the August 22 post on WTP’s blog. But because every post has a link with “Share on Facebook” text, searching the page for “Facebook” returns all blogposts on the same page (not to mention the “Facebook” category for posts, in the right-hand sidebar). C’mon, folks! How about a Facebook badge? It’s free and it works!

Oh, wait! It’s not even a Facebook page! It’s a Facebook group! The difference between a group and a page seems quite small to the naked eye but, ever since Fb came out with pages (a year or so ago), most people have switched from groups to pages. That might be yet another reason why WTP isn’t getting its “social media cred.” Not to mention that maintaining a Facebook group takes at least a bit of time and doesn’t tend to provide direct results. Facebook groups may work well with preestablished groups but they’re not at all effective at bringing together disparate people to discuss diverse issues. Unless you regularly send messages to group members, which is the best way to annoy people and generate actual animosity against the represented entity.

On that group, I eventually learn that WTP host Clark Boyd has his own WTP-themed blog. In terms of social media, the fact that I only found that blog after several steps indicates a broader problem, IMHO.

And speaking of Clark Boyd… He’s most likely a great person and an adept journalist. But is WTP his own personal podcast with segments from his parent entity or is WTP, like the unfortunately defunct Search Engine, a work of collaboration? If the latter is true, why is Boyd alone between segments in the podcast, why is his picture the only one on the WTP blog, and why is his name the domain for the WTP-themed blog on WordPress.com?

Again, no offence. But I just don’t grok WTP.

There’s one trap I’m glad WTP can avoid. I won’t describe it too much for fear that it will represent the main change in strategy. Not because I get the impression I may have an impact. But, in attention economy, “the squeaky wheel gets the grease.”

Oops! I said too much… 😦

I said I don’t care about WTP. It’s still accurate. But I do care about some of the topics covered by WTP. I wish there were more social media with a modicum of cultural awareness. In this sense, WTP is a notch above Radio Open Source and a few notches below Global Voices. But the podcast for Global Voices may have podfaded and Open Source sounds increasingly U.S.-centric.

Ah, well…

Google for Educational Contexts

Interesting wishlist, over at tbarrett’s classroom ICT blog.

11 Google Apps Improvements for the Classroom | ICT in my Classroom.

In a way, Google is in a unique position in terms of creating the optimal set of classroom tools. And Google teams have an interest in educational projects (as made clear by Google for Educators, Google Summer of Code, Google Apps for schools…).
What seems to be missing is integration. Maybe Google is taking its time before integrating all of its services and apps. After all, the integration of Google Notebook and Google Bookmarks was fairly recent (and we can easily imagine a further integration with Google Reader). But some of us are a bit impatient. Or too enthusiastic about tools.

Because I just skimmed through the Google Chrome comic book, I get to thinking that, maybe, Google is getting ready to integrate its tools in a neat way. Not specifically meant for schools but, in the end, an integrated Google platform can be developed into an education-specific set of applications.
After all, apart from Google Scholar, we’re talking about pretty much the same tools as those used outside of educational contexts.

What tools am I personally thinking about? Almost everything Google does or has done could be useful in educational contexts. From Google Apps (which includes Google Docs, Gmail, Google Sites, GTalk, Gcal…) to Google Books and Google Scholar or even Google Earth, Google Translate, and Google Maps. Not to mention OpenSocial, YouTube, Android, Blogger, Sketchup, Lively

Not that Google’s versions of all of these tools and services are inherently more appropriate for education than those developed outside of Google. But it’s clear that Google has an edge in terms of its technology portfolio. Can’t we just imagine a new kind of Learning Management System leveraging all the neat Google technologies and using a social networking model?

Educational contexts do have some specific requirements. Despite Google’s love affair with “openness,” schools typically require protection for different types of data. Some would also say that Google’s usual advertisement-supported model may be inappropriate for learning environments. The fact that Google Apps are ad-free for students, faculty, and staff might then be a sign that Google does understand school-focused requirements.

Ok, I’m thinking out loud. But isn’t this what wishlists are about?

Crazy App Idea: Happy Meter

I keep getting ideas for apps I’d like to see on Apple’s App Store for iPod touch and iPhone. This one may sound a bit weird but I think it could be fun. An app where you can record your mood and optionally broadcast it to friends. It could become rather sophisticated, actually. And I think it can have interesting consequences.

The idea mostly comes from Philippe Lemay, a psychologist friend of mine and fellow PDA fan. Haven’t talked to him in a while but I was just thinking about something he did, a number of years ago (in the mid-1990s). As part of an academic project, Philippe helped develop a PDA-based research program whereby subjects would record different things about their state of mind at intervals during the day. Apart from the neatness of the data gathering technique, this whole concept stayed with me. As a non-psychologist, I personally get the strong impression that recording your moods frequently during the day can actually be a very useful thing to do in terms of mental health.

And I really like the PDA angle. Since I think of the App Store as transforming Apple’s touch devices into full-fledged PDAs, the connection is rather strong between Philippe’s work at that time and the current state of App Store development.

Since that project of Philippe’s, a number of things have been going on which might help refine the “happy meter” concept.

One is that “lifecasting” became rather big, especially among certain groups of Netizens (typically younger people, but also many members of geek culture). Though the lifecasting concept applies mostly to video streams, there are connections with many other trends in online culture. The connection with vidcasting specifically (and podcasting generally) is rather obvious. But there are other connections. For instance, with mo-, photo-, or microblogging. Or even with all the “mood” apps on Facebook.

Speaking of Facebook as a platform, I think it meshes especially well with touch devices.

So, “happy meter” could be part of a broader app which does other things: updating Facebook status, posting tweets, broadcasting location, sending personal blogposts, listing scores in a Brain Age type game, etc.

Yet I think the “happy meter” could be useful on its own, as a way to track your own mood. “Turns out, my mood was improving pretty quickly on that day.” “Sounds like I didn’t let things affect me too much despite all sorts of things I was going through.”

As a mood-tracker, the “happy meter” should be extremely efficient. Because it’s easy, I’m thinking of sliders. One main slider for general mood and different sliders for different moods and emotions. It would also be possible to extend the “entry form” on occasion, when the user wants to record more data about their mental state.

Of course, everything would be saved automatically and “sent to the cloud” on occasion. There could be a way to selectively broadcast some slider values. The app could conceivably send reminders to the user to update their mood at regular intervals. It could even serve as a “break reminder” feature. Though there are limitations on OSX iPhone in terms of interapplication communication, it’d be even neater if the app were able to record other things happening on the touch device at the same time, such as the music which is playing or the apps which have been used.
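
To make the idea a bit more concrete, here is a minimal sketch of a timestamped mood entry with selective broadcast, written in Python with made-up names (nothing here reflects an actual iPhone API; it’s just the shape of the data):

```python
import json
import time


def record_mood(general: float, sliders: dict, share: set = frozenset()) -> dict:
    """Append a timestamped mood entry to a local log; return only the values marked for broadcast."""
    entry = {"timestamp": time.time(), "general": general, "sliders": sliders}
    with open("mood_log.jsonl", "a") as log:   # the full history stays on the device
        log.write(json.dumps(entry) + "\n")
    # Selective broadcast: only the sliders the user explicitly chose to share leave the device.
    return {name: value for name, value in sliders.items() if name in share}


# General mood at 0.8 plus two specific sliders; only "energy" gets broadcast.
public = record_mood(0.8, {"energy": 0.7, "stress": 0.3}, share={"energy"})
print(public)  # {'energy': 0.7}
```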

Now, very obviously, there are lots of privacy issues involved. But what social networking services have taught us is that users can have pretty sophisticated notions of privacy management, if they’re given the chance. For instance, adept Facebook users may seem to indiscriminately post just about everything about themselves but are often very clear about what they want to “let out,” in context. So, clearly, every type of broadcasting should be controlled by the user. No opt-out here.

I know this all sounds crazy. And it all might be a very bad idea. But the thing about letting my mind wander is that it helps me remain happy.

Note-Taking on OSX iPhone

Attended Dan Dennett’s “From Animal to Person: How Culture Makes Up our Minds” talk, yesterday. An event hosted by UQAM’s Cognitive Science Institute. Should blog about this pretty soon. It was entertaining and some parts were fairly stimulating. But what surprised me the most had nothing to do with the talk: I was able to take notes efficiently using the onscreen keyboard on my iPod touch (my ‘touch).

As I blogged yesterday, in French, it took me a while to realize that switching keyboard language on the ‘touch also changed the dictionary used for text prediction. Very sensible, but I hadn’t realized it. Writing in English with French dictionary predictions was rather painful. I basically had to bypass the dictionary predictions on most words. Even “to” was transformed into “go” by the predictive keyboard, and I didn’t necessarily notice all the substitutions being made. Really, it was a frustrating experience.

It may seem weird that it would take me a while to realize that I could get an English predictive dictionary in a French interface. One reason for the delay is that I expect some degree of awkwardness in some software features, even with some Apple products. Another reason is that I wasn’t using my ‘touch for much text entry, as I’m pretty much waiting for OSX iPhone 2.0 which should bring me alternative text entry methods such as Graffiti, MessagEase and, one can dream, Dasher. If these sound like excuses for my inattention and absent-mindedness, so be it. 😀

At any rate, I did eventually find out that I could switch back and forth between French and English dictionaries for predictive text entry on my ‘touch’s onscreen keyboard. And I’ve been entering a bit of text through this method, especially answers to a few emails.

But, last night, I thought I’d give my ‘touch a try as a note-taking device. I’ve been using PDAs for a number of years and note-taking has been a major component of my PDA usage pattern. In fact, my taking notes on a PDA has been so conspicuous that some people seem to associate me quite directly with this. It may even have helped garner a gadget-freak reputation, even though my attitude toward gadgets tends to be quite distinct from the gadget-freak pattern.

For perhaps obvious reasons, I’ve typically been able to train myself to efficiently use handheld text entry methods. On my NewtonOS MessagePad 130, I initially “got pretty good” at using the default handwriting recognition. This surprised a lot of people because human beings usually have a very hard time deciphering my handwriting. Still on the Newton, switching to Graffiti, I became rather proficient at entering text using this shorthand method. On PalmOS devices (HandSpring Visor and a series of Sony Clié devices), I usually alternated between Graffiti and MessagEase. In all of these cases, I was typically able to take rather extensive notes during different types of oral presentations or simply when I thought about something. Though I mostly used paper to take notes during classes throughout my academic coursework, PDA text entry was usually efficient enough that I could write down some key things in realtime. In fact, I’ve used PDAs rather extensively to take notes during ethnographic field research.

So, note taking was one of the intended uses for my iPod touch. But, again, I thought I would have to wait for text entry alternatives to the default keyboard before I could do it efficiently. So that’s why I was so surprised, yesterday, when I found out that I was able to efficiently take notes during Dennett’s talk using only the default OSX iPhone onscreen keyboard.

The key, here, is pretty much what someone at Apple was describing during some keynote session (might have been the “iPhone Roadmap” event): you need to trust the predictions. Yes, it sounds pretty “touchy-feely” (we’re talking about “touch devices,” after all 😉 ). But, well, it does work better than you would expect.

The difference is even more striking for me because I really was “fighting” the predictions. I couldn’t trust them because most of them were in the wrong language. But, last night, I noticed how surprisingly accurate the predictions could be, even with a large number of characters being mistyped. Part of it has to do with the proximity part of the algorithm. If I type “xartion,” the algorithm guesses that I’m trying to type “cartoon” because ‘x’ is close to ‘c’ and ‘i’ is close to ‘o’ (not an example from last night but one I just tried). The more confident you are that the onscreen keyboard will accurately predict what you’re trying to type, the more comfortably you can enter text.  The more comfortable you are at entering text, the more efficient you become at typing, which begins a feedback loop.
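
To make the proximity idea concrete, here is a rough sketch of that kind of matching: score candidate words by how close each typed character sits to the intended key on a QWERTY grid. The layout table and the scoring are simplifications of mine, not Apple’s actual algorithm:

```python
# A simplified QWERTY layout, used to estimate how far apart two keys sit.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
POS = {char: (row, col) for row, keys in enumerate(ROWS) for col, char in enumerate(keys)}


def key_distance(a: str, b: str) -> int:
    """Rough 'finger distance' between two keys on the layout."""
    (r1, c1), (r2, c2) = POS[a], POS[b]
    return abs(r1 - r2) + abs(c1 - c2)


def suggest(typed: str, dictionary: list) -> str:
    """Pick the same-length word whose keys are, overall, closest to what was actually typed."""
    candidates = [word for word in dictionary if len(word) == len(typed)]
    return min(candidates, key=lambda word: sum(key_distance(t, c) for t, c in zip(typed, word)))


print(suggest("xartion", ["cartoon", "caution", "partial"]))  # 'cartoon'
```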

Because I didn’t care that specifically about the content of Dennett’s talk, it was an excellent occasion to practise entering text on my ‘touch. The stakes of “capturing” text were fairly low. It almost became a game. When you add characters to a string which is bringing up the appropriate suggestion and then delete those extra characters, the suggestion is lost. In other words, using the example above, if I type “xartion,” I get “cartoon” as a suggestion and simply need to type a space or any non-alphabetic character to accept that suggestion. But if I go on typing “xartionu” and go back to delete the ‘u,’ the “cartoon” suggestion disappears. So I was playing a kind of game with the ‘touch as I was typing relatively long strings and trying to avoid extra characters. I lost a few accurate suggestions and had to retype those, but the more I trusted the predictive algorithm, the less frequently I had to retype.

During a 90-minute talk, I entered about 500 words. While it may not sound like much, I would say that it captured the gist of what I was trying to write down. I don’t think I would have written down much more if I had been writing on paper. Some of these words were the same as the ones Dennett uttered but the bulk of those notes were my own thoughts on what Dennett was saying. So there were different cognitive processes going on at the same time, which greatly slows down each specific process. I would still say that I was able to follow the talk rather closely and that my notes are pretty much appropriate for the task.

Now, I still have some issues with entering text using the ‘touch’s onscreen keyboard.

  • While it makes sense to make accepting all suggestions the default, there could be an easier way to refuse a suggestion than tapping the box where it appears.
  • It might also be quite neat (though probably inefficient) if the original characters typed by the user were somehow kept in memory. That way, one could correct inaccurate predictions using the original string.
  • The keyboard is both very small for fingers and quite big for the screen.
  • Switching between alphabetic characters and numbers is somewhat inefficient.
  • While predictions have some of the same effect, the lack of a “spell as you type” feature makes it harder to be confident about avoiding typos.
  • Dictionary-based predictions are still inefficient in bilingual writing.
  • The lack of copy-paste changes a lot of things about text entry.
  • There’s basically no “command” or “macro” available during text entry.
  • As a fan of outliners, I’m missing the possibility to structure my notes directly as I enter them.
  • A voice recorder could do wonders in conjunction with text entry.
  • I really just wish Dasher were available on OSX iPhone.

All told, taking notes on the iPod touch is more efficient than I thought it’d be but less pleasant than I wish it could become.

Bilingualism on OSX iPhone

Perhaps a bit silly of me, but I hadn’t understood that by switching keyboards on my iPod touch, I was also switching dictionaries for text prediction.

Since the Canadian French keyboard works just as well in English as in French, I had only set up that keyboard. But I write more often in English than in French, and all sorts of French suggestions made writing in English very difficult.

Recently, I wanted to type the dollar sign (“$”) on my iPod touch but, every time I tapped that sign on the virtual keyboard, the euro sign (“€”) appeared instead. Very odd, especially since it really is a Canadian French keyboard (QWERTY with “é” at the bottom right), not a French keyboard (AZERTY, with numbers behind the shift key…). So I added a U.S. keyboard to the configuration and not only can I now type the dollar sign, but the suggestions are now in U.S. English. Still not ideal, but very different from getting French suggestions while writing in English. I also imagine there is a personalized dictionary that doesn’t depend on a specific language, since some terms I type often show up in one language as in the other.

I really hope the OSX iPhone 2.0 update will bring various improvements on the “text input” side. Multilingual support already seems to be built in, especially for East Asian languages. But I also hope there will be new options for entering text. Personally, because I’m comfortable with these systems, I would love Graffiti, MessagEase and, one can dream, Dasher. I have good hope for the first two, since they already exist on iPhone. As for Dasher, since it’s an open source project, it might “only” take an OSX iPhone developer interested in Dasher to port it from Mac OS X to OSX iPhone. If that works out, entering text on an iPod touch could become pleasant, efficient, and useful. In my view, Dasher would be very appropriate for iPhone-type devices (what I like to call “touch devices,” including devices made by companies other than Apple).

Visualizing Touch Devices in Education

Took me a while before I watched this concept video about iPhone use on campus.

Connected: The Movie – Abilene Christian University

Sure, it’s a bit campy. Sure, some features aren’t available on the iPhone yet. But the basic concepts are pretty much what I had in mind.

Among things I like in the video:

  • The very notion of student empowerment runs at the centre of it.
  • Many of the class-related applications presented show an interest in the constructivist dimensions of learning.
  • Material is made available before class. Face-to-face time is for engaging in the material, not rehashing it.
  • The technology is presented as a way to ease the bureaucratic aspects of university life, relieving a burden on students (and, presumably, on everyone else involved).
  • The “iPhone as ID” concept is simple yet powerful, in context.
  • Social networks (namely Facebook and MySpace, in the video) are embedded in the campus experience.
  • Blended learning (called “hybrid” in the video) is conceived as an option, not as an obligation.
  • Use of the technology is specifically perceived as going beyond geek culture.
  • The scenarios (use cases) are quite realistic in terms of typical campus life in the United States.
  • While “getting an iPhone” is mentioned as a perk, it’s perfectly possible to imagine technology as a levelling factor with educational institutions, lowering some costs while raising the bar for pedagogical standards.
  • The shift from “eLearning” to “mLearning” is rather obvious.
  • ACU already does iTunes U.
  • The video is released under a Creative Commons license.

Of course, there are many directions things can go from here. Not all of them are in line with the ACU dream scenario. But I’m quite hopeful, judging from some apparently random facts: that Apple may sell iPhones through universities, that Apple has plans for iPhone use on campuses, that many of the “enterprise features” of iPhone 2.0 could work in institutions of higher education, that the Steve Jobs keynote made several mentions of education, that Apple bundles the iPod touch with Macs, that the OLPC XOXO is now conceived more as a touch handheld than as a laptop, that (although delayed) Google’s Android platform can participate in the same usage scenarios, and that browser-based computing apparently has a bright future.

Waiting for Other Touch Devices?

Though I’m interpreting Apple’s current back-to-school special to imply that we might not see radically new iPod touch models until September, I’m still hoping that there will be a variety of touch devices available in the not-so-distant future, whether or not Apple makes them.

Turns out, the rumour mill has some items related to my wish, including this one:

AppleInsider | Larger Apple multi-touch devices move beyond prototype stage

This could be excellent news for the device category as a whole and for Apple itself. As explained before, I’m especially enthusiastic about touch devices in educational contexts.

I’ve been lusting over an iPod touch since it was announced. I sincerely think that an iPod touch will significantly enhance my life. As strange as it may sound, especially given the fact I’m no gadget freak, I think frequently about the iPod touch. Think Wayne, in Wayne’s World, going to a music store to try a guitar (and being denied the privilege to play Stairway to Heaven). That’s almost me and the iPod touch. When I go to an Apple Store, I spend precious minutes with a touch.

Given my current pattern of computer use, the fact that I have no access to a laptop at this point, and the availability of WiFi connections at some interesting spots, I think an iPod touch will enable me to spend much less time in front of this desktop, spend much more time outside, and focus on my general well-being.

One important feature the touch has, which can have a significant effect on my life, is instant-on. My desktop still takes minutes to wake up from “Stand by.” Several times during the day, the main reason I wake my desktop is to make sure I haven’t received important email messages. (I don’t have push email.) For a number of reasons, what starts out as simple email-checking frequently ends up being a more elaborate browsing session. An iPod touch would greatly reduce the need for those extended sessions and let me “do other things with my life.”

Another reason a touch would be important in my life at this point is that I no longer have access to a working MP3 player. While I don’t technically need any portable media player to be happy, getting my first iPod just a few years ago was an important change in my life. I’ll still miss my late iRiver’s recording capabilities, but it’s now possible to get microphone input on the iPod touch. Eventually, the iPod touch could become a very attractive tool for fieldwork recordings. Or for podcasting. Given my audio orientation, a recording-capable iPod touch could be quite useful. Even more so than an iPod Classic with recording capabilities.

There are a number of other things which should make the iPod touch very useful in my life. A set of them have to do with expected features and applications. One is Omni Group’s intention to release their OmniFocus task management software through the iPhone SDK. As an enthusiastic user of OmniOutliner for most of the time I’ve spent on Mac OS X laptops, I can just imagine how useful OmniFocus could be on an iPod touch. Getting Things Done, the handheld version. It could help me streamline my whole workflow, the way OO used to do. In other words: OF on an iPod touch could be this fieldworker’s dream come true.

There are also applications to be released for Apple’s Touch devices which may be less “utilitarian” but still quite exciting. Including the Trism game. In terms of both “appropriate use of the platform” and pricing, Trism scores high on my list. I see it as an excellent example of what casual gaming can be like. One practical aspect of casual gaming, especially on such a flexible device as the iPod touch, is that it can greatly decrease stress levels by giving users “something to do while they wait.” I’ve had that experience with other handhelds. Whether it’s riding the bus or waiting for a computer to wake up from stand by, having something to do with your hands makes the situation just a tad bit more pleasant.

I’m also expecting some new features to eventually be released through software, including some advanced podcatching features like wireless synchronization of podcasts and, one can dream, a way to interact directly with podcast content. Despite having been an avid podcast listener for years, I think podcasts aren’t nearly “interactive” enough. Software on a touch device could solve this. But that part is wishful thinking. I tend to do a lot of wishlists. Sometimes, my daydreams become realities.

The cool thing is, it looks as though I’ll be able to get my own touch device in the near future. w00t! 😀

Even if Apple does release new Touch devices, the device I’m most likely to get is an iPod touch. Chances are that I might be able to get a used 8GB touch for a decent price. Especially if, as is expected for next Monday, Apple officially announces the iPhone for Canada (possibly with a very attractive data plan). As a friend was telling me, once Canadians are able to get their hands on an iPhone directly in Canada, there’ll likely be a number of used iPod touches for sale. With a larger supply of used iPod touches and a presumably lower demand for them, we can expect a lower price.

Another reason I might get an iPod touch is that a friend of mine has been talking about helping me with this purchase. Though I feel a bit awkward about accepting this kind of help, I’m very enthusiastic at the prospect.

Watch this space for more on my touch life. 😉

Nailed It! Keyboard-Less OLPC XO (Update)

It’s a strange feeling that I get fairly frequently. I dream up some tech “thing” (hardware device, software tool, service) and it’s unveiled shortly thereafter. At the risk of sounding boastful, it feels as if I have my finger on the pulse of the “industry.”

Of course, there are other explanations. One is that I dream up so many things that some of them are bound to come true at some point. Another is that I may have internalized some information about those products ready to be unveiled from some source and then forgotten where I got this information. Or maybe what I’m dreaming up is so obvious that just about everybody predicted it.

Still, it’s a strange feeling. I feel prescient.

Latest case in point: the OLPC’s XOXO (XO-2) will be keyboard-less, just as I dreamt about on another blog and just as I described here, yesterday. As could be expected, some people are already expressing negative opinions about the keyboard-less design. Maybe they’re just surprised. But I can’t help but think that designing the device without a hardware keyboard is an important step toward radically creative thinking. Several aspects of the XO-1 were very innovative and could be described as “creative solutions to important problems.” But the shift to a keyboard-free device is closer to “creating a new device category.” Of course I’m biased but I do think this new device category can have game-changing implications. The fact that the device is much smaller and more specifically designed as an eBook reader also goes with this “new device category” idea. At the risk of belabouring the point, the XOXO is almost exactly what I had in mind last night as a “handheld for the rest of us.”

I’m also glad that this radical shift in design explicitly relates to cultural awareness. What I mean is, the OLPC team is actually saying that the double-screen will be used for diverse (on-screen) “keyboards.” If I hadn’t thought of the same thing myself, I would call it “genius!” 🙂

Now, to go back to the notion of feeling eerily prescient. I can wash the feeling away by myself. I’ve written a number of things about possible features for the OLPC or other devices and the lack of keyboard seems to be the only one which stuck. In fact, although I did think about a Nintendo-like dual-screen system at several points, I didn’t write it down as a prediction or even a part of my wishlist.

Keyboard-less devices are rather common, these days. Apart from the Nintendo DS and DS Lite that people are using as a point of comparison for the XOXO, there are several (multi-)touch based devices out there which may have served as inspiration for both the OLPC redesign and my own dream. In fact, some rumours seem to indicate that Apple might release a dual-screen portable at some point, maybe with double-sided panels. I, for one, would say that such a design would make the long-rumoured Apple tablet much more practical. In other words: I wasn’t prescient, in the OLPC case, I just dreamt up what was the most logical next step.

Also, it’s possible that I read or heard something which made me think specifically of a keyboard-less OLPC. I kind of doubt it and I don’t really want to look for such an occurrence, but now that I know that it was already planned, I admit that I may have seen some mention of the keyboard-less design.

[Edit, May 21, 1:20 a.m.: Apparently, the International Herald Tribune had already published a preview of the device by Friday, May 16. I’m pretty sure I had seen nothing of that IHT preview and I really don’t think I was able to see any description of a dual-screen XO by the time I posted my blog entry and other comments about a keyboard-less XO. But the fact that it was, somehow, in the open makes me more suspicious of my own intuitions.]

Sheesh!