WordPress as Content Directory: Getting Somewhere

Using WordPress to build content directories and databases.

{I tend to ramble a bit. If you just want a step-by-step tutorial, you can skip to here.}

Woohoo!

I feel like I’ve reached a milestone in a project I’ve had in mind, ever since I learnt about Custom Post Types in WordPress 3.0: Using WordPress as a content directory.

The concept may not be so obvious to anyone else, but it’s very clear to me. And probably much clearer for anyone who has any level of WordPress skills (I’m still a kind of WP newbie).

Basically, I’d like to set something up through WordPress to make it easy to create, review, and publish entries in content databases. WordPress is now a full-fledged Content Management System, and the kind of “content management” I’d like to enable amounts to a directory system.

Why WordPress? Almost glad you asked.

These days, several of the projects on which I work revolve around WordPress. By pure coincidence. Or because WordPress is “teh awsum.” No idea how representative my sample is. But I got to work on WordPress for (among other things): an academic association, an adult learners’ week, an institute for citizenship and social change, and some of my own learning-related projects.

There are people out there arguing about the relative value of WordPress and other Content Management Systems. Sometimes, WordPress may fall short of people’s expectations. Sometimes, the pro-WordPress rhetoric is strong enough to sound like fanboism. But the matter goes beyond marketshare, opinions, and preferences.

In my case, WordPress just happens to be a rather central part of my life, these days. To me, it’s both a question of WordPress being “the right tool for the job” and the work I end up doing being appropriate for WordPress treatment. More than a simple causality (“I use WordPress because of the projects I do” or “I do these projects because I use WordPress”), it’s a complex interaction which involves diverse tools, my skillset, my social networks, and my interests.

Of course, WordPress isn’t perfect nor is it ideal for every situation. There are cases in which it might make much more sense to use another tool (Twitter, TikiWiki, Facebook, Moodle, Tumblr, Drupal..). And there are several things I wish WordPress did more elegantly (such as integrating all dimensions in a single tool). But I frequently end up with WordPress.

Here are some things I like about WordPress:

This last one is where the choice of WordPress for content directories starts making the most sense. Not only is it easy for me to use and build on WordPress but the learning curves are such that it’s easy for me to teach WordPress to others.

A nice example is the post editing interface (same in the software and service). It’s powerful, flexible, and robust, but it’s also very easy to use. It takes a few minutes to learn and is quite sufficient to do a lot of work.

This is exactly where I’m getting to the core idea for my content directories.

I emailed the following description to the digital content editor for the academic organization for which I want to create such content directories:

You know the post editing interface? What if instead of editing posts, someone could edit other types of contents, like syllabi, calls for papers, and teaching resources? What if fields were pretty much like the form I had created for [a committee]? What if submissions could be made by people with a specific role? What if submissions could then be reviewed by other people, with another role? What if display of these items were standardised?

Not exactly sure how clear my vision was in her head, but it’s very clear for me. And it came from different things I’ve seen about custom post types in WordPress 3.0.

For instance, the following post has been quite inspiring:

I almost had a drift-off moment.

But I wasn’t able to wrap my head around all the necessary elements. I perused and read a number of things about custom post types, and I tried a few things. But I always got stuck at some point.

Recently, a valuable piece of the puzzle was provided by Kyle Jones (whose blog I follow because of his work on WordPress/BuddyPress in learning, a focus I share).

Setting up a Staff Directory using WordPress Custom Post Types and Plugins | The Corkboard.

As I discussed in the comments to this post, it contained almost everything I needed to make this work. But the two problems Jones mentioned were major hurdles, for me.

After reading that post, though, I decided to investigate further. I eventually got some material which helped me a bit, but it still wasn’t sufficient. Until tonight, I kept running into obstacles which made the process quite difficult.

Then, while trying to solve a problem I was having with Jones’s code, I stumbled upon the following:

Rock-Solid WordPress 3.0 Themes using Custom Post Types | Blancer.com Tutorials and projects.

This post was useful enough that I created a shortlink for it, so I could have it on my iPad and follow along: http://bit.ly/RockSolidCustomWP

By itself, it might not have been sufficient for me to really understand the whole process. And, following that tutorial, I replaced the first bits of code with use of the neat plugins mentioned by Jones in his own tutorial: More Types, More Taxonomies, and More Fields.

I’ve played with this a few times, and I can now provide an actual tutorial. I’m now doing the whole thing “from scratch” and will write down all the steps.

This is with the WordPress 3.0 blogging software installed on a Bluehost account. (The WordPress.com blogging service doesn’t support custom post types.) I use the default Twenty Ten theme as a parent theme.

Since I use WordPress Multisite, I’m creating a new test blog (in Super Admin->Sites, “Add New”). Of course, this wasn’t required, but it helps me make sure the process is reproducible.

Since I already installed the three “More Plugins” (but they’re not “network activated”), I go to the Plugins menu to activate each of them.

I can now create the new “Product” type, based on that Blancer tutorial. To do so, I go to the “More Types” Settings menu, I click on “Add New Post Type,” and I fill in the following information: post type names (singular and plural) and the thumbnail feature. Other options are set by default.

I also set the “Permalink base” in Advanced settings. Not sure it’s required but it seems to make sense.

I click on the “Save” button at the bottom of the page (forgot to do this, the last time).
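For the code-inclined: “More Types” is essentially a front-end for WordPress’s register_post_type() function. Here’s a rough sketch of what a code-only equivalent of the step above might look like, as a tiny plugin file (or inside a child theme’s functions.php). The labels and options are my own guesses, not the plugin’s actual output:

<?php
// Sketch only: registers a "product" post type with thumbnail support
// (the "featured image") and a "product" permalink base.
add_action( 'init', 'prod_register_product_type' );

function prod_register_product_type() {
	register_post_type( 'product', array(
		'labels' => array(
			'name'          => 'Products',
			'singular_name' => 'Product',
		),
		'public'   => true,
		'supports' => array( 'title', 'editor', 'thumbnail' ),
		'rewrite'  => array( 'slug' => 'product' ),
	) );
}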

I then go to the “More Fields” settings menu to create a custom box for the post editing interface.

I add the box title and change the “Use with post types” options (no use in having this in posts).

(Didn’t forget to click “save,” this time!)

I can now add the “Price” field. To do so, I need to click on the “Edit” link next to the “Product Options” box I just created and click “Add New Field.”

I add the “Field title” and “Custom field key”:

I set the “Field type” to Number.

I also set the slug for this field.
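Incidentally, “More Fields” stores this as plain post meta under the “price” key, which is why get_post_custom() can fetch it in the code further down. If you’d rather skip the plugin, a bare-bones sketch of the same idea might look like this (no nonces or validation, so treat it as an illustration only, not the plugin’s actual code):

<?php
// Sketch only: a "Product Options" box with a single "price" field.
add_action( 'add_meta_boxes', 'prod_add_price_box' );
add_action( 'save_post', 'prod_save_price' );

function prod_add_price_box() {
	// Adds the box to the Product editing screen.
	add_meta_box( 'prod_price', 'Product Options', 'prod_price_box', 'product', 'normal' );
}

function prod_price_box( $post ) {
	// Shows the current value of the "price" custom field.
	$price = get_post_meta( $post->ID, 'price', true );
	echo '<label>Price <input type="text" name="price" value="' . esc_attr( $price ) . '" /></label>';
}

function prod_save_price( $post_id ) {
	// Saves the submitted price as the "price" custom field.
	if ( isset( $_POST['price'] ) ) {
		update_post_meta( $post_id, 'price', $_POST['price'] );
	}
}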

I then go to the “More Taxonomies” settings menu to add a new product classification.

I click “Add New Taxonomy,” and fill in taxonomy names, allow permalinks, add slug, and show tag cloud.

I also specify that this taxonomy is only used for the “Product” type.

(Save!)
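Again, for those who prefer code: “More Taxonomies” boils down to a register_taxonomy() call. A sketch of the “catalog” taxonomy (the slug the template code below relies on), with options I’m assuming rather than copying from the plugin:

<?php
// Sketch only: a hierarchical "catalog" taxonomy attached to products,
// so terms can have parents (as used later when classifying products).
add_action( 'init', 'prod_register_catalog_taxonomy' );

function prod_register_catalog_taxonomy() {
	register_taxonomy( 'catalog', 'product', array(
		'labels' => array(
			'name'          => 'Catalogs',
			'singular_name' => 'Catalog',
		),
		'hierarchical' => true,
		'rewrite'      => array( 'slug' => 'catalog' ),
	) );
}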

Now, the rest is more directly taken from the Blancer tutorial. But instead of copy-paste, I added the files directly to a Twenty Ten child theme. The files are available in this archive.

Here’s the style.css code:

/*
Theme Name: Product Directory
Theme URI: http://enkerli.com/
Description: A product directory child theme based on Kyle Jones, Blancer, and Twenty Ten
Author: Alexandre Enkerli
Version: 0.1
Template: twentyten
*/

@import url("../twentyten/style.css");

The code for functions.php:

<?php
/**
 * ProductDir functions and definitions
 * @package WordPress
 * @subpackage Product_Directory
 * @since Product Directory 0.1
 */

/* Custom Columns */
add_filter("manage_edit-product_columns", "prod_edit_columns");
add_action("manage_posts_custom_column", "prod_custom_columns");

function prod_edit_columns($columns){
		$columns = array(
			"cb" => "<input type=\"checkbox\" />",
			"title" => "Product Title",
			"description" => "Description",
			"price" => "Price",
			"catalog" => "Catalog",
		);

		return $columns;
}

function prod_custom_columns($column){
		global $post;
		switch ($column)
		{
			case "description":
				the_excerpt();
				break;
			case "price":
				$custom = get_post_custom();
				echo $custom["price"][0];
				break;
			case "catalog":
				echo get_the_term_list($post->ID, 'catalog', '', ', ','');
				break;
		}
}
?>

And the code in single-product.php:

<?php
/**
 * Template Name: Product - Single
 * The Template for displaying all single products.
 *
 * @package WordPress
 * @subpackage Product_Dir
 * @since Product Directory 1.0
 */

get_header(); ?>
<div id="container">
<div id="content">
<?php the_post(); ?>

<?php
	$custom = get_post_custom($post->ID);
	$price = "$". $custom["price"][0];
?>
<div id="post-<?php the_ID(); ?>">
<h1 class="entry-title"><?php the_title(); ?> - <?php echo $price; ?></h1>
<div class="entry-meta">
<div class="entry-content">
<div style="width: 30%; float: left;">
			<?php the_post_thumbnail( array(100,100) ); ?>
			<?php the_content(); ?></div>
<div style="width: 10%; float: right;">
			Price
			<?php echo $price; ?></div>
</div><!-- .entry-content -->
</div><!-- .entry-meta -->
</div><!-- #post -->
</div><!-- #content -->
</div><!-- #container -->

<?php get_footer(); ?>

That’s it!

Well, almost..

One thing is that I have to activate my new child theme.

So, I go to the “Themes” Super Admin menu and enable the Product Directory theme (this step isn’t needed with single-site WordPress).

I then activate the theme in Appearance->Themes (in my case, on the second page).

One thing I’ve learnt the hard way is that the permalink structure may not work if I don’t go and “nudge it.” So I go to the “Permalinks” Settings menu:

And I click on “Save Changes” without changing anything. (I know, it’s counterintuitive. And it’s even possible that it could work without this step. But I spent enough time scratching my head about this one that I find it important.)
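As far as I can tell, saving the Permalinks settings simply forces WordPress to rebuild its rewrite rules so the new “product” URLs are recognized. If you ever register the post type and taxonomy in code instead of through the plugins, the equivalent “nudge” is a one-time call to flush_rewrite_rules(), something like this sketch:

<?php
// Run once after the post type and taxonomy are registered, then remove it:
// flushing rewrite rules on every page load is wasteful.
add_action( 'init', 'prod_flush_rewrites', 20 );

function prod_flush_rewrites() {
	flush_rewrite_rules();
}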

Now, I’m done. I can create new product posts by clicking on the “Add New” Products menu.

I can then fill in the product details, using the main WYSIWYG box as a description, the “price” field as a price, the “featured image” as the product image, and a taxonomy as a classification (by clicking “Add new” for any tag I want to add, and choosing a parent for some of them).

Now, in the product management interface (available in Products->Products), I can see the proper columns.

Here’s what the product page looks like:

And I’ve accomplished my mission.

The whole process can be achieved rather quickly, once you know what you’re doing. As I’ve been told (by the ever-so-helpful Justin Tadlock of Theme Hybrid fame, among other things), it’s important to get the data down first. While I agree with the statement and its implications, I needed to understand how to build these things from start to finish.

In fact, getting the data right is made relatively easy by my background as an ethnographer with a strong interest in cognitive anthropology, ethnosemantics, folk taxonomies (aka “folksonomies”), ethnography of communication, and ethnoscience. In other words, “getting the data” is part of my expertise.

The more technical aspects, however, were a bit difficult. I understood most of the principles and I could trace several puzzle pieces, but there’s a fair deal I didn’t know or hadn’t done myself. Putting together bits and pieces from diverse tutorials and posts didn’t work so well because it wasn’t always clear what went where or what had to remain unchanged in the code. I struggled with many details such as the fact that Kyle Jones’s code for custom columns wasn’t working first because it was incorrectly copied, then because I was using it on a post type which was “officially” based on pages (instead of posts). Having forgotten the part about “touching” the Permalinks settings, I was unable to get a satisfying output using Jones’s explanations (the fact that he doesn’t use titles didn’t really help me, in this specific case). So it was much harder for me to figure out how to do this than it now is for me to build content directories.

I still have some technical issues to face. Some which are near essential, such as a way to create archive templates for custom post types. Other issues have to do with features I’d like my content directories to have, such as clearly defined roles (the “More Plugins” support roles, but I still need to find out how to define them in WordPress). Yet other issues are likely to come up as I start building content directories, install them in specific contexts, teach people how to use them, observe how they’re being used and, most importantly, get feedback about their use.
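On the archive-template front, the stopgap I have in mind is a page template that queries the custom post type directly. A minimal sketch, reusing the “product” type and “price” field from the tutorial above (the file name, markup, and query options are placeholders, not a finished solution):

<?php
/**
 * Template Name: Product Archive
 * Minimal sketch of a page template listing products.
 */
get_header(); ?>
<div id="container">
<div id="content">
<?php
// Query the "product" post type directly, ten items at a time.
$products = new WP_Query( array( 'post_type' => 'product', 'posts_per_page' => 10 ) );
while ( $products->have_posts() ) : $products->the_post();
	$custom = get_post_custom( get_the_ID() );
	$price  = isset( $custom['price'][0] ) ? '$' . $custom['price'][0] : '';
?>
	<h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a> <?php echo $price; ?></h2>
	<?php the_excerpt(); ?>
<?php endwhile; wp_reset_postdata(); ?>
</div><!-- #content -->
</div><!-- #container -->
<?php get_footer(); ?>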

But I’m past a certain point in my self-learning journey. I’ve built my confidence (an important but often dismissed component of gaining expertise and experience). I found proper resources. I understood what components were minimally necessary or required. I succeeded in implementing the system and testing it. And I’ve written enough about the whole process that things are even clearer for me.

And, who knows, I may get feedback, questions, or advice..

Actively Reading: Organic Ideas for Startups

Annotations on Paul Graham’s Organic Startup Ideas.

Been using Diigo as a way to annotate online texts. In this case, I was as interested in the tone as in the text itself. At the same time, I kept thinking about things which seem to be missing from Diigo.

One thing I like about this text is its tone. There’s an honesty, an ingenuity that I find rare in this type of writing.

  • startup ideas
    • The background is important, in terms of the type of ideas about which we’re constructing something.
  • what do you wish someone would make for you?
    • My own itch has to do with Diigo, actually. There’s a lot I wish Diigo would make for me. I may be perceived as an annoyance, but I think my wishlist may lead to something bigger and possibly quite successful.
    • The difference between this question and the “scratch your own itch” principle seems significant, and this distinction may have some implications in terms of success: we’re already talking about others, not just running ideas in our own head.
  • grow organically
    • The core topic of the piece, put in a comparative context. The comparison isn’t the one people tend to make and one may argue about the examples used. But the concept of organic ideas is fascinating and inspiring.
  • you decide, from afar,
    • What we call, in anthropology, the “armchair” approach. Also known as “backbenching.” For this to work, you need to have a deep knowledge of the situation, which is part of the point in this piece. Nice that it’s not demonizing this position but putting it in context.
  • Apple
    was the first type
    • One might argue that it was a hybrid case. Although, it does sound like the very beginnings of Apple weren’t about “thinking from afar.”
  • class of users other than you
    • Since developers are part of a very specific “class” of people, this isn’t an insignificant way to phrase it.
  • They still rely on this principle today, incidentally.
    The iPhone is the phone Steve Jobs wants.
    • Apple tends to be perceived in a different light. According to many people, it’s the “textbook example” of a company where decisions are made without concerns for what people need. “Steve Jobs uses a top-down approach,” “They don’t even use focus groups,” “They don’t let me use their tools the way I want to use them.” But we’re not talking about the same distinction between top-down and bottom-up. Though “organic ideas” seem to imply that it’s a grassroots/bottom-up phenomenon, the core distinction isn’t about the origin of the ideas (from the “top,” in both cases) but on the reasoning behind these ideas.
  • We didn’t need this software ourselves.
    • Sounds partly like a disclaimer but this approach is quite common and “there’s nothing wrong with it.”
  • comparatively old
    • Age and life experience make for an interesting angle. It’s not that this strategy needs people of a specific age to work. It’s that there’s a connection between one’s experience and the way things may pan out.
  • There is no sharp line between the two types of ideas,
    • Those in the “engineering worldview” might go nuts, at this point. I can hear the claims of “hand waving.” But we’re talking about something complex, here, not a merely complicated problem.
  • Apple type
    • One thing to note in the three examples here: they’re all made by pairs of guys. Jobs and Woz, Gates and Allen, Page and Brin. In many cases, the formula might be that one guy (or gal, one wishes) comes up with ideas knowing that the other can implement them. Again, it’s about getting somebody else to build it for you, not about scratching your own itch.
  • Bill Gates was writing something he would use
    • Again, Gates may not be the most obvious example, since he’s mostly known for another approach. It’s not inaccurate to say he was solving his own problem, at the time, but it may not be that convincing as an example.
  • Larry and Sergey when they wrote the first versions of Google.
    • Although, the inception of the original ideas was academic in context. They weren’t solving a search problem or thinking about monetization. They were discovering the power of CitationRank.
  • generally preferable
    • Nicely relativistic.
  • It takes experience
    to predict what other people will want.
    • And possibly a lot more. Interesting that he doesn’t mention empirical data.
  • young founders
    • They sound like a fascinating group to observe. They do wonders when they open up to others, but they seem to have a tendency to impose their worldviews.
  • I’d encourage you to focus initially on organic ideas
    • Now, this sounds more like the usual “scratch your own itch” advice. But there’s a key difference in that it’s stated as part of a broader process. It’s more of a “walk before you run” or “do your homework” piece of advice, not a “you can’t come up with good ideas if you just think about how people will use your tool.”
  • missing or broken
    • It can cover a lot, but it’s couched in terms of the typical “problem-solving” approach at the centre of the engineering worldview. Since we’re talking about developing tools, it makes sense. But there could be a broader version, allowing for dreams, inspiration, aspiration. Not necessarily of the “what would make you happy?” kind, although there’s a lot to be said about happiness and imagination. You’re brainstorming, here.
  • immediate answers
    • Which might imply that there’s a second step. If you keep asking yourself the same question, you may be able to get a very large number of ideas. The second step could be to prioritize them but I prefer “outlining” as a process: you shuffle things together and you group some ideas to get one which covers several. What’s common between your need for a simpler way to code on the Altair and your values? Why do you care so much about algorithms instead of human encoding?
  • You may need to stand outside yourself a bit to see brokenness
    • Ah, yes! “Taking a step back,” “distancing yourself,” “seeing the forest for the trees”… A core dimension of the ethnographic approach and the need for a back-and-forth between “inside” and “outside.” There’s a reflexive component in this “being an outsider to yourself.” It’s not only psychological, it’s a way to get into the social, which can lead to broader success if it’s indeed not just about scratching your own itch.
  • get used to it and take it for granted
    • That’s enculturation, to you. When you do things a certain way simply because “we’ve always done them that way,” you may not create these organic ideas. But it’s a fine way to do your work. Asking yourself important questions about what’s wrong with your situation works well in terms of getting new ideas. But, sometimes, you need to get some work done.
  • a Facebook
    • Yet another recontextualized example. Zuckerberg wasn’t trying to solve that specific brokenness, as far as we know. But Facebook became part of what it is when Zuck began scratching that itch.
  • organic startup ideas usually don’t
    seem like startup ideas at first
    • Which gets us to the pivotal importance of working with others. Per this article, VCs and “angel investors,” probably. But also, in some of the cases cited, the people we tend to forget, like Paul Allen, Narendra, and the Winklevosses.
  • end up making
    something of value to a lot of people
    • Trial and error, it’s an iterative process. So you must recognize errors quickly and not invest too much effort in a specific brokenness. Part of this requires maturity.
  • something
    other people dismiss as a toy
    • The passage on which Gruber focused and an interesting tidbit. Not that central, come to think of it. But it’s important to note that people’s dismissive attitude may be misled, that “toys” may hide tools, that it’s probably a good idea not to take all feedback to heart…
  • At this point, when someone comes to us with
    something that users like but that we could envision forum trolls
    dismissing as a toy, it makes us especially likely to invest.
  • the best source of organic ones
    • Especially to investors. Potentially self-serving… in a useful way.
  • they’re at the forefront of technology
    • That part I would dispute, actually. Unless we talk about a specific subgroup of young founders and a specific set of tools. Young founders tend to be oblivious to a large field in technology, including social tools.
  • they’re in a position to discover
    valuable types of fixable brokenness first
    • The focus on fixable brokenness makes sense if we’re thinking exclusively through the engineering worldview, but it’s at the centre of some failures like the Google Buzz launch.
  • you still have to work hard
    • Of the “inspiration shouldn’t make us forget perspiration” kind. Makes for a more thoughtful approach than the frequent “all you need to do…” claims.
  • I’d encourage anyone
    starting a startup to become one of its users, however unnatural it
    seems.
    • Not merely an argument for dogfooding. It’s deeper than that. Googloids probably use Google tools but they didn’t actually become users. They’re beta testers with a strong background in troubleshooting. Not the best way to figure out what users really want or how the tool will ultimately fail.
  • It’s hard to compete directly with open source software
    • Open Source as competition isn’t new as a concept, but it takes time to seep in.
  • there has to be some part
    you can charge for
    • The breach through which old-school “business models” enter with little attention paid to everything else. To the extent that much of the whole piece might crumble from pressure built up by the “beancounter” worldview. Good thing he acknowledges it.

Why I Need an iPad

I’m one of those who feel the iPad is the right tool for the job.

This is mostly meant as a reply to this blogthread. But it’s also more generally about my personal reaction to Apple’s iPad announcement.

Some background.

I’m an ethnographer and a teacher. I read a fair deal, write a lot of notes, and work in a variety of contexts. These days, I tend to spend a good amount of time in cafés and other public places where I like to work without being too isolated. I also commute using public transit, listen to lots of podcasts, and create my own. I’m also very aural.

I’ve used a number of PDAs, over the years, from a Newton MessagePad 130 (1997) to a variety of PalmOS devices (until 2008). In fact, some people readily associated me with PDA use.

As soon as I learnt about the iPod touch, I needed one. As soon as I heard about the SafariPad, I wanted one. I’ve been an intense ‘touch user since the iPhone OS 2.0 release and I’m a happy camper.

(A major reason I never bought an iPhone, apart from price, is that it requires a contract.)

In my experience, the ‘touch is the most appropriate device for all sorts of activities which are either part of another activity (reading during a commute) or are simply too short in duration to constitute an actual “computer session.” You don’t “sit down to work at your ‘touch” the way you might sit in front of a laptop or desktop screen. This works great for “looking up stuff” or “checking email.” It also makes a lot of sense during commutes in crowded buses or metros.

In those cases, the iPod touch is almost ideal. Ubiquitous access to Internet would be nice, but that’s not a deal-breaker. Alternative text-input methods would help in some cases, but I do end up being about as fast on my ‘touch as I was with Graffiti on PalmOS.

For other tasks, I have a Mac mini. Sure, it’s limited. But it does the job. In fact, I have no intention of switching for another desktop and I even have an eMachines collecting dust (it’s too noisy to make a good server).

What I miss, though, is a laptop. I used an iBook G3 for several years and loved it. For a little while after that, I was able to share a MacBook with somebody else and it was a wonderful experience. I even got to play with the OLPC XO for a few weeks. That one was not so pleasant an experience but it did give me a taste for netbooks. And it made me think about other types of iPhone-like devices. Especially in educational contexts. (As I mentioned, I’m a teacher.)

I’ve been laptop-less for a while, now. And though my ‘touch replaces it in many contexts, there are still times when I’d really need a laptop. And these have to do with what I might call “mobile sessions.”

For instance: liveblogging a conference or meeting. I’ve used my ‘touch for this very purpose on a good number of occasions. But it gets rather uncomfortable, after a while, and it’s not very fast. A laptop is better for this, with a keyboard and a larger form factor. But the iPad will be even better because of lower risks of RSI. A related example: just imagine TweetDeck on iPad.

Possibly my favourite example of a context in which the iPad will be ideal: presentations. Even before learning about the prospect of getting iWork on a tablet, presentations were a context in which I really missed a laptop.

Sure, in most cases, these days, there’s a computer (usually a desktop running XP) hooked to a projector. You just need to download your presentation file from Slideshare, show it from Prezi, or transfer it through USB. No biggie.

But it’s not the extra steps which change everything. It’s the uncertainty. Even if it’s often unfounded, I usually get worried that something might just not work, along the way. The slides might not show up the same way as you saw them because something is missing on that computer, or that computer is simply using a different version of the presentation software. In fact, that software is typically Microsoft PowerPoint which, while convenient, fits much less in my workflow than does Apple Keynote.

The other big thing about presentations is the “presenter mode,” allowing you to get more content than (or different content from) what the audience sees. In most contexts where I’ve used someone else’s computer to do a presentation, the projector was mirroring the computer’s screen, not using it as a different space. PowerPoint has this convenient “presenter view” but very rarely did I see it as an available option on “the computer in the room.” I wish I could use my ‘touch to drive presentations, which I could do if I installed software on that “computer in the room.” But it’s not something that is likely to happen, in most cases.

A MacBook solves all of these problems, and it’s an obvious use for laptops. But how, then, is the iPad better? Basically because of the interface. Switching slides on a laptop isn’t hard, but it’s more awkward than we realize. Even before watching the demo of Keynote on the iPad, I could simply imagine the actual pleasure of flipping through slides using a touch interface. The fit is “natural.”

I sincerely think that Keynote on the iPad will change a number of things, for me. Including the way I teach.

Then, there’s reading.

Now, I’m not one of those people who just can’t read on a computer screen. In fact, I even grade assignments directly from the screen. But I must admit that online reading hasn’t been ideal, for me. I’ve read full books as PDF files or dedicated formats on PalmOS, but it wasn’t so much fun, in terms of the reading process. And I’ve used my ‘touch to read things through Stanza or ReadItLater. But it doesn’t work so well for longer reading sessions. Even in terms of holding the ‘touch, it’s not so obvious. And, what’s funny, even a laptop isn’t that ideal, for me, as a reading device. In a sense, this is when the keyboard “gets in the way.”

Sure, I could get a Kindle. I’m not a big fan of dedicated devices and, at least on paper, I find the Kindle a bit limited for my needs. Especially in terms of sources. I’d like to be able to use documents in a variety of formats and put them in a reading list, for extended reading sessions. No, not “curled up in bed.” But maybe lying down in a sofa without external lighting. Given my experience with the ‘touch, the iPad is very likely the ideal device for this.

Then, there’s the overall “multi-touch device” thing. People have already been quite creative with the small touchscreen on iPhones and ‘touches, so I can just imagine what may be done with a larger screen. Lots has been said about differences in “screen real estate” on laptop or desktop screens. We all know it can make a big difference in terms of what you can display at the same time. In some cases, two screens aren’t even a luxury, for instance when you code and display a page at the same time (LaTeX, CSS…). Certainly, the same qualitative difference applies to multitouch devices. Probably even more so, since the display is also used for input. What Han found missing in the iPhone’s multitouch was the ability to use both hands. With the iPad, Han’s vision is finding its space.

Oh, sure, the iPad is very restricted. For instance, it’s easy to imagine how much more useful it’d be if it did support multitasking with third-party apps. And a front-facing camera is something I was expecting in the first iPhone. It would just make so much sense that a friend seems very disappointed by this lack of videoconferencing potential. But we’re probably talking about predetermined expectations, here. We’re comparing the iPad with something we had in mind.

Then, there’s the issue of the competition. Tablets have been released and some multitouch tablets have recently been announced. What makes the iPad better than these? Well, we could all get into the same OS wars as have been happening with laptops and desktops. In my case, the investment in applications, files, and expertise that I have made in a Mac ecosystem made my XP years relatively uncomfortable and made me appreciate returning to the Mac. My iPod touch fits right in that context. Oh, sure, I could use it with a Windows machine, which is in fact what I did for the first several months. But the relationship between the iPhone OS and Mac OS X is such that using devices in those two systems is much more efficient, in terms of my own workflow, than I could get while using XP and iPhone OS. There are some technical dimensions to this, such as the integration between iCal and the iPhone OS Calendar, or even the filesystem. But I’m actually thinking more about the cognitive dimensions of recognizing some of the same interface elements. “Look and feel” isn’t just about shiny and “purty.” It’s about interactions between a human brain, a complex sensorimotor apparatus, and a machine. Things go more quickly when you don’t have to think too much about where some tools are, as you’re working.

So my reasons for wanting an iPad aren’t about being dazzled by a revolutionary device. They are about the right tool for the job.

Judging Coffee and Beer: Answer to DoubleShot Coffee Company

DoubleShot Coffee Company: More Espresso Arguments.

I’m not in the coffee biz but I do involve myself in some coffee-related things, including barista championships (sensory judge at regional and national) and numerous discussions with coffee artisans. In other words, I’m nobody important.

In a way, I “come from” the worlds of beer and coffee homebrewing. In coffee circles, I like to introduce myself as a homeroaster and blogger.

(I’m mostly an ethnographer, meaning that I do what we call “participant-observation” as both an insider and an outsider.)

There seem to be several disconnects in today’s coffee world, despite a lot of communication across the Globe. Between the huge coffee corporations and the “specialty coffee” crowd. Between coffee growers and coffee lovers. Between professional and home baristas. Even, sometimes, between baristas from different parts of the world.
None of it is very surprising. But it’s sometimes a bit sad to hear people talk past one another.

I realize nothing I say may really help. And it may all be misinterpreted. That’s all part of the way things go and I accept that.

In the world of barista champions and the so-called “Third Wave,” emotions seem particularly high. Part of it might have to do with the fact that so many people interact on a rather regular basis. Makes for a very interesting craft, in some ways. But also for rather tense moments.

About judging…
My experience isn’t that extensive. I’ve judged at the Canadian Eastern Regional BC twice and at the Canadian BC once.
Still, I did notice a few things.

One is that there can be a lot of camaraderie/collegiality among BC participants. This can have a lot of beneficial effects on the quality of coffee served in different places as well as on the quality of the café experience itself, long after the championships. A certain cohesiveness which may come from friendly competition can do a lot for the diversity of coffee scenes.

Another thing I’ve noticed is that it’s really easy to be fair, in judging using WBC regulations. It’s subjective in a very literal way since there’s tasting involved (tastebuds belong to the “subjects” of the sensory and head judges). But it simply has very little if anything to do with personal opinions, relationships, or “liking the person.” It’s remarkably easy to judge the performance, with a focus on what’s in the cup, as opposed to the person her-/himself or her/his values.

Sure, the championship setting is in many ways artificial and arbitrary. A little bit like rules for an organized sport. Or so many other contexts.

A competition like this has fairly little to do with what is likely to happen in “The Real World” (i.e., in a café). I might even say that applying a WBC-compatible approach in a café is likely to become a problem in many cases. A bit like working the lunch shift at a busy diner using ideas from the Iron Chef, or getting into a street fight and using strict judo rules.

A while ago, I was working in French restaurants, as a «garde-manger» (assistant-chef). We often talked about (and I did meet a few) people who were just coming out of culinary institutes. In most cases, they were quite good at producing a good dish in true French cuisine style. But the consensus was that “they didn’t know how to work.”
People fresh out of culinary school didn’t really know how to handle a chaotic kitchen, order only the supplies required, pay attention to people’s tastes, adapt to differences in prices, etc. They could put up a good show and their dishes might have been exquisite. But they could also be overwhelmed with having to serve 60 customers in a regular shift or, indeed, not know what to do during a slow night. Restaurant owners weren’t that fond of hiring them right away. They had to be “broken in” («rodés»).

Barista championships remind me of culinary institutes, in this way. Both can be useful in terms of skills, but experience is more diverse than that.

So, yes, WBC rules are probably artificial and arbitrary. But it’s easy to be remarkably consistent in applying these rules. And that should count for something. Just not for everything.

Sure, you may get some differences between one judge and the other. But those differences aren’t that difficult to understand and I didn’t see that they tended to have to do with “preferences,” personal issues, or anything of the sort. From what I noticed while judging, you simply don’t pay attention to the same things as when you savour coffee. And that’s fine. Cupping coffee isn’t the same thing as drinking it, either.

In my (admittedly very limited) judging experience, emphasis was put on providing useful feedback. The points matter a lot, of course, but the main thing is that the points make sense in view of the comments. In a way, it’s to ensure calibration (“you say ‘excellent’ but put a ‘3,’ which one is more accurate?”) but it’s also about the goals of the judging process. The textual comments are a way to help the barista pay attention to certain things. “Constructive criticism” is one way to put it. But it’s more than that. It’s a way to get something started.

Several of the competitors I’ve seen do come to ask judges for clarifications and many of them seemed open to discussion. A few mostly wanted justification and may have felt slighted. But I mostly noticed a rather thoughtful process of debriefing.

Having said that, there are competitors who are surprised by differences between two judges’ scores. “But both shots came from the same portafilter!” “Well, yes, but if you look at the video, you’ll notice that coffee didn’t flow the same way in both cups.” There are also those who simply doubt judges, no matter what. Wonder if they respect people who drink their espresso…

Coming from the beer world, I also notice differences with beer. In the beer world, there isn’t really an equivalent to the WBC in the sense that professional beer brewers don’t typically have competitions. But amateur homebrewers do. And it’s much stricter than the WBC in terms of certification. It requires a lot of rote memorization, difficult exams (I helped proctor two), judging points, etc.

I’ve been a vocal critic of the Beer Judge Certification Program. There seems to be an idea, there, that you can make the process completely neutral and that the knowledge necessary to judge beers is solid and well-established. One problem is that this certification program focuses too much on a series of (over a hundred) “styles” which are more of a context-specific interpretation of beer diversity than a straightforward classification of possible beers.
Also, the one thing they want to avoid the most (basing their evaluation on taste preferences) still creeps in. It’s probably no coincidence that, at certain events, beers which were winning “Best of Show” tended to be big, assertive beers instead of very subtle ones. Beer judges don’t want to be human, but they may still end up acting like humans.

At the same time, while there’s a good deal of debate over beer competition results and such, there doesn’t seem to be exactly the same kind of tension as in barista championships. Homebrewers take their results to heart and they may yell at each other over their scores. But, somehow, I see much less of a fracture “there” than “here.” Perhaps because the stakes are very low (it’s a hobby, not a livelihood). Perhaps because beer is so different from coffee. Or maybe because there isn’t a sense of “Us vs. Them”: brewers judging a competition often enter beer in that same competition (but in a separate category from the ones they judge).
Actually, the main difference may be that beer judges can literally only judge what’s in the bottle. They don’t observe the brewers practicing their craft (this happens weeks prior), they simply judge the product. In a specific condition. In many ways, it’s very unfair. But it can help brewers understand where something went wrong.

Now, I’m not saying the WBC should become like the BJCP. For one thing, it just wouldn’t work. And there’s already a lot of investment in the current WBC format. And I’m really not saying the BJCP is better than the WBC as an inspiration, since I actually prefer the WBC-style championships. But I sense that there’s something going on in the coffee world which has more to do with interpersonal relationships and “attitudes” than with what’s in the cup.

All this time, those of us who don’t make a living through coffee but still live it with passion may be left out. And we do our own things. We may listen to coffee podcasts, witness personal conflicts between café owners, hear rants about the state of the “industry,” and visit a variety of cafés.
Yet, slowly but surely, we’re making our own way through coffee. Exploring its diversity, experimenting with different brewing methods, interacting with diverse people involved, even taking trips “to origin”…

Coffee is what unites us.

Installing BuddyPress on a Webhost

Installing BuddyPress on a FatCow-hosted site. With ramblings.

[Jump here for more technical details.]

A few months ago, I installed BuddyPress on my Mac to try it out. It was a bit of an involved process, so I documented it:

WordPress MU, BuddyPress, and bbPress on Local Machine « Disparate.

More recently, I decided to get a webhost. Both to run some tests and, eventually, to build something useful. BuddyPress seems like a good way to go at it, especially since it’s improved a lot, in the past several months.

In fact, the installation process is much simpler, now, and I ran into some difficulties because I was following my own instructions (though adapting the process to my webhost). So a new blogpost may be in order. My previous one was very (possibly too) detailed. This one is much simpler, technically.

One thing to make clear is that BuddyPress is a set of plugins meant for WordPress µ (“WordPress MU,” “WPMU,” “WPµ”), the multi-user version of the WordPress blogging platform. BP is meant as a way to make WPµ more “social,” with such useful features as flexible profiles, user-to-user relationships, and forums (through bbPress, yet another one of those independent projects based on WordPress).

While BuddyPress depends on WPµ and does follow a blogging logic, I’m thinking about it as a social platform. Once I build it into something practical, I’ll probably use the blogging features but, in a way, it’s more of a tool to engage people in online social activities. BuddyPress probably doesn’t work as a way to “build a community” from scratch. But I think it can be quite useful as a way to engage members of an existing community, even if this engagement follows a blogger’s version of a Pareto distribution (which, hopefully, is dissociated from elitist principles).

But I digress, of course. This blogpost is more about the practical issue of adding a BuddyPress installation to a webhost.

Webhosts have come a long way, recently. Especially in terms of shared webhosting focused on LAMP (or PHP/MySQL, more specifically) for blogs and content-management. I don’t have any data on this, but it seems to me that a lot of people these days are relying on third-party webhosts instead of their own servers when they want to build on their own blogging and content-management platforms. Of course, there are a lot more people who prefer to use preexisting blog and content-management systems. For instance, it seems that there are more bloggers on WordPress.com than on other WordPress installations. And WP.com blogs probably represent a small number of people in comparison to the number of people who visit these blogs. So, in a way, those who run their own WordPress installations are a minority in the group of active WordPress bloggers which, itself, is a minority of blog visitors. Again, let’s hope this “power distribution” isn’t a basis for elite theory!

Yes, another digression. I did tell you to skip, if you wanted the technical details!

I became part of the “self-hosted WordPress” community through a project on which I started work during the summer. It’s a website for an academic organization and I’m acting as the organization’s “Web Guru” (no, I didn’t choose the title). The site was already based on WordPress but I was rebuilding much of it in collaboration with the then-current “Digital Content Editor.” Through this project, I got to learn a lot about WordPress, themes, PHP, CSS, etc. And it was my first experience using a cPanel- (and Fantastico-)enabled webhost (BlueHost, at the time). It’s also how I decided to install WordPress on my local machine and did some amount of work from that machine.

But the local installation wasn’t an ideal solution for two reasons: a) I had to be in front of that local machine to work on this project; and b) it was much harder to show the results to the person with whom I was collaborating.

So, in the Fall, I decided to get my own staging server. After a few quick searches, I decided on HostGator, partly because it was available on a monthly basis. Since this staging server was meant as a temporary solution, HG was close to ideal. It was easy to set up as a PayPal “subscription,” wasn’t that expensive (9$/month), had adequate support, and included everything that I needed at that point to install a current version of WordPress and play with theme files (after importing content from the original site). I’m really glad I made that decision because it made a number of things easier, including working from different computers and sending links to get feedback.

While monthly HostGator fees were reasonable, it was still a more expensive proposition than what I had in mind for a longer-term solution. So, recently, a few weeks after releasing the new version of the organization’s website, I decided to cancel my HostGator subscription. A decision I made without any regret or bad feeling. HostGator was good to me. It’s just that I didn’t have any reason to keep that account or to do anything major with the domain name I was using on HG.

Though only a few weeks elapsed since I canceled that account, I didn’t immediately set out to transition to a new webhost. I didn’t go from HostGator to another webhost.

But having my own webhost still remained at the back of my mind as something which might be useful. For instance, while not really making a staging server necessary, a new phase in the academic website project brought up a sandboxing idea. Also, I went to a “WordPress Montreal” meeting and got to think about further WordPress development/deployment, including using BuddyPress for my own needs (both as my own project and as a way to build my own knowledge of the platform) instead of it being part of an organization’s project. I was also thinking about other interesting platforms which necessitate a webhost.

(More on these other platforms at a later point in time. Bottom line is, I’m happy with the prospects.)

So I wanted a new webhost. I set out to do some comparison shopping, as I’m wont to do. In my (allegedly limited) experience, finding the ideal webhost is particularly difficult. For one thing, search results are cluttered with a variety of “unuseful” things such as rants, advertising, and limited comparisons. And it’s actually not that easy to give a new webhost a try. These hosting companies don’t necessarily have the most liberal refund policies you could imagine. And switching a domain name between different hosts and registrars is a complicated process through which a name may remain “hostage.” Had I realized what was involved, I might have used a domain name to which I have no attachment, or actually eschewed the whole domain transition and just tried the webhost without a dedicated domain name.

Doh!
Live and learn. I sure do. Loving almost every minute of it.

At any rate, I had a relatively hard time finding my webhost.

I really didn’t need “bells and whistles.” For instance, all the AdSense, shopping cart, and other business-oriented features which seem to be publicized by most webhosting companies hold no interest for me.

I didn’t even care so much about absolute reliability or speed. What I plan to do with this host is fairly basic stuff. The core idea is to use my own host to bypass some limitations. For instance, WordPress.com doesn’t allow for plugins, yet most of the WordPress fun has to do with plugins.

I did want an “unlimited” host, as much as possible. Not because I expect to have huge resource needs; I just didn’t want to have to monitor bandwidth.

I thought that my needs would be basic enough that any cPanel-enabled webhost would fit. As much as I could see, I needed FTP access to something which had PHP 5 and MySQL 5. I expected to install things myself, without use of the webhost’s scripts but I also thought the host would have some useful scripts. Although I had already registered the domain I wanted to use (through Name.com), I thought it might be useful to have a free domain in the webhosting package. Not that domain names are expensive, it’s more of a matter of convenience in terms of payment or setup.

I ended up with FatCow. But, honestly, I’d probably go with a different host if I were to start over (which I may do with another project).

I paid 88$ for two years of “unlimited” hosting, which is quite reasonable. And, on paper, FatCow has everything I need (and a bunch of things I don’t need). The missing parts aren’t anything major but have to do with minor annoyances. In other words, no real deal-breaker, here. But there are a few things I wish I had realized before I committed to FatCow with a domain name I actually want to use.

Something which was almost a deal-breaker for me is the fact that FatCow requires payment for any additional subdomain. And these aren’t cheap: the minimum is 5$/month for five subdomains, up to 25$/month for unlimited subdomains! Even at a “regular” price of 88$/year for the basic webhosting plan, the “unlimited subdomains” feature (included in some webhosting plans elsewhere) is more than three times more expensive than the core plan.

As I don’t absolutely need extra subdomains, this is mostly a minor irritant. But it’s one reason I’ll probably be using another webhost for other projects.

Other issues with FatCow are probably not enough to motivate a switch.

For instance, the PHP version installed on FatCow (5.2.1) is a few minor releases behind the one needed by some interesting web applications. No biggie, especially if PHP is updated in a relatively reasonable timeframe. But still makes for a slight frustration.

The MySQL version seems recent enough, but FatCow uses non-standard tools to manage it, which makes for some confusion. Attempting to create some MySQL databases with obvious names (say “wordpress”) fails because the database allegedly exists (even though it doesn’t show up in the MySQL administration). In the same vein, the URL of the MySQL server is <username>.fatcowmysql.com instead of localhost, which most installers seem to expect. Easy to handle once you realize it, but it makes for some confusion.
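Concretely, for anything installed by hand rather than through InstallCentral, this means the database host in wp-config.php can’t stay at the usual localhost value. A hedged excerpt with placeholder values (the actual names come from whatever was set up in “Manage MySQL”):

<?php
// wp-config.php (excerpt) -- placeholder values, FatCow-style DB_HOST.
define( 'DB_NAME',     'wordpressmu' );              // database created in "Manage MySQL"
define( 'DB_USER',     'example_user' );             // hypothetical user
define( 'DB_PASSWORD', 'example_password' );         // hypothetical password
define( 'DB_HOST',     'username.fatcowmysql.com' ); // account-specific host, not "localhost"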

In terms of Fantastico-like simplified installation of webapps, FatCow uses InstallCentral, which looks like it might be its own Fantastico replacement. InstallCentral is decent enough as an installation tool and FatCow does provide for some of the most popular blog and CMS platforms. But, in some cases, the application version installed by FatCow is old enough (2005!)  that it requires multiple upgrades to get to a current version. Compared to other installation tools, FatCow’s InstallCentral doesn’t seem really efficient at keeping track of installed and released versions.

Something which is partly a neat feature and partly a potential issue is the way FatCow handles Apache-related security. This isn’t something which is so clear to me, so I might be wrong.

Accounts on both BlueHost and HostGator include a public_html directory where all sorts of things go, especially if they’re related to publicly-accessible content. This directory serves as the website’s root, so one expects content to be available there. The “index.html” or “index.php” file in this directory serves as the website’s frontpage. It’s fairly obvious, but it does require that one would understand a few things about webservers. FatCow doesn’t seem to create a public_html directory in a user’s server space. Or, more accurately, it seems that the root directory (aka ‘/’) is in fact public_html. In this sense, a user doesn’t have to think about which directory to use to share things on the Web. But it also means that some higher-level directories aren’t available. I’ve already run into some issues with this and I’ll probably be looking for a workaround. I’m assuming there’s one. But it’s sometimes easier to use generally-applicable advice than to find a custom solution.

Further, in terms of access control… It seems that webapps typically make use of diverse directories and .htaccess files to manage some forms of access controls. Unix-style file permissions are also involved but the kind of access needed for a web app is somewhat different from the “User/Group/All” of Unix filesystems. AFAICT, FatCow does support those .htaccess files. But it has its own tools for building them. That can be a neat feature, as it makes it easier, for instance, to password-protect some directories. But it could also be the source of some confusion.

There are other issues I have with FatCow, but it’s probably enough for now.

So… On to the installation process… 😉

It only takes a few minutes and is rather straightforward. This is the most verbose version of that process you could imagine…

Surprised? 😎

Disclaimer: I’m mostly documenting how I did it and there are some things about which I’m unclear. So it may not work for you. If it doesn’t, I may be able to help but I provide no guarantee that I will. I’m an anthropologist, not a Web development expert.

As always, YMMV.

A few instructions here are specific to FatCow, but the general process is probably valid on other hosts.

I’m presenting things in a sequence which should make sense. I used a slightly different order myself, but I think this one should still work. (If it doesn’t, drop me a comment!)

In these instructions, straight quotes (“”) are used to isolate elements from the rest of the text. They shouldn’t be typed or pasted.

I use “example.com” to refer to the domain on which the installation is done. In my case, it’s the domain name I transfered to FatCow from another registrar but it could probably be done without a dedicated domain (in which case it would be “<username>.fatcow.com” where “<username>” is your FatCow username).

I started with creating a MySQL database for WordPress MU. FatCow does have phpMyAdmin but the default tool in the cPanel is labeled “Manage MySQL.” It’s slightly easier to use for creating new databases than phpMyAdmin because it creates the database and initial user (with confirmed password) in a single, easy-to-understand dialog box.

So I created that new database, user, and password, noting down this information. Since that password appears in clear text at some point and can easily be changed through the same interface, I used one which was easy to remember but wasn’t one I use elsewhere.
Then, I downloaded the following files to my local machine in order to upload them to my FatCow server space. The upload can be done through either FTP or FatCow’s FileManager. I tend to prefer FTP (via CyberDuck on the Mac or FileZilla on PC). But the FileManager does allow for easy uploads.
(Wish it could be more direct, using the HTTP links directly instead of downloading to upload. But I haven’t found a way to do it through either FTP or the FileManager.)
At any rate, here are the four files I transfered to my FatCow space, using .zip when there’s a choice (the .tar.gz “tarball” versions also work but require a couple of extra steps).
  1. WordPress MU (wordpress-mu-2.9.1.1.zip, in my case)
  2. Buddymatic (buddymatic.0.9.6.3.1.zip, in my case)
  3. EarlyMorning (only one version, it seems)
  4. EarlyMorning-BP (only one version, it seems)

Only the WordPress MU archive is needed to install BuddyPress. The last three files are needed for EarlyMorning, a BuddyPress theme that I found particularly neat. It’s perfectly possible to install BuddyPress without this specific theme. (Although, doing so, you need to install a BuddyPress-compatible theme, if only by moving some folders to make the default theme available, as I explained in point 15 in that previous tutorial.) Buddymatic itself is a theme framework which includes some child themes, so you don’t need to install EarlyMorning. But installing it is easy enough that I’m adding instructions related to that theme.

These files can be uploaded anywhere in my FatCow space. I uploaded them to a kind of test/upload directory, just to keep things clear for myself.

A major FatCow idiosyncrasy is its FileManager (actually called “FileManager Beta” in the documentation but showing up as “FileManager” in the cPanel). From my experience with both BlueHost and HostGator (two well-known webhosting companies), I can say that FC’s FileManager is quite limited. One thing it doesn’t do is uncompress archives. So I have to resort to the “Archive Gateway,” which is surprisingly slow and cumbersome.

At any rate, I used that Archive Gateway to uncompress the four files. WordPress µ first (in the root directory or “/”), then both Buddymatic and EarlyMorning in “/wordpress-mu/wp-content/themes” (you can choose the output directory for zip and tar files), and finally EarlyMorning-BP (anywhere, individual files are moved later). To uncompress each file, select it in the dropdown menu (it can be located in any subdirectory, Archive Gateway looks everywhere), add the output directory in the appropriate field in the case of Buddymatic or EarlyMorning, and press “Extract/Uncompress”. Wait to see a message (in green) at the top of the window saying that the file has been uncompressed successfully.

Then, in the FileManager, the contents of the EarlyMorning-BP directory have to be moved to “/wordpress-mu/wp-content/themes/earlymorning”. (Thought they could be uncompressed there directly, but it created an extra folder.) To move those files in the FileManager, I browse to that earlymorning-bp directory, click on the checkbox to select all, click on the “Move” button (fourth from right, marked with a blue folder), and add the output path: /wordpress-mu/wp-content/themes/earlymorning

These files are tweaks to make the EarlyMorning theme work with BuddyPress.

Then, I had to change two files, through the FileManager (it could also be done with an FTP client).

One change is to EarlyMorning’s style.css:

/wordpress-mu/wp-content/themes/earlymorning/style.css

There, “Template: thematic” has to be changed to “Template: buddymatic” (so, “the” should be changed to “buddy”).

That change is needed because the EarlyMorning theme is a child theme of the “Thematic” WordPress parent theme. Buddymatic is a BuddyPress-savvy version of Thematic and this changes the child-parent relation from Thematic to Buddymatic.
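
In other words, the header block at the top of style.css ends up looking something like this (sketched from memory, so the other fields will differ in the real file; the only line that matters for this step is “Template:”):
  /*
  Theme Name: EarlyMorning
  Description: A BuddyPress-ready child theme
  Template: buddymatic
  */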

The other change is in the Buddymatic “extensions”:

/wordpress-mu/wp-content/themes/buddymatic/library/extensions/buddypress_extensions.php

There, on line 39, “$bp->root_domain” should be changed to “bp_root_domain()”.

This change is needed because of something I’d consider a bug but that a commenter on another blog was kind enough to troubleshoot. Without this modification, the login button in BuddyPress wasn’t working because it was going to the website’s root (example.com/wp-login.php) instead of the WPµ installation (example.com/wordpress-mu/wp-login.php). I was quite happy to find this workaround but I’m not completely clear on the reason it works.
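
In case it helps, the edit itself is a one-token find-and-replace, whatever the surrounding code looks like in your version of Buddymatic:
  // find (on or around line 39 of buddypress_extensions.php):
  $bp->root_domain
  // replace with:
  bp_root_domain()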

Then, something I did which might not be needed is to rename the “wordpress-mu” directory. Without that change, the BuddyPress installation would sit at “example.com/wordpress-mu,” which seems a bit cryptic for users. In my mind, “example.com/<name>,” where “<name>” is something meaningful like “social” or “community” works well enough for my needs. Because FatCow charges for subdomains, the “<name>.example.com” option would be costly.

(Of course, WPµ and BuddyPress could be installed in the site’s root and the frontpage for “example.com” could be the BuddyPress frontpage. But since I think of BuddyPress as an add-on to a more complete site, it seems better to have it as a level lower in the site’s hierarchy.)

With all of this done, the actual WPµ installation process can begin.

The first thing is to browse to that directory in which WPµ resides, either “example.com/wordpress-mu” or “example.com/<name>” with the “<name>” you chose. You’re then presented with the WordPress µ Installation screen.

Since FatCow charges for subdomains, it’s important to choose the following option: “Sub-directories (like example.com/blog1).” It’s actually by selecting the other option that I realized that FatCow restricted subdomains.

The Database Name, username and password are the ones you created initially with Manage MySQL. If you forgot that password, you can actually change it with that same tool.

An important FatCow-specific point, here, is that “Database Host” should be “<username>.fatcowmysql.com” (where “<username>” is your FatCow username). In my experience, other webhosts use “localhost” and WPµ defaults to that.
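
Behind the scenes, these answers end up as the database constants in WPµ’s wp-config.php, so if the install later complains about its database, that’s the file to double-check. Roughly (with placeholder values):
  // wp-config.php, as written by the installer (values are placeholders)
  define('DB_NAME', 'wpmu_db');
  define('DB_USER', 'wpmu_user');
  define('DB_PASSWORD', 'easy-to-remember-password');
  define('DB_HOST', 'username.fatcowmysql.com'); // not "localhost", on FatCow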

You’re asked to give a name to your blog. In a way, though, if you think of BuddyPress as more of a platform than a blogging system, that name should be rather general. As you’re installing “WordPress Multi-User,” you’ll be able to create many blogs with more specific names, if you want. But the name you’re entering here is for BuddyPress as a whole. As with <name> in “example.com/<name>” (instead of “example.com/wordpress-mu”), it’s a matter of personal opinion.

Something I noticed with the EarlyMorning theme is that it’s a good idea to keep the main blog’s name relatively short. I used thirteen characters and it seemed to fit quite well.

Once you’re done filling in this page, WPµ is installed in a flash. You’re then presented with some information about your installation. It’s probably a good idea to note down some of that information, including the full paths to your installation and the administrator’s password.

But the first thing you should do, as soon as you log in with “admin” as username and the password provided, is probably to change that administrator password. (In fact, it seems that frequent advice in the WordPress community is to create a new administrator user account, with a different username than “admin,” and delete the “admin” account. Given some security issues with WordPress in the past, it seems like a good piece of advice. But I won’t describe it here. I did do it in my installation and it’s quite easy to do in WPµ.)

Then, you should probably enable plugins here:

example.com/<name>/wp-admin/wpmu-options.php#menu

(From what I understand, it might be possible to install BuddyPress without enabling plugins, since you’re logged in as the administrator, but it still makes sense to enable them and it happens to be what I did.)

You can also change a few other options, but these can be set at another point.

One option which is probably useful is “Allow new registrations.” It offers three choices: “Disabled,” “Enabled. Blogs and user accounts can be created,” and “Only user account can be created.” The last one is the relevant one here.

Obviously, it’s not necessary. But in the interest of opening up the BuddyPress installation to the wider world without worrying too much about a proliferation of blogs, it might make sense. You may end up with some fake user accounts, but that shouldn’t be a difficult problem to solve.

Now comes the installation of the BuddyPress plugin itself. You can do so by going here:

example.com/<name>/wp-admin/plugin-install.php

And do a search for “BuddyPress” as a term. The plugin you want was authored by “The BuddyPress Community.” (In my case, version 1.1.3.) Click the “Install” link to bring up the installation dialog, then click “Install Now” to actually install the plugin.

Once the install is done, click the “Activate” link to complete the basic BuddyPress installation.

You now have a working installation of BuddyPress but the BuddyPress-savvy EarlyMorning isn’t enabled. So you need to go to “example.com/<name>/wp-admin/wpmu-themes.php” to enable both Buddymatic and EarlyMorning. You should then go to “example.com/<name>/wp-admin/themes.php” to activate the EarlyMorning theme.

Something which tripped me up because it’s now much easier than before is that forums (provided through bbPress) are now, literally, a one-click install. If you go here:

example.com/<name>/wp-admin/admin.php?page=bb-forums-setup

You can set up a new bbPress install (“Set up a new bbPress installation”) and everything will work wonderfully in terms of having forums fully integrated in BuddyPress. It’s so seamless that I wasn’t completely sure it had worked.

Besides this, I’d advise that you set up a few widgets for the BuddyPress frontpage. You do so through an easy-to-use drag-and-drop interface here:

example.com/<name>/wp-admin/widgets.php

I especially advise you to add the Twitter RSS widget because it seems to me to fit right in. If I’m not mistaken, the EarlyMorning theme contains specific elements to make this widget look good.

After that, you can just have fun with your new BuddyPress installation. The first thing I did was to register a new user. To do so, I logged out of my admin account and clicked on the Sign Up button. Since I “allow new registrations,” it’s a very simple process. In fact, this is one place where I think that BuddyPress shines. Something I didn’t explain is that you can add a series of fields for that registration and the user profile which goes with it.

The whole process really shouldn’t take very long. In fact, the longest parts probably have to do with waiting for Archive Gateway.

The rest is “merely” to get people involved in your BuddyPress installation. It can happen relatively easily, if you already have a group of people trying to do things together online. But it can be much more complicated than any software installation process… 😉

Homeroasting and Coffee Geekness

I bought the i-Roast 2 homeroaster: I’m one happy (but crazy) coffee geek.

I’m a coffee geek. By which I mean that I have a geeky attitude to coffee. I’m passionate about the crafts and arts of coffee making, I seek coffee-related knowledge wherever I can find it, I can talk about coffee until people’s eyes glaze over (which happens more quickly than I’d guess possible), and I even dream about coffee gadgets. I’m not a typical gadget freak, as far as geek culture goes, but coffee is one area where I may invest in some gadgetry.

Perhaps my most visible acts of coffee geekery came in the form of updates I posted through diverse platforms about my home coffee brewing experiences. Did it from February to July. These posts contained cryptic details about diverse measurements, including water temperature and index of refraction. It probably contributed to people’s awareness of my coffee geek identity, which itself has been the source of fun things like a friend bringing me back coffee from Ethiopia.

But I digress, a bit. This is both about coffee geekness in general and about homeroasting in particular.

See, I bought myself this Hearthware i-Roast 2 dedicated homeroasting device. And I’m dreaming about coffee again.

Been homeroasting since December 2002, when I moved to Moncton, New Brunswick and was lucky enough to get in touch with Terry Montague of Down East Coffee.

Though I had been wishing to homeroast for a while before that and had become an intense coffee-lover fifteen years prior to contacting him, Terry is the one who enabled me to start roasting green coffee beans at home. He procured me a popcorn popper, sourced me some quality green beans, gave me some advice. And off I was.

Homeroasting is remarkably easy. And it makes a huge difference in one’s appreciation of coffee. People in the coffee industry, especially baristas and professional roasters, tend to talk about the “channel” going from the farmer to the “consumer.” In some ways, homeroasting gets the coffee-lover a few steps closer to the farmer, both by eliminating a few intermediaries in the channel and by making coffee into much less of a commodity. Once you’ve spent some time smelling the fumes emanated by different coffee varietals and looking carefully at individual beans, you can’t help but get a deeper appreciation for the farmer’s and even the picker’s work. When you roast 150g or less at a time, every coffee bean seems much more valuable. Further, as you experiment with different beans and roast profiles, you get to experience coffee in all of its splendour.

A popcorn popper may sound like a crude way to roast coffee. And it might be. Naysayers may be right in their appraisal of poppers as a coffee roasting method. You’re restricted in different ways and it seems impossible to produce exquisite coffee. But having roasted with a popper for seven years, I can say that my poppers gave me some of my most memorable coffee experiences. Including some of the most pleasant ones, like this organic Sumatra from Theta Ridge Coffee that I roasted in my campus apartment at IUSB and brewed using my beloved Brikka.

Over the years, I’ve roasted a large variety of coffee beans. I typically buy a pound each of three or four varietals and experiment with them for a while.

Mostly because I’ve been moving around quite a bit, I’ve been buying green coffee beans from a rather large variety of places. I try to buy them locally, as much as possible (those beans have travelled far enough and I’ve had enough problems with courier companies). But I did participate in a few mail orders or got beans shipped to me for some reason or another. Sourcing green coffee beans has almost been part of my routine in those different places where I’ve been living since 2002: Moncton, Montreal, Fredericton, South Bend, Northampton, Brockton, Cambridge, and Austin. Off the top of my head, I’ve sourced beans from:

  1. Down East
  2. Toi, moi & café
  3. Brûlerie Saint-Denis
  4. Brûlerie des quatre vents
  5. Terra
  6. Theta Ridge
  7. Dean’s Beans
  8. Green Beanery
  9. Cuvée
  10. Fair Bean
  11. Sweet Maria’s
  12. Evergreen Coffee
  13. Mon café vert
  14. Café-Vrac
  15. Roastmasters
  16. Santropol

And probably a few other places, including this one place in Ethiopia where my friend Erin bought some.

So, over the years, I got beans from a rather large array of places and from a wide range of regional varietals.

I rapidly started blending freshly-roasted beans. Typically, I would start a blend by roasting three batches in a row. I would taste some as “single origin” (coffee made from a single bean varietal, usually from the same farm or estate), shortly after roasting. But, typically, I would mix my batches of freshly roasted coffee to produce a main blend. I would then add fresh batches after a few days to fine-tune the blend to satisfy my needs and enhance my “palate” (my ability to pick up different flavours and aromas).

Once the quantity of green beans in a particular bag would fall below an amount I could reasonably roast as a full batch (minimum around 100g), I would put those green beans in a pre-roast blend, typically in a specially-marked ziplock bag. Roasting this blend would usually be a way for me to add some complexity to my roasted blends.

And complexity I got. Lots of diverse flavours and aromas. Different things to “write home about.”

But I was obviously limited in what I could do with my poppers. The only real controls that I had in homeroasting, apart from blending, consisted of the bean quantity and roasting time. Ambient temperature was clearly a factor, but not one over which I was able to exercise much control. Especially since I frequently ended up roasting outside, so as not to inconvenience people with fumes, noise, and chaff. The few homeroast batches which didn’t work probably failed because of low ambient temperature.

One reason I stuck with poppers for so long was that I had heard that dedicated roasters weren’t that durable. I’ve probably used three or four different hot air popcorn poppers, over the years. Eventually, they just stop working, when you use them for coffee beans. As I’d buy them at garage sales and Salvation Army stores for 3-4$, replacing them didn’t feel like such a financially difficult thing to do, though finding them could occasionally be a challenge. Money was also an issue. Though homeroasting was important for me, I wasn’t ready to pay around 200$ for an entry-level dedicated roaster. I was thinking about saving money for a Behmor 1600, which offers several advantages over other roasters. But I finally gave in and bought my i-Roast as a kind of holiday gift to myself.

One broad reason is that my financial situation has improved since I started a kind of partial professional reorientation (PPR). I have a blogpost in mind about this PPR, and I’ll probably write it soon. But this post isn’t about my PPR.

Although, the series of events which led to my purchase does relate to my PPR, somehow.

See, the beans I (indirectly) got from Roastmasters came from a friend who bought a Behmor to roast cocoa beans. The green coffee beans came with the roaster but my friend didn’t want to roast coffee in his brand new Behmor, to avoid the risk of coffee oils and flavours getting into his chocolate. My friend asked me to roast some of these beans for his housemates (he’s not that intensely into coffee, himself). When I went to drop some homeroasted coffee by the Station C co-working space where he spends some of his time, my friend was discussing a project with Duncan Moore, whom I had met a few times but with whom I had had few interactions. The three of us had what we considered a very fruitful yet very short conversation. Later on, I got to do a small but fun project with Duncan. And I decided to invest that money into coffee.

A homeroaster seemed like the most appropriate investment. The Behmor was still out of reach but the i-Roast seemed like a reasonable purchase. Especially if I could buy it used.

But I was also thinking about buying it new, as long as I could get it quickly. It took me several years to make a decision about this purchase but, once I made it, I wanted something as close to “instant gratification” as possible. In some ways, the i-Roast was my equivalent to Little Mrs Sommers‘s “pair of silk stockings.”

At the time, Mon café vert seemed like the only place where I could buy a new i-Roast. I tried several times to reach them to no avail. As I was in the Mile-End when I decided to make that purchase, I went to Caffè in Gamba, both to use the WiFi signal and to check if, by any chance, they might not have started selling roasters. They didn’t, of course; homeroasting isn’t mainstream enough. But, as I was there, I saw the Hario Ceramic Coffee Mill Skerton, a “hand-cranked” coffee grinder about which I had read some rather positive reviews.

For the past few years, I had been using a Bodum Antigua conical burr electric coffee grinder. This grinder was doing the job, but maybe because of “wear and tear,” it started taking a lot longer to grind a small amount of coffee. The grind took so long, at some points, that the grounds were warm to the touch and it seemed like the grinder’s motor was itself heating.

So I started dreaming about the Baratza Vario, a kind of prosumer electric grinder which seemed like the ideal machine for someone who uses diverse coffee making methods. The Vario is rather expensive and seemed like overkill, for my current coffee setup. But I was lusting over it and, yes, dreaming about it.

One day, maybe, I’ll be able to afford a Vario.

In the meantime, and more reasonably, I had been thinking about “Turkish-style mills.” A friend lent me a box-type manual mill at some point and I did find it produced a nice grind, but it wasn’t that convenient for me, partly because the coffee drops into a small drawer which rapidly gets full. A handmill seemed somehow more convenient and there are some generic models which are sold in different parts of the World, especially in the Arab World. So I got the impression that I might be able to find handmills locally and started looking for them all over the place, enquiring at diverse stores and asking friends who have used those mills in the past. Of course, they can be purchased online. But they end up being relatively expensive and my experience with the manual mill wasn’t so positive as to convince me to spend so much money on one.

The Skerton was another story. It was much more convenient than a box-type manual mill. And, at Gamba, it was inexpensive enough for me to purchase it on the spot. I don’t tend to do this very often so I did feel strange about such an impulse purchase. But I certainly don’t regret it.

Especially since it complements my other purchases.

So, on to the i-Roast.

Over the years, I had been looking for the i-Roast and Behmor at most of the obvious sites where one might buy used devices like these. eBay, Craig’s List, Kijiji… As a matter of fact, I had seen an i-Roast on one of these, but I was still hesitating. Not exactly sure why, but it probably had to do with the fact that these homeroasters aren’t necessarily that durable and I couldn’t see how old this particular i-Roast was.

I eventually called to find out, after making my decision to get an i-Roast. Turns out that it’s still under warranty, is in great condition, and was being sold by a very interesting (and clearly trustworthy) alto singer who happens to sing with a friend of mine who is also a local beer homebrewer. The same day I bought the roaster, I went to the cocoa-roasting friend’s place and saw a Behmor for the first time. And I tasted some really nice homemade chocolate. And met other interesting people including a couple that I saw, again, while taking the bus after purchasing the roaster.

The series of coincidences in that whole situation filled me with a sense of awe. Not out of some strange superstition or other folk belief. But different things are all neatly packaged in a way that most of my life isn’t. Nothing weird about this. The packaging is easy to explain and mostly comes from my own perception. But the effect is still there: it all fits.

And the i-Roast 2 itself fits, too.

It’s clearly not the ultimate coffee geek’s ideal roaster. But I get the impression it could become so. In fact, one reason I hesitated to buy the i-Roast 2 is that I was wondering if Hearthware might be coming out with the i-Roast 3, in the not-so-distant future.

I’m guessing that Hearthware might be getting ready to release a new roaster. I’m using unreliable information, but it’s still an educated guess. So, apparently…

I could just imagine what the i-Roast 3 might be. As I tend to do, I have a number of crazy ideas.

One “killer feature” actually relates both to the differences between the i-Roast and i-Roast 2 as well as to the geek factor behind homeroasting: roast profiles as computer files. Yes, I know, it sounds crazy. And, somehow, it’s quite unlikely that Hearthware would add such a feature on an entry-level machine. But I seriously think it’d make the roaster much closer to a roasting geek’s ultimate machine.

For one thing, programming a roast profile on the i-Roast is notoriously awkward. Sure, you get used to it. But it’s clearly suboptimal. And one major improvement of the i-Roast 2 over the original i-Roast is that the original version didn’t maintain profiles if you unplugged it. The next step, in my mind, would be to have some way to transfer a profile from a computer to the roaster, say via a slot for SD cards or even a USB port.

What this would open isn’t only the convenience of saving profiles, but actually a way to share them with fellow homeroasters. Since a lot in geek culture has to do with sharing information, a neat effect could come out of shareable roast profiles. In fact, when I looked for example roast profiles, I found forum threads, guides, and incredibly elaborate experiments. Eventually, it might be possible to exchange roasting profiles relating to coffee beans from the same shipment and compare roasting. Given the well-known effects of getting a group of people using online tools to share information, this could greatly improve the state of homeroasting and even make it break out of the very small niche in which it currently sits.
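
Just to make the dream a bit more concrete: a shareable profile wouldn’t need to be anything fancier than a small text file listing the stages, plus a bit of metadata. Something like this (pure fantasy on my part, not any format Hearthware actually supports, and the numbers are made up):
  # hypothetical i-Roast profile file
  beans: Sumatra, 140 g
  stage 1: 200 C for 3:00
  stage 2: 220 C for 4:00
  stage 3: 235 C for 2:30
  cool: 4:00
Trading files like these, along with tasting notes, would be enough to get the comparison game going.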

Of course, there are many problems with that approach, including things as trivial as voltage differences as well as bigger issues such as noise levels.

But I’m still dreaming about such things.

In fact, I go a few steps further. A roaster which could somehow connect to a computer might also be used to track data about temperature and voltage. In my own experiments with the i-Roast 2, I’ve been logging temperatures at 15 second intervals along with information about roast profile, quantity of beans, etc. It may sound extreme but it already helped me achieve a result I wanted to achieve. And it’d be precisely the kind of information I would like to share with other homeroasters, eventually building a community of practice.

Nothing but geekness, of course. Shall the geek inherit the Earth?

Development and Quality: Reply to Agile Diary

Getting on the soapbox about developers.

Former WiZiQ product manager Vikrama Dhiman responded to one of my tweets with a full-blown blogpost, thereby giving support to Matt Mullenweg‘s point that microblogging goes hand-in-hand with “macroblogging.”

My tweet:

enjoys draft æsthetics yet wishes more developers would release stable products. / adopte certains produits trop rapidement.

Vikrama’s post:

Good Enough Software Does Not Mean Bad Software « Agile Diary, Agile Introduction, Agile Implementation.

My reply:

“To an engineer, good enough means perfect. With an artist, there’s no such thing as perfect.” (Alexander Calder)

Thanks a lot for your kind comments. I’m very happy that my tweet (and status update) triggered this.

A bit of context for my tweet (actually, a post from Ping.fm, meant as a status update, thereby giving support in favour of conscious duplication, no offence intended to the partisans of action against duplication).

I’ve been thinking about what I call the “draft æsthetics.” In fact, I did a podcast episode about it. My description of that episode was:

Sometimes, there is such a thing as “Good Enough.”

Though I didn’t emphasize the “sometimes” part in that podcast episode, it was an important part of what I wanted to say. In fact, my intention wasn’t to defend draft æsthetics but to note that there seems to be a tendency toward this æsthetic mode. I do situate myself within that mode in many things I do, but it really doesn’t mean that this mode should be the exclusive one used in any context.

That aforequoted tweet was thus a response to my podcast episode on draft æsthetics. “Yes, ‘good enough’ may work, sometimes. But it needs not be applied in all cases.”

As I often get into convoluted discussions with people who seem to think that I condone or defend a position because I take it for myself, the main thing I’d say there is that I’m not only a relativist but I cherish nuance. In other words, my tweet was a way to qualify the core statement I was talking about in my podcast episode (that “good enough” exists, at times). And that statement isn’t necessarily my own. I notice a pattern by which this statement seems to be held as accurate by people. I share that opinion, but it’s not a strongly held belief of mine.

Of course, I digress…

So, the tweet which motivated Vikrama had to do with my approach to “good enough.” In this case, I tend to think about writing but in view of Eric S. Raymond’s approach to “Release Early, Release Often” (RERO). So there is a connection to software development and geek culture. But I think of “good enough” in a broader sense.

Disclaimer: I am not a coder.

The Calder quote remained in my head, after it was mentioned by a colleague who had read it in a local newspaper. One reason it struck me is that I spend some time thinking about artists and engineers, especially in social terms. I spend some time hanging out with engineers but I tend to be more on the “artist” side of what I perceive to be an axis of attitudes found in some social contexts. I do get a fair deal of flack for some of my comments on this characterization and it should be clear that it isn’t meant to imply any evaluation of individuals. But, as a model, the artist and engineer distinction seems to work, for me. In a way, it seems more useful than the distinction between science and art.

An engineer friend with whom I discussed this kind of distinction was quick to point out that, to him, there’s no such thing as “good enough.” He was also quick to point out that engineers can be creative and so on. But the point isn’t to exclude engineers from artistic endeavours. It’s to describe differences in modes of thought, ways of knowing, approaches to reality. And the way these are perceived socially. We could do a simple exercise with terms like “troubleshooting” and “emotional” to be assigned to the two broad categories of “engineer” and “artist.” Chances are that clear patterns would emerge. Of course, many concepts are as important to both sides (“intelligence,” “innovation”…) and they may also be telling. But dichotomies have heuristic value.

Now, to go back to software development, the focus in Vikrama’s Agile Diary post…

What pushed me to post my status update and tweet is in fact related to software development. Contrary to what Vikrama presumes, it wasn’t about a Web application. And it wasn’t even about a single thing. But it did have to do with firmware development and with software documentation.

The first case is that of my Fonera 2.0n router. Bought it in early November and I wasn’t able to connect to its private signal using my iPod touch. I could connect to the router using the public signal, but that required frequent authentication, as annoying as with ISF. Since my iPod touch is my main WiFi device, this issue made my Fonera 2.0n experience rather frustrating.

Of course, I’ve been contacting Fon‘s tech support. As is often the case, that experience was itself quite frustrating. I was told to reset my touch’s network settings which forced me to reauthenticate my touch on a number of networks I access regularly and only solved the problem temporarily. The same tech support person (or, at least, somebody using the same name) had me repeat the same description several times in the same email message. Perhaps unsurprisingly, I was also told to use third-party software which had nothing to do with my issue. All in all, your typical tech support experience.

But my tweet wasn’t really about tech support. It was about the product. Though I find the overall concept behind the Fonera 2.0n router very interesting, its implementation seems to me to be lacking. In fact, it reminds me of several FLOSS development projects that I’ve been observing and, to an extent, benefitting from.

This is rapidly transforming into a rant I’ve had in my “to blog” list for a while about “thinking outside the geek box.” I’ll try to resist the temptation, for now. But I can mention a blog thread which has been on my mind, in terms of this issue.

Firefox 3 is Still a Memory Hog — The NeoSmart Files.

The blogpost refers to a situation in which, according to at least some users (including the blogpost’s author), Firefox uses up more memory than it should and becomes difficult to use. The thread has several comments providing support to statements about the relatively poor performance of Firefox on people’s systems, but it also has “contributions” from an obvious troll, who keeps pinning the problem on the users.

The thing about this is that it’s representative of a tricky issue in the geek world, whereby developers and users are perceived as belonging to two sides of a type of “class struggle.” Within the geek niche, users are often dismissed as “lusers.” Tech support humour includes condescending jokes about “code 6”: “the problem is 6″ from the screen.” The aforementioned Eric S. Raymond wrote a rather popular guide to asking questions in geek circles which seems surprisingly unaware of social and cultural issues, especially from someone with an anthropological background. Following that guide, one should switch their mind to that of a very effective problem-solver (i.e., the engineer frame) to ask questions “the smart way.” Not only is the onus on users, but any failure to comply with these rules may be met with this air of intellectual superiority encoded in that guide. IOW, “Troubleshoot now, ask questions later.”

Of course, many users are “guilty” of all sorts of “crimes” having to do with not reading the documentation which comes with the product or with simply not thinking about the issue with sufficient depth before contacting tech support. And as the majority of the population is on the “user” side, the situation can be described as both a form of marginalization (geek culture comes from “nerd” labels) and a matter of elitism (geek culture as self-absorbed).

This does have something to do with my Fonera 2.0n. With it, I was caught in this dynamic whereby I had to switch to the “engineer frame” in order to solve my problem. I eventually did solve my Fonera authentication problem, using a workaround mentioned in a forum post about another issue (free registration required). Turns out, the “release candidate” version of my Fonera’s firmware does solve the issue. Of course, this new firmware may cause other forms of instability and installing it required a bit of digging. But it eventually worked.

The point is that, as released, the Fonera 2.0n router is a geek toy. It’s unpolished in many ways. It’s full of promise in terms of what it may make possible, but it failed to deliver in terms of what a router should do (route a signal). In this case, I don’t consider it to be a finished product. It’s not necessarily “unstable” in the strict sense that a software engineer might use the term. In fact, I hesitated between different terms to use instead of “stable,” in that tweet, and I’m not that happy with my final choice. The Fonera 2.0n isn’t unstable. But it’s akin to an alpha version released as a finished product. That’s something we see a lot of, these days.

The main other case which prompted me to send that tweet is “CivRev for iPhone,” a game that I’ve been playing on my iPod touch.

I’ve played with different games in the Civ franchise and I even used the FLOSS version on occasion. Not only is “Civilization” a geek classic, but it does connect with some anthropological issues (usually in a problematic view: Civ’s worldview lacks anthro’s insight). And it’s the kind of game that I can easily play while listening to podcasts (I subscribe to a number of those).

What’s wrong with that game? Actually, not much. I can’t even say that it’s unstable, unlike some other items in the App Store. But there are a few things which aren’t optimal in terms of documentation. Not that it’s difficult to figure out how the game works. But the game is complex enough that some documentation is quite useful. Especially since it does change between one version of the game and another. Unfortunately, the online manual isn’t particularly helpful. Oh, sure, it probably contains all the information required. But it’s not available offline, isn’t optimized for the device it’s supposed to be used with, doesn’t contain proper links between sections, isn’t directly searchable, and isn’t particularly well-written. Not to mention that it seems to only be available in English even though the game itself is available in multiple languages (I play it in French).

Nothing tragic, of course. But coupled with my Fonera experience, it contributed to both a slight sense of frustration and this whole reflection about unfinished products.

Sure, it’s not much. But it’s “good enough” to get me started.

What’s So “Social” About “Social Media?” (Podcamp Montreal Topic)

Planning my #pcmtl session.

It’s all good and well to label things “social media” but those of us who are social scientists need to speak up about some of the insight we can share.
If social scientists and social media peeps make no effort at talking with one another, social media will suffer and social scientists will be shut out of something important.
Actually, social media might provide one of the most useful vantage points to look at diverse social issues, these days. And participants in social media could really benefit from some basic social analysis.
Let’s talk about this.

Will be participating in this year’s Podcamp Montreal. Last year’s event had a rather big impact on me. (It’s at #pcmtl08 that I “came out of the closet” as a geek!) At that time, I presented on “Social Acamedia” (Slides, Audio). Been meaning to do a slidecast but never got a round tuit.

My purpose, this year, is in a way to follow up on this blogpost of mine (from the same period) about “The Need for Social Science in Social Web/Marketing/Media.”

As it’s a BarCamp-style unconference, I’ll do this in a very casual way. I might actually not use slides or anything like that. And I guess I could use my time for a discussion, more than anything else. I’ll be missing much of the event because I’m teaching on Saturday. So my session comes in a very different context from last year’s, when I was able to participate in all sorts of things surrounding #pcmtl.

Yup. Come to think of it, a conversation makes more sense, as I’ll be getting condensed insight from what happens before. I still might start with a 15 minute spiel, but I really should spend as much time as possible just discussing these issues with people. Isabelle Lopez did a workshop-style session last year and that was quite useful.

The context for this year’s session is quite specific. I’ve been reorienting myself as an “informal ethnographer.” I eventually started my own podcast on ethnography, I’ve been doing presentations and workshops both on social media and on social analysis of online stuff, I was even able to participate in the creation of material for a graduate course about the “Social Web”…

Should be fun.

War of the Bugs: Playing with Life in the Brewery

A mad brewer’s approach to wild yeast and bacteria.

Kept brewing and thinking about brewing, after that last post. Been meaning to discuss my approach to “brewing bugs”: the yeast and bacteria strains which are involved in some of my beers. So, it’s a kind of follow-up.

Perhaps more than a reason for me to brew, getting to have fun with these living organisms is something of an achievement. It took a while before it started paying off, but it now does.

Now, I’m no biochemist. In fact, I’m fairly far from “wet sciences” in general. What I do with these organisms is based on a very limited understanding of what goes on during fermentation. But as long as I’m having fun, that should be ok.

This blogpost is about yeast in brewing. My focus is on homebrewing but many things also apply to craft brewing or even to macrobreweries.

There’s supposed to be a saying that “brewers make wort, yeast makes beer.” Whether or not it’s an actual saying, it’s quite accurate.

“Wort” is unfermented beer. It’s a liquid containing fermentable sugars and all sorts of other compounds which will make their way into the final beer after the yeast has had its fun in it. It’s a sweet liquid which tastes pretty much like Malta (e.g. Vitamalt).

Yeast is a single-cell organism which can do a number of neat things including the fine act of converting simple sugars into alcohol and CO2. Yeast cells also do a number of other neat (and not so neat) things with the wort, including the creation of a large array of flavour compounds which can radically change the character of the beer. Among the four main ingredients in beer (water, grain, hops, and yeast), I’d say that yeast often makes the largest contribution to the finished beer’s flavour and aroma profile.

The importance of yeast in brewing has been acknowledged to different degrees in history. The well-known Reinheitsgebot “purity law” of 1516, which specifies permissible ingredients in beer, made no mention of yeast. As the story goes, it took Pasteur (and probably others) to discover the role of yeast in brewing. After this “discovery,” Pasteur and others have been active at isolating diverse yeast strains to be used in brewing. Before that time, it seems that yeast was just occurring naturally in the brewing process.

As may be apparent in my tone, I’m somewhat skeptical of the “discovery” narrative. Yeast may not have been understood very clearly before Pasteur came on the scene, but there’s some evidence showing that yeast’s contribution to brewing had been known in different places at previous points in history. It also seems likely that multiple people had the same basic insight as LP did but may not have had the evidence to support this insight. This narrative is part of the (home)brewing “shared knowledge.”

But I’m getting ahead of myself.

There’s a lot to be said about yeast biochemistry. In fact, the most casual of brewers who spends any significant amount of time with online brewing resources has some understanding, albeit fragmentary, of diverse dimensions of biochemistry through the action of yeast. But this blogpost isn’t about yeast biochemistry.

I’m no expert and biochemistry is a field for experts. What tends to interest me more than the hard science on yeast is the kind of “folk science” brewers create around yeast. Even the most scientific of brewers occasionally talks about yeast in a way which sounds more like folk beliefs than like hard science. In ethnographic disciplines, there’s a field of “ethnoscience” which deals with this kind of “folk knowledge.” My characterization of “folk yeast science” will probably sound overly simplistic and I’m not saying that it accurately represents a common approach to yeast among brewers. It’s more in line with the tone of Horace Miner’s classic text about the Nacirema than with anything else. A caricature, maybe, but one which can provide some insight.

In this case, because it’s a post on my personal blog, it probably provides more insight about yours truly than about anybody else. So be it.

I’m probably more naïve than most. Or, at least, I try to maintain a sense of wonder, as I play with yeast. I’ve done just enough reading about biochemistry to be dangerous. Again, “the brewery is an adult’s chemistry set.”

A broad distinction in the brewer’s approach to yeast is between “pure” and “wild” yeast. Pure yeast usually comes to the brewer from a manufacturer but it originated in a well-known brewery. Wild yeast comes from the environment and should be avoided at all costs. Wild yeast infects and spoils the wort. Pure yeast is a brewer’s best friend as it’s the one which transforms sweet wort into tasty, alcoholic beer. Brewers do everything to “keep the yeast happy.” Though yeast happiness sounds like exaggeration on my part, this kind of anthropomorphic concept is clearly visible in discussions among brewers. (Certainly, “yeast health” is a common concept. It’s not anthropomorphic by itself, but it takes part in the brewer’s approach to yeast as life.) Wild yeast is the reason brewers use sanitizing agents. Pure yeast is carefully handled, preserved, “cultured.” In this context, “wild yeast” is unwanted yeast. “Pure yeast” is the desirable portion of microflora.

It wouldn’t be too much of an exaggeration to say that many brewers are obsessed with the careful handling of pure yeast and the complete avoidance of wild yeast. The homebrewer’s motto, following Charlie Papazian, may be “Relax, Don’t Worry, Have a Homebrew,” but when brewers do worry, they often worry about keeping their yeast as pure as possible or keeping their wort as devoid of wild yeast as possible.

In the context of brewers’ folk taxonomy, wild yeast is functionally a “pest”: its impact is largely seen as negative. Pure yeast is beneficial. Terms like “bugs” or “beasties” are applied to both but, with wild yeast, their connotations and associations are negative (“nasty bugs”) while the terms are applied to pure yeast in a more playful, almost endearing tone. “Yeasties” is almost a pet name for pure yeast.

I’ve mentioned “folk taxonomy.” Here, I’m mostly thinking about cognitive anthropology. Taxonomies have been the hallmark of cognitive anthropology, as they reveal a lot about the ways people conceive of diverse parts of reality and are relatively easy to study. Eliciting categories in a folk taxonomy is a relatively simple exercise which can even lead to other interesting things in terms of ethnographic research (including, for instance, establishing rapport with local experts or providing a useful basis to understanding subtleties in the local language). I use terms like “folk” and “local” in a rather vague way. The distinction is often with “Western” or even “scientific.” Given the fact that brewing in North America has some strong underpinnings in science, it’s quite fun to think about North American homebrewers through a model which involves an opposition to “Western/scientific.” Brewers, including a large proportion of homebrewers, tend to be almost stereotypically Western and to work through (and sometimes labour under) an almost-reductionist scientific mindframe. In other words, my talking about “folk taxonomy” is almost a way to tease brewers. But it also relates to my academic interest in cultural diversity, language, worldviews, and humanism.

“Folk taxonomies” can be somewhat fluid but the concept applies mostly to classification systems which are tree-like, with “branches” coming off broader categories. The term “folksonomy” has some currency, these days, to refer to a classification structure which has some relation to folk taxonomy but which doesn’t tend to work through a very clear arborescence. In many contexts, “folksonomy” simply means “tagging,” with the notion that it’s a free-form classification, not amenable to treatment in the usual “hierarchical database” format. Examples of folksonomies often have to do with the way people classify books or other sources of information. A folksonomy is then the opposite of the classification system used in libraries or in Web directories such as the original Yahoo! site. Tags assigned to this blogpost (“Tagged: Belgian artist…”) are part of my own folksonomy for blogposts. Categories on WordPress blogs such as this one are supposed to create more of a (folk) taxonomy. For several reasons (including the fact that tags weren’t originally available to me for this blog), I tend to use categories as more of a folksonomy, but with a bit more structure. Categories are more stable than tags. For a while, now, I’ve refrained from adding new categories (to my already overly-long list). But I do add lots of new tags.

Anyhoo…

Going back to brewers’ folk taxonomy of yeast strains…

Technically, if I’m not mistaken, the term “pure” should probably refer to the yeast culture, not to the yeast itself. But the overall concept does seem to apply to types of yeast, even if other terms are used. The terms “wild” and “pure” aren’t inappropriate. “Wild” yeast is undomesticated. “Pure” yeast strains were those strains which were selected from wild yeast strains and were isolated in laboratories.

Typically, pure yeast strains come from one of two species of the genus Saccharomyces. One species includes the “top-fermenting” yeast strains used in ales while the other species includes the “bottom-fermenting” yeast strains used in lagers. The distinction between ale and lager is relatively recent, in terms of brewing history, but it’s one which is well-known among brewers. The “ale” species is called cerevisiae (with all sorts of common misspellings) and the “lager” species has been called different names through history, to the extent that the most appropriate name (pastorianus) seems to be the object of specialized, not of common knowledge.

“Wild yeast” can be any yeast strain. In fact, the two species of pure yeast used in brewing exist as wild yeast and brewers’ “folk classification” of microorganisms often lumps bacteria in the “wild yeast” category. The distinction between bacteria and yeast appears relatively unimportant in relation to brewing.

As can be expected from my emphasis on “typically,” above, not all pure yeast strains belong to the “ale” and “lager” species. And as is often the case in research, the exceptions are where things get interesting.

One category of yeast which is indeed pure but which doesn’t belong to one of the two species is wine yeast. While brewers do occasionally use strains of wine yeast when making other beverages besides beer, wine yeast strains mostly don’t appear on the beer brewer’s radar as being important or interesting. Unlike wild yeast, it shouldn’t be avoided at all costs. Unlike pure yeast, it shouldn’t be cherished. In this sense, it could almost serve as the “degree zero” or “null” in the brewer’s yeast taxonomy.

Then, there are yeast strains which are usually considered in a negative way but which are treated as pure strains. I’m mostly thinking about two of the main species in the Brettanomyces genus, commonly referred to as “Brett.” These are winemakers’ pests, especially in the case of oak aging. Oak casks are expensive and they can be ruined by Brett infections. In beer, while Brett strains are usually classified as wild yeast, some breweries have been using Brett in fermentation to effects which are considered by some people to be rather positive while others find these flavours and aromas quite displeasing. It’s part of the brewing discourse to use “barnyard” and “horse blanket” as descriptors for some of the aroma and flavour characteristics given by Brett.

Brewers who consciously involve Brett in the fermentation process are rather uncommon. There are a few breweries in Belgium which make use of Brett, mostly in lambic beers which are fermented “spontaneously” (without the use of controlled inoculation). And there’s a (slightly) growing trend among North American home- and craft brewers toward using Brett and other bugs in brewing.

Because of these North American brewers, Brett strains are now available commercially, as “pure” strains.

Which makes for something quite interesting. Brett is now part of the “pure yeast” category, at least for some brewers. They then use Brett as they would other pure strains, taking precautions to make sure it’s not contaminated. At the same time, Brett is often used in conjunction with other yeast strains and, contrary to the large majority of beer fermentation methods, what brewers use is a complex yeast culture which includes both Saccharomyces and Brett. It may not seem that significant but it brings fermentation out of the strict “mono-yeast” model. Talking about “miscegenation” in social terms would be abusive. But it’s interesting to notice which brewers use Brett in this way. In some sense, it’s an attitude which has dimensions from both the “Belgian Artist” and “German Engineer” poles in my brewing attitude continuum.

Other brewers use Brett in a more carefree way. Since Brett-brewing is based on a complex culture, one can go all the way and mix other bugs. Because Brett has been mostly associated with lambic brewing, since the onset of “pure yeast” brewing, the complex cultures used in lambic breweries serve as the main model. In those breweries, little control can be applied to the balance between yeast strains and the concept of “pure yeast” seems quite foreign. I’ve never visited a lambic brewery (worse yet, I’ve yet to set foot in Belgium), but I get to hear and read a lot about lambic brewing. My perception might be inaccurate, but it also reflects “common knowledge” among North American brewers.

As you might guess, by now, I take part in the trend to brew carefreely. Even carelessly. Which makes me more of a MadMan than the majority of brewers.

Among both winemakers and beer brewers, Brett has the reputation to be “resilient.” Once Brett takes hold of your winery or brewery, it’s hard to get rid of it. Common knowledge about Brett includes different things about its behaviour in the fermentation process (it eats some sugars that Saccharomyces doesn’t, it takes a while to do its work…). But Brett also has a kind of “character,” in an almost-psychological sense.

Which reminds me of a comment by a pro brewer about a well-known strain of lager yeast being “wimpy,” especially in comparison with some well-known British ale yeast strains such as Ringwood. To do their work properly, lager strains tend to require more care than ale strains, for several reasons. Ringwood and some other strains are fast fermenters and tend to “take over,” leaving little room for other bugs.

Come to think of it, I should try brewing with a blend of Ringwood and Brett. It’d be interesting to see “who wins.”

Which brings me to “war.”

Now, I’m as much of a pacifist as one can be. Not only do I tend not to be bellicose and do I cherish peace, but I also frequently try to avoid conflict, and I even believe that there’s a peaceful resolution to most situations.

Yet, one thing I enjoy about brewing is to play with conflicting yeast strains. Pitting one strain against another is my way to “wage wars.” And it’s not very violent.

I also tend to enjoy some games which involve a bit of conflict, including Diplomacy and Civilization. But I tend to play these games as peacefully as possible. Even Spymaster, which rapidly became focused on aggressions, I’ve been playing as a peace-loving, happy-go-lucky character.

But, in the brewery, I kinda like the fact that yeast cells from different strains are “fighting” one another. I don’t picture yeast cells like warriors (with tiny helmets), but I do have fun imagining the “Battle of the Yeast.”

Of course, this has more to do with competition than with conflict. But both are related, in my mind. I’m also not that much into competition and I don’t like to pit people against one another, even in friendly competition. But this is darwinian competition. True “survival of the fittest,” with everything which is implied in terms of being contextually appropriate.

So I’m playing with life, in my brewery. I’m not acting as a Creator over the yeast population, but there’s something about letting yeast cells “having at it” while exercising some level of control that could be compared to some spiritual figures.

Thinking about this also makes me think about the Life game. There are some similarities between what goes on in my wort and what Conway’s game implies. But there are also several differences, including the type of control which can be applied in either case and the fact that the interaction between yeast cells is difficult to visualize. Not to mention that yeast cells are actual, living organisms while the cellular automaton is pure simulation.

The fun I have playing with yeast cells is part of the reason I like to use Brett in my beers. The main reason, though, is that I like the taste of Brett in beer. In fact, I even like it in wine, by transfer from my taste for Brett in beer.

And then, there’s carefree brewing.

As I described above, brewers are very careful to avoid wild yeast and other unwanted bugs in their beers. Sanitizing agents are an important part of the brewer’s arsenal. Which goes well with the “German engineer” dimension of brewing. There’s an extreme position in brewing, even in homebrewing. The “full-sanitization brewery.” Apart from pure yeast, nothing should live in the wort. Actually, nothing else should live in the brewery. If it weren’t for the need to use yeast in the fermentation process, brewing could be done in a completely sterile environment. The reference for this type of brewery is the “wet science” lab. As much as possible, wort shouldn’t come in contact with air (oxidization is another reason behind this; the obsession with bugs and the distaste for oxidization often go together). It’s all about control.

There’s an obvious reason behind this. Wort is exactly the kind of thing wild yeast and other bugs really like. Apparently, slants used to culture microorganisms in labs may contain a malt-based gelatin which is fairly similar to wort. I don’t think it contains hops, but hops are an agent of preservation and could have a positive effect in such a slant.

I keep talking about “wild yeast and other bugs” and I mentioned that, in the brewer’s folk taxonomy, bacteria are equivalent to wild yeast. The distinction between yeast and bacteria matters much less in the brewery than in relation to life sciences. In the conceptual system behind brewing, bacteria is functionally equivalent to wild yeast.

Fear of bacteria and microbes is widespread, in North America. Obviously, there are many excellent medical reasons to fear a number of microorganisms. Bacteria can in fact be deadly, in the right context. Not that the mere presence of bacteria is directly linked with human death. But there’s a clear association, in a number of North American minds, between bacteria and disease.

As a North American, despite my European background, I tended to perceive bacteria in a very negative way. Even today, I react “viscerally” at the mention of bacteria. Though I know that bacteria may in fact be beneficial to human health and that the human body contains a large number of bacterial cells, I have this kind of ingrained fear of bacteria. I love cheese and yogurt, including those which are made with very complex bacterial culture. But even the mere mention of bacteria in this context requires that I think about the distinction between beneficial and dangerous bacteria. In other words, I can admit that I have an irrational fear of bacteria. I can go beyond it, but my conception of microflora is skewed.

For two years in Indiana, I was living with a doctoral student in biochemistry. Though we didn’t spend that much time talking about microorganisms, I was probably influenced by his attitude toward sanitization. What’s funny, though, is that our house wasn’t among the cleanest in which I’ve lived. In terms of “sanitary conditions,” I’ve had much better and a bit worse. (I’ve lived in a house where we received an eviction notice from the county based on safety hazards. Lots of problems with flooding, mould, etc.)

Like most other North American brewers, I used to obsess about sanitization, at every step in the process. I was doing an average job at sanitization and didn’t seem to get any obvious infection. I did get “gushers” (beers which gush out of the bottle when I open it) and a few “bottle bombs” (beer bottles which actually explode). But there were other explanations behind those occurrences than contamination.

The practice of sanitizing everything in the brewery had some significance in other parts of my life. For instance, I tend to think about dishes and dishwashing in a way which has more to do with caution over potential contamination than with dishes appearing clean and/or shiny. I also think about what should be put in the refrigerator and what can be left out, based on my limited understanding of biochemistry. And I think about food safety in a specific way.

In the brewery, however, I moved more and more toward another approach to microflora. Again, a more carefree approach to brewing. And I’m getting results that I enjoy while having a lot of fun. This approach is also based on my pseudo-biochemistry.

One thing is that, in brewing, we usually boil the wort for an hour or more before inoculation with pure yeast. As boiling kills most bugs, there’s something to be said about sanitization being mostly needed for equipment which touches the wort after the boil. Part of the equipment is sanitized during the boiling process, and whatever bugs other pieces of equipment may transfer to the wort before boiling are unlikely to have negative effects on the finished beer. With this idea in mind, I became increasingly careless with some pieces of my brewing equipment, starting with the immersion chiller and kettle and going all the way to the mashtun.

Then, there’s the fact that I use wild yeast in some fermentations. In both brewing and baking, actually. Though my results with completely “wild” fermentations have been mixed to unsatisfactory, some of my results with “partially-wild” fermentations have been quite good.

Common knowledge among brewers is that “no known pathogen can survive in beer.” From a food safety standpoint, beer is “safe” for four main reasons: boiling, alcohol, low pH, and hops. At least, that’s what is shared among brewers, with narratives about diverse historical figures who saved whole populations through beer, making water sanitary. Depending on people’s attitudes toward alcohol, these stories about beer may have different connotations. But it does seem historically accurate to say that beer played an important part in making water drinkable.

So, even wild fermentation is considered safe. People may still get anxious but, apart from off-flavours, the notion is that contaminated beer can do no more harm than other beers.

The most harmful products of fermentation about which brewers may talk are fusel alcohols. These, brewers say, may cause headaches if you get too much of them. Fusels can cause some unwanted consequences, but they’re not living organisms and won’t spread as a disease. In brewer common knowledge, “fusels” mostly have to do with beers with high degrees of alcohol which have been fermented at a high temperature. My personal sense is that fusels aren’t more likely to occur in wild fermentation than with pure fermentation, especially given the fact that most wild fermentation happens with beer with a low degree of alcohol.

Most of the “risks” associated with wild fermentation have to do with flavours and aromas which may be displeasing. Many of these have to do with souring, as some bugs transform different compounds (alcohol especially, if I’m not mistaken) into different types of acids. While Brett and other strains of wild yeast can cause some souring, the acids in question mostly have to do with bacteria. For instance, lactobacillus creates lactic acid, acetobacter creates acetic acid, etc.

Not only do I like the flavour and aroma characteristics associated with some wild yeast strains (Brett, especially), I also like sour beers. It may sound strange given the fact that I suffer from GERD. But I don’t overindulge in sour beers. I rarely drink large quantities of beer and sour beers would be the last thing I’d drink large quantities of. Besides, there’s a lot to be said about balance in pH. I may be off, but I get the impression that there are times in which sour things are either beneficial to me or at least harmless. Part of brewer common knowledge in fact has a whole thing about alkalinity and pH. I’m not exactly clear on how my body is affected by the ingestion of diverse substances, but I’m probably influenced by my background as a homebrewer.

Despite my taste for sour beers, I don’t necessarily have the same reaction to all souring agents. For instance, I have a fairly clear threshold in terms of acetic acid in beer. I enjoy it when a sour beer has some acetic character. But I prefer to limit the “aceticness” of my beers. Two batches I’ve fermented with wild bugs were way too acetic for me and I’m now concerned that other beers may develop the same character. In fact, if there’s a way to prevent acetobacter from getting in my wort while still getting the other bugs working, I could be even more carefree as a brewer than I currently am.

Which is a fair deal. These days, I really am brewing carefreely. Partly because of my “discovery” of lactobacillus.

As brewer common knowledge has it, lactobacillus is just about everywhere. It’s certainly found on grain and it’s present in human saliva. It’s involved in some dairy fermentation and it’s probably the main source of bacterial fear among dairy farmers.

Apart from lambic beers (which all come from a specific region in Belgium), the main sour beer that is part of brewer knowledge is Berliner Weisse. Though I have little data on how Berliner Weisse is fermented, I’ve known for a while that some people create a beer akin to Berliner Weisse through what brewers call a “sour mash” (and which may or may not be related to sour mash in American whiskey production). After thinking about it for years, I did my first sour mash last year. I wasn’t very careful in doing it but I got satisfying results. One advantage of the sour mash is that it happens before boiling, which means that the production of acid can be controlled, to a certain degree. While I did boil my wort coming from the sour mash, it’s clear that I still had some lactobacillus in my fermenters. It’s possible that my boil (which was much shorter than usual) wasn’t enough to kill all the bugs. But, come to think of it, I may have been a bit careless with sanitization of some pieces of equipment which had touched the sour wort before boiling. Whatever the cause, I ended up with some souring bugs in my fermentation. And these worked really well for what I wanted. So much so that I’ve consciously reused that culture in some of my most recent brewing experiments.

So, in my case, lactobacillus is in the “desirable” category of yeast taxonomy. With Brett and diverse Saccharomyces strains, lactobacillus is part of my fermentation apparatus.

As a mad brewer, I can use what I want to use. I may not create life, but I create beer out of this increasingly complex microflora which has been taking over my brewery.

And I’m a happy brewer.

How I Got Into Beer

Ramblings about my passions for beer and experimentation.

Was doing some homebrewing experimentation (sour mash, watermelon, honey, complex yeast cultures…) and I got to thinking about what I’d say in an interview about my brewing activities.

It’s a bit more personal than my usual posts in English (my more personal blogposts are usually in French), but it seems fitting.

I also have something of a backlog of blogposts I really should do ASAP. But blogging is also about seizing the moment. I feel like writing about beer. 😛

So…

As you might know, the drinking age in Quebec is 18, as in most parts of the world except for the US. What is somewhat distinct about Qc with regard to drinking age is that responsible drinking is the key and we tend to have a more “European” attitude toward alcohol: as compared to the Rest of Canada, there’s a fair bit of leeway in terms of when someone is allowed to drink alcohol. We also tend to learn to drink in the family environment, and not necessarily with friends. What it means, I would argue, is that we make our mistakes in a relatively safe context. By the time drinking with peers becomes important (e.g., in university or with colleagues), many of us know that there’s no fun in abusing alcohol and that there are better ways to prove ourselves than binge drinking. According to Barrett Seaman, author of Binge: What Your College Student Won’t Tell You, even students from the US studying at McGill University in Montreal are more likely to drink responsibly than most students he’s seen in the US. (In Montreal, McGill tends to be recognized as a place where binge drinking is most likely to occur, partly because of the presence of US students. In addition, binge drinking is becoming more conspicuous, in Qc, perhaps because of media pressure or because of influence from the US.)

All this to say that it’s rather common for a Québécois teen to at least try alcohol at a relatively early age. Because of my family’s connections with Switzerland and France, we probably pushed this even further than most Québécois families. In other words, I had my first sips of alcohol at a relatively early age (I won’t tell) and, by age 16, I could distinguish different varieties of Swiss wines, during an extended trip to Switzerland. Several of these wines were produced by relatives and friends, from their own vineyards. They didn’t contain sulfites and were often quite distinctive. To this day, I miss those wines. In fact, I’d say that Swiss wines are among the best kept secrets of the wine world. Thing is, it seems that Swiss vineyards barely produce enough for local consumption so they don’t try to export any of it.

Anyhoo…

By age 18, my attitude toward alcohol was already quite similar to what it is now: it’s something that shouldn’t be abused but that can be very tasty. I had a similar attitude toward coffee, which I started drinking regularly when I was 15. (Apart from being a homebrewer and a beer geek, I’m also a homeroaster and coffee geek. Someone once called me a “Renaissance drinker.”)

When I started working in French restaurants, it was relatively normal for staff members to drink alcohol at the end of the shift. In fact, at one place where I worked, the staff meal at the end of the evening shift was a lengthy dinner accompanied by some quality wine. My palate was still relatively untrained, but I remember that we would, in fact, discuss the wine on at least some occasions. And I remember one customer, a stage director, who would share his bottle of wine with the staff during his meal: his doctor told him to reduce his alcohol consumption and the wine only came in 750ml bottles. 😉

That same restaurant might have been the first place where I tried a North American craft beer. At least, this is where I started to know about craft beer in North America. It was probably McAuslan’s St. Ambroise Stout. But I also had opportunities to have some St. Ambroise Pale Ale. I just preferred the Stout.

At one point, that restaurant got promotional beer from a microbrewery called Massawippi. That beer was so unpopular that we weren’t able to give it away to customers. Can’t recall how it tasted but nobody enjoyed it. The reason this brewery is significant is that their license was the one which was bought to create a little microbrewery called Unibroue. So, it seems that my memories go back to some relatively early phases in Quebec’s craft beer history. I also have rather positive memories of when Brasal opened.

Somewhere along the way, I had started to pick up on some European beers. Apart from macros (Guinness, Heineken, etc.), I’m not really sure what I had tried by that point. But even though these were relatively uninspiring beers, they somehow got me to understand that there was more to beer than Molson, Labatt, Laurentide, O’Keefe, and Black Label.

The time I spent living in Switzerland, in 1994-1995, is probably the turning point for me in terms of beer tasting. Not only did I get to drink the occasional EuroLager and generic stout, but I was getting into Belgian Ales and Lambics. My “session beer,” for a while, was a wit sold in CH as Wittekop. Maybe not the most unique wit out there. But it was the house beer at Bleu Lézard, and I drank enough of it then to miss it. I also got to try several of the Trappists. In fact, one of the pubs on the EPFL campus had a pretty good beer selection, including Rochefort, Chimay, Westmalle, and Orval. The first lambic I remember was Mort Subite Gueuze, on tap at a very quirky place that remains on my mind as this near-cinematic experience.

At the end of my time in Switzerland, I took a trip to Prague and Vienna. Already at that time, I was interested enough in beer that a significant proportion of my efforts were about tasting different beers while I was there. I still remember a very tasty “Dopplemalz” beer from Vienna and, though I already preferred ales, several nice lagers from Prague.

A year after coming back to North America, I traveled to Scotland and England with a bunch of friends. Beer was an important part of the trip. Though I had no notion of what CAMRA was, I remember having some real ales in diverse places. Even some of the macro beers were different enough to merit our interest. For instance, we tried Fraoch then, probably before it became available in North America. We also visited a few distilleries which, though I didn’t know it at the time, were my first introduction to some beer brewing concepts.

Which brings me to homebrewing.

The first time I had homebrew was probably at my saxophone teacher’s place. He did a party for all of us and had brewed two batches. One was either a stout or a porter and the other one was probably some kind of blonde ale. What I remember of those beers is very vague (that was probably 19 years ago), but I know I enjoyed the stout and was impressed by the quality given the low price. From that point on, I knew I wanted to brew. Not really to cut costs (I wasn’t drinking much, anyway). But to try different beers. Or, at least, to easily get access to those beers which were more interesting than the macrobrewed ones.

I remember another occasion with a homebrewer, a few years later. I only tried a few sips of the beer but I remember that he was talking about the low price. Again, what made an impression on me wasn’t so much the price itself. But the low price for the quality.

At the same time, I had been thinking about all sorts of things which would later become my “hobbies.” I had never had hobbies in my life but I was thinking about homeroasting coffee, as a way to get really fresh coffee and explore diverse flavours. Thing is, I was already this hedonist I keep claiming I am. Tasting diverse things was already an important pleasure in my life.

So, homebrewing was on my mind because of the quality-price ratio and because it could allow me to explore diverse flavours.

When I moved to Bloomington, IN, I got to interact with some homebrewers. More specifically, I went to an amazing party thrown by an ethnomusicologist/homebrewer. The guy’s beer was really quite good. And it came from a full kegging system.

I started dreaming.

Brewpubs, beerpubs, and microbreweries were already part of my life. For instance, without being a true regular, I had been to Cheval blanc on a number of occasions. And my “go to” beer had been Unibroue, for a while.

At the time, I was moving back and forth between Quebec and Indiana. In Bloomington, I was enjoying beers from Upland’s Brewing Co., which had just opened, and Bloomington Brewing Co., which was distributed around the city. I was also into some other beers, including some macro imports like Newcastle Brown Ale. And, at liquor stores around the city (including Big Red), I was discovering a few American craft beers, though I didn’t know enough to really make my way through those. In fact, I remember asking for Unibroue to be distributed there, which eventually happened. And I’m pretty sure I didn’t try Three Floyds, at the time.

So I was giving craft beer some thought.

Then, in February 1999, I discovered Dieu du ciel. I may have gone there in late 1998, but the significant point was in February 1999. This is when I tried their first batch of “Spring Equinox” Maple Scotch Ale. This is the beer that turned me into a homebrewer. This is the beer that made me change my perspective about beer. At that point, I knew that I would eventually have to brew.

Which happened in July 1999, I think. My then-girlfriend had offered me a homebrewing starter kit as a birthday gift. (Or maybe she gave it to me for Christmas… But I think it was summer.) Can’t remember the extent to which I was talking about beer, at that point, but it was probably a fair bit, i.e., I was probably becoming annoying about it. And before getting the kit, I was probably daydreaming about brewing.

Even before getting the kit, I had started doing some reading. The aforementioned ethnomusicologist/homebrewer had sent me a Word file with a set of instructions and some information about equipment. It was actually much more elaborate than the starter kit I eventually got. So I kept wondering about all the issues and started getting some other pieces of equipment. In other words, I was already deep into it.

In fact, when I got my first brewing book, I also started reading feverishly, in a way I hadn’t done in years. Even before brewing the first batch, I was passionate about brewing.

Thanks to the ‘Net, I was rapidly amassing a lot of information about brewing. Including some recipes.

Unsurprisingly, the first beer I brewed was a maple beer, based on my memory of that Dieu du ciel beer. However, for some reason, that first beer was a maple porter, instead of a maple scotch ale. I brewed it with extract and steeped grain. I probably used a fresh pack of Coopers yeast. I don’t think I used fresh hops (the beer wasn’t supposed to be hop-forward). I do know I used maple syrup at the end of boil and maple sugar at priming.

It wasn’t an amazing beer, perhaps. But it was tasty enough. And it got me started. I did a few batches with extract and moved to all-grain almost right away. I remember some comments on my first maple porter, coming from some much more advanced brewers than I was. They couldn’t believe that it was an extract beer. I wasn’t evaluating my extract beer very highly. But I wasn’t ashamed of it either.

Those comments came from brewers who were hanging out on the Bièropholie website. After learning about brewing on my own, I had eventually found the site and had started interacting with some local Québécois homebrewers.

This was my first contact with “craft beer culture.” I had been in touch with fellow craft beer enthusiasts. But hanging out with Bièropholie people and going to social events they had organized was my first foray into something more of a social group with its associated “mode of operation.” It was a fascinating experience. As an ethnographer and social butterfly, this introduction to the social and cultural aspects of homebrewing was decisive. Because I was moving all the time, it was hard for me to stay connected with that group. But I made some ties there and I still bump into a few of the people I met through Bièropholie.

At the time I first started interacting with the Bièropholie gang, I was looking for a brewclub. Many online resources mentioned clubs and associations and they sounded exactly like the kind of thing I needed. Not only for practical reasons (it’s easier to learn techniques in such a context, getting feedback from knowledgeable people is essential, and tasting other people’s beers is an eye-opener), but also for social reasons. Homebrewing was never meant to be a solitary experience, for me.

I was too much of a social butterfly.

Which brings me back to childhood. As a kid, I was often ostracized. And I always tried to build clubs. It never really worked. Things got much better for me after age 15, and I had a rich social life by the time I became a young adult. But, in 2000-2001, I was still looking for a club to which I could belong. Unlike Groucho, I cared a lot about any club which would accept me.

As fun as it was, Bièropholie wasn’t an actual brewclub. Brewers posting on the site mostly met as a group during an annual event, a BBQ which became known as «Xè de mille» (“Nth of 1000”) in 2001. The 2000 edition (“0th of 1000”) was when I had my maple porter tasted by more advanced brewers. Part of the event was a bit like what brewclub meetings tend to be: tasting each other’s brews, providing feedback, discussing methods and ingredients, etc. But because people didn’t meet regularly as a group, because people were scattered all around Quebec, and because there wasn’t much in terms of “contribution to primary identity,” it didn’t feel like a brewclub, at least not of the type I was reading about.

The MontreAlers brewclub was formed at about that time. For some reason, it took me a while to learn of its existence. I distinctly remember looking for a Montreal-based club through diverse online resources, including the famed HomeBrew Digest. And I know I tried to contact someone from McGill who apparently had a club going. But I never found the ‘Alers.

I did eventually find the Members of Barleyment. Or, at least, some of the people who belonged to this “virtual brewclub.” It probably wasn’t until I moved to New Brunswick in 2003, but it was another turning point. One MoB member I met was Daniel Chisholm, a homebrewer near Fredericton, NB, who gave me insight on the New Brunswick beer scene (I was teaching in Fredericton at the time). Perhaps more importantly, Daniel also invited me to the Big Strange New Brunswick Brew (BSNBB), a brewing event like the ones I kept dreaming about. This was partly a Big Brew, an occasion for brewers to brew together at the same place. But it was also a very fun social event.

It’s through the BSNBB that I met MontreAlers Andrew Ludwig and John Misrahi. John is the instigator of the MontreAlers brewclub. Coming back to Montreal a few weeks after BSNBB, I was looking forward to attending my first meeting of the ‘Alers brewclub, in July 2003.

Which was another fascinating experience. Through it, I was able to observe different attitudes toward brewing. Misrahi, for instance, is a fellow experimental homebrewer, to the point that I took to calling him “MadMan Misrahi.” But a majority of ‘Alers are more directly on the “engineering” side of brewing. I also got to observe some interesting social dynamics among brewers, something which remained important as I moved to different places and got to observe other brewclubs and brewers’ meetings, such as the Chicago Beer Society’s Thirst Fursdays. Eventually, this all formed the backdrop for a set of informal observations which were the core of a presentation I gave about craft beer and cultural identity.

Through all of these brewing-related groups, I’ve been positioning myself as an experimenter.  My goal isn’t necessarily to consistently make quality beer, to emulate some beers I know, or to win prizes in style-based brewing competitions. My thing is to have fun and try new things. Consistent beer is available anywhere and I drink little enough that I can afford enough of it. But homebrewing is almost a way for me to connect with my childhood.

There can be a “mad scientist” effect to homebrewing. Michael Tonsmeire calls himself The Mad Fermentationist and James Spencer at Basic Brewing has been interviewing a number of homebrewers who do rather unusual experiments.

I count myself among the ranks of the “Mad Brewers.” Oh, we’re not doing anything completely crazy. But slightly mad we are.

Through the selective memory of an adult with regards to his childhood, I might say that I was “always like that.” As a kid, I wanted to be everything at once: mayor, astronaut, fireman, and scholar. The researcher’s spirit had me “always try new things.” I even had a slight illusion of grandeur in that I would picture myself accomplishing all sorts of strange things. Had I known about it as a kid, I would have believed that I could solve the Poincaré conjecture. Mathematicians were strange enough for me.

But there’s something more closely related to homebrewing which comes back to my mind as I do experiments with beer. I had this tendency to do all sorts of concoctions. Not only the magic potions kids make with mud and dishwashing liquid. But all sorts of potable drinks that a mixologist may experiment with. There wasn’t any alcohol in those drinks, but the principle was the same. Some of them were good enough for my tastes. But I never achieved the kind of breakthrough drink which would please the masses. I did, however, bring my experimentation spirit to bear on food.

By age nine, I was cooking for myself at lunch. Nothing very elaborate, maybe. It often consisted of reheating leftovers. But I got used to the stove (we didn’t have a microwave oven, at the time). And I sometimes cooked some eggs or similar things. To this day, eggs are still my default food.

And, like many children, I occasionally contributed to cooking. Simple things like mixing ingredients. But also tasting things at different stages in the cooking or baking process. Given the importance of sensory memory, I’d say the tasting part was probably more important in my development than the mixing. But the pride was mostly in being an active contributor in the kitchen.

Had I understood fermentation as a kid, I probably would have been fascinated by it. In a way, I wish I could have been involved in homebrewing at the time.

A homebrewery is an adult’s chemistry set.

Beer Eye for the Coffee Guy (or Gal)

The coffee world can learn from the beer world.

Judged twelve (12) espresso drinks as part of the Eastern Regional Canadian Barista Championship (UStream).

[Never watched Queer Eye. Thought the title would make sense, given both the “taste” and even gender dimensions.]

Had quite a bit of fun.

The experience was quite similar to the one I had last year. There were fewer competitors, this year. But I also think that there were more people in the audience, at least in the morning. One possible reason is that ads about the competition were much more visible this year than last (based on my own experience and on several comments made during the day). Also, I noticed a stronger sense of collegiality among competitors, as several of them have done different things together in the past year.

More specifically, people from Ottawa’s Bridgehead and people from Montreal’s Café Myriade have developed something which, at least from the outside, looks like camaraderie. At the Canadian National Barista Championship, last year, Myriade’s Anthony Benda won the “congeniality” prize. This year, Benda got first place in the ERCBC. Second place went to Bridgehead’s Cliff Hansen, and third place went to Myriade’s Alex Scott.

Bill Herne served as head judge for most of the event. He made it a very pleasant experience for me personally and, I hope, for other judges. His insight on the championship is especially valuable given the fact that he can maintain a certain distance from the specifics.

The event was organized in part by Vida Radovanovic, founder of the Canadian Coffee & Tea Show. Though she’s quick to point to differences between Toronto and Montreal, in terms of these regional competitions, she also seemed pleased with several aspects of this year’s ERCBC.

To me, the championship was mostly an opportunity for thinking and talking about the coffee world.

Met and interacted with diverse people during the day. Some of them were already part of my circle of coffee-loving friends and acquaintances. Some came to me to talk about coffee after noticing some sign of my connection to the championship. The fact that I was introduced to the audience as a blogger and homeroaster seems to have been relatively significant. And there were several people who were second-degree contacts in my coffee-related social network, making for easy introductions.

A tiny part of the day’s interactions was captured in interviews for CBC Montreal’s Daybreak (unfortunately, the recording is in RealAudio format).

“Coffee as a social phenomenon” was at the centre of several of my own interactions with diverse people. Clearly, some of it has to do with my own interests, especially with “Montreal’s coffee renaissance.” But there was also a clear interest in such things as the marketshare of quality coffee, the expansion of some coffee scenes, and the notion of building a sense of community through coffee. That last part is what motivated me to write this post.

After the event, a member of my coffee-centric social network started a discussion about community-building in the coffee world and I found myself dumping diverse ideas on him. Several of my ideas have to do with my experience with craft beer in North America. In a way, I’ve been doing informal ethnography of craft beer. Beer has become an area of expertise, for me, and I’d like to pursue more formal projects on it. So beer is on my mind when I think about coffee. And vice-versa. I was probably a coffee geek before I started homebrewing beer but I started brewing beer at home before I took my coffee-related activities to new levels.

So, in my reply on a coffee community, I was mostly thinking about beer-related communities.

Comparing coffee and beer is nothing new, for me. In fact, a colleague has blogged about some of my comments, both formal and informal, about some of those connections.

Differences between beer and coffee are significant. Some may appear trivial but they can all have some impact on the way we talk about cultural and social phenomena surrounding these beverages.

  • Coffee contains caffeine, beer contains alcohol. (Non-alcoholic beers, decaf coffee, and beer with coffee are interesting but they don’t dominate.) Yes: “duh.” But the difference is significant. Alcohol and caffeine not only have different effects but they fit in different parts of our lives.
  • Coffee is often part of a morning ritual,  frequently perceived as part of preparation for work. Beer is often perceived as a signal for leisure time, once you can “wind down.” Of course, there are people (including yours truly) who drink coffee at night and people (especially in Europe) who drink alcohol during a workday. But the differences in the “schedules” for beer and coffee have important consequences on the ways these drinks are integrated in social life.
  • Coffee tends to be much less expensive than beer. Someone’s coffee expenses may easily be much higher than her or his “beer budget,” but the cost of a single serving of coffee is usually significantly lower than a single serving of beer.
  • While it’s possible to drink a few coffees in a row, people usually don’t drink more than two coffees in a single sitting. With beer, it’s not rare that people would drink quite a few pints in the same night. The UK concept of a “session beer” goes well with this fact.
  • Brewing coffee takes a few minutes, brewing beer takes a while (hours for the brewing process, days or even weeks for fermentation).
  • At a “bar,” coffee is usually brewed in front of those who will drink it while beer has been prepared in advance.
  • Brewing coffee at home has been mainstream for quite a while. Beer homebrewing is considered a hobby.
  • Historically, coffee is a recent phenomenon. Beer is among the most ancient human-made beverages in the world.

Despite these significant differences, coffee and beer also have a lot in common. The fact that the term “brew” is used for beer and coffee (along with tea) may be a coincidence, but there are remarkable similarities between the extraction of diverse compounds from grain and from coffee beans. In terms of process, I would argue that beer and coffee are more similar than are, say, coffee and tea or beer and wine.

But the most important similarity, in my mind, is social: beer and coffee are, indeed, central to some communities. So are other drinks, but I’m more involved in groups having to do with coffee or beer than in those having to do with other beverages.

One way to put it, at least in my mind, is that coffee and beer are both connected to revolutions.

Coffee is community-oriented from the very start as coffee beans often come from farming communities and cooperatives. The notion, then, is that there are local communities which derive a significant portion of their income from the global and very unequal coffee trade. Community-oriented people often find coffee-growing to be a useful focus of attention and given the place of coffee in the global economy, it’s unsurprising to see a lot of interest in the concept (if not the detailed principles) of “fair trade” in relation to coffee. For several reasons (including the fact that they’re often produced in what Wallerstein would call “core” countries), the main ingredients in beer (malted barley and hops) don’t bring to mind the same conception of local communities. Still, coffee and beer are important to some local agricultural communities.

For several reasons, I’m much more directly involved with communities which have to do with the creation and consumption of beverages made with coffee beans or with grain.

In my private reply about building a community around coffee, I was mostly thinking about what can be done to bring attention to those who actually drink coffee. Thinking about the role of enthusiasts is an efficient way to think about the craft beer revolution and about geeks in general. After all, would the computer world be the same without the “homebrew computer club?”

My impression is that when coffee professionals think about community, they mostly think about creating better relationships within the coffee business. It may sound like a criticism, but it has more to do with the notion that the trade of coffee has been quite competitive. Building a community could be a very significant change. In a way, that might be a basis for the notion of a “Third Wave” in coffee.

So, using my beer homebrewer’s perspective: what about a community of coffee enthusiasts? Wouldn’t that help?

And I don’t mean “a website devoted to coffee enthusiasts.” There’s a lot of that, already. A lot of people on the Coffee Geek Forums are outsiders to the coffee industry and Home Barista is specifically geared toward the home enthusiasts’ market.

I’m really thinking about fostering a sense of community. In the beer world, this frequently happens in brewclubs or through the Beer Judge Certification Program, which is much stricter than barista championships. Could the same concepts apply to the coffee world? Probably not. But there may still be “lessons to be learnt” from the beer world.

In terms of craft beer in North America, there’s a consensus around the role of beer enthusiasts. A very significant number of craft brewers were homebrewers before “going pro.” One of the main reasons craft beer has become so important is because people wanted to drink it. Craft breweries often do rather well with very small advertising budgets because they attract something akin to cult followings. The practice of writing elaborate comments and reviews has had a significant impact on a good number of craft breweries. And some of the most creative things which happen in beer these days come from informal experiments carried out by homebrewers.

As funny as it may sound (or look), people get beer-related jobs because they really like beer.

The same happens with coffee. On occasion. An enthusiastic coffee lover will either start working at a café or, somewhat more likely, will “drop everything” and open her/his own café out of a passion for coffee. I know several people like this and I know the story is quite telling for many people. But it’s not the dominant narrative in the coffee world where “rags to riches” stories have less to do with a passion for coffee than with business acumen. Things may be changing, though, as coffee becomes more… passion-driven.

To be clear: I’m not saying that serious beer enthusiasts make the bulk of the market for craft beer or that coffee shop owners should cater to the most sophisticated coffee geeks out there. Beer and coffee are both too cheap to warrant this kind of a business strategy. But there’s a lot to be said about involving enthusiasts in the community.

For one thing, coffee and beer can both go viral rather quickly. Because most people in North America can afford beer or coffee, it’s often easy to convince a friend to grab a cup or pint. Coffee enthusiasts who bring friends to a café do more than sell a cup. They help build up a place. And because some people are into the habit of regularly going to the same bar or coffee shop, the effects can be lasting.

Beer enthusiasts often complain about the inadequate beer selection at bars and restaurants. To this day, there are places where I end up not drinking anything besides water after hearing what the beer list contains. In the coffee world, it seems that the main target these days is the restaurant business. The current state of affairs with coffee at restaurants is often discussed with heavy sighs of disappointment. What I’ve heard from several people in the coffee business is that, too frequently, restaurant owners give so little attention to coffee that they end up destroying the dining experience of anyone who orders coffee after a meal. Even in my own case, I’ve had enough bad experiences with restaurant coffee (including, or even especially, at higher-end places) that I’m usually reluctant to have coffee at a restaurant. It seems quite absurd, as a quality experience with coffee at the end of a meal can do a lot for a restaurant’s bottom line. But I can’t say that it’s my main concern because I end up having coffee elsewhere, anyway. While restaurants can be the object of a community’s attention and there’s a lot to be said about what restaurants do to a region or neighbourhood, the community dimensions of coffee have less to do with what is sold where than with what people do around coffee.

Which brings me to the issue of education. It’s clearly a focus in the coffee world. In fact, most coffee-related events have some “training” dimension. But this type of education isn’t community-oriented. It’s a service-based approach, such as the one which is increasingly common in academic institutions. While I dislike customer-based learning in universities, I do understand the need for training services in the coffee world. What I think insight from the beer world can do is complement these training services instead of replacing them.

An impressive set of learning experiences can be seen among homebrewers. From the most practical of “hands-on training” to some very conceptual/theoretical knowledge exchanges. And much of the learning which occurs is informal, seamless, “organic.” It’s possible to get very solid courses in beer and brewing, but the way most people learn is casual and free, because homebrewers are organized in relatively tight groups and because the sense of community among homebrewers is also a matter of solidarity. Or, more simply, because “it’s just a hobby anyway.”

The “education” theme also has to do with “educating the public” into getting more sophisticated about what to order. This does happen in the beer world, but can only be pulled off when people are already interested in knowing more about beer. In relation with the coffee industry, it sometimes seems that “coffee education” is imposed on people from the top-down. And it’s sometimes quite arbitrary. Again, room for the coffee business to read the Cluetrain Manifesto and to learn from communities.

And speaking of Starbucks… One draft blogpost which has been nagging me is about the perception that, somehow, Starbucks has had a positive impact in terms of coffee quality. One important point is that Starbucks took the place of an actual coffee community. Even if it can be proven that coffee quality wouldn’t have been improved in North America if it hadn’t been for Starbucks (a tall order, if you ask me), the issue remains that Starbucks has only paid attention to the real estate dimension of the concept of community. The mermaid corporation hasn’t been doing so well, recently, so we may finally get beyond the financial success story and get into the nitty-gritty of what makes people connect through coffee. The world needs more from coffee than chains selling coffee-flavoured milk.

One notion I wanted to write about is the importance of “national” traditions in both coffee and beer in relation to what is happening in North America, these days. Part of the situation is enough to make me very enthusiastic to be in North America, since it’s increasingly possible to not only get quality beer and coffee but there are many opportunities for brewing coffee and beer in new ways. But that’ll have to wait for another post.

In Western Europe at least, coffee is often associated with the home. The smell of coffee has often been described in novels and it can run deep in social life. There’s no reason homemade coffee can’t be the basis for a sense of community in North America.

Now, if people in the coffee industry would wake up and… think about actual human beings, for a change…

Women’s Presence and Geek Culture (Ada Lovelace Day) #ald09

My contribution for Ada Lovelace Day (#ald09): women, geek culture, and social media.

In 2009, International Women’s Day was shortchanged by an hour in those places that switched to daylight saving time on March 8. Yet, more than ever, it is women to whom we should be giving more room. This international day in honour of Ada Lovelace and of women in technological fields is an excellent opportunity to discuss the importance of women’s presence for social sustainability.

For a male feminist, talking about women’s condition can pose certain challenges. Who am I to speak about women? By what right could I appropriate a voice that should, in my view, be given to women? Aren’t my words tinged with bias? So it’s more as an observer of what I tend to call “geek culture” (or the “geek niche,” or the “geek crowd”) that I’m talking about this female presence.

At the risk of falling into the trap of stereotype, I would venture to say that an increased presence of women in geek circles can have interesting impacts, given certain roles assigned to women in various societies tied to geek culture. In other words, I’d like to celebrate feminine power, which is far more fundamental than masculine “strength.”

In this, I’m referring to notions about women and men that were revealed to me during my research on hunters’ brotherhoods in Mali. Though exclusively male in appearance, West African hunters’ brotherhoods give a central place to femininity. As the proverb says, “we are all in our mothers’ arms” (bèè y’i ba bolo). If the father, our first rival (i fa y’i faden folo de ye), can give us physical strength, it is the mother who gives us potency, true power.

Far be it from me to assign women a power that would come only from their capacity to give birth. It’s not only as a mother that a woman deserves respect. Quite the contrary: women’s diverse roles all deserve to be celebrated. What gives motherhood such importance, from a male point of view, is its universality: a man may have no sister, wife, or daughter, he may not even know the precise identity of his father, but he has at minimum had contact with his mother, from conception to birth.

It is often by reference to motherhood that men conceive their most unconditional respect for women. And the maternal image shouldn’t be neglected, even if it is often stereotyped. Even if the term “mothering” has pejorative connotations, it evokes care that is adapted and offered without a specific motive. Does geek culture need some mothering?

A recent study looked at the hormonal dimension of Wall Street traders’ activities, especially with regard to risk-taking. According to this study (described in a popular-science podcast), there would be a link between certain hormone levels and behaviour based on short-term profit. These hormones are mostly present in young men, who make up the majority of that professional group. If the study’s results are valid, a group of traders more diverse in terms of sex and age is likely to be more prudent than a group dominated by young men.

Despite enormous differences in the details, geek culture bears some resemblance to the makeup of Wall Street, at least from a hormonal point of view. While the lure of profit is less salient there than on the trading floor, geek culture gives a very large place to the meritocratic cult of competition and to the image of the brilliant, all-powerful individual. Risk-taking isn’t a very visible characteristic of geek culture, but the “troubleshooting” approach evokes hasty decisions rather than deep reflection. The role of fair and respectful dialogue, while not absent, is only rarely emphasized. Geek culture is “international,” in the sense that it finds a place in various spots around the globe (usually defined with some precision, in nerve centres like Silicon Valley). Yet it is far from representative of human diversity. The far-too-low proportion of women connected to geek culture is an important mark of this lack of diversity. A less homogeneous group would make the notion of cooperation more salient and, with it, a greater concern for human dignity. After all, true humanism is as philogynous as it is philanthropic.

A similar principle comes up in the context of medical care. Without being assigned to specific tasks associated with their sex, the presence of women physicians seems to improve certain aspects of medical work. There may be an implicit stereotype in all of this, and women in the medical sector probably aren’t treated much better than women in other sectors of activity. Still, beyond the stereotype, the association between femininity and the helping relationship seems to persist in the minds of members of certain societies, and it can be used to make medicine more “humane,” both through diversity and through that notion of reasoned empathy evoked by humanism.

I can’t help but think back to the remarkable experience, some years ago now, of taking part in an academic conference with a strong female presence. Besides a high proportion of women, this conference on food and culture gave pride of place to the image of the nurturing mother, to the fundamental influence of the domestic sphere on social life. Though male, I felt at ease there, and I keep from those few days the idea that an even slightly feminized world could have interesting effects, from a social point of view. A group that grants real respect to women’s condition can be associated with an atmosphere of care, a “nurturing” atmosphere.

The geek world can be very pleasant, on many levels, but the notion of care, empathy, or even humanism aren’t among its most obvious characteristics. A geek world that gave more importance to women’s presence might be more humane than an overall portrait of geek culture seems to suggest.

And isn’t that what has happened? The ’Net has partly feminized over the past ten years, and the emergence of social media is intimately linked to this “demographic” transformation.

Some speak of the “democratization” of the Internet, drawing on a lexicon associated with journalism and with the notion of the nation-state. Although the point is to talk about more uniform access to technological means, the source of this discourse lies in a specific vision of social structure. A holdover from the Industrial Revolution, perhaps? Since the ’Net is built beyond political borders, this worldview seems ill-suited to globalized communication. Besides, what do we really mean by the “democratization” of the Internet? The active participation of diverse people in the decision-making processes that continuously create the ’Net? The mere juxtaposition of people from distinct socio-economic backgrounds? The possibility for the majority of the planet to use certain tools in order to obtain the advantages to which it is entitled, by statistical prerogative? If so, it would fall to women, who form the majority of the globe, to decide the fate of the ’Net. Yet it is mostly men who dominate the ’Net. The control exercised by men seems indirect, but it is no less real.

This state of affairs is tending to change. Though still not dominant, women are more and more present online. Some statistical research even seems to assign them the majority in certain spheres of online activity. But my approach is holistic and qualitative rather than statistical and deterministic. It’s really the roles played by women that I’m thinking about. If some of these roles seem to descend in a straight line from the mid-twentieth-century stereotype of sexual inequality, it’s also by acknowledging the grip of the past that we can understand certain dimensions of our present. Things have changed, granted. Awareness of that change informs some of our actions. Few of us have completely set aside the notion that our “shared past” was patriarchal and misogynistic. And that notion keeps its significance in our daily gestures, since we compare ourselves to a specific model tied to domination and class struggle.

At the risk, once again, of appealing to stereotypes, I’d like to talk about a tendency I find fascinating in the behaviour of some women within social media. Women bloggers, for instance, have often managed to build communities of faithful readers, small groups of friends who share their lives in public. Instead of chasing the greatest number of visits, several women have based their blogosphere activities on relatively small but very active groups. In fact, some women’s blogs are the site of long, ongoing discussions, linking posts to one another and even extending beyond the blog itself.

On this subject, I base some of my ideas on a few studies of the blogging phenomenon, published several years ago already (and which it would be difficult for me to track down right now), and on observations within certain “geek scenes” such as Yulblog. At some events bringing together many women bloggers, some of them seemed to prefer staying in a small group for a large part of the event rather than multiplying new contacts. This isn’t a limitation: some women are better at setting off the “social butterfly effect” than most men. But there is a quiet strength in these small gatherings of women, who base their participation in the blogosphere on direct, strong contacts rather than on casting a wide net. Social change often happens through very small, tight-knit groups and, from quilting bees to women’s group blogs, there is an overlooked power there.

It would probably be going too far to say that women’s presence is what triggered the blossoming of social media over the past ten years. But women’s presence is tied to the fact that the ’Net was able to move beyond the “geek niche.” The domain of what some call “Web 2.0” (or the sixth Internet culture) may be no more democratic than the ’Net of the early 1990s. But it is clearly less exclusive and more welcoming.

As my better half read on the front of a tavern: «Bienvenue aux dames!» (“Ladies welcome!”)

Posts published in honour of Ada Lovelace Day were, it seems, supposed to focus on specific women working in technological fields. I preferred to “think out loud in writing” about a few things that were running through my head. It would be good form, however, for me to mention names and not confine this post to a purely macroscopic, impersonal observation. Being little inclined toward individualism, I prefer to cite several women rather than focus on just one. All the more so since the woman I think of most intensely says she wants to keep a certain discretion and, even though she has been blogging far longer than I have and knows her way around the tools in question very well, she claims not to be associated with technology.

So I decided to go with a simple enumeration (alphabetical, I don’t like rankings) of a few women whose work I appreciate and who have an easily identifiable Internet presence. Some of them are very close to me. Others hover above circles I’m connected to. Still others are discreet or strong presences in some domain I associate with geek culture and/or social media. Obviously, I’m forgetting tons of them. But it’s a start. Let’s keep up the fight! 😉

Social Networks and Microblogging

Event-based microblogging and the social dimensions of online social networks.

Microblogging (Laconica, Twitter, etc.) is still a hot topic. For instance, during the past few episodes of This Week in Tech, comments were made about the preponderance of Twitter as a discussion theme: microblogging is so prominent on that show that some people complain that there’s too much talk about Twitter. Given the centrality of Leo Laporte’s podcast in geek culture (among Anglos, at least), such comments are significant.

The context for the latest comments about TWiT coverage of Twitter had to do with Twitter’s financials: during this financial crisis, Twitter is given funding without even asking for it. While it may seem surprising at first, given the fact that Twitter hasn’t publicized a business plan and doesn’t appear to be profitable at this time, that kind of investor confidence says a lot about how much attention the service is getting.

Along with social networking, microblogging is even discussed in mainstream media. For instance, Médialogues (a media criticism program on Swiss national radio) recently had a segment about both Facebook and Twitter. Just yesterday, Comedy Central’s The Daily Show with Jon Stewart made fun of compulsive twittering and mainstream media coverage of Twitter (original, Canadian access).

Clearly, microblogging is getting some mindshare.

What the future holds for microblogging is clearly uncertain. Anything can happen. My guess is that microblogging will remain important for a while (at least a few years) but that it will transform itself rather radically. Chances are that other platforms will have microblogging features (something Facebook can do with status updates and something Automattic has been trying to do with some WordPress themes). In these troubled times, Montreal startup Identi.ca received some funding to continue developing its open microblogging platform.  Jaiku, bought by Google last year, is going open source, which may be good news for microblogging in general. Twitter itself might maintain its “marketshare” or other players may take over. There’s already a large number of third-party tools and services making use of Twitter, from Mahalo Answers to Remember the Milk, Twistory to TweetDeck.

Together, these all point to the current importance of microblogging and the potential for further development in that sphere. None of this means that microblogging is “The Next Big Thing.” But it’s reasonable to expect that microblogging will continue to grow in use.

(For those trying to grok microblogging, Common Craft’s Twitter in Plain English video is among the best-known descriptions of Twitter and seems like an efficient way to “get the idea.”)

One thing which is rarely mentioned about microblogging is the prominent social structure supporting it. Like “Social Networking Systems” (LinkedIn, Facebook, Ning, MySpace…), microblogging makes it possible for people to “connect” to one another (as contacts/acquaintances/friends). Like blogs, microblogging platforms make it possible to link to somebody else’s material and get notifications for some of these links (a bit like pings and trackbacks). Like blogrolls, microblogging systems allow for lists of “favourite authors.” Unlike Social Networking Systems but similar to blogrolls, microblogging allows for asymmetrical relations, unreciprocated links: if I like somebody’s microblogging updates, I can subscribe to those (by “following” that person) and publicly show my appreciation of that person’s work, regardless of whether or not this microblogger likes my own updates.

There’s something strangely powerful there because it taps the power of social networks while avoiding tricky issues of reciprocity, “confidentiality,” and “intimacy.”

From the end user’s perspective, microblogging contacts may be easier to establish than contacts through Facebook or Orkut. From a social science perspective, microblogging links seem to approximate some of the fluidity found in social networks, without adding much complexity in the description of the relationships. Subscribing to someone’s updates gives me the role of “follower” with regard to that person. Conversely, those I follow receive the role of “following” (“followee” would seem logical, given the common “-er”/”-ee” pattern). The following and follower roles are complementary but each is sufficient by itself as a useful social link.

Typically, a microblogging system like Twitter or Identi.ca qualifies two-way connections as “friendship” while one-way connections could be labelled as “fandom” (if Andrew follows Betty’s updates but Betty doesn’t follow Andrew’s, Andrew is perceived as one of Betty’s “fans”). Profiles on microblogging systems are relatively simple and public, allowing for low-involvement online “presence.” As long as updates are kept public, anybody can connect to anybody else without even needing an introduction. In fact, because microblogging systems send notifications to users when they get new followers (through email and/or SMS), subscribing to someone’s update is often akin to introducing yourself to that person. 

Reciprocating is the object of relatively intense social pressure. A microblogger whose follower:following ratio is far from 1:1 may be regarded as either a snob (follower:following much higher than 1:1) or as something of a microblogging failure (follower:following much lower than 1:1). As in any social context, perceived snobbery may be associated with sophistication but it also carries opprobrium. Perry Belcher made a video about what he calls “Twitter Snobs” and some French bloggers have elaborated on that concept. (Some are now claiming their right to be Twitter Snobs.) Low follower:following ratios can result from breach of etiquette (for instance, ostentatious self-promotion carried beyond the accepted limit) or even non-human status (many microblogging accounts are associated with “bots” producing automated content).
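
To make the asymmetry described above concrete, here is a minimal Python sketch of how follow links, the friend/fan distinction, and the follower:following ratio could be modelled. It is an illustration only, not the data model of Twitter, Identi.ca, or any other platform, and the ratio thresholds are arbitrary assumptions of mine.

    class Microblogger:
        def __init__(self, handle):
            self.handle = handle
            self.following = set()   # handles this user subscribes to
            self.followers = set()   # handles subscribed to this user

        def follow(self, other):
            """One-way link: no approval or reciprocation required."""
            self.following.add(other.handle)
            other.followers.add(self.handle)

    def relationship(a, b):
        """Label the link from a to b the way the post describes it."""
        if b.handle in a.following and a.handle in b.following:
            return "friends"     # two-way connection
        if b.handle in a.following:
            return "fan"         # a follows b, b does not follow back
        return "no link"

    def perceived_status(user):
        """Caricature of the ratio judgment; the thresholds are arbitrary."""
        if not user.following:
            return "undefined"
        ratio = len(user.followers) / len(user.following)
        if ratio > 10:
            return "possible snob"
        if ratio < 0.1:
            return "possible failure (or bot)"
        return "unremarkable"

    andrew, betty = Microblogger("andrew"), Microblogger("betty")
    andrew.follow(betty)
    print(relationship(andrew, betty))   # -> fan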

The result of the pressure for reciprocation is that contacts are reciprocated regardless of personal relations.  Some users even set up ways to automatically follow everyone who follows them. Despite being tricky, these methods escape the personal connection issue. Contrary to Social Networking Systems (and despite the term “friend” used for reciprocated contacts), following someone on a microblogging service implies little in terms of friendship.

One reason I personally find this fascinating is that specifying personal connections has been an important part of the development of social networks online. For instance, long-defunct SixDegrees.com (one of the earliest Social Networking Systems to appear online) required users to specify the precise nature of their relationship to the users with whom they were connected. Details escape me but I distinctly remember that acquaintances, colleagues, and friends were distinguished. If I remember correctly, only one such personal connection was allowed for any pair of users and this connection had to be confirmed before the two users were linked through the system. Facebook’s method to account for personal connections is somewhat more sophisticated despite the fact that all contacts are labelled as “friends” regardless of the nature of the connection. The uniform use of the term “friend” has been decried by many public commentators of Facebook (including in the United States where “friend” is often applied to any person with whom one is simply on friendly terms).

In this context, the flexibility with which microblogging contacts are made merits consideration: by allowing unidirectional contacts, microblogging platforms may have solved a tricky social network problem. And while the strength of the connection between two microbloggers is left unacknowledged, there are several methods to assess it (for instance through replies and republished updates).

Social contacts are the very basis of social media. In this case, microblogging represents a step towards both simplified and complexified social contacts.

Which leads me to the theme which prompted me to start this blogpost: event-based microblogging.

I posted the following blog entry (in French) about event-based microblogging, back in November.

Microblogue d’événement

I haven’t received any direct feedback on it and the topic seems to have had little echo in the social media sphere.

During the last PodMtl meeting on February 18, I tried to throw my event-based microblogging idea in the ring. This generated a rather lengthy discussion between a friend and myself. (Because I don’t want to put words in this friend’s mouth, who happens to be relatively high-profile, I won’t mention this friend’s name.) This friend voiced several objections to my main idea and I got to think about this basic notion a bit further. At the risk of sounding exceedingly opinionated, I must say that my friend’s objections actually comforted me in the notion that my “event microblog” idea makes a lot of sense.

The basic idea is quite simple: microblogging instances tied to specific events. There are technical issues in terms of hosting and such but I’m mostly thinking about associating microblogs and events.

What I had in mind during the PodMtl discussion has to do with grouping features, which are often requested by Twitter users (including by Perry Belcher who called out Twitter Snobs). And while I do insist on events as a basis for those instances (like groups), some of the same logic applies to specific interests. However, given the time-sensitivity of microblogging, I still think that events are more significant in this context than interests, however defined.

In the PodMtl discussion, I frequently referred to BarCamp-like events (in part because my friend and interlocutor had participated in a number of such events). The same concept applies to any event, including one which is just unfolding (say, the assassination of Guinea-Bissau’s president or the bombings in Mumbai).

Microblogging users are expected to think about “hashtags,” those textual labels preceded with the ‘#’ symbol which are meant to categorize microblogging updates. But hashtags are problematic on several levels.

  • They require preliminary agreement among multiple microbloggers, a tricky proposition in any social media. “Let’s use #Bissau09. Everybody agrees with that?” It can get ugly and, even if it doesn’t, the process is awkward (especially for new users).
  • Even if agreement has been reached, there might be discrepancies in the way hashtags are typed. “Was it #TwestivalMtl or #TwestivalMontreal, I forgot.”
  • In terms of language economy, it’s unsurprising that the same hashtag would be used for different things. Is “#pcmtl” about Podcamp Montreal, about personal computers in Montreal, about PCM Transcoding Library…?
  • Hashtags are frequently misunderstood by many microbloggers. Just this week, a tweep of mine (a “peep” on Twitter) asked about them after having been on Twitter for months.
  • While there are multiple ways to track hashtags (including through SMS, in some regions), there is no way to further specify the tracked updates (for instance, by user).
  • The distinction between a hashtag and a keyword is too subtle to be really useful. Twitter Search, for instance, lumps the two together.
  • Hashtags take time to type. Even if microbloggers aren’t necessarily typing frantically, the time taken to type all those hashtags seems counterproductive and may even distract microbloggers.
  • Repetitively typing the same string is a very specific kind of task which seems to go against the microblogging ethos, if not the cognitive processes associated with microblogging.
  • The number of characters in a hashtag decreases the amount of text in every update. When all you have is 140 characters at a time, the thirteen characters in “#TwestivalMtl” constitute almost 10% of your update. (A quick worked example follows this list.)
  • If the same hashtag is used by a large number of people, the visual effect can be that this hashtag is actually dominating the microblogging stream. Since there currently isn’t a way to ignore updates containing a certain hashtag, this effect may even discourage people from using a microblogging service.
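
As a quick check of that character-budget point, here is the arithmetic in a few lines of Python. The 140-character limit and the “#TwestivalMtl” example come straight from the list above; nothing else is assumed.

    LIMIT = 140
    hashtag = "#TwestivalMtl"

    share = len(hashtag) / LIMIT
    print(len(hashtag))                     # 13 characters
    print(f"{share:.1%} of a full update")  # 9.3%, i.e. "almost 10%"
    print(LIMIT - len(hashtag) - 1, "characters left once the tag and a space are added")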

There are multiple solutions to these issues, of course. Some of them are surely discussed among developers of microblogging systems. And my notion of event-specific microblogs isn’t geared toward solving these issues. But I do think separate instances make more sense than hashtags, especially in terms of specific events.

My friend’s objections to my event microblogging idea had something to do with visibility. It seems that this friend wants all updates to be visible, regardless of the context. While I don’t disagree with this, I would claim that it would still be useful to “opt out” of certain discussions when people we follow are involved. If I know that Sean is participating in a PHP conference and that most of his updates will be about PHP for a period of time, I would enjoy the possibility of hiding PHP-related updates for a specific period of time. The reason I talk about this specific case is simple: a friend of mine expressed some frustration about the large number of updates made by participants in Podcamp Montreal (myself included). Partly in reaction to this, he stopped following me on Twitter and only resumed following me after Podcamp Montreal had ended. In this case, my friend could have hidden Podcamp Montreal updates and still have received other updates from the same microbloggers.

To a certain extent, event-specific instances are a bit similar to “rooms” in MMORPGs and other forms of real-time many-to-many text-based communication, such as the nostalgia-inducing Internet Relay Chat. Despite Dave Winer’s strong claim to the contrary (and attempt at defining microblogging away from IRC), a microblogging instance could, in fact, act as a de facto chatroom when such a structure is needed, taking advantage of the work done in microblogging over the past year (which seems to have advanced more rapidly than work on chatrooms has during the past fifteen years). Instead of setting up an IRC channel, a Web-based chatroom, or even a session on MSN Messenger, users could use their microblogging platform of choice and either decide to follow all updates related to a given event or simply not “opt out” of following those updates (depending on their preferences). Updates related to multiple events would be visible simultaneously (which isn’t really the case with IRC or chatrooms) and there could be ways to make event-specific updates more prominent. In fact, there would be easy ways to keep real-time statistics on those updates and get a bird’s eye view of those conversations.
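
To make the opt-out behaviour described in the last two paragraphs concrete, here is a toy Python sketch in which each update carries an optional event label and a reader can mute one event without unfollowing the people posting to it. Every name and structure in it is hypothetical; it is not how Twitter, Identi.ca, or any other service actually works.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Update:
        author: str
        text: str
        event: Optional[str] = None   # set when the update is posted to an event instance

    @dataclass
    class Reader:
        following: set = field(default_factory=set)
        muted_events: set = field(default_factory=set)

        def visible(self, updates):
            """Keep updates from followed authors, minus any muted event streams."""
            return [u for u in updates
                    if u.author in self.following
                    and u.event not in self.muted_events]

    stream = [
        Update("sean", "Great PHP session!", event="php-conf"),
        Update("sean", "Lunch, anyone?"),
    ]
    me = Reader(following={"sean"}, muted_events={"php-conf"})
    for u in me.visible(stream):
        print(u.author, "-", u.text)   # only the non-event update appears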

And there’s a point about event-specific microblogging which is likely to both displease “alpha geeks” and convince corporate users: updates about some events could be “protected” in the sense that they would not appear in the public stream in realtime. The simplest case for this could be a company-wide meeting during which backchannel is allowed and even expected “within the walls” of the event. The “nothing should leave this room” attitude seems contradictory to social media in general, but many cases can be made for “confidential microblogging.” Microblogged conversations can easily be archived and these archives could be made public at a later date. Event-specific microblogging allows for some control of the “permeability” of the boundaries surrounding the event. “But why would people use microblogging instead of simply talking to one another?” you ask. Several quick answers: participants aren’t in the same room, vocal communication is mostly single-channel, large groups of people are unlikely to communicate efficiently through oral means only, several things are more efficiently done through writing, written updates are easier to track and archive…

There are many other things I’d like to say about event-based microblogging but this post is already long. There’s one thing I want to explain, which connects back to the social network dimension of microblogging.

Events can be simplistically conceived as social contexts which bring people together. (Yes, duh!) Participants in a given event constitute a “community of experience” regardless of the personal connections between them. They may be strangers, enemies, relatives, acquaintances, friends, etc. But they all share something. “Participation,” in this case, can be relatively passive and the difference between key participants (say, volunteers and lecturers in a conference) and attendees is relatively moot, at a certain level of analysis. The key, here, is the set of connections between people at the event.

These connections are a very powerful component of social networks. We typically meet people through “events,” albeit informal ones. Some events are explicitly meant to connect people who have something in common. In some circles, “networking” refers to something like this. The temporal dimension of social connections is an important one. By analogy to philosophy of language, the “first meeting” (and the set of “first impressions”) constitute the “baptism” of the personal (or social) connection. In social media especially, the nature of social connections tends to be monovalent enough that this “baptism event” gains special significance.

The online construction of social networks relies on a finite number of dimensions, including personal characteristics described in a profile, indirect connections (FOAF), shared interests, textual content, geographical location, and participation in certain activities. Depending on a variety of personal factors, people may be quite inclusive or rather exclusive, based on those dimensions. “I follow back everyone who lives in Austin” or “Only people I have met in person can belong to my inner circle.” The sophistication with which online personal connections are negotiated, along such dimensions, is a thing of beauty. In view of this sophistication, tools used in social media seem relatively crude and underdeveloped.

Going back to the (un)conference concept, the usefulness of having access to a list of all participants in a given event seems quite obvious. In an open event like BarCamp, it could greatly facilitate the event’s logistics. In a closed event with paid access, it could be linked to registration (despite geek resistance, closed events serve a purpose; one could even imagine events where attendance is free but the microblogging backchannel incurs a cost). In some events, everybody would be visible to everybody else. In others, there could be a sort of ACL for diverse types of participants. In some cases, people could be allowed to “lurk” without being seen while in others radical transparency could be enforced. For public events with all participants visible, lists of participants could be archived and used for several purposes (such as assessing which sessions in a conference are more popular or “tracking” event regulars).
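
As a rough illustration of those visibility options, here is a small Python sketch of an event roster with public, members-only, and lurker settings. The levels and their names are assumptions made up for the example, not a description of any existing platform’s access-control lists.

    from enum import Enum

    class Visibility(Enum):
        PUBLIC = "public"      # listed for everyone
        MEMBERS = "members"    # listed only for other registered participants
        LURKER = "lurker"      # attends without ever appearing on the list

    class EventInstance:
        def __init__(self, name):
            self.name = name
            self.roster = {}   # handle -> Visibility

        def register(self, handle, visibility=Visibility.PUBLIC):
            self.roster[handle] = visibility

        def participant_list(self, viewer=None):
            """Return the handles a given viewer is allowed to see."""
            visible = []
            for handle, vis in self.roster.items():
                if vis is Visibility.PUBLIC:
                    visible.append(handle)
                elif vis is Visibility.MEMBERS and viewer in self.roster:
                    visible.append(handle)
                # LURKER entries are never listed
            return sorted(visible)

    barcamp = EventInstance("barcamp-example")
    barcamp.register("alice")
    barcamp.register("bob", Visibility.MEMBERS)
    barcamp.register("carol", Visibility.LURKER)
    print(barcamp.participant_list())           # ['alice'] for outsiders
    print(barcamp.participant_list("alice"))    # ['alice', 'bob'] for fellow participants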

One reason I keep thinking about event-specific microblogging is that I occasionally use microblogging like others use business cards. In a geek crowd, I may ask for someone’s Twitter username in order to establish a connection with that person. Typically, I will start following that person on Twitter and find opportunities to communicate with that person later on. Given the possibility for one-way relationships, it establishes a social connection without requiring personal involvement. In fact, that person may easily ignore me without the danger of a face threat.

If there were event-specific instances from microblogging platforms, we could manage connections and profiles in a more sophisticated way. For instance, someone could use a barebones profile for contacts made during an impersonal event and a full-fledged profile for contacts made during a more “intimate” event. After noticing a friend using an event-specific business card with an event-specific email address, I got to think that this event microblogging idea might serve as a way to fill a social need.

 

More than most of my other blogposts, I expect comments on this one. Objections are obviously welcomed, especially if they’re made thoughtfully (like my PodMtl friend made them). Suggestions would be especially useful. Or even questions about diverse points that I haven’t addressed (several of which I can already think about).

So…

 

What do you think of this idea of event-based microblogging? Would you use a microblogging instance linked to an event, say at an unconference? Can you think of fun features an event-based microblogging instance could have? If you think about similar ideas you’ve seen proposed online, care to share some links?

 

Thanks in advance!

Transparency and Secrecy

Musings on transparency and secrecy, related to both my professional reorientation and my personal life.

[Started working on this post on December 1st, based on something which happened a few days prior. Since then, several things happened which also connected to this post. Thought the timing was right to revisit the entry and finally publish it. Especially since a friend just teased me for not blogging in a while.]

I’m such a strong advocate of transparency that I have a real problem with secrecy.

I know, transparency is not exactly the mirror opposite of secrecy. But I think my transparency-radical perspective causes some problem in terms of secrecy-management.

“Haven’t you been working with a secret society in Mali?,” you ask. Well, yes, I have. And secrecy hasn’t been a problem in that context because it’s codified. Instead of a notion of “absolute secrecy,” the Malian donsow I’ve been working with have a subtle, nuanced, complex, layered, contextually realistic, elaborate, and fascinating perspective on how knowledge is processed, “transmitted,” managed. In fact, my dissertation research had a lot to do with this form of knowledge management. The term “knowledge people” (“karamoko,” from kalan+mogo=learning+people) truly applies to members of hunter’s associations in Mali as well as to other local experts. These people make a clear difference between knowledge and information. And I can readily relate to their approach. Maybe I’ve “gone native,” but it’s more likely that I was already in that mode before I ever went to Mali (almost 11 years ago).

Of course, a high value for transparency is a hallmark of academia. The notion that “information wants to be free” makes more sense from an academic perspective than from one focused on a currency-based economy. Even when people are clear that “free” stands for “freedom”/«libre» and not for “gratis”/«gratuit» (i.e. “free as in speech, not free as in beer”), there persists a notion that “free comes at a cost” among those people who are so focused on growth and profit. IMHO, most of the issues with the switch to “immaterial economies” (“information economy,” “attention economy,” “digital economy”) have to do with this clash between the value of knowledge and a strict sense of “property value.”

But I digress.

Or, do I…?

The phrase “radical transparency” has been used in business circles related to “information and communication technology,” a context in which the “information wants to be free” stance is almost the basis of a movement.

I’m probably more naïve than most people I have met in Mali. While there, a friend told me that he thought that people from the United States were naïve. While he wasn’t referring to me, I can easily acknowledge that the naïveté he described is probably characteristic of my own attitude. I’m North American enough to accept this.

My dedication to transparency was tested by an apparently banal set of circumstances, a few days before I drafted this post. I was given, in public, information which could potentially be harmful if revealed to a certain person. The harm which could be done is relatively small. The person who gave me that information wasn’t overstating it. The effects of my sharing this information wouldn’t be tragic. But I was torn between my radical transparency stance and my desire to do as little harm as humanly possible. So I refrained from sharing this information and decided to write this post instead.

And this post has been sitting in my “draft box” for a while. I wrote a good number of entries in the meantime but I still had this one at the back of my mind. On the backburner. This is where social media becomes something more of a way of life than an activity. Even when I don’t do anything on this blog, I think about it quite a bit.

As mentioned in the preamble, a number of things have happened since I drafted this post which also relate to transparency and secrecy. Including both professional and personal occurrences. Some of these comfort me in my radical transparency position while others help me manage secrecy in a thoughtful way.

On the professional front, first. I’ve recently signed a freelance ethnography contract with Toronto-based consultancy firm Idea Couture. The contract included a non-disclosure agreement (NDA). Even before signing the contract/NDA, I was asking fellow ethnographer and blogger Morgan Gerard about disclosure. Thanks to him, I now know that I can already disclose several things about this contract and that, once the results are public, I’ll be able to talk about this freely. Which all comforts me on a very deep level. This is precisely the kind of information and knowledge management I can relate to. The level of secrecy is easily understandable (inopportune disclosure could be detrimental to the client). My commitment to transparency is unwavering. If all contracts are like this, I’ll be quite happy to be a freelance ethnographer. It may not be my only job (I already know that I’ll be teaching online, again). But it already fits in my personal approach to information, knowledge, insight.

I’ll surely blog about private-sector ethnography. At this point, I’ve mostly been preparing through reading material in the field and discussing things with friends or colleagues. I was probably even more careful than I needed to be, but I was still able to exchange ideas about market research ethnography with people in diverse fields. I sincerely think that these exchanges not only add value to my current work for Idea Couture but position me quite well for the future. I really am preparing for freelance ethnography. I’m already thinking like a freelance ethnographer.

There’s a surprising degree of “cohesiveness” in my life, these days. Or, at least, I perceive my life as “making sense.”

And different things have made me say that 2009 would be my year. I get additional evidence of this on a regular basis.

Which brings me to personal issues, still about transparency and secrecy.

Something has happened in my personal life, recently, that I’m currently unable to share. It’s a happy circumstance and I’ll be sharing it later, but it’s semi-secret for now.

Thing is, though, transparency was involved in that my dedication to radical transparency has already been paying off in these personal respects. More specifically, my being transparent has been valued rather highly and there’s something about this type of validation which touches me deeply.

As can probably be noticed, I’m also becoming more public about some emotional dimensions of my life. As an artist and a humanist, I’ve always been a sensitive person, in tune with his emotions. Especially positive ones. I now feel accepted as a sensitive person, even if several people in my life tend to push sensitivity to the side. In other words, I’ve grown a lot in the past several months and I now want to share my growth with others. Despite reluctance toward the “touchy-feely,” especially in geek and other male-centric circles, I’ve decided to “let it all loose.” I fully respect those who dislike this. But I need to be myself.

Back in Mac: Low End Edition

I’m happy to go “back in Mac,” even on a low end machine.

Today, I’m buying an old Mac mini G4 1.25GHz. Yes, a low end computer from 2005. It’ll be great to be back in Mac after spending most of my computer life on XP for three years.

This mini is slower than my XP desktop (emachines H3070). But that doesn’t really matter for what I want to do.

There’s something to be said about computers being “fast enough.” Gamers and engineers may not grok this concept, since they always want more. But there’s a point at which computers don’t really need to be faster, for some categories of uses.

Car analogies are often made in computer discussions, and this case seems fairly obvious. Some cars are still designed to “push the envelope,” in terms of performance. Yet most cars, including some relatively inexpensive ones, are already fast enough to run on highways beyond the speed limits in North America. Even in Europe, most drivers don’t tend to push their cars to the limit. Something vaguely similar happens with computers, though there are major differences. For instance, the difference in cost between fast driving and normal driving is a factor with cars while it isn’t so much of a factor with computers. With computers, the needs for cooling and for battery power (on laptops) do matter but, even if they were completely solved, there’s a limit to the power needed for casual computer use.

This isn’t contradicting Moore’s Law directly. Chips do increase exponentially in speed-to-cost ratio. But the effects aren’t felt the same way through all uses of computers, especially if we think about casual use of desktop and laptop “personal computers.” Computer chips in other devices (from handheld devices to cars or DVD players) benefit from Moore’s Law, but these are not what we usually mean by “computer,” in daily use.
The common way to put it is something like “you don’t need a fast machine to do email and word processing.”

The main reason I needed a Mac is that I’ll be using iMovie to do simple video editing. Video editing does push the limits of a slow computer and I’ll notice those limits very readily. But it’ll still work, and that’s quite interesting to think about, in terms of the history of personal computing. A Mac mini G4 is a slug, in comparison with even the current Mac mini Core 2 Duo. But it’s fast enough for even some tasks which, in historical terms, have been processor-intensive.

None of this is meant to say that the “need for speed” among computer users is completely manufactured. As computers become more powerful, some applications of computing technologies which were nearly impossible at slower speeds become easy to do. In fact, there certainly are things we can’t even imagine yet which will become easy to do in the future, thanks to improvements in computer chip performance. Those who play processor-intensive games always want faster machines and they certainly feel the “need for speed.” But, it seems to me, the quest for raw speed isn’t the core of personal computing, anymore.

This all reminds me of the Material Culture course I was teaching in the Fall: the Social Construction of Technology, Actor-Network Theory, the Social Shaping of Technology, etc.

So, a low end computer makes sense.

While iMovie is the main reason I decided to get a Mac at this point, I’ve been longing for Macs for three years. There were times during which I was able to use somebody else’s Mac for extended periods of time but this Mac mini G4 will be the first Mac to which I’ll have full-time access since late 2005, when my iBook G3 died.

As before, I’m happy to be “back in Mac.” I could handle life on XP, but it never felt that comfortable and I haven’t been able to adapt my workflow to the way the Windows world works. I could (and probably should) have worked on Linux, but I’m not sure it would have made my life complete either.

Some things I’m happy to go back to:

  • OmniOutliner
  • GarageBand
  • Keynote
  • Quicksilver
  • Nisus Thesaurus
  • Dictionary
  • Preview
  • Terminal
  • TextEdit
  • BibDesk
  • iCal
  • Address Book
  • Mail
  • TAMS Analyzer
  • iChat

Now I need to install some RAM in this puppy.

Blogging and Literary Standards

Comment on literary quality and blogging, in response to a conversation between novelist Rick Moody and podcasting pioneer Chris Lydon.

I wrote the following comment in response to a conversation between novelist Rick Moody and podcasting pioneer Chris Lydon:

Open Source » Blog Archive » In the Obama Moment: Rick Moody.

In keeping with the RERO principle I describe in that comment, the version on the Open Source site is quite raw. As is my habit, these days, I pushed the “submit” button without rereading what I had written. This version is edited, partly because I noticed some glaring mistakes and partly because I wanted to add some links. (Blog comments are often tagged for moderation if they contain too many links.) As I started editing that comment, I changed a few things, some of which have consequences for the meaning of my comment. There’s this process, in both writing and editing, which “generates new thoughts.” Yet another argument for the RERO principle.

I can already think of an addendum to this post, revolving around my personal position on writing styles (informed by my own blogwriting experience) along with my relative lack of sensitivity to Anglo writing. But I’m still blogging this comment on a standalone basis.

Read on, please…

I Am Not a Guru

“Nor do I play one online!”

The “I am not a…” phrase is often used as a disclaimer when one is giving advice. Especially in online contexts having to do with law, in which case the IANAL acronym can be used, and understood.
I’m not writing this to give advice. (Even though I could!) I’ve simply been thinking about social media a fair deal, recently, and thought I’d share a few thoughts.

I’ve been on the record as saying that I have a hard time selling my expertise. It’s not through lack of self-confidence (though I did have problems with this in the past), nor is it that my expertise is difficult to sell. It’s simply a matter of seeing myself as a friendly humanist, not as a brand to sell. To a certain extent, this post is an extension of the same line of thinking.

I’m also going back to my post about “the ‘social’ in ‘social media/marketing/web'” as I tend to position myself as an ethnographer and social scientist (I teach anthropology, sociology, and folkloristics). Simply put, I do participant-observation in social media spheres. I haven’t done formal research on the subject, nor have I taught in that field. But I did gain some insight in terms of what social media entails.

Again, I’m no guru. I’m just a social geek.

The direct prompt for this blogpost is a friend’s message in which he asked me for advice on the use of social media to market his creative work. Not that he framed his question in precisely those terms but the basic idea was there.

As he’s a friend, I answered him candidly, not trying to sell my social media expertise to him. But, after sending that message, I got to think about the fact that I’m not selling my social media expertise to anyone.

One reason is that I’m no salesman. Not only do I perceive myself as “too frank to be a salesman” (more on the assumptions later), but I simply do not have the skills to sell anything. Some people are so good at sales pitches that they could create needs where there are none (the joke about refrigerators and “Eskimos” is too much of an ethnic slur to be appropriate). I’ve been on the record saying that “I couldn’t sell bread for a penny” (to a rich yet starving person).

None of this means that I haven’t had any influence on any purchasing pattern. In fact, that long thread in which I confessed my lack of salesman skills was the impulse (direct or indirect) behind the purchase of a significant number of stovetop coffee devices and this “influence” has been addressed explicitly. It’s just that my influence tends to be more subtle, more “diffuse.” Influence based on participation in diverse groups. It’s one reason I keep talking about the “social butterfly effect.”

Coming back to social media and social marketing.

First, some working definitions. By “social media” I usually mean blogs, podcasts, social networking systems, and microblogs. My usage also involves any participatory use of the Internet and any alternative to “mainstream media” (MSM) which makes use of online contacts between human beings. “Social marketing” is, to me, the use of social media to market and sell a variety of things online, including “people as brands.” This notion connects directly to a specific meaning of “social capital” which, come to think of it, probably has more to do with Putnam than Bourdieu (PDF version of an article about both versions).

Other people, I admit, probably have much better ways to define those concepts. But those definitions are appropriate in the present context. I mostly wanted to talk about gurus.

Social Guru

I notice guru-like behaviour in the social media/marketing sphere.

I’m not targeting individuals, though the behaviour is adopted by specific people. Not everyone is acting as a “social media guru” or “social marketing guru.” The guru-like behaviour is in fact quite specific and not as common as some would think.

Neither am I saying that guru-like behaviour is inappropriate. I’m not blaming anyone for acting like a guru. I’m mostly distancing myself from that behaviour. Trying to show that it’s one model for behaviour in the social media/marketing sphere.

It should go without saying: I’m not using the term “guru” in the literal sense it might have in South Asia. That kind of guru I might not distance myself from as quickly. Especially if we think about “teachers as personal trainers.” But I’m using “guru” in reference to an Anglo-American phenomenon having to do with expertise and prestige.

Guru-like behaviour, as noticed in the social media/marketing sphere, has to do with “portraying oneself as an expert holding a secret key which can open the doors to instant success.” Self-assurance is involved, of course. But there’s also a degree of mystification. And though this isn’t a rant against people who adopt this kind of behaviour, I must admit that I have negative reactions to any kind of mystification.

There’s a difference between mystery and mystification. Something that is mysterious is difficult to explain “by its very nature.” Mystification involves withholding information to prevent knowledge. As an academic, I have been trained to fight obscurantism of any kind. Mystification seems counterproductive. “Information Wants to be Free.”

This is not to say that I dislike ambiguity, double-entendres, or even secrets. In fact, I’m often using ambiguity in playful manner and, working with a freemasonry-like secret association, I do understand the value of the most restrictive knowledge management practices. But I find limited value in restricting information when knowledge can be beneficial to everyone. As in Eco’s The Name of the Rose, subversive ideas find their way out of attempts to hide them.

Another aspect of guru-like behaviour which tends to bother me is that I can’t help but find it empty. As some would say, “there needs to be a ‘there’ there.” With social media/marketing, the behaviour I’m alluding to seems to imply that there is, in fact, some “secret key to open all doors.” Yet, as I scratch beneath the surface, I find something hollow. (The image I have in mind is that of a chocolate Easter egg. But any kind of trompe-l’œil would work.)

Obviously, I’m not saying that there’s “nothing to” social media/marketing. Those who dismiss social media and/or social marketing sound to me like curmudgeons or naysayers. “There’s nothing new, here. It’s just the same thing as what it always was. Buy my book to read all about what nonsense this all is.” (A bit self-serving, don’t you think?)

And I’m not saying that I know what there is in social media and marketing which is worth using. That would not only be quite presumptuous but it would also represent social media and marketing in a more simplified manner than I feel they deserve.

I’m just saying that caution should be used with people who claim they know everything there is to know about social media and social marketing. In other words, “be careful when someone promises to make you succeed through the Internet.” Sounds obvious, but some people still fall prey to grandiose claims.

Having said this, I’ll keep on posting some of my thoughts about social media and social marketing. I might be way off, so “don’t quote me on this.” (You can actually quote me but don’t give my ideas too much credit.)

Café à la montréalaise

Montreal is on its way to (re)becoming a coffee destination. Better yet, the “Montreal coffee Renaissance” is likely to have beneficial consequences for the whole culinary scene of the Quebec metropolis.

This thesis may seem personal and I don’t intend to propose it dogmatically. But by mingling with the Montreal coffee scene, I have accumulated a number of impressions I would be happy to share. There is even some “magical thinking” in all of this, in the sense that it seems easier to rebuild the Montreal coffee scene if we have a fairly accurate idea of what constitutes Montreal’s specificity.


Microblogue d’événement

Edited version of a message I just sent to my friend Martin Lessard.

The direct context is a discussion we had about my use of Twitter, the main microblogging platform. During any given event (conference, meeting, etc.), I use Twitter to blog in real time, to liveblog.

Unlike some people, I think the use of microblogging can be adapted to each user’s needs. In fact, that is an aspect of technology I find admirable: the possibility of using tools for purposes other than those for which they were designed. That is where technology in the proper sense goes beyond the tool. In my material culture course, I call this “unintended uses,” a very simple concept with many implications for the social ties in the chain running from a tool’s design and construction to its use and its social “impact.”

So, here is my edited message.
I have been thinking quite a bit about this question of tweets (“messages” on Twitter) perceived as intrusive. So here are a few ideas.

I get a lot out of blogging in real time through Twitter. Really, I see it as taking notes in public. I should say that note-taking is second nature to me. It is how I structure my thinking. Mostly with outliners, but it also works in linear form.

In this respect, I do a bit like those journalists on Twitter who use microblogging as a notebook. Andy Carvin is my favourite example. He tweets faster than I do and his tweets are as useful as a newspaper article. My approach is closer to “active reading” and critical thinking, but it is somewhat the same idea. In my case, it even lets me replace a blog post with a series of tweets.

The advantage of real-time note-taking revealed itself, among other occasions, during a presentation by Johannes Fabian, an emeritus anthropologist who spent a busy week in Montreal last month. I was liveblogging his first presentation on Twitter. Across from me were two Concordia anthropologists (Maximilian Forte and Owen Wiltshire) whom I know, among other things, as bloggers. Both were taking notes and one of them was recording the session. In my tweets, I tried not to summarize too much of what Fabian was saying; instead, I took notes on my own reactions, shared my observations of the audience, and thought about implications of the ideas being presented. After the presentation, Maximilian asked me whether I was going to blog about it. I could tell him in all honesty that it was already done. And Owen, a former student of mine who now works on academic publishing and blogging, now has access to my complete notes, with a timeline.
A powerful note-taking method!

The advantage of the public aspect is, first, that I can get “comments” in real time. I don’t get as many as I would like, but comments remain what I am after. Microblogging gets me more comments than my main blog, right here on WordPress. Facebook gets me more comments than either, but that is another story.

In some cases, liveblogging gives rise to a genuine parallel conversation. My favourite example is probably the interaction I had with John Milles at the end of Isabelle Lopez’s session at PodCamp Montréal (#pcmtl08). We were talking about Internet culture and I was suggesting that there was “one” Internet culture (in the same way one can say there is “one” Christian culture, say). Milles, who did not know I was an anthropologist, then tweeted at me about the classic anthropological notion of culture (monolithic, spatially bounded, timeless…). I was then able to point him to the “crisis of representation” in anthropology since 1986, with Clifford and Marcus’s Writing Culture. He later sent me references from the legal literature.

Of course, this is the idea of the backchannel applied to the ’Net. It works very effectively for events like SXSW and BarCamp since everyone is tweeting at the same time. But it can work for other events, if the practice becomes more common.

“More on this later.”

I believe that real-time blogging during events increases the visibility of the event itself. It would work better if I put hashtags on every tweet. (Hashtags are textual labels preceded by the ‘#’ notation, which make it possible to identify “messages.”) The problem is that it is not really practical to keep typing hashtags, at least on an iPod touch. In any case, that kind of redundancy seems of little use.

“More on this later.”

Obviously, microblogging this much increases my own visibility a little. These days, I am starting to think about ways to “sell” myself. It is a bit difficult for me because I am not used to selling myself and I see humility as a virtue. But it seems necessary and I am looking for ways to sell myself while remaining myself. Twitter lets me showcase myself in a context that makes this practice entirely appropriate (in my view).

In fact, I started using Twitter as a networking method while I was in Austin. It was a few days before SXSW and I wanted to make myself known locally. I have kept a few things from that period, including contacts on Twitter.

My method was quite simple: I started “following” everyone who was following @BarCampAustin. That made for quite a bunch and it let me see what was going on. It also allowed me to go observe events organized by SXSW people such as Gary Vaynerchuk and Scott Beale. For an ethnographer, there is nothing like seeing Kevin Rose with his “entourage” or learning that Dr. Tiki is originally from Laval. 😉

Among the microblogging features I find particularly interesting are the ‘@’ and ‘#’ notations. Neither is that practical on an iPod touch, at least with the apps we have. But the basic concept is very interesting. The ‘@’ is somewhat the equivalent of a ping or trackback, since it can serve to get someone else’s attention (this notation allows direct replies to messages). It is quite a powerful principle and it helps a lot in liveblogging (Muriel Ide and Martin Lessard used this method to reach me during WebCom/-Camp).

“More on this later.”

In my opinion, among geeks, this practice of event microblogging is intensifying. It is even taking a prominent place, giving microblogging that status which journalists have so much trouble grasping. When something happens, microblogging is there to cover the event.

Which brings me to that “later.” Quite simple, really. Microblogging instances for events. Mostly for events planned in advance, but it could also be an ad hoc structure, à la Erik Hersman’s Ushahidi.

Evan Prodromou’s Laconica is perfectly suited to fill the function I have in mind, but it could be on any platform. I quite like Identi.ca, which is the largest Laconica instance. However, I find Twitter easier to use, partly because there are Twitter clients for the iPod touch (including with geolocation).

Imagine a PodCamp-style (un)conference. The same principle applies to online events (of the “WebConference” kind), but face-to-face gatherings have particular advantages thanks to microblogging. Especially if we think of serendipity, the use of multiple communication channels (cognitively less costly in a context of co-presence), the ease of small-group conversations, and “non-verbal language.”

So, each event has a microblogging instance. It costs practically nothing to run and it can really add value to the event.

Each person registered for the event has a microblogging account specific to that event’s instance (or can use a Laconica account from another instance and register on the new one). By default, everyone “follows” everyone (everyone is subscribed to see all messages). On each conference nametag, the person’s handle appears. Each presenter is also linked to their handle. Each user’s profile can be copied from another profile or created specifically for the event. Photo portraits are preferred, but avatars are also allowed. Everything sent through the instance is archived and catalogued. If there are ways to specify precise positions in space (perhaps even with an RFID tag that can be deactivated), this positioning is recorded in the instance. That way, people can find each other more easily to talk in semi-private. It would also be easy to include a way to set up appointments or to note details of conversations, so as to remember it all later. Nice integrations would be possible with Google Calendar, for instance.

Since the instance’s member list is limited, we can have an app that makes ‘@’ notations easier: incremental search, address book, auto-completion… The presenters’ @ handles are implied during presentations, so you do not have to type their full names to cite them. For multi-person conversations it gets slightly more complicated, but you can still have a short list if it is a panel, or other methods if it is larger. Moderators could also use this to manage the queue of interventions. (That would be candy! I can imagine what it would do for L’Université autrement!)
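
As a small illustration of that auto-completion idea, here is a Python sketch that completes ‘@’ handles from a typed prefix against a closed member list. The handles are purely illustrative.

    def complete_handle(prefix, members):
        """Return the registered handles that start with the typed prefix."""
        prefix = prefix.lstrip("@").lower()
        return sorted(m for m in members if m.lower().startswith(prefix))

    # Illustrative handles only; a real instance would use its own member list.
    members = ["enkerli", "evanpro", "mlessard", "murielide"]
    print(complete_handle("@e", members))    # ['enkerli', 'evanpro']
    print(complete_handle("@mle", members))  # ['mlessard']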

As Evan Prodromou discussed at PodCamp Montréal, there is the whole question of “microcasting,” which is gaining momentum. With a microblogging instance tied to an event, we could have internal file distribution. Presentation files (PowerPoint or otherwise), media files, links, etc. Presenters can prepare everything in advance and send their material at the right moment. At a stretch, it could even replace some uses of PowerPoint!

Rather than having to type event hashtags (#pcmtl08), you simply send your messages to the specific instance. Those who are not taking part in the event are not flooded with unwelcome messages. No need to stop following someone who is participating in such an event (as happened with #pcmtl08).

Once the event is over, we can do whatever we want with the instance. We can come back to it, for instance to consult the complete list of participants. We can rework our notes to turn them into blog posts and even reports. Or we can set it all aside.

For the rest, it would be like the use of Twitter during SXSWi (including the Lacy case, which I find fascinating) or any other typical geek event. In some cases, people send tweets directly to screens around the presenters.

With a specific instance, things are simpler to manage. What’s more, there is little risk of seeing the instance go down, as was often the case with Twitter over quite a long period.

This is a series of off-the-cuff ideas and I am not attached to the specific details. But I believe there is a real need and that this helps bring several things together on a single platform. Besides, I had not thought much about it, but it could have interesting effects for conference management, for online meetings, for media coverage of news events, etc. Some might even think of business models that include microblogging as added value. (Different account types, the possibility of attending conferences for free without an account on the instance…)

What do you think?

L’intellectuel s’assume

The character of the intellectual well deserves a little post of its own. All the more so since this identity has lodged itself in my life on several occasions lately.

(To simplify, and by reference to a universalist context, I will use the term “intellectual” in the masculine, as in French, as if it were neutral.)

Yes, of course, I am myself an intellectual and I own it. In fact, I first thought of titling this post “Confessions of an Intellectual in Solidarity” or something of the sort. But the “Confessions of a <noun><adjective>” formula is already frequent enough on this blog. And I do not think about this character only introspectively.

In fact, it was while reading some things about the famous Dreyfus Affair that the idea came to me to write a post on the notion of “intellectual.” It turns out that the adoption of the term “intellectual” to designate a certain category of individual may date from late nineteenth-century France, including in its English usage. That historical period strongly influenced me, mostly through reading various French writers of the time. But I start talking about the intellectual less out of a desire to reconstruct a historical reality than out of an interest in the construction of social characters, whatever they may be. Thinking about the fact that the intellectual is constructed allows me to put back into social context a set of realities that strike me as interesting. Especially since they can easily be tied to the “geek culture” that interests me so much, besides touching me directly.

Obviously, this is not the first time the intellectual as a character shows up on this blog. But the context seemed particularly appropriate today.

I should say that I went to a little brunch with friends from elementary school. It will surprise no one to learn that these friends already considered me an intellectual back then. Not that they used the term. But the label was there. Except that, contrary to what I felt nearly thirty years ago, this label was not the basis for rejection.

In fact, I often think about labeling theory. It even came up in a sociology course I was teaching last summer. To simplify: the labels that get stuck on us have lasting implications for our social behaviour. Or, to quote Howie Becker according to a Swiss dictionary:

Deviant behaviour is behaviour that people label as such; the deviant is the one to whom that label has successfully been applied.

(Obviously, I am extending the notion of labeling beyond deviance in the strict sense.)

In this context, intellectual behaviour is behaviour that is labeled as such. The intellectual is the one to whom that label has successfully been applied.

My personal version (which I even had occasion to lay out to an elementary-school friend): I behave according to the “intellectual” label that was placed on me from a young age. Not that the label is unwarranted: it sticks because it finds a surface that lends itself to it. But the character of the intellectual is not natural, universal, timeless, or free of ambiguity.

Speaking of ambiguity, we should think about defining this intellectual.

According to Wikipedia:

An intellectual is a person who, by virtue of their social position, has a form of authority and engages in the public sphere to defend values.

Not bad. That is more or less the basis of my first post about intellectuals. Public engagement takes various forms, and the link with the Dreyfus Affair is easy to see.

But common uses of the term (and of «intellectualisme» and “intellectualism”) seem to go in various other directions. First, the notion of a “superior” intelligence (which cognitive scientists relativize so well but which seems to be a social consensus). This perception of intelligence is tied to a form of elitism: the intellectual belongs to a particular elite and sometimes excludes those who are not part of it. Then there is the notion of “rationality,” the intellectual conceived as being “removed from his emotions.” Or clumsiness and the lack of manual skills, the term “intellectual” then being used to express a certain contempt. To go further, one could even say that subscribing to a certain “mind/body” dualism is often tinged with “intellectualism.”

Ces dénotations et connotations me semblent toutes appropriées pour décrire un type précis d’intellectuel: le «geek» (j’aime bien «geekette» pour le féminin; il y a relativement peu de femmes geeks). Le personnage du geek est une part important du stéréotype contemporain lié à l’intellectuel. Contrairement au «nerd» des années 1980, le geek a désormais une place de choix au sein de la culture populaire. Et la réhabilitation du geek constitue un mouvement contraire à une vague d’anti-intellectualisme très patente aux États-Unis et dans d’autres sociétés post-industrielles.

Thinking of the geek as an intellectual helps situate the character in its social context. Professionally, the typical geek is often an engineer, a computer specialist, or a scientist. School settings often placed a lot of importance on the grades he got. He may be quite capable of taking on all sorts of manual activities, he may even "work with his hands as much as with his head," but his intellect remains what is valued. He's "a brain," a "brainiac." Not that his "level of intelligence" is necessarily above average, but the particular kind of intelligence that characterizes him corresponds largely to the common idea of "IQ": capacity for abstraction, logical sense, speed at solving equations or recalling a piece of information, attention to detail…

Back to the social construction of the intellectual as a character. Despite some transformations over the last century, the intellectual retains a particular social status. In a political-economy model (in its capitalist version as much as its socialist one), the intellectual belongs to a kind of social class with its own characteristics. He's a type of "white collar" worker whose work isn't very routine. He's also the individual who enjoys the privilege of post-secondary education in post-industrial societies. He's the one with the leisure to read and keep learning. He's the target audience of "Culture," in the refined sense of the word. He may even be a snob, a haughty character, the opposite of "real people."

And that's where the introspective mode makes me react: I may be an intellectual, but I'm no snob. If I'm "anti-" anything, it's anti-snob. And I don't consider the intellectual any more intelligent than anyone else. Above all, I see the intellectual as a creation of post-industrial societies, built on a fine-grained division of social labour. In fact, that snobbery is what bothers me most about being an intellectual. It's probably why, even though I own up to being an intellectual, I often try to erase the label. "I'm an intellectual, but I'm also a good guy."

In my case, being considered an intellectual has a lot to do with my perceived eloquence. I've always been considered eloquent. Even as a child, I "spoke well." At least, that's what people said about me (as recently as today). Granted, since oratory has always been valued in my family, I was probably inclined to play with words. I also read a lot, already, as a child. And I wrote: at the age of ten, I was typing out, on a typewriter, a little text about perfection (which seems logically impossible, since it is an absence of flaws). And I had opportunities to express myself. Mostly around adults.

That's probably a very important point, by the way. From a very young age, I had fairly close relationships with several adults (friends of my parents, mostly). I was often the only child among many adults. Several of them were teachers (like my father). People listened to me with interest. To some extent, I was almost paraded around like a circus animal that could hold forth on anything and everything. My father often spoke of all this as a fundamental problem. Perhaps by extension, my "intellectual" label was perceived as a problem. A fundamental one.

Today, I consider that I turned out fine. I am what I've always wanted to be, and I can sometimes do what I've always wanted to do. I shouldn't be ashamed.

Of being an intellectual.