Academics and Their Publics

(Why Are Academics So) Misunderstood?

Misunderstood by Raffi Asdourian

Academics are misunderstood.

Almost by definition.

Pretty much any academic eventually feels that s/he is misunderstood. Misunderstandings about some core notions in just about any academic field are involved in some of the most common pet peeves among academics.

In other words, there’s nothing as transdisciplinary as misunderstanding.

It can happen in the close proximity of a given department (“colleagues in my department misunderstand my work”). It can happen through disciplinary boundaries (“people in that field have always misunderstood our field”). And, it can happen generally: “Nobody gets us.”

It’s not paranoia and it’s probably not self-victimization. But there almost seems to be a form of “onedownmanship” at stake, with academics from different disciplines claiming that they’re more misunderstood than others. In fact, I personally get the feeling that ethnographers are among the most misunderstood people around, but even short discussions with friends in other fields (including mathematics) have helped me get the idea that, basically, we’re all misunderstood at the same “level” but there are variations in the ways we’re misunderstood. For instance, anthropologists in general are mistaken for what they aren’t, based on partial understanding by the general population.

An example from my own experience, related to my decision to call myself an “informal ethnographer.” When you tell people you’re an anthropologist, they form an image in their minds which is very likely to be inaccurate. But they do typically have an image in their minds. On the other hand, very few people have any idea about what “ethnography” means, so they’re less likely to form an opinion of what you do from prior knowledge. They may puzzle over the term and try to take a guess as to what “ethnographer” might mean but, in my experience, calling myself an “ethnographer” has been a more efficient way to be understood than calling myself an “anthropologist.”

This may all sound like nitpicking but, from the inside, it’s quite impactful. Linguists are frequently asked about the number of languages they speak. Mathematicians are taken to be number freaks. Psychologists are perceived through the filters of “pop psych.” There are many stereotypes associated with engineers. Etc.

These misunderstandings have an impact on anyone’s work. Not only can they be demoralizing and affect one’s sense of self-worth, but they can influence funding decisions as well as the use of research results. These misunderstandings can undermine learning across disciplines. In survey courses, basic misunderstandings can make things very difficult for everyone. At a rather basic level, academics fight misunderstandings more than they fight ignorance.

The main reason I’m discussing this is that I’ve been given several occasions to think about the interface between the Ivory Tower and the rest of the world. It’s been a major theme in my blogposts about intellectuals, especially the ones in French. Two years ago, for instance, I wrote a post in French about popularizers. A bit more recently, I’ve been blogging about specific instances of misunderstandings associated with popularizers, including Malcolm Gladwell’s approach to expertise. Last year, I did a podcast episode about ethnography and the Ivory Tower. And, just within the past few weeks, I’ve been reading a few things which all seem to me to connect with this same issue: common misunderstandings about academic work. The connections are my own, and may not be so obvious to anyone else. But they’re part of my motivations to blog about this important issue.

In no particular order:

But, of course, I think about many other things. Including (again, in no particular order):

One discussion I remember, which seems to fit, included comments about Germaine Dieterlen by a friend who also did research in West Africa. Can’t remember the specifics but the gist of my friend’s comment was that “you get to respect work by the likes of Germaine Dieterlen once you start doing field research in the region.” In my academic background, appreciation of Germaine Dieterlen’s work may not be unconditional, but it doesn’t necessarily rely on extensive work in the field. In other words, while some parts of Dieterlen’s work may be controversial and it’s extremely likely that she “got a lot of things wrong,” her work seems to be taken seriously by several French-speaking africanists I’ve met. And not only do I respect everyone but I would likely praise someone who was able to work in the field for so long. She’s not my heroine (I don’t really have heroes) or my role-model, but it wouldn’t have occurred to me that respect for her wasn’t widespread. If it had seemed that Dieterlen’s work had been misunderstood, my reflex would possibly have been to rehabilitate her.

In fact, there’s a strong academic tradition of rehabilitating deceased scholars. The first example which comes to mind is a series of articles (PDF, in French) and book chapters by UWO linguistic anthropologist Regna Darnell about “Benjamin Lee Whorf as a key figure in linguistic anthropology.” Of course, saying that these texts by Darnell constitute a rehabilitation of Whorf reveals a type of evaluation of her work. But that evaluation comes from a third person, not from me. The likely reason for this case coming to my mind is that the so-called “Sapir-Whorf Hypothesis” is among the most misunderstood notions from linguistic anthropology. Moreover, both Whorf and Sapir are frequently misunderstood, which can make matters difficult for many linguistic anthropologists talking with people outside the discipline.

The opposite process is also common: the “slaughtering” of “sacred cows.” (First heard about sacred cows through an article by ethnomusicologist Marcia Herndon.) In some significant ways, any scholar (alive or not) can be the object of not only critiques and criticisms but a kind of off-handed dismissal. Though this often happens within an academic context, the effects are especially lasting outside of academia. In other words, any scholar’s name is likely to be “sullied,” at one point or another. Typically, there seems to be a correlation between the popularity of a scholar and the likelihood of her/his reputation being significantly tarnished at some point in time. While there may still be people who treat Darwin, Freud, Nietzsche, Socrates, Einstein, or Rousseau as near divinities, there are people who will avoid any discussion about anything they’ve done or said. One way to put it is that they’re all misunderstood. Another way to put it is that their main insights have seeped through “common knowledge” but that their individual reputations have decreased.

Perhaps the most difficult case to discuss is that of Marx (Karl, not Harpo). Textbooks in introductory sociology typically have him as a key figure in the discipline and it seems clear that his insight on social issues was fundamental in social sciences. But, outside of some key academic contexts, his name is associated with a large series of social events about which people tend to have rather negative reactions. Even more so than for Paul de Man or Martin Heidegger, Marx’s work is entangled in public opinion about his ideas. Haven’t checked for examples but I’m quite sure that Marx’s work is banned in a number of academic contexts. However, even some of Marx’s most ardent opponents are likely to agree with several aspects of Marx’s work and it’s sometimes funny how Marxian some anti-Marxists may be.

But I digress…

Typically, the “slaughtering of sacred cows” relates to disciplinary boundaries instead of social ones. At least, there’s a significant difference between your discipline’s own “sacred cows” and what you perceive another discipline’s “sacred cows” to be. Within a discipline, the process of dismissing a prior scholar’s work is almost œdipean (speaking of Freud). But dismissal of another discipline’s key figures is tantamount to a rejection of that other discipline. It’s one thing for a physicist to show that Newton was an alchemist. It’d be another thing entirely for a social scientist to deconstruct James Watson’s comments about race or for a theologian to argue with Darwin. Though discussions may have to do with individuals, the effects of the latter can widen gaps between scholarly disciplines.

And speaking of disciplinarity, there’s a whole set of issues having to do with discussions “outside of someone’s area of expertise.” On one side, comments made by academics about issues outside of their individual areas of expertise can be very tricky and can occasionally contribute to core misunderstandings. The fear of “talking through one’s hat” is quite significant, in no small part because a scholar’s prestige and esteem may greatly decrease as a result of some blatantly inaccurate statements (although some award-winning scholars seem not to be overly impacted by such issues).

On the other side, scholars who have to impart expert knowledge to people outside of their discipline often have to “water down” or “boil down” their ideas and, in effect, oversimplify these issues and concepts. Partly because of status (prestige and esteem), lowering standards is also very tricky. In some ways, this second situation may be more interesting. And it seems unavoidable.

How can you prevent misunderstandings when people may not have the necessary background to understand what you’re saying?

This question may reveal a rather specific attitude: “it’s their fault if they don’t understand.” Such an attitude may even be widespread. Seems to me, it’s not rare to hear someone gloating about other people “getting it wrong,” with the suggestion that “we got it right.” As part of negotiations surrounding expert status, such an attitude could even be a pretty rational approach. If you’re trying to position yourself as an expert and don’t suffer from an “impostor syndrome,” you can easily get the impression that non-specialists have it all wrong and that only experts like you can get to the truth. Yes, I’m being somewhat sarcastic and caricatural, here. Academics aren’t frequently that dismissive of other people’s difficulties understanding what seem like simple concepts. But, in the gap between academics and the general population, a special type of intellectual snobbery can sometimes be found.

Obviously, I have a lot more to say about misunderstood academics. For instance, I wanted to address specific issues related to each of the links above. I also had pet peeves about widespread use of concepts and issues like “communities” and “Eskimo words for snow” about which I sometimes need to vent. And I originally wanted this post to be about “cultural awareness,” which ends up being a core aspect of my work. I even had what I might consider a “neat” bit about public opinion. Not to mention my whole discussion of academic obfuscation (remind me about “we-ness and distinction”).

But this is probably long enough and the timing is right for me to do something else.

I’ll end with an unverified anecdote that I like. This anecdote speaks to snobbery toward academics.

[It’s one of those anecdotes which was mentioned in a course I took a long time ago. Even if it’s completely fallacious, it’s still inspiring, like a tale, cautionary or otherwise.]

As the story goes (at least, what I remember of it), some ethnographers had been doing fieldwork in an Australian cultural context and were focusing their research on a complex kinship system known in this context. Through collaboration with “key informants,” the ethnographers eventually succeeded in understanding some key aspects of this kinship system.

As should be expected, these kinship-focused ethnographers wrote accounts of this kinship system at the end of their field research and became known as specialists of this system.

After a while, the fieldworkers went back to the field and met with the same people who had described this kinship system during the initial field trip. Through these discussions with their “key informants,” the ethnographers ended up hearing about a radically different kinship system from the one about which they had learnt, written, and taught.

The local informants then told the ethnographers: “We would have told you earlier about this but we didn’t think you were able to understand it.”

Why I Need an iPad

I’m one of those who feel the iPad is the right tool for the job.


This is mostly meant as a reply to this blogthread. But it’s also more generally about my personal reaction to Apple’s iPad announcement.

Some background.

I’m an ethnographer and a teacher. I read a fair deal, write a lot of notes, and work in a variety of contexts. These days, I tend to spend a good amount of time in cafés and other public places where I like to work without being too isolated. I also commute using public transit, listen to lots of podcasts, and create my own. I’m also very aural.

I’ve used a number of PDAs, over the years, from a Newton MessagePad 130 (1997) to a variety of PalmOS devices (until 2008). In fact, some people readily associated me with PDA use.

As soon as I learnt about the iPod touch, I needed one. As soon as I heard about the SafariPad, I wanted one. I’ve been an intense ‘touch user since the iPhone OS 2.0 release and I’m a happy camper.

(A major reason I never bought an iPhone, apart from price, is that it requires a contract.)

In my experience, the ‘touch is the most appropriate device for all sorts of activities which are either part of another activity (reading during a commute) or are simply too short in duration to constitute an actual “computer session.” You don’t “sit down to work at your ‘touch” the way you might sit in front of a laptop or desktop screen. This works great for “looking up stuff” or “checking email.” It also makes a lot of sense during commutes in crowded buses or metros.

In those cases, the iPod touch is almost ideal. Ubiquitous access to Internet would be nice, but that’s not a deal-breaker. Alternative text-input methods would help in some cases, but I do end up being about as fast on my ‘touch as I was with Graffiti on PalmOS.

For other tasks, I have a Mac mini. Sure, it’s limited. But it does the job. In fact, I have no intention of switching for another desktop and I even have an eMachines collecting dust (it’s too noisy to make a good server).

What I miss, though, is a laptop. I used an iBook G3 for several years and loved it. A little while later, I was able to share a MacBook with somebody else and it was a wonderful experience. I even got to play with the OLPC XO for a few weeks. That one was not so pleasant an experience but it did give me a taste for netbooks. And it made me think about other types of iPhone-like devices, especially in educational contexts. (As I mentioned, I’m a teacher.)

I’ve been laptop-less for a while, now. And though my ‘touch replaces it in many contexts, there are still times when I’d really need a laptop. And these have to do with what I might call “mobile sessions.”

For instance: liveblogging a conference or meeting. I’ve used my ‘touch for this very purpose on a good number of occasions. But it gets rather uncomfortable, after a while, and it’s not very fast. A laptop is better for this, with a keyboard and a larger form factor. But the iPad will be even better because of lower risks of RSI. A related example: just imagine TweetDeck on iPad.

Possibly my favourite example of a context in which the iPad will be ideal: presentations. Even before learning about the prospect of getting iWork on a tablet, presentations were a context in which I really missed a laptop.

Sure, in most cases, these days, there’s a computer (usually a desktop running XP) hooked to a projector. You just need to download your presentation file from Slideshare, show it from Prezi, or transfer it through USB. No biggie.

But it’s not the extra steps which change everything. It’s the uncertainty. Even if it’s often unfounded, I usually get worried that something might just not work along the way. The slides might not show the same way you see them because something is missing on that computer or that computer is simply using a different version of the presentation software. In fact, that software is typically Microsoft PowerPoint which, while convenient, fits much less in my workflow than does Apple Keynote.

The other big thing about presentations is the “presenter mode,” allowing you to get more content than (or different content from) what the audience sees. In most contexts where I’ve used someone else’s computer to do a presentation, the projector was mirroring the computer’s screen, not using it as a different space. PowerPoint has this convenient “presenter view” but very rarely did I see it as an available option on “the computer in the room.” I wish I could use my ‘touch to drive presentations, which I could do if I installed software on that “computer in the room.” But it’s not something that is likely to happen, in most cases.

A MacBook solves all of these problems, and it’s an obvious use for laptops. But how, then, is the iPad better? Basically, because of the interface. Switching slides on a laptop isn’t hard, but it’s more awkward than we realize. Even before watching the demo of Keynote on the iPad, I could simply imagine the actual pleasure of flipping through slides using a touch interface. The fit is “natural.”

I sincerely think that Keynote on the iPad will change a number of things, for me. Including the way I teach.

Then, there’s reading.

Now, I’m not one of those people who just can’t read on a computer screen. In fact, I even grade assignments directly from the screen. But I must admit that online reading hasn’t been ideal, for me. I’ve read full books as PDF files or dedicated formats on PalmOS, but it wasn’t so much fun, in terms of the reading process. And I’ve used my ‘touch to read things through Stanza or ReadItLater. But it doesn’t work so well for longer reading sessions. Even in terms of holding the ‘touch, it’s not so obvious. And, what’s funny, even a laptop isn’t that ideal, for me, as a reading device. In a sense, this is when the keyboard “gets in the way.”

Sure, I could get a Kindle. I’m not a big fan of dedicated devices and, at least on paper, I find the Kindle a bit limited for my needs. Especially in terms of sources. I’d like to be able to use documents in a variety of formats and put them in a reading list, for extended reading sessions. No, not “curled up in bed.” But maybe lying down in a sofa without external lighting. Given my experience with the ‘touch, the iPad is very likely the ideal device for this.

Then, there’s the overall “multi-touch device” thing. People have already been quite creative with the small touchscreen on iPhones and ‘touches; I can just imagine what may be done with a larger screen. Lots has been said about differences in “screen real estate” on laptop or desktop screens. We all know it can make a big difference in terms of what you can display at the same time. In some cases, two screens aren’t even a luxury, for instance when you code and display a page at the same time (LaTeX, CSS…). Certainly, the same qualitative difference applies to multitouch devices. Probably even more so, since the display is also used for input. What Han found missing in the iPhone’s multitouch was the ability to use both hands. With the iPad, Han’s vision is finding its space.

Oh, sure, the iPad is very restricted. For instance, it’s easy to imagine how much more useful it’d be if it did support multitasking with third-party apps. And a front-facing camera is something I was expecting in the first iPhone. It would just make so much sense that a friend seems very disappointed by this lack of videoconferencing potential. But we’re probably talking about predetermined expectations, here. We’re comparing the iPad with something we had in mind.

Then, there’s the issue of the competition. Tablets have been released and some multitouch tablets have recently been announced. What makes the iPad better than these? Well, we could all get into the same OS wars as have been happening with laptops and desktops. In my case, the investment in applications, files, and expertise that I have made in a Mac ecosystem rendered my XP years relatively uncomfortable and made me appreciate returning to the Mac. My iPod touch fits right in that context. Oh, sure, I could use it with a Windows machine, which is in fact what I did for the first several months. But the relationship between the iPhone OS and Mac OS X is such that using devices in those two systems is much more efficient, in terms of my own workflow, than I could get while using XP and iPhone OS. There are some technical dimensions to this, such as the integration between iCal and the iPhone OS Calendar, or even the filesystem. But I’m actually thinking more about the cognitive dimensions of recognizing some of the same interface elements. “Look and feel” isn’t just about shiny and “purty.” It’s about interactions between a human brain, a complex sensorimotor apparatus, and a machine. Things go more quickly when you don’t have to think too much about where some tools are, as you’re working.

So my reasons for wanting an iPad aren’t about being dazzled by a revolutionary device. They are about the right tool for the job.

Groupthink in Action

Seems like I’m witnessing a clear groupthink phenomenon.

An interesting situation which, I would argue, is representative of Groupthink.

As a brief summary of the situation: a subgroup within a larger group is discussing the possibility of changing the larger group’s structure. In that larger group, similar discussions have been quite frequent, in the past. In effect, the smaller group is moving toward enacting a decision based on perceived consensus as to “the way to go.”

No bad intention on anyone’s part and the situation is far from tragic. But my clear impression is that groupthink is involved. I belong to the larger group but I feel little vested interest in what might happen with it.

An important point about this situation is that the smaller group seems to be acting as if the decision had already been made, after careful consideration. Through the history of the larger group, prior discussions on the same topic have been frequent. Through these discussions, clear consensus has never been reached. At the same time, some options have been gaining some momentum in the recent past, mostly based (in my observation) on accumulated frustration with the status quo and some reflection on the effectiveness of activities done by subgroups within the larger group. Members of that larger group (including participants in the smaller group) are quite weary of rehashing the same issues and the “rallying cry” within the subgroup has to do with “moving on.” Within the smaller group, prior discussions are described as if they had been enough to explore all the options. Weariness throughout the group as a whole seems to create a sense of urgency even though the group as a whole could hardly be described as being involved in time-critical activities.

Nothing personal about anyone involved and it’s possible that I’m off on this one. Where some of those involved would probably disagree is in terms of the current stage in the decision making process (i.e., they may see themselves as having gone through the process of making the primary decision, the rest is a matter of detail). I actually feel strange talking about this situation because it may seem like I’m doing the group a disservice. The reason I think it isn’t the case is that I have already voiced my concerns about groupthink to those who are involved in the smaller group. The reason I feel the urge to blog about this situation is that, as a social scientist, I take it as my duty to look at issues such as group dynamics. Simply put, I started thinking about it as a kind of “case study.”

Yes, I’m a social science geek. And proud of it, too!

Thing is, I have a hard time not noticing a rather clear groupthink pattern. Especially when I think about a few points in Janis’s description of groupthink.

| Antecedent Conditions | Symptoms | Decisions Affected |
| Insulation of the group | Illusion of invulnerability | Incomplete survey of alternatives |
| High group cohesiveness | Unquestioned belief in the inherent morality of the group | Incomplete survey of objectives |
| Directive leadership | Collective rationalization of group’s decisions | Failure to examine risks of preferred choice |
| Lack of norms requiring methodical procedures | Shared stereotypes of outgroup, particularly opponents | Failure to re-appraise initially rejected alternatives |
| Homogeneity of members’ social background and ideology | Self-censorship; members withhold criticisms | Poor information search |
| High stress from external threats with low hope of a better solution than the one offered by the leader(s) | Illusion of unanimity (see false consensus effect) | Selective bias in processing information at hand (see also confirmation bias) |
| | Direct pressure on dissenters to conform | Failure to work out contingency plans |
| | Self-appointed “mindguards” protect the group from negative information | |

A PDF version, with some key issues highlighted.

Point by point…

Observable

Antecedent Conditions of Groupthink

Insulation of the group

A small subgroup was created based on (relatively informal) prior expression of opinion in favour of some broad changes in the structure of the larger group.

Lack of norms requiring methodical procedures

Methodical procedures about assessing the situation are either put aside or explicitly rejected.
Those methodical procedures which are accepted have to do with implementing the group’s primary decision, not with the decision making process.

Symptoms Indicative of Groupthink

Illusion of unanimity (see false consensus effect)

Agreement is stated as a fact, possibly based on private conversations outside of the small group.

Direct pressure on dissenters to conform

A call to look at alternatives is constructed as a dissenting voice.
Pressure to conform is couched in terms of “moving on.”

Symptoms of Decisions Affected by Groupthink

Incomplete survey of alternatives

Apart from the status quo, no alternative has been discussed.
When one alternative model is proposed, it’s reduced to a “side” in opposition to the assessed consensus.

Incomplete survey of objectives

Broad objectives are assumed to be common, left undiscussed.
Discussion of objectives is pushed back as being irrelevant at this stage.

Failure to examine risks of preferred choice

Comments about possible risks (including the danger of affecting the dynamics of the existing broader group) are left undiscussed or dismissed as “par for the course.”

Failure to re-appraise initially rejected alternatives

Any alternative is conceived as having been tried in the past, with the strong implication that it isn’t worth revisiting.

Poor information search

Information collected concerns ways to make sure that the primary option considered will work.

Failure to work out contingency plans

Comments about the possible failure of the plan, and effects on the wider group are met with “so be it.”

Less Obvious

Antecedent Conditions of Groupthink

High group cohesiveness

The smaller group is highly cohesive but so is the broader group.

Directive leadership

Several members of the smaller group are taking positions of leadership, but there’s no direct coercion from that leadership.

Positions of authority are asserted, in a subtle way, but this authority is somewhat indirect.

Homogeneity of members’ social background and ideology

As with cohesiveness, homogeneity of social background can be used to describe the broader group as well as the smaller one.

High stress from external threats with low hope of a better solution than the one offered by the leader(s)

External “threats” are mostly subtle but there’s a clear notion that the primary option considered may be met with some opposition by a proportion of the larger group.

Symptoms Indicative of Groupthink

Illusion of invulnerability

While “invulnerability” would be an exaggeration, there’s a clear sense that members of the smaller group have a strong position within the larger group.

Unquestioned belief in the inherent morality of the group

Discussions don’t necessarily have a moral undertone, but the smaller group’s goals seem self-evident in the context or, at least, not really worth careful discussion.

Collective rationalization of group’s decisions

Since attempts to discuss the group’s assumed consensus are labelled as coming from a dissenting voice, the group’s primary decision is reified through countering individual points made about this decision.

Shared stereotypes of outgroup, particularly opponents

The smaller group’s primary “outgroup” is in fact the broader group, described in rather simple terms, not a distinct group of people.
The assumption is that, within the larger group, positions about the core issue are already set.

Self-censorship; members withhold criticisms

Self-censorship is particularly hard to observe or assess, but the group’s dynamics tend to construct criticism as “nitpicking,” making it difficult to share comments.

Self-appointed “mindguards” protect the group from negative information

As with leadership, the process of shielding the smaller group from negative information is mostly organic, not located in a single individual.
Because the smaller group is already set apart from the larger group, protection from external information is built into the system, to an extent.

Symptoms of Decisions Affected by Groupthink

Selective bias in processing information at hand (see also confirmation bias)

Information brought into the discussion is treated as either reinforcing the group’s alleged consensus or taken to be easy to counter.
Examples from cases showing clear similarities are dismissed (“we have no interest in knowing what others have done”) and distant cases are used to demonstrate that the approach is sound (“there are groups in other contexts which work, so we can use the same approach”).

Happiness Anniversary

A year ago today, I found out that I was, in fact, happy.


Beer Eye for the Coffee Guy (or Gal)

The coffee world can learn from the beer world.

Judged twelve (12) espresso drinks as part of the Eastern Regional Canadian Barista Championship (UStream).

[Never watched Queer Eye. Thought the title would make sense, given both the “taste” and even gender dimensions.]

Had quite a bit of fun.

The experience was quite similar to the one I had last year. There were fewer competitors this year, but I also think that there were more people in the audience, at least in the morning. One possible reason is that ads about the competition were much more visible this year than last (based on my own experience and on several comments made during the day). Also, I noticed a stronger sense of collegiality among competitors, as several of them have been doing different things together in the past year.

More specifically, people from Ottawa’s Bridgehead and people from Montreal’s Café Myriade have developed something which, at least from the outside, looks like camaraderie. At the Canadian National Barista Championship, last year, Myriade’s Anthony Benda won the “congeniality” prize. This year, Benda got first place in the ERCBC. Second place went to Bridgehead’s Cliff Hansen, and third place went to Myriade’s Alex Scott.

Bill Herne served as head judge for most of the event. He made it a very pleasant experience for me personally and, I hope, for other judges. His insight on the championship is especially valuable given the fact that he can maintain a certain distance from the specifics.

The event was organized in part by Vida Radovanovic, founder of the Canadian Coffee & Tea Show. Though she’s quick to point to differences between Toronto and Montreal, in terms of these regional competitions, she also seemed pleased with several aspects of this year’s ERCBC.

To me, the championship was mostly an opportunity for thinking and talking about the coffee world.

Met and interacted with diverse people during the day. Some of them were already part of my circle of coffee-loving friends and acquaintances. Some came to me to talk about coffee after noticing some sign of my connection to the championship. The fact that I was introduced to the audience as a blogger and homeroaster seems to have been relatively significant. And there were several people who were second-degree contacts in my coffee-related social network, making for easy introductions.

A tiny part of the day’s interactions was captured in interviews for CBC Montreal’s Daybreak (unfortunately, the recording is in RealAudio format).

“Coffee as a social phenomenon” was at the centre of several of my own interactions with diverse people. Clearly, some of it has to do with my own interests, especially with “Montreal’s coffee renaissance.” But there was also a clear interest in such things as the marketshare of quality coffee, the expansion of some coffee scenes, and the notion of building a sense of community through coffee. That last part is what motivated me to write this post.

After the event, a member of my coffee-centric social network started a discussion about community-building in the coffee world and I found myself dumping diverse ideas on him. Several of my ideas have to do with my experience with craft beer in North America. In a way, I’ve been doing informal ethnography of craft beer. Beer has become an area of expertise, for me, and I’d like to pursue more formal projects on it. So beer is on my mind when I think about coffee. And vice-versa. I was probably a coffee geek before I started homebrewing beer but I started brewing beer at home before I took my coffee-related activities to new levels.

So, in my reply on a coffee community, I was mostly thinking about beer-related communities.

Comparing coffee and beer is nothing new, for me. In fact, a colleague has blogged about some of my comments, both formal and informal, about some of those connections.

Differences between beer and coffee are significant. Some may appear trivial but they can all have some impact on the way we talk about cultural and social phenomena surrounding these beverages.

  • Coffee contains caffeine, beer contains alcohol. (Non-alcoholic beers, decaf coffee, and beer with coffee are interesting but they don’t dominate.) Yes: “duh.” But the difference is significant. Alcohol and caffeine not only have different effects but they fit in different parts of our lives.
  • Coffee is often part of a morning ritual, frequently perceived as part of preparation for work. Beer is often perceived as a signal for leisure time, once you can “wind down.” Of course, there are people (including yours truly) who drink coffee at night and people (especially in Europe) who drink alcohol during a workday. But the differences in the “schedules” for beer and coffee have important consequences on the ways these drinks are integrated in social life.
  • Coffee tends to be much less expensive than beer. Someone’s coffee expenses may easily be much higher than her or his “beer budget,” but the cost of a single serving of coffee is usually significantly lower than a single serving of beer.
  • While it’s possible to drink a few coffees in a row, people usually don’t drink more than two coffees in a single sitting. With beer, it’s not rare for people to drink quite a few pints in a single night. The UK concept of a “session beer” goes well with this fact.
  • Brewing coffee takes a few minutes, brewing beer takes a while (hours for the brewing process, days or even weeks for fermentation).
  • At a “bar,” coffee is usually brewed in front of those who will drink it while beer has been prepared in advance.
  • Brewing coffee at home has been mainstream for quite a while. Beer homebrewing is considered a hobby.
  • Historically, coffee is a recent phenomenon. Beer is among the most ancient human-made beverages in the world.

Despite these significant differences, coffee and beer also have a lot in common. The fact that the term “brew” is used for beer and coffee (along with tea) may be a coincidence, but there are remarkable similarities between the extraction of diverse compounds from grain and from coffee beans. In terms of process, I would argue that beer and coffee are more similar than are, say, coffee and tea or beer and wine.

But the most important similarity, in my mind, is social: beer and coffee are, indeed, central to some communities. So are other drinks, but I’m more involved in groups having to do with coffee or beer than in those having to do with other beverages.

One way to put it, at least in my mind, is that coffee and beer are both connected to revolutions.

Coffee is community-oriented from the very start as coffee beans often come from farming communities and cooperatives. The notion, then, is that there are local communities which derive a significant portion of their income from the global and very unequal coffee trade. Community-oriented people often find coffee-growing to be a useful focus of attention and given the place of coffee in the global economy, it’s unsurprising to see a lot of interest in the concept (if not the detailed principles) of “fair trade” in relation to coffee. For several reasons (including the fact that they’re often produced in what Wallerstein would call “core” countries), the main ingredients in beer (malted barley and hops) don’t bring to mind the same conception of local communities. Still, coffee and beer are important to some local agricultural communities.

For several reasons, I’m much more directly involved with communities which have to do with the creation and consumption of beverages made with coffee beans or with grain.

In my private reply about building a community around coffee, I was mostly thinking about what can be done to bring attention to those who actually drink coffee. Thinking about the role of enthusiasts is an efficient way to think about the craft beer revolution and about geeks in general. After all, would the computer world be the same without the Homebrew Computer Club?

My impression is that when coffee professionals think about community, they mostly think about creating better relationships within the coffee business. It may sound like a criticism, but it has more to do with the notion that the trade of coffee has been quite competitive. Building a community could be a very significant change. In a way, that might be a basis for the notion of a “Third Wave” in coffee.

So, using my beer homebrewer’s perspective: what about a community of coffee enthusiasts? Wouldn’t that help?

And I don’t mean “a website devoted to coffee enthusiasts.” There’s a lot of that, already. A lot of people on the Coffee Geek Forums are outsiders to the coffee industry and Home Barista is specifically geared toward the home enthusiasts’ market.

I’m really thinking about fostering a sense of community. In the beer world, this frequently happens in brewclubs or through the Beer Judge Certification Program, which is much stricter than barista championships. Could the same concepts apply to the coffee world? Probably not. But there may still be “lessons to be learnt” from the beer world.

In terms of craft beer in North America, there’s a consensus around the role of beer enthusiasts. A very significant number of craft brewers were homebrewers before “going pro.” One of the main reasons craft beer has become so important is because people wanted to drink it. Craft breweries often do rather well with very small advertising budgets because they attract something akin to cult followings. The practise of writing elaborate comments and reviews has had a significant impact on a good number of craft breweries. And some of the most creative things which happen in beer these days come from informal experiments carried out by homebrewers.

As funny as it may sound (or look), people get beer-related jobs because they really like beer.

The same happens with coffee. On occasion. An enthusiastic coffee lover will either start working at a café or, somewhat more likely, will “drop everything” and open her/his own café out of a passion for coffee. I know several people like this and I know the story is quite telling for many people. But it’s not the dominant narrative in the coffee world where “rags to riches” stories have less to do with a passion for coffee than with business acumen. Things may be changing, though, as coffee becomes more… passion-driven.

To be clear: I’m not saying that serious beer enthusiasts make the bulk of the market for craft beer or that coffee shop owners should cater to the most sophisticated coffee geeks out there. Beer and coffee are both too cheap to warrant this kind of a business strategy. But there’s a lot to be said about involving enthusiasts in the community.

For one thing, coffee and beer can both get viral rather quickly. Because most people in North America can afford beer or coffee, it’s often easy to convince a friend to grab a cup or pint. Coffee enthusiasts who bring friends to a café do more than sell a cup. They help build up a place. And because some people are into the habit of regularly going to the same bar or coffee shop, the effects can be lasting.

Beer enthusiasts often complain about the inadequate beer selection at bars and restaurants. To this day, there are places where I end up not drinking anything besides water after hearing what the beerlist contains. In the coffee world, it seems that the main target these days is the restaurant business. The current state of affairs with coffee at restaurants is often discussed with heavy sighs of disappointment. What I’ve heard from several people in the coffee business is that, too frequently, restaurant owners give so little attention to coffee that they end up destroying the dining experience of anyone who orders coffee after a meal. Even in my own case, I’ve had enough bad experiences with restaurant coffee (including, or even especially, at higher-end places) that I’m usually reluctant to have coffee at a restaurant.

It seems quite absurd, as a quality experience with coffee at the end of a meal can do a lot for a restaurant’s bottom line. But I can’t say that it’s my main concern because I end up having coffee elsewhere, anyway. While restaurants can be the object of a community’s attention and there’s a lot to be said about what restaurants do to a region or neighbourhood, the community dimensions of coffee have less to do with what is sold where than with what people do around coffee.

Which brings me to the issue of education. It’s clearly a focus in the coffee world. In fact, most coffee-related events have some “training” dimension. But this type of education isn’t community-oriented. It’s a service-based approach, such as the one which is increasingly common in academic institutions. While I dislike customer-based learning in universities, I do understand the need for training services in the coffee world. What I perceive is that insight from the beer world can complement these training services instead of replacing them.

An impressive set of learning experiences can be seen among homebrewers, from the most practical of “hands-on training” to some very conceptual/theoretical knowledge exchanges. It’s possible to get very solid courses in beer and brewing, but the way most people learn is casual and free. Much of the learning which occurs is informal, seamless, “organic,” partly because homebrewers are organized in relatively tight groups and partly because the sense of community among homebrewers is also a matter of solidarity. Or, more simply, because “it’s just a hobby anyway.”

The “education” theme also has to do with “educating the public” into getting more sophisticated about what to order. This does happen in the beer world, but can only be pulled off when people are already interested in knowing more about beer. In relation with the coffee industry, it sometimes seems that “coffee education” is imposed on people from the top-down. And it’s sometimes quite arbitrary. Again, room for the coffee business to read the Cluetrain Manifesto and to learn from communities.

And speaking of Starbucks… One draft blogpost which has been nagging me is about the perception that, somehow, Starbucks has had a positive impact in terms of coffee quality. One important point is that Starbucks took the place of an actual coffee community. Even if it can be proven that coffee quality wouldn’t have been improved in North America if it hadn’t been for Starbucks (a tall order, if you ask me), the issue remains that Starbucks has only paid attention to the real estate dimension of the concept of community. The mermaid corporation has also not been doing so well, recently, so we may finally get beyond the financial success story and get into the nitty-gritty of what makes people connect through coffee. The world needs more from coffee than chains selling coffee-flavoured milk.

One notion I wanted to write about is the importance of “national” traditions in both coffee and beer in relation to what is happening in North America, these days. Part of the situation is enough to make me very enthusiastic to be in North America, since it’s increasingly possible to not only get quality beer and coffee but there are many opportunities for brewing coffee and beer in new ways. But that’ll have to wait for another post.

In Western Europe at least, coffee is often associated with the home. The smell of coffee has often been described in novels and it can run deep in social life. There’s no reason homemade coffee can’t be the basis for a sense of community in North America.

Now, if people in the coffee industry would wake up and… think about actual human beings, for a change…

Social Networks and Microblogging

Event-based microblogging and the social dimensions of online social networks.

Microblogging (Laconica, Twitter, etc.) is still a hot topic. For instance, during the past few episodes of This Week in Tech, comments were made about the preponderance of Twitter as a discussion theme: microblogging is so prominent on that show that some people complain that there’s too much talk about Twitter. Given the centrality of Leo Laporte’s podcast in geek culture (among Anglos, at least), such comments are significant.

The context for the latest comments about TWiT coverage of Twitter had to do with Twitter’s financials: during this financial crisis, Twitter is given funding without even asking for it. While it may seem surprising at first, given the fact that Twitter hasn’t publicized a business plan and doesn’t appear to be profitable at this time, such unsolicited funding says a lot about the perceived importance of microblogging.

Along with social networking, microblogging is even discussed in mainstream media. For instance, Médialogues (a media critique on Swiss national radio) recently had a segment about both Facebook and Twitter. Just yesterday, Comedy Central’s The Daily Show with Jon Stewart made fun of compulsive twittering and mainstream media coverage of Twitter (original, Canadian access).

Clearly, microblogging is getting some mindshare.

What the future holds for microblogging is clearly uncertain. Anything can happen. My guess is that microblogging will remain important for a while (at least a few years) but that it will transform itself rather radically. Chances are that other platforms will have microblogging features (something Facebook can do with status updates and something Automattic has been trying to do with some WordPress themes). In these troubled times, Montreal startup Identi.ca received some funding to continue developing its open microblogging platform.  Jaiku, bought by Google last year, is going open source, which may be good news for microblogging in general. Twitter itself might maintain its “marketshare” or other players may take over. There’s already a large number of third-party tools and services making use of Twitter, from Mahalo Answers to Remember the Milk, Twistory to TweetDeck.

Together, these all point to the current importance of microblogging and the potential for further development in that sphere. None of this means that microblogging is “The Next Big Thing.” But it’s reasonable to expect that microblogging will continue to grow in use.

(For those who are trying to grok microblogging, Common Craft’s Twitter in Plain English video is among the best-known descriptions of Twitter and it seems like an efficient way to “get the idea.”)

One thing which is rarely mentioned about microblogging is the prominent social structure supporting it. Like “Social Networking Systems” (LinkedIn, Facebook, Ning, MySpace…), microblogging makes it possible for people to “connect” to one another (as contacts/acquaintances/friends). Like blogs, microblogging platforms make it possible to link to somebody else’s material and get notifications for some of these links (a bit like pings and trackbacks). Like blogrolls, microblogging systems allow for lists of “favourite authors.” Unlike Social Networking Systems but similar to blogrolls, microblogging allows for asymmetrical relations, unreciprocated links: if I like somebody’s microblogging updates, I can subscribe to those (by “following” that person) and publicly show my appreciation of that person’s work, regardless of whether or not this microblogger likes my own updates.

There’s something strangely powerful there because it taps the power of social networks while avoiding tricky issues of reciprocity, “confidentiality,” and “intimacy.”

From the end user’s perspective, microblogging contacts may be easier to establish than contacts through Facebook or Orkut. From a social science perspective, microblogging links seem to approximate some of the fluidity found in social networks, without adding much complexity in the description of the relationships. Subscribing to someone’s updates gives me the role of “follower” with regards to that person. Conversely, those I follow receive the role of “following” (“followee” would seem logical, given the common “-er”/”-ee” pattern). The following and follower roles are complementary but each is sufficient by itself as a useful social link.

Typically, a microblogging system like Twitter or Identi.ca qualifies two-way connections as “friendship” while one-way connections could be labelled as “fandom” (if Andrew follows Betty’s updates but Betty doesn’t follow Andrew’s, Andrew is perceived as one of Betty’s “fans”). Profiles on microblogging systems are relatively simple and public, allowing for low-involvement online “presence.” As long as updates are kept public, anybody can connect to anybody else without even needing an introduction. In fact, because microblogging systems send notifications to users when they get new followers (through email and/or SMS), subscribing to someone’s update is often akin to introducing yourself to that person. 
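The friend/fan distinction described above can be sketched as a tiny directed graph. This is a purely illustrative model (the usernames and the `follows` set are invented, and no platform exposes connections this way); it only shows how one-way edges yield “fandom” while mutual edges yield “friendship”:

```python
# Illustrative sketch, not any platform's actual API: model following
# as a one-way edge in a directed graph. Mutual edges are "friends";
# one-way edges make the follower a "fan".

follows = {
    ("andrew", "betty"),   # Andrew follows Betty
    ("betty", "carol"),    # Betty and Carol follow each other...
    ("carol", "betty"),    # ...so they count as "friends"
}

def relationship(a, b):
    """Label the connection between users a and b."""
    forward = (a, b) in follows
    backward = (b, a) in follows
    if forward and backward:
        return "friends"
    if forward:
        return f"{a} is one of {b}'s fans"
    if backward:
        return f"{b} is one of {a}'s fans"
    return "no connection"

print(relationship("betty", "carol"))   # friends
print(relationship("andrew", "betty"))  # andrew is one of betty's fans
```

Each direction stands on its own as a useful social link, which is exactly what makes this structure lighter than the confirmed, symmetrical “friend” links of Social Networking Systems.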

Reciprocating is the object of relatively intense social pressure. A microblogger whose follower:following ratio is far from 1:1 may be regarded as either a snob (follower:following much higher than 1:1) or as something of a microblogging failure (follower:following much lower than 1:1). As in any social context, perceived snobbery may be associated with sophistication but it also carries opprobrium. Perry Belcher made a video about what he calls “Twitter Snobs” and some French bloggers have elaborated on that concept. (Some are now claiming their right to be Twitter Snobs.) Low follower:following ratios can result from breach of etiquette (for instance, ostentatious self-promotion carried beyond the accepted limit) or even non-human status (many microblogging accounts are associated with “bots” producing automated content).
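To make the ratio-based perception concrete, here is a hypothetical sketch. The tolerance factor of 2 is my own arbitrary choice, not a metric any platform computes; the point is only that judgments hinge on how far the ratio drifts from 1:1:

```python
# Hypothetical sketch of the perception described above.
# The tolerance factor is an invented assumption, not a real metric.

def perceived_status(followers, following, tolerance=2.0):
    """Classify an account by its follower:following ratio."""
    if following == 0:
        return "snob"            # follows nobody back at all
    ratio = followers / following
    if ratio > tolerance:
        return "snob"            # far more followers than followed
    if ratio < 1 / tolerance:
        return "failure"         # follows many, followed by few
    return "reciprocal"         # roughly balanced, near 1:1

print(perceived_status(5000, 100))  # snob
print(perceived_status(40, 800))    # failure
print(perceived_status(300, 280))   # reciprocal
```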

The result of the pressure for reciprocation is that contacts are reciprocated regardless of personal relations.  Some users even set up ways to automatically follow everyone who follows them. Despite being tricky, these methods escape the personal connection issue. Contrary to Social Networking Systems (and despite the term “friend” used for reciprocated contacts), following someone on a microblogging service implies little in terms of friendship.

One reason I personally find this fascinating is that specifying personal connections has been an important part of the development of social networks online. For instance, long-defunct SixDegrees.com (one of the earliest Social Networking Systems to appear online) required users to specify the precise nature of their relationship to users with whom they were connected. Details escape me but I distinctly remember that acquaintances, colleagues, and friends were distinguished. If I remember correctly, only one such personal connection was allowed for any pair of users and this connection had to be confirmed before the two users were linked through the system. Facebook’s method to account for personal connections is somewhat more sophisticated despite the fact that all contacts are labelled as “friends” regardless of the nature of the connection. The uniform use of the term “friend” has been decried by many public commentators of Facebook (including in the United States where “friend” is often applied to any person with whom one is simply on friendly terms).

In this context, the flexibility with which microblogging contacts are made merits consideration: by allowing unidirectional contacts, microblogging platforms may have solved a tricky social network problem. And while the strength of the connection between two microbloggers is left unacknowledged, there are several methods to assess it (for instance through replies and republished updates).

Social contacts are the very basis of social media. In this case, microblogging represents a step towards both simplified and complexified social contacts.

Which leads me to the theme which prompted me to start this blogpost: event-based microblogging.

I posted the following blog entry (in French) about event-based microblogging, back in November.

Microblogue d’événement (French for “event microblog”)

I haven’t received any direct feedback on it and the topic seems to have had little echo in the social media sphere.

During the last PodMtl meeting on February 18, I tried to throw my event-based microblogging idea in the ring. This generated a rather lengthy discussion between a friend and myself. (Because I don’t want to put words in this friend’s mouth, who happens to be relatively high-profile, I won’t mention this friend’s name.) This friend voiced several objections to my main idea and I got to think about this basic notion a bit further. At the risk of sounding exceedingly opinionated, I must say that my friend’s objections actually strengthened my conviction that my “event microblog” idea makes a lot of sense.

The basic idea is quite simple: microblogging instances tied to specific events. There are technical issues in terms of hosting and such but I’m mostly thinking about associating microblogs and events.

What I had in mind during the PodMtl discussion has to do with grouping features, which are often requested by Twitter users (including by Perry Belcher who called out Twitter Snobs). And while I do insist on events as a basis for those instances (like groups), some of the same logic applies to specific interests. However, given the time-sensitivity of microblogging, I still think that events are more significant in this context than interests, however defined.

In the PodMtl discussion, I frequently referred to BarCamp-like events (in part because my friend and interlocutor had participated in a number of such events). The same concept applies to any event, including one which is just unfolding (say, the assassination of Guinea-Bissau’s president or the bombings in Mumbai).

Microblogging users are expected to think about “hashtags,” those textual labels preceded by the ‘#’ symbol which are meant to categorize microblogging updates. But hashtags are problematic on several levels.

  • They require preliminary agreement among multiple microbloggers, a tricky proposition in any social media. “Let’s use #Bissau09. Everybody agrees with that?” It can get ugly and, even if it doesn’t, the process is awkward (especially for new users).
  • Even if agreement has been reached, there might be discrepancies in the way hashtags are typed. “Was it #TwestivalMtl or #TwestivalMontreal, I forgot.”
  • In terms of language economy, it’s unsurprising that the same hashtag would be used for different things. Is “#pcmtl” about Podcamp Montreal, about personal computers in Montreal, about PCM Transcoding Library…?
  • Hashtags are frequently misunderstood by many microbloggers. Just this week, a tweep of mine (a “peep” on Twitter) asked about them after having been on Twitter for months.
  • While there are multiple ways to track hashtags (including through SMS, in some regions), there is no way to further specify the tracked updates (for instance, by user).
  • The distinction between a hashtag and a keyword is too subtle to be really useful. Twitter Search, for instance, lumps the two together.
  • Hashtags take time to type. Even if microbloggers aren’t necessarily typing frantically, the time taken to type all those hashtags seems counterproductive and may even distract microbloggers.
  • Repetitively typing the same string is a very specific kind of task which seems to go against the microblogging ethos, if not the cognitive processes associated with microblogging.
  • The number of characters in a hashtag decreases the amount of room left for text in every update. When all you have is 140 characters at a time, the thirteen characters in “#TwestivalMtl” constitute almost 10% of your update.
  • If the same hashtag is used by a large number of people, the visual effect can be that this hashtag is actually dominating the microblogging stream. Since there currently isn’t a way to ignore updates containing a certain hashtag, this effect may even discourage people from using a microblogging service.
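The character-budget point in the list above is easy to quantify. A minimal sketch (the regex is a simplification of how real services match hashtags, and the 140-character limit is the classic update length):

```python
import re

LIMIT = 140  # classic microblogging update length

def hashtag_overhead(update):
    """Return the hashtags in an update and the share of the
    character budget they consume."""
    tags = re.findall(r"#\w+", update)  # simplified hashtag pattern
    used = sum(len(t) for t in tags)
    return tags, used / LIMIT

tags, share = hashtag_overhead("Great session at #TwestivalMtl today!")
print(tags)             # ['#TwestivalMtl']
print(round(share, 3))  # 0.093 — almost 10% of the update
```

That single thirteen-character tag eats close to a tenth of the space available, before the update itself has said anything.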

There are multiple solutions to these issues, of course. Some of them are surely discussed among developers of microblogging systems. And my notion of event-specific microblogs isn’t geared toward solving these issues. But I do think separate instances make more sense than hashtags, especially in terms of specific events.

My friend’s objections to my event microblogging idea had something to do with visibility. It seems that this friend wants all updates to be visible, regardless of the context. While I don’t disagree with this, I would claim that it would still be useful to “opt out” of certain discussions when people we follow are involved. If I know that Sean is participating in a PHP conference and that most of his updates will be about PHP for a period of time, I would enjoy the possibility of hiding PHP-related updates for a specific period of time. The reason I talk about this specific case is simple: a friend of mine has manifested some frustration about the large number of updates made by participants in Podcamp Montreal (myself included). Partly in reaction to this, he stopped following me on Twitter and only resumed following me after Podcamp Montreal had ended. In this case, my friend could have hidden Podcamp Montreal updates and still have received other updates from the same microbloggers.

To a certain extent, event-specific instances are a bit similar to “rooms” in MMORPGs and other forms of real-time many-to-many text-based communication such as the nostalgia-inducing Internet Relay Chat. Despite Dave Winer’s strong claim to the contrary (and attempt at defining microblogging away from IRC), a microblogging instance could, in fact, act as a de facto chatroom when such a structure is needed, taking advantage of the work done in microblogging over the past year (which seems to have advanced more rapidly than work on chatrooms has during the past fifteen years). Instead of setting up an IRC channel, a Web-based chatroom, or even a session on MSN Messenger, users could use their microblogging platform of choice and either decide to follow all updates related to a given event or simply not “opt out” of following those updates (depending on their preferences). Updates related to multiple events are visible simultaneously (which isn’t really the case with IRC or chatrooms) and there could be ways to make event-specific updates more prominent. In fact, there would be easy ways to keep real-time statistics of those updates and get a bird’s-eye view of those conversations.

And there’s a point about event-specific microblogging which is likely to both displease “alpha geeks” and convince corporate users: updates about some events could be “protected” in the sense that they would not appear in the public stream in realtime. The simplest case for this could be a company-wide meeting during which backchannel is allowed and even expected “within the walls” of the event. The “nothing should leave this room” attitude seems contradictory to social media in general, but many cases can be made for “confidential microblogging.” Microblogged conversations can easily be archived and these archives could be made public at a later date. Event-specific microblogging allows for some control of the “permeability” of the boundaries surrounding the event. “But why would people use microblogging instead of simply talking to one another?,” you ask. Several quick answers: participants aren’t in the same room, vocal communication is mostly single-channel, large groups of people are unlikely to communicate efficiently through oral means only, several things are more efficiently done through writing, written updates are easier to track and archive…
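The “protected until the archive is released” behaviour can be sketched as a small data structure. Everything here is hypothetical (the class name, the flag names, the usernames); it only models the permeability idea: updates stay within the event’s walls until the archive is made public:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an "event microblog" instance: one microblog
# tied to one event, whose updates can be kept inside the event's
# "walls" and archived for later release. All names are invented.

@dataclass
class EventMicroblog:
    event: str
    protected: bool = True          # "nothing should leave this room"
    _updates: list = field(default_factory=list)

    def post(self, user, text):
        self._updates.append((user, text))

    def public_stream(self, archive_released=False):
        """Updates visible outside the event's boundaries."""
        if self.protected and not archive_released:
            return []               # confidential while the event runs
        return list(self._updates)

mb = EventMicroblog("company-wide meeting")
mb.post("alice", "Backchannel question: budget for Q3?")
print(mb.public_stream())                       # [] — stays within the walls
print(mb.public_stream(archive_released=True))  # archive made public later
```

A public event would simply set `protected=False`, making the instance behave like an ordinary open stream.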

There are many other things I’d like to say about event-based microblogging but this post is already long. There’s one thing I want to explain, which connects back to the social network dimension of microblogging.

Events can be simplistically conceived as social contexts which bring people together. (Yes, duh!) Participants in a given event constitute a “community of experience” regardless of the personal connections between them. They may be strangers, enemies, relatives, acquaintances, friends, etc. But they all share something. “Participation,” in this case, can be relatively passive and the difference between key participants (say, volunteers and lecturers in a conference) and attendees is relatively moot, at a certain level of analysis. The key, here, is the set of connections between people at the event.

These connections are a very powerful component of social networks. We typically meet people through “events,” albeit informal ones. Some events are explicitly meant to connect people who have something in common. In some circles, “networking” refers to something like this. The temporal dimension of social connections is an important one. By analogy to philosophy of language, the “first meeting” (and the set of “first impressions”) constitute the “baptism” of the personal (or social) connection. In social media especially, the nature of social connections tends to be monovalent enough that this “baptism event” gains special significance.

The online construction of social networks relies on a finite number of dimensions, including personal characteristics described in a profile, indirect connections (FOAF), shared interests, textual content, geographical location, and participation in certain activities. Depending on a variety of personal factors, people may be quite inclusive or rather exclusive, based on those dimensions. “I follow back everyone who lives in Austin” or “Only people I have met in person can belong to my inner circle.” The sophistication with which online personal connections are negotiated, along such dimensions, is a thing of beauty. In view of this sophistication, tools used in social media seem relatively crude and underdeveloped.

Going back to the (un)conference concept, the usefulness of having access to a list of all participants in a given event seems quite obvious. In an open event like BarCamp, it could greatly facilitate the event’s logistics. In a closed event with paid access, it could be linked to registration (despite geek resistance, closed events serve a purpose; one could even imagine events where attendance is free but the microblogging backchannel incurs a cost). In some events, everybody would be visible to everybody else. In others, there could be a sort of ACL for diverse types of participants. In some cases, people could be allowed to “lurk” without being seen while in others radical transparency could be enforced. For public events with all participants visible, lists of participants could be archived and used for several purposes (such as assessing which sessions in a conference are more popular or “tracking” event regulars).
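Playing with the idea a bit further, here is a rough sketch of what such visibility policies could look like (the policy names, roles, and structure are hypothetical, not based on any existing platform):

```python
# Hypothetical sketch of per-event visibility policies described above:
# fully transparent events, tiered events (an ACL by participant type),
# and events allowing invisible "lurkers". Names are invented.

VISIBILITY = {
    # Everyone sees everyone: radical transparency.
    "transparent": lambda viewer, target: True,
    # Key participants are always visible; attendees see only them.
    "tiered": lambda viewer, target: (
        target["role"] in {"volunteer", "lecturer"}
        or viewer["role"] != "attendee"
    ),
    # Anyone who opts to lurk stays hidden from everybody.
    "lurkers_allowed": lambda viewer, target: not target.get("lurking", False),
}

def can_see(policy, viewer, target):
    return VISIBILITY[policy](viewer, target)

lurker = {"role": "attendee", "lurking": True}
speaker = {"role": "lecturer"}
print(can_see("lurkers_allowed", speaker, lurker))   # False: lurkers stay hidden
print(can_see("transparent", lurker, speaker))       # True: radical transparency
```

An archived participant list would then simply be the set of people visible under the event’s policy at the time of archiving.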

One reason I keep thinking about event-specific microblogging is that I occasionally use microblogging like others use business cards. In a geek crowd, I may ask for someone’s Twitter username in order to establish a connection with that person. Typically, I will start following that person on Twitter and find opportunities to communicate with that person later on. Given the possibility for one-way relationships, it establishes a social connection without requiring personal involvement. In fact, that person may easily ignore me without the danger of a face threat.

If there were event-specific instances of microblogging platforms, we could manage connections and profiles in a more sophisticated way. For instance, someone could use a barebones profile for contacts made during an impersonal event and a full-fledged profile for contacts made during a more “intimate” event. After noticing a friend using an event-specific business card with an event-specific email address, I got to thinking that this event microblogging idea might serve as a way to fill a social need.

 

More than most of my other blogposts, I expect comments on this one. Objections are obviously welcomed, especially if they’re made thoughtfully (like my PodMtl friend made them). Suggestions would be especially useful. Or even questions about diverse points that I haven’t addressed (several of which I can already think about).

So…

 

What do you think of this idea of event-based microblogging? Would you use a microblogging instance linked to an event, say at an unconference? Can you think of fun features an event-based microblogging instance could have? If you think about similar ideas you’ve seen proposed online, care to share some links?

 

Thanks in advance!

Transparency and Secrecy

Musings on transparency and secrecy, related to both my professional reorientation and my personal life.

[Started working on this post on December 1st, based on something which happened a few days prior. Since then, several things happened which also connected to this post. Thought the timing was right to revisit the entry and finally publish it. Especially since a friend just teased me for not blogging in a while.]

I’m such a strong advocate of transparency that I have a real problem with secrecy.

I know, transparency is not exactly the mirror opposite of secrecy. But I think my transparency-radical perspective causes some problem in terms of secrecy-management.

“Haven’t you been working with a secret society in Mali?,” you ask. Well, yes, I have. And secrecy hasn’t been a problem in that context because it’s codified. Instead of a notion of “absolute secrecy,” the Malian donsow I’ve been working with have a subtle, nuanced, complex, layered, contextually realistic, elaborate, and fascinating perspective on how knowledge is processed, “transmitted,” managed. In fact, my dissertation research had a lot to do with this form of knowledge management. The term “knowledge people” (“karamoko,” from kalan+mogo=learning+people) truly applies to members of hunters’ associations in Mali as well as to other local experts. These people make a clear distinction between knowledge and information. And I can readily relate to their approach. Maybe I’ve “gone native,” but it’s more likely that I was already in that mode before I ever went to Mali (almost 11 years ago).

Of course, a high value for transparency is a hallmark of academia. The notion that “information wants to be free” makes more sense from an academic perspective than from one focused on a currency-based economy. Even when people are clear that “free” stands for “freedom”/«libre» and not for “gratis”/«gratuit» (i.e. “free as in speech, not free as in beer”), there persists a notion that “free comes at a cost” among those people who are so focused on growth and profit. IMHO, most of the issues with the switch to “immaterial economies” (“information economy,” “attention economy,” “digital economy”) have to do with this clash between the value of knowledge and a strict sense of “property value.”

But I digress.

Or, do I…?

The phrase “radical transparency” has been used in business circles related to “information and communication technology,” a context in which the “information wants to be free” stance is almost the basis of a movement.

I’m probably more naïve than most people I have met in Mali. While there, a friend told me that he thought that people from the United States were naïve. While he wasn’t referring to me, I can easily acknowledge that the naïveté he described is probably characteristic of my own attitude. I’m North American enough to accept this.

My dedication to transparency was tested by an apparently banal set of circumstances, a few days before I drafted this post. I was given, in public, information which could potentially be harmful if revealed to a certain person. The harm which could be done is relatively small. The person who gave me that information wasn’t overstating it. The effects of my sharing this information wouldn’t be tragic. But I was torn between my radical transparency stance and my desire to do as little harm as humanly possible. So I refrained from sharing this information and decided to write this post instead.

And this post has been sitting in my “draft box” for a while. I wrote a good number of entries in the meantime but I still had this one at the back of my mind. On the backburner. This is where social media becomes something more of a way of life than an activity. Even when I don’t do anything on this blog, I think about it quite a bit.

As mentioned in the preamble, a number of things have happened since I drafted this post which also relate to transparency and secrecy. Including both professional and personal occurrences. Some of these comfort me in my radical transparency position while others help me manage secrecy in a thoughtful way.

On the professional front, first. I’ve recently signed a freelance ethnography contract with Toronto-based consultancy firm Idea Couture. The contract included a non-disclosure agreement (NDA). Even before signing the contract/NDA, I was asking fellow ethnographer and blogger Morgan Gerard about disclosure. Thanks to him, I now know that I can already disclose several things about this contract and that, once the results are public, I’ll be able to talk about this freely. Which all comforts me on a very deep level. This is precisely the kind of information and knowledge management I can relate to. The level of secrecy is easily understandable (inopportune disclosure could be detrimental to the client). My commitment to transparency is unwavering. If all contracts are like this, I’ll be quite happy to be a freelance ethnographer. It may not be my only job (I already know that I’ll be teaching online, again). But it already fits in my personal approach to information, knowledge, insight.

I’ll surely blog about private-sector ethnography. At this point, I’ve mostly been preparing through reading material in the field and discussing things with friends or colleagues. I was probably even more careful than I needed to be, but I was still able to exchange ideas about market research ethnography with people in diverse fields. I sincerely think that these exchanges not only add value to my current work for Idea Couture but position me quite well for the future. I really am preparing for freelance ethnography. I’m already thinking like a freelance ethnographer.

There’s a surprising degree of “cohesiveness” in my life, these days. Or, at least, I perceive my life as “making sense.”

And different things have made me say that 2009 would be my year. I get additional evidence of this on a regular basis.

Which brings me to personal issues, still about transparency and secrecy.

Something has happened in my personal life, recently, that I’m currently unable to share. It’s a happy circumstance and I’ll be sharing it later, but it’s semi-secret for now.

Thing is, though, transparency was involved in that my dedication to radical transparency has already been paying off in these personal respects. More specifically, my being transparent has been valued rather highly and there’s something about this type of validation which touches me deeply.

As can probably be noticed, I’m also becoming more public about some emotional dimensions of my life. As an artist and a humanist, I’ve always been a sensitive person, in tune with his emotions. Especially positive ones. I now feel accepted as a sensitive person, even if several people in my life tend to push sensitivity to the side. In other words, I’ve grown a lot in the past several months and I now want to share my growth with others. Despite reluctance toward the “touchy-feely,” especially in geek and other male-centric circles, I’ve decided to “let it all loose.” I fully respect those who dislike this. But I need to be myself.

Influence and Butterflies

The social butterfly effect shouldn’t be overshadowed by the notion of influence.

Seems like “influence” is a key theme in social media, these days. An example among several others:

Influenceur, autorité, passeur de culture ou l’un de ces singes exubérants | Mario tout de go. (“Influencer, authority, culture broker, or one of those exuberant monkeys.”)

In that post, Mario Asselin brings together a number of notions which are at the centre of current discussions about social media. The core notion seems to be that “influence” replaces “authority” as a quality or skill some people have, more than others. Some people are “influencers” and, as such, they have a specific power over others. Such a notion seems to be widely held in social media and numerous services exist which are based on the notion that “influence” can be measured.
I don’t disagree. There’s something important, online, which can be called “influence” and which can be measured. To a large extent, it’s related to a number of other concepts such as fame and readership, popularity and network centrality. There are significant differences between all of those concepts but they’re still related. They still depict “social power” which isn’t coercive but is the basis of an obvious stratification.

In some contexts, this is what people mean by “social capital.” I originally thought people meant something closer to Bourdieu but a fellow social scientist made me realise that people are probably using Putnam’s concept instead. I recently learnt that George W. Bush himself used “political capital” in a sense which is fairly similar to what most people seem to mean by “social capital.” Even in that context, “capital” is more specific than “influence.” But the core notion is the same.

To put it bluntly:

Some people are more “important” than others.

Social marketers are especially interested in such a notion. Marketing as a whole is about influence. Social marketing, because it allows for social groups to be relatively amorphous, opposes influence to authority. But influence maintains a connection with “top-down” approaches to marketing.

My own point would be that there’s another kind of influence which is difficult to pinpoint but which is highly significant in social networks: the social butterfly effect.

Yep, I’m still at it after more than three years. It’s even more relevant now than it was then. And I’m now able to describe it more clearly and define it more precisely.

The social butterfly effect is a social network analogue to Edward Lorenz’s well-known “butterfly effect.” Like any analogy, this connection is partial but telling. Like Lorenz’s phrase, “social butterfly effect” is more meaningful than precise. One thing which makes the phrase more important for me is the connection with the notion of a “social butterfly,” which is both a characteristic I have been said to have and a concept I deem important in social science.

I define social butterflies as people who connect to diverse network clusters. Community enthusiast Christine Prefontaine defined social butterflies within (clustered) networks, but I think it’s useful to separate out network clusters. A social butterfly’s network is rather sparse as, on the whole, a small number of people in it have direct connections with one another. But given the topology of most social groups, there likely are clusters within that network. The social butterfly connects these clusters. When the social butterfly is the only node which can connect these clusters directly, her/his “influence” can be as strong as that of a central node in one of these clusters since s/he may be able to bring some new element from one cluster to another.
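For those who like to think in graph terms, this “sole connector” situation can be sketched quite simply (the adjacency structure and names are invented for illustration): a social butterfly is a node whose removal disconnects otherwise-separate clusters.

```python
# Sketch of the "social butterfly" structure described above: a node
# whose removal disconnects otherwise-separate clusters. Uses a plain
# adjacency dict; the data is illustrative.

from collections import deque

def reachable(adj, start, removed):
    """Breadth-first search, skipping one (optionally) removed node."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adj.get(node, ()):
            if neighbour != removed and neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

def is_sole_connector(adj, butterfly, a, b):
    """True if a and b are connected through butterfly, but not without it."""
    with_butterfly = reachable(adj, a, removed=None)
    without_butterfly = reachable(adj, a, removed=butterfly)
    return b in with_butterfly and b not in without_butterfly

# Two clusters {a1, a2} and {b1, b2}, joined only through "butterfly".
adj = {
    "a1": ["a2", "butterfly"], "a2": ["a1"],
    "b1": ["b2", "butterfly"], "b2": ["b1"],
    "butterfly": ["a1", "b1"],
}
print(is_sole_connector(adj, "butterfly", "a2", "b2"))  # True
```

In graph-theoretical jargon, such nodes are close to what is called a cut vertex or articulation point, though the social reading matters more to me than the formalism.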
I like the notion of “repercussion” because it has an auditory sense and it resonates with all sorts of notions I think important without being too buzzwordy. For instance, because expressions like “ripple effect” and “domino effect” are frequently used, they sound like clichés. Obviously, so does “butterfly effect” but I like puns too much to abandon it. From a social perspective, the behaviour of a social butterfly has important “repercussions” in diverse social groups.

Since I define myself as a social butterfly, this all sounds self-serving. And I do pride myself on being a “connector.” Not only in generational terms (I dislike some generational metaphors). But in social terms. I’m rarely, if ever, central to any group. But I’m also especially good at serving as a contact between people from different groups.

Yay, me! 🙂

My thinking about the social butterfly effect isn’t an attempt to put myself on some kind of pedestal. Social butterflies typically don’t have much “power” or “prestige.” Our status is fluid/precarious. I enjoy being a social butterfly but I don’t think we’re better or even more important than anybody else. But I do think that social marketers and other people concerned with “influence” should take us into account.

I say all of this as a social scientist. Some parts of my description are personalized but I’m thinking about a broad stance “from society’s perspective.” In diverse contexts, including this blog, I have been using “sociocentric” in at least three distinct senses: class-based ethnocentrism, a special form of “altrocentrism,” and this “society-centred perspective.” These meanings are distinct enough that they imply homonyms. Social network analysis is typically “egocentric” (“ego-centred”) in that each individual is the centre of her/his own network. This “egocentricity” is both a characteristic of social networks in opposition to other social groups and a methodological issue. It specifically doesn’t imply egotism but it does imply a move away from pre-established social categories. In this sense, social network analysis isn’t “society-centred” and it’s one reason I put so much emphasis on social networks.

In the context of discussions of influence, however, there is a “society-centredness” which needs to be taken into account. The type of “influence” social marketers and others are so interested in relies on defined “spaces.” In some ways, if “so-and-so is influential,” s/he has influence within a specific space, sphere, or context, the boundaries of which may be difficult to define. For marketers, this can bring about the notion of a “market,” including in its regional and demographic senses. This seems to be the main reason for the importance of clusters but it also sounds like a way to recuperate older marketing concepts which seem outdated online.

A related point is the “vertical” dimension of this notion of “influence.” Whether or not it can be measured accurately, it implies some sort of scale. Some people are at the top of the scale, they’re influencers. Those at the bottom are the masses, since we take for granted that pyramids are the main models for social structure. To those of us who favour egalitarianism, there’s something unpalatable about this.
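The “egocentric” approach mentioned above can also be sketched in a few lines (again, with invented data): each individual is treated as the centre of her/his own network, made up of direct contacts and the ties among them.

```python
# Sketch of an "egocentric" (ego-centred) network extraction, as used
# in social network analysis: each individual is the centre of her/his
# own network. The adjacency data is illustrative.

def ego_network(adj, ego):
    """Return the ego plus direct contacts, and the ties among them."""
    alters = set(adj.get(ego, ()))
    members = {ego} | alters
    ties = {
        (a, b)
        for a in members for b in adj.get(a, ())
        if b in members and a < b  # each undirected tie counted once
    }
    return members, ties

adj = {
    "ego": ["ann", "bob"],
    "ann": ["ego", "bob", "outsider"],
    "bob": ["ego", "ann"],
    "outsider": ["ann"],
}
members, ties = ego_network(adj, "ego")
print(sorted(members))  # ['ann', 'bob', 'ego']
```

Note how “outsider” drops out entirely: pre-established social categories play no role, only direct connections to the ego.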
And I would say that online contacts tend toward some form of egalitarianism. To go back to one of my favourite buzzphrases, the notion of attention relates to reciprocity:

It’s an attention economy: you need to pay attention to get attention.

This is one thing journalism tends to “forget.” Relationships between journalists and “people” are asymmetrical. Before writing this post, I read Brian Storm’s commencement speech for the Mizzou J-School. While it does contain some interesting tidbits about the future of journalism, it positions journalists (in this case, recent graduates from an allegedly prestigious school of journalism) away from the masses. To oversimplify, journalists are constructed as those who capture people’s attention by the quality of their work, not by any two-way relationship. Though they rarely discuss this, journalists, especially those in mainstream media, typically perceive themselves as influencers.

Attention often has a temporal dimension which relates to journalism’s obsession with time. Journalists work in time-sensitive contexts, news is timely, audiences spend time with journalistic contents, and journalists fight for this audience time as a scarce resource, especially in connection to radio and television. Much of this likely has to do with the fact that journalism is intimately tied to advertising.

As I write this post, I hear on a radio talk show a short discussion about media coverage of Africa. The topic wakes up the africanist in me. The time devoted to Africa in almost any media outside of Africa is not only very limited but spent on very specific issues having to do with Africa. In mainstream media, Africa only “matters” when major problems occur. Even though most parts of Africa are peaceful and there are many fabulously interesting things occurring throughout the continent, Africa is the “forgotten” continent.

A connection I perceive is that, regardless of any other factor, Africans are taken to not be “influential.” What makes this notion especially strange to an africanist is that influence tends to be a very important matter throughout the continent. Most Africans I know or have heard about have displayed a very nuanced and acute sense of “influence” to the extent that “power” often seems less relevant when working in Africa than different elements of influence. I know full well that, to outsiders to African studies, these claims may sound far-fetched. But there’s a lot to be said about the importance of social networks in Africa and this could help refine a number of notions that I have tagged in this post.

Answers on Expertise

Follow-up to my post on a quest for the origin of the “rule of thumb” about expertise.

As a follow-up on my previous post…

Quest for Expertise « Disparate.

(I was looking for the origin of the “10 years or 10,000 hours to be an expert” claim.)

Interestingly enough, that post is getting a bit of blog attention.

I’m so grateful for this attention that it made me tweet the following:

Trackbacks, pings, and blog comments are blogger gifts.

I also posted a question about this on Mahalo Answers (after the first comment, by Alejna, appeared on my blog, but before other comments and trackbacks appeared). I selected glaspell’s answer as the best answer (glaspell also commented on my blog entry).

At this point, my impression is that what is taken as a “rule” on expertise is a simplification of results from a larger body of research with an emphasis on work by K. Anders Ericsson but with little attention paid to primary sources.

The whole process is quite satisfying, to me. Not just because we might all gain a better understanding of how this “claim” became so generalized, but because the process as a whole shows both powers and limitations of the Internet. I tend to claim (publicly) that the ’Net favours critical thinking (because we eventually take all claims with a grain of salt). But it also seems that, even with well-known research done in English, it can be rather difficult to follow all the connections across the literature. If you think about more obscure work in non-dominant languages, it’s easy to realize that Google’s dream of organizing the world’s information hasn’t yet come true.

By the by, I do realize that my quest was based on a somewhat arbitrary assumption: that this “rule of thumb” is now understood as a solid rule. But what I’ve noticed in popular media since 2006 leads me to believe that the claim is indeed taken as a hard and fast rule.

I’m not blaming anyone, in this case. I don’t think that anyone’s involvement in the “chain of transmission” was voluntarily misleading and I don’t even think that it was that essential. As with many other ideas, what “sticks” is what seems to make sense in context. Actually, this strong tendency for “convenient” ideas to be more widely believed relates to a set of tricky issues with which academics have to deal, on a daily basis. Sagan’s well-known “baloney detector” is useful, here. But it’s not in wide use.

One thing which should also be clear: I’m not saying that Ericsson and other researchers have done anything shoddy or inappropriate. Their work is being used outside of its original context, which is often an issue.

Mass media coverage of academic research was the basis of series of entries on the original Language Log, including one of my favourite blogposts, Mark Liberman’s Language Log: Raising standards — by lowering them. The main point, I think, is that secluded academics in the Ivory Tower do little to alleviate this problem.

But I digress.

And I should probably reply to the other comments on the entry itself.

Quest for Expertise

Who came up with the “rule of thumb” which says that it takes “ten (10) years and/or ten thousand (10,000) hours to become an expert?”

Will at Work Learning: People remember 10%, 20%…Oh Really?.

This post was mentioned on the mailing-list for the Society for Teaching and Learning in Higher Education (STLHE-L).

In that post, Will Thalheimer traces back a well-known claim about learning to shoddy citations. While it doesn’t invalidate the base claim (that people tend to retain more information through certain cognitive processes), Thalheimer does a good job of showing how a graph which has frequently been seen in educational fields was based on faulty interpretation of work by prominent scholars, mixed with some results from other sources.

Quite interesting. IMHO, demystification and critical thinking are among the most important things we can do in academia. In fact, through training in folkloristics, I have become quite accustomed to this specific type of debunking.

I have in mind a somewhat similar claim that I’m currently trying to trace. Preliminary searches seem to imply that citations of original statements have a similar hyperbolic effect on the status of this claim.

The claim is a type of “rule of thumb” in cognitive science. A generic version could be stated in the following way:

It takes ten years or 10,000 hours to become an expert in any field.

The claim is a rather famous one from cognitive science. I’ve heard it uttered by colleagues with a background in cognitive science. In 2006, I first heard about such a claim from Philip E. Ross, on an episode of Scientific American‘s Science Talk podcast in which he discussed his article on expertise. I later read a similar claim in Daniel Levitin’s 2006 This Is Your Brain On Music. The clearest statement I could track down in Levitin’s book is the following (p. 193):

The emerging picture from such studies is that ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert – in anything.

More recently, during a keynote speech he was giving as part of his latest book tour, I heard a similar claim from presenter extraordinaire Malcolm Gladwell. AFAICT, this claim sits at the centre of Gladwell’s recent book: Outliers: The Story of Success. In fact, it seems that Gladwell uses the same quote from Levitin, on page 40 of Outliers (I just found that out).

I would like to pinpoint the origin of the claim. Contrary to Thalheimer’s debunking, I don’t expect that my search will show that the claim is inaccurate. But I do suspect that the “rule of thumb” versions may be a bit misleading. I already notice that most people who make such claims are doing so without direct reference to the primary literature. This latter comment isn’t damning: in informal contexts, constant referral to primary sources can be extremely cumbersome. But it could still be useful to clear up the issue. Who made this original claim?

I’ve tried a few things already but it’s not working so well. I’m collecting a lot of references, to both online and printed material. Apart from Levitin’s book and a few online comments, I haven’t yet read the material. Eventually, I’d probably like to find a good reference on the cognitive basis for expertise which puts this “rule of thumb” in context and provides more elaborate data on different things which can be done during that extensive “time on task” (including possible skill transfer).

But I should proceed somewhat methodically. This blogpost is but a preliminary step in this process.

Since Philip E. Ross is the first person on record I heard talk about this claim, a logical first step for me is to look through this SciAm article. Doing some text searches on the printable version of his piece, I find a few interesting things including the following (on page 4 of the standard version):

Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field.

Apart from the ten thousand (10,000) hours part of the claim, this is about as clear a statement as I’m looking for. The “Simon” in question is Herbert A. Simon, who did research on chess at the Department of Psychology at Carnegie-Mellon University with colleague William G. Chase.  So I dig for diverse combinations of “Herbert Simon,” “ten(10)-year rule,” “William Chase,” “expert(ise),” and/or “chess.” I eventually find two primary texts by those two authors, both from 1973: (Chase and Simon, 1973a) and (Chase and Simon, 1973b).
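As an aside, the ten-year and ten-thousand-hour versions of the claim are roughly consistent with one another under a sustained practice schedule. The numbers below are my own assumption, not anything from Simon, Chase, or Ericsson:

```python
# Rough arithmetic connecting the "10 years" and "10,000 hours"
# versions of the claim. The weekly schedule is an assumption on my
# part, not a figure from the primary sources.
hours_per_week = 20   # assumed steady practice schedule
weeks_per_year = 50
years = 10

total_hours = hours_per_week * weeks_per_year * years
print(total_hours)  # 10000
```

In other words, about twenty hours a week of “heavy labor” over a decade lands right on the ten-thousand-hour figure, which may be why the two versions circulate so interchangeably.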

The first (1973a) is an article from Cognitive Psychology 4(1): 55-81, available for download on ScienceDirect (toll access). Through text searches for obvious words like “hour*,” “year*,” “time,” or even “ten,” it seems that this article doesn’t include any specific statement about the amount of time required to become an expert. The quote which appears to be the most relevant is the following:

Behind this perceptual analysis, as with all skills (cf., Fitts & Posner, 1967), lies an extensive cognitive apparatus amassed through years of constant practice.

While it does relate to the notion that there’s a cognitive basis to practice, the statement is generic enough to be far from the “rule of thumb.”

The second Chase and Simon reference (1973b) is a chapter entitled “The Mind’s Eye in Chess” (pp. 215-281) in the proceedings of the Eighth Carnegie Symposium on Cognition as edited by William Chase and published by Academic Press under the title Visual Information Processing. I borrowed a copy of those proceedings from Concordia and have been scanning that chapter visually for some statements about the “time on task.” Though that symposium occurred in 1972 (before the first Chase and Simon reference was published), the proceedings were apparently published after the issue of Cognitive Psychology since the authors mention that article for background information.

I do find some interesting quotes, but nothing that specific:

By a rough estimate, the amount of time each player has spent playing chess, studying chess, and otherwise staring at chess positions is perhaps 10,000 to 50,000 hours for the Master; 1,000 to 5,000 hours for the Class A player; and less than 100 hours for the beginner. (Chase and Simon 1973b: 219)

or:

The organization of the Master’s elaborate repertoire of information takes thousands of hours to build up, and the same is true of any skilled task (e.g., football, music). That is why practice is the major independent variable in the acquisition of skill. (Chase and Simon 1973b: 279, emphasis in the original, last sentences in the text)

Maybe I haven’t scanned these texts properly, but the quotes I did find seem to imply that Simon hadn’t really devised his “10-year rule” in a clear, numeric version.

I could probably dig for more Herbert Simon wisdom. Before looking (however cursorily) at those 1973 texts, I was using Herbert Simon as a key figure in the origin of that “rule of thumb.” To back up those statements, I should probably dig deeper in the Herbert Simon archives. But that might require more work than is necessary and it might be useful to dig through other sources.

In my personal case, the other main written source for this “rule of thumb” is Dan Levitin. So, using online versions of his book, I look for comments about expertise. (I do own a copy of the book and I’m assuming the Index contains page numbers for references on expertise. But online searches are more efficient and possibly more thorough on specific keywords.) That’s how I found the statement, quoted above. I’m sure it’s the one which was sticking in my head and, as I found out tonight, it’s the one Gladwell used in his first statement on expertise in Outliers.

So, where did Levitin get this? I could possibly ask him (we’ve been in touch and he happens to be local) but looking for those references might require work on his part. A preliminary step would be to look through Levitin’s published references for Your Brain On Music.

Though Levitin is a McGill professor, Your Brain On Music doesn’t follow the typical practice in English-speaking academia of ladling copious citations onto any claim, even the most truistic statements. Nothing strange in this difference in citation practice. After all, as Levitin explains in his Bibliographic Notes:

This book was written for the non-specialist and not for my colleagues, and so I have tried to simplify topics without oversimplifying them.

In this context, academic-style citation-fests would make the book too heavy. Levitin does, however, provide those “Bibliographic Notes” at the end of his book and on the website for the same book. In the Bibliographic Notes of that site, Levitin adds a statement I find quite interesting in my quest for “sources of claims”:

Because I wrote this book for the general reader, I want to emphasize that there are no new ideas presented in this book, no ideas that have not already been presented in scientific and scholarly journals as listed below.

So, it sounds like going through those references is a good strategy to locate at least solid references on that specific “10,000 hour” claim. Among relevant references on the cognitive basis of expertise (in Chapter 7), I notice the following texts which might include specific statements about the “time on task” to become an expert. (An advantage of the Web version of these bibliographic notes is that Levitin provides some comments on most references; I put Levitin’s comments in parentheses.)

  • Chi, Michelene T.H., Robert Glaser, and Marshall J. Farr, eds. 1988. The Nature of Expertise. Hillsdale, New Jersey: Lawrence Erlbaum Associates. (Psychological studies of expertise, including chess players)
  • Ericsson, K. A., and J. Smith, eds. 1991. Toward a General Theory of Expertise: prospects and limits. New York: Cambridge University Press. (Psychological studies of expertise, including chess players)
  • Hayes, J. R. 1985. Three problems in teaching general skills. In Thinking and Learning Skills: Research and Open Questions, edited by S. F. Chipman, J. W. Segal and R. Glaser. Hillsdale, NJ: Erlbaum. (Source for the study of Mozart’s early works not being highly regarded, and refutation that Mozart didn’t need 10,000 hours like everyone else to become an expert.)
  • Howe, M. J. A., J. W. Davidson, and J. A. Sloboda. 1998. Innate talents: Reality or myth? Behavioral & Brain Sciences 21 (3):399-442. (One of my favorite articles, although I don’t agree with everything in it; an overview of the “talent is a myth” viewpoint.)
  • Sloboda, J. A. 1991. Musical expertise. In Toward a general theory of expertise, edited by K. A. Ericcson (sic) and J. Smith. New York: Cambridge University Press. (Overview of issues and findings in musical expertise literature)

I have yet to read any of those references. I did borrow Ericsson and Smith when I first heard about Levitin’s approach to talent and expertise (probably through a radio and/or podcast appearance). But I had put the issue of expertise on the back-burner. It was always at the back of my mind and I did blog about it, back then. But it took Gladwell’s talk to wake me up. What’s funny, though, is that the “time on task” statements in Ericsson and Smith (1991) seem to lead back to Chase and Simon (1973b).

At this point, I get the impression that the “it takes a decade and/or 10,000 hours to become an expert” claim:

  • was originally proposed as a vague hypothesis a while ago (the year 1899 comes up);
  • became an object of some consideration by cognitive psychologists at the end of the 1960s;
  • became more widely accepted in the 1970s;
  • was tested by Benjamin Bloom and others in the 1980s;
  • was made more precise by Ericsson and others in the late 1980s;
  • gained general popularity in the mid-2000s;
  • is being further popularized by Malcolm Gladwell in late 2008.

Of course, I’ll have to do a fair bit of digging and reading to verify any of this, but it sounds like the broad timeline makes some sense. One thing, though, is that it doesn’t really seem that anybody had the intention of spelling it out as a “rule” or “law” in such a format as is being carried around. If I’m wrong, I’m especially surprised that a clear formulation isn’t easier to find.
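As a back-of-the-envelope check (my own arithmetic, not anything stated by Levitin or Gladwell), the two formulations of the claim line up only under some assumed daily practice load; three hours a day is the figure usually used for illustration:

```python
# Rough arithmetic linking "10,000 hours" to "a decade".
# The 3 hours/day figure is an assumption for illustration,
# not something taken from Levitin's or Gladwell's texts.
hours_total = 10_000
hours_per_day = 3
days_per_year = 365

years = hours_total / (hours_per_day * days_per_year)
print(round(years, 1))  # → 9.1, i.e. roughly a decade
```

Which is presumably why the “decade” and “10,000 hours” versions of the claim get used interchangeably.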

As an aside, of sorts… Some people seem to associate the claim with Gladwell, at this point. Not very surprising, given the popularity of his books, the effectiveness of his public presentations, the current context of his book tour, and the reluctance of the general public to dig any deeper than the latest source.

The problem, though, is that it doesn’t seem that Gladwell himself has done anything to “set the record straight.” He does quote Levitin in Outliers, but I heard him reply to questions and comments as if the research behind the “ten years or ten thousand hours” claim had some association with him. From a popular author like Gladwell, it’s not that awkward. But these situations are perfect opportunities for popularizers like Gladwell to get a broader public interested in academia. As Gladwell allegedly cares about “educational success” (as measured on a linear scale), I would have expected more transparency.

Ah, well…

So, I have some work to do on all of this. It will have to wait but this placeholder might be helpful. In fact, I’ll use it to collect some links.

 

Some relevant blogposts of mine on talent, expertise, effort, and Levitin.

And a whole bunch of weblinks to help me in my future searches (I have yet to really delve in any of this).

My Problem With Journalism

I hate having an axe to grind. Really, I do. “It’s unlike me.” When I notice that I catch myself grinding an axe, I “get on my own case.” I can be quite harsh with my own self.

But I’ve been trained to voice my concerns. And I’ve been perceiving an important social problem for a while.

So I “can’t keep quiet about it.”

If everything goes really well, posting this blog entry might be liberating enough that I will no longer have any axe to grind. Even if it doesn’t go as well as I hope, it’ll be useful to keep this post around so that people can understand my position.

Because I don’t necessarily want people to agree with me. I mostly want them to understand “where I come from.”

So, here goes:

Journalism may have outlived its usefulness.

Like several other “-isms” (including nationalism, colonialism, imperialism, and racism) journalism is counterproductive in the current state of society.

This isn’t an ethical stance, though there are ethical positions which go with it. It’s a statement about the anachronistic nature of journalism. As per functional analysis, everything in society needs a function if it is to be maintained. What has been known as journalism is now taking on new functions. Eventually, “journalism as we know it” should, logically, make way for new forms.

What these new forms might be, I won’t elaborate in this post. I have multiple ideas, especially given well-publicised interests in social media. But this post isn’t about “the future of journalism.”

It’s about the end of journalism.

Or, at least, my looking forward to the end of journalism.

Now, I’m not saying that journalists are bad people and that they should just lose their jobs. I do think that those who were trained as journalists need to retool themselves, but this post isn’t about that either.

It’s about an axe I’ve been grinding.

See, I can admit it, I’ve been making some rather negative comments about diverse behaviours and statements, by media people. It has even become a habit of mine to allow myself to comment on something a journalist has said, if I feel that there is an issue.

Yes, I know: journalists are people too, they deserve my respect.

And I do respect them, the same way I respect every human being. I just won’t give them the satisfaction of my putting them on a pedestal. In my mind, journalists are people: just like anybody else. They deserve no special treatment. And several of them have been arrogant enough that I can’t help turning their arrogance back to them.

Still, it’s not about journalist as people. It’s about journalism “as an occupation.” And as a system. An outdated system.

Speaking of dates, some context…

I was born in 1972 and, originally, I was quite taken by journalism.

By age twelve, I was pretty much a news junkie. Seriously! I was “consuming” a lot of media at that point. And I was “into” media. Mostly television and radio, with some print mixed in, as well as lots of literary work for context: this is when I first read French and Russian authors from the late 19th and early 20th centuries.

I kept thinking about what was happening in The World. Back in 1984, the Cold War was a major issue. To a French-Canadian tween, this mostly meant thinking about the fact that there were (allegedly) US and USSR “bombs pointed at us,” for reasons beyond our direct control.

“Caring about The World” also meant thinking about all sorts of problems happening across The Globe. Especially poverty, hunger, diseases, and wars. I distinctly remember caring about the famine in Ethiopia. And when We Are the World started playing everywhere, I felt like something was finally happening.

This was one of my first steps toward cynicism. And I’m happy it occurred at age twelve because it allowed me to eventually “snap out of it.” Oh, sure, I can still be a cynic on occasion. But my cynicism is contextual. I’m not sure things would have been as happiness-inducing for me if it hadn’t been for that early start in cynicism.

Because, you see, The World quickly lost interest in the plight of Ethiopians. I distinctly remember asking myself, after the media frenzy died out, what had happened to Ethiopians in the meantime. I’m sure there was some report at the time claiming that the famine was over and that the situation was “back to normal.” But I didn’t hear anything about it, and I was looking. As a twelve-year-old French-Canadian with no access to a modem, I had no direct access to information about the situation in Ethiopia.

Ethiopia remained a symbol, to me, of an issue to be solved. It’s not the direct cause of my later becoming an africanist. But, come to think of it, there might be a connection, deeper down than I had been looking.

So, by the end of the Ethiopian famine of 1984-85, I was “losing my faith in” journalism.

I clearly haven’t gained a new faith in journalism. And it all makes me feel quite good, actually. I simply don’t need that kind of faith. I was already training myself to be a critical thinker. Sounds self-serving? Well, sorry. I’m just being honest. What’s a blog if the author isn’t honest and genuine?

Flash forward to 1991, when I started formal training in anthropology. The feeling was exhilarating. I finally felt like I belonged. My statement at the time was to the effect that “I wasn’t meant for anthropology: anthropology was meant for me!” And I was learning quite a bit about/from The World. At that point, it already did mean “The Whole Wide World,” even though my knowledge of that World was fairly limited. And it was a haven of critical thinking.

Ideal, I tell you. Moan all you want, it felt like the ideal place at the ideal time.

And, during the summer of 1993, it all happened: I learnt about the existence of the “Internet.” And it changed my life. Seriously, the ‘Net did have a large part to play in important changes in my life.

That event, my discovery of the ‘Net, also has a connection to journalism. The person who described the Internet to me was Kevin Tuite, one of my linguistic anthropology teachers at Université de Montréal. As far as I can remember, Kevin was mostly describing Usenet. But the potential for “relatively unmediated communication” was already a big selling point. Kevin talked about the fact that members of the Caucasian diaspora were able to use the Internet to discuss issues pertaining to the newly independent republics, after the fall of the USSR, with their relatives and friends back in the Caucasus. All this while media coverage was sketchy at best (it sounded like journalism still had a hard time coping with the new realities).

As you can imagine, I was more than intrigued and I applied for an account as soon as possible. In the meantime, I bought a 2400 baud modem, joined some local BBSes, and got to chat about the Internet with several friends, some of whom already had accounts. I got my first email account just before the semester started, in August 1993. I can still see traces of that account, but only since April 1994 (I guess I wasn’t using my address in my signature before then). I’ve been an enthusiastic user of diverse Internet-based means of communication ever since.

But coming back to journalism, specifically…

Journalism missed the switch.

During the past fifteen years, I’ve been amazed at how clueless members of mainstream media institutions have been to “the power of the Internet.” This was during Wired Magazine’s first year as a print magazine and we (some friends and I) were already commenting upon the fact that print journalists should look at what was coming. Eventually, they would need to adapt. “The Internet changes everything,” I thought.

No, I didn’t mean that the Internet would cause any of the significant changes that we have been seeing around us. I tend to be against technological determinism (and other McLuhanesque tendencies). Not that I prefer sociological determinism, yet I can’t help but think that, from ARPAnet to the current state of the Internet, most of the important changes have been primarily social: if the Internet became something, it’s because people are making it so, not because of some inexorable technological development.

My enthusiastic perspective on the Internet was largely motivated by the notion that it would allow people to go beyond the model from the journalism era. Honestly, I could see the end of “journalism as we knew it.” And I’m surprised, fifteen years later, that journalism has been among the slowest institutions to adapt.

In a sense, my main problem with journalism is that it maintains a very stratified structure which gives too much weight to the credibility of specific individuals. Editors and journalists, who are part of the “medium” in the old models of communication, have taken on a gatekeeping role despite the fact that they are rarely much more proficient thinkers than the people who read them. “Gatekeepers” even constitute a “textbook case” in sociology, especially in conflict theory. Though I can perceive how “constructed” that gatekeeping model may be, I can easily relate to what it entails in terms of journalism.

There’s a type of arrogance embedded in journalistic self-perception: “we’re journalists/editors so we know better than you; you need us to process information for you.” Regardless of how much I may disagree with some of his words and actions, I take solace in the fact that Murdoch, a key figure in today’s mainstream media, spoke directly to this arrogance. Of course, he might have been pandering. But the very fact that he can acknowledge journalistic arrogance is, in my mind, quite helpful.

I think the days of fully stratified gatekeeping (a “top-down approach” to information filtering) are over. Now that information is easily available and that knowledge is constructed socially, any “filtering” method can be distributed. I’m not really thinking of a “cream rises to the top” model. An analogy with water sources going through multiple layers of mountain rock would be more appropriate to a Swiss citizen such as myself. But the model I have in mind is more about what Bakhtin called “polyvocality” and what has become an ethical position on “giving voice to the other.” Journalism has taken voice away from people. I have in mind a distributed mode of knowledge construction which gives everyone enough voice to have long-distance effects.

At the risk of sounding too abstract (it’s actually very clear in my mind, but it requires a long description), it’s a blend of ideas like: the social butterfly effect, a post-encyclopedic world, and cultural awareness. All of these, in my mind, contribute to this heightened form of critical thinking away from which I feel journalism has led us.

The social butterfly effect is fairly easy to understand, especially now that social networks are so prominent. Basically, it’s the “butterfly effect” from chaos theory applied to social networks. In this context, a “social butterfly” is a node in multiple networks of varying degrees of density and clustering. Because such a “social butterfly” can bring things (ideas, especially) from one such network to another, I argue that her or his ultimate influence (in aggregate) is larger than that of someone who sits at the core of a highly clustered network. Yes, it’s related to “weak ties” and other network classics. But it’s a bit more specific, at least in my mind. In terms of journalism, the social butterfly effect implies that the way knowledge is constructed need not come from a singular source or channel.
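To make this a bit more concrete, here’s a toy sketch (my own illustration; the node names, cluster sizes, and the two-triangle topology are all made up for the example). It shows that a “butterfly” node bridging two dense clusters is a cut vertex: remove it and ideas can no longer flow between the clusters, whereas removing a core cluster member changes little.

```python
from collections import deque

def reachable(adj, start, removed=None):
    """Return the set of nodes reachable from `start` by BFS,
    pretending the node `removed` has left the network."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr != removed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# Two dense clusters (triangles) joined only through one "butterfly".
edges = [("a1", "a2"), ("a1", "a3"), ("a2", "a3"),
         ("b1", "b2"), ("b1", "b3"), ("b2", "b3"),
         ("fly", "a1"), ("fly", "b1")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

# Removing a core member barely matters: 6 of the 7 nodes stay connected.
print(len(reachable(adj, "a2", removed="a3")))  # → 6
# Removing the butterfly splits the network: a2 reaches only its own cluster.
print(len(reachable(adj, "a2", removed="fly")))  # → 3
```

This is, roughly, the structural side of Granovetter-style “weak ties”: the butterfly’s two thin connections carry more aggregate influence than any of the thick in-cluster ones.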

The “encyclopedic world” I have in mind is that of our good friends from the French Enlightenment: Diderot and the gang. At that time, there was a notion that the sum of all knowledge could be contained in the Encyclopédie. Of course, I’m simplifying. But such a notion is still discussed fairly frequently. The world in which we now live has clearly challenged this encyclopedic notion of exhaustiveness. Sure, certain people hold on to that notion. But it’s not taken for granted as “uncontroversial.” Actually, those who hold on to it tend to respond rather positively to the journalistic perspective on human events. As should be obvious, I think the days of that encyclopedic worldview are numbered and that “journalism as we know it” will die at the same time. Though it seems to be built on an “encyclopedia” frame, Wikipedia clearly benefits from a distributed model of knowledge management. In this sense, Wikipedia is less anachronistic than Britannica. Wikipedia also tends to be more insightful than Britannica.

The cultural awareness point may sound like an ethnographer’s pipe dream. But I perceive a clear connection between Globalization and a certain form of cultural awareness in information and knowledge management. This is probably where the Global Voices model can come in. One of the most useful representations of that model comes from Chris Lydon’s Open Source conversation with Solana Larsen and Ethan Zuckerman. Simply put, I feel that this model challenges journalism’s ethnocentrism.

Obviously, I have many other things to say about journalism (as well as about its correlate, nationalism).

But I do feel liberated already. So I’ll leave it at that.

I Am Not a Guru

“Nor do I play one online!”

The “I am not a…” phrase is often used as a disclaimer when one is giving advice. Especially in online contexts having to do with law, in which case the IANAL acronym can be used, and understood.

I’m not writing this to give advice. (Even though I could!) I’ve simply been thinking about social media a fair deal, recently, and thought I’d share a few thoughts.

I’ve been on the record as saying that I have a hard time selling my expertise. It’s not through lack of self-confidence (though I did have problems with this in the past), nor is it that my expertise is difficult to sell. It’s simply a matter of seeing myself as a friendly humanist, not as a brand to sell. To a certain extent, this post is an extension of the same line of thinking.

I’m also going back to my post about “the ‘social’ in ‘social media/marketing/web’” as I tend to position myself as an ethnographer and social scientist (I teach anthropology, sociology, and folkloristics). Simply put, I do participant-observation in social media spheres. I haven’t done formal research on the subject, nor have I taught in that field. But I did gain some insight in terms of what social media entails.

Again, I’m no guru. I’m just a social geek.

The direct prompt for this blogpost is a friend’s message in which he asked me for advice on the use of social media to market his creative work. Not that he framed his question in precisely those terms but the basic idea was there.

As he’s a friend, I answered him candidly, not trying to sell my social media expertise to him. But, after sending that message, I got to think about the fact that I’m not selling my social media expertise to anyone.

One reason is that I’m no salesman. Not only do I perceive myself as “too frank to be a salesman” (more on the assumptions later), but I simply do not have the skills to sell anything. Some people are so good at sales pitches that they could create needs where there are none (the joke about refrigerators and “Eskimos” is too much of an ethnic slur to be appropriate). I’ve been on the record saying that “I couldn’t sell bread for a penny” (to a rich yet starving person).

None of this means that I haven’t had any influence on any purchasing pattern. In fact, that long thread in which I confessed my lack of salesman skills was the impulse (direct or indirect) behind the purchase of a significant number of stovetop coffee devices and this “influence” has been addressed explicitly. It’s just that my influence tends to be more subtle, more “diffuse.” Influence based on participation in diverse groups. It’s one reason I keep talking about the “social butterfly effect.”

Coming back to social media and social marketing.

First, some working definitions. By “social media” I usually mean blogs, podcasts, social networking systems, and microblogs. My usage also involves any participatory use of the Internet and any alternative to “mainstream media” (MSM) which makes use of online contacts between human beings. “Social marketing” is, to me, the use of social media to market and sell a variety of things online, including “people as brands.” This notion connects directly to a specific meaning of “social capital” which, come to think of it, probably has more to do with Putnam than Bourdieu (PDF version of an article about both versions).

Other people, I admit, probably have much better ways to define those concepts. But those definitions are appropriate in the present context. I mostly wanted to talk about gurus.

Social Guru

I notice guru-like behaviour in the social media/marketing sphere.

I’m not targeting individuals, though the behaviour is adopted by specific people. Not everyone is acting as a “social media guru” or “social marketing guru.” The guru-like behaviour is in fact quite specific and not as common as some would think.

Neither am I saying that guru-like behaviour is inappropriate. I’m not blaming anyone for acting like a guru. I’m mostly distancing myself from that behaviour. Trying to show that it’s one model for behaviour in the social media/marketing sphere.

It should go without saying: I’m not using the term “guru” in a literal sense it might have in South Asia. That kind of guru I might not distance myself from as quickly. Especially if we think about “teachers as personal trainers.” But I’m using “guru” in reference to an Anglo-American phenomenon having to do with expertise and prestige.

Guru-like behaviour, as noticed in the social media/marketing sphere, has to do with “portraying oneself as an expert holding a secret key which can open the doors to instant success.” Self-assurance is involved, of course. But there’s also a degree of mystification. And though this isn’t a rant against people who adopt this kind of behaviour, I must admit that I have negative reactions to any kind of mystification.

There’s a difference between mystery and mystification. Something that is mysterious is difficult to explain “by its very nature.” Mystification involves withholding information to prevent knowledge. As an academic, I have been trained to fight obscurantism of any kind. Mystification seems counterproductive. “Information Wants to be Free.”

This is not to say that I dislike ambiguity, double-entendres, or even secrets. In fact, I often use ambiguity in a playful manner and, working with a freemasonry-like secret association, I do understand the value of the most restrictive knowledge management practices. But I find limited value in restricting information when knowledge can be beneficial to everyone. As in Eco’s The Name of the Rose, subversive ideas find their way out of attempts to hide them.

Another aspect of guru-like behaviour which tends to bother me is that I can’t help but find it empty. As some would say, “there needs to be a ‘there’ there.” With social media/marketing, the behaviour I’m alluding to seems to imply that there is, in fact, some “secret key to open all doors.” Yet, as I scratch beneath the surface, I find something hollow. (The image I have in mind is that of a chocolate Easter egg. But any kind of trompe-l’œil would work.)

Obviously, I’m not saying that there’s “nothing to” social media/marketing. Those who dismiss social media and/or social marketing sound to me like curmudgeons or naysayers. “There’s nothing new, here. It’s just the same thing as what it always was. Buy my book to read all about what nonsense this all is.” (A bit self-serving, don’t you think?)

And I’m not saying that I know what there is in social media and marketing which is worth using. That would not only be quite presumptuous but it would also represent social media and marketing in a more simplified manner than I feel it deserves.

I’m just saying that caution should be used with people who claim they know everything there is to know about social media and social marketing. In other words, “be careful when someone promises to make you succeed through the Internet.” Sounds obvious, but some people still fall prey to grandiose claims.

Having said this, I’ll keep on posting some of my thoughts about social media and social marketing. I might be way off, so “don’t quote me on this.” (You can actually quote me but don’t give my ideas too much credit.)

Intello-Bullying

A topic which I’ll revisit, to be sure. But while I’m at it…
I tend to react rather strongly to a behaviour which I consider the intellectual equivalent of schoolyard bullying.
Notice that I don’t claim to be above this kind of behaviour. I’m not. In fact, one reason for my blogging this is that I have given some thought to my typical anti-bullying reaction. Not that I feel bad about it. But I do wonder if it might not be a good idea to adopt a variety of mechanisms to respond to bullying, in conjunction with my more “gut response” knee-jerk reactions and habits.
Notice also that I’m not describing individual bullies. I’m not complaining about persons. I’m thinking about behaviour. Granted, certain behaviours are typically associated with certain people and bullying is no exception. But instead of blaming, I’d like to assess, at least as a step in a given direction. What can I do? I’m an ethnographer.
Like schoolyard bullying, intello-bullying is based on a perceived strength used to exploit and/or harm those who are perceived as weaker. Like physical strength, the perception of “intellectual strength” on which intello-bullying is based need not have any objective validity. We’re in subjectivity territory, here. And subjects perceive in patterned but often obscure ways. Those who think of themselves as “strong,” in intellectual as well as physical senses, are sometimes the people who are most insecure as to their overall strengths and weaknesses.
Unlike schoolyard bullying, intello-bullying can be, and often is, originated by otherwise reasonably mature people. In fact, some of the most aggressive intello-bullying comes from well-respected “career intellectuals” who “should know better.” Come to think of it, this type of bullying is probably the one I personally find the most problematic. But, again, I’m not talking about bullies. I’m not describing people. I’m talking about behaviour. And the implications of behaviour.
My personal reactions may come from remnants of my impostor syndrome. Or maybe they come from a non-exclusive sense of self-worth that I found lying around in my life, as I was getting my happiness back. As much as I try, I can’t help but feel that intello-bullying is a sign of intellectual self-absorption, which eventually links back to weakness. Sorry, folks, but it seems to me that if you feel the need, even temporarily, to impose your intellectual strength on those you perceive as intellectually weak, I’ll assume you may “have issues to solve.” In fact, I react the same way when I perceive my own behaviour as tantamount to bullying. It’s the behaviour I have issues with. Not the person.
And this is the basis of my knee-jerks: when I witness bullying, I turn into a bully’s bully. Yeah, pretty dangerous. And quite unexpected for a lifelong pacifist like yours truly. But, at least I can talk and think about it. Unapologetically.
You know, this isn’t something I started doing yesterday. In fact, it may be part of a long-standing mission of mine. Half-implicit at first. Currently “assumed,” assessed, acknowledged. Accepted.
Before you blame me for the appearance of an “avenger complex” in this description, please give some more thought to bullying in general. My hunch is that many of you will admit that you value the existence of anti-bullies in schoolyards or in other contexts. You may prefer it if cases of bullying are solved through other means (sanction by school officials or by parents, creation of safe zones…). But I’d be somewhat surprised if your thoughts about bullying prevention left no room for non-violent but strength-based control by peers. If that is the case, I’d be very interested in your comments on the issue. After all, I may be the victim of some idiosyncratic notion of justice which you find inappropriate. I’m always willing to relativize.
Bear in mind that I’m not talking about retaliation. Though it may sound like it, this is no “eye for an eye” rule. Nor is it “present the left cheek.” It’s more like crowd control. Or that form of “non-abusive” technique used by occupational therapists and others while helping patients/clients who are “disorganizing.” Basically, I’m talking about responding to (intello-)bullying calmly, but with some strength being asserted. When it comes to “fighting with words,” in my case, it may sound smug and even a bit dismissive. But it’s a localized smugness which I have a hard time finding unhealthy.
In a sense, I hope I’m talking about “taking the high road.” With a bit of self-centredness which has altruistic goals. “I’ll act as if I were stronger than you, because you used your perceived strength to dominate somebody else. I don’t have anything against you but I feel you should be put in your place. Don’t make me go to the next step, through which I can make you weep.”
At this point, I’m thinking martial arts. I don’t practise any martial art but, as an outsider, I get the impression this thinking goes well with some martial arts. Maybe judo, which allegedly relies on using your opponent’s strength. Or Tae Kwon Do, which always sounded “assertive yet peaceful” when described by practitioners.
The corollary of all this is my attitude toward those who perceive themselves as weak. I have this strong tendency to want them to feel stronger. Both out of this idiosyncratic attitude toward justice and because of my compulsive empathy. So, when someone says something like “I’m not that smart” or “I don’t have anything to contribute,” I switch to the “nurturing mode” that I may occasionally use in class or with children. I mean not to patronize, though it probably sounds paternalistic to outside observers. It’s just a reaction I have. I don’t even think its consequences are that negative in most contexts.
Academic contexts are full of cases of intello-bullying. Classrooms, conferences, outings… Put a group of academics in a room and, unless there’s a strong sense of community (Turner would say “communitas”), intello-bullying is likely to occur. At the very least, you may witness posturing, which I consider a mild form of bullying. It can be as subtle as a tricky question asked of someone who is unlikely to provide a face-saving answer, and it can be as aggressive as questioning someone’s intelligence directly or claiming to have gone much beyond what somebody else has said.
In my mind, the most extreme context for this type of bullying is the classroom, and it involves a teacher bullying a learner. Bullying between learners isn’t much better but, as a teacher, I’m even more troubled by the imposition of an authority structure based on status.

I put “cyber-bullying” as a tag because, in my mind, cyber-bullying (like trolling, flamebaiting, and other aggressive behaviours online) is a form of intello-bullying. It’s using a perceived “intellectual strength” to dominate. It’s very close to schoolyard bullying but, because it may not rely on a display of physical strength, I tend to associate it with mind-based behaviour.
As I think about these issues, I keep thinking of snarky comments. Contrary to physical attacks, snarks necessitate a certain state of mind to be effective. They need to tap into some insecurity, some self-perceived weakness in the victim. But they can be quite dangerous in the right context.
As I write this, I think about my own snarky comments. Typically, they either come after some escalation or they will be as indefinite as possible. But they can be extremely insulting if they’re internalized by some people.
Two of them come from a fairly well-known tease/snark. Namely:

If you’re so smart, why ain’t you rich?

(With several variants.)

I can provide several satisfactory answers to what is ostensibly a question. But, as much as I try, I can’t relate to the sentiment behind this rhetorical utterance, regardless of immediate context (but regardful of the broader social context). This may have to do with the fact that “getting rich” really isn’t my goal in life. Not only do I agree with the statement that “money can’t buy happiness” and care more about happiness than about more easily measurable forms of success, but my high empathy levels also include a concept of egalitarianism and solidarity which makes this emphasis on wealth sound counter-productive.

Probably because of my personal reactions to that snark, I have created at least two counter-snarks. My latest one, and the one which may best represent my perspective, is the following:

If you’re so smart, why ain’t you happy?

With direct reference to the original “wealth and intelligence” snark, I wish to bring attention to what I perceive to be a more appropriate goal in life (because it’s my own goal): pursuit of happiness. What I like about this “rhetorical question” is that it’s fairly ambiguous yet has some of the same effects as the “don’t think about pink elephants” illocutionary act. As a rhetorical question, it need not be face-threatening. Because the “why aren’t you happy?” question can stand on its own, the intelligence premise “dangles.” And, more importantly, it represents one of my responses to what I perceive as a tendency (or attitude and “phase”) associating happiness with lack of intelligence. The whole “ignorance is bliss” and «imbécile heureux» perspective. Voltaire’s Candide and (failed) attempts to discredit Rousseau. Uses of “touchy-feely” and “warm and fuzzy” as insults. In short, the very attitude which most effectively trips up intellectuals in the “pursuit of happiness.”

I posted my own snarky comment on micro-blogs and other social networks. A friend replied rather negatively. Though I can understand my friend’s issues with my snark, I also care rather deeply about delinking intelligence and depression.

A previous snark of mine was much more insulting. In fact, I would never ever use it with any individual, because I abhor insulting others. Especially about their intelligence. But it does sound to me like an efficient way to unpack the original snark. Pretty obvious and rather “nasty”:

If you’re so rich, why ain’t you smart?

Again, I wouldn’t utter this to anyone. I did post it through social media. But, like the abovementioned snark on happiness, it wasn’t aimed at any specific person. Though I find it overly insulting, I do like its “counterstrike” power in witticism wars.

As announced through the “placeholder” tag and in the prefacing statement (or disclaimer), this post is but a draft. I’ll revisit this whole issue on several occasions and it’s probably better that I leave this post alone. Most of it was written while riding the bus from Ottawa to Montreal (through the WordPress editor available on the App Store). Though I’ve added a few things which weren’t in this post when I arrived in Montreal (e.g., a link to NAPPI training), I should probably leave this as a “bus ride post.”

I won’t even proofread this post.

RERO!

In Praise of Online Courtesy

Here we are!

After finishing my post on social contact, I received a few comments and had other opportunities to think the question over. That post followed a specific interaction I had yesterday, but also various other events. While writing it, I had the (perhaps far-fetched) idea of drawing up a list of “friendly advice” for people who want to contact me. Contrary to my usual attitude, I wrote that list in a rather imperative, telegraphic mode. It may be out of character for me, but it’s an interesting exercise in my case.

Though stated in a quasi-sententious tone, these tips are meant as the basic ideas I work with when people solicit me (which happens several times a day). It’s a bit my way of saying: I’m very easy to contact, but here is what I consider good and bad ideas in a contact procedure. It holds for my readers here, for my students (before I’ve met them), for indirect contacts, etc.

As for “social contact,” I was talking about a more specific context than I let on. One of the problems is that even though I find it easy to describe that context, I find it hard to name it unequivocally. It’s one of the worlds I take part in, and it’s tied to the “geek ecosystem.” In talking about “celebrity” in the post on social contact, I was referring to a fairly precise situation: the public life of some of the people who spend most of their time online. The boundaries aren’t very clear, but it’s a group of a few million people, including many Anglophones in the United States, who fit into one of the specific logics of online socializing. People who live and work in social media, social marketing, social networking, social life mediated by online communications, and so on.

“Alpha socializers,” if you will.

It’s not a homogeneous group, far from it. But it’s a group with its own codes, like any social group. Some individuals break the rules and are ostracized, sometimes without knowing it.

Which brings me to courtesy.

One of the things we talk about a lot in introductory courses in cultural anthropology is the diversity of politeness norms across humanity. Not because it’s an essential part of our research, but because it’s often a fairly effective way to get basic concepts across to people who don’t (yet) have ethnographic training or an anthropological outlook. It’s even more effective with students who have already been trained in another discipline and who sometimes tend to relate concepts back to their personal experience (which, incidentally, is often a good learning strategy when properly applied). The basic idea is that there is no “universal” of politeness (despite what Brown and Levinson say). There is no universal rule of politeness that holds for the whole of humanity, regardless of temporal or cultural distance. Every cultural context is full of politeness rules, very often tacit ones, but they aren’t identical from one context to the next. What’s more, the same rule, stated the same way, often has very different applications and implications from one context to another. So, in context, you have to know how to bend.

In class, there are always a few who try to find exceptions to this basic idea. But that becomes a little semi-competitive game rather than a real process of understanding. In my view, it has something to do with what anglophone educators call “Ways of Knowing.” These are people who still believe there is only one truth, which the prof is in charge of revealing. With them, there are several stages to get through, but they sometimes end up reaching a more flexible understanding of reality.

So, once we can work with this basic idea about the non-universality of specific politeness rules, we can work with the contexts in which politeness operates. And operate it does!

My “friendly advice” and my “little guide to online social contact” were meant in that light. My mistake was not describing the context in question well enough.

If we think of the notion of the “blogosphere,” we already have an idea of the context. Not isolated bloggers: a social sphere concentrated around the blog. These days, besides the blog, there are other platforms through which the people I’m talking about maintain more or less deep social ties. Microblogging services like Identi.ca and Twitter, for instance. But also social networks like Facebook, or even a social bookmarking service like Digg. It’s a “small world,” but a fairly influential group, since it links together many important actors of the Internet. It’s a sprawling network, present in various milieus. It’s also, and this is where my remarks may seem particularly strange, the “core of the Internet,” in the sense that members of this group have some degree of control over many of the things that happen online. To use an analogy from the national-industrial era (the last century), it’s a bit like the “capital city” of the Internet. Or, for an even quainter analogy, the “Metropole” of the Internet conceived as an Empire.

So, back to courtesy…

The cultural specificity of the group I’m talking about has produced all sorts of things over the years, including what its members called “Netiquette” (from “-net” in “Internet” and “etiquette”). What may make my remarks hard to grasp for those who follow a logic other than mine is that, while citing (and lending support to) some components of this etiquette, I put it back into context. Personally, I consider this etiquette quite valid in the context at hand, and I assert my membership in a specific sociocultural group within the broader whole I’m referring to. But I keep my ethnographic approach.

Netiquette is so well “internalized” by some people that its rules seem to spring from common sense (the “plain common sense” I mentioned yesterday). That, in my view, is what explains some very sharp reactions to breaches of etiquette: “how can you break a rule as simple as giving your message a clear title?” (with more insulting variants). As I have tried to explain in a semi-academic context, one of the bases of online conflict (the “flame war”) is the difficulty of recovering after a communication breakdown. The breakdown itself we take for granted; it happens anyway. What changes everything is the way communication is reestablished.

In the same way, it’s not so much the breach of etiquette itself that poses a problem. At least, not the specific instance of breaking a precise rule. It’s the dynamic that sets in after repeated breaches of the “ground rules” of a specific group’s social life. The immediate effect is the carving up of the ’Net into ever smaller factions.

And, personally, I find this fragmentation, this balkanization, a shame.

What’s more, it’s in this context that, despite my rather relative relativism, I attach the label “ethics” to my hedonism. Not an absolute, rigid ethics, but an orientation toward social harmony.

Don’t get me wrong (that would be great!): I’m not complaining about people’s behaviour, and I’m not judging those who “behave badly” or who break the rules of this world I live in. But I find it useful to talk about this dynamic. Therapeutic, even.

The specific reason I wrote this post is that two of the comments I received on yesterday’s posts appealed (probably unintentionally) to “I do as I please and it bothers no one.” Where I feel almost obliged to say something is that “it bothers no one” seems rather myopic in a context where people have all sorts of ties to one another. Sorry if that shocks anyone, but I make it my duty to be honest.

In fact, I think that’s the logic of the “troll,” that character of the ’Net who takes “wicked pleasure” in pushing people around on forums and blogs. It’s also the logic of the macho type who likes to say: “I pinch women’s behinds. Nineteen times out of twenty, I get slapped. But the twentieth time makes it all worthwhile.” Personally, apart from the fact that I’m a feminist, I don’t have that much of a problem with this idea in a context that allows for it (like the France of the 1990s, where I often heard this sort of thing). Where it doesn’t work, in my view, is when this attitude belongs to an individual moving in a context where that kind of behaviour is frowned upon (for instance, the contemporary cosmopolitan milieu in North America). At the individual level, it may not be so foolish. But at the social level, it doesn’t show a very deep ethical sense.

Back to the “troll.” This quasi-mythical character generates a very tense atmosphere online. Individually, he can easily consider himself “within his rights” and believe his actions have few negative consequences. But what’s easy to notice is that this same individual has little tolerance for other people’s behaviour. He thrashes about “like a devil in holy water,” yet he’s often the one who “sows the wind” and “reaps the whirlwind.” A forum without a “troll” is a very pleasant, nurturing place. But it takes only one “troll” to demolish an atmosphere of goodwill. Especially if the other members of the group overreact.

Which brings to mind the people who send spam and other Plagues of the Internet. Theirs is exactly the bottom-pincher’s logic, taken to the extreme. If as few as 0.01% of people accept the unwanted message, they can turn a profit with little effort, no matter what happens to the other 99.99% of recipients. As long as there are people who believe their nonsense or open attachments from strangers, they may be right at a rather primitive level (“I got what I wanted without even trying”). But it’s society as a whole that suffers. Especially when we’re talking about a society as diverse and complex as the one living online.

It’s interesting to consider the place that anglophone online culture gives to the notion of “karma.” From an expression designating a particular form of causality with a spiritual component, this notion has taken on, in geek culture, a specific meaning tied to the relative merit of what is said online, especially on the venerable Slashdot site. Despite the semantic shift from “mystical” causality to peer evaluation, the two concepts can be linked in a single idea of optimal behaviour for online communication: courtesy.

Anglophones tend to rely, without naming or even knowing them, on Grice’s maxims. Much as I perceive that they aren’t universal, I see a particular relevance for them in the context I keep circling around. The basic idea, as Wilson and Sperber would put it, is that “every act of ostensive communication communicates the presumption of its own optimal relevance.” This optimal relevance is tied to a process, at once cognitive and communicative, which draws on several of the notions developed by Grice and other philosophers of language. In the context that interests me, there’s a kind of interplay between two orientations appealing to the same notion of relevance: the individual orientation (“I express myself”), often legalistic-reductive (“I have every right to express myself”), and the social orientation (“we dialogue”), often ethical-idealist (“dialogue will save the world”).

No mystery as to which orientation I prefer…

On the other hand, let’s not kid ourselves: being courteous online also has positive effects at the purely individual level. By being courteous, we very often obtain real benefits, sometimes even financial ones (that’s how someone ended up buying me an iPod touch). I’m not talking about “cosmic” causality but about a precise process by which good relations directly generate a good atmosphere.

Granted, I seem to be postulating my own capacity for courtesy. I am, in fact, quite often described as very (even too) courteous. That may be accurate, as descriptions go, even if some might disagree.

You decide.

A Little Guide to Online Social Contact (Draft)

I have just published a “notice to those trying to contact me,” and I’ve been thinking about my expertise on online socializing. That gave me the idea of writing a sort of guide to help people who don’t have much experience in this area. I have trouble selling myself.

Yes, I’m a social butterfly. I make friends easily and I generally have excellent contacts. In fact, I’m not very selective: at bottom, I like everybody.

Which doesn’t at all mean that my degree of intimacy is constant regardless of the individual. In fact, my way of managing degrees of intimacy is relatively complex and depends on a large number of factors. It’s quite conscious, but hard to verbalize, especially in public.

And that brings me to the fact that, like many people, I’m “in high demand.” Every day, I receive several requests from people who want to get in touch with me in one way or another. It’s so frequent that I rarely think about it. But it’s part of my daily life, as it is for many people who spend time online (bloggers, members of social networks, etc.).

Obviously, a good number of these requests fall into the “unwanted” category. We could take inventory of the Ten Great Plagues of the Internet, from spam to intrusive solicitation. But my goal here is broader: to discuss some ways of establishing social contact, whether that means making friends or simply entering into a diffuse social relationship (becoming someone’s “acquaintance”).

The basic question: how do you make an appropriate request to get in touch with someone? There are more specific questions too. For instance, how do you show someone that your intentions are legitimate? It’s not very complicated, and it’s very quick. But it draws on a particular logic I believe I know well.

A good part of all this is what we call “plain common sense” around here. “What should be obvious.” But, as we often say in ethnography, what seems obvious to some can look very strange to others. At bottom, online social contact has its own cultural contexts, and you have to learn to settle in online the way you learn to settle into a new region. If most of what I say here seems perfectly obvious, that doesn’t mean it’s well known to the “general public.”

So, what is the logic of online social contact?

First, understand that people who spend a lot of time online receive tons of requests every day. Even a social butterfly like me ends up being selective. We want to be inclusive, but we don’t want to be flooded, so we sort the requests that reach us. We want to be trusting, but we don’t want to be dupes, so we stay on our guard.

So, to contact someone like me, “there’s a way to go about it.”

One very important dimension is transparency. I’m even thinking of “radical transparency.” When presenting yourself to others, it’s best to be transparent. Not that you must reveal everything, quite the contrary. You have to “control your mask.” You have to “handle the veil.” An excellent way to do that is to be transparent.

The basic idea behind this concept is that absolute anonymity is illusory. Everything we do online leaves a trace. If people want to track us down, they can often do so. By providing a public profile, we ward off certain intrusions.

It’s a bit like the idea behind “geolocation.” In “our post-industrial world,” we’re often easy to locate in space (thanks, among other things, to RFID). On the other hand, people sometimes want to let others know where they are, for all sorts of reasons. By giving people some information about our geographic whereabouts, we try to control part of the information about ourselves. “Geolocation” can range from great temporal and geographic precision (“I’m at the end of the counter at Caffè in Gamba until 1:30 pm”) to the utterly vague (“I’ll be back in Europe for an indeterminate period sometime in the next six months”). It’s also possible to lead people down a false trail, making them believe we’re somewhere other than where we really are. And it’s possible to give just enough detail that people have no particular interest in “tracking” us. It’s something of a counterattack against intrusions into our private lives.

Since many “Netizens” have adopted such strategies against intrusions, it’s important to respect those strategies, and it can be useful to adopt similar ones. This implies accepting the image an individual wants to project and giving that individual a chance to form an image of us.

In most social contexts, people open up much more readily to those who open up themselves. In some parts of the world (a good portion of the blogosphere, but also much of Africa), people have a very sophisticated way of appearing quite transparent while keeping much of their lives quite secret. Hiding in public. It’s a radical form of the “presentation of self.” There’s no hypocrisy in any of this, nothing sneaky. Just a well-controlled transparency. Radical in its usefulness (not in its immodesty).

“Online, everyone acts like a celebrity.” In fact, everyone lives a rather public life on the ’Net. This implies several things. First, that it’s almost as hard to protect your private life online as in a typical African city (where managing the boundary between public and private life is handled with great sophistication). It also implies that each person is less vulnerable to the assaults of celebrity, since there’s far more information about far more people. It’s a bit like the theory of noise in the fight against paparazzi and other predators: the transparency of the many helps preserve the relative anonymity of each.

In my view, the most effective way to be transparent is to build a public profile on a blog and/or a social network. There are plenty of ways to build a profile according to one’s own needs and interests; the effect remains the same. It’s a way of “presenting” oneself, in the strong sense of the term.

The role of the profile is much more complex than the journalists who comment on the lives of “Netizens” seem to believe. Yes, it can be a “business card,” especially useful in professional networking. For some, it’s a bit like a dating-agency file (height and weight included). Many people make public things that look compromising. But it’s above all a way of controlling one’s image.

To a certain extent, “the more we reveal, the more we hide.” By offering people the chance to know more about us, we give ourselves room to manoeuvre. We can even create a character out of whole cloth, as many did at one point. It’s a technique of concealment, of obscuring. Or, to borrow from computing, a method of encryption and “obfuscation.”

But we can also “be ourselves” and accept ourselves as we are. As a “philosophy of life,” that’s not bad, in my opinion.

In building a profile, we think about what we want to reveal. The degree of precision varies enormously depending on how we go about it and on the context. Nothing linear about any of this. There are things we would gladly reveal to a stranger that we wouldn’t admit to those close to us. We may maintain a public persona that is sometimes more real than our private behaviour. And we perhaps use more tact with friends than with people who meet us by chance.

There’s the whole question of privacy, of course. But that’s not all, and this idea of “privacy” needs complicating. Much of what we can say about ourselves ends up implicating other people, sometimes obviously, sometimes very subtly. The strategy of “radical transparency” in online social contact is sometimes hard to reconcile with our offline social life. But we can’t afford to say nothing. It’s all a question of dosage.

There are many ways to build a public profile, and they’re generally easy to use. The best method usually depends on the context and, apart from the time needed to keep them up to date (individually or through a central service), there are few drawbacks to having numerous public profiles on different services.

Personally, I find a blog an excellent way to maintain a public profile. Those who leave comments on blogs have a particular interest in creating a blogger profile, even if they don’t publish posts themselves. There’s a sense of reciprocity in the blogging world. In fact, there’s a whole negotiation over the differences between a comment and a post. It’s sometimes preferable to write your own post in response to someone else’s (links between posts are tracked through “pings” and “trackbacks”). But by leaving a comment on someone else’s blog, you do some indirect promotion: “moderate and temperate” (in every sense of those terms).

My preference goes to WordPress.com, and Disparate is my main blog. Without being a true social network, WordPress.com has a few features that facilitate contact between bloggers. For instance, any comment published on a WordPress.com blog by a WordPress.com user is automatically linked to that account, which makes writing the comment easier (no need to type in your information) and ties the commenter to an identity. Blogger (or Blogspot.com) has some of the same advantages but, since many blogs on Blogger accept OpenID identifiers and WordPress.com provides such identifiers, I tend to sign in through WordPress.com rather than through Google/Blogger.

Outside the blogging world, there’s the world of social network services, from SixDegrees.com (back in the day) to OpenSocial (in the future). All these services let users create a profile (general or specialized) and specify the ties they have with other people.

These days, just about everything online has a “social” dimension, in the sense that just about anything can be used to connect with someone else. In each case, there’s more or less sophisticated “image work.” Without having to undertake this “image work” very deliberately, those who are active online (including many teenagers) have become masters in the art of playing with their identities.

It can also be useful to create a public profile on microblogging platforms like Identi.ca and Twitter. These platforms have a rather interesting effect on social contact. Each user’s profile is fairly skeletal, but the links between users have a certain sophistication, because there’s a distinction between unidirectional and bidirectional links. Actually, this is relatively hard to describe out of context, so I think I’ll drop this section for now. A good primer on the basics of microblogging is this short video, also available with French subtitles.

All that just to talk about public profiles!

When I started this post, I thought I would develop several other aspects. But I think the basics are here, and I’ll probably write other posts on the same question in the future.

Still, a few fragments, to keep this post “under construction.”

One important point, in my view: it’s generally preferable to let others connect to us, except when a link can already be established. That’s partly the idea behind my previous post. Of course, we can approach people in a specific context. If we’re at the same event, we can go introduce ourselves, just like that. As soon as there’s a community of practice (or a community of experience), we can use it to get acquainted. It’s simply a matter of not monopolizing anyone’s attention and of accepting the other person’s way of expressing their opinions.

So, in context (even online), we can approach people.

But, out of context, it’s a rather preposterous idea to show up at people’s doors without having been invited.

For me, it’s partly a question of courtesy. But there’s also the question of understanding the context. Even if we all react in roughly the same way to unsolicited calls, many people have trouble understanding the protocol.

And the protocol isn’t so different from offline life. In fact, a technique that is very useful in offline contexts and matters online as well is the use of intermediaries. Perhaps because I’m thinking of Mali, I tend to think of the role of the griot and of the very complex game of indirection in social contact. The professional network LinkedIn uses a very crude version of this principle of indirection, without fleshing out the intermediary’s role. Yet it’s often by building social mediation that we really understand how social relations work.

Still, there’s a procedure to follow when you want to contact people online. This protocol is far more fluid than the best-known social codes of industrial societies. That’s perhaps what misleads inexperienced people into believing that “on the Internet, you can do anything.”

Hence the idea of helping people understand online social contact.

This post was partly motivated by a request sent to me by email. The sender was trying to make friends with me, but the request was decontextualized and very vague. So I wrote a reply containing some of the elements of what I wanted to write here.

Here’s an excerpt from my reply:

If you have a blog of your own, that’s an excellent way to introduce yourself. Or an account on one of the many social networks. Then you can leave the link in your profile when you contact someone and let others connect to you, if they find you interesting. It’s very easy and very effective. Unsolicited messages, sent straight to someone’s email address, arouse suspicion. Especially when the subject line is very generic or the content of the message isn’t specific enough. Not your fault, but that’s the context.

Actually, the best method is to go through pre-established contacts. If we have friends in common, the job is done. Failing that, the second-best method is to leave a truly relevant comment on the blog of someone you want to get to know. That person will then contact you. But if the comment isn’t relevant enough, that same person may take it for spam and delete your comment, or even blacklist you.

No, I don’t use Yahoo! Messenger. And I’m not on other messaging platforms often enough to agree to chat with people that way. I know it’s a technique some serious people use, but it’s mostly a means used by malicious ones.

If you need help, you know how to reach me! 😉

The Issue Is Respect

As a creative generalist, I don’t tend to emphasize expert status too much, but I do see advantages in complementarity between people who act in different spheres of social life. As we say in French, «à chacun son métier et les vaches seront bien gardées» (“to each their own profession and cows will be well-kept”).

The diversity of skills, expertise, and interests is especially useful when people from different “walks of life” can collaborate with one another. Tolerance, collegiality, dialogue. When people share ideas, the potential is much greater if their ideas are in fact different. It’s a very simple principle, one which runs through anthropology as the study of human diversity (through language, time, biology, and culture).

The problem, though, is that people from different “fields” tend not to respect one another’s work. For instance, a life scientist and a social scientist often have a hard time understanding one another because they simply don’t respect their interlocutor’s discipline. They may respect each other as human beings but they share a distrust as to the very usefulness of the other person’s field.

Case in point: entomologist Paul R. Ehrlich, who spoke at the Seminar About Long Term Thinking (SALT) a few weeks ago.

The Long Now Blog » Blog Archive » Paul Ehrlich, “The Dominant Animal: Human Evolution and the Environment”

Ehrlich seems to have a high degree of expertise in population studies and, in that SALT talk, was able to make fairly interesting (though rather commonplace) statements about human beings. For instance, he explicitly addressed the tendency, in mainstream media, to perceive genetic determinism where it has no place. Similarly, his discussion about the origins and significance of human language was thoughtful enough that it could lead other life scientists to at least take a look at language.

What’s even more interesting is that Ehrlich realizes that social sciences can be extremely useful in solving the environmental issues which concern him the most. As we learn during the question period after this talk, Ehrlich is currently talking with some economists. And, unlike business professors, economists participate very directly in the broad field of social sciences.

All of this shows quite a bit of promise, IMVHAWISHIMVVVHO. But the problem has to do with respect, it seems.

Now, it might well be that Ehrlich esteems and respects his economist colleagues. Their methods may be sufficiently compatible with his that he actually “hears what they’re saying.” But he doesn’t seem to “extend this courtesy” to my own highly esteemed colleagues in ethnographic disciplines. Ehrlich simply doesn’t grok the very studies which he states could be the most useful for him.

There’s a very specific example during the talk but my point is broader. When that specific issue was revealed, I had already been noticing an interdisciplinary problem. And part of that problem was my own.

Ehrlich’s talk was fairly entertaining, although rather unsurprising in the typical “doom and gloom” exposé to which science and tech shows have accustomed us. Of course, it was fairly superficial on even the points about which Ehrlich probably has the most expertise. But that’s expected of this kind of popularizer talk. But I started reacting quite negatively to several of his points when he started to make the kinds of statements which make any warm-blooded ethnographer cringe. No, not the fact that his concept of “culture” is so unsophisticated that it could prevent a student of his from getting a passing grade in an introductory course in cultural anthropology. But all sorts of comments which clearly showed that his perspective on human diversity is severely restricted. Though he challenges some ideas about genetic determinism, Ehrlich still holds to a form of reductionism which social scientists would associate with scholars who died before Ehrlich was born.

So, my level of respect for Ehrlich started to fade with each of those half-baked pronouncements about cultural diversity and change.

Sad, I know. Especially since I respect every human being equally. But it doesn’t mean that I respect all statements equally. As is certainly the case for many other people, my respect for a person’s pronouncements may diminish greatly if those words demonstrate a lack of understanding of something in which I have a relatively high degree of expertise. In other words, a heart surgeon could potentially listen to a journalist talk about “cultural evolution” without blinking an eye but would likely lose “intellectual patience” if, in the same piece, the journalist starts to talk about heart diseases. And this impatience may retroactively carry over to the discussion about “cultural evolution.” As we tend to say in the ethnography of communication, context is the thing.

And this is where I have to catch myself. It’s not because Ehrlich made statements about culture which made him appear clueless that what he said about the connections between population and environment is also clueless. I didn’t, in fact, start perceiving his points about ecology as misguided, for the very simple reason that we have been saying the same things in ethnographic disciplines. But that’s dangerous: selectively accepting statements because they reinforce what you already know. That’s not what academic work is supposed to be about.

In fact, there was something endearing about Ehrlich. He may not understand the study of culture and he doesn’t seem to have any training in the study of society, but at least he was trying to understand. There was even a point in his talk when he said something which would be so obvious to any social scientist that I could have gained a new kind of personal respect for Ehrlich’s openness, if it hadn’t been for his inappropriate statements about culture.

The saddest part is about dialogue. If a social scientist is to work with Ehrlich and she reacts the same way I did, dialogue probably won’t be established. And if Ehrlich’s attitude toward epistemological approaches different from his own is represented by the statements he made about ethnography, chances are that he will only respect those of my social science colleagues who share his own reductionist perspective.

It should be obvious that there’s an academic issue, here, in terms of inter-disciplinarity. But there’s also a personal issue. In my own life, I don’t want to restrict myself to conversations with people who think the same way I do.

Crazy App Idea: Happy Meter

I keep getting ideas for apps I’d like to see on Apple’s App Store for iPod touch and iPhone. This one may sound a bit weird but I think it could be fun. An app where you can record your mood and optionally broadcast it to friends. It could become rather sophisticated, actually. And I think it can have interesting consequences.

The idea mostly comes from Philippe Lemay, a psychologist friend of mine and fellow PDA fan. Haven’t talked to him in a while but I was just thinking about something he did, a number of years ago (in the mid-1990s). As part of an academic project, Philippe helped develop a PDA-based research program whereby subjects would record different things about their state of mind at intervals during the day. Apart from the neatness of the data gathering technique, this whole concept stayed with me. As a non-psychologist, I personally get the strong impression that recording your moods frequently during the day can actually be a very useful thing to do in terms of mental health.

And I really like the PDA angle. Since I think of the App Store as transforming Apple’s touch devices into full-fledged PDAs, the connection is rather strong between Philippe’s work at that time and the current state of App Store development.

Since that project of Philippe’s, a number of things have been going on which might help refine the “happy meter” concept.

One is that “lifecasting” became rather big, especially among certain groups of Netizens (typically younger people, but also many members of geek culture). Though the lifecasting concept applies mostly to video streams, there are connections with many other trends in online culture. The connection with vidcasting specifically (and podcasting generally) is rather obvious. But there are other connections. For instance, with mo-, photo-, or microblogging. Or even with all the “mood” apps on Facebook.

Speaking of Facebook as a platform, I think it meshes especially well with touch devices.

So, “happy meter” could be part of a broader app which does other things: updating Facebook status, posting tweets, broadcasting location, sending personal blogposts, listing scores in a Brain Age type game, etc.

Yet I think the “happy meter” could be useful on its own, as a way to track your own mood. “Turns out, my mood was improving pretty quickly on that day.” “Sounds like I didn’t let things affect me too much despite all sorts of things I was going through.”

As a mood-tracker, the “happy meter” should be extremely efficient. Because it needs to be easy, I’m thinking of sliders: one main slider for general mood and different sliders for specific moods and emotions. It would also be possible to extend the “entry form” on occasion, when the user wants to record more data about their mental state.

Of course, everything would be saved automatically and “sent to the cloud” on occasion. There could be a way to selectively broadcast some slider values. The app could conceivably send reminders to the user to update their mood at regular intervals. It could even serve as a “break reminder” feature. Though there are limitations on OSX iPhone in terms of interapplication communication, it’d be even neater if the app were able to record other things happening on the touch device at the same time, such as music which is playing or some apps which have been used.
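To make the slider-plus-selective-broadcast idea concrete, here’s a minimal sketch of what one such mood record could look like. Everything in it (the `MoodEntry` name, the `public_view` method, the slider names) is hypothetical; it only illustrates the concept, not any real app.

```python
import time
from dataclasses import dataclass, field

# Hypothetical sketch of one "happy meter" record: a main mood slider,
# optional named sliders, and per-slider broadcast control.
@dataclass
class MoodEntry:
    mood: float                                    # main slider, 0.0 (low) to 1.0 (high)
    sliders: dict = field(default_factory=dict)    # e.g. {"energy": 0.7, "stress": 0.2}
    shared: set = field(default_factory=set)       # slider names the user chose to broadcast
    timestamp: float = field(default_factory=time.time)

    def public_view(self) -> dict:
        """Only the values the user explicitly opted to broadcast."""
        return {k: v for k, v in self.sliders.items() if k in self.shared}

entry = MoodEntry(mood=0.8, sliders={"energy": 0.7, "stress": 0.2})
entry.shared.add("energy")          # broadcast energy, keep stress private
print(entry.public_view())          # {'energy': 0.7}
```

The point of the `shared` set is the “no opt-out” principle: nothing leaves the device unless the user opted in, slider by slider.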

Now, very obviously, there are lots of privacy issues involved. But what social networking services have taught us is that users can have pretty sophisticated notions of privacy management, if they’re given the chance. For instance, adept Facebook users may seem to indiscriminately post just about everything about themselves but are often very clear about what they want to “let out,” in context. So, clearly, every type of broadcasting should be controlled by the user. No opt-out here.

I know this all sounds crazy. And it all might be a very bad idea. But the thing about letting my mind wander is that it helps me remain happy.

The Need for Social Science in Social Web/Marketing/Media (Draft)

[Been sitting on this one for a little while. Better RERO it, I guess.]

Sticking My Neck Out (Executive Summary)

I think that participants in many technology-enthusiastic movements which carry the term “social” would do well to learn some social science. Furthermore, my guess is that ethnographic disciplines are very well-suited to the task of teaching participants in these movements something about social groups.

Disclaimer

Despite the potentially provocative title and my explicitly stating a position, I mostly wish to think out loud about different things which have been on my mind for a while.

I’m not an “expert” in this field. I’m just a social scientist and an ethnographer who has been observing a lot of things online. I do know that there are many experts who have written many great books about similar issues. What I’m saying here might not seem new. But I’m using my blog as a way to at least write down some of the things I have in mind and, hopefully, discuss these issues thoughtfully with people who care.

Also, this will not be a guide on “what to do to be social-savvy.” Books, seminars, and workshops on this specific topic abound. But my attitude is that every situation needs to be treated in its own context, that cookie-cutter solutions often fail. So I would advise people interested in this set of issues to train themselves in at least a little bit of social science, even if much of the content of the training material seems irrelevant. Discuss things with a social scientist, hire a social scientist in your business, take a course in social science, and don’t focus on advice but on the broad picture. Really.

Clarification

Though they are all different, enthusiastic participants in “social web,” “social marketing,” “social media,” and other “social things online” do have some commonalities. At the risk of angering some of them, I’m lumping them all together as “social * enthusiasts.” One thing I like about the term “enthusiast” is that it can apply to both professionals and amateurs, to geeks and dabblers, to full-timers and part-timers. My target isn’t a specific group of people. I just observed different things in different contexts.

Links

Shameless Self-Promotion

A few links from my own blog, for context (and for easier retrieval):

Shameless Cross-Promotion

A few links from other blogs, to hopefully expand context (and for easier retrieval):

Some raw notes

  • Insight
  • Cluefulness
  • Openness
  • Freedom
  • Transparency
  • Unintended uses
  • Constructivism
  • Empowerment
  • Disruptive technology
  • Innovation
  • Creative thinking
  • Critical thinking
  • Technology adoption
  • Early adopters
  • Late adopters
  • Forced adoption
  • OLPC XO
  • OLPC XOXO
  • Attitudes to change
  • Conservatism
  • Luddites
  • Activism
  • Impatience
  • Windmills and shelters
  • Niche thinking
  • Geek culture
  • Groupthink
  • Idea horizon
  • Intersubjectivity
  • Influence
  • Sphere of influence
  • Influence network
  • Social butterfly effect
  • Cog in a wheel
  • Social networks
  • Acephalous groups
  • Ego-based groups
  • Non-hierarchical groups
  • Mutual influences
  • Network effects
  • Risk-taking
  • Low-stakes
  • Trial-and-error
  • Transparency
  • Ethnography
  • Epidemiology of ideas
  • Neural networks
  • Cognition and communication
  • Wilson and Sperber
  • Relevance
  • Global
  • Glocal
  • Regional
  • City-State
  • Fluidity
  • Consensus culture
  • Organic relationships
  • Establishing rapport
  • Buzzwords
  • Viral
  • Social
  • Meme
  • Memetic marketplace
  • Meta
  • Target audience

Let’s Give This a Try

The Internet is, simply, a network. Sure, technically it’s a meta-network, a network of networks. But that is pretty much irrelevant, in social terms, as most networks may be analyzed at different levels as containing smaller networks or being parts of larger networks. The fact remains that the ‘Net is pretty easy to understand, sociologically. It’s nothing new, it’s just a textbook example of something social scientists have been looking at for a good long time.

Though the Internet mostly connects computers (in many shapes or forms, many of them being “devices” more than the typical “personal computer”), the impact of the Internet is through human actions, behaviours, thoughts, and feelings. Sure, we can talk ad nauseam about the technical aspects of the Internet, but these topics have been covered a lot in the last fifteen years of intense Internet growth and a lot of people seem to be ready to look at other dimensions.

The category of “people who are online” has expanded greatly, in different steps. Here, Martin Lessard’s description of the Internet’s Six Cultures (Les 6 cultures d’Internet) is really worth a read. Martin’s post is in French but we also had a blog discussion in English, about it. Not only are there more people online but those “people who are online” have become much more diverse in several respects. At the same time, there are clear patterns on who “online people” are and there are clear differences in uses of the Internet.

Groups of human beings are the very basic object of social science. Diversity in human groups is the very basis for ethnography. Ethnography is simply the description of (“writing about”) human groups conceived as diverse (“peoples”). As simple as ethnography can be, it leads to a very specific approach to society which is very compatible with all sorts of things relevant to “social * enthusiasts” on- and offline.

While there are many things online which may be described as “media,” comparing the Internet to “The Mass Media” is often the best way to miss “what the Internet is all about.” Sure, the Internet isn’t about anything (apart from connecting computers which, in turn, connect human beings). But to get actual insight into the ‘Net, one probably needs to free herself/himself of notions relating to “The Mass Media.” Put bluntly, McLuhan was probably a very interesting person and some of his ideas remain intriguing, but fallacies abound in his work and the best thing to do with his ideas is to go beyond them.

One of my favourite examples of the overuse of “media”-based concepts is the issue of influence. In blogging, podcasting, or selling, the notion often is that, on the Internet as in offline life, “some key individuals or outlets are influential and these are the people by whom or channels through which ideas are disseminated.” Hence all the Technorati rankings and other “viewer statistics.” Old techniques and ideas from the times of radio and television expansion are used because it’s easier to think through advertising models than through radically new models. This is, in fact, when I tend to bring back my explanation of the “social butterfly effect”: quite frequently, “influence” online isn’t exerted through specific individuals or outlets, and even when it is, those people are influential by virtue of connecting to diverse groups, not because of the number of people they know. There are ways to analyze those connections but “measuring impact” ultimately misses the point.
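The “diverse groups, not contact counts” point can be shown with a toy computation. In the invented network below (all names and groups are made up for illustration), the “hub” knows more people than the “butterfly,” but the butterfly’s contacts span more distinct groups, so she can carry an idea further across social boundaries.

```python
# Toy illustration of the "social butterfly effect": reach measured by
# group diversity rather than by raw number of contacts.
groups = {
    "knitters":  {"ann", "bea", "carl", "hub"},
    "gamers":    {"dan", "eve", "fred", "hub"},
    "musicians": {"gil", "butterfly"},
    "linguists": {"hana", "butterfly"},
    "brewers":   {"ivan", "butterfly"},
}

def contacts(person):
    """Everyone who shares at least one group with `person`."""
    return {m for g in groups.values() if person in g for m in g} - {person}

def group_reach(person):
    """How many distinct groups `person` can carry an idea into."""
    return sum(person in g for g in groups.values())

print(len(contacts("hub")), group_reach("hub"))              # 6 contacts, 2 groups
print(len(contacts("butterfly")), group_reach("butterfly"))  # 3 contacts, 3 groups
```

A degree-style ranking (Technorati-like “viewer statistics”) would pick the hub; a diversity measure picks the butterfly.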

Yes, there is an obvious “qual. vs. quant.” angle, here. A major distinction between non-ethnographic and ethnographic disciplines in social sciences is that non-ethnographic disciplines tend to be overly constrained by “quantitative analysis.” Ultimately, any analysis is “qualitative” but “quantitative methods” are a very small and often limiting subset of the possible research and analysis methods available. Hence the constriction and what some ethnographers may describe as “myopia” on the part of non-ethnographers.

Gone Viral

The term “viral” is used rather frequently by “social * enthusiasts” online. I happen to think that it’s a fairly fitting term, even though it’s used more by extension than by literal meaning. To me, it relates rather directly to Dan Sperber’s “epidemiological” treatment of culture (see Explaining Culture) which may itself be perceived as resembling Dawkins’s well-known “selfish gene” ideas made popular by different online observers, but with something which I perceive to be (to use simple semiotic/semiological concepts) more “motivated” than the more “arbitrary” connections between genetics and ideas. While Sperber could hardly be described as an ethnographer, his anthropological connections still make some of his work compatible with ethnographic perspectives.

Analysis of the spread of ideas does correspond fairly closely with the spread of viruses, especially given the nature of contacts which make transmission possible. One need not do much to spread a virus or an idea. This virus or idea may find “fertile soil” in a given social context, depending on a number of factors. Despite the disadvantages of extending analogies and core metaphors too far, the type of ecosystem/epidemiology analysis of social systems embedded in uses of the term “viral” does seem to help some specific people make sense of different things which happen online. In “viral marketing,” the type of informal, invisible, unexpected spread of recognition through word of mouth does relate somewhat to the spread of a virus. Moreover, the metaphor of “viral marketing” is useful in thinking about the lack of control the professional marketer may have over how her/his product is perceived. In this context, the term “viral” seems useful.
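The epidemiological analogy can be sketched in a few lines. This is an entirely invented toy model (not Sperber’s, and not any real marketing tool): an idea hops along contacts, and whether a given person “catches” it depends on a per-person receptivity, the “fertile soil” of the local context.

```python
import random

# Toy model of idea spread along a contact network. The network and
# the "receptivity" values are invented purely for illustration.
random.seed(1)  # make the run repeatable
contacts = {
    "a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d", "e"],
    "d": ["b", "c"], "e": ["c", "f"], "f": ["e"],
}
receptivity = {p: 0.6 for p in contacts}  # how "fertile" each person's context is

def spread(idea_holders, rounds=5):
    """Each round, the idea may jump from holders to their contacts."""
    holders = set(idea_holders)
    for _ in range(rounds):
        newly = {n for p in holders for n in contacts[p]
                 if n not in holders and random.random() < receptivity[n]}
        holders |= newly
    return holders

print(sorted(spread({"a"})))
```

Even this caricature shows the two points made above: the originator does almost nothing after the initial “release,” and the outcome depends on the receiving contexts, not on the sender’s control.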

The Social

While “viral” seems appropriate, the even more simple “social” often seems inappropriately used. It’s not a ranty attitude which makes me comment negatively on the use of the term “social.” In fact, I don’t really care about the use of the term itself. But I do notice that use of the term often obfuscates what is the obvious social character of the Internet.

To a social scientist, anything which involves groups is by definition “social.” Of course, some groups and individuals are more gregarious than others, some people are taken to be very sociable, and some contexts are more conducive to heightened social interactions. But social interactions happen in any context.
As an example I used (in French) in reply to this blog post, something as common as standing in line at a grocery store is representative of social behaviour and can be analyzed in social terms. Any Web page which is accessed by anyone is “social” in the sense that it establishes some link, however tenuous and asymmetric, between at least two individuals (someone who created the page and the person who accessed that page). Sure, it sounds like the minimal definition of communication (sender, medium/message, receiver). But what most people who talk about communication seem to forget (unlike Jakobson) is that all communication is social.

Sure, putting a comment form on a Web page facilitates a basic social interaction, making the page “more social” in the sense of making explicit social interaction easier on that page. And, of course, adding some features which facilitate the act of sharing data with one’s personal contacts is a step above the contact form in terms of making certain types of social interaction straightforward and easy. But, contrary to what Google Friend Connect implies, adding those features doesn’t suddenly make the site social: assuming some people visited it, the site already had a social dimension. I’m not nitpicking on word use. I’m saying that using “social” in this way may blind some people to social dimensions of the Internet. And the consequences can be pretty harsh, in some cases, for overlooking how social the ‘Net is.

Something similar may be said about the “Social Web,” one of the many definitions of “Web 2.0” which is used in some contexts (mostly, the cynic would say, “to make some tool appear ‘new and improved’”). The Web as a whole was “social” by definition. Granted, it lacked the ease of social interaction afforded by such venerable Internet classics as Usenet and email. But it was already making some modes of social interaction easier to perceive. No, this isn’t about “it’s all been done.” It’s about being oblivious to the social potential of tools which already existed. True, the period in Internet history known as “Web 2.0” (and the onset of the Internet’s sixth culture) may be associated with new social phenomena. But there is little evidence that the association is causal, that new online tools and services created a new reality which suddenly made it possible for people to become social online. This is one reason I like Martin Lessard’s post so much. Instead of postulating the existence of a brand new phenomenon, he talks about the conditions for some changes in both Internet use and the form the Web has taken.

Again, this isn’t about terminology per se. Substitute “friendly” for “social” and similar issues might come up (friendship and friendliness being disconnected from the social processes which underlie them).

Adoptive Parents

Many “social * enthusiasts” are interested in “adoption.” They want their “things” to be adopted. This is especially visible among marketers but even in social media there’s an issue of “getting people on board.” And some people, especially those without social science training, seem to be looking for a recipe.

Problem is, there probably is no such thing as a recipe for technology adoption.

Sure, some marketing practices from the offline world may work online. Sometimes, adapting a strategy from the material world to the Internet is very simple and the Internet version may be more effective than the offline version. But that doesn’t mean that there is such a thing as a recipe. It’s a matter of having some people who “have a knack for this sort of thing” (say, based on sensitivity to what goes on online), or of pure luck. Or it’s a matter of measuring success in different ways. But it isn’t based on a recipe. Especially not in the Internet sphere, which is changing so rapidly (despite some remarkably stable features).

Again, I’m partial to contextual approaches (“fully-customized solutions,” if you really must). Not just because I think there are people who can do this work very efficiently. But because I observe that “recipes” do little more than sell “best-selling books” and other items.

So, what can we, as social scientists, say about “adoption?” That technology is adopted based on the perceived fit between the tools and people’s needs/wants/goals/preferences. Not the simple “the tool will be adopted if there’s a need.” But a perception that there might be a fit between an amorphous set of social actors (people) and some well-defined tools (“technologies”). Recognizing this fit is extremely difficult and forcing it is extremely expensive (not to mention completely unsustainable). But social scientists do help in finding ways to adapt tools to different social situations.

Especially ethnographers. Because instead of surveys and focus groups, we challenge assumptions about what “must” fit. Our heads and books are full of examples which sound, in retrospect, as common sense but which had stumped major corporations with huge budgets. (Ask me about McDonald’s in Brazil or browse a cultural anthropology textbook, for more information.)

Recently, while reading about issues surrounding the OLPC’s original XO computer, I was glad to read the following:

John Heskett once said that the critical difference between invention and innovation was its mass adoption by users. (Niti Bhan, The emperor has designer clothes)

Not that this is a new idea, for social scientists. But I was glad that the social dimension of technology adoption was recognized.

In marketing and design spheres especially, people often think of innovation as individualized. While some individuals are particularly adept at leading inventions to mass adoption (Steve Jobs being a textbook example), “adoption comes from the people.” Yes, groups of people may be manipulated to adopt something “despite themselves.” But that kind of forced adoption is still dependent on a broad acceptance, by “the people,” of even the basic forms of marketing. This is very similar to the simplified version of the concept of “hegemony,” so common in both social sciences and humanities. In a hegemony (as opposed to a totalitarian regime), no coercion is necessary because the logic of the system has been internalized by people who are affected by it. Simple, but effective.

In online culture, adept marketers are highly valued. But I’m quite convinced that pre-online marketers already knew that they had to “learn society first.” One thing with almost anything happening online is that “the society” is boundless. Country boundaries usually make very little sense and the social rules of every local group will leak into even the simplest occasion. Some people seem to assume that the end result is a cultural homogenization, thereby not necessitating any adaptation besides the move from “brick and mortar” to online. Others (or the same people, actually) want to protect their “business models” by restricting tools or services based on country boundaries. In my mind, both attitudes are ineffective and misleading.

Sometimes I Feel Like a Motherless Child

I think the Cluetrain Manifesto can somehow be summarized through concepts of freedom, openness, and transparency. These are all very obvious (in French, the book title is something close to “the evident truths manifesto”). They’re also all very social.

Social scientists often become activists based on these concepts. And among social scientists, many of us are enthusiastic about the social changes which are happening in parallel with Internet growth. Not because of technology. But because of empowerment. People are using the Internet in their own ways, the one key feature of the Internet being its lack of centralization. While the lack of centralized control may be perceived as a “bad thing” by some (social scientists or not), there’s little argument that the ‘Net as a whole is out of the control of specific corporations or governments (despite the large degree of consolidation which has happened offline and online).

Especially in the United States, “freedom” is conceived as a basic right. But it’s also a basic concept in social analysis. As some put it: “somebody’s rights end where another’s begin.” But social scientists have a whole apparatus to deal with all the nuances and subtleties which are bound to come from any situation where people’s rights (freedom) may clash or even simply be interpreted differently. Again, not that social scientists have easy, ready-made answers on these issues. But we’re used to dealing with them. We don’t interpret freedom as a given.

Transparency is fairly simple and relates directly to how people manage information itself (instead of knowledge or insight). Radical transparency is giving as much information as possible to those who may need it. Everybody has a “right to learn” a lot of things about a given institution (instead of “right to know”), when that institution has a social impact. Canada’s Access to Information Act is quite representative of the move to transparency and use of this act has accompanied changes in the ways government officials need to behave to adapt to a relatively new reality.

Openness is an interesting topic, especially in the context of the so-called “Open Source” movement. Radical openness implies participation by outsiders, at least in the form of verbal feedback. The cluefulness of “opening yourself to your users” is made obvious in the context of successes by institutions which have at least portrayed themselves as open. What’s unfortunate, in my mind, is that many institutions now attempt to position themselves on the openness end of the “closed/proprietary to open/responsive” scale without doing much work to really open themselves up.

Communitas

Mottoes, slogans, and maxims like “build it and they will come,” “there’s a sucker born every minute,” “let them eat cake,” and “give them what they want” all fail to grasp the basic reality of social life: “they” and “we” are linked. We’re all different and we’re all connected. We all take part in groups. These groups are all associated with one another. We can’t simply behave the same way with everyone. Identity has two parts: a sense of belonging (to an “in-group”) and a sense of distinction (from an “out-group”). “Us/Them.”

Within the “in-group,” if there isn’t any obvious hierarchy, the sense of belonging can take the form that Victor Turner called “communitas,” which happens in situations giving real meaning to the notion of “community.” “Community of experience,” “community of practice.” Eckert and Wittgenstein brought to online networks. In a community, contacts aren’t always harmonious. But people feel they fully belong. A network isn’t the same thing as a community.

The World Is My Oyster

Despite the so-called “Digital Divide” (or, more precisely, the maintenance online of global inequalities), the ‘Net is truly “Global.” So is the phone, now that cellphones are accomplishing the “leapfrog effect.” But this one Internet we have (i.e., not Internet2 or other such specialized meta-network) is reaching everywhere through a single set of compatible connections. The need for cultural awareness is increased, not alleviated by online activities.

Release Early, Release Often

Among friends, we call it RERO.

The RERO principle is a multiple-pass system. Instead of waiting for the right moment to release a “perfect product” (say, a blogpost!), the “work in progress” is provided widely, garnering feedback which will be integrated in future “product versions.” The RERO approach can be unnerving to “product developers,” but it has proved its value in online-savvy contexts.

I use “product” in a broad sense because the principle applies to diverse contexts. Furthermore, the RERO principle helps shift the focus from “product,” back into “process.”

The RERO principle may imply some “emotional” or “psychological” dimensions, such as humility and the acceptance of failure. At some level, differences between RERO and “trial-and-error” methods of development appear insignificant. Those who create something should not expect the first try to be successful and should recognize mistakes to improve on the creative process and product. This is similar to the difference between “rehearsal” (low-stakes experimentation with a process) and “performance” (with responsibility, by the performer, for evaluation by an audience).

Though applications of the early/often concept to social domains are mostly satirical, there is a social dimension to the RERO principle. Releasing a “product” implies a group, a social context.

The partial and frequent “release” of work to “the public” relates directly to openness and transparency. Frequent releases create a “relationship” with human beings. Sure, many of these are “Early Adopters” who are already overrepresented. But the rapport established between an institution and people (users/clients/customers/patrons…) can be transferred more broadly.

Releasing early seems to shift the limit between rehearsal and performance. Instead of being free to make mistakes on your own, you make them publicly and your success is directly evaluated. Yet a somewhat reverse effect can occur: evaluation of the end-result becomes a lower-stake rating at different parts of the project because expectations have shifted to the “lower” end. This is probably the logic behind Google’s much-discussed propensity to call all its products “beta.”

While the RERO principle does imply a certain openness, the expectation that each release might integrate all the feedback “users” have given is not fundamental to releasing early and frequently. The expectation is set by a specific social relationship between “developers” and “users.” In geek culture, especially when users are knowledgeable enough about technology to make elaborate wishlists, the expectation to respond to user demand can be quite strong, so much so that developers may perceive a sense of entitlement on the part of “users” and grow some resentment out of the situation. “If you don’t like it, make it yourself.” Such a situation is rather common in FLOSS development: since “users” have access to the source code, they may be expected to contribute to the development project. When “users” not only fail to fulfil expectations set by open development but even have the gumption to ask developers to respond to demands, conflicts may easily occur. And conflicts are among the things which social scientists study most frequently.

Putting the “Capital” Back into “Social Capital”

In the past several years, “monetization” (transforming ideas into currency) has become one of the major foci of anything happening online. Anything which can be a source of profit generates an immediate (and temporary) “buzz.” The value of anything online is measured through typical currency-based economics. The relatively recent movement toward “social” whatever is not only representative of this tendency, but might be seen as its climax: nowadays, even social ties can be sold directly, instead of being part of a secondary transaction. As some people say, “The relationship is the currency” (or “the commodity,” or “the means to an end”). Fair enough, especially if these people understand what social relationships entail. But still strange, in context, to see people “selling their friends,” sometimes in a rather literal sense, when social relationships are conceived as valuable. After all, “selling the friend” transforms that relationship, diminishes its value. Ah, well, maybe everyone involved is just cynical. Still, even their cynicism contributes to the system. But I’m not judging. Really, I’m not. I’m just wondering.
Anyhoo, the “What are you selling, anyway?” question makes as much sense online as it does with telemarketers and other greed-focused strangers (maybe “calls” are always “cold,” online). It’s just that the answer isn’t always so clear when the “business model” revolves around creating, then breaking, a set of social expectations.
Me? I don’t sell anything. Really, not even my ideas or my sense of self. I’m just not good at selling. Oh, I do promote myself and I do accumulate social capital. As social butterflies are wont to do. The difference is, in the case of social butterflies such as myself, no money is exchanged and the social relationships are, hopefully, intact. This is not to say that friends never help me or never receive my help in a currency-friendly context. It mostly means that, in our cases, the relationships are conceived as their own rewards.
I’m consciously not taking the moral high ground, here, though some people may easily perceive this position as the morally superior one. I’m not even talking about a position. Just about an attitude to society and to social relationships. If you will, it’s a type of ethnographic observation from an insider’s perspective.

Makes sense?

Addressing Issues vs. Assessing Unproblematic Uses (Rant)

A good example of something I tend to dislike in geek culture: users who report the issues they are having get confronted with evidence that they may be wrong, instead of any acknowledgment that there might be an issue. Basically, a variant of the “blame the victim” game.

Case in point, a discussion about memory usage by Firefox 3, started by Mahmoud Al-Qudsi, on his NeoSmart blog:

Firefox 3 is Still a Memory Hog

Granted, Al-Qudsi’s post could be interpreted as a form of provocation, especially given its title. But thoughtful responses are more effective than “counterstrike,” in cases in which you want to broaden your user base.

So the ZDNet response, unsurprisingly, is to run some benchmark tests. Under their conditions, they get better results for Firefox 3 than for other browsers, excluding Firefox 2, which was the basis of Al-Qudsi’s comment. The “only logical conclusion” is that the problem is with the user. Not surprising in a geek culture which not only requires people to ask questions in a very specific way but calls that method the “smart” one.

How to ask a question the Smart Way

One issue with a piece like the ZDNet one is that those who are, say, having issues with a given browser are less likely to get those issues addressed than if the reaction had been more thoughtful. It’s fine to compare browsers or anything else under a standardized set of experimental conditions. But care should be taken to troubleshoot the issues users are saying they have. In other words, it’s especially important for tech journalists and developers to look at what users are actually saying, even if the problem is in the way they use the software. Sure, users can say weird things. But if developers don’t pay attention to users, chances are that users won’t pay attention to the tools those developers build.

The personal reason I’m interested in this issue is that I’ve been having memory issues with both Firefox 2 and Flock 1.4 (my browser of choice, at least on Windows). I rarely have the same issues in Internet Explorer 7 or Safari 3. It might be “my problem,” but it still means that, as much as I love Mozilla-based browsers, I feel compelled to switch to other browsers.

A major “selling point” for Firefox 3 (Ff3) is a set of improvements in memory management. Benchmarks and tests can help convince users to give Ff3 a try, but the reports of those who are having issues with Ff3’s memory management should prompt a thoughtful analysis of patterns in memory usage. It might just be that some sites take up more memory in Ff than in other browsers, for reasons unknown. Or that there are settings and/or extensions which are making Ff more memory-hungry. The point is, this can all be troubleshot.

Helping users get a more pleasant experience out of Ff should be a great way to expand Ff’s userbase. Even if the problem is in the way they use Ff.

Ah, well…

Browser memory usage – the good, the bad, and the down right ugly! | Hardware 2.0 | ZDNet.com

Bookish Reference

Thinking about reference books, these days.

Are models inspired by reference books (encyclopedias, dictionaries, phonebooks, atlases…) still relevant in the context of almost-ubiquitous Internet access?

I don’t have an answer but questions such as these send me on streams of thought. I like thought streaming.

One stream of thought relates to a discussion I’ve had with fellow Yulblogger Martin Lessard about “trust in sources.” IIRC, Lessard was talking more specifically about individuals but I tend to react the same way about “source credibility” whether the source is a single human being, an institution, or a piece of writing. Typically, my reaction is a knee-jerk one: “No information is to be trusted, regardless of the source. Critical thinking and the scientific method both imply that we should apply the same rigorous analysis to any piece of information, regardless of the alleged source.” But this reasoned stance of mine is confronted with the reality of people (including myself and other vocal proponents of critical thinking) acting, at least occasionally, as if we did “trust” sources differentially.

I still think that this trusting attitude toward some sources needs to be challenged in contexts which give a lot of significance to information validity. Conversely, maybe there’s value in trust because information doesn’t always have to be that valid and because it’s often more expedient to trust some sources than to “apply the same rigorous analysis to information coming from any source.”

I also think that there are different forms of trust. From a strong version which relates to faith, all the way to a weak version, tantamount to suspension of disbelief. It’s not just a question of degree as there are different origins for source-trust, from positive prior experiences with a given source to the hierarchical dimensions of social status.

A basic point, here, might be that “trust in source” is contextual, nuanced, changing, constructed… relative.

Second stream of thought: popular reference books. I’m still afraid of groupthink, but there’s something deep about some well-known references.

Just learnt, through the most recent issue of Peter Suber’s SPARC Open Access newsletter, some news about French reference-book publisher Larousse (now part of Hachette, which is owned by Lagardère) making a move toward Open Access. Through their Larousse.fr site, Larousse is not only making some of its content available for open access but it’s adding some user-contributed content to its site. As an Open Access enthusiast, I do find the OA angle interesting. But the user-content angle leads me in another direction having to do with reference books.

What may not be well-known outside of Francophone contexts is that Larousse is pretty much a “household name” in many French-speaking homes. Larousse dictionaries have been commonly used in schools and they have been selling quite well through much of the publisher’s history. Not to mention that some specialized reference books published by Larousse are quite unique.

To make this more personal: I pretty much grew up on Larousse dictionaries. In my mind, Larousse dictionaries were typically less “stuffy” and more encyclopedic in approach than other well-known French dictionaries. Not only did Larousse’s flagship Petit Larousse illustré contain numerous images, but some aspects of its supplementary content, including Latin expressions and proverbs, were very useful and convenient. At the same time, Larousse’s fairly extensive line of reference books could retain some of the prestige afforded its stuffier and less encyclopedic counterparts in the French reference book market. Perhaps because I never enjoyed stuffiness, I pretty much associated my view of erudition with Larousse dictionaries. Through a significant portion of my childhood, I spent countless hours reading disparate pieces of Larousse dictionaries. Just for fun.

So, for me, freely accessing and potentially contributing to Larousse feels strange. Can’t help but think of our battered household copies of Petit Larousse illustré. It’s a bit as if a comics enthusiast were not only given access to a set of Marvel or DC comics but could also go on the drawing board. I’ve never been “into” comics but I could recognize my childhood self as a dictionary nerd.

There’s a clear connection in my mind between my Larousse-enhanced childhood memories and my attitude toward using Wikipedia. Sure, Petit Larousse was edited in a “closed” environment, by a committee. But there was a sense of discovery with Petit Larousse that I later found with CD-ROM and online encyclopedias. I used a few of these, over the years, and I eventually stuck with Wikipedia for much of this encyclopedic fun. Like probably many others, I’ve spent some pleasant hours browsing through Wikipedia, creating in my head a more complex picture of the world.

Which is not to say that I perceive Larousse as creating a new Wikipedia. Describing the Larousse.fr move toward open access and user-contributed content, the Independent mostly compares Larousse with Wikipedia. In fact, a Larousse representative seems to have made some specific statements about trying to compete with Wikipedia. Yet, the new Larousse.fr site is significantly different from Wikipedia.

As Suber says, Larousse’s attempt is closer to Google’s knols than to Wikipedia. In contrast with the Wikipedia model but as in Google’s knol model, content contributed by users on the Larousse site preserves an explicit sense of authorship. According to the demo video for Larousse.fr, some specific features have been implemented on the site to help users gather around specific topics. Something similar has happened informally with some Wikipedians, but the Larousse site makes these features rather obvious and, as some would say, “user-friendly.” After all, while many people do contribute to Wikipedia, some groups of editors function more like tight-knit communities of aficionados than like amorphous groups of casual users. One interesting detail about the Larousse model is that user-contributed and Larousse contents run in parallel to one another. There are bridges in terms of related articles, but the distinction seems clear. Despite my tendency to wait for prestige structures to “just collapse, already,” I happen to think this model is sensible in the context of well-known reference books. Larousse is “reliable, dependable, trusty.” Like comfort food. Or like any number of items sold in commercials with an old-time feel.

So, “Wikipedia the model” is quite different from the Larousse model but both Wikipedia and Petit Larousse can be used in similar ways.

Another stream of thought, here, revolves around the venerable institution known as Encyclopædia Britannica. Britannica recently made it possible for bloggers (and other people publishing textual content online) to apply for an account giving them access to the complete online content of the encyclopedia. With this access comes the possibility to make specific articles available to our readers via simple linking, in a move reminiscent of the Financial Times model.

Since I received my “blogger accreditation to Britannica content,” I did browse some articles on Britannica.com. I receive Britannica’s “On This Day” newsletter of historical events in my inbox daily and it did lead me to some intriguing entries. I did “happen” on some interesting content and I even used Britannica links on my main blog as well as in some forum posts for a course I teach online.

But, I must say, Britannica.com is just “not doing it for me.”

For one thing, the site is cluttered and cumbersome. Content is displayed in small chunks, extra content is almost dominant, links to related items are often confusing and, more sadly, many articles just don’t have enough content to make visits satisfying or worthwhile. Not to mention that it is quite difficult to link to a specific part of the content as the site doesn’t use page anchors in a standard way.

To be honest, I was enthusiastic when I first read about Britannica.com’s blogger access. Perhaps because of the (small) thrill of getting “privileged” access to protected content, I thought I might find the site useful. But time and again, I had to resort to Wikipedia. Wikipedia, like an old Larousse dictionary, is dependable. Besides, I trust my sense of judgement not to be too affected by inaccurate or invalid information.

One aspect of my disappointment with Britannica relates to the fact that, when I write things online, I use links as a way to give readers more information, to help them exercise critical thinking, to get them thinking about some concepts and issues, and/or to play with some potential ambiguity. In all of those cases, I want to link to a resource which is straightforward, easy to access, easy to share, clear, and “open toward the rest of the world.”

Britannica is not it. Despite all its “credibility” and perceived prestige, Britannica.com isn’t providing me with the kind of service I’m looking for. I don’t need a reference book in the traditional sense. I need something to give to other people.

After waxing nostalgic about Larousse and ranting about Britannica, I realize how funny some of this may seem, from the outside. In fact, given the structure of the Larousse.fr site, I already think that I won’t find it much more useful than Britannica for my needs and I’ll surely resort to Wikipedia, yet again.

But, at least, it’s all given me the opportunity to stream some thoughts about reference books. Yes, I’m enough of a knowledge geek to enjoy it.