Academics and Their Publics

(Why Are Academics So) Misunderstood?

Misunderstood by Raffi Asdourian
Misunderstood by Raffi Asdourian

Academics are misunderstood.

Almost by definition.

Pretty much any academic eventually feels that s/he is misunderstood. Misunderstandings about some core notions in about any academic field are involved in some of the most common pet peeves among academics.

In other words, there’s nothing as transdisciplinary as misunderstanding.

It can happen in the close proximity of a given department (“colleagues in my department misunderstand my work”). It can happen through disciplinary boundaries (“people in that field have always misunderstood our field”). And, it can happen generally: “Nobody gets us.”

It’s not paranoia and it’s probably not self-victimization. But there almost seems to be a form of “onedownmanship” at stake with academics from different disciplines claiming that they’re more misunderstood than others. In fact, I personally get the feeling that ethnographers are more among the most misunderstood people around, but even short discussions with friends in other fields (including mathematics) have helped me get the idea that, basically, we’re all misunderstood at the same “level” but there are variations in the ways we’re misunderstood. For instance, anthropologists in general are mistaken for what they aren’t based on partial understanding by the general population.

An example from my own experience, related to my decision to call myself an “informal ethnographer.” When you tell people you’re an anthropologist, they form an image in their minds which is very likely to be inaccurate. But they do typically have an image in their minds. On the other hand, very few people have any idea about what “ethnography” means, so they’re less likely to form an opinion of what you do from prior knowledge. They may puzzle over the term and try to take a guess as to what “ethnographer” might mean but, in my experience, calling myself an “ethnographer” has been a more efficient way to be understood than calling myself an “anthropologist.”

This may all sound like nitpicking but, from the inside, it’s quite impactful. Linguists are frequently asked about the number of languages they speak. Mathematicians are taken to be number freaks. Psychologists are perceived through the filters of “pop psych.” There are many stereotypes associated with engineers. Etc.

These misunderstandings have an impact on anyone’s work. Not only can it be demoralizing and can it impact one’s sense of self-worth, but it can influence funding decisions as well as the use of research results. These misunderstandings can underminine learning across disciplines. In survey courses, basic misunderstandings can make things very difficult for everyone. At a rather basic level, academics fight misunderstandings more than they fight ignorance.

The  main reason I’m discussing this is that I’ve been given several occasions to think about the interface between the Ivory Tower and the rest of the world. It’s been a major theme in my blogposts about intellectuals, especially the ones in French. Two years ago, for instance, I wrote a post in French about popularizers. A bit more recently, I’ve been blogging about specific instances of misunderstandings associated with popularizers, including Malcolm Gladwell’s approach to expertise. Last year, I did a podcast episode about ethnography and the Ivory Tower. And, just within the past few weeks, I’ve been reading a few things which all seem to me to connect with this same issue: common misunderstandings about academic work. The connections are my own, and may not be so obvious to anyone else. But they’re part of my motivations to blog about this important issue.

In no particular order:

But, of course, I think about many other things. Including (again, in no particular order):

One discussion I remember, which seems to fit, included comments about Germaine Dieterlen by a friend who also did research in West Africa. Can’t remember the specifics but the gist of my friend’s comment was that “you get to respect work by the likes of Germaine Dieterlen once you start doing field research in the region.” In my academic background, appreciation of Germaine Dieterlen’s may not be unconditional, but it doesn’t necessarily rely on extensive work in the field. In other words, while some parts of Dieterlen’s work may be controversial and it’s extremely likely that she “got a lot of things wrong,” her work seems to be taken seriously by several French-speaking africanists I’ve met. And not only do I respect everyone but I would likely praise someone who was able to work in the field for so long. She’s not my heroine (I don’t really have heroes) or my role-model, but it wouldn’t have occurred to me that respect for her wasn’t widespread. If it had seemed that Dieterlen’s work had been misunderstood, my reflex would possibly have been to rehabilitate her.

In fact, there’s  a strong academic tradition of rehabilitating deceased scholars. The first example which comes to mind is a series of articles (PDF, in French) and book chapters by UWO linguistic anthropologist Regna Darnell.about “Benjamin Lee Whorf as a key figure in linguistic anthropology.” Of course, saying that these texts by Darnell constitute a rehabilitation of Whorf reveals a type of evaluation of her work. But that evaluation comes from a third person, not from me. The likely reason for this case coming up to my mind is that the so-called “Sapir-Whorf Hypothesis” is among the most misunderstood notions from linguistic anthropology. Moreover, both Whorf and Sapir are frequently misunderstood, which can make matters difficulty for many linguistic anthropologists talking with people outside the discipline.

The opposite process is also common: the “slaughtering” of “sacred cows.” (First heard about sacred cows through an article by ethnomusicologist Marcia Herndon.) In some significant ways, any scholar (alive or not) can be the object of not only critiques and criticisms but a kind of off-handed dismissal. Though this often happens within an academic context, the effects are especially lasting outside of academia. In other words, any scholar’s name is likely to be “sullied,” at one point or another. Typically, there seems to be a correlation between the popularity of a scholar and the likelihood of her/his reputation being significantly tarnished at some point in time. While there may still be people who treat Darwin, Freud, Nietzsche, Socrates, Einstein, or Rousseau as near divinities, there are people who will avoid any discussion about anything they’ve done or said. One way to put it is that they’re all misunderstood. Another way to put it is that their main insights have seeped through “common knowledge” but that their individual reputations have decreased.

Perhaps the most difficult case to discuss is that of Marx (Karl, not Harpo). Textbooks in introductory sociology typically have him as a key figure in the discipline and it seems clear that his insight on social issues was fundamental in social sciences. But, outside of some key academic contexts, his name is associated with a large series of social events about which people tend to have rather negative reactions. Even more so than for Paul de Man or  Martin Heidegger, Marx’s work is entangled in public opinion about his ideas. Haven’t checked for examples but I’m quite sure that Marx’s work is banned in a number of academic contexts. However, even some of Marx’s most ardent opponents are likely to agree with several aspects of Marx’s work and it’s sometimes funny how Marxian some anti-Marxists may be.

But I digress…

Typically, the “slaughtering of sacred cows” relates to disciplinary boundaries instead of social ones. At least, there’s a significant difference between your discipline’s own “sacred cows” and what you perceive another discipline’s “sacred cows” to be. Within a discipline, the process of dismissing a prior scholar’s work is almost œdipean (speaking of Freud). But dismissal of another discipline’s key figures is tantamount to a rejection of that other discipline. It’s one thing for a physicist to show that Newton was an alchemist. It’d be another thing entirely for a social scientist to deconstruct James Watson’s comments about race or for a theologian to argue with Darwin. Though discussions may have to do with individuals, the effects of the latter can widen gaps between scholarly disciplines.

And speaking of disciplinarity, there’s a whole set of issues having to do with discussions “outside of someone’s area of expertise.” On one side, comments made by academics about issues outside of their individual areas of expertise can be very tricky and can occasionally contribute to core misunderstandings. The fear of “talking through one’s hat” is quite significant, in no small part because a scholar’s prestige and esteem may greatly decrease as a result of some blatantly inaccurate statements (although some award-winning scholars seem not to be overly impacted by such issues).

On the other side, scholars who have to impart expert knowledge to people outside of their discipline  often have to “water down” or “boil down” their ideas and, in effect, oversimplifying these issues and concepts. Partly because of status (prestige and esteem), lowering standards is also very tricky. In some ways, this second situation may be more interesting. And it seems unavoidable.

How can you prevent misunderstandings when people may not have the necessary background to understand what you’re saying?

This question may reveal a rather specific attitude: “it’s their fault if they don’t understand.” Such an attitude may even be widespread. Seems to me, it’s not rare to hear someone gloating about other people “getting it wrong,” with the suggestion that “we got it right.”  As part of negotiations surrounding expert status, such an attitude could even be a pretty rational approach. If you’re trying to position yourself as an expert and don’t suffer from an “impostor syndrome,” you can easily get the impression that non-specialists have it all wrong and that only experts like you can get to the truth. Yes, I’m being somewhat sarcastic and caricatural, here. Academics aren’t frequently that dismissive of other people’s difficulties understanding what seem like simple concepts. But, in the gap between academics and the general population a special type of intellectual snobbery can sometimes be found.

Obviously, I have a lot more to say about misunderstood academics. For instance, I wanted to address specific issues related to each of the links above. I also had pet peeves about widespread use of concepts and issues like “communities” and “Eskimo words for snow” about which I sometimes need to vent. And I originally wanted this post to be about “cultural awareness,” which ends up being a core aspect of my work. I even had what I might consider a “neat” bit about public opinion. Not to mention my whole discussion of academic obfuscation (remind me about “we-ness and distinction”).

But this is probably long enough and the timing is right for me to do something else.

I’ll end with an unverified anecdote that I like. This anecdote speaks to snobbery toward academics.

[It’s one of those anecdotes which was mentioned in a course I took a long time ago. Even if it’s completely fallacious, it’s still inspiring, like a tale, cautionary or otherwise.]

As the story goes (at least, what I remember of it), some ethnographers had been doing fieldwork  in an Australian cultural context and were focusing their research on a complex kinship system known in this context. Through collaboration with “key informants,” the ethnographers eventually succeeded in understanding some key aspects of this kinship system.

As should be expected, these kinship-focused ethnographers wrote accounts of this kinship system at the end of their field research and became known as specialists of this system.

After a while, the fieldworkers went back to the field and met with the same people who had described this kinship system during the initial field trip. Through these discussions with their “key informants,” the ethnographers end up hearing about a radically different kinship system from the one about which they had learnt, written, and taught.

The local informants then told the ethnographers: “We would have told you earlier about this but we didn’t think you were able to understand it.”

Jazz and Identity: Comment on Lydon’s Iyer Interview

My reactions to @RadioOpenSource interview with Vijay Iyer.

Radio Open Source » Blog Archive » Vijay Iyer’s Life in Music: “Striving is the Back Story…”.

Sounds like it will be a while before the United States becomes a truly post-racial society.

Iyer can define himself as American and he can even one-up other US citizens in Americanness, but he’s still defined by his having “a Brahmin Indian name and heritage, and a Yale degree in physics.”

Something by which I was taken aback, at IU Bloomington ten years ago, is the fact that those who were considered to be “of color” (as if colour were the factor!) were expected to mostly talk about their “race” whereas those who were considered “white” were expected to remain silent when notions of “race” and ethnicity came up for discussion. Granted, ethnicity and “race” were frequently discussed, so it was possible to hear the voices of those “of color” on a semi-regular basis. Still, part of my culture shock while living in the MidWest was the conspicuous silence of students with brilliant ideas who happened to be considered African-American.

Something similar happened with gender, on occasion, in that women were strongly encouraged to speak out…when a gender angle was needed. Thankfully, some of these women (at least, among those whose “racial” identity was perceived as neutral) did speak up, regardless of topic. But there was still an expectation that when they did, their perspective was intimately gendered.

Of course, some gender lines were blurred: the gender ratio among faculty members was relatively balanced (probably more women than men), the chair of the department was a woman for a time, and one department secretary was a man. But women’s behaviours were frequently interpreted in a gender-specific way, while men were often treated as almost genderless. Male privilege manifested itself in the fact that it was apparently difficult for women not to be gender-conscious.

Those of us who were “international students” had the possibility to decide when our identities were germane to the discussion. At least, I was able to push my «différence» when I so pleased, often by becoming the token Francophone in discussions about Francophone scholars, yet being able not to play the “Frenchie card” when I didn’t find it necessary. At the same time, my behaviour may have been deemed brash and a fellow student teased me by calling me “Mr. Snottyhead.” As an instructor later told me, “it’s just that, since you’re Canadian, we didn’t expect you to be so different.” (My response: “I know some Canadians who would despise that comment. But since I’m Québécois, it doesn’t matter.”) This was in reference to a seminar with twenty students, including seven “internationals”: one Zimbabwean, one Swiss-German, two Koreans, one Japanese, one Kenyan, and one “Québécois of Swiss heritage.” In this same graduate seminar, the instructor expected everyone to know of Johnny Appleseed and of John Denver.

Again, a culture shock. Especially for someone coming from a context in which the ethnic identity of the majority is frequently discussed and in which cultural identity is often “achieved” instead of being ascribed. This isn’t to say that Quebec society is devoid of similar issues. Everybody knows, Quebec has more than its fair share of identity-based problems. The fact of the matter is, Quebec society is entangled in all sorts of complex identity issues, and for many of those, Quebec may appear underprepared. The point is precisely that, in Quebec, identity politics is a matter for everyone. Nobody has the luxury to treat their identity as “neutral.”

Going back to Iyer… It’s remarkable that his thoughtful comments on Jazz end up associated more with his background than with his overall approach. As if what he had to say were of a different kind than those from Roy Hayes or Robin Kelley. As if Iyer had more in common with Koo Nimo than with, say, Sonny Rollins. Given Lydon’s journalistic background, it’s probably significant that the Iyer conversation carried the “Life in Music” name of  the show’s music biography series yet got “filed under” the show’s “Year of India” series. I kid you not.

And this is what we hear at the end of each episode’s intro:

This is Open Source, from the Watson Institute at Brown University. An American conversation with Global attitude, we call it.

Guess the “American” part was taken by Jazz itself, so Iyer was assigned the “Global” one. Kind of wishing the roles were reversed, though Iyer had rehearsed his part.

But enough symbolic interactionism. For now.

During Lydon’s interview with Iyer, I kept being reminded of a conversation (in Brookline)  with fellow Canadian-ethnomusicologist-and-Jazz-musician Tanya Kalmanovitch. Kalmanovitch had fantastic insight to share on identity politics at play through the international (yet not post-national) Jazz scene. In fact, methinks she’d make a great Open Source guest. She lives in Brooklyn but works as assistant chair of contemporary improv at NEC, in B-Town, so Lydon could probably meet her locally.

Anyhoo…

In some ways, Jazz is more racialized and ethnicized now than it was when Howie Becker published Outsiders. (hey, I did hint symbolic interactionism’d be back!). It’s also very national, gendered, compartmentalized… In a word: modern. Of course, Jazz (or something like it) shall play a role in postmodernity. But only if it sheds itself of its modernist trappings. We should hear out Kevin Mahogany’s (swung) comments about a popular misconception:

Some cats work from nine to five
Change their life for line of jive
Never had foresight to see
Where the changes had to be
Thought that they had heard the word
Thought it all died after Bird
But we’re still swingin’

The following anecdote seems à propos.

Branford Marsalis quartet on stage outside at the Indy Jazz Fest 1999. Some dude in the audience starts heckling the band: “Play something we know!” Marsalis, not losing his cool, engaged the heckler in a conversation on Jazz history, pushing the envelope, playing the way you want to play, and expected behaviour during shows. Though the audience sounded divided when Marsalis advised the heckler to go to Chaka Khan‘s show on the next stage over, if that was more to the heckler’s liking, there wasn’t a major shift in the crowd and, hopefully, most people understood how respectful Marsalis’s comments really were. What was especially precious is when Marsalis asked the heckler: “We’re cool, man?”

It’s nothing personal.

I Hate Books

I want books dead. For social reasons.

In a way, this is a followup to a discussion happening on Facebook after something I posted (available publicly on Twitter): “(Alexandre) wishes physical books a quick and painfree death. / aime la connaissance.”

As I expected, the reactions I received were from friends who are aghast: how dare I dismiss physical books? Don’t I know no shame?

Apparently, no, not in this case.

And while I posted it as a quip, it’s the result of a rather long reflection. It’s not that I’m suddenly anti-books. It’s that I stopped buying several of the “pro-book” arguments a while ago.

Sure, sure. Books are the textbook case of technlogy which needs no improvement. eBooks can’t replace the experience of doing this or that with a book. But that’s what folkloristics defines as a functional shift. Like woven baskets which became objects of nostalgia, books are being maintained as the model for a very specific attitude toward knowledge construction based on monolithic authored texts vetted by gatekeepers and sold as access to information.

An important point, here, is that I’m not really thinking about fiction. I used to read two novel-length works a week (collections of short stories, plays…), for a period of about 10 years (ages 13 to 23). So, during that period, I probably read about 1,000 novels, ranging from Proust’s Recherche to Baricco’s Novecentoand the five books of Rabelais’s Pantagruel series. This was after having read a fair deal of adolescent and young adult fiction. By today’s standards, I might be considered fairly well-read.

My life has changed a lot, since that time. I didn’t exactly stop reading fiction but my move through graduate school eventually shifted my reading time from fiction to academic texts. And I started writing more and more, online and offline.
In the same time, the Web had also been making me shift from pointed longform texts to copious amounts of shortform text. Much more polyvocal than what Bakhtin himself would have imagined.

(I’ve also been shifting from French to English, during that time. But that’s almost another story. Or it’s another part of the story which can reamin in the backdrop without being addressed directly at this point. Ask, if you’re curious.)
The increase in my writing activity is, itself, a shift in the way I think, act, talk… and get feedback. See, the fact that I talk and write a lot, in a variety of circumstances, also means that I get a lot of people to play along. There’s still a risk of groupthink, in specific contexts, but one couldn’t say I keep getting things from the same perspective. In fact, the very Facebook conversation which sparked this blogpost is an example, as the people responding there come from relatively distant backgrounds (though there are similarities) and were not specifically queried about this. Their reactions have a very specific value, to me. Sure, it comes in the form of writing. But it’s giving me even more of something I used to find in writing: insight. The stuff you can’t get through Google.

So, back to books.

I dislike physical books. I wish I didn’t have to use them to read what I want to read. I do have a much easier time with short reading sessions on a computer screen that what would turn into rather long periods of time holding a book in my hands.

Physical books just don’t do it for me, anymore. The printing press is, like, soooo 1454!

Yes, books had “a good run.” No, nothing replaces them. That’s not the way it works. Movies didn’t replace theater, television didn’t replace radio, automobiles didn’t replace horses, photographs didn’t replace paintings, books didn’t replace orality. In fact, the technology itself doesn’t do much by itself. But social contexts recontextualize tools. If we take technology to be the set of both tools and the knowledge surrounding it, technology mostly goes through social processes, since tool repertoires and corresponding knowledge mostly shift in social contexts, not in their mere existence. Gutenberg’s Bible was a “game-changer” for social, as well as technical reasons.

And I do insist on orality. Journalists and other “communication is transmission of information” followers of Shannon&Weaver tend to portray writing as the annihilation of orality. How long after the invention of writing did Homer transfer an oral tradition to the writing media? Didn’t Albert Lord show the vitality of the epic well into the 20th Century? Isn’t a lot of our knowledge constructed through oral means? Is Internet writing that far, conceptually, from orality? Is literacy a simple on/off switch?

Not only did I maintain an interest in orality through the most book-focused moments of my life but I probably care more about orality now than I ever did. So I simply cannot accept the idea that books have simply replaced the human voice. It doesn’t add up.

My guess is that books won’t simply disappear either. There should still be a use for “coffee table books” and books as gifts or collectables. Records haven’t disappeared completely and CDs still have a few more days in dedicated stores. But, in general, we’re moving away from the “support medium” for “content” and more toward actual knowledge management in socially significant contexts.

In these contexts, books often make little sense. Reading books is passive while these contexts are about (hyper-)/(inter-)active.

Case in point (and the reason I felt compelled to post that Facebook/Twitter quip)…
I hear about a “just released” French book during a Swiss podcast. Of course, it’s taken a while to write and publish. So, by the time I heard about it, there was no way to participate in the construction of knowledge which led to it. It was already “set in stone” as an “opus.”

Looked for it at diverse bookstores. One bookstore could eventually order it. It’d take weeks and be quite costly (for something I’m mostly curious about, not depending on for something really important).

I eventually find it in the catalogue at BANQ. I reserve it. It wasn’t on the shelves, yet, so I had to wait until it was. It took from November to February. I eventually get a message that I have a couple of days to pick up my reservation but I wasn’t able to go. So it went back on the “just released” shelves. I had the full call number but books in that section aren’t in their call number sequence. I spent several minutes looking back and forth between eight shelves to eventually find out that there were four more shelves in the “humanities and social sciences” section. The book I was looking was on one of those shelves.

So, I was able to borrow it.

Phew!

In the metro, I browse through it. Given my academic reflex, I look for the back matter first. No bibliography, no index, a ToC with rather obscure titles (at random: «Taylor toujours à l’œuvre»/”Taylor still at work,” which I’m assuming to be a reference to continuing taylorism). The book is written by two separate dudes but there’s no clear indication of who wrote what. There’s a preface (by somebody else) but no “acknowledgments” section, so it’s hard to see who’s in their network. Footnotes include full URLs to rather broad sites as well as “discussion with <an author’s name>.” The back cover starts off with references to French popular culture (including something about “RER D,” which would be difficult to search). Information about both authors fits in less than 40 words (including a list of publication titles).

The book itself is fairly large print, ways almost a pound (422g, to be exact) for 327 pages (including front and back matter). Each page seems to be about 50 characters per line, about 30 lines per page. So, about half a million characters or 3500 tweets (including spaces). At 5+1 characters per word, about 80,000 words (I have a 7500-words blogpost, written in an afternoon). At about 250 words per minute, about five hours of reading. This book is listed at 19€ (about 27CAD).
There’s no direct way to do any “postprocessing” with the text: no speech synthesis for visually impaired, concordance analysis, no machine translation, even a simple search for occurences of “Sarkozy” is impossible. Not to mention sharing quotes with students or annotating in an easy-to-retrieve fashion (à la Diigo).

Like any book, it’s impossible to read in the dark and I actually have a hard time to find a spot where I can read with appropriate lighting.

Flipping through the book, I get the impression that there’s some valuable things to spark discussions, but there’s also a whole lot of redundancy with frequent discussions on the topic (the Future of Journalism, or #FoJ, as a matter of fact). My guesstimate is that, out of 5 hours of reading, I’d get at most 20 pieces of insight that I’d have exactly no way to find elsewhere. Comparable books to which I listened as audiobooks, recently, had much less. In other words, I’d have at most 20 tweets worth of things to say from the book. Almost a 200:1 compression.
Direct discussion with the authors could produce much more insight. The radio interviews with these authors already contained a few insight hints, which predisposed me to look for more. But, so many months later, without the streams of thought which animated me at the time, I end up with something much less valuable than what I wanted to get, back in November.

Bottomline: Books aren’t necessarily “broken” as a tool. They just don’t fit my life, anymore.

Development and Quality: Reply to Agile Diary

Getting on the soapbox about developers.

Former WiZiQ product manager Vikrama Dhiman responded to one of my tweets with a full-blown blogpost, thereby giving support to Matt Mullenweg‘s point that microblogging goes hand-in-hand with “macroblogging.”

My tweet:

enjoys draft æsthetics yet wishes more developers would release stable products. / adopte certains produits trop rapidement.

Vikrama’s post:

Good Enough Software Does Not Mean Bad Software « Agile Diary, Agile Introduction, Agile Implementation.

My reply:

“To an engineer, good enough means perfect. With an artist, there’s no such thing as perfect.” (Alexander Calder)

Thanks a lot for your kind comments. I’m very happy that my tweet (and status update) triggered this.

A bit of context for my tweet (actually, a post from Ping.fm, meant as a status update, thereby giving support in favour of conscious duplication, «n’en déplaise aux partisans de l’action contre la duplication».)

I’ve been thinking about what I call the “draft æsthetics.” In fact, I did a podcast episode about it. My description of that episode was:

Sometimes, there is such a thing as “Good Enough.”

Though I didn’t emphasize the “sometimes” part in that podcast episode, it was an important part of what I wanted to say. In fact, my intention wasn’t to defend draft æsthetics but to note that there seems to be a tendency toward this æsthetic mode. I do situate myself within that mode in many things I do, but it really doesn’t mean that this mode should be the exclusive one used in any context.

That aforequoted tweet was thus a response to my podcast episode on draft æsthetics. “Yes, ‘good enough’ may work, sometimes. But it needs not be applied in all cases.”

As I often get into convoluted discussions with people who seem to think that I condone or defend a position because I take it for myself, the main thing I’d say there is that I’m not only a relativist but I cherish nuance. In other words, my tweet was a way to qualify the core statement I was talking about in my podcast episode (that “good enough” exists, at times). And that statement isn’t necessarily my own. I notice a pattern by which this statement seems to be held as accurate by people. I share that opinion, but it’s not a strongly held belief of mine.

Of course, I digress…

So, the tweet which motivated Vikrama had to do with my approach to “good enough.” In this case, I tend to think about writing but in view of Eric S. Raymond’s approach to “Release Early, Release Often” (RERO). So there is a connection to software development and geek culture. But I think of “good enough” in a broader sense.

Disclaimer: I am not a coder.

The Calder quote remained in my head, after it was mentioned by a colleague who had read it in a local newspaper. One reason it struck me is that I spend some time thinking about artists and engineers, especially in social terms. I spend some time hanging out with engineers but I tend to be more on the “artist” side of what I perceive to be an axis of attitudes found in some social contexts. I do get a fair deal of flack for some of my comments on this characterization and it should be clear that it isn’t meant to imply any evaluation of individuals. But, as a model, the artist and engineer distinction seems to work, for me. In a way, it seems more useful than the distinction between science and art.

An engineer friend with whom I discussed this kind of distinction was quick to point out that, to him, there’s no such thing as “good enough.” He was also quick to point out that engineers can be creative and so on. But the point isn’t to exclude engineers from artistic endeavours. It’s to describe differences in modes of thought, ways of knowing, approaches to reality. And the way these are perceived socially. We could do a simple exercise with terms like “troubleshooting” and “emotional” to be assigned to the two broad categories of “engineer” and “artist.” Chances are that clear patterns would emerge. Of course, many concepts are as important to both sides (“intelligence,” “innovation”…) and they may also be telling. But dichotomies have heuristic value.

Now, to go back to software development, the focus in Vikrama’s Agile Diary post…

What pushed me to post my status update and tweet is in fact related to software development. Contrary to what Vikrama presumes, it wasn’t about a Web application. And it wasn’t even about a single thing. But it did have to do with firmware development and with software documentation.

The first case is that of my Fonera 2.0n router. Bought it in early November and I wasn’t able to connect to its private signal using my iPod touch. I could connect to the router using the public signal, but that required frequent authentication, as annoying as with ISF. Since my iPod touch is my main WiFi device, this issue made my Fonera 2.0n experience rather frustrating.

Of course, I’ve been contacting Fon‘s tech support. As is often the case, that experience was itself quite frustrating. I was told to reset my touch’s network settings which forced me to reauthenticate my touch on a number of networks I access regularly and only solved the problem temporarily. The same tech support person (or, at least, somebody using the same name) had me repeat the same description several times in the same email message. Perhaps unsurprisingly, I was also told to use third-party software which had nothing to do with my issue. All in all, your typical tech support experience.

But my tweet wasn’t really about tech support. It was about the product. Thougb I find the overall concept behind the Fonera 2.0n router very interesting, its implementation seems to me to be lacking. In fact, it reminds me of several FLOSS development projects that I’ve been observing and, to an extent, benefitting from.

This is rapidly transforming into a rant I’ve had in my “to blog” list for a while about “thinking outside the geek box.” I’ll try to resist the temptation, for now. But I can mention a blog thread which has been on my mind, in terms of this issue.

Firefox 3 is Still a Memory Hog — The NeoSmart Files.

The blogpost refers to a situation in which, according to at least some users (including the blogpost’s author), Firefox uses up more memory than it should and becomes difficult to use. The thread has several comments providing support to statements about the relatively poor performance of Firefox on people’s systems, but it also has “contributions” from an obvious troll, who keeps assigning the problem on the users’ side.

The thing about this is that it’s representative of a tricky issue in the geek world, whereby developers and users are perceived as belonging to two sides of a type of “class struggle.” Within the geek niche, users are often dismissed as “lusers.” Tech support humour includes condescending jokes about “code 6”: “the problem is 6″ from the screen.” The aforementioned Eric S. Raymond wrote a rather popular guide to asking questions in geek circles which seems surprisingly unaware of social and cultural issues, especially from someone with an anthropological background. Following that guide, one should switch their mind to that of a very effective problem-solver (i.e., the engineer frame) to ask questions “the smart way.” Not only is the onus on users, but any failure to comply with these rules may be met with this air of intellectual superiority encoded in that guide. IOW, “Troubleshoot now, ask questions later.”

Of course, many users are “guilty” of all sorts of “crimes” having to do with not reading the documentation which comes with the product or with simply not thinking about the issue with sufficient depth before contacting tech support. And as the majority of the population is on the “user” side, the situation can be described as both a form of marginalization (geek culture comes from “nerd” labels) and a matter of elitism (geek culture as self-absorbed).

This does have something to do with my Fonera 2.0n. With it, I was caught in this dynamic whereby I had to switch to the “engineer frame” in order to solve my problem. I eventually did solve my Fonera authentication problem, using a workaround mentioned in a forum post about another issue (free registration required). Turns out, the “release candidate” version of my Fonera’s firmware does solve the issue. Of course, this new firmware may cause other forms of instability and installing it required a bit of digging. But it eventually worked.

The point is that, as released, the Fonera 2.0n router is a geek toy. It’s unpolished in many ways. It’s full of promise in terms of what it may make possible, but it failed to deliver in terms of what a router should do (route a signal). In this case, I don’t consider it to be a finished product. It’s not necessarily “unstable” in the strict sense that a software engineer might use the term. In fact, I hesitated between different terms to use instead of “stable,” in that tweet, and I’m not that happy with my final choice. The Fonera 2.0n isn’t unstable. But it’s akin to an alpha version released as a finished product. That’s something we see a lot of, these days.

The main other case which prompted me to send that tweet is “CivRev for iPhone,” a game that I’ve been playing on my iPod touch.

I’ve played with different games in the Civ franchise and I even used the FLOSS version on occasion. Not only is “Civilization” a geek classic, but it does connect with some anthropological issues (usually in a problematic view: Civ’s worldview lacks anthro’s insight). And it’s the kind of game that I can easily play while listening to podcasts (I subscribe to a number of th0se).

What’s wrong with that game? Actually, not much. I can’t even say that it’s unstable, unlike some other items in the App Store. But there’s a few things which aren’t optimal in terms of documentation. Not that it’s difficult to figure out how the game works. But the game is complex enough that some documentation is quite useful. Especially since it does change between one version of the game and another. Unfortunately, the online manual isn’t particularly helpful. Oh, sure, it probably contains all the information required. But it’s not available offline, isn’t optimized for the device it’s supposed to be used with, doesn’t contain proper links between sections, isn’t directly searchable, and isn’t particularly well-written. Not to mention that it seems to only be available in English even though the game itself is available in multiple languages (I play it in French).

Nothing tragic, of course. But coupled with my Fonera experience, it contributed to both a slight sense of frustration and this whole reflection about unfinished products.

Sure, it’s not much. But it’s “good enough” to get me started.

Transparency and Secrecy

Musings on transparency and secrecy, related to both my professional reorientation and my personal life.

[Started working on this post on December 1st, based on something which happened a few days prior. Since then, several things happened which also connected to this post. Thought the timing was right to revisit the entry and finally publish it. Especially since a friend just teased me for not blogging in a while.]

I’m such a strong advocate of transparency that I have a real problem with secrecy.

I know, transparency is not exactly the mirror opposite of secrecy. But I think my transparency-radical perspective causes some problem in terms of secrecy-management.

“Haven’t you been working with a secret society in Mali?,” you ask. Well, yes, I have. And secrecy hasn’t been a problem in that context because it’s codified. Instead of a notion of “absolute secrecy,” the Malian donsow I’ve been working with have a subtle, nuanced, complex, layered, contextually realistic, elaborate, and fascinating perspective on how knowledge is processed, “transmitted,” managed. In fact, my dissertation research had a lot to do with this form of knowledge management. The term “knowledge people” (“karamoko,” from kalan+mogo=learning+people) truly applies to members of hunter’s associations in Mali as well as to other local experts. These people make a clear difference between knowledge and information. And I can readily relate to their approach. Maybe I’ve “gone native,” but it’s more likely that I was already in that mode before I ever went to Mali (almost 11 years ago).

Of course, a high value for transparency is a hallmark of academia. The notion that “information wants to be free” makes more sense from an academic perspective than from one focused on a currency-based economy. Even when people are clear that “free” stands for “freedom”/«libre» and not for “gratis”/«gratuit» (i.e. “free as in speech, not free as in beer”), there persists a notion that “free comes at a cost” among those people who are so focused on growth and profit. IMHO, most the issues with the switch to “immaterial economies” (“information economy,” “attention economy,” “digital economy”) have to do with this clash between the value of knowledge and a strict sense of “property value.”

But I digress.

Or, do I…?

The phrase “radical transparency” has been used in business circles related to “information and communication technology,” a context in which the “information wants to be free” stance is almost the basis of a movement.

I’m probably more naïve than most people I have met in Mali. While there, a friend told me that he thought that people from the United States were naïve. While he wasn’t referring to me, I can easily acknowledge that the naïveté he described is probably characteristic of my own attitude. I’m North American enough to accept this.

My dedication to transparency was tested by an apparently banal set of circumstances, a few days before I drafted this post. I was given, in public, information which could potentially be harmful if revealed to a certain person. The harm which could be done is relatively small. The person who gave me that information wasn’t overstating it. The effects of my sharing this information wouldn’t be tragic. But I was torn between my radical transparency stance and my desire to do as little harm as humanly possible. So I refrained from sharing this information and decided to write this post instead.

And this post has been sitting in my “draft box” for a while. I wrote a good number of entries in the meantime but I still had this one at the back of my mind. On the backburner. This is where social media becomes something more of a way of life than an activity. Even when I don’t do anything on this blog, I think about it quite a bit.

As mentioned in the preamble, a number of things have happened since I drafted this post which also relate to transparency and secrecy. Including both professional and personal occurrences. Some of these comfort me in my radical transparency position while others help me manage secrecy in a thoughtful way.

On the professional front, first. I’ve recently signed a freelance ethnography contract with Toronto-based consultancy firm Idea Couture. The contract included a non-disclosure agreement (NDA). Even before signing the contract/NDA, I was asking fellow ethnographer and blogger Morgan Gerard about disclosure. Thanks to him, I now know that I can already disclose several things about this contract and that, once the results are public, I’ll be able to talk about this freely. Which all comforts me on a very deep level. This is precisely the kind of information and knowledge management I can relate to. The level of secrecy is easily understandable (inopportune disclosure could be detrimental to the client). My commitment to transparency is unwavering. If all contracts are like this, I’ll be quite happy to be a freelance ethnographer. It may not be my only job (I already know that I’ll be teaching online, again). But it already fits in my personal approach to information, knowledge, insight.

I’ll surely blog about private-sector ethnography. At this point, I’ve mostly been preparing through reading material in the field and discussing things with friends or colleagues. I was probably even more careful than I needed to be, but I was still able to exchange ideas about market research ethnography with people in diverse fields. I sincerely think that these exchanges not only add value to my current work for Idea Couture but position me quite well for the future. I really am preparing for freelance ethnography. I’m already thinking like a freelance ethnographer.

There’s a surprising degree of “cohesiveness” in my life, these days. Or, at least, I perceive my life as “making sense.”

And different things have made me say that 2009 would be my year. I get additional evidence of this on a regular basis.

Which brings me to personal issues, still about transparency and secrecy.

Something has happened in my personal life, recently, that I’m currently unable to share. It’s a happy circumstance and I’ll be sharing it later, but it’s semi-secret for now.

Thing is, though, transparency was involved in that my dedication to radical transparency has already been paying off in these personal respects. More specifically, my being transparent has been valued rather highly and there’s something about this type of validation which touches me deeply.

As can probably be noticed, I’m also becoming more public about some emotional dimensions of my life. As an artist and a humanist, I’ve always been a sensitive person, in-tune with his emotions. Specially positive ones. I now feel accepted as a sensitive person, even if several people in my life tend to push sensitivity to the side. In other words, I’ve grown a lot in the past several months and I now want to share my growth with others. Despite reluctance toward the “touchy-feely,” specially in geek and other male-centric circles, I’ve decided to “let it all loose.” I fully respect those who dislike this. But I need to be myself.

Quest for Expertise

Who came up with the “rule of thumb” which says that it takes “ten (10) years and/or ten thousand (10,000) hours to become an expert?”

Will at Work Learning: People remember 10%, 20%…Oh Really?.

This post was mentioned on the mailing-list for the Society for Teaching and Learning in Higher Education (STLHE-L).

In that post, Will Thalheimer traces back a well-known claim about learning to shoddy citations. While it doesn’t invalidate the base claim (that people tend to retain more information through certain cognitive processes), Thalheimer does a good job of showing how a graph which has frequently been seen in educational fields was based on faulty interpretation of work by prominent scholars, mixed with some results from other sources.

Quite interesting. IMHO, demystification and critical thinking are among the most important things we can do in academia. In fact, through training in folkloristics, I have become quite accustomed to this specific type of debunking.

I have in mind a somewhat similar claim that I’m currently trying to trace. Preliminary searches seem to imply that citations of original statements have a similar hyperbolic effect on the status of this claim.

The claim is what a type of “rule of thumb” in cognitive science. A generic version could be stated in the following way:

It takes ten years or 10,000 hours to become an expert in any field.

The claim is a rather famous one from cognitive science. I’ve heard it uttered by colleagues with a background in cognitive science. In 2006, I first heard about such a claim from Philip E. Ross, on an episode of Scientific American‘s Science Talk podcast to discuss his article on expertise. I later read a similar claim in Daniel Levitin’s 2006 This Is Your Brain On Music. The clearest statement I could find back in Levitin’s book is the following (p. 193):

The emerging picture from such studies is that ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert – in anything.

More recently, during a keynote speech he was giving as part of his latest book tour, I heard a similar claim from presenter extraordinaire Malcolm Gladwell. AFAICT, this claim runs at the centre of Gladwell’s recent book: Outliers: The Story of Success. In fact, it seems that Gladwell uses the same quote from Levitin, on page 40 of Outliers (I just found that out).

I would like to pinpoint the origin for the claim. Contrary to Thalheimer’s debunking, I don’t expect that my search will show that the claim is inaccurate. But I do suspect that the “rule of thumb” versions may be a bit misled. I already notice that most people who set up such claims are doing so without direct reference to the primary literature. This latter comment isn’t damning: in informal contexts, constant referal to primary sources can be extremely cumbersome. But it could still be useful to clear up the issue. Who made this original claim?

I’ve tried a few things already but it’s not working so well. I’m collecting a lot of references, to both online and printed material. Apart from Levitin’s book and a few online comments, I haven’t yet read the material. Eventually, I’d probably like to find a good reference on the cognitive basis for expertise which puts this “rule of thumb” in context and provides more elaborate data on different things which can be done during that extensive “time on task” (including possible skill transfer).

But I should proceed somewhat methodically. This blogpost is but a preliminary step in this process.

Since Philip E. Ross is the first person on record I heard talk about this claim, a logical first step for me is to look through this SciAm article. Doing some text searches on the printable version of his piece, I find a few interesting things including the following (on page 4 of the standard version):

Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field.

Apart from the ten thousand (10,000) hours part of the claim, this is about as clear a statement as I’m looking for. The “Simon” in question is Herbert A. Simon, who did research on chess at the Department of Psychology at Carnegie-Mellon University with colleague William G. Chase.  So I dig for diverse combinations of “Herbert Simon,” “ten(10)-year rule,” “William Chase,” “expert(ise),” and/or “chess.” I eventually find two primary texts by those two authors, both from 1973: (Chase and Simon, 1973a) and (Chase and Simon, 1973b).

The first (1973a) is an article from Cognitive Psychology 4(1): 55-81, available for download on ScienceDirect (toll access). Through text searches for obvious words like “hour*,” “year*,” “time,” or even “ten,” it seems that this article doesn’t include any specific statement about the amount of time required to become an expert. The quote which appears to be the most relevant is the following:

Behind this perceptual analysis, as with all skills (cf., Fitts & Posner, 1967), lies an extensive cognitive apparatus amassed through years of constant practice.

While it does relate to the notion that there’s a cognitive basis to practise, the statement is generic enough to be far from the “rule of thumb.”

The second Chase and Simon reference (1973b) is a chapter entitled “The Mind’s Eye in Chess” (pp. 215-281) in the proceedings of the Eighth Carnegie Symposium on Cognition as edited by William Chase and published by Academic Press under the title Visual Information Processing. I borrowed a copy of those proceedings from Concordia and have been scanning that chapter visually for some statements about the “time on task.” Though that symposium occurred in 1972 (before the first Chase and Simon reference was published), the proceedings were apparently published after the issue of Cognitive Psychology since the authors mention that article for background information.

I do find some interesting quotes, but nothing that specific:

By a rough estimate, the amount of time each player has spent playing chess, studying chess, and otherwise staring at chess positions is perhaps 10,000 to 50,000 hours for the Master; 1,000 to 5,000 hours for the Class A player; and less than 100 horus for the beginner. (Chase and Simon 1973b: 219)

or:

The organization of the Master’s elaborate repertoire of information takes thousands of hours to build up, and the same is true of any skilled task (e.g., football, music). That is why practice is the major independent variable in the acquisition of skill. (Chase and Simon 1973b: 279, emphasis in the original, last sentences in the text)

Maybe I haven’t scanned these texts properly but those quotes I find seem to imply that Simon hadn’t really devised his “10-year rule” in a clear, numeric version.

I could probably dig for more Herbert Simon wisdom. Before looking (however cursorily) at those 1973 texts, I was using Herbert Simon as a key figure in the origin of that “rule of thumb.” To back up those statements, I should probably dig deeper in the Herbert Simon archives. But that might require more work than is necessary and it might be useful to dig through other sources.

In my personal case, the other main written source for this “rule of thumb” is Dan Levitin. So, using online versions of his book, I look for comments about expertise. (I do own a copy of the book and I’m assuming the Index contains page numbers for references on expertise. But online searches are more efficient and possibly more thorough on specific keywords.) That’s how I found the statement, quoted above. I’m sure it’s the one which was sticking in my head and, as I found out tonight, it’s the one Gladwell used in his first statement on expertise in Outliers.

So, where did Levitin get this? I could possibly ask him (we’ve been in touch and he happens to be local) but looking for those references might require work on his part. A preliminary step would be to look through Levitin’s published references for Your Brain On Music.

Though Levitin is a McGill professor, Your Brain On Music doesn’t follow the typical practise in English-speaking academia of ladling copious citations onto any claim, even the most truistic statements. Nothing strange in this difference in citation practise.  After all, as Levitin explains in his Bibliographic Notes:

This book was written for the non-specialist and not for my colleagues, and so I have tried to simplify topics without oversimplifying them.

In this context, academic-style citation-fests would make the book too heavy. Levitin does, however, provide those “Bibliographic Notes” at the end of his book and on the website for the same book. In the Bibliographic Notes of that site, Levitin adds a statement I find quite interesting in my quest for “sources of claims”:

Because I wrote this book for the general reader, I want to emphasize that there are no new ideas presented in this book, no ideas that have not already been presented in scientific and scholarly journals as listed below.

So, it sounds like going through those references is a good strategy to locate at least solid references on that specific “10,000 hour” claim. Among relevant references on the cognitive basis of expertise (in Chapter 7), I notice the following texts which might include specific statements about the “time on task” to become an expert. (An advantage of the Web version of these bibliographic notes is that Levitin provides some comments on most references; I put Levitin’s comments in parentheses.)

  • Chi, Michelene T. H., Robert Glaser, and Marshall J. Farr, eds. 1988. The Nature of Expertise. Hillsdale, NJ: Lawrence Erlbaum Associates. (Psychological studies of expertise, including chess players)
  • Ericsson, K. A., and J. Smith, eds. 1991. Toward a General Theory of Expertise: Prospects and Limits. New York: Cambridge University Press. (Psychological studies of expertise, including chess players)
  • Hayes, J. R. 1985. Three problems in teaching general skills. In Thinking and Learning Skills: Research and Open Questions, edited by S. F. Chipman, J. W. Segal, and R. Glaser. Hillsdale, NJ: Erlbaum. (Source for the study of Mozart’s early works not being highly regarded, and refutation that Mozart didn’t need 10,000 hours like everyone else to become an expert.)
  • Howe, M. J. A., J. W. Davidson, and J. A. Sloboda. 1998. Innate talents: Reality or myth? Behavioral & Brain Sciences 21(3): 399–442. (One of my favorite articles, although I don’t agree with everything in it; an overview of the “talent is a myth” viewpoint.)
  • Sloboda, J. A. 1991. Musical expertise. In Toward a General Theory of Expertise, edited by K. A. Ericcson (sic) and J. Smith. New York: Cambridge University Press. (Overview of issues and findings in the musical expertise literature)

I have yet to read any of those references. I did borrow Ericsson and Smith when I first heard about Levitin’s approach to talent and expertise (probably through a radio and/or podcast appearance). But I had put the issue of expertise on the back-burner. It was always at the back of my mind, and I did blog about it back then. But it took Gladwell’s talk to wake me up. What’s funny, though, is that the “time on task” statements in Ericsson and Smith (1991) seem to lead back to Chase and Simon (1973b).

At this point, I get the impression that the “it takes a decade and/or 10,000 hours to become an expert” claim:

  • was originally proposed as a vague hypothesis a while ago (the year 1899 comes up);
  • became an object of some consideration by cognitive psychologists at the end of the 1960s;
  • became more widely accepted in the 1970s;
  • was tested by Benjamin Bloom and others in the 1980s;
  • was made more precise by Ericsson and others in the late 1980s;
  • gained general popularity in the mid-2000s;
  • is being further popularized by Malcolm Gladwell in late 2008.

Of course, I’ll have to do a fair bit of digging and reading to verify any of this, but the broad timeline seems to make some sense. One thing, though, is that nobody really seems to have had the intention of spelling it out as a “rule” or “law” in the format that is now being carried around. If I’m wrong, I’m especially surprised that a clear formulation isn’t easier to find.
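
A quick back-of-the-envelope check (my own arithmetic, not a figure drawn from any of these sources) does show why the two formulations travel together: 10,000 hours ÷ 10 years ≈ 1,000 hours a year ≈ 20 hours a week, or somewhere between 2.5 and 3 hours a day. In other words, “a decade” and “10,000 hours” describe essentially the same regimen of sustained, near-daily practice, which is presumably why the two versions of the claim circulate as near-equivalents.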

As an aside, of sorts… Some people seem to associate the claim with Gladwell, at this point. Not very surprising, given the popularity of his books, the effectiveness of his public presentations, the current context of his book tour, and the reluctance of the general public to dig any deeper than the latest source.

The problem, though, is that Gladwell himself doesn’t seem to have done anything to “set the record straight.” He does quote Levitin in Outliers, but I’ve heard him reply to questions and comments as if the research behind the “ten years or ten thousand hours” claim had some association with him. From a popular author like Gladwell, it’s not that awkward. But these situations are perfect opportunities for popularizers like Gladwell to get a broader public interested in academia. As Gladwell allegedly cares about “educational success” (as measured on a linear scale), I would have expected more transparency.

Ah, well…

So, I have some work to do on all of this. It will have to wait but this placeholder might be helpful. In fact, I’ll use it to collect some links.


Some relevant blogposts of mine on talent, expertise, effort, and Levitin.

And a whole bunch of weblinks to help me in my future searches (I have yet to really delve into any of this).

Blogging and Literary Standards

I wrote the following comment in response to a conversation between novelist Rick Moody and podcasting pioneer Chris Lydon:

Open Source » Blog Archive » In the Obama Moment: Rick Moody.

In keeping with the RERO principle I describe in that comment, the version on the Open Source site is quite raw. As is my habit, these days, I pushed the “submit” button without rereading what I had written. This version is edited, partly because I noticed some glaring mistakes and partly because I wanted to add some links. (Blog comments are often tagged for moderation if they contain too many links.) As I started editing that comment, I changed a few things, some of which have consequences for the meaning of my comment. There’s this process, in both writing and editing, which “generates new thoughts.” Yet another argument for the RERO principle.

I can already think of an addendum to this post, revolving around my personal position on writing styles (informed by my own blogwriting experience) along with my relative lack of sensitivity to Anglo writing. But I’m still blogging this comment on a standalone basis.

Read on, please…

Intello-Bullying

A topic which I’ll revisit, to be sure. But while I’m at it…

I tend to react rather strongly to a behaviour which I consider the intellectual equivalent of schoolyard bullying.

Notice that I don’t claim to be above this kind of behaviour. I’m not. In fact, one reason for my blogging this is that I have given some thought to my typical anti-bullying reaction. Not that I feel bad about it. But I do wonder whether it might not be a good idea to adopt a variety of mechanisms for responding to bullying, in conjunction with my more “gut response” knee-jerk reactions and habits.

Notice also that I’m not describing individual bullies. I’m not complaining about persons. I’m thinking about behaviour. Granted, certain behaviours are typically associated with certain people, and bullying is no exception. But instead of blaming, I’d like to assess, at least as a step in a given direction. What can I do? I’m an ethnographer.

Like schoolyard bullying, intello-bullying is based on a perceived strength used to exploit and/or harm those who are perceived as weaker. Like physical strength, the perception of “intellectual strength” on which intello-bullying is based need not have any objective validity. We’re in subjectivity territory, here. And subjects perceive in patterned but often obscure ways. Those who think of themselves as “strong,” in intellectual as well as physical senses, are sometimes the people who are insecure as to their overall strengths and weaknesses.

Unlike schoolyard bullying, intello-bullying can be, and often is, initiated by otherwise reasonably mature people. In fact, some of the most aggressive intello-bullying comes from well-respected “career intellectuals” who “should know better.” Come to think of it, this type of bullying is probably the one I personally find the most problematic. But, again, I’m not talking about bullies. I’m not describing people. I’m talking about behaviour. And the implications of behaviour.

My personal reactions may come from remnants of my impostor syndrome. Or maybe they come from a non-exclusive sense of self-worth that I found lying around in my life, as I was getting my happiness back. As much as I try, I can’t help but feel that intello-bullying is a sign of intellectual self-absorption, which eventually links to weakness. Sorry, folks, but it seems to me that if you feel the need, even temporarily, to impose your intellectual strength on those you perceive as intellectually weak, I’ll assume you may “have issues to solve.” In fact, I react the same way when I perceive my own behaviour as tantamount to bullying. It’s the behaviour I have issues with. Not the person.

And this is the basis of my knee-jerks: when I witness bullying, I turn into a bully’s bully. Yeah, pretty dangerous. And quite unexpected for a lifelong pacifist like yours truly. But, at least, I can talk and think about it. Unapologetically.

You know, this isn’t something I started doing yesterday. In fact, it may be part of a long-standing mission of mine. Half-implicit at first. Currently “assumed,” assessed, acknowledged. Accepted.

Before you blame me for the appearance of an “avenger complex” in this description, please give some more thought to bullying in general. My hunch is that many of you will admit that you value the existence of anti-bullies in schoolyards and in other contexts. You may prefer it if cases of bullying are solved through other means (sanction by school officials or by parents, creation of safe zones…). But I’d be somewhat surprised if your thoughts about bullying prevention left no room for non-violent but strength-based control by peers. If that is the case, I’d be very interested in your comments on the issue. After all, I may be the victim of some idiosyncratic notion of justice which you find inappropriate. I’m always willing to relativize.

Bear in mind that I’m not talking about retaliation. Though it may sound like it, this is no “eye for an eye” rule. Nor is it “turn the other cheek.” It’s more like crowd control. Or this form of “non-abusive” technique used by occupational therapists and others while helping patients/clients who are “disorganizing.” Basically, I’m talking about responding to (intello-)bullying calmly, but with some strength asserted. When it comes to “fighting with words,” in my case, it may sound smug and even a bit dismissive. But it’s a localized smugness which I have a hard time finding unhealthy.

In a sense, I hope I’m talking about “taking the high road.” With a bit of self-centredness which has altruistic goals. “I’ll act as if I were stronger than you, because you used your perceived strength to dominate somebody else. I don’t have anything against you, but I feel you should be put in your place. Don’t make me go to the next step, through which I can make you weep.”

At this point, I’m thinking martial arts. I don’t practise any martial art but, as an outsider, I get the impression this thinking goes well with some martial arts. Maybe judo, which allegedly relies on using your opponent’s strength. Or Tae Kwon Do, which always sounded “assertive yet peaceful” when described by practitioners.

The corollary of all this is my attitude toward those who perceive themselves as weak. I have this strong tendency to want them to feel stronger. Both out of this idiosyncratic attitude toward justice and because of my compulsive empathy. So, when someone says something like “I’m not that smart” or “I don’t have anything to contribute,” I switch to the “nurturing mode” that I may occasionally use in class or with children. I mean not to patronize, though it probably sounds paternalistic to outside observers. It’s just a reaction I have. I don’t even think its consequences are that negative in most contexts.

Academic contexts are full of cases of intello-bullying. Classrooms, conferences, outings… Put a group of academics in a room and, unless there’s a strong sense of community (Turner would say “communitas”), intello-bullying is likely to occur. At the very least, you may witness posturing, which I consider a mild form of bullying. It can be as subtle as a tricky question asked of someone who is unlikely to provide a face-saving answer, and it can be as aggressive as questioning someone’s intelligence directly or claiming to have gone much beyond what somebody else has said.

In my mind, the most extreme context for this type of bullying is the classroom, and it involves a teacher bullying a learner. Bullying between learners isn’t much better but, as a teacher, I’m even more troubled by the imposition of an authority structure based on status.

I put “cyber-bullying” as a tag because, in my mind, cyber-bullying (like trolling, flamebaiting, and other aggressive behaviours online) is a form of intello-bullying. It’s using a perceived “intellectual strength” to dominate. It’s very close to schoolyard bullying but, because it may not rely on a display of physical strength, I tend to associate it with mind-based behaviour.

As I think about these issues, I keep thinking of snarky comments. Unlike physical attacks, snarks require a certain state of mind to be effective. They need to tap into some insecurity, some self-perceived weakness in the victim. But they can be quite dangerous in the right context.

As I write this, I think about my own snarky comments. Typically, they either come after some escalation or they will be as indefinite as possible. But they can be extremely insulting if they’re internalized by some people.
Two of them come from a fairly well-known tease/snark. Namely:

If you’re so smart, why ain’t you rich?

(With several variants.)

I can provide several satisfactory answers to what is ostensibly a question. But, as much as I try, I can’t relate to the sentiment behind this rhetorical utterance, regardless of immediate context (but regardful of the broader social context). This may have to do with the fact that “getting rich” really isn’t my goal in life. Not only do I agree with the statement that “money can’t buy happiness” and care more about happiness than about more easily measurable forms of success, but my high empathy levels include a concept of egalitarianism and solidarity which makes this emphasis on wealth sound counterproductive.

Probably because of my personal reactions to that snark, I have created at least two counter-snarks. My latest one, and the one which may best represent my perspective, is the following:

If you’re so smart, why ain’t you happy?

With direct reference to the original “wealth and intelligence” snark, I wish to bring attention to what I perceive to be a more appropriate goal in life (because it’s my own goal): the pursuit of happiness. What I like about this “rhetorical question” is that it’s fairly ambiguous yet has some of the same effects as the “don’t think about pink elephants” illocutionary act. As a rhetorical question, it need not be face-threatening. Because the “why aren’t you happy?” question can stand on its own, the intelligence premise “dangles.” And, more importantly, it represents one of my responses to what I perceive as a tendency (or attitude, or “phase”) to associate happiness with a lack of intelligence. The whole “ignorance is bliss” and «imbécile heureux» (“happy fool”) perspective. Voltaire’s Candide and (failed) attempts to discredit Rousseau. Uses of “touchy-feely” and “warm and fuzzy” as insults. In short, the very attitude which most effectively trips up intellectuals in the “pursuit of happiness.”

I posted my own snarky comment on micro-blogs and other social networks. A friend replied rather negatively. Though I can understand my friend’s issues with my snark, I also care rather deeply about delinking intelligence and depression.

A previous snark of mine was much more insulting. In fact, I would never ever use it with any individual, because I abhor insulting others. Especially about their intelligence. But it does sound to me like an efficient way to unpack the original snark. Pretty obvious and rather “nasty”:

If you’re so rich, why ain’t you smart?

Again, I wouldn’t utter this to anyone. I did post it through social media. But, like the abovementioned snark on happiness, it wasn’t aimed at any specific person. Though I find it overly insulting, I do like its “counterstrike” power in witticism wars.

As announced through the “placeholder” tag and in the prefacing statement (or disclaimer), this post is but a draft. I’ll revisit this whole issue on several occasions and it’s probably better that I leave this post alone. Most of it was written while riding the bus from Ottawa to Montreal (through the WordPress editor available on the App Store). Though I’ve added a few things which weren’t in this post when I arrived in Montreal (e.g., a link to NAPPI training), I should probably leave this as a “bus ride post.”

I won’t even proofread this post.

RERO!

The Issue Is Respect

As a creative generalist, I don’t tend to emphasize expert status too much, but I do see advantages in complementarity between people who act in different spheres of social life. As we say in French, «à chacun son métier et les vaches seront bien gardées» (“to each their own profession and cows will be well-kept”).

The diversity of skills, expertise, and interests is especially useful when people of different “walks of life” can collaborate with one another. Tolerance, collegiality, dialogue. When people share ideas, the potential is much greater if their ideas are in fact different. Very simple principle, which runs through anthropology as the study of human diversity (through language, time, biology, and culture).

The problem, though, is that people from different “fields” tend not to respect one another’s work. For instance, a life scientist and a social scientist often have a hard time understanding one another because they simply don’t respect their interlocutor’s discipline. They may respect each other as human beings but they share a distrust as to the very usefulness of the other person’s field.

Case in point: entomologist Paul R. Ehrlich, who spoke at the Seminar About Long Term Thinking (SALT) a few weeks ago.

The Long Now Blog » Blog Archive » Paul Ehrlich, “The Dominant Animal: Human Evolution and the Environment”

Ehrlich seems to have a high degree of expertise in population studies and, in that SALT talk, was able to make fairly interesting (though rather commonplace) statements about human beings. For instance, he explicitly addressed the tendency, in mainstream media, to perceive genetic determinism where it has no place. Similarly, his discussion about the origins and significance of human language was thoughtful enough that it could lead other life scientists to at least take a look at language.

What’s even more interesting is that Ehrlich realizes that social sciences can be extremely useful in solving the environmental issues which concern him the most. As we learn during the question period after this talk, Ehrlich is currently talking with some economists. And, unlike business professors, economists participate very directly in the broad field of social sciences.

All of this shows quite a bit of promise, IMVHAWISHIMVVVHO. But the problem has to do with respect, it seems.

Now, it might well be that Ehrlich esteems and respects his economist colleagues. Their methods may be sufficiently compatible with his that he actually “hears what they’re saying.” But he doesn’t seem to “extend this courtesy” to my own highly esteemed colleagues in ethnographic disciplines. Ehrlich simply doesn’t grok the very studies which he states could be the most useful for him.

There’s a very specific example during the talk but my point is broader. When that specific issue was revealed, I had already been noticing an interdisciplinary problem. And part of that problem was my own.

Ehrlich’s talk was fairly entertaining, although rather unsurprising as the typical “doom and gloom” exposé to which science and tech shows have accustomed us. Of course, it was fairly superficial on even the points about which Ehrlich probably has the most expertise. But that’s expected of this kind of popularizer talk. I started reacting quite negatively, though, to several of his points when he started to make the kinds of statements which make any warm-blooded ethnographer cringe. No, not the fact that his concept of “culture” is so unsophisticated that it could prevent a student of his from getting a passing grade in an introductory course in cultural anthropology. But all sorts of comments which clearly showed that his perspective on human diversity is severely restricted. Though he challenges some ideas about genetic determinism, Ehrlich still holds to a form of reductionism which social scientists would associate with scholars who died before Ehrlich was born.

So, my level of respect for Ehrlich started to fade, with each of those half-baked pronouncements about cultural diversity and change.

Sad, I know. Especially since I respect every human being equally. But that doesn’t mean that I respect all statements equally. As is certainly the case for many other people, my respect for a person’s pronouncements may diminish greatly if those words demonstrate a lack of understanding of something in which I have a relatively high degree of expertise. In other words, a heart surgeon could potentially listen to a journalist talk about “cultural evolution” without batting an eye but would likely lose “intellectual patience” if, in the same piece, the journalist starts to talk about heart diseases. And this impatience may retroactively carry over to the discussion about “cultural evolution.” As we tend to say in the ethnography of communication, context is the thing.

And this is where I have to catch myself. The fact that Ehrlich made statements about culture which made him appear clueless doesn’t mean that what he said about the connections between population and environment is also clueless. In fact, I didn’t start perceiving his points about ecology as misguided, for the very simple reason that we have been saying the same things in ethnographic disciplines. But that’s dangerous: selectively accepting statements because they reinforce what you already know. Not what academic work is supposed to be about.

In fact, there was something endearing about Ehrlich. He may not understand the study of culture and he doesn’t seem to have any training in the study of society, but at least he was trying to understand. There was even a point in his talk when he said something which would be so obvious to any social scientist that I could have gained a new kind of personal respect for Ehrlich’s openness, had it not been for his inappropriate statements about culture.

The saddest part concerns dialogue. If a social scientist is to work with Ehrlich and she reacts the same way I did, dialogue probably won’t be established. And if Ehrlich’s attitude toward epistemological approaches different from his own is represented by the statements he made about ethnography, chances are that he will only respect those of my social science colleagues who share his own reductionist perspective.

It should be obvious that there’s an academic issue, here, in terms of inter-disciplinarity. But there’s also a personal issue. In my own life, I don’t want to restrict myself to conversations with people who think the same way I do.

And We’re Still Lecturing

Forty years ago this month, students in Paris started a movement of protests and strikes. May ’68.

Among French-speakers, the events are remembered as the onset of a cultural revolution of sorts (with both negative and positive connotations). Now that we’ve reached the 40-year anniversary of those events, some journalists and commentators have looked back at the social changes associated with the Paris student revolts of May 1968.

The May ’68 movement also had some pedagogical bases. Preparing an online course, these days, I get to think about learning. And to care about students.

As I was yet to be born at the time, May ’68 resonates more for generational reasons than pedagogical ones. But a Montreal journalist who observed some of those events 40 years ago has been talking about what she perceived as irrationality surrounding such issues as abolishing lecture-based courses («cours magistraux»).

This journalist’s reaction and a cursory comparison of the present situation with what I’ve heard of pre-1968 teaching both lead me on a reflection path about learning. Especially in terms of lecturing.

As a social constructivist, I have no passion for “straight lectures.” On occasion, I bemoan the fact that lecturing is (still) the primary teaching mode in many parts of the world. The pedagogical ideas forcefully proposed more than a generation ago are apparently not prevalent in most mainstream educational systems.

What happened?

This is an especially difficult question for an idealist like me. We wish for change. Change happens. Then, some time later, the changes are reversed. Maybe gradually. But, it seems, inexorably.

Sisyphean. Or, maybe, Buddhist.

Is it really the way things work?

Possibly. But I prefer to maintain my idealism.

So… Before I was born, some baby-booming students in Paris revolted against teaching practices. We still talk about it. Nowadays, the teaching practices against which those students revolted are apparently quite common in Paris universities. As they are in many other parts of the world. But not exactly everywhere.

Online learning appears more compatible with teaching methods inspired by social constructivism (and constructionism) than with “straight lecturing.” My idealism for alternative learning methods is fed partly by online learning.

Online lectures are possible. Yet the very structure of online communication implies some freedoms in the way lecture attendees approach these “teachings.”

At the very least, online lectures impose few requirements in terms of space. Technically, a student could be watching online lectures while lying on a beach. Beaches sound like a radically different context from the large lecture halls out of which some ’68ers decided to “take to the streets.”

Unlike classroom lectures, online lectures may allow time-shifting. In some cases, prerecorded lectures (or podcasts) may be paused, rewound, fast-forwarded, etc. Learning for the TiVo generation?

Online lectures also make painfully obvious the problems with straight lecturing. The rigid hierarchy. Students’ relative facelessness. The lack of interactivity. The content focus. All these work well for “rote learning.” But there are other ways to learn.

Not that memorization plays no part in learning or that there is no value in the “retention of [a text’s] core information” (Schaefer 2008: xxi). It’s just that… Many of us perceive learning to be more than brain-stuffing.

As should be obvious from my tone and previous posts, I count myself as one of those who perceive lectures to be too restrictive. Oh, sure, I’ve lectured to large and medium-sized classrooms. In fact, I even enjoy lecturing when I get to do it. And I fully realize that there are many possible approaches to teaching. In fact, my observation is that teaching methods are most effective when they are adapted to a specific situation, not when they follow some set of general principles. In this context, lecturing may work well when “lecturer and lecturees are in sync.” When students and teacher are “on the same page,” lectures can be intellectually stimulating, thought-provoking, challenging, useful. Conversely, alternative teaching methods can have disastrous consequences when they are applied haphazardly by people who were trained with “straight lecturing” in mind. In fact, my perception is that many issues with Quebec’s most recent education reform (the “competency based program” about which Quebec parents have been quite vocal) are associated with the indiscriminate application of constructivist/constructionist principles to all learning contexts in the province. IMHO, a more flexible application of the program coupled with considerate teacher training might have prevented several of the problems which plagued Quebec’s reform.

Unlike ’68ers, I don’t want to abolish lectures. I just hope we can adopt a diversity of methods in diverse contexts.

Back in 1968, my father was a student of Jean Piaget, in Geneva. Many of Piaget’s ideas about learning were quite compatible with what Parisian students were clamoring for.

Beyond the shameless name-dropping, my mentioning Piaget relates to something I perceive as formative. Both in my educational and in my personal lives. My mother had much more of an impact on my life. But my father supplied me with something of the Piaget spirit. And this spirit is found in different places. Including online.

The compatibility between online learning and lecture-less teaching methods seems to be a frequent topic of discussion in eLearning circles, including LearnHub, Ning, and the Moodle community. Not that online technology determines pedagogical methods. But the “fit” of online technology with different approaches to learning and teaching is the stuff constructionist teachers’ dreams are made of.

One dimension of the “fit” is in terms of flexibility. Online, learners may (and are sometimes forced to) empower themselves using personal methods. Not that learners are left to their own devices. But the Internet is big and “wild” enough to encourage survival strategies in learning contexts. Perhaps more than the lecture hall, the online world makes critical thinking vital. And critical thinking may lead to creative and innovative solutions.

Another dimension of the fit, and one which may be more trivial than some EdTech enthusiasts seem to assume, is the “level of interactivity” afforded by diverse online tools. You know, the Flash-based and other learning objects which are supposed to make learning fun and effective. I personally like the dancing mice a lot. But my impression is that these cool tools require too much effort for their possible learning outcomes. I do, however, have high hopes for the kind of interactivity common to the “social platform” sometimes known (perhaps abusively) as “Web 2.0.” Putting things online is definitely not a panacea for adequate pedagogical practice. And while “School 2.0” is an interesting concept, the buzzwordiness of some of these concepts gives me pause. But, clearly, some students are using adequate learning strategies through the interactive character of online communication.

As I’ll be teaching online for several weeks, I’ll surely have many other things to say about these learning issues in a pseudo-historical context. In the meantime, I assume that this blogpost may bring me some thoughtful comments. 😉

Cultural References and Mass Media

An effect of my not having a television is that I occasionally miss “references to popular culture.”

Banality of Heroism

Wow! I’m speechless!

Open Source » Blog Archive » The Banality of Evil, Part II


Humanistic Sociocentrism

There must be a common term for this, and it is certainly well-known. A kind of wishful thinking of the trailblazer type. A combination of utopianism, humanism, naïveté, forward-thinking, and ethnocentrism. You wish for society to change in a given way, you predict that society will eventually shift in that direction, you wait patiently for the social changes to happen, and you eventually notice that you’re in the minority.

Been thinking about “dreamers” («rêveurs», in Amélie), artists, idealists, intellectuals, marginals, elites, trend-setters. May even consider myself part of that group, somehow. A tiny minority. Running the gamut from hyper-specialist to Renaissance-type polymath. Getting jobs in different sectors but mostly in fields such as business, academia, expressive culture, or diplomacy.

Using the pattern of “ethnocentrism”: sociocentrism as social limits on thinking. Not necessarily thinking your social class to be better than others. But failing to notice that members of other social groups (in this case, the majority groups) may not think along the same lines as you do.

It might be what prevents some people from becoming successful politicians. Social life might be better that way.

French «Intellectuels» (draft)

[Old draft of a post that I never finished writing… Started it in late February.]

Been thinking about intellectuals, especially French ones. It might have been a long-standing issue for me. To this French-speaking North American academic, the theme is obvious.

More specifically, though.

Was listening to a podcast with French journalist Daniel Schneidermann who, among other things, is a blogger. During the podcast, Schneidermann made a simple yet interesting comment about validation by readers. As a journalist, he has an obligation to adopt strict standards, verify sources, etc. As a blogger, he knows that if something he says is inaccurate, blog readers will quickly point out the mistake. Again, dead simple. One of the basic things people have understood about online communication since at least 1994. But some journalists have typically been slow to understand the implications, perhaps because it implies a sea change in their practice. So Schneidermann’s comment was relatively “refreshing” in such a context.

Wanted to blog on that issue. Went to Schneidermann’s blog and read a few things. Noticed one post about a Wikipedia entry on Schneidermann. While the blogger understands the value of reader validation, he seems uneasy with the fact that his Wikipedia entry was, when he first read it, disproportionately devoted to some specific issues in his life. Which leads me to the intellectuel thing.

A little over ten years ago, Pierre Bourdieu was on Schneidermann’s television set for a show about television. Bourdieu had been thinking and writing about television’s social impact. The context in which Schneidermann invited Bourdieu was a series of political and social events centering on an important strike with which Bourdieu had been associated. By participating in the show, Bourdieu had the (secret) intention of demonstrating television’s incapacity to take distance from itself. Bourdieu had participated in another television show a few years prior and apparently saw his presence on a television set as an occasion to experiment with some important issues having to do with the media’s channeling of dialogue. Didn’t see the show but had heard about the events that followed, without following them closely. A brief summary, from very limited evidence.

After appearing on the show, Bourdieu published a short piece in Le Monde diplomatique (Schneidermann was a journalist at Le Monde). That piece was strongly worded but can be seen as a fairly typical media analysis by a social scientist or other scholar. Not Bourdieu’s most memorable work, maybe, but clear and simple, if a bit watered down at times. In fact, the analysis looked more like Barthes-type semiotics than like Bourdieu’s more, erm, “socially confrontational” work.

Schneidermann’s response to Bourdieu’s analysis looks more like a knee-jerk reaction to what was perceived as personal attacks. Kind of sad, really. In fact, the introduction to that response points out the relevance of Bourdieu’s questions.

At any rate, one aspect of Schneidermann’s response which is pretty telling in context is the repeated use of the term intellectuel at key points in that text. It’s not so much about the term itself, although it does easily become a loaded term. An intellectual could simply be…

[Google: define intellectual…]:

a person who uses his or her intellect to study, reflect, or speculate on a variety of different ideas

[ Thank you, Wikipedia! 😉 ]

But, in context, repeated use of the term, along with repeated mentions of Collège de France (a prestigious yet unusual academic institution), may give the impression that Schneidermann was reacting less to Bourdieu as a former guest than to the actions of an intellectuel. Obligatory Prévert citation:

Il ne faut pas laisser les intellectuels jouer avec les allumettes.

(Intellectuals shouldn’t be allowed to play with matches.)

Now, second stream of thought on intellectuels. Was teaching an ethnomusicology course at an anthropology department. A frequent reaction by students was that we were intellectualizing music too much. Understandable reaction. Music isn’t just an intellectual object. But, after all, isn’t the role of academia to understand life intellectually?

Those comments tended to come in reaction to some of the more difficult readings. To be fair, other reactions included students pointing out that an author’s analysis didn’t go beyond some of the more obvious statements, and yet others cherishing the intellectual dimensions of our perspective on music. Altogether, the class went extremely well, but the intellectual character of some of the content was clearly surprising to some.

The third strand or stream of thought on intellectuels came on February 27, in a television show with Jacques Attali. His was a typical attitude of confidence in being a “jack of all trades” who didn’t hesitate to take part in politics, public service, and commercial initiatives. I personally have been influenced by some of Jacques Attali’s work and, though I may disagree with several of his ideas, I have nothing but respect for his career. His is a refreshingly unapologetic form of intellectualism. Not exclusion of non-intellectuals. Just an attempt at living peacefully with everyone while thinking about as many issues as possible. He isn’t my hero, but he deserves my respect, along with people like Yoro Sidibe, Jean-Jacques Rousseau, Louis Armstrong, Boris Vian, Jan Garbarek, Georges Brassens, Steven Feld, Roland Barthes, James Brown, and Serge Gainsbourg.

A fourth thread came in a departmental conference at Université de Montréal’s Department of Anthropology. Much discussion of the involvement of anthropologists in social life. And the visit of two public intellectuals who happen to be anthropological provocateurs, here in Quebec: Serge Bouchard and Bernard Arcand…

Never finished this draft.

Should really follow up on these threads. They have been haunting me for almost a year. And they connect with multiple issues that I tend to think about.

My attitude now is that through blogs, mailing-lists, online forums, classes, lectures, conferences, informal and formal discussions, I’m able to help people think about a large set of different issues, whether or not they agree with me on any single point. Not because I’m somehow better than others: I’m clearly not. Not because my ideas are better than those cherished by others: they clearly aren’t. Possibly because I’m extremely talkative. And enthusiastic about talking to just about anyone. There’s even a slight chance that I may have understood something important about my “role in life,” my “calling.” If so, great. If not, I’m having fun anyway and I don’t mind being (called) an intellectual. 😉

Do Play with Your Tongue!

Speaking of Synecdoche, N.Y., which sounds unique indeed, there’s something to be said about having fun with language. Yeah, it’s intellectual humour. But who said intellectuals aren’t allowed to have fun?

Found a nice page explaining some tropes and schemes in a straightforward way. Reminded me of my favourite author, who uses language in such a playful manner. It might be my lack of knowledge of English-language literature, but I find this kind of playful language much less common in English than in French. Yet, perhaps because of my obsession with language, I care more about language and fun than about plot, narrative, or story.