Academics and Their Publics

(Why Are Academics So) Misunderstood?

Misunderstood by Raffi Asdourian

Academics are misunderstood.

Almost by definition.

Pretty much any academic eventually feels that s/he is misunderstood. Misunderstandings about core notions in almost any academic field underlie some of the most common pet peeves among academics.

In other words, there’s nothing as transdisciplinary as misunderstanding.

It can happen within a given department (“colleagues in my department misunderstand my work”). It can happen across disciplinary boundaries (“people in that field have always misunderstood our field”). And it can happen generally: “Nobody gets us.”

It’s not paranoia and it’s probably not self-victimization. But there almost seems to be a form of “onedownmanship” at stake, with academics from different disciplines claiming that they’re more misunderstood than others. In fact, I personally get the feeling that ethnographers are among the most misunderstood people around, but even short discussions with friends in other fields (including mathematics) have helped me get the idea that, basically, we’re all misunderstood at the same “level” but there are variations in the ways we’re misunderstood. For instance, anthropologists are often mistaken for something they aren’t, based on the general population’s partial understanding of the field.

An example from my own experience, related to my decision to call myself an “informal ethnographer.” When you tell people you’re an anthropologist, they form an image in their minds which is very likely to be inaccurate. But they do typically have an image in their minds. On the other hand, very few people have any idea about what “ethnography” means, so they’re less likely to form an opinion of what you do from prior knowledge. They may puzzle over the term and try to take a guess as to what “ethnographer” might mean but, in my experience, calling myself an “ethnographer” has been a more efficient way to be understood than calling myself an “anthropologist.”

This may all sound like nitpicking but, from the inside, it’s quite impactful. Linguists are frequently asked about the number of languages they speak. Mathematicians are taken to be number freaks. Psychologists are perceived through the filters of “pop psych.” There are many stereotypes associated with engineers. Etc.

These misunderstandings have an impact on anyone’s work. Not only can they be demoralizing and affect one’s sense of self-worth, but they can also influence funding decisions as well as the use of research results. These misunderstandings can undermine learning across disciplines. In survey courses, basic misunderstandings can make things very difficult for everyone. At a rather basic level, academics fight misunderstandings more than they fight ignorance.

The main reason I’m discussing this is that I’ve been given several occasions to think about the interface between the Ivory Tower and the rest of the world. It’s been a major theme in my blogposts about intellectuals, especially the ones in French. Two years ago, for instance, I wrote a post in French about popularizers. A bit more recently, I’ve been blogging about specific instances of misunderstandings associated with popularizers, including Malcolm Gladwell’s approach to expertise. Last year, I did a podcast episode about ethnography and the Ivory Tower. And, just within the past few weeks, I’ve been reading a few things which all seem to me to connect with this same issue: common misunderstandings about academic work. The connections are my own, and may not be so obvious to anyone else. But they’re part of my motivations to blog about this important issue.

In no particular order:

But, of course, I think about many other things. Including (again, in no particular order):

One discussion I remember, which seems to fit, included comments about Germaine Dieterlen by a friend who also did research in West Africa. Can’t remember the specifics but the gist of my friend’s comment was that “you get to respect work by the likes of Germaine Dieterlen once you start doing field research in the region.” In my academic background, appreciation of Germaine Dieterlen’s work may not be unconditional, but it doesn’t necessarily rely on extensive work in the field. In other words, while some parts of Dieterlen’s work may be controversial and it’s extremely likely that she “got a lot of things wrong,” her work seems to be taken seriously by several French-speaking africanists I’ve met. And not only do I respect her but I would likely praise someone who was able to work in the field for so long. She’s not my heroine (I don’t really have heroes) or my role-model, but it wouldn’t have occurred to me that respect for her wasn’t widespread. If it had seemed that Dieterlen’s work had been misunderstood, my reflex would possibly have been to rehabilitate her.

In fact, there’s a strong academic tradition of rehabilitating deceased scholars. The first example which comes to mind is a series of articles (PDF, in French) and book chapters by UWO linguistic anthropologist Regna Darnell about “Benjamin Lee Whorf as a key figure in linguistic anthropology.” Of course, saying that these texts by Darnell constitute a rehabilitation of Whorf reveals a type of evaluation of her work. But that evaluation comes from a third person, not from me. The likely reason for this case coming to mind is that the so-called “Sapir-Whorf Hypothesis” is among the most misunderstood notions from linguistic anthropology. Moreover, both Whorf and Sapir are frequently misunderstood, which can make matters difficult for many linguistic anthropologists talking with people outside the discipline.

The opposite process is also common: the “slaughtering” of “sacred cows.” (First heard about sacred cows through an article by ethnomusicologist Marcia Herndon.) In some significant ways, any scholar (alive or not) can be the object of not only critiques and criticisms but a kind of off-handed dismissal. Though this often happens within an academic context, the effects are especially lasting outside of academia. In other words, any scholar’s name is likely to be “sullied,” at one point or another. Typically, there seems to be a correlation between the popularity of a scholar and the likelihood of her/his reputation being significantly tarnished at some point in time. While there may still be people who treat Darwin, Freud, Nietzsche, Socrates, Einstein, or Rousseau as near divinities, there are people who will avoid any discussion about anything they’ve done or said. One way to put it is that they’re all misunderstood. Another way to put it is that their main insights have seeped through “common knowledge” but that their individual reputations have decreased.

Perhaps the most difficult case to discuss is that of Marx (Karl, not Harpo). Textbooks in introductory sociology typically have him as a key figure in the discipline and it seems clear that his insight on social issues was fundamental in social sciences. But, outside of some key academic contexts, his name is associated with a large series of social events about which people tend to have rather negative reactions. Even more so than for Paul de Man or Martin Heidegger, Marx’s work is entangled in public opinion about his ideas. Haven’t checked for examples but I’m quite sure that Marx’s work is banned in a number of academic contexts. However, even some of Marx’s most ardent opponents are likely to agree with several aspects of Marx’s work and it’s sometimes funny how Marxian some anti-Marxists may be.

But I digress…

Typically, the “slaughtering of sacred cows” relates to disciplinary boundaries instead of social ones. At least, there’s a significant difference between your discipline’s own “sacred cows” and what you perceive another discipline’s “sacred cows” to be. Within a discipline, the process of dismissing a prior scholar’s work is almost œdipal (speaking of Freud). But dismissal of another discipline’s key figures is tantamount to a rejection of that other discipline. It’s one thing for a physicist to show that Newton was an alchemist. It’d be another thing entirely for a social scientist to deconstruct James Watson’s comments about race or for a theologian to argue with Darwin. Though discussions may have to do with individuals, the effects of the latter can widen gaps between scholarly disciplines.

And speaking of disciplinarity, there’s a whole set of issues having to do with discussions “outside of someone’s area of expertise.” On one side, comments made by academics about issues outside of their individual areas of expertise can be very tricky and can occasionally contribute to core misunderstandings. The fear of “talking through one’s hat” is quite significant, in no small part because a scholar’s prestige and esteem may greatly decrease as a result of some blatantly inaccurate statements (although some award-winning scholars seem not to be overly impacted by such issues).

On the other side, scholars who have to impart expert knowledge to people outside of their discipline often have to “water down” or “boil down” their ideas and, in effect, oversimplify these issues and concepts. Partly because of status (prestige and esteem), lowering standards is also very tricky. In some ways, this second situation may be more interesting. And it seems unavoidable.

How can you prevent misunderstandings when people may not have the necessary background to understand what you’re saying?

This question may reveal a rather specific attitude: “it’s their fault if they don’t understand.” Such an attitude may even be widespread. Seems to me, it’s not rare to hear someone gloating about other people “getting it wrong,” with the suggestion that “we got it right.” As part of negotiations surrounding expert status, such an attitude could even be a pretty rational approach. If you’re trying to position yourself as an expert and don’t suffer from an “impostor syndrome,” you can easily get the impression that non-specialists have it all wrong and that only experts like you can get to the truth. Yes, I’m being somewhat sarcastic and caricatural, here. Academics aren’t frequently that dismissive of other people’s difficulties understanding what seem like simple concepts. But, in the gap between academics and the general population a special type of intellectual snobbery can sometimes be found.

Obviously, I have a lot more to say about misunderstood academics. For instance, I wanted to address specific issues related to each of the links above. I also had pet peeves about widespread use of concepts and issues like “communities” and “Eskimo words for snow” about which I sometimes need to vent. And I originally wanted this post to be about “cultural awareness,” which ends up being a core aspect of my work. I even had what I might consider a “neat” bit about public opinion. Not to mention my whole discussion of academic obfuscation (remind me about “we-ness and distinction”).

But this is probably long enough and the timing is right for me to do something else.

I’ll end with an unverified anecdote that I like. This anecdote speaks to snobbery toward academics.

[It’s one of those anecdotes which was mentioned in a course I took a long time ago. Even if it’s completely fallacious, it’s still inspiring, like a tale, cautionary or otherwise.]

As the story goes (at least, what I remember of it), some ethnographers had been doing fieldwork in an Australian cultural context and were focusing their research on a complex kinship system found in this context. Through collaboration with “key informants,” the ethnographers eventually succeeded in understanding some key aspects of this kinship system.

As should be expected, these kinship-focused ethnographers wrote accounts of this kinship system at the end of their field research and became known as specialists of this system.

After a while, the fieldworkers went back to the field and met with the same people who had described this kinship system during the initial field trip. Through these discussions with their “key informants,” the ethnographers ended up hearing about a radically different kinship system from the one about which they had learnt, written, and taught.

The local informants then told the ethnographers: “We would have told you earlier about this but we didn’t think you were able to understand it.”

Transparency and Secrecy

Musings on transparency and secrecy, related to both my professional reorientation and my personal life.

[Started working on this post on December 1st, based on something which happened a few days prior. Since then, several things happened which also connected to this post. Thought the timing was right to revisit the entry and finally publish it. Especially since a friend just teased me for not blogging in a while.]

I’m such a strong advocate of transparency that I have a real problem with secrecy.

I know, transparency is not exactly the mirror opposite of secrecy. But I think my transparency-radical perspective causes some problems in terms of secrecy-management.

“Haven’t you been working with a secret society in Mali?,” you ask. Well, yes, I have. And secrecy hasn’t been a problem in that context because it’s codified. Instead of a notion of “absolute secrecy,” the Malian donsow I’ve been working with have a subtle, nuanced, complex, layered, contextually realistic, elaborate, and fascinating perspective on how knowledge is processed, “transmitted,” managed. In fact, my dissertation research had a lot to do with this form of knowledge management. The term “knowledge people” (“karamoko,” from kalan+mogo=learning+people) truly applies to members of hunter’s associations in Mali as well as to other local experts. These people draw a clear distinction between knowledge and information. And I can readily relate to their approach. Maybe I’ve “gone native,” but it’s more likely that I was already in that mode before I ever went to Mali (almost 11 years ago).

Of course, a high value for transparency is a hallmark of academia. The notion that “information wants to be free” makes more sense from an academic perspective than from one focused on a currency-based economy. Even when people are clear that “free” stands for “freedom”/«libre» and not for “gratis”/«gratuit» (i.e. “free as in speech, not free as in beer”), there persists a notion that “free comes at a cost” among those people who are so focused on growth and profit. IMHO, most of the issues with the switch to “immaterial economies” (“information economy,” “attention economy,” “digital economy”) have to do with this clash between the value of knowledge and a strict sense of “property value.”

But I digress.

Or, do I…?

The phrase “radical transparency” has been used in business circles related to “information and communication technology,” a context in which the “information wants to be free” stance is almost the basis of a movement.

I’m probably more naïve than most people I have met in Mali. While there, a friend told me that he thought that people from the United States were naïve. While he wasn’t referring to me, I can easily acknowledge that the naïveté he described is probably characteristic of my own attitude. I’m North American enough to accept this.

My dedication to transparency was tested by an apparently banal set of circumstances, a few days before I drafted this post. I was given, in public, information which could potentially be harmful if revealed to a certain person. The harm which could be done is relatively small. The person who gave me that information wasn’t overstating it. The effects of my sharing this information wouldn’t be tragic. But I was torn between my radical transparency stance and my desire to do as little harm as humanly possible. So I refrained from sharing this information and decided to write this post instead.

And this post has been sitting in my “draft box” for a while. I wrote a good number of entries in the meantime but I still had this one at the back of my mind. On the backburner. This is where social media becomes something more of a way of life than an activity. Even when I don’t do anything on this blog, I think about it quite a bit.

As mentioned in the preamble, a number of things have happened since I drafted this post which also relate to transparency and secrecy. Including both professional and personal occurrences. Some of these comfort me in my radical transparency position while others help me manage secrecy in a thoughtful way.

On the professional front, first. I’ve recently signed a freelance ethnography contract with Toronto-based consultancy firm Idea Couture. The contract included a non-disclosure agreement (NDA). Even before signing the contract/NDA, I was asking fellow ethnographer and blogger Morgan Gerard about disclosure. Thanks to him, I now know that I can already disclose several things about this contract and that, once the results are public, I’ll be able to talk about this freely. Which all comforts me on a very deep level. This is precisely the kind of information and knowledge management I can relate to. The level of secrecy is easily understandable (inopportune disclosure could be detrimental to the client). My commitment to transparency is unwavering. If all contracts are like this, I’ll be quite happy to be a freelance ethnographer. It may not be my only job (I already know that I’ll be teaching online, again). But it already fits in my personal approach to information, knowledge, insight.

I’ll surely blog about private-sector ethnography. At this point, I’ve mostly been preparing through reading material in the field and discussing things with friends or colleagues. I was probably even more careful than I needed to be, but I was still able to exchange ideas about market research ethnography with people in diverse fields. I sincerely think that these exchanges not only add value to my current work for Idea Couture but position me quite well for the future. I really am preparing for freelance ethnography. I’m already thinking like a freelance ethnographer.

There’s a surprising degree of “cohesiveness” in my life, these days. Or, at least, I perceive my life as “making sense.”

And different things have made me say that 2009 would be my year. I get additional evidence of this on a regular basis.

Which brings me to personal issues, still about transparency and secrecy.

Something has happened in my personal life, recently, that I’m currently unable to share. It’s a happy circumstance and I’ll be sharing it later, but it’s semi-secret for now.

Thing is, though, transparency was involved in that my dedication to radical transparency has already been paying off in these personal respects. More specifically, my being transparent has been valued rather highly and there’s something about this type of validation which touches me deeply.

As can probably be noticed, I’m also becoming more public about some emotional dimensions of my life. As an artist and a humanist, I’ve always been a sensitive person, in tune with his emotions. Especially positive ones. I now feel accepted as a sensitive person, even if several people in my life tend to push sensitivity to the side. In other words, I’ve grown a lot in the past several months and I now want to share my growth with others. Despite reluctance toward the “touchy-feely,” especially in geek and other male-centric circles, I’ve decided to “let it all loose.” I fully respect those who dislike this. But I need to be myself.

Answers on Expertise

Follow-up to my post on a quest for the origin of the “rule of thumb” about expertise.

As a follow-up on my previous post…

Quest for Expertise « Disparate.

(I was looking for the origin of the “10 years or 10,000 hours to be an expert” claim.)

Interestingly enough, that post is getting a bit of blog attention.

I’m so grateful about this attention that it made me tweet the following:

Trackbacks, pings, and blog comments are blogger gifts.

I also posted a question about this on Mahalo Answers (after the first comment, by Alejna, appeared on my blog, but before other comments and trackbacks appeared). I selected glaspell’s answer as the best answer (glaspell also commented on my blog entry).

At this point, my impression is that what is taken as a “rule” on expertise is a simplification of results from a larger body of research, with an emphasis on work by K. Anders Ericsson but with little attention paid to primary sources.

The whole process is quite satisfying, to me. Not just because we might all gain a better understanding of how this “claim” became so generalized, but because the process as a whole shows both powers and limitations of the Internet. I tend to claim (publicly) that the ‘Net favours critical thinking (because we eventually take all claims with grains of salt). But it also seems that, even with well-known research done in English, it can be rather difficult to follow all the connections across the literature. If you think about more obscure work in non-dominant languages, it’s easy to realize that Google’s dream of organizing the world’s information isn’t yet true.

By the by, I do realize that my quest was based on a somewhat arbitrary assumption: that this “rule of thumb” is now understood as a solid rule. But what I’ve noticed in popular media since 2006 leads me to believe that the claim is indeed taken as a hard and fast rule.

I’m not blaming anyone, in this case. I don’t think that anyone’s involvement in the “chain of transmission” was voluntarily misleading and I don’t even think that it was that essential. As with many other ideas, what “sticks” is what seems to make sense in context. Actually, this strong tendency for “convenient” ideas to be more widely believed relates to a set of tricky issues with which academics have to deal, on a daily basis. Sagan’s well-known “baloney detector” is useful, here. But it’s also not in wide use.

One thing which should also be clear: I’m not saying that Ericsson and other researchers have done anything shoddy or inappropriate. Their work is being used outside of its original context, which is often an issue.

Mass media coverage of academic research was the basis of a series of entries on the original Language Log, including one of my favourite blogposts, Mark Liberman’s Language Log: Raising standards — by lowering them. The main point, I think, is that secluded academics in the Ivory Tower do little to alleviate this problem.

But I digress.
And I should probably reply to the other comments on the entry itself.

Quest for Expertise

Who came up with the “rule of thumb” which says that it takes “ten (10) years and/or ten thousand (10,000) hours to become an expert?”

Will at Work Learning: People remember 10%, 20%…Oh Really?.

This post was mentioned on the mailing-list for the Society for Teaching and Learning in Higher Education (STLHE-L).

In that post, Will Thalheimer traces back a well-known claim about learning to shoddy citations. While it doesn’t invalidate the base claim (that people tend to retain more information through certain cognitive processes), Thalheimer does a good job of showing how a graph which has frequently been seen in educational fields was based on faulty interpretation of work by prominent scholars, mixed with some results from other sources.

Quite interesting. IMHO, demystification and critical thinking are among the most important things we can do in academia. In fact, through training in folkloristics, I have become quite accustomed to this specific type of debunking.

I have in mind a somewhat similar claim that I’m currently trying to trace. Preliminary searches seem to imply that citations of original statements have a similar hyperbolic effect on the status of this claim.

The claim is a type of “rule of thumb” in cognitive science. A generic version could be stated in the following way:

It takes ten years or 10,000 hours to become an expert in any field.

The claim is a rather famous one from cognitive science. I’ve heard it uttered by colleagues with a background in cognitive science. In 2006, I first heard about such a claim from Philip E. Ross, on an episode of Scientific American‘s Science Talk podcast discussing his article on expertise. I later read a similar claim in Daniel Levitin’s 2006 This Is Your Brain On Music. The clearest statement I could find in Levitin’s book is the following (p. 193):

The emerging picture from such studies is that ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert – in anything.

More recently, during a keynote speech he was giving as part of his latest book tour, I heard a similar claim from presenter extraordinaire Malcolm Gladwell. AFAICT, this claim sits at the centre of Gladwell’s recent book: Outliers: The Story of Success. In fact, it seems that Gladwell uses the same quote from Levitin, on page 40 of Outliers (I just found that out).

I would like to pinpoint the origin for the claim. Contrary to Thalheimer’s debunking, I don’t expect that my search will show that the claim is inaccurate. But I do suspect that the “rule of thumb” versions may be a bit misleading. I already notice that most people who set up such claims are doing so without direct reference to the primary literature. This latter comment isn’t damning: in informal contexts, constant referral to primary sources can be extremely cumbersome. But it could still be useful to clear up the issue. Who made this original claim?

I’ve tried a few things already but it’s not working so well. I’m collecting a lot of references, to both online and printed material. Apart from Levitin’s book and a few online comments, I haven’t yet read the material. Eventually, I’d probably like to find a good reference on the cognitive basis for expertise which puts this “rule of thumb” in context and provides more elaborate data on different things which can be done during that extensive “time on task” (including possible skill transfer).

But I should proceed somewhat methodically. This blogpost is but a preliminary step in this process.

Since Philip E. Ross is the first person on record I heard talk about this claim, a logical first step for me is to look through this SciAm article. Doing some text searches on the printable version of his piece, I find a few interesting things including the following (on page 4 of the standard version):

Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field.

Apart from the ten thousand (10,000) hours part of the claim, this is about as clear a statement as I’m looking for. The “Simon” in question is Herbert A. Simon, who did research on chess at the Department of Psychology at Carnegie-Mellon University with colleague William G. Chase. So I dig for diverse combinations of “Herbert Simon,” “ten(10)-year rule,” “William Chase,” “expert(ise),” and/or “chess.” I eventually find two primary texts by those two authors, both from 1973: (Chase and Simon, 1973a) and (Chase and Simon, 1973b).
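Digging for “diverse combinations” of search terms can itself be mechanized. A minimal sketch, using only the standard library (the term list is just the one mentioned above; nothing here depends on any particular search engine):

```python
from itertools import combinations

# The search terms mentioned above.
terms = [
    '"Herbert Simon"',
    '"ten-year rule"',
    '"William Chase"',
    'expertise',
    'chess',
]

# Build every two- and three-term query string.
queries = [
    " ".join(combo)
    for size in (2, 3)
    for combo in combinations(terms, size)
]

print(len(queries))  # 10 two-term + 10 three-term = 20 queries
print(queries[0])    # '"Herbert Simon" "ten-year rule"'
```

Each resulting string could then be pasted into (or sent to) whatever search interface one happens to use; the point is only that the combinatorics are small enough to try exhaustively.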

The first (1973a) is an article from Cognitive Psychology 4(1): 55-81, available for download on ScienceDirect (toll access). Through text searches for obvious words like “hour*,” “year*,” “time,” or even “ten,” it seems that this article doesn’t include any specific statement about the amount of time required to become an expert. The quote which appears to be the most relevant is the following:

Behind this perceptual analysis, as with all skills (cf., Fitts & Posner, 1967), lies an extensive cognitive apparatus amassed through years of constant practice.

While it does relate to the notion that there’s a cognitive basis to practice, the statement is generic enough to be far from the “rule of thumb.”
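Incidentally, the wildcard text searches described above (“hour*,” “year*,” and so on) map directly onto word-boundary regular expressions. A minimal sketch, with a short excerpt standing in for the article’s full text:

```python
import re

# A short excerpt standing in for the article's full text.
text = ("lies an extensive cognitive apparatus amassed "
        "through years of constant practice")

# "hour*", "year*", "time", "ten" as word-boundary patterns:
# \b anchors the match at a word start, \w* plays the role of "*".
patterns = [r"\bhour\w*", r"\byear\w*", r"\btime\b", r"\bten\b"]

hits = {p: re.findall(p, text, flags=re.IGNORECASE) for p in patterns}
print(hits[r"\byear\w*"])  # ['years']
```

Note that the `\b` anchors matter: a bare substring search for “ten” would produce a false hit inside “extensive,” while the word-boundary pattern correctly finds nothing.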

The second Chase and Simon reference (1973b) is a chapter entitled “The Mind’s Eye in Chess” (pp. 215-281) in the proceedings of the Eighth Carnegie Symposium on Cognition as edited by William Chase and published by Academic Press under the title Visual Information Processing. I borrowed a copy of those proceedings from Concordia and have been scanning that chapter visually for some statements about the “time on task.” Though that symposium occurred in 1972 (before the first Chase and Simon reference was published), the proceedings were apparently published after the issue of Cognitive Psychology since the authors mention that article for background information.

I do find some interesting quotes, but nothing that specific:

By a rough estimate, the amount of time each player has spent playing chess, studying chess, and otherwise staring at chess positions is perhaps 10,000 to 50,000 hours for the Master; 1,000 to 5,000 hours for the Class A player; and less than 100 hours for the beginner. (Chase and Simon 1973b: 219)

or:

The organization of the Master’s elaborate repertoire of information takes thousands of hours to build up, and the same is true of any skilled task (e.g., football, music). That is why practice is the major independent variable in the acquisition of skill. (Chase and Simon 1973b: 279, emphasis in the original, last sentences in the text)

Maybe I haven’t scanned these texts properly but those quotes I find seem to imply that Simon hadn’t really devised his “10-year rule” in a clear, numeric version.

I could probably dig for more Herbert Simon wisdom. Before looking (however cursorily) at those 1973 texts, I was using Herbert Simon as a key figure in the origin of that “rule of thumb.” To back up those statements, I should probably dig deeper in the Herbert Simon archives. But that might require more work than is necessary and it might be useful to dig through other sources.

In my personal case, the other main written source for this “rule of thumb” is Dan Levitin. So, using online versions of his book, I look for comments about expertise. (I do own a copy of the book and I’m assuming the Index contains page numbers for references on expertise. But online searches are more efficient and possibly more thorough on specific keywords.) That’s how I found the statement, quoted above. I’m sure it’s the one which was sticking in my head and, as I found out tonight, it’s the one Gladwell used in his first statement on expertise in Outliers.

So, where did Levitin get this? I could possibly ask him (we’ve been in touch and he happens to be local) but looking for those references might require work on his part. A preliminary step would be to look through Levitin’s published references for Your Brain On Music.

Though Levitin is a McGill professor, Your Brain On Music doesn’t follow the typical practice in English-speaking academia of ladling copious citations onto any claim, even the most truistic statements. Nothing strange in this difference in citation practice. After all, as Levitin explains in his Bibliographic Notes:

This book was written for the non-specialist and not for my colleagues, and so I have tried to simplify topics without oversimplifying them.

In this context, academic-style citation-fests would make the book too heavy. Levitin does, however, provide those “Bibliographic Notes” at the end of his book and on the website for the same book. In the Bibliographic Notes of that site, Levitin adds a statement I find quite interesting in my quest for “sources of claims”:

Because I wrote this book for the general reader, I want to emphasize that there are no new ideas presented in this book, no ideas that have not already been presented in scientific and scholarly journals as listed below.

So, it sounds like going through those references is a good strategy to locate at least solid references on that specific “10,000 hour” claim. Among relevant references on the cognitive basis of expertise (in Chapter 7), I notice the following texts which might include specific statements about the “time on task” to become an expert. (An advantage of the Web version of these bibliographic notes is that Levitin provides some comments on most references; I put Levitin’s comments in parentheses.)

  • Chi, Michelene T.H., Robert Glaser, and Marshall J. Farr, eds. 1988. The Nature of Expertise. Hillsdale, New Jersey: Lawrence Erlbaum Associates. (Psychological studies of expertise, including chess players)
  • Ericsson, K. A., and J. Smith, eds. 1991. Toward a General Theory of Expertise: prospects and limits. New York: Cambridge University Press. (Psychological studies of expertise, including chess players)
  • Hayes, J. R. 1985. Three problems in teaching general skills. In Thinking and Learning Skills: Research and Open Questions, edited by S. F. Chipman, J. W. Segal and R. Glaser. Hillsdale, NJ: Erlbaum. (Source for the study of Mozart’s early works not being highly regarded, and refutation that Mozart didn’t need 10,000 hours like everyone else to become an expert.)
  • Howe, M. J. A., J. W. Davidson, and J. A. Sloboda. 1998. Innate talents: Reality or myth? Behavioral & Brain Sciences 21 (3):399-442. (One of my favorite articles, although I don’t agree with everything in it; an overview of the “talent is a myth” viewpoint.)
  • Sloboda, J. A. 1991. Musical expertise. In Toward a general theory of expertise, edited by K. A. Ericcson (sic) and J. Smith. New York: Cambridge University Press. (Overview of issues and findings in musical expertise literature)

I have yet to read any of those references. I did borrow Ericsson and Smith when I first heard about Levitin’s approach to talent and expertise (probably through a radio and/or podcast appearance). But I had put the issue of expertise on the back-burner. It was always at the back of my mind and I did blog about it, back then. But it took Gladwell’s talk to wake me up. What’s funny, though, is that the “time on task” statements in (Ericsson and Smith, 1991) seem to lead back to (Chase and Simon, 1973b).

At this point, I get the impression that the “it takes a decade and/or 10,000 hours to become an expert” claim:

  • was originally proposed as a vague hypothesis a while ago (the year 1899 comes up);
  • became an object of some consideration by cognitive psychologists at the end of the 1960s;
  • became more widely accepted in the 1970s;
  • was tested by Benjamin Bloom and others in the 1980s;
  • was made more precise by Ericsson and others in the late 1980s;
  • gained general popularity in the mid-2000s;
  • is being further popularized by Malcolm Gladwell in late 2008.

Of course, I’ll have to do a fair bit of digging and reading to verify any of this, but it sounds like the broad timeline makes some sense. One thing, though: nobody really seems to have had the intention of spelling it out as a “rule” or “law” in the format now being carried around. If I’m wrong, I’m especially surprised that a clear formulation isn’t easier to find.

As an aside, of sorts… Some people seem to associate the claim with Gladwell, at this point. Not very surprising, given the popularity of his books, the effectiveness of his public presentations, the current context of his book tour, and the reluctance of the general public to dig any deeper than the latest source.

The problem, though, is that it doesn’t seem that Gladwell himself has done anything to “set the record straight.” He does quote Levitin in Outliers, but I heard him reply to questions and comments as if the research behind the “ten years or ten thousand hours” claim had some association with him. Coming from a popular author like Gladwell, this isn’t that surprising. But these situations are perfect opportunities for popularizers like Gladwell to get a broader public interested in academia. As Gladwell allegedly cares about “educational success” (as measured on a linear scale), I would have expected more transparency.

Ah, well…

So, I have some work to do on all of this. It will have to wait but this placeholder might be helpful. In fact, I’ll use it to collect some links.

 

Some relevant blogposts of mine on talent, expertise, effort, and Levitin.

And a whole bunch of weblinks to help me in my future searches (I have yet to really delve in any of this).

Gender and Culture

Cursory observations on differences in gender stereotypes between the United States and Quebec.

A friend sent me a link to the following video:

JC Penney: Beware of the Doghouse | Creativity Online.

In that video, a man is “sent to the doghouse” (a kind of prison for insensitive men) because he offered a vacuum cleaner to his wife. It’s part of a marketing campaign through which men are expected to buy diamonds for their wives and girlfriends.

The campaign is quite elaborate and the main website for the campaign makes interesting uses of social media.

For instance, that site makes use of Facebook Connect as a way to tap viewers’ online social network. FC is a relatively new feature (the general release was last week) and few sites have been putting it to the test. In this campaign’s case, a woman can use her Facebook account to connect to her husband or boyfriend and either send him a warning about his insensitivity to her needs (of diamonds) or “put him in the doghouse.” From a social media perspective, it can accurately be described as “neat.”

The site also uses Share This to facilitate the video’s diffusion through various social media services, from WordPress.com to Diigo. This tends to be an effective strategy to encourage “viral marketing.” (And, yes, I fully realize that I actively contribute to this campaign’s “viral spread.”)

The campaign could be a case study in social marketing.

But, this time, I’m mostly thinking about gender.

Simply put, I think that this campaign would fare rather badly in Quebec because of its use of culturally inappropriate gender stereotypes.

As I write this post, I receive feedback from Swedish ethnomusicologist Maria Ljungdahl who shares some insight about gender stereotypes. As Maria says, the stereotypes in this ad are “global.” But my sense is that these “global stereotypes” are not that compatible with local culture, at least among Québécois (French-speaking Quebeckers).

See, as a Québécois born and raised as a (male) feminist, I tend to be quite gender-conscious. I might even say that my gender awareness is somewhat above the Québécois average. Gender relationships, moreover, are frequently used in definitions of Québécois identity.

In Québécois media, advertising campaigns portraying men as naïve and subservient have frequently been discussed. Ten or so years ago, these portrayals were a hot topic (searches for Brault & Martineau, Tim Hortons, and Un gars, une fille should eventually lead to appropriate evidence). Current advertising campaigns seem to me more subtle in terms of male figures, but careful analysis would be warranted as discussions of those portrayals are more infrequent than they have been in the past.

That video and campaign are, to me, very US-specific. Because I spent a significant amount of time in Indiana, Massachusetts, and Texas, my initial reaction while watching the video had more to do with being glad that it wasn’t the typical macrobrewery-style sexist ad. This reaction also has to do with the context for my watching that video, as I was unclear as to the gender perspective of the friend who sent me the link (a male homebrewer from the Midwest currently living in Texas).

By the end of the video, however, I reverted to my Québécois sensibility. I also reacted to the obvious commercialism, partly because one of my students has been working on engagement rings in our material culture course.

But my main issue was with the presumed insensitivity of men.

Granted, part of this is personal. I define myself as a “sweet and tender man” and I’m quite happy about my degree of sensitivity, which may in fact be slightly higher than average, even among Québécois. But my hunch is that this presumption of male insensitivity may not have very positive effects on the perception of such a campaign. Québécois watching this video may not groan but they may not find it that funny either.

There’s a generational component involved and, partly because of a discussion of writing styles in a generational perspective, I have been thinking about “generations” as a useful model for explaining cultural diversity to non-ethnographers.

See, such perceived generational groups as “Baby Boomers” and “Generation X” need not be defined as monolithic, monadic, bounded entities and they have none of the problems associated with notions of “ethnicity” in the general public. “Generations” aren’t “faraway tribes” nor do they imply complete isolation. Some people may tend to use “generational” labels in such terms that they appear clearly defined (“Baby Boomers are those individuals born between such and such years”). And there is some confusion between this use of “historical generations” and what the concept of “generation” means in, say, the study of kinship systems. But it’s still relatively easy to get people to think about generations in cultural terms: they’re not “different cultures” but they still seem to be “culturally different.”

Going back to gender… The JC Penney marketing campaign visibly lumps together people of different ages. The notion seems to be that doghouse-worthy male insensitivity isn’t age-specific or related to inexperience. The one man who was able to leave the doghouse based on his purchase of diamonds is relatively “age-neutral” as he doesn’t really seem to represent a given age. Because this attempt at crossing age divisions seems so obvious, I would assume that it came in the context of perceived differences in gender relationships. Using the logic of those who perceive the second half of the 20th century as a period of social emancipation, one might presume that younger men are less insensitive than older men (who were “brought up” in a cultural context which was “still sexist”). If there are wide differences in the degree of sensitivity of men of different ages, a campaign aiming at a broad age range needs to diminish the importance of these differences. “The joke needs to be funny to men of all ages.”

The Quebec context is, I think, different. While we do perceive the second half of the 20th century (and, especially, the 1970s) as a period of social emancipation (known as the “Quiet Revolution” or «Révolution Tranquille»), the degree of sensitivity to gender issues appears to be relatively level, across the population. At a certain point in time, one might have argued that older men were still insensitive (at the same time as divorcées in their forties might have been regarded as very assertive) but it seems difficult to make such a distinction in the current context.

All this to say that the JC Penney commercial is culturally inappropriate for Québécois society? Not quite. Though the example I used was this JC Penney campaign, I’m thinking about broader contexts for Québécois identity (for a variety of personal reasons, including the fact that I have been back in Québec for several months, now).

My claim is…

Ethnographic field research would go a long way to unearth culturally appropriate categories which might eventually help marketers cater to Québécois.

Of course, the agency which produced that JC Penney ad (Saatchi & Saatchi) was targeting the US market (JC Penney doesn’t have locations in Quebec) and I received the link through a friend in the US. But it was an interesting opportunity for me to think and write about a few issues related to the cultural specificity of gender stereotypes.

Ethnographic Disciplines

Just because this might be useful in the future…
I perceive a number of academic disciplines to be “ethnographic” in the sense that they use the conceptual and epistemic apparatus of “ethnography.” (“Ethnography” taken here as an epistemological position in human research, not as “the description of a people” in either literary or methodological uses.)

I don’t mean by this that practitioners are all expected to undertake ethnographic field research or that their methods are exclusively ethnographic. I specifically wish to point out that ethnography is not an “exclusive prerogative” of anthropology. And I perceive important connections between these disciplines.

In no particular order:

  • Ethnohistory
  • Ethnolinguistics (partly associated with Linguistic Anthropology)
  • Folkloristics
  • Ethnomusicology
  • Ethnocinematography (partly associated with Visual Anthropology)
  • Ethnology (Cultural Anthropology)

The following disciplines (the “micros”), while not ethnographic per se, often have ethnographic components at the present time.

  • Microhistory
  • Microsociology
  • Microeconomics

Health research and market research also make frequent use of ethnographic methods, these days (especially through “qualitative data analysis” software). But I’m not clear on how dedicated these researchers are to the epistemological bases for ethnography.

It may all sound very idiosyncratic. But I still think it works, as a way to provide working definitions for disciplines and approaches.

Thoughts, comments, suggestions, questions?

Blogging Academe

LibriVox founder and Montreal geek Hugh McGuire recently posted a blog entry in which he gave a series of nine arguments for academics to blog:

Why Academics Should Blog

Hugh’s post reminded me of one of my favourite blogposts by an academic, a pointed defence of blogging by Mark Liberman, of Language Log fame.
Raising standards –by lowering them

While I do agree with Hugh’s points, I would like to reframe and rephrase them.

Clearly, I’m enthusiastic about blogging. Not that I think every academic should, needs to, ought to blog. But I do see clear benefits of blogging in academic contexts.

Academics do a number of different things, from search committees to academic advising. Here, I focus on three main dimensions of an academic’s life: research, teaching, and community outreach. Other items in a professor’s job description may benefit from blogging but these three main components tend to be rather prominent in terms of PTR (promotion, tenure, reappointment). What’s more, blogging can help integrate these dimensions of academic life in a single set of activities.

Impact

In relation to scholarship, the term “impact” often refers to the measurable effects of a scholar’s publications throughout a specific field. “Citation impact,” for instance, refers to the number of times a given journal article has been cited by other scholars. This kind of measurement directly inspired Google’s PageRank algorithm, which is used to assess the relevance of search results. The very concept of “citation impact” relates very directly to the “publish or perish” system which, I would argue, does more to increase stress levels among full-time academics than to enhance scholarship. As such, it may need some rethinking. What does “citation impact” really measure? Is the most frequently cited text on a given subject necessarily the most relevant? Isn’t there a clustering effect, with some small groups of well-known scholars citing one another without paying attention to whatever else may happen in their field, especially in other languages?
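
Since PageRank comes up here, a minimal sketch may help make the analogy with citation impact concrete: a page (or paper) matters if it is linked to (cited) by pages that themselves matter. This toy implementation is my own illustration, not anything from the sources discussed here; the example graph and parameter values are made up.

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page (or paper) to the pages it links to
    (or cites). Returns a relevance score per page, summing to 1."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    links = {p: list(links.get(p, [])) for p in pages}  # pages with no outlinks
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            targets = outs or list(pages)  # "dangling" pages spread rank evenly
            share = damping * rank[p] / len(targets)
            for q in targets:
                new[q] += share
        rank = new
    return rank

# A tiny citation graph: A is cited by both B and C, so it ends up on top.
ranks = pagerank({"A": ["B"], "B": ["A", "C"], "C": ["A"]})
```

Note that being cited by a highly-ranked page counts for more than being cited by an obscure one, which is exactly the circularity that raw citation counts ignore.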

An advantage of blogging is that this type of impact is easy to monitor. Most blogging platforms have specific features for “statistics,” which let bloggers see which of their posts have been visited (“hit”) most frequently. More sophisticated analysis is available on some blogging platforms, especially on paid ones. These are meant to help bloggers monetize their blogs through advertising. But the same features can be quite useful to an academic who wants to see which blog entries seem to attract the most traffic.

Closer to “citation impact” is the fact that links to a given post are visible within that post through the ping and trackback systems. If another blogger links to this very blogpost, a link to that second blogger’s post will appear under mine. In other words, a blogpost can embed future references.
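
For readers curious about the plumbing behind these pings: under the original Trackback specification, a ping is just a form-encoded HTTP POST sent to the target post’s trackback URL, and the server replies with a small XML document whose `error` element is 0 on success. The helper below only builds the request body; the function name and example values are mine, not part of any particular blogging platform’s API.

```python
from urllib.parse import urlencode

def trackback_ping_body(title, excerpt, url, blog_name):
    # The four standard trackback parameters, form-encoded exactly as a
    # browser would encode an HTML form submission.
    return urlencode({
        "title": title,
        "excerpt": excerpt,
        "url": url,
        "blog_name": blog_name,
    })

body = trackback_ping_body(
    "A reply to your post",
    "A short excerpt of my reply…",
    "http://example.com/my-reply",
    "My Blog",
)
```

Sending `body` as a POST to the target’s trackback URL is what makes the reply show up under the original post.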

In terms of teaching, thinking about impact through blogging can also have interesting effects. If students are blogging, they can cite and link to diverse items and these connections can serve as a representation of the constructive character of learning. But even if students don’t blog, a teacher blogging course-related material can increase the visibility of that course. In some cases, this visibility may lead to inter-institutional collaboration or increased enrollment.

Transparency

While secrecy may be essential in some academic projects, most academics tend to adopt a favourable attitude toward transparency. Academia is about sharing information and spreading knowledge, not about protecting information or about limiting knowledge to a select few.

Bloggers typically value transparency.

There are several ethical issues which relate to transparency. Some ethical principles prevent transparency (for instance, most research projects involving “human subjects” require anonymity). But academic ethics typically go with increased transparency on the part of the researcher. For instance, informed consent by a “human subject” requires complete disclosure of how the data will be used and protected. There are usually requirements for the primary investigator to be reachable during the research project.

Transparency is also valuable in teaching. While some things should probably remain secret (say, answers to exam questions), easy access to a number of documents makes a lot of sense in learning contexts.

Public Intellectuals

It seems that the term “intellectual” gained currency as a label for individuals engaged in public debates. While public engagement has taken on a different significance over the years, the responsibility for intellectuals to communicate publicly is still a matter of interest.

Through blogging, anyone can engage in public debate, discourse, or dialogue.

Reciprocity

Scholars working with “human subjects” often think about reciprocity. While remuneration may be the primary mode of compensation for participation in a research project, a broader concept of reciprocity is often at stake. Those who participated in the project usually have a “right to know” about the results of that study. Even when that isn’t the case and the results of the study remain secret, the asymmetry of human subjects revealing something about themselves to scholars who reveal nothing seems to clash with fundamental principles in contemporary academia.

Reciprocity in teaching can lead directly to some important constructivist principles. The roles of learners and teachers, while not completely interchangeable, are reciprocal. A teacher may learn and a learner may teach.

Playing with Concepts

Blogging makes it easy to try concepts out. More than “thinking out loud,” the type of blogging activity I’m thinking about can serve as a way to “put ideas on paper” (without actual paper) and eventually get feedback on those ideas.

In my experience, microblogging (Identi.ca, Twitter…) has been more efficient than extended blogging in terms of getting conceptual feedback. In fact, social networks (Facebook, more specifically) have been even more conducive to hashing out concepts.

Many academics do hash concepts out with students, especially with graduate students. The advantage is that students are likely to understand concepts quickly as they already share some of the same references as the academic who is playing with those concepts. There’s already a context for mutual understanding. The disadvantage is that a classroom context is too narrow a setting to really try out the implications of a concept.

A method I like to use is to use fairly catchy phrases and leave concepts fairly raw, at first. I then try the same concept in diverse contexts, on my blogs or off.

The main example I have in mind is the “social butterfly effect.” It may sound silly at first but I find it can be a basis for discussion, especially if it spreads a bit.

A subpoint, here, is that this method allows for “gauging interest” in new concepts and it can often lead one in completely new directions. By blogging about concepts, an academic can tell whether a concept has a chance to stick in a broad frame (outside the Ivory Tower) and may gain insight from outside disciplines.

Playing with Writing

This one probably applies more to “junior academics” (including students) but it can also work with established academics who enjoy diversifying their writing styles. Simply put: blogwriting is writing practice.

A common idea in cognitive research on expertise is that it takes about ten thousand hours to become an expert. For better or worse, academics are experts at writing. And we gain that expertise through practice. In this context, it’s easy to see blogging as a “writing exercise.” At least, that would be a perspective to which I can relate.

My impression is that writing skills are most efficiently acquired through practice. The type of practice I have in mind is “low-stakes,” in the sense that the outcomes of a writing exercise are relatively inconsequential. The basis for this perspective is that self-consciousness, inhibition, and self-censorship tend to get in the way of fluid writing. High-stakes writing (such as graded assignments) can make a lot of sense at several stages in the learning process, but overemphasis on evaluating someone’s writing skills will likely stress out the writer more than motivate her/him to write.

This impression is to a large extent personal. I readily notice that when I get too self-conscious about my own writing, it becomes much less fluid. In fact, because writing about writing tends to make one self-conscious, my writing this post is much less efficient than my usual writing sessions.

In my mind, there’s a cognitive basis to this form of low-stakes, casual writing. As with language acquisition, learning occurs whether or not we’re corrected. According to most research in language acquisition, children acquire their native languages through exposure, not through a formal learning process. My guess is that the same applies to writing.

In some ways, this is a defence of drafts. “Draft out your ideas without overthinking what might be wrong about your writing.” Useful advice, at least in my experience. The further point is to do something with those drafts, the basis for the RERO principle: “release your text in the wild, even if it may not correspond to your standards.” Every text is a work in progress. Especially in a context where you’re likely to get feedback (i.e., blogging). Trial and error, with a feedback mechanism. In my experience, feedback on writing tends to be given in a thoughtful and subtle fashion while feedback on ideas can be quite harsh.

The notion of writing styles is relevant, here. Some of Hugh’s arguments about the need for blogging in academia revolve around the notion that “academics are bad writers.” My position is that academics are expert writers but that academic writing is a very specific beast. Hugh’s writing standards might clash with typical writing habits among academics (which often include neologisms and convoluted metaphors). Are Hugh’s standards appropriate in terms of academic writing? Possibly, but why then are academic texts rated so low on writing standards after having been reviewed by peers and heavily edited? The relativist’s answer is, to me, much more convincing: academic texts are typically judged through standards which are context-specific. Judging academic writing with outside standards is like judging French writing with English standards (or judging prose through the standards of classic poetry).

Still, there’s something to be said about readability. Especially when these texts are to be used outside academia. Much academic writing is meant to remain within the walls of the Ivory Tower yet most academic disciplines benefit from some interaction with “the general public.” Though it may not be taught in universities and colleges, the skill of writing for a broader public is quite valuable. In fact, it may easily be transferable to teaching, especially if students come from other disciplines. Furthermore, writing outside one’s discipline is required in any type of interdisciplinary context, including project proposals for funding agencies.

No specific writing style is implied in blogging. A blogger can use whatever style she/he chooses for her/his posts. At the same time, blogging tends to encourage writing which is broadly readable and makes regular use of hyperlinks to connect to further information. In my opinion, this is quite an appropriate type of writing in which academics can extend their skills.

“Public Review”

Much of the preceding connects with peer review, which was the basis of Mark Liberman’s post.

In academia’s recent history, “peer reviewed publications” have become the hallmark of scholarly writing. Yet, as Steve McIntyre claims, the current state of academic peer review may not be as efficient at ensuring scholarly quality as its proponents claim it to be. As opposed to financial auditing, for instance, peer review implies very limited assessment based on data. And I would add that the very notion of “peer” could be assessed more carefully in such a context.

Overall, peer review seems to be relatively inefficient as a “reality check.” This might sound like a bold claim and I should provide data to support it. But I mostly want to provoke some thought as to what the peer review process really implies. This is not about reinventing the wheel but it is about making sure we question assumptions about the process.

Blogging implies public scrutiny. This directly relates to transparency, discussed above. But there is also the notion of giving the public the chance to engage with the outcomes of academic research. Sure, the general public sounds like a dangerous place to propose some ideas (especially if they have to do with health or national security). But we may give some thought to Linus’s law and think about the value of “crowdsourcing” academic falsification.

Food for Thought

There’s a lot more I want to add but I should heed my call to RERO. Otherwise, this post will remain in my draft posts for an indefinite period of time, gathering dust and not allowing any timely discussion. Perhaps more than at any other point, I would be grateful for any thoughtful comment about academic blogging.

In fact, I will post this blog entry “as is,” without careful proofreading. Hopefully, it will be the start of a discussion.

I will “send you off” with a few links related to blogging in academic contexts, followed by Hugh’s list of arguments.

Links on Academic Blogging

(With an Anthropological emphasis)

Hugh’s List

  1. You need to improve your writing
  2. Some of your ideas are dumb
  3. The point of academia is to expand knowledge
  4. Blogging expands your readership
  5. Blogging protects and promotes your ideas
  6. Blogging is Reputation
  7. Linking is better than footnotes
  8. Journals and blogs can (and should) coexist
  9. What have journals done for you lately?

The Need for Social Science in Social Web/Marketing/Media (Draft)

[Been sitting on this one for a little while. Better RERO it, I guess.]

Sticking My Neck Out (Executive Summary)

I think that participants in many technology-enthusiastic movements which carry the term “social” would do well to learn some social science. Furthermore, my guess is that ethnographic disciplines are very well-suited to the task of teaching participants in these movements something about social groups.

Disclaimer

Despite the potentially provocative title and my explicitly stating a position, I mostly wish to think out loud about different things which have been on my mind for a while.

I’m not an “expert” in this field. I’m just a social scientist and an ethnographer who has been observing a lot of things online. I do know that there are many experts who have written many great books about similar issues. What I’m saying here might not seem new. But I’m using my blog as a way to at least write down some of the things I have in mind and, hopefully, discuss these issues thoughtfully with people who care.

Also, this will not be a guide on “what to do to be social-savvy.” Books, seminars, and workshops on this specific topic abound. But my attitude is that every situation needs to be treated in its own context, that cookie-cutter solutions often fail. So I would advise people interested in this set of issues to train themselves in at least a little bit of social science, even if much of the content of the training material seems irrelevant. Discuss things with a social scientist, hire a social scientist in your business, take a course in social science, and don’t focus on advice but on the broad picture. Really.

Clarification

Though they are all different, enthusiastic participants in “social web,” “social marketing,” “social media,” and other “social things online” do have some commonalities. At the risk of angering some of them, I’m lumping them all together as “social * enthusiasts.” One thing I like about the term “enthusiast” is that it can apply to both professionals and amateurs, to geeks and dabblers, to full-timers and part-timers. My target isn’t a specific group of people. I just observed different things in different contexts.

Links

Shameless Self-Promotion

A few links from my own blog, for context (and for easier retrieval):

Shameless Cross-Promotion

A few links from other blogs, to hopefully expand context (and for easier retrieval):

Some raw notes

  • Insight
  • Cluefulness
  • Openness
  • Freedom
  • Transparency
  • Unintended uses
  • Constructivism
  • Empowerment
  • Disruptive technology
  • Innovation
  • Creative thinking
  • Critical thinking
  • Technology adoption
  • Early adopters
  • Late adopters
  • Forced adoption
  • OLPC XO
  • OLPC XOXO
  • Attitudes to change
  • Conservatism
  • Luddites
  • Activism
  • Impatience
  • Windmills and shelters
  • Niche thinking
  • Geek culture
  • Groupthink
  • Idea horizon
  • Intersubjectivity
  • Influence
  • Sphere of influence
  • Influence network
  • Social butterfly effect
  • Cog in a wheel
  • Social networks
  • Acephalous groups
  • Ego-based groups
  • Non-hierarchical groups
  • Mutual influences
  • Network effects
  • Risk-taking
  • Low-stakes
  • Trial-and-error
  • Transparency
  • Ethnography
  • Epidemiology of ideas
  • Neural networks
  • Cognition and communication
  • Wilson and Sperber
  • Relevance
  • Global
  • Glocal
  • Regional
  • City-State
  • Fluidity
  • Consensus culture
  • Organic relationships
  • Establishing rapport
  • Buzzwords
  • Viral
  • Social
  • Meme
  • Memetic marketplace
  • Meta
  • Target audience

Let’s Give This a Try

The Internet is, simply, a network. Sure, technically it’s a meta-network, a network of networks. But that is pretty much irrelevant, in social terms, as most networks may be analyzed at different levels as containing smaller networks or being parts of larger networks. The fact remains that the ‘Net is pretty easy to understand, sociologically. It’s nothing new, it’s just a textbook example of something social scientists have been looking at for a good long time.

Though the Internet mostly connects computers (in many shapes or forms, many of them being “devices” more than the typical “personal computer”), the impact of the Internet is through human actions, behaviours, thoughts, and feelings. Sure, we can talk ad nauseam about the technical aspects of the Internet, but these topics have been covered a lot in the last fifteen years of intense Internet growth and a lot of people seem to be ready to look at other dimensions.

The category of “people who are online” has expanded greatly, in different steps. Here, Martin Lessard’s description of the Internet’s Six Cultures (Les 6 cultures d’Internet) is really worth a read. Martin’s post is in French but we also had a blog discussion in English, about it. Not only are there more people online but those “people who are online” have become much more diverse in several respects. At the same time, there are clear patterns on who “online people” are and there are clear differences in uses of the Internet.

Groups of human beings are the very basic object of social science. Diversity in human groups is the very basis for ethnography. Ethnography is simply the description of (“writing about”) human groups conceived as diverse (“peoples”). As simple as ethnography can be, it leads to a very specific approach to society which is very compatible with all sorts of things relevant to “social * enthusiasts” on- and offline.

While there are many things online which may be described as “media,” comparing the Internet to “The Mass Media” is often the best way to miss “what the Internet is all about.” Sure, the Internet isn’t about anything (apart from connecting computers which, in turn, connect human beings). But to get actual insight into the ‘Net, one probably needs to free herself/himself of notions relating to “The Mass Media.” Put bluntly, McLuhan was probably a very interesting person and some of his ideas remain intriguing but fallacies abound in his work and the best thing to do with his ideas is to go beyond them.

One of my favourite examples of the overuse of “media”-based concepts is the issue of influence. In blogging, podcasting, or selling, the notion often is that, on the Internet as in offline life, “some key individuals or outlets are influential and these are the people by whom or channels through which ideas are disseminated.” Hence all the Technorati rankings and other “viewer statistics.” Old techniques and ideas from the times of radio and television expansion are used because it’s easier to think through advertising models than through radically new models. This is, in fact, when I tend to bring back my explanation of the “social butterfly effect”: quite frequently, “influence” online isn’t channelled through specific individuals or outlets, but even when it is, those people are influential by virtue of connecting to diverse groups, not by the number of people they know. There are ways to analyze those connections, but “measuring impact” ultimately misses the point.
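The claim that online “influence” comes from bridging diverse groups rather than from raw tie counts is exactly what network analysts capture with betweenness-style measures. Here is a minimal pure-Python sketch (the network, the node names, and the numbers are all made up for illustration): a “social butterfly” with only two ties outranks the best-connected cluster hub on a simple bridging measure.

```python
from collections import deque

# Hypothetical toy network (all names invented): two tight clusters,
# A1–A4 and C1–C4, joined only by a "social butterfly" B with two ties.
graph = {
    "A1": {"A2", "A3", "A4"}, "A2": {"A1", "A3", "A4"},
    "A3": {"A1", "A2", "A4"}, "A4": {"A1", "A2", "A3", "B"},
    "B":  {"A4", "C4"},
    "C1": {"C2", "C3", "C4"}, "C2": {"C1", "C3", "C4"},
    "C3": {"C1", "C2", "C4"}, "C4": {"C1", "C2", "C3", "B"},
}

def degree(node):
    """Raw tie count: the 'number of people you know' measure."""
    return len(graph[node])

def betweenness(node):
    """Rough bridging measure: how many pairs of *other* people have a
    shortest path (one per pair, via BFS) passing through `node`."""
    count = 0
    people = sorted(graph)
    for s in people:
        for t in people:
            if not (s < t and node not in (s, t)):
                continue
            # BFS from s, recording one shortest-path tree.
            parent = {s: None}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in graph[u]:
                    if v not in parent:
                        parent[v] = u
                        queue.append(v)
            # Walk back from t and see whether the path crosses `node`.
            u = t
            while u is not None:
                if u == node:
                    count += 1
                    break
                u = parent[u]
    return count

# The hub A4 knows twice as many people as B...
print(degree("A4"), degree("B"))          # 4 2
# ...but B bridges more pairs than any hub does.
print(betweenness("B"), betweenness("A4"))  # 16 15
```

This is a crude, single-shortest-path version of betweenness centrality; a real analysis would use something like NetworkX’s `betweenness_centrality`, which implements the full measure. The point survives the simplification: whom you connect matters more than how many people you know.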

Yes, there is an obvious “qual. vs. quant.” angle, here. A major distinction between non-ethnographic and ethnographic disciplines in social sciences is that non-ethnographic disciplines tend to be overly constrained by “quantitative analysis.” Ultimately, any analysis is “qualitative” but “quantitative methods” are a very small and often limiting subset of the possible research and analysis methods available. Hence the constriction and what some ethnographers may describe as “myopia” on the part of non-ethnographers.

Gone Viral

The term “viral” is used rather frequently by “social * enthusiasts” online. I happen to think that it’s a fairly fitting term, even though it’s used more by extension than by literal meaning. To me, it relates rather directly to Dan Sperber’s “epidemiological” treatment of culture (see Explaining Culture) which may itself be perceived as resembling Dawkins’s well-known “selfish gene” ideas made popular by different online observers, but with something which I perceive to be (to use simple semiotic/semiological concepts) more “motivated” than the more “arbitrary” connections between genetics and ideas. While Sperber could hardly be described as an ethnographer, his anthropological connections still make some of his work compatible with ethnographic perspectives.

Analysis of the spread of ideas does correspond fairly closely with the spread of viruses, especially given the nature of contacts which make transmission possible. One need not do much to spread a virus or an idea. This virus or idea may find “fertile soil” in a given social context, depending on a number of factors. Despite the disadvantages of extending analogies and core metaphors too far, the type of ecosystem/epidemiology analysis of social systems embedded in uses of the term “viral” does seem to help some specific people make sense of different things which happen online. In “viral marketing,” the type of informal, invisible, unexpected spread of recognition through word of mouth does relate somewhat to the spread of a virus. Moreover, the metaphor of “viral marketing” is useful in thinking about the lack of control the professional marketer may have on how her/his product is perceived. In this context, the term “viral” seems useful.

The Social

While “viral” seems appropriate, the even simpler “social” often seems inappropriately used. It’s not a ranty attitude which makes me comment negatively on the use of the term “social.” In fact, I don’t really care about the use of the term itself. But I do notice that use of the term often obfuscates the Internet’s obviously social character.

To a social scientist, anything which involves groups is by definition “social.” Of course, some groups and individuals are more gregarious than others, some people are taken to be very sociable, and some contexts are more conducive to heightened social interactions. But social interactions happen in any context.
As an example I used (in French) in reply to this blog post, something as common as standing in line at a grocery store is representative of social behaviour and can be analyzed in social terms. Any Web page which is accessed by anyone is “social” in the sense that it establishes some link, however tenuous and asymmetric, between at least two individuals (someone who created the page and the person who accessed it). Sure, it sounds like the minimal definition of communication (sender, medium/message, receiver). But what most people who talk about communication seem to forget (unlike Jakobson) is that all communication is social.

Sure, putting a comment form on a Web page facilitates a basic social interaction, making the page “more social” in the sense of making explicit social interaction easier on that page. And, of course, adding features which facilitate sharing data with one’s personal contacts is a step above the contact form in terms of making certain types of social interaction straightforward and easy. But, contrary to what Google Friend Connect implies, adding those features doesn’t suddenly make the site social. Assuming some people visited the site, it already had a social dimension. I’m not nitpicking on word use. I’m saying that using “social” in this way may blind some people to social dimensions of the Internet. And the consequences of overlooking how social the ‘Net is can be pretty harsh, in some cases.

Something similar may be said about the “Social Web,” one of the many definitions of “Web 2.0” which is used in some contexts (mostly, the cynic would say, “to make some tool appear ‘new and improved’”). The Web as a whole was “social” by definition. Granted, it lacked the ease of social interaction afforded by such venerable Internet classics as Usenet and email. But it was already making some modes of social interaction easier to perceive. No, this isn’t about “it’s all been done.” It’s about being oblivious to the social potential of tools which already existed. True, the period in Internet history known as “Web 2.0” (and the onset of the Internet’s sixth culture) may be associated with new social phenomena. But there is little evidence that the association is causal, that new online tools and services created a new reality which suddenly made it possible for people to become social online. This is one reason I like Martin Lessard’s post so much. Instead of postulating the existence of a brand new phenomenon, he talks about the conditions for some changes in both Internet use and the form the Web has taken.

Again, this isn’t about terminology per se. Substitute “friendly” for “social” and similar issues might come up (friendship and friendliness being disconnected from the social processes which underlie them).

Adoptive Parents

Many “social * enthusiasts” are interested in “adoption.” They want their “things” to be adopted. This is especially visible among marketers but even in social media there’s an issue of “getting people on board.” And some people, especially those without social science training, seem to be looking for a recipe.

Problem is, there probably is no such thing as a recipe for technology adoption.

Sure, some marketing practices from the offline world may work online. Sometimes, adapting a strategy from the material world to the Internet is very simple and the Internet version may be more effective than the offline version. But it doesn’t mean that there is such a thing as a recipe. It’s a matter of either having some people who “have a knack for this sort of thing” (say, based on sensitivity to what goes on online) or relying on pure luck. Or it’s a matter of measuring success in different ways. But it isn’t based on a recipe. Especially not in the Internet sphere, which is changing so rapidly (despite some remarkably stable features).

Again, I’m partial to contextual approaches (“fully-customized solutions,” if you really must). Not just because I think there are people who can do this work very efficiently. But because I observe that “recipes” do little more than sell “best-selling books” and other items.

So, what can we, as social scientists, say about “adoption?” That technology is adopted based on the perceived fit between the tools and people’s needs/wants/goals/preferences. Not the simple “the tool will be adopted if there’s a need.” But a perception that there might be a fit between an amorphous set of social actors (people) and some well-defined tools (“technologies”). Recognizing this fit is extremely difficult and forcing it is extremely expensive (not to mention completely unsustainable). But social scientists do help in finding ways to adapt tools to different social situations.

Especially ethnographers. Because instead of surveys and focus groups, we challenge assumptions about what “must” fit. Our heads and books are full of examples which sound, in retrospect, like common sense but which had stumped major corporations with huge budgets. (Ask me about McDonald’s in Brazil or browse a cultural anthropology textbook, for more information.)

Recently, while reading about issues surrounding the OLPC’s original XO computer, I was glad to read the following:

John Heskett once said that the critical difference between invention and innovation was its mass adoption by users. (Niti Bhan, The emperor has designer clothes)

Not that this is a new idea, for social scientists. But I was glad that the social dimension of technology adoption was recognized.

In marketing and design spheres especially, people often think of innovation as individualized. While some individuals are particularly adept at leading inventions to mass adoption (Steve Jobs being a textbook example), “adoption comes from the people.” Yes, groups of people may be manipulated to adopt something “despite themselves.” But that kind of forced adoption is still dependent on a broad acceptance, by “the people,” of even the basic forms of marketing. This is very similar to the simplified version of the concept of “hegemony,” so common in both social sciences and humanities. In a hegemony (as opposed to a totalitarian regime), no coercion is necessary because the logic of the system has been internalized by people who are affected by it. Simple, but effective.

In online culture, adept marketers are highly valued. But I’m quite convinced that pre-online marketers already knew that they had to “learn society first.” One thing with almost anything happening online is that “the society” is boundless. Country boundaries usually make very little sense and the social rules of every local group will leak into even the simplest occasion. Some people seem to assume that the end result is a cultural homogenization, thereby not necessitating any adaptation besides the move from “brick and mortar” to online. Others (or the same people, actually) want to protect their “business models” by restricting tools or services based on country boundaries. In my mind, both attitudes are ineffective and misleading.

Sometimes I Feel Like a Motherless Child

I think the Cluetrain Manifesto can somehow be summarized through concepts of freedom, openness, and transparency. These are all very obvious (in French, the book title is something close to “the evident truths manifesto”). They’re also all very social.

Social scientists often become activists based on these concepts. And among social scientists, many of us are enthusiastic about the social changes which are happening in parallel with Internet growth. Not because of technology. But because of empowerment. People are using the Internet in their own ways, the one key feature of the Internet being its lack of centralization. While the lack of centralized control may be perceived as a “bad thing” by some (social scientists or not), there’s little argument that the ‘Net as a whole is out of the control of specific corporations or governments (despite the large degree of consolidation which has happened offline and online).

Especially in the United States, “freedom” is conceived as a basic right. But it’s also a basic concept in social analysis. As some put it: “somebody’s rights end where another’s begin.” But social scientists have a whole apparatus to deal with all the nuances and subtleties which are bound to come from any situation where people’s rights (freedom) may clash or even simply be interpreted differently. Again, not that social scientists have easy, ready-made answers on these issues. But we’re used to dealing with them. We don’t interpret freedom as a given.

Transparency is fairly simple and relates directly to how people manage information itself (instead of knowledge or insight). Radical transparency is giving as much information as possible to those who may need it. Everybody has a “right to learn” a lot of things about a given institution (instead of “right to know”), when that institution has a social impact. Canada’s Access to Information Act is quite representative of the move to transparency and use of this act has accompanied changes in the ways government officials need to behave to adapt to a relatively new reality.

Openness is an interesting topic, especially in the context of the so-called “Open Source” movement. Radical openness implies participation by outsiders, at least in the form of verbal feedback. The cluefulness of “opening yourself to your users” is made obvious in the context of successes by institutions which have at least portrayed themselves as open. What’s unfortunate, in my mind, is that many institutions now attempt to position themselves on the openness end of the “closed/proprietary to open/responsive” scale without much work done to really open themselves up.

Communitas

Mottoes, slogans, and maxims like “build it and they will come,” “there’s a sucker born every minute,” “let them eat cake,” and “give them what they want” all fail to grasp the basic reality of social life: “they” and “we” are linked. We’re all different and we’re all connected. We all take part in groups. These groups are all associated with one another. We can’t simply behave the same way with everyone. Identity has two parts: sense of belonging (to an “in-group”) and sense of distinction (from an “out-group”). “Us/Them.”

Within the “in-group,” if there isn’t any obvious hierarchy, the sense of belonging can take the form that Victor Turner called “communitas” and which happens in situations giving real meaning to the notion of “community.” “Community of experience,” “community of practice.” Eckert and Wittgenstein brought to online networks. In a community, contacts aren’t always harmonious. But people feel they fully belong. A network isn’t the same thing as a community.

The World Is My Oyster

Despite the so-called “Digital Divide” (or, more precisely, the maintenance online of global inequalities), the ‘Net is truly “Global.” So is the phone, now that cellphones are accomplishing the “leapfrog effect.” But this one Internet we have (i.e., not Internet2 or other such specialized meta-network) is reaching everywhere through a single set of compatible connections. The need for cultural awareness is increased, not alleviated, by online activities.

Release Early, Release Often

Among friends, we call it RERO.

The RERO principle is a multiple-pass system. Instead of waiting for the right moment to release a “perfect product” (say, a blogpost!), the “work in progress” is provided widely, garnering feedback which will be integrated in future “product versions.” The RERO approach can be unnerving to “product developers,” but it has proved its value in online-savvy contexts.

I use “product” in a broad sense because the principle applies to diverse contexts. Furthermore, the RERO principle helps shift the focus from “product,” back into “process.”

The RERO principle may imply some “emotional” or “psychological” dimensions, such as humility and the acceptance of failure. At some level, differences between RERO and “trial-and-error” methods of development appear insignificant. Those who create something should not expect the first try to be successful and should recognize mistakes to improve on the creative process and product. This is similar to the difference between “rehearsal” (low-stakes experimentation with a process) and “performance” (with responsibility, by the performer, for evaluation by an audience).

Though applications of the early/often concept to social domains are mostly satirical, there is a social dimension to the RERO principle. Releasing a “product” implies a group, a social context.

The partial and frequent “release” of work to “the public” relates directly to openness and transparency. Frequent releases create a “relationship” with human beings. Sure, many of these are “Early Adopters” who are already overrepresented. But the rapport established between an institution and people (users/clients/customers/patrons…) can be transferred more broadly.

Releasing early seems to shift the limit between rehearsal and performance. Instead of being able to make mistakes on your own, your mistakes are shown publicly and your success is directly evaluated. Yet a somewhat reverse effect can occur: because expectations have shifted to the “lower” end, evaluation of the end result becomes a lower-stakes affair at each stage of the project. This is probably the logic behind Google’s much-discussed propensity to call all its products “beta.”

While the RERO principle does imply a certain openness, the expectation that each release might integrate all the feedback “users” have given is not fundamental to releasing early and frequently. The expectation is set by a specific social relationship between “developers” and “users.” In geek culture, especially when users are knowledgeable enough about technology to make elaborate wishlists, the expectation to respond to user demand can be quite strong, so much so that developers may perceive a sense of entitlement on the part of “users” and grow some resentment out of the situation. “If you don’t like it, make it yourself.” Such a situation is rather common in FLOSS development: since “users” have access to the source code, they may be expected to contribute to the development project. When “users” not only fail to fulfil expectations set by open development but even have the gumption to ask developers to respond to demands, conflicts may easily occur. And conflicts are among the things which social scientists study most frequently.

Putting the “Capital” Back into “Social Capital”

In the past several years, “monetization” (transforming ideas into currency) has become one of the major foci of anything happening online. Anything which can be a source of profit generates an immediate (and temporary) “buzz.” The value of anything online is measured through typical currency-based economics. The relatively recent movement toward “social” whatever is not only representative of this tendency, but might be seen as its climax: nowadays, even social ties can be sold directly, instead of being part of a secondary transaction. As some people say, “The relationship is the currency” (or “the commodity,” or “the means to an end”). Fair enough, especially if these people understand what social relationships entail. But still strange, in context, to see people “selling their friends,” sometimes in a rather literal sense, when social relationships are conceived as valuable. After all, “selling the friend” transforms that relationship, diminishes its value. Ah, well, maybe everyone involved is just cynical. Still, even their cynicism contributes to the system. But I’m not judging. Really, I’m not. I’m just wondering.

Anyhoo, the “What are you selling, anyway?” question makes as much sense online as it does with telemarketers and other greed-focused strangers (maybe “calls” are always “cold,” online). It’s just that the answer isn’t always so clear when the “business model” revolves around creating, then breaking, a set of social expectations.

Me? I don’t sell anything. Really, not even my ideas or my sense of self. I’m just not good at selling. Oh, I do promote myself and I do accumulate social capital. As social butterflies are wont to do. The difference is, in the case of social butterflies such as myself, no money is exchanged and the social relationships are, hopefully, intact. This is not to say that friends never help me or never receive my help in a currency-friendly context. It mostly means that, in our cases, the relationships are conceived as their own rewards.

I’m consciously not taking the moral high ground, here, though some people may easily perceive this position as the morally superior one. I’m not even talking about a position. Just about an attitude to society and to social relationships. If you will, it’s a type of ethnographic observation from an insider’s perspective.

Makes sense?

Open Access in Canada: CIHR Weighs In

This could be big news. Canada’s major health-research agency has announced policy support for Open Access.

Open access to health research publications: CIHR unveils new policy – CIHR

“This open access policy will serve as a model for other funding agencies”

On the other hand, this policy sounds “non-binding”:

CIHR will require its researchers to ensure that their original research articles are freely available online within six months of publication.

grant recipients must make every effort to ensure that their peer-reviewed research articles are freely available as soon as possible after publication

So, it’s not as strong as some other mandates out there. But that might not be such a bad thing either. The benefits of OA are sufficiently clear that the only thing which is needed is for it to become common practice. If some researchers fail to comply right away, punishing them might not be the most appropriate action. In fact, the way the press release is written, the policy seems to encourage a positive attitude toward OA, which sounds quite convincing in the context of Canadian academia.

There has been lots of talk about OA in Canada and elsewhere, but this action by the CIHR can have important consequences.

Personally, and though colleagues surely disagree, I appreciate the fact that “the public” is included in the statement:

As a publicly-funded organization, we have a responsibility to ensure that new advances in health research are available to those who need it and can use it – researchers world-wide, the public and policy makers.

Oh, sure, this doesn’t mean that patients will suddenly be able to use all sorts of information about diseases they may have. We all know about the dangers of self-diagnosis, so that might not be a good thing. The health science literature is difficult enough to understand that it’s unlikely to attract a broad readership from the general public. But the point is made that research (applied or basic) is done for the common good. Not just to get tenure in a university. One thing which might happen, from OA, is that careers in research get started because of access to research documents. Instead of keeping research in the Ivory Tower, research can be more easily distributed to anyone. This can even prevent the spread of disinformation!

For one thing, Open Access to research publications makes a lot of sense in a society which gives so much importance to free access to information. IMHO, the social significance of the Canadian Access to Information Act is often underestimated.

I do hope agencies such as SSHRC will adopt policies similar to the CIHR. Actually the SSHRC has been active in terms of OA, but AFAICT, doesn’t “require its researchers to ensure that their original research articles are freely available online within six months of publication.” OTOH, the SSHRC does have an initiative to support OA journals financially.

Looking forward to what well-known OA advocates will say about this. Surely, they’ll be more critical than I am, saying that this action, on the part of the CIHR, is but one step in the direction of generalised OA.

I may be overly enthusiastic here. But I do think this can be a turning point in Canadian academia.

Schools, Research, Relevance

The following was sent to the Moodle Lounge.

Business schools and research | Practically irrelevant? | Economist.com

My own reaction to this piece…
Well, well…
The title and the tone are, IMHO, rather inflammatory. For those who follow tech news, this could sound like a column by John C. Dvorak. The goal is probably to spark conversation about the goals of business schools. Only a cynic (rarely found in academia 😛 ) would say that they’re trying to increase readership. 😎

The article does raise important issues, although many of those have been tackled in the past. For instance, the tendency for educational institutions to look at the short-term gains of their “employees’ work” for their own programs instead of looking at the broader picture in terms of social and human gains. Simple rankings decreasing the diversity of programmes. Professors who care more about their careers than about their impact on the world. The search for “metrics” in scholarship (citation impact, patent counts, practical impact…). The quest for prestige. Reluctance to change. Etc.

This point could lead to something interesting:

AACSB justifies its stance by saying that it wants schools and faculty to play to their strengths, whether they be in pedagogy, in the research of practical applications, or in scholarly endeavour.

IMHO, it seems to lead to a view of educational institutions which does favour diversity. We need some schools which are really good at basic research. We need other schools (or other people at the same schools) to be really good at creating learning environments. And some people should be able to do the typical goal-oriented “R&D” for very practical purposes, with business partners in mind. It takes all kinds. And because some people forget the necessity for diverse environments, it’s an important point to reassess.
The problem is, though, that the knee-jerk reaction apparently runs counter to the “diversity” argument. Possibly because of the AACSB’s own recommendations or maybe because of a difference of opinion, academics (and the anonymous Economist journalist) seem to understand the AACSB’s stance as meaning that all programs should be evaluated with the exact same criteria which give less room for basic research. Similar things have been done in the past and, AFAICT, basic research eventually makes a comeback, one way or the other. A move toward “practical outcomes” is often a stopgap measure in a “bearish” context.

To jump on the soapbox for a second. I personally do think that there should be more variety in academic careers, including in business schools. Those who do undertake basic research are as important as the others. But it might be ill-advised to require every faculty member at every school to have an impressive research résumé every single year. Those people whose “calling” it is to actually teach should have some space and should probably not be judged using the same criteria as those who perceive teaching as an obstacle in their research careers. This is not to say that teachers should do no research. But it does mean that requiring proof of excellence in research of everyone involved is a very efficient way to get both shoddy research and dispassionate teaching. In terms of practical implications for the world outside the Ivory Tower, often subsumed under the category of “Service,” there are more elements which should “count” than direct gain from a given project with a powerful business partner. (After all, there is more volatility in this context than in most academic endeavours.) IMHO, some people are doing more for their institutions by going “in the world” and getting people interested in learning than by working for a private sponsor. Not that private sponsors are unimportant. But one strength of academic institutions is that they can be neutral enough to withstand changes in the “market.”

Phew! 😉

Couldn’t help but notice that the article opens the door for qualitative and inductive research. Given the current trend in and toward ethnography, this kind of attitude could make it easier to “sell” ethnography to businesses.
What made me laugh in a discussion of video-based ethnographic observation is that they keep contrasting “ethnography” (at least, the method they use at EverydayLives) with “research.” 😀

The advantage of this distinction, though, in the context of this Economist piece, is that marketeers and other business-minded people might then see ethnography as an alternative for what is perceived as “practically irrelevant” research. 💡

Research and Regulations

Talk about chilling effects

Colleges and Universities – Institutional Review Boards – Ethics – New York Times

Interesting that the NYT should take this issue on. Because of its readership, it might have an impact. Many researchers have, in fact, been having these discussions, including in ethnographic disciplines.

Defending Quebec’s Cegep System

Disclaimer: So far, I’ve taught at six universities and one college in Indiana, Massachusetts, New Brunswick, and Quebec. In Quebec, I’ve taught at Montreal’s Université de Montréal (French-speaking) and Concordia University (English-speaking). This entry is mostly about my teaching experience in Montreal in contrast to my teaching experience in the Midwest and Northeast regions of the United States. Having spent some time in Mali, Switzerland, and France, I do realise that many education systems outside of Canada and the U.S. work pretty much like Quebec’s.

It’s partly my bias as a Québécois, I’m sure. Or it’s the weather. Yet I can’t help but be amazed at how well-prepared my students at both Concordia University and Université de Montréal have been, so far. Though personal characteristics could conceivably play a part, I usually see my Quebec students’ preparedness in relation to the Cegep system that we have here in Quebec.

“So,” I hear you ask, “what is the Cegep system anyway?” Well, it’s the educational system that we have, here in Quebec. It includes Cegeps.

“But…”

Yeah, I know. 😉

“Cegep” or “CEGEP” (pronounced “sea-jep” or “say-jep”) is a Quebec French acronym which stands for «Collège d’enseignement général et professionnel» (“College of General and Professional Education”). A Cegep is a post-secondary institution («Collège») which serves both as a comprehensive («Général») transitional stage between secondary school and university and as vocational («Professionnel») training («Enseignement») in fields like nursing, robotics, or computer science. People in the U.S. could think of it as a blend of a vocational school, a community college, a prep school, a continuing education program, and a two-year liberal arts college. A Cegep’s degree («diplôme d’études collégiales» or “DEC,” pronounced “deck”) can be compared with the French «baccalauréat» or the Swiss «maturité», though it is less hierarchical than those European models. (Please note that «baccalauréat» (or «bacc.», pronounced “back”) is used in Quebec to refer to the bachelor’s degree.)

Though I haven’t been in direct contact with many Cegep students for quite a while, I find the Cegep system to be one of the best features of the Quebec education system.

Of course, I tend to idealise things a fair bit and I know many people whose opinion of the Cegep system is much less enthusiastic than mine. Still, through both informal and formal discussions with many university students and faculty in Canada, France, Switzerland, and the United States, my positive perspective on the Cegep system keeps being reinforced.

One reason this issue keeps being relevant is that provincial politicians, school board administrators, and some other members of Quebec society occasionally attack the Cegep system for different reasons. On the other hand, I have yet to meet a university professor who has very negative things to say about the Cegep system. Some might come out of the woodwork after this blog entry, but it would take a fair bit to get me, as a university instructor, to see Cegeps in a very negative light.

Cegeps were an effect of Quebec’s Quiet Revolution (late 1960s through the 1970s). They’re a somewhat recent phenomenon, so we can’t really see all of their social effects, but they have existed through a long enough period of intense social change to have really taken root in the fabric of Quebec culture. (I love mixing metaphors! 😉 )

I’m a little bit unclear as to whether or not the requirements have remained the same since my own time as a music student at Cégep Saint-Laurent (1989-1991), but here’s a description in the present tense of how Cegeps worked when I went to one almost twenty years ago. All Quebeckers younger than 21 who wish to go to a university in Quebec need to complete at least two years’ worth of Cegep courses after secondary school (grades 7-11, here). “Professional” (vocational) programs last three years and also satisfy university entrance requirements if a Cegep graduate wants to go on to a university. For those 21 or older, life experience usually counts as equivalent to the Cegep requirement for applying to Quebec universities (at least, that’s the way it was, way back when). Even then, most university applicants go through Cegep even if they are old enough to enter a university program without a DEC, as Cegep is an efficient way to prepare for university. Many programs at Quebec universities use representations of Cegep grades (kind of like a normalised GPA) as admission criteria. It wasn’t the case for my B.Sc. in anthropology at Université de Montréal (1991-1994). Unlike the United States, where standardised tests are so common, Quebec students don’t take SAT-like general exams before going to university. To an extent, comprehensive training in a Cegep achieves some of the same goals as SAT scores do in the United States.

As far as I know, non-Quebec students need to go through specific requirements before they can begin a Bachelor’s degree at a Quebec university (B.A. and B.Sc. programs usually last three years, here). I’m not really clear on the details, but the upshot is that even non-Cegep students are specifically prepared for university.

Even with students who never went to Cegep, the existence of Cegeps makes a large difference in the Quebec education system, as it raises the bar for university behaviour. In Quebec, the kinds of mistakes U.S. college students tend to make during their “college years” are supposed to have been made during Cegep years. So Quebec’s university students are less likely to make them.

Unlike pupils in secondary schools, Cegep students enter a specific study program. On paper, course requirements in a typical Cegep program look quite a bit like freshman and sophomore requirements at a North American university or college outside of Quebec. Students choose their own courses (possibly with an advisor, I can’t remember) and usually get a fair bit of “free” time. At Saint-Laurent, my weekly schedule only included 15 hours of classes, but I also had 15 hours of Big Band rehearsal every week and would usually spend thirty hours a week on individual instrument practice as well as thirty hours on study. Yes, that was a bit much, but I feel it really prepared me for an academic career. 😉

The equivalent of “General Education Requirements” in Cegeps include philosophy and physical education courses. The philosophy courses are quite basic but they still prepare students to think about issues which tend to be very important in academic contexts. And, at least in the courses I’ve had at Saint-Laurent, we did read primary texts from important thinkers, like the complete text of Nietzsche’s Zur Genealogie der Moral (translated into French).

As compared to most North American universities, Cegeps charge almost nothing. When I was at Saint-Laurent, we had administrative fees of about $80 and no tuition fees. It has probably changed since that time, but I’m quite sure Cegep fees are nothing like the outrageous tuition fees paid by college and university students in many parts of the United States. What this means to students is that the financial cost of a Cegep program is fairly minimal. Of course, there are many costs associated with going through school during that time. For one thing, a good proportion of Cegep students live in apartments, which can be fairly expensive. And it’s difficult to work full-time while doing a Cegep degree. But, as compared to the typical situation in the U.S., the stakes in dropping a Cegep program or switching to a new one are low enough that students use this time as an opportunity to get to know what they want to do with their lives.

In other words, Cegep students who may look like they’re “wasting their time” are going through the period of socialisation associated with late adolescence in different parts of the world. If, as is quite common, they find out that they don’t necessarily want to get a university degree or that their original degree program was nothing like they planned, they still got something out of their Cegep experience at little cost. Given the operating costs of universities, such shifts in learning orientation carry very high social and individual costs when they happen in universities. “Wasting” a DEC in Natural Sciences by then moving on to become an artist is nothing compared to dropping a pre-Med degree to join the Peace Corps. In cases where public funding to universities is important, the difference is extremely significant, socially.

For many people, Cegep is in fact a way to experience student life to see if they like it. As painful as it may be for some academics and prestige-hungry parents to learn, many people don’t really want to spend that many years (and that much money) as college/university students. In fact, there are those brilliant students who, one day, realise that they just want to learn on their own while working as, say, a cashier at a university cafeteria. My guess is that social pressure and diploma prestige are the only reasons such people ever go through post-secondary education in the first place. I also feel that they should have a right to choose the life that they want. You know: “Pursuit of Happiness” and all of that…

As some would be quick to point out, there are some people who spend years and years in Cegeps, unsuccessfully looking for the perfect program for them, and end up working at low-paying jobs all their lives. These may sound like lost souls but I really think that they are more likely to contribute to society as a whole than the equivalent long-term “undecided majors” in U.S. universities.

Because Cegeps’ individual costs are relatively low, Cegep students often do experiment a lot with courses in different fields. It may seem like a stretch, but my hunch is that this experimental tendency might be one of the reasons Quebec is so productive in creative domains like musical productions and circus shows. If it weren’t for Cegeps, I would never have spent two years of my life in intensive training as a musician. I already knew (since age 13) that I wanted to become an anthropologist, and my DEC in music wasn’t necessary for anything I ever did. But it enhanced my life more than many university programs ever do.

Cegeps often count significant numbers of what U.S. college people tend to call “non-traditional students” (older than the “typical” post-K-12 undergrad). These include fascinating people like mature women who are getting a Cegep degree as part of a life-changing experience (say, after a divorce). Because of this, the average age in a Cegep can be higher than in the typical U.S. graduate school. It also means that Cegep students coming directly from secondary schools are getting accustomed to interacting with people whose life experience may involve parenthood, career development, and long-term personal relationships.

For diverse reasons, Cegeps are the locus of most of the active student movements in Quebec, some of which have led to important strikes and other forms of student protest. Student strikes have had a deep impact on Quebec’s recent history. Not that students have forced long-lasting policy changes by themselves, but many members of recent generations of Quebeckers have gotten a taste for political involvement through student protest.

Though I was living in Indiana at the time (2004-2005), I have seen important effects of the most recent student strike on some dimensions of Quebec society. At the time, around 200,000 Quebec students went on strike in protest of the provincial government’s changes to the financial aid system. At one point, 100,000 students had taken to the streets to march as part of the student movement. The government eventually backed down on the changes it was implementing, and people still talk about the effects of this strike. It is likely that the strike will not have any effect on any specific political party, and political scientists would probably say that the strike failed to produce a “political class.” Yet, and this is an important point, the target of the strike wasn’t a political party but a perceived discrepancy between the ideals of two generations. In my personal opinion, such a social movement is much more important than partisan politics.

In such a context, it isn’t surprising to see many young Quebeckers become social activists, be it for environmental causes or to fight global inequalities. They become like this in Cegeps. Since the majority of secondary school students eventually go to Cegeps, this social involvement has nothing to do with the elitism of the “Revolutions” of the early nationalist era. Cegep students are the perfect example of individualistic (one would say «libertaire») social engagement.

Not only are Cegep students socially involved but they are usually considered to be socially mature.

Quite significantly, many young adults in Quebec learn how to drink by the time they finish Cegep. The drinking age is 18 here, and people usually start Cegep at age 17. As has been happening in different parts of the world for the longest time, cafés and bars around Cegep and university campuses tend to be important meeting spaces for students. Coffee is the drink of choice for many students during the day, but alcoholic drinks (including craft beer, nowadays) bring students together for long discussions in the evenings and at night. Because student alcohol consumption is widely accepted, students never feel the need to hide in residence halls or “Greek houses” to enjoy each other’s company.

In such a context, it’s easy to understand why university students in Quebec are very generally seen as responsible adults. In the U.S., I’ve heard both students and professors describe university students of any age as “kids,” a term I find very symptomatic of tricky educational and academic issues. As I see universities as a place to do serious academic work and not as a place for parents to drop their kids until they grow up, I have many reasons to support Quebec’s Cegep system or anything which may achieve the same results. 🙂

English Syntax and Buffaloing

This Wikipedia entry was featured recently:

Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo. – Wikipedia, the free encyclopedia

It would be a very good example to use in an introductory class in almost any of the language sciences. (The sentence parses, roughly, as: “Buffalo bison [whom] Buffalo bison bully [themselves] bully Buffalo bison.”) The article itself is well-written.

Medici and Innovation

I first encountered the notion of the Medici effect through this interview with Frans Johansson in Ubiquity, a journal frequently mentioned on the Humanist Discussion Group.
A recent article about important changes coming from simple ideas prompted me to post a short blog entry on the subject. Interestingly enough, Johansson himself posted a comment on that entry.
This is in fact a frequent stream of thought, for me. In both business and academia, we tend to live through ideas. Specific ideas. Especially those which can generate money or research projects. An important dimension of the “Medici Effect” seems to be that simple ideas can lead to great accomplishments. Another important dimension is that ideas are both generated in and implemented by groups. Some social contexts seem especially conducive to new ideas. This perspective is well-known enough that even Denys Arcand’s Invasions Barbares had something to say about it.
There are a lot of directions one could take to talk about innovation from that point. Among the possible threads: artistic creativity, personal innovation, the sense of discovery, the economies of ideas, ideas coming from the people, “intellectual property,” fluid/organic innovation, boundless ideas, innovation through links between ideas, Lavoisier on ideas (nothing is created or lost, everything is transformed, including ideas), and so on and so forth.
My personal feeling is that the very concept of innovation has become something of a “core value” for a number of people, especially in industrialized societies: the “newer is better” view of “progress” in both society and technology.
In my mind, the best thing to do is simply to bring ideas together, a “shock of ideas” («le choc des idées»). Hence the long list of tags… 😉