Scriptocentrism and the Freedom to Think

Books are modern. Online textuality is postmodern.

As a comment on my previous blogpost on books, a friend sent me (through Facebook) a link to a blogpost about a petition to Amazon with the following statement:

The freedom to read is tantamount to the freedom to think.

As this friend and I are both anthros+africanists, I’m reacting (perhaps a bit strongly) to that statement.

Given my perspective, I would dare say that I find this statement (brought about by DbD)… ethnocentric.

There, I said it.

And I’ll try to back it up in this blogpost in order to spark even more discussion.

We won’t exhaust this topic any time soon, but I feel there’s a lot we can do about it which has rarely been done.

I won’t use the textbook case of “Language in the Inner City,” but it could help us talk about who decides, in a given social context, what is important. We both come from a literacy-focused background, so we may have to take a step back. Not sure if Bourdieu has commented on Labov, especially in terms of what all this means for “education,” but I’d even want to bring in Ivan Illich, at some point.

Hunters with whom I’ve been working, in Mali, vary greatly in terms of literacy. Some of them have a strong university background and one can even write French legalese (he’s a judge). Others (or some of the same) have gone to Koranic school long enough that they can read classical Arabic. Some have the minimal knowledge of Arabic which suffices, for them, to do divination. Many of them have a very low level of functional literacy. There’s always someone around them who can read and write, so they’re usually not out of the loop and it’s not like the social hierarchy stereotypical of the Catholic Church during the Middle Ages in Europe. It’s a very different social context which can hardly be superimposed with the history of writing and the printing press in Europe.

In terms of “freedom to think,” I really wouldn’t say that they’re lacking. Of course, “free thinker” has a specific meaning in liberal societies with a European background. But even this meaning can be applied to many people I’ve met in Mali.

And I go back to the social context. Those with the highest degree of functional literacy aren’t necessarily those with the highest social status. And unlike Harlem described by Labov, it’s a relatively independent context from the one in which literacy is a sine qua non. Sure, it’s a neocolonial context and Euro-Americans keep insisting that literacy in Latin script is “the most important thing ever” if they are to become a true liberal democracy. Yet, internally, it’s perfectly possible for someone to think freely, get recognition, and help other people to think without going through the written medium.

Many of those I know who have almost nonexistent skills in the written medium also have enough power (in a Weberian sense) that they get others to do the reading and writing for them. And because there are many social means to ensure that communication has worked appropriately, these “scribes” aren’t very likely to use this to take anything away from those for whom they read and write.

In Switzerland, one of my recent ancestors was functionally illiterate. Because of this, she “signed away” most of her wealth. Down the line, I’m one of her very few heirs. So, in a way, I lost part of my inheritance due to illiteracy.

Unless the switch to a European model for notarial services becomes complete, a case like this is unlikely to occur among people I know in Mali. If it does happen, it’s clearly not a failure of the oral system but a problem with this kind of transition. It’s somewhat similar to the situation with women in diverse parts of the continent during the period of direct colonialism: the fact that women have lost what powers they had (say, in a matrilineal/matrilocal society) has to do with the switch to a hierarchical system which put the emphasis on new factors which excluded the type of influence women had.

In other words, I fully understand the connections between liberalism and literacy and I’ve heard enough about the importance of the printing press and journalism in these liberal societies to understand what role reading has played in those contexts. I simply dispute the notion that these connections should be universal.

Yes, I wish the “Universal Declaration of Human Rights” (including the (in)famous Article 26, which caused so many issues) were more culturally aware.

I started reading Deschooling Society a few weeks ago. In terms of “insight density,” it’s much higher than the book which prompted this discussion. While reading the first chapter, I constructed a number of ideas which I personally find useful.

I haven’t finished reading the book. Yet. I might eventually finish it. But much of what I wanted to get from that book, I was able to get from diverse sources. Including that part of the book I did read, sequentially. But, also, everything which has been written about Illich since 1971. And I’ll be interested in reading comments by the reading group at Wikiversity.

Given my background, I have as many “things to say” about the issues surrounding schooling as I have things I’ve read about them. If I had the time, I could write as much about what I’ve read from that book, and it’d probably bring me a lot of benefits.

I’ve heard enough strong reactions against this attitude I’m displaying that I can hear it, already: “How can you talk about a book you haven’t read?” And I sincerely think these people miss an important point. I wouldn’t go so far as to say that their reading habits are off (that’d be mean), especially since those are well-adapted to certain contexts, including what I call scriptocentrism. Not that these people are scriptocentric. But their attitude “goes well with” scriptocentrism.

Academia, despite being the context for an enormous amount of writing and reading, isn’t displaying that kind of scriptocentrism. Sure, a lot of what we do needs to be written (although it’s often surprising how much insight goes unwritten in the work of many an academic). And we do get evaluated through our writing. Not to mention that we need to write in a very specific mode, which almost causes a diglossia.

But we simply don’t feel forced to “read the whole text.”

A colleague has described this as the “dirty little secret” of academia. And one which changes many things for students, to the point that it almost sounds as if it remains a secret so as to separate students into categories of “those who get it” and “the mass.”

It doesn’t take a semester to read a textbook so there are students who get the impression that they can simply read the book in a weekend and take the exams. These students may succeed, depending on the course. In fact, they may get really good grades. But they run into a wall if they want to go on with a career making any use of knowledge construction skills.

Bill Reimer has interesting documents about “better reading.” It’s a PowerPoint presentation accompanied by exercises in a PDF format. (No, I won’t discuss format here.)

I keep pointing students to those documents for a simple reason: Reimer isn’t advocating reading every word in sequence. His “skim then focus” advice might be the one piece which is harder to get through to people but it’s tremendously effective in academic contexts. It’s also one which is well-adapted to the kind of online reading I’m thinking about. And not necessarily that good for physical books. Sure, you can efficiently flip pages in a book. But skimming a text on paper is more likely to be about what stands out visually than about the structure of the text. Especially with book-length texts. The same advice holds with physical books, of course. After all, this kind of advice originally comes from that historical period which I might describe as the “heyday of books”: the late 20th Century. But I’d say that the kind of “better reading” Reimer describes is enhanced in the context of online textuality. Not just the “Read/Write Web” but Instant Messaging, email, forums, ICQ, wikis, hypertext, Gopher, even PowerPoint…

Much of this has to do with different models of human communication. The Shannon/Weaver crowd have a linear/directional model, based on information processing. Codec and modem. Something which, after Irvine’s Shadow Conversations, I tend to call “the football theory of communication.” This model might be the best-known one, especially among those who study in departments of communication along with other would-be journalists. Works well for a “broadcast” medium with mostly indirect interaction (books, television, radio, cinema, press conferences, etc.). Doesn’t work so well for the backchannel-heavy “smalltalk” stuff of most human communication actually going on in this world.

Some cognitivists (including Chomsky) have a schema-based model. Constructivists (from Piaget on) have an elaborate model based on knowledge. Several linguistic anthropologists (including yours truly but also Judith Irvine, Richard Bauman, and Dell Hymes) have a model which gives more than lip service to the notion of performance. And there’s a functional model of any human communication in Jakobson’s classic text on verbal communication. It’s a model which can sound as if it were linear/bidirectional but it’s much broader than this. His six “functions of verbal communication” do come from six elements of the communication process (channel, code, form, context, speaker, listener). But each of these elements embeds a complex reality and Jakobson’s model seems completely compatible with a holistic approach to human communication. In fact, Jakobson has had a tremendous impact on a large variety of people, including many key figures in linguistic anthropology along with Lévi-Strauss and, yes, even Chomsky.

(Sometimes, I wish more people knew about Jakobson. Oh, wait! Since Jakobson was living in the US, I need to americanize this statement: “Jakobson is the most underrated scholar ever.”)

All these models do (or, in my mind, should) integrate written communication. Yet scriptocentrism has often led us far away from “texts as communication” and into “text as an object.” Scriptocentrism works well with modernity. Going away from scriptocentrism is a way to accept our postmodern reality.

I Hate Books

I want books dead. For social reasons.

In a way, this is a followup to a discussion happening on Facebook after something I posted (available publicly on Twitter): “(Alexandre) wishes physical books a quick and painfree death. / aime la connaissance.”

As I expected, the reactions I received were from friends who are aghast: how dare I dismiss physical books? Have I no shame?

Apparently, no, not in this case.

And while I posted it as a quip, it’s the result of a rather long reflection. It’s not that I’m suddenly anti-books. It’s that I stopped buying several of the “pro-book” arguments a while ago.

Sure, sure. Books are the textbook case of technology which needs no improvement. eBooks can’t replace the experience of doing this or that with a book. But that’s what folkloristics defines as a functional shift. Like woven baskets which became objects of nostalgia, books are being maintained as the model for a very specific attitude toward knowledge construction based on monolithic authored texts vetted by gatekeepers and sold as access to information.

An important point, here, is that I’m not really thinking about fiction. I used to read two novel-length works a week (collections of short stories, plays…), for a period of about 10 years (ages 13 to 23). So, during that period, I probably read about 1,000 novels, ranging from Proust’s Recherche to Baricco’s Novecento and the five books of Rabelais’s Pantagruel series. This was after having read a fair amount of adolescent and young adult fiction. By today’s standards, I might be considered fairly well-read.

My life has changed a lot, since that time. I didn’t exactly stop reading fiction but my move through graduate school eventually shifted my reading time from fiction to academic texts. And I started writing more and more, online and offline.
During the same time, the Web had also been making me shift from pointed longform texts to copious amounts of shortform text. Much more polyvocal than what Bakhtin himself would have imagined.

(I’ve also been shifting from French to English, during that time. But that’s almost another story. Or it’s another part of the story which can remain in the backdrop without being addressed directly at this point. Ask, if you’re curious.)
The increase in my writing activity is, itself, a shift in the way I think, act, talk… and get feedback. See, the fact that I talk and write a lot, in a variety of circumstances, also means that I get a lot of people to play along. There’s still a risk of groupthink, in specific contexts, but one couldn’t say I keep getting things from the same perspective. In fact, the very Facebook conversation which sparked this blogpost is an example, as the people responding there come from relatively distant backgrounds (though there are similarities) and were not specifically queried about this. Their reactions have a very specific value, to me. Sure, it comes in the form of writing. But it’s giving me even more of something I used to find in writing: insight. The stuff you can’t get through Google.

So, back to books.

I dislike physical books. I wish I didn’t have to use them to read what I want to read. I do have a much easier time with short reading sessions on a computer screen than with what would turn into rather long periods of time holding a book in my hands.

Physical books just don’t do it for me, anymore. The printing press is, like, soooo 1454!

Yes, books had “a good run.” No, nothing replaces them. That’s not the way it works. Movies didn’t replace theater, television didn’t replace radio, automobiles didn’t replace horses, photographs didn’t replace paintings, books didn’t replace orality. In fact, the technology itself doesn’t do much by itself. But social contexts recontextualize tools. If we take technology to be the set of both tools and the knowledge surrounding it, technology mostly goes through social processes, since tool repertoires and corresponding knowledge mostly shift in social contexts, not in their mere existence. Gutenberg’s Bible was a “game-changer” for social, as well as technical reasons.

And I do insist on orality. Journalists and other “communication is transmission of information” followers of Shannon & Weaver tend to portray writing as the annihilation of orality. How long after the invention of writing did Homer transfer an oral tradition to the written medium? Didn’t Albert Lord show the vitality of the epic well into the 20th Century? Isn’t a lot of our knowledge constructed through oral means? Is Internet writing that far, conceptually, from orality? Is literacy a simple on/off switch?

Not only did I maintain an interest in orality through the most book-focused moments of my life but I probably care more about orality now than I ever did. So I simply cannot accept the idea that books have simply replaced the human voice. It doesn’t add up.

My guess is that books won’t simply disappear either. There should still be a use for “coffee table books” and books as gifts or collectables. Records haven’t disappeared completely and CDs still have a few more days in dedicated stores. But, in general, we’re moving away from the “support medium” for “content” and more toward actual knowledge management in socially significant contexts.

In these contexts, books often make little sense. Reading books is passive, while these contexts are (hyper-)/(inter-)active.

Case in point (and the reason I felt compelled to post that Facebook/Twitter quip)…
I hear about a “just released” French book during a Swiss podcast. Of course, it’s taken a while to write and publish. So, by the time I heard about it, there was no way to participate in the construction of knowledge which led to it. It was already “set in stone” as an “opus.”

Looked for it at diverse bookstores. One bookstore could eventually order it. It’d take weeks and be quite costly (for something I’m mostly curious about, not something I depend on for anything really important).

I eventually find it in the catalogue at BANQ. I reserve it. It wasn’t on the shelves, yet, so I had to wait until it was. It took from November to February. I eventually get a message that I have a couple of days to pick up my reservation but I wasn’t able to go. So it went back on the “just released” shelves. I had the full call number but books in that section aren’t in their call number sequence. I spent several minutes looking back and forth between eight shelves to eventually find out that there were four more shelves in the “humanities and social sciences” section. The book I was looking for was on one of those shelves.

So, I was able to borrow it.

Phew!

In the metro, I browse through it. Given my academic reflex, I look for the back matter first. No bibliography, no index, a ToC with rather obscure titles (at random: «Taylor toujours à l’œuvre»/”Taylor still at work,” which I’m assuming to be a reference to continuing taylorism). The book is written by two separate dudes but there’s no clear indication of who wrote what. There’s a preface (by somebody else) but no “acknowledgments” section, so it’s hard to see who’s in their network. Footnotes include full URLs to rather broad sites as well as “discussion with <an author’s name>.” The back cover starts off with references to French popular culture (including something about “RER D,” which would be difficult to search). Information about both authors fits in less than 40 words (including a list of publication titles).

The book itself is in fairly large print and weighs almost a pound (422g, to be exact) for 327 pages (including front and back matter). Each page seems to be about 50 characters per line, about 30 lines per page. So, about half a million characters or 3,500 tweets (including spaces). At 5+1 characters per word, about 80,000 words (I have a 7,500-word blogpost, written in an afternoon). At about 250 words per minute, about five hours of reading. This book is listed at 19€ (about 27CAD).
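
Just to show my work, here’s that back-of-the-envelope arithmetic as a minimal sketch; the page, line, and reading-speed figures are the rough assumptions stated above, nothing more:

```python
# Back-of-the-envelope estimate of the book's size and reading time,
# using the rough figures mentioned above.
pages = 327             # includes front and back matter
lines_per_page = 30     # rough count
chars_per_line = 50     # rough count
chars_per_word = 6      # 5 letters + 1 space
words_per_minute = 250  # a common silent-reading estimate

total_chars = pages * lines_per_page * chars_per_line  # ~490,000 characters
tweets = total_chars / 140                             # ~3,500 tweet-lengths
words = total_chars / chars_per_word                   # ~80,000 words
hours = words / words_per_minute / 60                  # ~5.5 hours of reading

print(f"{total_chars:,} chars ~ {tweets:,.0f} tweets ~ {words:,.0f} words ~ {hours:.1f} hours")
```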
There’s no direct way to do any “postprocessing” with the text: no speech synthesis for the visually impaired, no concordance analysis, no machine translation; even a simple search for occurrences of “Sarkozy” is impossible. Not to mention sharing quotes with students or annotating in an easy-to-retrieve fashion (à la Diigo).
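
To make the contrast concrete: with a digital version, that kind of postprocessing becomes trivial. A minimal sketch, assuming a hypothetical plain-text file of the book (the file name is made up):

```python
# Hypothetical: once the text exists as a plain-text file ("livre.txt" is a
# made-up name), counting every occurrence of a name becomes trivial.
with open("livre.txt", encoding="utf-8") as f:
    text = f.read().lower()

print(text.count("sarkozy"))
```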

Like any book, it’s impossible to read in the dark, and I actually have a hard time finding a spot where I can read with appropriate lighting.

Flipping through the book, I get the impression that there are some valuable things to spark discussions, but there’s also a whole lot of redundancy with frequent discussions on the topic (the Future of Journalism, or #FoJ, as a matter of fact). My guesstimate is that, out of 5 hours of reading, I’d get at most 20 pieces of insight that I’d have exactly no way to find elsewhere. Comparable books to which I listened as audiobooks, recently, had much less. In other words, I’d have at most 20 tweets worth of things to say from the book. Almost a 200:1 compression.
Direct discussion with the authors could produce much more insight. The radio interviews with these authors already contained a few insight hints, which predisposed me to look for more. But, so many months later, without the streams of thought which animated me at the time, I end up with something much less valuable than what I wanted to get, back in November.

Bottomline: Books aren’t necessarily “broken” as a tool. They just don’t fit my life, anymore.

Homeroasting and Coffee Geekness

I bought the i-Roast 2 homeroaster: I’m one happy (but crazy) coffee geek.

I’m a coffee geek. By which I mean that I have a geeky attitude to coffee. I’m passionate about the crafts and arts of coffee making, I seek coffee-related knowledge wherever I can find it, I can talk about coffee until people’s eyes glaze over (which happens more quickly than I’d guess possible), and I even dream about coffee gadgets. I’m not a typical gadget freak, as far as geek culture goes, but coffee is one area where I may invest in some gadgetry.

Perhaps my most visible acts of coffee geekery came in the form of updates I posted through diverse platforms about my home coffee brewing experiences. Did it from February to July. These posts contained cryptic details about diverse measurements, including water temperature and index of refraction. It probably contributed to people’s awareness of my coffee geek identity, which itself has been the source of fun things like a friend bringing me back coffee from Ethiopia.

But I digress, a bit. This is both about coffee geekness in general and about homeroasting in particular.

See, I bought myself this Hearthware i-Roast 2 dedicated homeroasting device. And I’m dreaming about coffee again.

Been homeroasting since December 2002, when I moved to Moncton, New Brunswick, and was lucky enough to get in touch with Terry Montague of Down East Coffee.

Though I had been wishing to homeroast for a while before that and had become an intense coffee-lover fifteen years prior to contacting him, Terry is the one who enabled me to start roasting green coffee beans at home. He procured me a popcorn popper, sourced me some quality green beans, and gave me some advice. And off I went.

Homeroasting is remarkably easy. And it makes a huge difference in one’s appreciation of coffee. People in the coffee industry, especially baristas and professional roasters, tend to talk about the “channel” going from the farmer to the “consumer.” In some ways, homeroasting gets the coffee-lover a few steps closer to the farmer, both by eliminating a few intermediaries in the channel and by making coffee into much less of a commodity. Once you’ve spent some time smelling the fumes emanated by different coffee varietals and looking carefully at individual beans, you can’t help but get a deeper appreciation for the farmer’s and even the picker’s work. When you roast 150g or less at a time, every coffee bean seems much more valuable. Further, as you experiment with different beans and roast profiles, you get to experience coffee in all of its splendour.

A popcorn popper may sound like a crude way to roast coffee. And it might be. Naysayers may be right in their appraisal of poppers as a coffee roasting method. You’re restricted in different ways and it seems impossible to produce exquisite coffee. But having roasted with a popper for seven years, I can say that my poppers gave me some of my most memorable coffee experiences. Including some of the most pleasant ones, like this organic Sumatra from Theta Ridge Coffee that I roasted in my campus apartment at IUSB and brewed using my beloved Brikka.

Over the years, I’ve roasted a large variety of coffee beans. I typically buy a pound each of three or four varietals and experiment with them for a while.

Mostly because I’ve been moving around quite a bit, I’ve been buying green coffee beans from a rather large variety of places. I try to buy them locally, as much as possible (those beans have travelled far enough and I’ve had enough problems with courier companies). But I did participate in a few mail orders or got beans shipped to me for some reason or another. Sourcing green coffee beans has almost been part of my routine in those different places where I’ve been living since 2002: Moncton, Montreal, Fredericton, South Bend, Northampton, Brockton, Cambridge, and Austin. Off the top of my head, I’ve sourced beans from:

  1. Down East
  2. Toi, moi & café
  3. Brûlerie Saint-Denis
  4. Brûlerie des quatre vents
  5. Terra
  6. Theta Ridge
  7. Dean’s Beans
  8. Green Beanery
  9. Cuvée
  10. Fair Bean
  11. Sweet Maria’s
  12. Evergreen Coffee
  13. Mon café vert
  14. Café-Vrac
  15. Roastmasters
  16. Santropol

And probably a few other places, including this one place in Ethiopia where my friend Erin bought some.

So, over the years, I got beans from a rather large array of places and from a wide range of regional varietals.

I rapidly started blending freshly-roasted beans. Typically, I would start a blend by roasting three batches in a row. I would taste some as “single origin” (coffee made from a single bean varietal, usually from the same farm or estate), shortly after roasting. But, typically, I would mix my batches of freshly roasted coffee to produce a main blend. I would then add fresh batches after a few days to fine-tune the blend to satisfy my needs and enhance my “palate” (my ability to pick up different flavours and aromas).

Once the quantity of green beans in a particular bag would fall below an amount I can reasonably roast as a full batch (minimum around 100g), I would put those green beans in a pre-roast blend, typically in a specially-marked ziplock bag. Roasting this blend would usually be a way for me to add some complexity to my roasted blends.

And complexity I got. Lots of diverse flavours and aromas. Different things to “write home about.”

But I was obviously limited in what I could do with my poppers. The only real controls that I had in homeroasting, apart from blending, consisted in the bean quantity and roasting time. Ambient temperature was clearly a factor, but not one over which I was able to exercise much control. Especially since I frequently ended up roasting outside, so as not to inconvenience people with fumes, noise, and chaff. The few homeroast batches which didn’t work probably failed because of low ambient temperature.

One reason I stuck with poppers for so long was that I had heard that dedicated roasters weren’t that durable. I’ve probably used three or four different hot air popcorn poppers, over the years. Eventually, they just stop working, when you use them for coffee beans. As I’d buy them at garage sales and Salvation Army stores for 3-4$, replacing them didn’t feel like such a financially difficult thing to do, though finding them could occasionally be a challenge. Money was also an issue. Though homeroasting was important for me, I wasn’t ready to pay around 200$ for an entry-level dedicated roaster. I was thinking about saving money for a Behmor 1600, which offers several advantages over other roasters. But I finally gave in and bought my i-Roast as a kind of holiday gift to myself.

One broad reason is that my financial situation has improved since I started a kind of partial professional reorientation (PPR). I have a blogpost in mind about this PPR, and I’ll probably write it soon. But this post isn’t about my PPR.

Although, the series of events which led to my purchase does relate to my PPR, somehow.

See, the beans I (indirectly) got from Roastmasters came from a friend who bought a Behmor to roast cocoa beans. The green coffee beans came with the roaster but my friend didn’t want to roast coffee in his brand new Behmor, to avoid the risk of coffee oils and flavours getting into his chocolate. My friend asked me to roast some of these beans for his housemates (he’s not that intensely into coffee, himself). When I went to drop some homeroasted coffee by the Station C co-working space where he spends some of his time, my friend was discussing a project with Duncan Moore, whom I had met a few times but with whom I had had few interactions. The three of us had what we considered a very fruitful yet very short conversation. Later on, I got to do a small but fun project with Duncan. And I decided to invest that money into coffee.

A homeroaster seemed like the most appropriate investment. The Behmor was still out of reach but the i-Roast seemed like a reasonable purchase. Especially if I could buy it used.

But I was also thinking about buying it new, as long as I could get it quickly. It took me several years to make a decision about this purchase but, once I made it, I wanted something as close to “instant gratification” as possible. In some ways, the i-Roast was my equivalent to Little Mrs Sommers‘s “pair of silk stockings.”

At the time, Mon café vert seemed like the only place where I could buy a new i-Roast. I tried several times to reach them, to no avail. As I was in the Mile-End when I decided to make that purchase, I went to Caffè in Gamba, both to use the WiFi signal and to check if, by any chance, they might not have started selling roasters. They didn’t, of course; homeroasting isn’t mainstream enough. But, as I was there, I saw the Hario Ceramic Coffee Mill Skerton, a “hand-cranked” coffee grinder about which I had read some rather positive reviews.

For the past few years, I had been using a Bodum Antigua conical burr electric coffee grinder. This grinder was doing the job, but maybe because of “wear and tear,” it started taking a lot longer to grind a small amount of coffee. The grind took so long, at some points, that the grounds were warm to the touch and it seemed like the grinder’s motor was itself heating.

So I started dreaming about the Baratza Vario, a kind of prosumer electric grinder which seemed like the ideal machine for someone who uses diverse coffee making methods. The Vario is rather expensive and seemed like overkill, for my current coffee setup. But I was lusting over it and, yes, dreaming about it.

One day, maybe, I’ll be able to afford a Vario.

In the meantime, and more reasonably, I had been thinking about “Turkish-style mills.” A friend lent me a box-type manual mill at some point and I did find it produced a nice grind, but it wasn’t that convenient for me, partly because the coffee drops into a small drawer which rapidly gets full. A handmill seemed somehow more convenient and there are some generic models which are sold in different parts of the World, especially in the Arab World. So I got the impression that I might be able to find handmills locally and started looking for them all over the place, enquiring at diverse stores and asking friends who have used those mills in the past. Of course, they can be purchased online. But they end up being relatively expensive and my manual experience wasn’t so positive as to convince me to spend so much money on one.

The Skerton was another story. It was much more convenient than a box-type manual mill. And, at Gamba, it was inexpensive enough for me to purchase it on the spot. I don’t tend to do this very often so I did feel strange about such an impulse purchase. But I certainly don’t regret it.

Especially since it complements my other purchases.

So, on to the i-Roast.

Over the years, I had been looking for the i-Roast and Behmor at most of the obvious sites where one might buy used devices like these. eBay, Craig’s List, Kijiji… As a matter of fact, I had seen an i-Roast on one of these, but I was still hesitating. Not exactly sure why, but it probably had to do with the fact that these homeroasters aren’t necessarily that durable and I couldn’t see how old this particular i-Roast was.

I eventually called to find out, after making my decision to get an i-Roast. Turns out that it’s still under warranty, is in great condition, and was being sold by a very interesting (and clearly trustworthy) alto singer who happens to sing with a friend of mine who is also a local beer homebrewer. The same day I bought the roaster, I went to the cocoa-roasting friend’s place and saw a Behmor for the first time. And I tasted some really nice homemade chocolate. And met other interesting people, including a couple that I saw again while taking the bus after purchasing the roaster.

The series of coincidences in that whole situation left me with a sense of awe. Not out of some strange superstition or other folk belief. But different things are all neatly packaged in a way that most of my life isn’t. Nothing weird about this. The packaging is easy to explain and mostly comes from my own perception. But the effect is still there: it all fits.

And the i-Roast 2 itself fits, too.

It’s clearly not the ultimate coffee geek’s ideal roaster. But I get the impression it could become so. In fact, one reason I hesitated to buy the i-Roast 2 is that I was wondering if Hearthware might be coming out with the i-Roast 3, in the not-so-distant future.

I’m guessing that Hearthware might be getting ready to release a new roaster. I’m using unreliable information, but it’s still an educated guess. So, apparently…

I could just imagine what the i-Roast 3 might be. As I’m prone to do, I have a number of crazy ideas.

One “killer feature” actually relates both to the differences between the i-Roast and i-Roast 2 as well as to the geek factor behind homeroasting: roast profiles as computer files. Yes, I know, it sounds crazy. And, somehow, it’s quite unlikely that Hearthware would add such a feature on an entry-level machine. But I seriously think it’d make the roaster much closer to a roasting geek’s ultimate machine.

For one thing, programming a roast profile on the i-Roast is notoriously awkward. Sure, you get used to it. But it’s clearly suboptimal. And one major improvement of the i-Roast 2 over the original i-Roast is that the original version didn’t maintain profiles if you unplugged it. The next step, in my mind, would be to have some way to transfer a profile from a computer to the roaster, say via a slot for SD cards or even a USB port.

What this would open isn’t only the convenience of saving profiles, but actually a way to share them with fellow homeroasters. Since a lot in geek culture has to do with sharing information, a neat effect could come out of shareable roast profiles. In fact, when I looked for example roast profiles, I found forum threads, guides, and incredibly elaborate experiments. Eventually, it might be possible to exchange roasting profiles relating to coffee beans from the same shipment and compare roasting. Given the well-known effects of getting a group of people using online tools to share information, this could greatly improve the state of homeroasting and even make it break out of the very small niche in which it currently sits.
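
Just to make the daydream concrete, here’s a minimal sketch of what a shareable roast profile file could look like. The JSON layout and the values are entirely made up (loosely inspired by the i-Roast’s stage-based programming), not any actual Hearthware format:

```python
import json

# Hypothetical, shareable roast profile: a few metadata fields plus a list of
# (temperature, duration) stages, loosely mirroring the i-Roast's stage-based
# programming. The layout and values are invented for illustration.
profile = {
    "roaster": "i-Roast 2",
    "beans": "Theta Ridge organic Sumatra",
    "batch_grams": 140,
    "voltage": 120,  # worth recording when comparing roasts across regions
    "stages": [
        {"temp_f": 350, "minutes": 3.0},   # drying phase
        {"temp_f": 400, "minutes": 4.0},   # ramp toward first crack
        {"temp_f": 430, "minutes": 2.5},   # development
    ],
}

# A file like this could be posted to a forum or wiki and loaded by others.
with open("sumatra_profile.json", "w") as f:
    json.dump(profile, f, indent=2)
```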

Of course, there are many problems with that approach, including things as trivial as voltage differences as well as bigger issues such as noise levels.

But I’m still dreaming about such things.

In fact, I go a few steps further. A roaster which could somehow connect to a computer might also be used to track data about temperature and voltage. In my own experiments with the i-Roast 2, I’ve been logging temperatures at 15-second intervals, along with information about roast profile, quantity of beans, etc. It may sound extreme but it has already helped me achieve a result I was aiming for. And it’d be precisely the kind of information I would like to share with other homeroasters, eventually building a community of practice.
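
Concretely, that log amounts to little more than a timestamped table, one row per 15-second reading. A minimal sketch of that kind of helper, with placeholder readings and made-up column choices:

```python
import csv
import time

# Hypothetical logging helper: append manually-read temperatures to a CSV,
# one row per reading, tagged with the roast's basic parameters.
def log_reading(path, profile_name, batch_grams, temp_f):
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [time.strftime("%H:%M:%S"), profile_name, batch_grams, temp_f]
        )

# Example: three placeholder readings taken 15 seconds apart during a roast.
for temp_f in (180, 240, 305):
    log_reading("roast_log.csv", "sumatra_profile", 140, temp_f)
    time.sleep(15)
```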

Nothing but geekness, of course. Shall the geek inherit the Earth?

Actively Reading: “Teach Naked” sans PowerPoint

Diigo comments about a CHE piece on moving lectures out of the classroom.

Some Diigo comments on a Chronicle piece on moving lectures out of the classroom. (Or, if you ask the piece’s author and some commenters, on PowerPoint as a source of boredom.)

I’d like to transform some of my own comments in a standalone blog entry, especially given the discussions Pamthropologist and I have been having through comments on her blog and mine. (And I just noticed Pamthropologist had written her own blogpost about this piece…) As I’m preparing for the Fall semester, I tend to think a lot about learning and teaching but I also get a bit less time.

Semi-disclaimer: John Bentley, instructional developer and programme coordinator at Concordia’s CTLS pointed me to this piece. John used to work for the Open University and the BBC. Together, John and I are currently developing a series of workshops on the use of online tools in learning and teaching. We’ve been discussing numerous dimensions of the connection between learning, teaching, and online tools. Our current focus is on creating communities of learners. One thing that I find especially neat about this collaboration is that our perspectives and spheres of expertise are quite different. Makes for interesting and thoughtful discussions.

‘Teach Naked’ Effort Strips Computers From Classrooms – Technology – The Chronicle of Higher Education

  • Not to be too snarky but… I can’t help but feel this is typical journalism. Take a complex issue, get a diverse array of comments on it, boil it down to an overly simplistic point about some polarizing question (PPT: is it evil?). Tadaa! You got an article and you’ve discouraged critical thinking. Sorry. I’m bad. I really shouldn’t go there. But I guess I’m disappointed in myself. When I first watched the video interview, I was reacting fairly strongly against Bowen. After reading (very actively!) the whole piece, I now realize that Jeff Young is the one who set the whole thing up. The problem with this is that I should know better. Right? Well, ok, I wasn’t that adamantly opposed to Bowen. I didn’t shout at my computer screen or anything. But watching the video interview again, after reading the piece, I notice that I interpret it as a much more open discussion than the setup made it sound like. In other words, I went from thinking that Bowen was imposing a radical view on members of his faculty to hearing Bowen proposing ideas about ways to cope with social changes surrounding university education. The statement about most on-campus lectures being bad is rather bold, but it’s nothing we haven’t heard and it’s a reasonable comment to make in such a context. The stronger statement against PPT is actually weakened by Bowen himself in two ways: he explicitly talks about using PPT online and he frames his comment in comparison with podcasts. It then sounds like his problem isn’t with PPT itself. It’s with the use of PPT in the classroom by comparison to both podcasts and PPTs online. He may be wrong about the relative merits of podcasts, online “presentations,” and classroom lectures using PPT. But his opinion is much less radical than what I originally thought. Still, there’s room for much broader discussion of what classroom lectures and PPT presentations imply in teaching. Young’s piece and several Diigo comments on it focus on the value of PPT either in the abstract or through appropriate use. But there’s a lot more ground to cover, including such apparently simple issues as the effort needed to create compelling “presentation content” or students’ (and future employers’) expectations about PPT presentations.
  • Mr. Bowen wants to discourage professors from using PowerPoint, because they often lean on the slide-display program as a crutch rather than using it as a creative tool.
    • damn you got there first! comment by dean groom
    • I think the more important point that’s being made by the article – is something that many of us in the edtech world realised very quickly – that being able to teach well is a prerequisite to being able to effectively and creatively engage technology to help others learn… Powerpoint is probably the most obvious target because of its ubiquity – but I suspect that there will also be a backlash when the masses start adopting other technologies… they’ll be misused just as effectively as PPT is. When we can assume that all university lecturers/tutors are effective teachers then the argument will be moot… until then we’ll continue to see death by powerpoint and powerpointlessness… I’m a drama teacher and love the idea of active rooms filled with proactive engaged learners… and if we have proactive engaged learners we can more effectively deploy technology in the mix… The world of teaching and learning is far from perfect and expectations seem to be geared towards a paradigm that says: “professors should tell me every last thing I need to know in order to get good grades and if students sat still and shut up long enough they might just learn something useful.” I even had one “lecturer” recently tell me “I’m a subject specialist, why do I need to know about pedagogy?” – sadly he was serious. comment by Kim FLINTOFF
    • On the subject specialist uninterested in pedagogy… It’s not an uncommon perspective, in university teaching. In fact, it might be more common among French-speakers, as most of those I’ve heard say something like this were French-speakers. I reacted quite negatively when I first heard some statement about university teachers not needing pedagogy. Don’t they care about learning? But… Isn’t there a point to be made about “non-pedagogy?” Not trying to be contrarian, here. Not playing devil’s advocate. Nor am I going on the kind of “anti-anti” PoMo mode which seems not to fit too well in English-speaking communities. I’m just thinking about teacher-less learning. And a relativist’s attitude to not judge before I know more. After all, can we safely assume that courses given by someone with such a reluctant attitude to learning pedagogy are inherently bad? There are even some people out there who take constructivism and constructionism to such an extreme that they’d say teachers aren’t needed. To an extent, the OLPC project has been going in that direction. “Students will teach themselves. We don’t need to train teachers or to engage with them in building this project.” There’s also a lot of discussion about learning outside of formal institutions. Including “on-the-job training” but also all sorts of learning strategies which don’t rely on the teacher/student (mentee, apprentice, pupil…) hierarchy. For instance, actual learning occurs in a large set of online activities. Enthusiastic people learn about things they’re passionate about by reading about the subject, participating in online discussions, presenting their work for feedback, etc. Oftentimes, there is a hierarchy in terms of prestige, but it’s mostly negotiated through actions and not set in advance. More like “achieved status” than “ascribed status” (to use a convenient distinction from SOC101 courses). As this kind of training not infrequently leads to interesting careers, we’d be remiss to ignore the trend. Speaking of trends… It’s quite clear that many universities tend toward a more consumer-based approach. Students register and pay tuition to get “credentials” (good grades and impressive degrees). The notion that they might be there to do the actual learning is going by the wayside. In some professional contexts, people are quite explicit about how little they learnt in classrooms. It makes for difficult teaching contexts (especially at prestigious universities in the US), but it’s also something with which people learn to cope. My personal attitude is that “learning happens despite teachers.” I still think teachers make a difference, that we should learn about learners and learning, that pedagogy matters a whole lot. In fact, I’m passionate about pedagogy and I do what I can to improve my teaching. Yet the bottomline is: do people learn? If they do, does it matter what pedagogical training the teacher has? This isn’t a rhetorical question. comment by Alexandre Enkerli
  • A study published in the April issue of British Educational Research Journal
  • PowerPoint was one of the dullest methods they saw.
    • Can somebody post links to especially good PowerPoint files? comment by Bill Chapman
    • I don’t think this is really about PPT, but more about blind use of technology. It’s not the software to blame but the user. Also if you’re looking for great PPT examples, check out slideshare.net comment by Dean Shareski
    • Looking forward to reading what their criteria are for boredom. And the exact justification they give for lectures needing not to be boring. Or if they discuss the broad implications of lecturing, as opposed to the many other teaching methods that we use. Now, to be honest, I do use PPT in class. In fact, my PPT slides are the very example of what many people would consider boring: text outlines transformed into bullet points. Usually black on white, without images. But, overall, students seem to find me engaging. In student evaluations, I do get the occasional comment about the course being boring, but that’s also about the book and the nature of what we discuss. I upload these PPT files to Slideshare before going to class. In seminars, I use the PPT file to outline some topics, themes, and questions brought up by students and I upload the updated file after class. The PPT files on Slideshare are embedded into Moodle and serve as “course notes,” in conjunction with the audio recordings from the class meetings. These slides may include material which wasn’t covered in class. During “lecture,” I often spend extended periods of time discussing things with the class as a whole, leaving a slide up as a reminder of the general topic. Going from a bullet point to an extended discussion has the benefit of providing context for the discussion. When I started teaching, several students were saying that I’m “disorganized.” I still get a few comments like that but they’re much less frequent. And I still go on tangents, based on interactions with the group. Once in a while, I refrain from using PPT altogether. Which can lead to interesting challenges, in part because of student expectations and the fact that the screen becomes an indicator that “teaching is going on.” Perhaps a more important point: I try to lecture as little as possible. My upper-level courses are rapidly transformed into seminars. Even in large classes, the last class meetings of the semester involve just a few minutes of lecturing. This may all sound like a justification for my teaching method. But it’s also a reaction to the frequent discussions about PPT as evil. I do hate PPT, but I still use it. If only Google Wave could be released soon, we could use it to replace PPT. Wikis and microblogging tools are good and well, but they’re not as efficient in terms of real-time collaboration on complex material. comment by Alexandre Enkerli
  • seminars, practical sessions, and group discussions
  • In other words, tech-free classrooms were the most engaging.
    • Does it follow so directly? It’s quite easy to integrate technology with “seminars, practical sessions, and group discussions.” comment by Alexandre Enkerli
  • better than many older classroom technologies, like slate chalkboards or overhead transparencies
    • Which seems to support a form of technological determinism or, at least, a notion of a somewhat consistent improvement in the use of tools, if not in the tools themselves. comment by Alexandre Enkerli
  • But technology has hardly revolutionized the classroom experience for most college students, despite millions of dollars in investment and early predictions that going digital would force professors to rethink their lectures and would herald a pedagogical renaissance.
    • If so, then it’s only because profs aren’t bringing social technologies into their classrooms. Does the author of this article understand what’s current in ed tech? comment by Shelly Blake-Plock
    • the problem here is that in higher education, student satisfaction drives a service mentality – and students WANT summarised PPTs and they want PODCASTS. Spoooon feeeeeed me – for I am paying. comment by dean groom
    • A rather broad statement which might be difficult to support with evidence. If we look at “classroom experience” in different contexts, we do notice large differences. Not necessarily in a positive sense. Technology is an integral part of all sorts of changes happening in, around, and away from the classroom. It would be quite different if that sentence said: “But institutional programs based on the adoption of specific tools in the classroom have hardly revolutionized…” It’s still early to assess the effectiveness of these programs, especially if we think about lifelong learning and about ongoing social changes related to technology use. But the statement would make more sense if it were more directly tied to specific programs instead of being a blanket critique of “technology” (left undefined). comment by Alexandre Enkerli
  • dream of shaking up college instruction
    • One of the most interesting parts of the interview with Bowen has to do with the notion that this isn’t, in fact, about following a dream. It’s about remaining relevant in a changing world. There’s a lot about Bowen’s perspective which sounds quite strange, to me. But the notion that universities should “wake up and smell the coffee” is something I wish were the object of more discussion in academic circles. comment by Alexandre Enkerli
  • Here’s the kicker, though: The biggest resistance to Mr. Bowen’s ideas has come from students, some of whom have groused about taking a more active role during those 50-minute class periods.
    • Great points, here. Let’s wish more students were involved in this conversation. It’s not just “about” them. One thing we should probably not forget about student populations is that they’re diverse. Chances are, some students in Meadows are delighted by the discussion focus. Others may be puzzled. It’s likely an adaptation for most of them. And it doesn’t sound like they were ever consulted about those changes. comment by Alexandre Enkerli
  • lecture model is pretty comfortable
    • And, though many of us are quick to criticize it, it’s difficult to avoid in the current systems of formal education in which we work. comment by Alexandre Enkerli
  • cool gadgets
    • The easiest way to dismiss the social role of technology is to call tools “gadgets.” But are these tools really just gadgets? In fact, some tools which are put to good use really aren’t that cool or even new. Are we discussing them enough? Are we aware of how they fit in the grand scheme of things? An obvious example would be cellphones. Some administrators and teachers perceive them as a nuisance. Rather few people talk about educational opportunities with cellphones, even though they already are used by people in different parts of the World to empower themselves and to learn. Negroponte has explicitly dismissed the educational potential of cellphones but the World isn’t waiting for approval from designers. comment by Alexandre Enkerli
  • seasoned performer,
    • There’s a larger point to be made about performance in teaching. Including through a reference to Dick Bauman’s “Verbal Art as Performance” or other dimensions of Performance Theory. There’s also a more “mundane” point about a kind of conflict in universities between academic material and performance. In French-speaking universities, at least, it’s not uncommon to hear teachers talk about the necessity to be a “performer” as something of a distraction in teaching. Are teachers in front of the class to entertain students or is the classroom an environment in which to think and learn about difficult concepts? The consumer approach to universities, pushed in part by administrators who run universities like businesses, tends to emphasize the “entertainment paradigm,” hence the whole “boredom” issue. Having said all of this, Bowen’s own attitude goes beyond this simplistic “entertainment paradigm.” In fact, it sounds like he’s specifically not advocating for lectures to become a series of TEDtalks. Judging from the interview, it sounds like he might say that TEDtalk-style presentations should be put online and classroom time should be devoted to analyzing those presentations. I do consider myself a performer, as I’ve been playing saxophone in a rather broad range of circumstances, from outdoor stages at festivals to concert halls. And my experience as a performer does influence the way I teach large classes. At the same time, it probably makes more salient the distinction between teaching and performing. comment by Alexandre Enkerli
  • The goateed administrator sported a suit jacket over a dark T-shirt
    • Though I’d be the first one to say that context is key, I fail to see what Bowen’s clothes contribute to the discussion. comment by Alexandre Enkerli
  • philosophical argument about the best way to engage students, he grounded it
  • information delivery common in today’s classroom lectures should be recorded and delivered to students as podcasts or online videos before class sessions
    • Fully agreed. Especially if we throw other things in the mix such as journal articles and collaboratively-created learning material. comment by Alexandre Enkerli
  • short online multiple-choice tests.
    • I don’t think he’s using the MC tests with an assessment focus but rather an engagement focus – not necessarily the most sophisticated, but done playfully and creatively it can be a good first step to getting reluctant students to engage in the first instance… comment by Kim FLINTOFF
    • I would also “defend” the use of MCTs in this context. Especially if the stakes are relatively low, the questions are well-crafted, and students do end up engaging. Like PPT, MCTs have some advantages, including because of student expectations. But, of course, it’s rather funny to hear Bowen talk about shaking things up and find out that he uses such tools. Still, the fact that these tests are online (and, one would think, taken outside of class time) goes well with Bowen’s main point about class time vs. tech-enabled work outside of class. comment by Alexandre Enkerli
  • Introduce issues of debate within the discipline and get the students to weigh in based on the knowledge they have from those lecture podcasts, Mr. Bowen says.
    • This wouldn’t be too difficult to do in social sciences and there are scenarios in which it would work wonderfully for lab sciences (if we think of “debate” as something similar to “discussion” sections in scientific articles). At the same time, some people do react negatively to such approaches based not on discipline but on “responsibilities of the university.” Some people even talk about responsibilities toward students’ parents! comment by Alexandre Enkerli
  • But if the student believes they can contribute, they’re a whole lot more motivated to enter the discourse, and to enter the discipline.
    • Sounds a bit like some of the “higher” positions in William Perry’s scheme. comment by Alexandre Enkerli
  • don’t be boring
    • Is boredom induced exclusively by the teacher? Can a student bored during a class meeting still be motivated and engaged in the material at another point? Should we apply the same principle to the readings we assign? Is there a way to efficiently assess the “boredom factor” of an academic article? How can we convince academic publishers that fighting boredom isn’t necessarily done through the addition of pretty pictures? comment by Alexandre Enkerli
  • you need a Ph.D. to figure it out
    • While I agree that these panels are difficult to use and could afford a redesign, the joke about needing a PhD sounds a bit strange in context. comment by Alexandre Enkerli
  • plug in their laptops
    • There’s something of a more general move toward getting people to use their own computers in the workplace. In fact, classroom computers are often so restricted as to be quite cumbersome to use in teaching. comment by Alexandre Enkerli
  • allow students to work in groups more easily
    • Not a bad idea. A good number of classrooms are structured in a way that makes it very hard to get students to do group work. Of course, it’s possible to do group work in any setting, but it’s remarkable how seemingly trivial matters such as the type of desk used can be enough to discourage some teachers from using certain teaching strategies. comment by Alexandre Enkerli
  • The classroom computers were old and needed an upgrade when Mr. Bowen arrived, so ditching them instead saved money.
    • Getting into the core of the issue. The reason it’s so important to think about “new ways” to do things isn’t necessarily that “old ways” weren’t conducive to learning. It’s because there are increased pressures on the system, and some seem to perceive cost-cutting and competition from online learning as making the issue so pressing. comment by Alexandre Enkerli
  • eliminate one staff position for a technician
    • Sounds sad, especially since support staff is already undervalued. But, at the same time, it does sound like relatively rational cost-cutting. One would just wish that they replaced that position with, say, teaching support. comment by Alexandre Enkerli
  • gave every professor a laptop
    • Again, this is a rather common practice outside of universities. Knowing how several colleagues think, this may also function as a way to “keep them happy.” comment by Alexandre Enkerli
  • support so they could create their own podcasts and videos.
    • This is where the tech support position which was cut could be useful. Recording and podcasting aren’t difficult to set up or do. But it’s an area where support can mean more than answering questions about which button to press. In fact, podcasting projects are an ideal context for collaboration between tech, teach, and research. comment by Alexandre Enkerli
  • lugging their laptops to class,
    • It can be an issue, especially if there wasn’t a choice in the type of laptop which could be used. comment by Alexandre Enkerli
  • She’s made podcasts for her course on “Critical Scholarship in Communication” that feature interviews she recorded with experts in the field.
    • One cool thing about these podcasting projects is that people can build upon them, one semester after the other. Interviews with practitioners do help provide a multiplicity of voices. And, yes, getting students to produce their own content is often a good way to go, especially if the topic is somehow related to the activity. Getting students in applied communication to create material does sound appropriate. comment by Alexandre Enkerli
  • they come in actually much more informed
    • Sounds effective. Especially since Bowen’s approach seems to be oriented toward pre-class preparation. comment by Alexandre Enkerli
  • if they had been assigned a reading.
    • There’s a lot to be said about this. One reason this method may be more efficient than reading assignments could have to do with the particularities of written language, especially the very formal style of those texts we often assign as readings. Not that students shouldn’t read, of course! But there’s a case to be made for some readings being replaced by oral sources, especially those which have to do with people’s experience in a field. Reading primary source material, integrating some reference texts, and using oral material can all be part of an appropriate set of learning strategies. comment by Alexandre Enkerli
  • created podcast lectures
    • An advantage of such “lecturecasts,” “profcasts,” and “slidecasts” is that they’re relatively easy to build and can be tightly structured. It’s not the end-all of learning material, but it’s a better substitute for classroom lectures than one might think. Still, there’s room for improvement in the technology itself. For instance, it’d be nice to have Revver-style comments in the timeline. comment by Alexandre Enkerli
  • shows movie clips from his laptop
    • This one is slightly surprising because one would expect that these clips could easily be shown, online, ahead of class. It might have to do with the chilling effect of copyright regulation or Heffernan’s strategy of getting “fresh” feedback. There would have been good questions to ask Heffernan in preparation for this piece. comment by Alexandre Enkerli
  • “Strangely enough, the people who are most resistant to this model are the students, who are used to being spoon-fed material that is going to be quote unquote on the test,” says Mr. Heffernan. “Students have been socialized to view the educational process as essentially passive. The only way we’re going to stop that is by radically refiguring the classroom in precisely the way José wants to do it.”
    • This interpretation sounds a tiny bit lopsided. After all, aren’t there students who were already quite active and engaged in the “old system” who have expressed some resistance to the “new system?” Sounds likely to me. But maybe my students are different. One fascinating thing is the level of agreement, among teachers, about the necessity to have students who aren’t passive. I certainly share this opinion but there are teachers in this World who actually do prefer students who are somewhat passive and… “obedient.” comment by Alexandre Enkerli
  • The same sequence of events
    • That part is quite significant: Bowen was already a reformer and already had gone through the process. In this case, he sounds more like one of those CEOs who are hired to save a company from a difficult situation. He originally sounded more like someone who wanted to impose specific views about teaching. comment by Alexandre Enkerli
  • ‘I paid for a college education and you’re not going to lecture?'”
    • A fairly common reaction, in certain contexts. A combination of the infamous “sense of entitlement,” the “customer-based approach to universities,” and student expectations about the way university teaching is supposed to go. One version I’ve had in student evaluations is that the student felt like s/he was hearing too much from other students instead of from me. It did pain me, because of the disconnect between what I was trying to do and that student’s notion of what university courses are supposed to bring her/him. comment by Alexandre Enkerli
  • PowerPoint lecture
    • As a commenter to my blog was saying about lectures in general, some of us (myself included) have been building a straw man. We may have negative attitudes toward certain teaching strategies, including some lecturing techniques. But that shouldn’t prevent us from discussing a wide array of teaching and learning strategies. In this case, it’s remarkable that despite the radical nature of Bowen’s reform, we learn that there are teachers who record PPT-based presentations. It then sounds like the issue isn’t so much about using PPT as it is about what is done in the classroom as opposed to what is done during the rest of the week. Boring or not, PPT lectures, even some which aren’t directly meant to engage students, can still find their place in the “teaching toolbox.” A dogmatic anti-PPT stance (such as the one displayed by this journalist) is unlikely to foster conversations about tools and learning. Based on the fact that teachers are in fact doing PPT lectures to be used outside the classroom, one ends up seeing Bowen’s perspective as much more open than that of the Chronicle’s editorial staff. comment by Alexandre Enkerli
  • Sandi Mann, the British researcher who led the recent study on student attitudes toward teaching, argues that boredom has serious implications in an educational setting.
    • Unsurprising perspective. Wonder if it had any impact on Mann’s research results. Makes the research sound more oriented than one might hope. comment by Alexandre Enkerli
  • according to some studies
  • low-cost online alternatives to the traditional campus experience
    • This could have been the core issue discussed in an article about Bowen. Especially if we are to have a thoughtful conversation about the state of higher education in a changing context. Justification for high tuition fees, the latent functions of “college life,” the likely outcome of “competing with free,” the value of the complete learning experience as opposed to the value of information transmission… comment by Alexandre Enkerli
  • give away videos
    • This is the “competing with free” part, to which record companies have been oblivious for so long but which makes OCW appear like quite a forward-looking proposition. comment by Alexandre Enkerli
  • colleges must make sure their in-person teaching really is superior to those alternatives
    • It’s both a free-market argument, which goes so well with the customer-based approach to learning, and a plea to consider learning in a broader way than the mere transmission of information from authoritative source to passive mass. An old saw, for sure, but one which surprisingly hasn’t been heard by everyone. comment by Alexandre Enkerli
  • add value
    • This might be appropriate language to convince trustees. At some institutions, this might be more important than getting students’ or teachers’ approval. comment by Alexandre Enkerli
  • not being online
    • Although, they do have an online presence. The arguments used have more to do with blended learning than with exclusively face-to-face methods. comment by Alexandre Enkerli
  • might need to stay a low-tech zone to survive.
    • Rubbish; there is no reason to dumb down learning, and he obviously is not teaching 2500 students at one time. PPT is not the problem here, and this really is a collection of facile arguments that are, ironically, not substantiated. Lowering his overhead does not increase student learning – where’s the evidence? comment by dean groom
    • Come to think of it, it sounds like the argument was made more forcefully by Young than by Bowen himself. Bowen is certainly quite vocal but the “need… to survive” sounds a tad stronger than Bowen’s project. What’s funny is that the video made Bowen sound almost opinionated. The article makes Young sound like he has his own axe to grind. comment by Alexandre Enkerli

A Glocal Network of City-States?

Can we even think about a glocal network of city-states?

This one should probably be in a fictive mode, maybe even in a science-fiction genre. In fact, I’m reconnecting with literature after a long hiatus and now would be an interesting time to start writing fiction. But I’ll still start this as one of those “ramblings” blogposts that I tend to build or which tend to come to me.

The reason this should be fiction is that it might sound exceedingly naïve, especially for a social scientist. I tend to “throw ideas out there” and see what sticks to other ideas, but this broad idea about which I’ve been thinking for a while may sound rather crazy, quaint, unsophisticated.

See, while my academic background is rather solid, I don’t have formal training in political science. In fact, I’ve frequently avoided several academic activities related to political science as a discipline. Or to journalism as a discipline. Part of my reluctance to involve myself in academic activities related to political science relates to my reaction to journalism. The connection may not seem obvious to everyone but I see political science as a discipline in the same frame, and participating in the same worldview, as what I find problematic in journalism.

The simplest way to contextualize this connection is the (“modern”) notion of the “Nation-State.” That context involves me personally. As an anthropologist, as a post-modernist, as a “dual citizen” of two countries, as a folklorist, as a North American with a relatively salient European background, as a “citizen of the World,” and as a member of a community which has switched in part from a “nationalist” movement to other notions of statehood. Simply put: I sincerely think that the notion of a “Nation-State” is outdated and that it will (whether it should or not) give way to other social constructs.

A candidate to replace the conceptual apparatus of the “Nation-State” is both global and local, both post-modern and ancient: a glocal network of city-states (GNoCS).

Yes, I know, it sounds awkward. No, I’m not saying that things would necessarily be better in a post-national world. And I have no idea when this shift from the “nation-states” frame to a network of city-states may happen. But I sincerely think that it could happen. And that it could happen rather quickly.

Not that the shift would be so radical as to obliterate the notion of “nation-state” overnight. In this case, I’m closer to Foucault’s épistémè than to Kuhn’s paradigm. After all, while the “Democratic Nation-State” model is global, former social structures are still present around the Globe and the very notion of a “Nation-State” takes different values in different parts of the world. What I envision has less to do with the linear view of history than with a perspective in which different currents of social change interact with one another over time, evoking shifts in polarity for those who hold a binary perspective on social issues.

I started “working on” this post four months ago. I was just taking some notes in a blog draft, in view of a blogpost, instead of simply keeping general notes, as I tend to do. This post remained on my mind and I’ve been accumulating different threads which can connect to my basic idea. I now realize that this blogpost will be more of a placeholder for further thinking than a “milestone” in my reflection on the topic. My reluctance to publish this blog entry had as much to do with an idiosyncratic sense of prudence as with time-management or any other issue. In other words, I was wary of sticking my neck out. Which might explain why this post is so personal as compared to most of my posts in English.

As uninformed as I may seem of the minutiae of national era political science, I happen to think that there’s a lot of groupthink involved in the way several people describe political systems. For instance, there’s a strong tendency for certain people, journalists especially, to “count countries.” With relatively few exceptions (especially those which have to do with specific international institutions like the United Nations or the “G20”) the number of countries involved in an event only has superficial significance. Demographic discrepancies between these national entities, not to mention a certain degree of diversity in their social structures or even government apparatus, make “counting countries” appear quite misleading, especially when the issue has to do with, say, social dynamics or geography. It sounds at times like people have a vague “political map of the World” in their heads and that this image preempts other approaches to global diversity. This may sound like a defensive stance on my part, as I try to position myself as “perhaps crazy but not more than others are.” But the issue goes deeper. In fact, it seems that “countries” are so ingrained in some people’s minds and political borders are so obvious that local and regional issues are perceived as micro-versions of what happens at the “national level.” This image doesn’t seem so strange when we talk about partisan politics but it appears quite inappropriate when we talk about a broad range of other subjects, from epidemiology to climate change, from online communication to geology, from language to religion.

An initial spark in my thinking about several of these issues came during Beverly Stoeltje‘s interdisciplinary Ph.D. seminar on nationalism at Indiana University Bloomington, back in 2000. Not only was this seminar edifying on many levels, but it represented a kind of epiphany moment in my reflections on not only nationalism itself (with related issues of patriotism, colonialism, and citizenship) but on a range of social issues and changes.

My initial “realization” was on the significance of the shift from Groulx-style French-Canadian nationalism to what Lévesque called «souveraineté-association» (“sovereignty-association”) and which served as the basis for the Quebec sovereignty movement.

While this all connects to well-known issues in political science and while it may (again) sound exceedingly naïve, I mean it in a very specific way which, I think, many people who discuss Quebec’s political history may rarely visit. As with other shifts about which I think, I don’t envision the one from French-Canadian nationalism (FCN) to Quebec sovereignty movement (QSM) to be radical or complete. But it was significant and broad-reaching.

Regardless of Lévesque’s personal view on nationalism (a relatively recent television series on his life had it that he became anti-nationalist after a visit to concentration camps), the very idea that there may exist a social movement oriented toward sovereignty outside of the nationalist logic seems quite important to me personally. The fact that this movement may only be represented in partisan politics as nationalism complicates the issue and may explain a certain confusion in terms of the range of Quebec’s current social movements. In other words, the fact that anti-nationalists are consistently lumped together with nationalists in the public (and journalistic) eye makes it difficult to discuss post-nationalism in this part of the Globe.

But Quebec’s history is only central to my thinking because I was born in Montreal and grew up through the Quiet Revolution. My reflections on a post-national shift are hopefully broader than historical events in a tiny part of the Globe.

In fact, my initial attempt at drafting this blogpost came after I attended a talk by Satoshi Ikeda entitled The Global Financial Crisis and the End of Neoliberalism. (November 27, 2008, Concordia University, SGW H-1125-12; found thanks to Twistory). My main idea at this point was that part of the solution to global problems was local.

But I was also thinking about The Internet.

Contrary to what technological determinists tend to say, the ‘Net isn’t changing things as much as it is part of a broad set of changes. In other words, the global communication network we now know as the Internet is embedded in historical contexts, not the ultimate cause of History. At the risk of replacing technological determinism with social determinism, one might point out that the ‘Net existed (both technologically and institutionally) long before its use became widespread. Those of us who observed a large influx of people online during the early to mid-1990s might even think that social changes were more significant in making the ‘Net what it is today than any “immanent” feature of the network as it was in, say, 1991.

Still, my thinking about the ‘Net has to do with the post-national shift. The ‘Net won’t cause the shift to new social and political structures. But it’s likely to “play a part” in that shift, to be prominently placed as we move into a post-national reality.

There are a number of practical and legal issues with a wide range of online activities which make it clear that the ‘Net fits more in a global structure than in an “international” one. Examples I have in mind include issues of copyright, broadcast rights, “national content,” and access to information, not to mention the online setting for some grassroots movements and the notion of “Internet citizenry.” In all of these cases, “Globalization” expands much beyond trade and currency-based economy.

Then, there’s the notion of “glocalization.” Every time I use the term “glocal,” I point out how “ugly” it is. The term hasn’t gained any currency (AFAICT) but I keep thinking that the concept can generate something interesting. What I personally have in mind is a movement away from national structures into both a globally connected world and a more local significance. The whole “Think Local, Act Global” idea (which I mostly encountered as the motto “Think Global, Drink Local”). “Despite” the ‘Net, location still matters. But many people are also global-looking.

All of this is part of the setup for some of my reflections on a GNoCS. A kind of prelude/prologue. While my basic idea is very much a “pie in the sky,” I do have more precise notions about what the future may look like and the conditions in which some social changes might happen. At this point, I realize that these thoughts will be part of future blogposts, including some which might be closer to science-fiction than to this type of semi- (or pseudo-) scholarly rambling.

But I might still flesh out a few notes.

Demographically, cities may matter more now than ever as the majority of the Globe’s population is urban. At least, the continued urbanization trend may fit well with a city-focused post-national model.

Some metropolitan areas have become so large as to connect with one another, constituting a kind of urban continuum. Contrary to boundaries between “nation-states,” divisions between cities can be quite blurry. In fact, the same location can be connected to dispersed centres of activity and people living in the same place can participate in more than one local sphere. Rotterdam-Amsterdam, Tokyo-Kyoto, Boston-NYC…

Somewhat counterintuitively, urban areas tend to work relatively well as sources of solutions to problems in the natural environment. For instance, some mayors have taken a lead in terms of environmental initiatives, not waiting for their national governments. And such issues as public transportation represent core competencies for municipal governments.

While transborder political entities like the European Union (EU), the African Union (AU), and the North American Free-Trade Agreement (NAFTA) are enmeshed in the national logic, they fit well with notions of globalized decentralization. As the mayor of a small Swiss town was saying on the occasion of Switzerland’s official 700th anniversary, we can think about «l’Europe des régions» (“Europe of regions”), beyond national borders.

Speaking of Switzerland, the confederacy/confederation model fits rather well with a network structure, perhaps more than with the idea of a “nation-state.” It also seems to go well with some forms of participatory democracy (as opposed to representative democracy). Not that Switzerland or any other confederation/confederacy works as a participatory democracy. But these notions can help situate this GNoCS.

While relatively rare and unimportant “on the World Stage,” micro-states and micro-nations represent interesting cases in view of post-nationalist entities. For one thing, they may help dispel the belief that any political structure apart from the “nation-state” is a “reversal” to feudalism or even (Greek) Antiquity. The very existence of those entities which are “the exceptions to the rule” makes it possible to “think outside of the national box.”

Demographically at the opposite end of the spectrum from microstates and micronations, the notion of a China-India union (or even a collaboration between China, India, Brazil, and Russia) may sound crazy in the current state of national politics but it would go well with a restructuring of the Globe, especially if this “New World Order” goes beyond currency-based trade.

Speaking of currency, the notion of the International Monetary Fund having its own currency is quite striking as a sign of a major shift from the “nation-state” logic. Of course, the IMF is embedded in “national” structures, but it can shift the focus away from “individual countries.”

The very notion of “democracy” has been on many lips, over the years. Now may be the time to pay more than lip service to a notion of “Global Democracy,” which would transcend national boundaries (and give equal rights to all people across the Globe). Chances are that representative democracy may still dominate but a network structure connecting a large number of localized entities can also fit in other systems including participatory democracy, consensus culture, republicanism, and even the models of relatively egalitarian systems that some cultural anthropologists have been constructing over the years.

I still have all sorts of notes about examples and issues related to this notion of a GNoCS. But that will do for now.

Social Networks and Microblogging

Event-based microblogging and the social dimensions of online social networks.

Microblogging (Laconica, Twitter, etc.) is still a hot topic. For instance, during the past few episodes of This Week in Tech, comments were made about the preponderance of Twitter as a discussion theme: microblogging is so prominent on that show that some people complain that there’s too much talk about Twitter. Given the centrality of Leo Laporte’s podcast in geek culture (among Anglos, at least), such comments are significant.

The context for the latest comments about TWiT coverage of Twitter had to do with Twitter’s financials: during this financial crisis, Twitter is given funding without even asking for it. This may seem surprising at first, given that Twitter hasn’t publicized a business plan and doesn’t appear to be profitable at this time.

Along with social networking, microblogging is even discussed in mainstream media. For instance, Médialogues (a media-critique program on Swiss national radio) recently had a segment about both Facebook and Twitter. Just yesterday, Comedy Central’s The Daily Show with Jon Stewart made fun of compulsive twittering and mainstream media coverage of Twitter (original, Canadian access).

Clearly, microblogging is getting some mindshare.

What the future holds for microblogging is clearly uncertain. Anything can happen. My guess is that microblogging will remain important for a while (at least a few years) but that it will transform itself rather radically. Chances are that other platforms will have microblogging features (something Facebook can do with status updates and something Automattic has been trying to do with some WordPress themes). In these troubled times, Montreal startup Identi.ca received some funding to continue developing its open microblogging platform.  Jaiku, bought by Google last year, is going open source, which may be good news for microblogging in general. Twitter itself might maintain its “marketshare” or other players may take over. There’s already a large number of third-party tools and services making use of Twitter, from Mahalo Answers to Remember the Milk, Twistory to TweetDeck.

Together, these all point to the current importance of microblogging and the potential for further development in that sphere. None of this means that microblogging is “The Next Big Thing.” But it’s reasonable to expect that microblogging will continue to grow in use.

(For those who are trying to grok microblogging, Common Craft’s Twitter in Plain English video is among the best-known descriptions of Twitter and it seems like an efficient way to “get the idea.”)

One thing which is rarely mentioned about microblogging is the prominent social structure supporting it. Like “Social Networking Systems” (LinkedIn, Facebook, Ning, MySpace…), microblogging makes it possible for people to “connect” to one another (as contacts/acquaintances/friends). Like blogs, microblogging platforms make it possible to link to somebody else’s material and get notifications for some of these links (a bit like pings and trackbacks). Like blogrolls, microblogging systems allow for lists of “favourite authors.” Unlike Social Networking Systems but similar to blogrolls, microblogging systems allow for asymmetrical relations, unreciprocated links: if I like somebody’s microblogging updates, I can subscribe to those (by “following” that person) and publicly show my appreciation of that person’s work, regardless of whether or not this microblogger likes my own updates.

There’s something strangely powerful there because it taps the power of social networks while avoiding tricky issues of reciprocity, “confidentiality,” and “intimacy.”

From the end user’s perspective, microblogging contacts may be easier to establish than contacts through Facebook or Orkut. From a social science perspective, microblogging links seem to approximate some of the fluidity found in social networks, without adding much complexity in the description of the relationships. Subscribing to someone’s updates gives me the role of “follower” with regards to that person. Conversely, those I follow receive the role of “following” (“followee” would seem logical, given the common “-er”/”-ee” pattern). The following and follower roles are complementary but each is sufficient by itself as a useful social link.

Typically, a microblogging system like Twitter or Identi.ca qualifies two-way connections as “friendship” while one-way connections could be labelled as “fandom” (if Andrew follows Betty’s updates but Betty doesn’t follow Andrew’s, Andrew is perceived as one of Betty’s “fans”). Profiles on microblogging systems are relatively simple and public, allowing for low-involvement online “presence.” As long as updates are kept public, anybody can connect to anybody else without even needing an introduction. In fact, because microblogging systems send notifications to users when they get new followers (through email and/or SMS), subscribing to someone’s update is often akin to introducing yourself to that person. 

Reciprocating is the object of relatively intense social pressure. A microblogger whose follower:following ratio is far from 1:1 may be regarded as either a snob (follower:following much higher than 1:1) or as something of a microblogging failure (follower:following much lower than 1:1). As in any social context, perceived snobbery may be associated with sophistication but it also carries opprobrium. Perry Belcher made a video about what he calls “Twitter Snobs” and some French bloggers have elaborated on that concept. (Some are now claiming their right to be Twitter Snobs.) Low follower:following ratios can result from breach of etiquette (for instance, ostentatious self-promotion carried beyond the accepted limit) or even non-human status (many microblogging accounts are associated with “bots” producing automated content).
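
To make this asymmetry a bit more concrete, here is a minimal sketch in Python. It is only an illustration under my own assumptions: the class name, the “fan”/“friendship” labels, and the ratio logic are mine, not any actual platform’s API.

```python
# Minimal sketch of the asymmetrical "follow" model described above.
# All names and thresholds are illustrative, not taken from any real platform's API.

from collections import defaultdict

class FollowGraph:
    def __init__(self):
        self.following = defaultdict(set)  # user -> set of users this user follows

    def follow(self, follower, followee):
        # A one-way link: no confirmation from the followee is required.
        self.following[follower].add(followee)

    def relation(self, a, b):
        """Mutual links read as 'friendship'; one-way links read as 'fandom'."""
        a_follows_b = b in self.following[a]
        b_follows_a = a in self.following[b]
        if a_follows_b and b_follows_a:
            return "friendship"
        if a_follows_b:
            return f"{a} is one of {b}'s fans"
        if b_follows_a:
            return f"{b} is one of {a}'s fans"
        return "no connection"

    def ratio(self, user):
        """Follower:following ratio; far above 1 reads as 'snobbery', far below as struggling."""
        followers = sum(1 for other in self.following if user in self.following[other])
        following = len(self.following[user])
        return followers / following if following else float("inf")

g = FollowGraph()
g.follow("Andrew", "Betty")           # one-way: Andrew becomes one of Betty's "fans"
print(g.relation("Andrew", "Betty"))  # Andrew is one of Betty's fans
g.follow("Betty", "Andrew")           # reciprocation turns the link into "friendship"
print(g.relation("Andrew", "Betty"))  # friendship
print(round(g.ratio("Betty"), 2))     # 1.0: a balanced ratio, neither "snob" nor "failure"
```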

The result of the pressure for reciprocation is that contacts are reciprocated regardless of personal relations. Some users even set up ways to automatically follow everyone who follows them. Despite being tricky, these methods escape the personal connection issue. Contrary to Social Networking Systems (and despite the term “friend” used for reciprocated contacts), following someone on a microblogging service implies little in terms of friendship.

One reason I personally find this fascinating is that specifying personal connections has been an important part of the development of social networks online. For instance, long-defunct SixDegrees.com (one of the earliest Social Networking Systems to appear online) required users to specify the precise nature of their relationship to the users with whom they were connected. Details escape me but I distinctly remember that acquaintances, colleagues, and friends were distinguished. If I remember correctly, only one such personal connection was allowed for any pair of users and this connection had to be confirmed before the two users were linked through the system. Facebook’s method to account for personal connections is somewhat more sophisticated despite the fact that all contacts are labelled as “friends” regardless of the nature of the connection. The uniform use of the term “friend” has been decried by many public commentators of Facebook (including in the United States where “friend” is often applied to any person with whom one is simply on friendly terms).

In this context, the flexibility with which microblogging contacts are made merits consideration: by allowing unidirectional contacts, microblogging platforms may have solved a tricky social network problem. And while the strength of the connection between two microbloggers is left unacknowledged, there are several methods to assess it (for instance through replies and republished updates).

Social contacts are the very basis of social media. In this case, microblogging represents a step towards both simplified and complexified social contacts.

Which leads me to the theme which prompted me to start this blogpost: event-based microblogging.

I posted the following blog entry (in French) about event-based microblogging, back in November.

Microblogue d’événement

I haven’t received any direct feedback on it and the topic seems to have found little echo in the social media sphere.

During the last PodMtl meeting on February 18, I tried to throw my event-based microblogging idea in the ring. This generated a rather lengthy discussion between a friend and myself. (Because this friend happens to be relatively high-profile and I don’t want to put words in this friend’s mouth, I won’t mention this friend’s name.) This friend voiced several objections to my main idea and I got to think about this basic notion a bit further. At the risk of sounding exceedingly opinionated, I must say that my friend’s objections actually reinforced my sense that my “event microblog” idea makes a lot of sense.

The basic idea is quite simple: microblogging instances tied to specific events. There are technical issues in terms of hosting and such but I’m mostly thinking about associating microblogs and events.

What I had in mind during the PodMtl discussion has to do with grouping features, which are often requested by Twitter users (including by Perry Belcher who called out Twitter Snobs). And while I do insist on events as a basis for those instances (like groups), some of the same logic applies to specific interests. However, given the time-sensitivity of microblogging, I still think that events are more significant in this context than interests, however defined.

In the PodMtl discussion, I frequently referred to BarCamp-like events (in part because my friend and interlocutor had participated in a number of such events). The same concept applies to any event, including one which is just unfolding (say, assassination of Guinea-Bissau’s president or bombings in Mumbai).

Microblogging users are expected to think about “hashtags,” those textual labels preceded with the ‘#’ symbol which are meant to categorize microblogging updates. But hashtags are problematic on several levels.

  • They require preliminary agreement among multiple microbloggers, a tricky proposition in any social media. “Let’s use #Bissau09. Everybody agrees with that?” It can get ugly and, even if it doesn’t, the process is awkward (especially for new users).
  • Even if agreement has been reached, there might be discrepancies in the way hashtags are typed. “Was it #TwestivalMtl or #TwestivalMontreal, I forgot.”
  • In terms of language economy, it’s unsurprising that the same hashtag would be used for different things. Is “#pcmtl” about Podcamp Montreal, about personal computers in Montreal, about PCM Transcoding Library…?
  • Hashtags are frequently misunderstood by many microbloggers. Just this week, a tweep of mine (a “peep” on Twitter) asked about them after having been on Twitter for months.
  • While there are multiple ways to track hashtags (including through SMS, in some regions), there is no way to further specify the tracked updates (for instance, by user).
  • The distinction between a hashtag and a keyword is too subtle to be really useful. Twitter Search, for instance, lumps the two together.
  • Hashtags take time to type. Even if microbloggers aren’t necessarily typing frantically, the time taken to type all those hashtags seems counterproductive and may even distract microbloggers.
  • Repetitively typing the same string is a very specific kind of task which seems to go against the microblogging ethos, if not the cognitive processes associated with microblogging.
  • The number of characters in a hashtag decreases the amount of text in every update. When all you have is 140 characters at a time, the thirteen characters in “#TwestivalMtl” constitute almost 10% of your update (a rough calculation is sketched after this list).
  • If the same hashtag is used by a large number of people, the visual effect can be that this hashtag is actually dominating the microblogging stream. Since there currently isn’t a way to ignore updates containing a certain hashtag, this effect may even discourage people from using a microblogging service.
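
To make a couple of these points a bit more concrete, here is a rough sketch in Python. It is illustrative only: the regular expression is a naive assumption on my part, and real services parse tags according to their own rules.

```python
# Illustrative only: a naive hashtag extractor and the character-count overhead
# mentioned in the list above. Actual microblogging services parse tags differently.

import re

HASHTAG = re.compile(r"#(\w+)")

def hashtags(update):
    """Return the hashtags found in an update."""
    return HASHTAG.findall(update)

def hashtag_overhead(update, limit=140):
    """Fraction of the character budget spent on the hashtags themselves."""
    used = sum(len(tag) + 1 for tag in hashtags(update))  # +1 for the '#' sign
    return used / limit

update = "Great session on oral sources #TwestivalMtl"
print(hashtags(update))                    # ['TwestivalMtl']
print(round(hashtag_overhead(update), 3))  # 13 / 140 = 0.093, i.e. almost 10%
```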

There are multiple solutions to these issues, of course. Some of them are surely discussed among developers of microblogging systems. And my notion of event-specific microblogs isn’t geared toward solving these issues. But I do think separate instances make more sense than hashtags, especially in terms of specific events.

My friend’s objections to my event microblogging idea had something to do with visibility. It seems that this friend wants all updates to be visible, regardless of the context. While I don’t disagree with this, I would claim that it would still be useful to “opt out” of certain discussions when people we follow are involved. If I know that Sean is participating in a PHP conference and that most of his updates will be about PHP for a period of time, I would enjoy the possibility of hiding PHP-related updates for a specific period of time. The reason I talk about this specific case is simple: a friend of mine has manifested some frustration about the large number of updates made by participants in Podcamp Montreal (myself included). Partly in reaction to this, he stopped following me on Twitter and only resumed following me after Podcamp Montreal had ended. In this case, my friend could have hidden Podcamp Montreal updates and still have received other updates from the same microbloggers.
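
Here is a hedged sketch, in Python, of what such an “opt out” could look like. Everything in it is an assumption made for illustration: the Update type, its event field, the muting window, and the dates are all invented.

```python
# Hypothetical data model: the Update type, its fields, and the muting window
# are assumptions made for this sketch, not features of any existing service.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Update:
    author: str
    text: str
    event: Optional[str]   # event-specific instance this update belongs to, if any
    posted_at: datetime

def visible(update, muted_events, now):
    """Keep an update unless its event is muted for the current time window."""
    window = muted_events.get(update.event)
    if window is None:
        return True
    start, end = window
    return not (start <= now <= end)

# Mute one event's updates for its duration, without unfollowing anyone.
# (Placeholder dates, for illustration only.)
muted = {"PodcampMontreal": (datetime(2008, 9, 20), datetime(2008, 9, 22))}
u = Update("Alexandre", "Podcasting session starting now!",
           "PodcampMontreal", datetime(2008, 9, 21, 10, 0))
print(visible(u, muted, u.posted_at))  # False: hidden during the event, visible again afterwards
```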

To a certain extent, event-specific instances are a bit similar to “rooms” in MMORPG and other forms of real-time many-to-many text-based communication such as the nostalgia-inducing Internet Relay Chat. Despite Dave Winer’s strong claim to the contrary (and attempt at defining microblogging away from IRC), a microblogging instance could, in fact, act as a de facto chatroom when such a structure is needed, taking advantage of the work done in microblogging over the past year (which seems to have advanced more rapidly than work on chatrooms has, during the past fifteen years). Instead of setting up an IRC channel, a Web-based chatroom, or even a session on MSN Messenger, users could use their microblogging platform of choice and either decide to follow all updates related to a given event or simply not “opt out” of following those updates (depending on their preferences). Updates related to multiple events are visible simultaneously (which isn’t really the case with IRC or chatrooms) and there could be ways to make event-specific updates more prominent. In fact, there would be easy ways to keep real-time statistics of those updates and get a bird’s eye view of those conversations.

And there’s a point about event-specific microblogging which is likely to both displease “alpha geeks” and convince corporate users: updates about some events could be “protected” in the sense that they would not appear in the public stream in realtime. The simplest case for this could be a company-wide meeting during which backchannel is allowed and even expected “within the walls” of the event. The “nothing should leave this room” attitude seems contradictory to social media in general, but many cases can be made for “confidential microblogging.” Microblogged conversations can easily be archived and these archives could be made public at a later date. Event-specific microblogging allows for some control of the “permeability” of the boundaries surrounding the event. “But why would people use microblogging instead of simply talking to one another?” you ask. Several quick answers: participants aren’t in the same room, vocal communication is mostly single-channel, large groups of people are unlikely to communicate efficiently through oral means only, several things are more efficiently done through writing, written updates are easier to track and archive…

There are many other things I’d like to say about event-based microblogging but this post is already long. There’s one thing I want to explain, which connects back to the social network dimension of microblogging.

Events can be simplistically conceived as social contexts which bring people together. (Yes, duh!) Participants in a given event constitute a “community of experience” regardless of the personal connections between them. They may be strangers, enemies, relatives, acquaintances, friends, etc. But they all share something. “Participation,” in this case, can be relatively passive and the difference between key participants (say, volunteers and lecturers in a conference) and attendees is relatively moot, at a certain level of analysis. The key, here, is the set of connections between people at the event.

These connections are a very powerful component of social networks. We typically meet people through “events,” albeit informal ones. Some events are explicitly meant to connect people who have something in common. In some circles, “networking” refers to something like this. The temporal dimension of social connections is an important one. By analogy to philosophy of language, the “first meeting” (and the set of “first impressions”) constitute the “baptism” of the personal (or social) connection. In social media especially, the nature of social connections tends to be monovalent enough that this “baptism event” gains special significance.

The online construction of social networks relies on a finite number of dimensions, including personal characteristics described in a profile, indirect connections (FOAF), shared interests, textual content, geographical location, and participation in certain activities. Depending on a variety of personal factors, people may be quite inclusive or rather exclusive, based on those dimensions. “I follow back everyone who lives in Austin” or “Only people I have met in person can belong to my inner circle.” The sophistication with which online personal connections are negotiated, along such dimensions, is a thing of beauty. In view of this sophistication, tools used in social media seem relatively crude and underdeveloped.

Going back to the (un)conference concept, the usefulness of having access to a list of all participants in a given event seems quite obvious. In an open event like BarCamp, it could greatly facilitate the event’s logistics. In a closed event with paid access, it could be linked to registration (despite geek resistance, closed events serve a purpose; one could even imagine events where attendance is free but the microblogging backchannel incurs a cost). In some events, everybody would be visible to everybody else. In others, there could be a sort of ACL for diverse types of participants. In some cases, people could be allowed to “lurk” without being seen while in others radical transparency could be enforced. For public events with all participants visible, lists of participants could be archived and used for several purposes (such as assessing which sessions in a conference are more popular or “tracking” event regulars).
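
As a purely illustrative sketch of how such diverse visibility policies might be expressed (the role names and policy labels below are invented, not drawn from any existing system):

```python
# Invented roles and policies, just to make the "ACL for diverse types of
# participants" idea concrete; nothing here reflects an existing system.

VISIBILITY_POLICIES = {
    "radical_transparency": lambda viewer, target: True,
    "lurkers_hidden":       lambda viewer, target: target["role"] != "lurker",
    "speakers_only":        lambda viewer, target: target["role"] in ("speaker", "volunteer"),
}

def can_see(policy_name, viewer, target):
    """Can `viewer` see `target` in the event's participant list?"""
    return VISIBILITY_POLICIES[policy_name](viewer, target)

attendee = {"name": "A", "role": "attendee"}
lurker   = {"name": "B", "role": "lurker"}
print(can_see("radical_transparency", attendee, lurker))  # True
print(can_see("lurkers_hidden", attendee, lurker))        # False: lurkers stay invisible
```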

One reason I keep thinking about event-specific microblogging is that I occasionally use microblogging like others use business cards. In a geek crowd, I may ask for someone’s Twitter username in order to establish a connection with that person. Typically, I will start following that person on Twitter and find opportunities to communicate with that person later on. Given the possibility for one-way relationships, it establishes a social connection without requiring personal involvement. In fact, that person may easily ignore me without the danger of a face threat.

If there were event-specific instances from microblogging platforms, we could manage connections and profiles in a more sophisticated way. For instance, someone could use a barebones profile for contacts made during an impersonal event and a full-fledged profile for contacts made during a more “intimate” event. After noticing a friend using an event-specific business card with an event-specific email address, I got to think that this event microblogging idea might serve as a way to fill a social need.

 

More than most of my other blogposts, I expect comments on this one. Objections are obviously welcomed, especially if they’re made thoughtfully (like my PodMtl friend made them). Suggestions would be especially useful. Or even questions about diverse points that I haven’t addressed (several of which I can already think about).

So…

 

What do you think of this idea of event-based microblogging? Would you use a microblogging instance linked to an event, say at an unconference? Can you think of fun features an event-based microblogging instance could have? If you think about similar ideas you’ve seen proposed online, care to share some links?

 

Thanks in advance!

Back in Mac: Low End Edition

I’m happy to go “back in Mac,” even on a low end machine.

Today, I’m buying an old Mac mini G4 1.25GHz. Yes, a low end computer from 2005. It’ll be great to be back in Mac after spending most of my computer life on XP for three years.

This mini is slower than my XP desktop (emachines H3070). But that doesn’t really matter for what I want to do.

There’s something to be said about computers being “fast enough.” Gamers and engineers may not grok this concept, since they always want more. But there’s a point at which computers don’t really need to be faster, for some categories of uses.

Car analogies are often made, in computer discussions, and this case seems fairly obvious. Some cars are still designed to “push the envelope,” in terms of performance. Yet most cars, including some relatively inexpensive ones, are already fast enough to run on highways beyond the speed limits in North America. Even in Europe, most drivers don’t tend to push their cars to the limit. Something vaguely similar happens with computers, though there are major differences. For instance, the difference in cost between fast driving and normal driving is a factor with cars while it isn’t so much of a factor with computers. With computers, the need for cooling and battery power (on laptops) do matter but, even if they were completely solved, there’s a limit to the power needed for casual computer use.

This isn’t contradicting Moore’s Law directly. Chips do increase exponentially in speed-to-cost ratio. But the effects aren’t felt the same way through all uses of computers, especially if we think about casual use of desktop and laptop “personal computers.” Computer chips in other devices (from handheld devices to cars or DVD players) benefit from Moore’s Law, but these are not what we usually mean by “computer,” in daily use.
The common way to put it is something like “you don’t need a fast machine to do email and word processing.”

The main reason I needed a Mac is that I’ll be using iMovie to do simple video editing. Video editing does push the limits of a slow computer and I’ll notice those limits very readily. But it’ll still work, and that’s quite interesting to think about, in terms of the history of personal computing. A Mac mini G4 is a slug, in comparison with even the current Mac mini Core 2 Duo. But it’s fast enough for even some tasks which, in historical terms, have been processor-intensive.

None of this is meant to say that the “need for speed” among computer users is completely manufactured. As computers become more powerful, some applications of computing technologies which were nearly impossible at slower speeds become easy to do. In fact, there certainly are things we can’t even imagine now which will become easy to do in the future, thanks to improvements in computer chip performance. Those who play processor-intensive games always want faster machines and they certainly feel the “need for speed.” But, it seems to me, the quest for raw speed isn’t the core of personal computing, anymore.

This all reminds me of the Material Culture course I was teaching in the Fall: the Social Construction of Technology, Actor-Network Theory, the Social Shaping of Technology, etc.

So, a low end computer makes sense.

While iMovie is the main reason I decided to get a Mac at this point, I’ve been longing for Macs for three years. There were times during which I was able to use somebody else’s Mac for extended periods of time but this Mac mini G4 will be the first Mac to which I’ll have full-time access since late 2005, when my iBook G3 died.

As before, I’m happy to be “back in Mac.” I could handle life on XP, but it never felt that comfortable and I haven’t been able to adapt my workflow to the way the Windows world works. I could (and probably should) have worked on Linux, but I’m not sure it would have made my life complete either.

Some things I’m happy to go back to:

  • OmniOutliner
  • GarageBand
  • Keynote
  • Quicksilver
  • Nisus Thesaurus
  • Dictionary
  • Preview
  • Terminal
  • TextEdit
  • BibDesk
  • iCal
  • Address Book
  • Mail
  • TAMS Analyzer
  • iChat

Now I need to install some RAM in this puppy.

My Problem With Journalism

I hate having an axe to grind. Really, I do. “It’s unlike me.” When I notice that I catch myself grinding an axe, I “get on my own case.” I can be quite harsh with my own self.

But I’ve been trained to voice my concerns. And I’ve been perceiving an important social problem for a while.

So I “can’t keep quiet about it.”

If everything goes really well, posting this blog entry might be liberating enough that I will no longer have any axe to grind. Even if it doesn’t go as well as I hope, it’ll be useful to keep this post around so that people can understand my position.

Because I don’t necessarily want people to agree with me. I mostly want them to understand “where I come from.”

So, here goes:

Journalism may have outlived its usefulness.

Like several other “-isms” (including nationalism, colonialism, imperialism, and racism) journalism is counterproductive in the current state of society.

This isn’t an ethical stance, though there are ethical positions which go with it. It’s a statement about the anachronistic nature of journalism. As per functional analysis, everything in society needs a function if it is to be maintained. What has been known as journalism is now taking on new functions. Eventually, “journalism as we know it” should, logically, make way for new forms.

What these new forms might be, I won’t elaborate in this post. I have multiple ideas, especially given well-publicised interests in social media. But this post isn’t about “the future of journalism.”

It’s about the end of journalism.

Or, at least, my looking forward to the end of journalism.

Now, I’m not saying that journalists are bad people and that they should just lose their jobs. I do think that those who were trained as journalists need to retool themselves, but this post isn’t about that either.

It’s about an axe I’ve been grinding.

See, I can admit it, I’ve been making some rather negative comments about diverse behaviours and statements, by media people. It has even become a habit of mine to allow myself to comment on something a journalist has said, if I feel that there is an issue.

Yes, I know: journalists are people too, they deserve my respect.

And I do respect them, the same way I respect every human being. I just won’t give them the satisfaction of my putting them on a pedestal. In my mind, journalists are people: just like anybody else. They deserve no special treatment. And several of them have been arrogant enough that I can’t help turning their arrogance back to them.

Still, it’s not about journalists as people. It’s about journalism “as an occupation.” And as a system. An outdated system.

Speaking of dates, some context…

I was born in 1972 and, originally, I was quite taken by journalism.

By age twelve, I was pretty much a news junkie. Seriously! I was “consuming” a lot of media at that point. And I was “into” media. Mostly television and radio, with some print mixed in, as well as lots of literary work for context: this is when I first read French and Russian authors from the late 19th and early 20th centuries.

I kept thinking about what was happening in The World. Back in 1984, the Cold War was a major issue. To a French-Canadian tween, this mostly meant thinking about the fact that there were (allegedly) US and USSR “bombs pointed at us,” for reasons beyond our direct control.

“Caring about The World” also meant thinking about all sorts of problems happening across The Globe. Especially poverty, hunger, diseases, and wars. I distinctly remember caring about the famine in Ethiopia. And when We Are the World started playing everywhere, I felt like something was finally happening.

This was one of my first steps toward cynicism. And I’m happy it occurred at age twelve because it allowed me to eventually “snap out of it.” Oh, sure, I can still be a cynic on occasion. But my cynicism is contextual. I’m not sure things would have been as happiness-inducing for me if it hadn’t been for that early start in cynicism.

Because, you see, The World lost interest quite rapidly in the plight of Ethiopians. I distinctly remember asking myself, after the media frenzy died out, what had happened to Ethiopians in the meantime. I’m sure there was some report at the time claiming that the famine was over and that the situation was “back to normal.” But I didn’t hear anything about it, and I was looking. As a twelve-year-old French-Canadian with no access to a modem, I had no direct access to information about the situation in Ethiopia.

Ethiopia still remained as a symbol, to me, of an issue to be solved. It’s not the direct cause of my later becoming an africanist. But, come to think of it, there might be a connection, deeper down than I had been looking.

So, by the end of the Ethiopian famine of 1984-85, I was “losing my faith in” journalism.

I clearly haven’t gained a new faith in journalism. And it all makes me feel quite good, actually. I simply don’t need that kind of faith. I was already training myself to be a critical thinker. Sounds self-serving? Well, sorry. I’m just being honest. What’s a blog if the author isn’t honest and genuine?

Flash forward to 1991, when I started formal training in anthropology. The feeling was exhilarating. I finally felt like I belonged. My statement at the time was to the effect that “I wasn’t meant for anthropology: anthropology was meant for me!” And I was learning quite a bit about/from The World. At that point, it already did mean “The Whole Wide World,” even though my knowledge of that World was fairly limited. And it was a haven of critical thinking.

Ideal, I tell you. Moan all you want, it felt like the ideal place at the ideal time.

And, during the summer of 1993, it all happened: I learnt about the existence of the “Internet.” And it changed my life. Seriously, the ‘Net did have a large part to play in important changes in my life.

That event, my discovery of the ‘Net, also has a connection to journalism. The person who described the Internet to me was Kevin Tuite, one of my linguistic anthropology teachers at Université de Montréal. As far as I can remember, Kevin was mostly describing Usenet. But the potential for “relatively unmediated communication” was already a big selling point. Kevin talked about the fact that members of the Caucasian diaspora were able to use the Internet to discuss with their relatives and friends back in the Caucasus about issues pertaining to these independent republics after the fall of the USSR. All this while media coverage was sketchy at best (sounded like journalism still had a hard time coping with the new realities).

As you can imagine, I was more than intrigued and I applied for an account as soon as possible. In the meantime, I bought a 2400 baud modem, joined some local BBSes, and got to chat about the Internet with several friends, some of whom already had accounts. Got my first email account just before the semester started, in August, 1993. I can still see traces of that account, but only since April, 1994 (I guess I wasn’t using my address in my signature before this). I’ve been an enthusiastic user of diverse Internet-based means of communication since then.

But coming back to journalism, specifically…

Journalism missed the switch.

During the past fifteen years, I’ve been amazed at how clueless members of mainstream media institutions have been to “the power of the Internet.” This was during Wired Magazine’s first year as a print magazine and we (some friends and I) were already commenting upon the fact that print journalists should look at what was coming. Eventually, they would need to adapt. “The Internet changes everything,” I thought.

No, I didn’t mean that the Internet would cause any of the significant changes that we have been seeing around us. I tend to be against technological determinism (and other McLuhan tendencies). Not that I prefer sociological determinism, yet I can’t help but think that, from ARPAnet to the current state of the Internet, most of the important changes have been primarily social: if the Internet became something, it’s because people are making it so, not because of some inexorable technological development.

My enthusiastic perspective on the Internet was largely motivated by the notion that it would allow people to go beyond the model from the journalism era. Honestly, I could see the end of “journalism as we knew it.” And I’m surprised, fifteen years later, that journalism has been among the slowest institutions to adapt.

In a sense, my main problem with journalism is that it maintains a very stratified structure which gives too much weight to the credibility of specific individuals. Editors and journalists, who are part of the “medium” in the old models of communication, have taken on a gatekeeping role despite the fact that they rarely are much more proficient thinkers than people who read them. “Gatekeepers” even constitute a “textbook case” in sociology, especially in conflict theory. Though I can easily perceive how “constructed” that gatekeeping model may be, I can easily relate to what it entails in terms of journalism.

There’s a type of arrogance embedded in journalistic self-perception: “we’re journalists/editors so we know better than you; you need us to process information for you.” Regardless of how much I may disagree with some of his words and actions, I take solace in the fact that Murdoch, a key figure in today’s mainstream media, spoke directly to this arrogance. Of course, he might have been pandering. But the very fact that he pays even lip service to the problem of journalistic arrogance is, in my mind, quite helpful.

I think the days of fully stratified gatekeeping (a “top-down approach” to information filtering) are over. Now that information is easily available and that knowledge is constructed socially, any “filtering” method can be distributed. I’m not really thinking of a “cream rises to the top” model. An analogy with water sources going through multiple layers of mountain rock would be more appropriate to a Swiss citizen such as myself. But the model I have in mind is more about what Bakhtin called “polyvocality” and what has become an ethical position on “giving voice to the other.” Journalism has taken voice away from people. I have in mind a distributed mode of knowledge construction which gives everyone enough voice to have long-distance effects.

At the risk of sounding too abstract (it’s actually very clear in my mind, but it requires a long description), it’s a blend of ideas like: the social butterfly effect, a post-encyclopedic world, and cultural awareness. All of these, in my mind, contribute to this heightened form of critical thinking away from which I feel journalism has led us.

The social butterfly effect is fairly easy to understand, especially now that social networks are so prominent. Basically, it’s the “butterfly effect” from chaos theory applied to social networks. In this context, a “social butterfly” is a node in multiple networks of varying degrees of density and clustering. Because such a “social butterfly” can bring things (ideas, especially) from one such network to another, I argue that her or his ultimate influence (in aggregate) is larger than that of someone who sits at the core of a highly clustered network. Yes, it’s related to “weak ties” and other network classics. But it’s a bit more specific, at least in my mind. In terms of journalism, the social butterfly effect implies that the way knowledge is constructed need not come from a singular source or channel.
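Since the network intuition here is easy to make concrete, here is a minimal sketch (my own illustration, not part of the original argument), assuming Python and the networkx library: it compares the betweenness centrality of a node that bridges three dense clusters with that of a node sitting inside a single clique, as a rough proxy for the “larger aggregate influence” claim.

```python
# A toy sketch of the "social butterfly effect" (an illustration, not the
# blogpost's own model). Assumes the networkx library is installed.
from itertools import combinations

import networkx as nx

G = nx.Graph()

# Three dense, mostly separate clusters: "highly clustered networks."
clusters = [[f"A{i}" for i in range(6)],
            [f"B{i}" for i in range(6)],
            [f"C{i}" for i in range(6)]]
for members in clusters:
    G.add_edges_from(combinations(members, 2))  # each cluster is a clique

# The "social butterfly": one weak tie into each cluster, core member of none.
for members in clusters:
    G.add_edge("butterfly", members[0])

centrality = nx.betweenness_centrality(G)
print("butterfly:", round(centrality["butterfly"], 3))   # very high
print("clique member A1:", round(centrality["A1"], 3))   # essentially zero
```

In a toy graph like this, nearly every path between clusters runs through the bridging node, while the ordinary clique member scores close to zero, which is the “weak ties” intuition in miniature.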

The “encyclopedic world” I have in mind is that of our good friends from the French Enlightenment: Diderot and the gang. At that time, there was a notion that the sum of all knowledge could be contained in the Encyclopédie. Of course, I’m simplifying. But such a notion is still discussed fairly frequently. The world in which we now live has clearly challenged this encyclopedic notion of exhaustiveness. Sure, certain people hold on to that notion. But it’s not taken for granted as “uncontroversial.” Actually, those who hold on to it tend to respond rather positively to the journalistic perspective on human events. As should be obvious, I think the days of that encyclopedic worldview are numbered and that “journalism as we know it” will die at the same time. Though it seems to be built on an “encyclopedia” frame, Wikipedia clearly benefits from a distributed model of knowledge management. In this sense, Wikipedia is less anachronistic than Britannica. Wikipedia also tends to be more insightful than Britannica.

The cultural awareness point may sound like an ethnographer’s pipe dream. But I perceive a clear connection between Globalization and a certain form of cultural awareness in information and knowledge management. This is probably where the Global Voices model can come in. One of the most useful representations of that model comes from Chris Lydon’s Open Source conversation with Solana Larsen and Ethan Zuckerman. Simply put, I feel that this model challenges journalism’s ethnocentrism.

Obviously, I have many other things to say about journalism (as well as about its correlate, nationalism).

But I do feel liberated already. So I’ll leave it at that.

Student Engagement: The Gym Analogy (Updated: Credited)

Heard about this recently and probably heard it before. It’s striking me more now than before, for some reason.

[Update: I heard about this analogy through Peace Studies scholar Laurie Lamoureux Scholes (part-time faculty and doctoral candidate in Religion at Concordia University). Lamoureux Scholes’s colleague John Bilodeau is the intermediate source for this analogy and may have seen it on the RateYourStudents blog. There’s nothing like giving credit where credit is due and I’m enough of a folklorist to care about transmission. Besides, the original RYS gym-themed blog entry can be quite useful.]

Those of us who teach at universities and colleges (especially in North America and especially among English-speakers, I would guess) have encountered this “sense of entitlement” which has such deep implications in the ways some students perceive learning. Some students feel and say that, since they (or their parents) pay large sums for their post-secondary education, they are entitled to a “special treatment” which often involves the idea of getting high grades with little effort.

In my experience, this sense of entitlement correlates positively with the prestige of the institution. Part of this has to do with tuition fees required by those universities and colleges. But there’s also the notion that, since they were admitted to a program at such a selective school, they must be the “cream of the crop” and therefore should be treated with deference. Similarly, “traditional students” (18-25) are in my experience more likely to display a sense of entitlement than “non-traditional students” (older than 25) who have very specific reasons to attend a college or university.

The main statements used by students in relation to their sense of entitlement usually have some connection to tuition fees perceived to transform teaching into a hired service, regardless of other factors. “My parents pay a lot of money for your salary so I’m allowed to get what I want.” (Of course, those students may not realize that a tiny fraction of tuition fees actually goes into the pocket of the instructor, but that’s another story.) In some cases, the parents can easily afford the amount paid in tuition but the statements are the same. In other cases, the statements come from the notion that parents have “worked very hard to put me in school.” The results, in terms of entitlement, are quite similar.

Simply put, those students who feel a strong sense of entitlement tend to “be there for the degree” while most other students are “there to learn.”

Personally, I tend to assume students want to learn and I value student engagement in learning processes very highly. As a result, I often have a harder time working with students with a sense of entitlement. I can adapt myself to work with them if I assess their positions early on (preferably, before the beginning of a semester) but it requires a good deal of effort for me to teach in a context in which the sense of entitlement is “endemic.” In other words, “I can handle a few entitled students” if I know in advance what to expect but I find it demotivating to teach a group of students who “are only there for the degree.”

A large part of my own position has to do with the types of courses I have been teaching (anthropology, folkloristics, and sociology) and my teaching philosophy also “gets in the way.” My main goal is a constructivist one: create an appropriate environment, with students, in which learning can happen efficiently. I’m rarely (if ever) trying to “cram ideas into students’ heads,” though I do understand the value of that type of teaching in some circumstances. I occasionally try to train students for a task but my courses have rarely been meant to be vocational in that sense (I could certainly do vocational training, in which case I would adapt my methods).

So, the gym analogy. At this point, I find it quite fitting as an answer to the “my parents paid for this course so I should get a high grade” claim.

Tuition fees are similar to gym membership: regardless of the amount you pay, you can only expect results if you make the effort.

Simple and effective.

Of course, no analogy is perfect. I think the “effort” emphasis is more fitting in physical training than in intellectual and conceptual training. But, thankfully, the analogy does not imply that students should “get grades for effort” any more than athletes assume effort is sufficient to improve their physical skills.

One thing I like about this analogy is that it can easily resonate with a large category of students who are, in fact, the “gym type.” Sounds irrelevant but the analogy is precisely the type of thing which might stick in the head of those students who care about physical training (even if they react negatively at first) and many “entitled students” have a near Greek/German attitude toward their bodies. In fact, some of the students with the strongest sense of entitlement are high-profile athletes: some of them sound like they expect to have minions to take exams for them!

An important advantage of the gym analogy, in a North American context, is that it focuses on individual responsibility. While not always selfish, the sense of entitlement is self-centred by definition. Given the North American tendency toward independence training and a strong focus on individual achievement in North American academic institutions, the “individualist” character of the sense of entitlement shouldn’t surprise anyone. In fact, those “entitled students” are unlikely to respond very positively to notions of solidarity, group learning, or even “team effort.”

Beyond individual responsibility, the gym analogy can help emphasise individual goals, especially in comparison to team sports. In North America, team sports play a very significant role in popular culture and the distinction between a gym and a sports team can resonate in a large conceptual field. The gym is the locale for individual achievement while the sports team (which could be the basis of another analogy) is focused on group achievement.

My simplest definition of a team is “a task-oriented group.” Some models of group development (especially Tuckman’s catchy “Forming, Storming, Norming, Performing”) are best suited to teams. Task-based groups connect directly with the Calvinistic ideology of progress (in a Weberian perspective), but they also embed a “community-building” notion which is often absent from the “social Darwinism” of some capital-driven discourse. In other words, a team sports analogy could have some of the same advantages as the gym analogy (such as a sense of active engagement) with the added benefit of bringing into focus the social aspects of learning.

Teamwork skills are highly valued in the North American workplace. In learning contexts, “teamwork” often takes on a buzzword quality. The implicit notion seems to be that the natural tendency is for individuals to work against everybody else but that teams, as unnatural as they may seem, are necessary for the survival of broad institutions (such as the typical workplace). In other words, “learning how to work well in teams” sounds like a struggle against “human nature.” This implicit perspective relates to the emphasis on “individual achievement” and “independence training” represented effectively in the gym analogy.

So, to come back to that gym analogy…

In a gym, everyone is expected to set her or his own goals, often with the advice of a trainer. The notion is that this selection of goals is completely free of outside influence save for “natural” goals related to general health. In this context, losing weight is an obvious goal (the correlation between body mass and health being taken as a given) but it is still chosen by the individual. “You can only succeed if you set yourself to succeed” seems to be a common way to put it. Since this conception is “inscribed in the mind” of some students, it may be a convenient tool to emphasise learning strategies: “you can only learn if you set yourself to learn.” Sounds overly simple, but it may well work. Especially if we move beyond the idea some students have that they’re so “smart” that they “don’t need to learn.”

What it can imply in terms of teaching is quite interesting. An instructor takes on the role of a personal trainer. Like a sports team’s coach, a trainer is “listened to” and “obeyed.” There might be a notion of hierarchy involved (at least in terms of skills: the trainer needs to impress), but the main notion is that of division of labour. Personally, I could readily see myself taking on the “personal trainer” role in a learning context, despite the disadvantages of customer-based approaches to learning. One benefit of the trainer role is that what students (or their parents) pay for is a service, not “learning as a commodity.”

Much of this reminds me of Alex Golub’s blogpost on “Factory, Lab, Guild, Studio” notions to be used in describing academic departments. Using Golub’s blogpost as inspiration, I blogged about departments, Samba schools, and the Medici Effect. In the meantime, my understanding of learning has deepened but still follows similar lines. And I still love the “Samba school” concept. I can now add the gym and the sports teams to my analogical apparatus to use in describing my teaching to students or anybody else.

Hopefully, any of these analogies can be used to help students engage themselves in the learning process.

That’s all I can wish for.

Intello-Bullying

A topic which I’ll revisit, to be sure. But while I’m at it…
I tend to react rather strongly to a behaviour which I consider the intellectual equivalent of schoolyard bullying.
Notice that I don’t claim to be above this kind of behaviour. I’m not. In fact, one reason for my blogging this is that I have given some thought to my typical anti-bullying reaction. Not that I feel bad about it. But I do wonder if it might not be a good idea to adopt a variety of mechanisms to respond to bullying, in conjunction with my more “gut response” knee-jerk reactions and habits.
Notice also that I’m not describing individual bullies. I’m not complaining about persons. I’m thinking about behaviour. Granted, certain behaviours are typically associated with certain people and bullying is no exception. But instead of blaming, I’d like to assess, at least as a step in a given direction. What can I do? I’m an ethnographer.
Like schoolyard bullying, intello-bullying is based on a perceived strength used to exploit and/or harm those who are perceived as weaker. Like physical strength, the perception of “intellectual strength” on which intello-bullying is based need not have any objective validity. We’re in subjectivity territory, here. And subjects perceive in patterned but often obscure ways. Those who think of themselves as “strong,” in intellectual as well as physical senses, are sometimes the very people who are insecure as to their overall strengths and weaknesses.
Unlike schoolyard bullying, intello-bullying can be, and often is, originated by otherwise reasonably mature people. In fact, some of the most aggressive intello-bullying comes from well-respected “career intellectuals” who “should know better.” Come to think of it, this type of bullying is probably the one I personally find the most problematic. But, again, I’m not talking about bullies. I’m not describing people. I’m talking about behaviour. And the implications of behaviour.
My personal reactions may come from remnants of my impostor syndrome. Or maybe they come from a non-exclusive sense of self-worth that I found lying around in my life, as I was getting my happiness back. As much as I try, I can’t help but feel that intello-bullying is a sign of intellectual self-absorption, which eventually links to weakness. Sorry, folks, but it seems to me that if you feel the need, even temporarily, to impose your intellectual strength on those you perceive as intellectually weak, I’ll assume you may “have issues to solve.” In fact, I react the same way when I perceive my own behaviour as tantamount to bullying. It’s the behaviour I have issues with. Not the person.
And this is the basis of my knee-jerks: when I witness bullying, I turn into a bully’s bully. Yeah, pretty dangerous. And quite unexpected for a lifelong pacifist like yours truly. But, at least I can talk and think about it. Unapologetically.
You know, this isn’t something I started doing yesterday. In fact, it may be part of a long-standing mission of mine. Half-implicit at first. Currently “assumed,” assessed, acknowledged. Accepted.
Before you blame me for the appearance of an “avenger complex” in this description, please give some more thought to bullying in general. My hunch is that many of you will admit that you value the existence of anti-bullies in schoolyards or in other contexts. You may prefer it if cases of bullying are solved through other means (sanction by school officials or by parents, creation of safe zones…). But I’d be somewhat surprised if your thoughts about anti-bullying prevention left no room for non-violent but strength-based control by peers. If it is the case, I’d be very interested in your comments on the issue. After all, I may be a victim of some idiosyncratic notion of justice which you find inappropriate. I’m always willing to relativize.
Bear in mind that I’m not talking about retaliation. Though it may sound like it, this is no “eye for an eye” rule. Nor is it “present the left cheek.” It’s more like crowd control. Or that form of “non-abusive” technique used by occupational therapists and others while helping patients/clients who are “disorganizing.” Basically, I’m talking about responding to (intello-)bullying with calm, but with some strength asserted. When it comes to “fighting with words,” in my case, it may sound smug and even a bit dismissive. But it’s a localized smugness which I have a hard time finding unhealthy.
In a sense, I hope I’m talking about “taking the high road.” With a bit of self-centredness which has altruistic goals. “I’ll act as if I were stronger than you, because you used your perceived strength to dominate somebody else. I don’t have anything against you but I feel you should be put in your place. Don’t make me go to the next step, through which I can make you weep.”
At this point, I’m thinking martial arts. I don’t practise any martial art but, as an outsider, I get the impression this thinking goes well with some martial arts. Maybe judo, which allegedly relies on using your opponent’s strength. Or Tae Kwon Do, which always sounded “assertive yet peaceful” when described by practitioners.
The corollary of all this is my attitude toward those who perceive themselves as weak. I have this strong tendency to want them to feel stronger. Both out of this idiosyncratic attitude toward justice and because of my compulsive empathy. So, when someone says something like “I’m not that smart” or “I don’t have anything to contribute,” I switch to the “nurturing mode” that I may occasionally use in class or with children. I mean not to patronize, though it probably sounds paternalistic to outside observers. It’s just a reaction I have. I don’t even think its consequences are that negative in most contexts.
Academic contexts are full of cases of intello-bullying. Classrooms, conferences, outings… Put a group of academics in a room and, unless there’s a strong sense of community (Turner would say “communitas”), intello-bullying is likely to occur. At the very least, you may witness posturing, which I consider a mild form of bullying. It can be as subtle as a tricky question asked of someone who is unlikely to provide a face-saving answer and it can be as aggressive as questioning someone’s intelligence directly or claiming to have gone much beyond what somebody else has said.
In my mind, the most extreme context for this type of bullying is the classroom and it involves a teacher bullying a learner. Bullying between learners isn’t much better but, as a teacher, I’m even more troubled by the imposing authority structure based on status.

I put “cyber-bullying” as a tag because, in my mind, cyber-bullying (like trolling, flamebaiting, and other aggressive behaviours online) is a form of intello-bullying. It’s using a perceived “intellectual strength” to dominate. It’s very close to schoolyard bullying but, because it may not rely on a display of physical strength, I tend to associate it with mind-based behaviour.
As I think about these issues, I keep thinking of snarky comments. Contrary to physical attacks, snarks necessitate a certain state of mind to be effective. They need to tap into some insecurity, some self-perceived weakness in the victim. But they can be quite dangerous in the right context.
As I write this, I think about my own snarky comments. Typically, they either come after some escalation or they will be as indefinite as possible. But they can be extremely insulting if they’re internalized by some people.
Two of them come from a fairly well-known tease/snark. Namely:

If you’re so smart, why ain’t you rich?

(With several variants.)

I can provide several satisfactory answers to what is ostensibly a question. But, as much as I try, I can’t relate to the sentiment behind this rhetorical utterance, regardless of immediate context (but regardful of the broader social context). This may have to do with the fact that “getting rich” really isn’t my goal in life. Not only do I agree with the statement that “money can’t buy happiness” and care more about happiness than about more easily measurable forms of success, but my high empathy levels also include a concept of egalitarianism and solidarity which makes this emphasis on wealth sound counter-productive.

Probably because of my personal reactions to that snark, I have created at least two counter-snarks. My latest one, and the one which may best represent my perspective, is the following:

If you’re so smart, why ain’t you happy?

With direct reference to the original “wealth and intelligence” snark, I wish to bring attention to what I perceive to be a more appropriate goal in life (because it’s my own goal): the pursuit of happiness. What I like about this “rhetorical question” is that it’s fairly ambiguous yet has some of the same effects as the “don’t think about pink elephants” illocutionary act. As a rhetorical question, it need not be face-threatening. Because the “why aren’t you happy?” question can stand on its own, the intelligence premise “dangles.” And, more importantly, it represents one of my responses to what I perceive as a tendency (or attitude and “phase”) associating happiness with lack of intelligence. The whole “ignorance is bliss” and «imbécile heureux» (“happy fool”) perspective. Voltaire’s Candide and (failed) attempts to discredit Rousseau. Uses of “touchy-feely” and “warm and fuzzy” as insults. In short, the very attitude which most effectively tricks intellectuals out of the “pursuit of happiness.”

I posted my own snarky comment on micro-blogs and other social networks. A friend replied rather negatively. Though I can understand my friend’s issues with my snark, I also care rather deeply about delinking intelligence and depression.

A previous snark of mine was much more insulting. In fact, I would never ever use it with any individual, because I abhor insulting others. Especially about their intelligence. But it does sound to me like an efficient way to unpack the original snark. Pretty obvious and rather “nasty”:

If you’re so rich, why ain’t you smart?

Again, I wouldn’t utter this to anyone. I did post it through social media. But, like the abovementioned snark on happiness, it wasn’t aimed at any specific person. Though I find it overly insulting, I do like its “counterstrike” power in witticism wars.

As announced through the “placeholder” tag and in the prefacing statement (or disclaimer), this post is but a draft. I’ll revisit this whole issue on several occasions and it’s probably better that I leave this post alone. Most of it was written while riding the bus from Ottawa to Montreal (through the WordPress editor available on the App Store). Though I’ve added a few things which weren’t in this post when I arrived in Montreal (e.g., a link to NAPPI training), I should probably leave this as a “bus ride post.”

I won’t even proofread this post.

RERO!

The Issue Is Respect

As a creative generalist, I don’t tend to emphasize expert status too much, but I do see advantages in complementarity between people who act in different spheres of social life. As we say in French, «à chacun son métier et les vaches seront bien gardées» (“to each their own profession and cows will be well-kept”).

The diversity of skills, expertise, and interest is especially useful when people of different “walks of life” can collaborate with one another. Tolerance, collegiality, dialogue. When people share ideas, the potential is much greater if their ideas are in fact different. Very simple principle, which runs through anthropology as the study of human diversity (through language, time, biology, and culture).

The problem, though, is that people from different “fields” tend not to respect one another’s work. For instance, a life scientist and a social scientist often have a hard time understanding one another because they simply don’t respect their interlocutor’s discipline. They may respect each other as human beings but they share a distrust as to the very usefulness of the other person’s field.

Case in point: entomologist Paul R. Ehrlich, who spoke at the Seminar About Long Term Thinking (SALT) a few weeks ago.

The Long Now Blog » Blog Archive » Paul Ehrlich, “The Dominant Animal: Human Evolution and the Environment”

Ehrlich seems to have a high degree of expertise in population studies and, in that SALT talk, was able to make fairly interesting (though rather commonplace) statements about human beings. For instance, he explicitly addressed the tendency, in mainstream media, to perceive genetic determinism where it has no place. Similarly, his discussion about the origins and significance of human language was thoughtful enough that it could lead other life scientists to at least take a look at language.

What’s even more interesting is that Ehrlich realizes that social sciences can be extremely useful in solving the environmental issues which concern him the most. As we learn during the question period after his talk, Ehrlich is currently talking with some economists. And, unlike business professors, economists participate very directly in the broad field of social sciences.

All of this shows quite a bit of promise, IMVHAWISHIMVVVHO. But the problem has to do with respect, it seems.

Now, it might well be that Ehrlich esteems and respects his economist colleagues. Their methods may be sufficiently compatible with his that he actually “hears what they’re saying.” But he doesn’t seem to “extend this courtesy” to my own highly esteemed colleagues in ethnographic disciplines. Ehrlich simply doesn’t grok the very studies which he states could be the most useful for him.

There’s a very specific example during the talk but my point is broader. When that specific issue was revealed, I had already been noticing an interdisciplinary problem. And part of that problem was my own.

Ehrlich’s talk was fairly entertaining, although rather unsurprising in the typical “doom and gloom” exposé to which science and tech shows have accustomed us. Of course, it was fairly superficial on even the points about which Ehrlich probably has the most expertise. But that’s expected of this kind of popularizer talk. But I started reacting quite negatively to several of his points when he started to make the kinds of statements which make any warm-blooded ethnographer cringe. No, not the fact that his concept of “culture” is so unsophisticated that it could prevent a student of his from getting a passing grade in an introductory course in cultural anthropology. But all sorts of comments which clearly showed that his perspective on human diversity is severely restricted. Though he challenges some ideas about genetic determinism, Ehrlich still holds to a form of reductionism which social scientists would associate with scholars who died before Ehrlich was born.

So, my level of respect for Ehrlich started to fade, with each of those half-baked pronouncements about cultural diversity and change.

Sad, I know. Especially since I respect every human being equally. But it doesn’t mean that I respect all statements equally. As is certainly the case for many other people, my respect for a person’s pronouncements may diminish greatly if those words demonstrate a lack of understanding of something in which I have a relatively high degree of expertise. In other words, a heart surgeon could potentially listen to a journalist talk about “cultural evolution” without blinking an eye but would likely lose “intellectual patience” if, in the same piece, the journalist starts to talk about heart diseases. And this impatience may retroactively carry over to the discussion about “cultural evolution.” As we tend to say in the ethnography of communication, context is the thing.

And this is where I have to catch myself. It’s not because Ehrlich made statements about culture which made him appear clueless that what he said about the connections between population and environment is also clueless. I didn’t, in fact, start perceiving his points about ecology as misled for the very simple reason that we have been saying the same things, in ethnographic disciplines. But that’s dangerous: selectively accepting statements because they reinforce what you already know. Not what academic work is supposed to be about.

In fact, there was something endearing about Ehrlich. He may not understand the study of culture and he doesn’t seem to have any training in the study of society, but at least he was trying to understand. There was even a point in his talk when he said something which would be so obvious to any social scientist that I could have gained a new kind of personal respect for Ehrlich’s openness, if it hadn’t been for his inappropriate statements about culture.

The saddest part is about dialogue. If a social scientist is to work with Ehrlich and she reacts the same way I did, dialogue probably won’t be established. And if Ehrlich’s attitude toward epistemological approaches different from his own is represented by the statements he made about ethnography, chances are that he will only respect those of my social science colleagues who share his own reductionist perspective.

It should be obvious that there’s an academic issue, here, in terms of inter-disciplinarity. But there’s also a personal issue. In my own life, I don’t want to restrict myself to conversations with people who think the same way I do.

Enthused Tech

Yesterday, I held a WiZiQ session on the use of online tech in higher education:

Enthusing Higher Education: Getting Universities and Colleges to Play with Online Tools and Services

Slideshare

(Full multimedia recording available here)

During the session, Nellie Deutsch shared the following link:

Diffusion of Innovations, by Everett Rogers (1995)

Haven’t read Rogers’s book but it sounds like a contextually easy to understand version of ideas which have been quite clear in Boasian disciplines (cultural anthropology, folkloristics, cultural ecology…) for a while. But, in this sometimes obsessive quest for innovation, it might in fact be useful to go back to basic ideas about the social mechanisms which can be observed in the adoption of new tools and techniques. It’s in fact the thinking behind this relatively recent blogpost of mine:

Technology Adoption and Active Reading

My emphasis during the WiZiQ session was on enthusiasm. I tend to think a lot about occasions in which thinking about the possibilities afforded by technology gets people “psyched up.” In a way, this is exactly how I can define myself as a tech enthusiast: I get easily psyched up in the context of discussions about technology.

What’s funny is that I’m no gadget freak. I don’t care about the tool. I just love to dream up possibilities. And I sincerely think that I’m not alone. We might even guess that a similar dream-induced excitement animates true gadget freaks, who must have the latest tool. Early adopters are a big part of geek culture, and geek culture is still a niche.

Because I know I’ll keep on talking about these things on other occasions, I can “leave it at that,” for now.

RERO‘s my battle cry.

TBC

Crazy App Idea: Happy Meter

I keep getting ideas for apps I’d like to see on Apple’s App Store for iPod touch and iPhone. This one may sound a bit weird but I think it could be fun. An app where you can record your mood and optionally broadcast it to friends. It could become rather sophisticated, actually. And I think it can have interesting consequences.

The idea mostly comes from Philippe Lemay, a psychologist friend of mine and fellow PDA fan. Haven’t talked to him in a while but I was just thinking about something he did, a number of years ago (in the mid-1990s). As part of an academic project, Philippe helped develop a PDA-based research program whereby subjects would record different things about their state of mind at intervals during the day. Apart from the neatness of the data gathering technique, this whole concept stayed with me. As a non-psychologist, I personally get the strong impression that recording your moods frequently during the day can actually be a very useful thing to do in terms of mental health.

And I really like the PDA angle. Since I think of the App Store as transforming Apple’s touch devices into full-fledged PDAs, the connection is rather strong between Philippe’s work at that time and the current state of App Store development.

Since that project of Philippe’s, a number of things have been going on which might help refine the “happy meter” concept.

One is that “lifecasting” became rather big, especially among certain groups of Netizens (typically younger people, but also many members of geek culture). Though the lifecasting concept applies mostly to video streams, there are connections with many other trends in online culture. The connection with vidcasting specifically (and podcasting generally) is rather obvious. But there are other connections. For instance, with mo-, photo-, or microblogging. Or even with all the “mood” apps on Facebook.

Speaking of Facebook as a platform, I think it meshes especially well with touch devices.

So, “happy meter” could be part of a broader app which does other things: updating Facebook status, posting tweets, broadcasting location, sending personal blogposts, listing scores in a Brain Age type game, etc.

Yet I think the “happy meter” could be useful on its own, as a way to track your own mood. “Turns out, my mood was improving pretty quickly on that day.” “Sounds like I didn’t let things affect me too much despite all sorts of things I was going through.”

As a mood-tracker, the “happy meter” should be extremely efficient. Because it’s easy, I’m thinking of sliders. One main slider for general mood and different sliders for different moods and emotions. It would also be possible to extend the “entry form” on occasion, when the user wants to record more data about their mental state.

Of course, everything would be saved automatically and “sent to the cloud” on occasion. There could be a way to selectively broadcast some slider values. The app could conceivably send reminders to the user to update their mood at regular intervals. It could even serve as a “break reminder” feature. Though there are limitations on OSX iPhone in terms of interapplication communication, it’d be even neater if the app were able to record other things happening on the touch device at the same time, such as music which is playing or some apps which have been used.
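To make the data side of the idea slightly more concrete, here is a minimal sketch, entirely my own illustration with hypothetical names (in plain Python rather than anything iPhone-specific): one general-mood slider, optional extra sliders, an automatic timestamp, a private-by-default broadcast flag, and a crude check for the reminder idea.

```python
# A rough sketch of a "happy meter" entry (hypothetical names, not the
# actual app described in the post): sliders as numbers, timestamps,
# and opt-in broadcasting.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict

@dataclass
class MoodEntry:
    general_mood: float                                       # main slider, 0.0 (low) to 1.0 (high)
    details: Dict[str, float] = field(default_factory=dict)   # extra sliders, e.g. {"energy": 0.4}
    note: str = ""                                            # optional extended "entry form"
    recorded_at: datetime = field(default_factory=datetime.now)
    broadcast: bool = False                                    # private by default; user opts in per entry

def reminder_due(last_entry: MoodEntry, interval_minutes: int = 120) -> bool:
    """Crude version of the 'update your mood' / break-reminder idea."""
    elapsed_minutes = (datetime.now() - last_entry.recorded_at).total_seconds() / 60
    return elapsed_minutes >= interval_minutes

# Example: a quick, private check-in.
entry = MoodEntry(general_mood=0.7, details={"energy": 0.5, "stress": 0.2})
print(entry.broadcast, reminder_due(entry))
```

Even something this simple would be enough to chart one’s mood over a day, which is really the point of the “happy meter.”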

Now, very obviously, there are lots of privacy issues involved. But what social networking services have taught us is that users can have pretty sophisticated notions of privacy management, if they’re given the chance. For instance, adept Facebook users may seem to indiscriminately post just about everything about themselves but are often very clear about what they want to “let out,” in context. So, clearly, every type of broadcasting should be controlled by the user. No opt-out here.

I know this all sounds crazy. And it all might be a very bad idea. But the thing about letting my mind wander is that it helps me remain happy.

Visualizing Touch Devices in Education

Took me a while before I watched this concept video about iPhone use on campus.

Connected: The Movie – Abilene Christian University

Sure, it’s a bit campy. Sure, some features aren’t available on the iPhone yet. But the basic concepts are pretty much what I had in mind.

Among things I like in the video:

  • The very notion of student empowerment runs at the centre of it.
  • Many of the class-related applications presented show an interest in the constructivist dimensions of learning.
  • Material is made available before class. Face-to-face time is for engaging in the material, not rehashing it.
  • The technology is presented as a way to ease the bureaucratic aspects of university life, relieving a burden on students (and, presumably, on everyone else involved).
  • The “iPhone as ID” concept is simple yet powerful, in context.
  • Social networks (namely Facebook and MySpace, in the video) are embedded in the campus experience.
  • Blended learning (called “hybrid” in the video) is conceived as an option, not as an obligation.
  • Use of the technology is specifically perceived as going beyond geek culture.
  • The scenarios (use cases) are quite realistic in terms of typical campus life in the United States.
  • While “getting an iPhone” is mentioned as a perk, it’s perfectly possible to imagine technology as a levelling factor with educational institutions, lowering some costs while raising the bar for pedagogical standards.
  • The shift from “eLearning” to “mLearning” is rather obvious.
  • ACU already does iTunes U.
  • The video is released under a Creative Commons license.

Of course, there are many directions things can go, from here. Not all of them are in line with the ACU dream scenario. But I’m quite hopeful, judging from some apparently random facts: that Apple may sell iPhones through universities, that Apple has plans for iPhone use on campuses, that many of the “enterprise features” of iPhone 2.0 could work in institutions of higher education, that the Steve Jobs keynote made several mentions of education, that Apple bundles the iPod touch with Macs, that the OLPC XOXO is now conceived more as a touch handheld than as a laptop, that (although delayed) Google’s Android platform can participate in the same usage scenarios, and that browser-based computing apparently has a bright future.

Handhelds for the Rest of Us?

Ok, it probably shouldn’t become part of my habits but this is another repost of a blog comment motivated by the OLPC XO.

This time, it’s a reply to Niti Bhan’s enthusiastic blogpost about the eeePC: Perspective 2.0: The little eeePC that could has become the real “iPod” of personal computing

This time, I’m heavily editing my comments. So it’s less of a repost than a new blogpost. In some ways, it’s partly a follow-up to my “Ultimate Handheld Device” post (which ended up focusing on spatial positioning).

Given the OLPC context, the angle here is, hopefully, a culturally aware version of “a handheld device for the rest of us.”

Here goes…

I think there’s room in the World for a device category more similar to handhelds than to subnotebooks. Let’s call it “handhelds for the rest of us” (HftRoU). Something between a cellphone, a portable gaming console, a portable media player, and a personal digital assistant. Handheld devices exist which cover most of these features/applications, but I’m mostly using this categorization to think about the future of handhelds in a globalised World.

The “new” device category could serve as the inspiration for a follow-up to the OLPC project. One thing about which I keep thinking, in relation to the “OLPC” project, is that the ‘L’ part was too restrictive. Sure, laptops can be great tools for students, especially if these students are used to (or need to be trained in) working with and typing long-form text. But I don’t think that laptops represent the most “disruptive technology” around. If we think about their global penetration and widespread impact, cellphones are much closer to the leapfrog effect about which we all have been writing.

So, why not just talk about a cellphone or smartphone? Well, I’m trying to think both more broadly and more specifically. Cellphones are already helping people empower themselves. The next step might be to add selected features which bring them closer to the OLPC dream. Also, since cellphones are widely distributed already, I think it’s important to think about devices which may complement cellphones. I have some ideas about non-handheld tools which could make cellphones even more relevant in people’s lives. But they will have to wait for another blogpost.

So, to put it simply, “handhelds for the rest of us” (HftRoU) are somewhere between the OLPC XO-1 and Apple’s original iPhone, in terms of features. In terms of prices, I dream that it could be closer to that of basic cellphones which are in the hands of so many people across the globe. I don’t know what that price may be but I heard things which sounded like a third of the price the OLPC originally had in mind (so, a sixth of the current price). Sure, it may take a while before such a low cost can be reached. But I actually don’t think we’re in a hurry.

I guess I’m just thinking of the electronics (and global) version of the Ford Model T. With more solidarity in mind. And cultural awareness.

Google’s Open Handset Alliance (OHA) may produce something more appropriate to “global contexts” than Apple’s iPhone. In comparison with Apple’s iPhone, devices developed by the OHA could be better adapted to the cultural, climatic, and economic conditions of those people who don’t have easy access to the kind of computers “we” take for granted. At the very least, the OHA has good representation on at least three continents and, like the old OLPC project, the OHA is officially dedicated to openness.

I actually care fairly little about which teams will develop devices in this category. In fact, I hope that new manufacturers will spring up in some local communities and that major manufacturers will pay attention.

I don’t care about who does it; I’m mostly interested in what the devices will make possible. Learning, broadly speaking. Communicating, in different ways. People empowering themselves, generally.

One thing I have in mind, and which deviates from the OLPC mission, is that there should be appropriate handheld devices for all age-ranges. I do understand the focus on 6-12 year-olds the old OLPC had. But I don’t think it’s very productive to only sell devices to that age-range. Especially not in those parts of the world (i.e., almost anywhere) where generation gaps don’t imply that children are isolated from adults. In fact, as an anthropologist, I react rather strongly to the thought that children should be the exclusive target of a project meant to empower people. But I digress, as always.

I don’t tend to be a feature-freak but I have been thinking about the main features the prototypical device in this category should have. It’s not a rigid set of guidelines. It’s just a way to think out loud about technology’s integration in human life.

The OS and GUI, which seem like major advantages of the eeePC, could certainly be of the mobile/handheld type instead of the desktop/laptop type. The usual suspects: Symbian, NewtonOS, Android, Zune, PalmOS, Cocoa Touch, embedded Linux, Playstation Portable, WindowsCE, and Nintendo DS. At a certain level of abstraction, there are so many commonalities between all of these that it doesn’t seem very efficient to invent a completely new GUI/OS “paradigm,” like OLPC’s Sugar was apparently trying to do.

The HftRoU require some form of networking or wireless connectivity feature. WiFi (802.11*), GSM, UMTS, WiMAX, Bluetooth… Doesn’t need to be extremely fast, but it should be flexible and it absolutely cannot be cost-prohibitive. IP might make much more sense than, say, SMS/MMS, but a lot can be done with any kind of data transmission between devices. XO-style mesh networking could be a very interesting option. As VoIP has proven, voice can efficiently be transmitted as data so “voice networks” aren’t necessary.

My sense is that a multitouch interface with an accelerometer would be extremely effective. Yes, I’m thinking of Apple’s Touch devices and MacBooks. As well as about the Microsoft Surface, and Jeff Han’s Perceptive Pixel. One thing all of these have shown is how “intuitive” it can be to interact with a machine using gestures. Haptic feedback could also be useful but I’m not convinced it’s “there yet.”

I’m really not sure a keyboard is very important. In fact, I think that keyboard-focused laptops and tablets are the wrong basis for thinking about “handhelds for the rest of us.” Bear in mind that I’m not thinking about devices for would-be office workers or even programmers. I’m thinking about the broadest user base you can imagine. “The Rest of Us” in the sense of, those not already using computers very directly. And that user base isn’t that invested in (or committed to) touch-typing. Even people who are very literate don’t tend to be extremely efficient typists. If we think about global literacy rates, typing might be one thing which needs to be leapfrogged. After all, a cellphone keypad can be quite effective in some hands and there are several other ways to input text, especially if typing isn’t too ingrained in you. Furthermore, keyboards aren’t that convenient in multilingual contexts (i.e., in most parts of the world). I say: avoid the keyboard altogether, make it available as an option, or use a virtual one. People will complain. But it’s a necessary step.

If the device is to be used for voice communication, some audio support is absolutely required. Even if voice communication isn’t part of it (and I’m not completely convinced it’s the one required feature), audio is very useful, IMHO (I’m an aural guy). In some parts of the world, speakers are much favoured over headphones or headsets. But I personally wish that at least some HftRoU could have external audio inputs/outputs. Maybe through USB or an iPod-style connector.

A voice interface would be fabulous, but there still seem to be technical issues with both speech recognition and speech synthesis. I used to work in that field and I keep dreaming, like Bill Gates and others do, that speech will finally take the world by storm. But maybe the time still hasn’t come.

It’s hard to tell what size the screen should be. There probably needs to be a range of devices with varying screen sizes. Apple’s Touch devices prove that you don’t need a very large screen to have an immersive experience. Maybe some HftRoU screens should in fact be larger than that of an iPhone or iPod touch. Especially if people are to read or write long-form text on them. Maybe the eeePC had it right. Especially if the devices’ form factor is more like a big handheld than like a small subnotebook (i.e., slimmer than an eeePC). One reason form factor matters, in my mind, is that it could make the devices “disappear.” That, and the difference between having a device on you (in your pocket) and carrying a bag with a device in it. Form factor was a big issue with my Newton MessagePad 130. As the OLPC XO showed, cost and power consumption are also important issues regarding screen size. I’d vote for a range of screens between 3.5 inch (iPhone) and 8.9 inch (eeePC 900) with a rather high resolution. A multitouch version of the XO’s screen could be a major contribution.

In terms of both audio and screen features, some consideration should be given to adaptive technologies. Most of us take for granted that “almost anyone” can hear and see. We usually don’t perceive major issues in the fact that “personal computing” typically focuses on visual and auditory stimuli. But if these devices truly are “for the rest of us,” they could help empower visually- or hearing-impaired individuals, who are often marginalized. This is especially relevant in the logic of humanitarianism.

HftRoU need as much autonomy from a power source as possible. Both in terms of the number of hours devices can be operated without needing to be connected to a power source and in terms of flexibility in power sources. Power management is a major technological issue with portable, handheld, and mobile devices. Engineers are hard at work, trying to find as many solutions to this issue as they can. This was, obviously, a major area of research for the OLPC. But I’m not even sure the solutions they have found are the only relevant ones for what I imagine HftRoU to be.

GPS could have interesting uses, but doesn’t seem very cost-effective. Other “wireless positioning systems” (à la Skyhook) might represent a more rational option. Still, I think positioning systems are one of the next big things. Not only for navigation or for location-based targeting. But for a set of “unintended uses” which are the hallmark of truly disruptive technology. I still remember an article (probably in the venerable Wired magazine) about the use of GPS/GIS for research into climate change. Such “unintended uses” are, in my mind, much closer to the constructionist ideal than the OLPC XO’s unified design can ever get.

Though a camera seems to be a given in any portable or mobile device (even the OLPC XO has one), I’m not yet that clear on how important it really is. Sure, people like taking pictures or filming things. Yes, pictures taken through cellphones have had a lasting impact on social and cultural events. But I still get the feeling that the main reason cameras are included on so many devices is for impulse buying, not as a feature to be used so frequently by all users. Also, standalone cameras probably have a rather high level of penetration already and it might be best not to duplicate this type of feature. But, of course, a camera could easily be a differentiating factor between two devices in the same category. I don’t think that cameras should be absent from HftRoU. I just think it’s possible to have “killer apps” without cameras. Again, I’m biased.

Apart from networking/connectivity uses, Bluetooth seems like a luxury. Sure, it can be neat. But I don’t feel it adds that much functionality to HftRoU. Yet again, I could be proven wrong. Especially if networking and other inter-device communication are combined. At some abstract level, there isn’t that much difference between exchanging data across a network and controlling a device with another device.

Yes, I do realize I pretty much described an iPod touch (or an iPhone without camera, Bluetooth, or cellphone fees). I’ve been lusting over an iPod touch since September and it does colour my approach. I sincerely think the iPod touch could serve as an inspiration for a new device type. But, again, I care very little about which company makes that device. I don’t even care about how open the operating system is.

As long as our minds are open.

And We’re Still Lecturing

Forty years ago this month, students in Paris started a movement of protests and strikes. May ’68.

Among French-speakers, the events are remembered as the onset of a cultural revolution of sorts (with both negative and positive connotations). As we reached the 40 year anniversary of those events, some journalists and commentators have looked back at the social changes associated with the Paris student revolts of May, 1968.

The May ’68 movement also had some pedagogical bases. Preparing an online course, these days, I get to think about learning. And to care about students.

As I was yet to be born at the time, May ’68 resonates more for generational reasons than pedagogical ones. But a Montreal journalist who observed some of those events 40 years ago has been talking about what she perceived as irrationality surrounding such issues as abolishing lecture-based courses («cours magistraux»).

This journalist’s reaction and a cursory comparison of the present situation with what I’ve heard of pre-1968 teaching both lead me on a reflection path about learning. Especially in terms of lecturing.

As a social constructivist, I have no passion for “straight lectures.” On occasion, I bemoan the fact that lecturing is (still) the primary teaching mode in many parts of the world. The pedagogical ideas forcefully proposed more than a generation ago are apparently not prevalent in most mainstream educational systems.

What happened?

This is an especially difficult question for an idealist like me. We wish for change. Change happens. Then, some time later, changes have been reversed. Maybe more progressively. But, it seems, inexorably.

Sisyphean. Or, maybe, Buddhist.

Is it really the way things work?

Possibly. But I prefer to maintain my idealism.

So… Before I was born, some baby-booming students in Paris revolted against teaching practices. We still talk about it. Nowadays, the teaching practices against which students revolted are apparently quite common in Paris universities. As they are in many other parts of the world. But not exactly everywhere.

Online learning appears more compatible with teaching methods inspired by social constructivism (and constructionism) than with “straight lecturing.” My idealism for alternative learning methods is fed partly by online learning.

Online lectures are possible. Yet the very structure of online communication implies some freedoms in the way lecture attendees approach these “teachings.”

At the very least, online lectures make few requirements in terms of space. Technically, a student could be watching online lectures while lying on a beach. Beaches sound like a radically different context from the large lecture halls out of which some ’68ers decided to “take to the streets.”

Contrary to classroom lectures, online lectures may allow time-shifting. In some cases, prerecorded lectures (or podcasts) may be paused, rewound, fast-forwarded, etc. Learning for the TiVo generation?

Online lectures also make painfully obvious the problems with straight lecturing. The rigid hierarchy. Students’ relative facelessness. The lack of interactivity. The content focus. All these work well for “rote learning.” But there are other ways to learn.

Not that memorization plays no part in learning or that there is no value in the “retention of [a text’s] core information” (Schaefer 2008: xxi). It’s just that… Many of us perceive learning to be more than brain-stuffing.

As should be obvious from my tone and previous posts, I count myself as one of those who perceive lectures to be too restrictive. Oh, sure, I’ve lectured to large and medium-sized classrooms. In fact, I even enjoy lecturing when I get to do it. And I fully realize that there are many possible approaches to teaching. In fact, my observation is that teaching methods are most effective when they are adapted to a specific situation, not when they follow some set of general principles. In this context, lecturing may work well when “lecturer and lecturees are in sync.” When students and teacher are “on the same page,” lectures can be intellectually stimulating, thought-provoking, challenging, useful. Conversely, alternative teaching methods can have disastrous consequences when they are applied haphazardly by people who were trained with “straight lecturing” in mind. In fact, my perception is that many issues with Quebec’s most recent education reform (the “competency based program” about which Quebec parents have been quite vocal) are associated with the indiscriminate application of constructivist/constructionist principles to all learning contexts in the province. IMHO, a more flexible application of the program coupled with considerate teacher training might have prevented several of the problems which plagued Quebec’s reform.

Unlike ’68ers, I don’t want to abolish lectures. I just hope we can adopt a diversity of methods in diverse contexts.

Back in 1968, my father was a student of Jean Piaget, in Geneva. Many of Piaget’s ideas about learning were quite compatible with what Parisian students were clamoring for.

Beyond the shameless name-dropping, my mentioning Piaget relates to something I perceive as formative. Both in my educational and in my personal lives. My mother had much more of an impact on my life. But my father supplied me with something of the Piaget spirit. And this spirit is found in different places. Including online.

The compatibility between online learning and lecture-less teaching methods seems to be a topic for frequent discussions in eLearning circles, including LearnHub, Ning, and the Moodle community. Not that online technology determines pedagogical methods. But the "fit" of online technology with different approaches to learning and teaching is the stuff constructionist teachers' dreams are made of.

One dimension of the “fit” is in terms of flexibility. Online, learners may (and are sometimes forced to) empower themselves using personal methods. Not that learners are left to their own devices. But the Internet is big and “wild” enough to encourage survival strategies in learning contexts. Perhaps more than the lecture hall, the online world makes critical thinking vital. And critical thinking may lead to creative and innovative solutions.

Another dimension to the fit, and one which may be more trivial than some EdTech enthusiasts seem to assume, is the "level of interactivity" afforded by diverse online tools. You know, the Flash-based or other learning objects which should make learning fun and effective. I personally like the dancing mice a lot. But my impression is that these cool tools require too much effort for their possible learning outcomes. I do, however, have high hopes for the kind of interactivity common to the "social platform" sometimes known (perhaps abusively) as "Web 2.0." Putting things online is definitely not a panacea for adequate pedagogical practice. And while "School 2.0" is an interesting concept, the buzzwordiness of some of these concepts gives me pause. But, clearly, some students are using adequate learning strategies through the interactive character of online communication.

As I’ll be teaching online for several weeks, I’ll surely have many other things to say about these learning issues in a pseudo-historical context. In the meantime, I assume that this blogpost may bring me some thoughtful comments. 😉

“To Be Verified”: Trivia and Critical Thinking

A friend posted a link to the following list of factoids on his Facebook profile: Useless facts, Weird Information, humor. It contains intriguing statements about biology, language, inventions, etc.

Similar lists abound, often containing the same tidbits.

Several neat pieces of trivial information. Not exactly "useless." But gratuitous and irrelevant. The type of thing you may wish to plug into a conversation. Especially at the proverbial "cocktail party." This is, after all, an appropriate context for the attention economy. But these lists are also useful as preparation for game shows and barroom competitions. The stuff of erudition.

One of my first reflexes, when I see such lists of trivia online, is to look for ways to evaluate their accuracy. This is partly due to my training in folkloristics, as "netlore" is a prolific medium for verbal folklore (folk beliefs, rumors, urban legends, myths, and jokes). My reflex is also, I think, a common reaction among academics. After all, the detective work of critical thinking is pretty much our "bread and butter." Sure, we can become bothersome with this. "Don't be a bore, it's just trivia." But many of us react out of a fear that such "trivial" thinking may prevent more careful consideration.

An obvious place to start verifying these tidbits is Snopes. In fact, they do debunk several of the statements made in those lists. For instance, the one about an alleged Donald Duck “ban” in Finland found in the list my friend shared through Facebook. Unfortunately, however, many factoids are absent from Snopes, despite that site’s extensive database.

These specific trivia lists are quite interesting. They include some statements which are easy to verify. For instance, the product of two numbers. (However, many calculators are insufficiently precise for the specific example used in those factoid lists.) The ease with which one can verify the accuracy of some statements brings an air of legitimacy to the list in which those easily verified statements are included. The apparent truth-value of those statements is such that a complete list can be perceived as being on unshakable foundations. For full effectiveness, the easily verified statements should not be common knowledge. “Did you know? Two plus two equals four.”
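
To make the point concrete, here is a minimal sketch of that kind of easy verification, using Python's arbitrary-precision integers (which sidestep the calculator-precision problem). The numbers below are illustrative stand-ins, not necessarily the exact ones from those lists.

```python
# Check a multiplication claim exactly, with no rounding.
# These figures are a stand-in example, not a quote from the trivia lists.
claimed_product = 12_345_678_987_654_321
actual_product = 111_111_111 * 111_111_111  # Python ints have unlimited precision
print(actual_product == claimed_product)  # True if the claim holds exactly
```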

Other statements appear to be based on hypotheses. The plausibility of such statements may be relatively difficult to assess for anyone not familiar with research in that specific field. For instance, the statement about the typical life expectancy of currently living humans compared to individual longevity. At first sight, it does seem plausible that today's extreme longevity would only benefit extremely few individuals in the future. Yet my guess is that those who do research on aging may rebut the statement that "Only one person in two billion will live to be 116 or older." Because assessing such statements requires special training, their legitimizing effect is a weaker version of that of easily verifiable statements.

Some of the most difficult statements to assess are the ones which contain quantifiers, especially those for uniqueness. There may, in fact, be “only one” fish which can blink with both eyes. And it seems possible that the English language may include only one word ending in “-mt” (or, to avoid pedantic disclaimers, “only one common word”). To verify these claims, one would need to have access to an exhaustive catalog of fish species or English words. While the dream of “the Web as encyclopedia” may hinge on such claims of exhaustivity, there is a type of “black swan effect” related to the common fallacy about lack of evidence being considered sufficient evidence of lack.
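
For the curious, here is a minimal sketch of what such a check might look like, assuming a local word list such as /usr/share/dict/words is available. Any such list is itself incomplete, which is precisely the problem with claims of exhaustivity.

```python
# Rough check of the "only one (common) English word ends in -mt" claim.
# The word list path is an assumption; results depend entirely on that list's coverage.
with open("/usr/share/dict/words") as word_file:
    words = {line.strip().lower() for line in word_file}

hits = sorted(w for w in words if w.endswith("mt"))
print(hits)  # on many systems, something like ['dreamt'] (plus derived forms, if listed)
```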

I just noticed, while writing this post, a Google Answers page which not only evaluates the accuracy of several statements found in those trivia lists but also mentions ease of verifiability as a matter of interest. Critical thinking is active in many parts of the online world.

An obvious feature of those factoid lists, found online or in dead-tree print, is the lack of context. Even when those lists are concerned with a single topic (say, snails or sleep), they provide inadequate context for the information they contain. I'm using the term "context" rather loosely, as it covers both the text's internal relationships (the "immediate context," if you will) and the broader references to the world at large. Without going into details about philosophy of language, I'll simply say that such approaches to context clearly inform my perspective.

A typical academic, especially an English-speaking one, might put the context issue this way: "citation needed." After all, the Wikipedia approach to truth is close to current academic practice (especially in English-speaking North America), with peer review replacing audits. Even journalists are trained to cite sources, though they rarely help others apply critical thinking to those sources. In some ways, sources are conceived of as the most efficient way to assess accuracy.

My own approach isn't that far from the citation-happy one. Like most other academics, I've learned the value of an appropriate citation. Where I "beg to differ" is on the perceived "weight" of a citation as support. Through an awkward quirk of academic writing, some citation practices amount to a fallacious appeal to authority. I'm probably overreacting about this, but I've heard enough academics make statements equating citations with evidence that I tend to be wary of what I perceive to be excessive referencing. In fact, some of my most link-laden posts could be perceived as attempts to poke fun at citation-happy writing styles. One may even notice my extensive use of Wikipedia links. These are sometimes meant as inside jokes (to my own sorry self). Same thing with many of my blogging tags/categories, actually. Yes, blogging can be playful.

The broad concept is that, regardless of a source’s authority, critical thinking should be applied as much as possible. No more, no less.

Even Teachers Get the Blues

(With apologies to k.d. lang. Without apologies to Gus Van Sant.)

In response to a forum discussion on teacher-rating sites, someone posted a link to this blog: Rate Your Students.

I also posted about disillusion. But the teacher in my post was meant more as a fictional character than as a personification of my own attitude.

Simply put, despite some frustrations, I'm quite satisfied with my teaching life. Not necessarily because I get positive feedback from students. But because teaching is rewarding in many ways.

Been meaning to blog about the Spirit of Inquiry conference during which I presented on learning materials. It was quite an interesting context. Like-minded teachers from all over Canada, from many different disciplines and institutional backgrounds. Everyone pretty much in agreement on the necessity to think about teaching in a diversity of ways. A lot of thoughtful discussion about rather deep issues. Almost as welcoming as the food and culture conference during which I talked about craft beer culture. So I'm thinking about teaching quite a bit.

I have been more impressed by students than by fellow teachers. Oh, some students are difficult to deal with, at times. But every single one of them has something interesting to contribute to any course they take. While I realise that this attitude sounds like the bursting blossom idealism decried by the aforementioned blog, I don’t mind saying it.

Here’s why: I don’t really feel disillusioned because I don’t recall ever being “illusioned.”

I’ve met a lot of teachers in my young life. My mother married two teachers and teachers do tend to connect with other teachers. My father (my mother’s second husband) transmitted part of his teaching philosophy to me. As [name-dropping]daddy was trained by Jean Piaget[/name-dropping], this teaching philosophy of his was quite specific. Yes, constructivism and all that. But also a certain dose of cynicism, especially toward blanket statements about student performance. This made me somewhat impervious to teaching disappointment.

In English-speaking parts of North America, there's a lot of what I think of as "studies have shown" perspectives on learning. A good deal of blind trust in the results of survey research on teaching effectiveness. These survey research projects often emphasize the most common responses to teaching. In the minds of some of these people, learning is something that the majority of students should do when a teacher is "good." Teaching effectiveness is obvious when a majority of students have "learnt their lesson," so to speak. As you might guess, I don't relate very well to these views. I respect the people who hold them, but I feel a disconnect between my views of teaching and their views of learning. Sure, I adapt to these views when I teach in an environment where they are held by a good number of people. But I wish to keep some distance from these views.

The part I don't like is when we (as teachers) are told to use very specific methods in order to ensure student learning. I really don't have a problem with tips and tricks for teaching. They're very inspiring and can really enhance the teaching experience. What I'm less enthusiastic about is the type of "you should teach in blocks of 20 minutes at a time because studies have shown that students tend to have a difficult time concentrating for more than 20 minutes at a time." I understand the effects of the "change-up" (switching from one task to another during a class period) and I have started to implement a teaching strategy which does involve a variety of interaction modes during a given class period. Yet the notion that "The One Way to Teach" implies piecemeal development is quite foreign to me.

“Where I come from,” we could have seminars lasting for seven straight hours and everyone’s attention seemed quite focused. Oh, sure, I’m pretty sure many people were daydreaming when others were talking. But that daydreaming was quite relevant to the discussion. Kind of like the “drift-off moment” in a successful sales pitch. The whole “what you say makes me think of,” with surprising and satisfying results. For instance, it’s easy to imagine the response “your talking about aesthetics makes me think of baking.” Sounds absurd at first, but it can be very useful. We’re merging horizons, pushing inter-subjectivity. We’re not making sure everyone remembers everything that has been said. There are recording devices for that.

Am I ranting? Maybe. But not about people themselves. If I’m venting frustrations, it’s because I enjoy what I do in the classroom and want to go several steps forward.

I do dream about teaching fairly regularly. In fact, when I woke up this morning, I was thinking about my own concept of critical thinking as it relates to my teaching philosophy. I assume this means I was dreaming about teaching, probably because of the conference. Unfortunately for those who really think about learning and teaching, many people merely use "critical thinking" or "skill transfer" as buzzphrases to convince administrators that what they do is trendy. Fortunately, most of the conference attendees were using such concepts as "social constructivism" and "inquiry-based learning" in non-buzzphrase ways.

Still lots to say about teaching. But true to my RERO resolution, I will leave it at that, for now.

Banality of Heroism

Wow! I’m speechless!

Open Source » Blog Archive » The Banality of Evil, Part II


Social Networking and eLearning

Oops! I did it again. Launched on one of my long-winded ramblings about the convergence between learning management systems (in this case, Moodle) and social networking sites (in this case, Facebook).

Executive summary:

Facebook's power is in fluid, organic networks. Moodle's power is in structured but flexible learning-based groups. I personally see a marriage made in heaven.

Lounge: Moodle as New Facebook