Category Archives: Digital Life

I Hate Books

In a way, this is a followup to a discussion happening on Facebook after something I posted (available publicly on Twitter): “(Alexandre) wishes physical books a quick and painfree death. / aime la connaissance [loves knowledge].”

As I expected, the reactions I received were from friends who were aghast: how dare I dismiss physical books? Have I no shame?

Apparently, no, not in this case.

And while I posted it as a quip, it’s the result of a rather long reflection. It’s not that I’m suddenly anti-books. It’s that I stopped buying several of the “pro-book” arguments a while ago.

Sure, sure. Books are the textbook case of technology which needs no improvement. eBooks can’t replace the experience of doing this or that with a book. But that’s what folkloristics defines as a functional shift. Like woven baskets which became objects of nostalgia, books are being maintained as the model for a very specific attitude toward knowledge construction based on monolithic authored texts vetted by gatekeepers and sold as access to information.

An important point, here, is that I’m not really thinking about fiction. I used to read two novel-length works a week (collections of short stories, plays…), for a period of about 10 years (ages 13 to 23). So, during that period, I probably read about 1,000 novels, ranging from Proust’s Recherche to Baricco’s Novecento and the five books of Rabelais’s Pantagruel series. This was after having read a fair deal of adolescent and young adult fiction. By today’s standards, I might be considered fairly well-read.

My life has changed a lot, since that time. I didn’t exactly stop reading fiction but my move through graduate school eventually shifted my reading time from fiction to academic texts. And I started writing more and more, online and offline.
At the same time, the Web had also been making me shift from pointed longform texts to copious amounts of shortform text. Much more polyvocal than anything Bakhtin himself would have imagined.

(I’ve also been shifting from French to English, during that time. But that’s almost another story. Or it’s another part of the story which can remain in the backdrop without being addressed directly at this point. Ask, if you’re curious.)
The increase in my writing activity is, itself, a shift in the way I think, act, talk… and get feedback. See, the fact that I talk and write a lot, in a variety of circumstances, also means that I get a lot of people to play along. There’s still a risk of groupthink, in specific contexts, but one couldn’t say I keep getting things from the same perspective. In fact, the very Facebook conversation which sparked this blogpost is an example, as the people responding there come from relatively distant backgrounds (though there are similarities) and were not specifically queried about this. Their reactions have a very specific value, to me. Sure, it comes in the form of writing. But it’s giving me even more of something I used to find in writing: insight. The stuff you can’t get through Google.

So, back to books.

I dislike physical books. I wish I didn’t have to use them to read what I want to read. I do have a much easier time with short reading sessions on a computer screen than with what would turn into rather long periods of time holding a book in my hands.

Physical books just don’t do it for me, anymore. The printing press is, like, soooo 1454!

Yes, books had “a good run.” No, nothing replaces them. That’s not the way it works. Movies didn’t replace theater, television didn’t replace radio, automobiles didn’t replace horses, photographs didn’t replace paintings, books didn’t replace orality. In fact, the technology itself doesn’t do much by itself. But social contexts recontextualize tools. If we take technology to be the set of both tools and the knowledge surrounding it, technology mostly goes through social processes, since tool repertoires and corresponding knowledge mostly shift in social contexts, not in their mere existence. Gutenberg’s Bible was a “game-changer” for social, as well as technical reasons.

And I do insist on orality. Journalists and other “communication is transmission of information” followers of Shannon & Weaver tend to portray writing as the annihilation of orality. How long after the invention of writing did Homer transfer an oral tradition to the written medium? Didn’t Albert Lord show the vitality of the epic well into the 20th Century? Isn’t a lot of our knowledge constructed through oral means? Is Internet writing that far, conceptually, from orality? Is literacy a simple on/off switch?

Not only did I maintain an interest in orality through the most book-focused moments of my life but I probably care more about orality now than I ever did. So I simply cannot accept the idea that books have simply replaced the human voice. It doesn’t add up.

My guess is that books won’t simply disappear either. There should still be a use for “coffee table books” and books as gifts or collectables. Records haven’t disappeared completely and CDs still have a few more days in dedicated stores. But, in general, we’re moving away from the “support medium” for “content” and more toward actual knowledge management in socially significant contexts.

In these contexts, books often make little sense. Reading books is passive, while these contexts are (hyper- and inter-)active.

Case in point (and the reason I felt compelled to post that Facebook/Twitter quip)…
I heard about a “just released” French book during a Swiss podcast. Of course, it had taken a while to write and publish. So, by the time I heard about it, there was no way to participate in the construction of knowledge which led to it. It was already “set in stone” as an “opus.”

I looked for it at diverse bookstores. One bookstore could eventually order it, but it would take weeks and be quite costly (for something I was mostly curious about, not depending on for anything really important).

I eventually found it in the catalogue at BANQ and reserved it. It wasn’t on the shelves yet, so I had to wait until it was: from November to February. I eventually got a message that I had a couple of days to pick up my reservation, but I wasn’t able to go, so it went back on the “just released” shelves. I had the full call number, but books in that section aren’t shelved in call-number sequence. I spent several minutes looking back and forth between eight shelves, only to find out that there were four more shelves in the “humanities and social sciences” section. The book I was looking for was on one of those shelves.

So, I was able to borrow it.

Phew!

In the metro, I browse through it. Given my academic reflex, I look for the back matter first. No bibliography, no index, a ToC with rather obscure titles (at random: «Taylor toujours à l’œuvre»/”Taylor still at work,” which I’m assuming to be a reference to continuing taylorism). The book is written by two separate dudes but there’s no clear indication of who wrote what. There’s a preface (by somebody else) but no “acknowledgments” section, so it’s hard to see who’s in their network. Footnotes include full URLs to rather broad sites as well as “discussion with <an author’s name>.” The back cover starts off with references to French popular culture (including something about “RER D,” which would be difficult to search). Information about both authors fits in less than 40 words (including a list of publication titles).

The book itself is fairly large print and weighs almost a pound (422g, to be exact) for 327 pages (including front and back matter). Each page seems to be about 50 characters per line, about 30 lines per page. So, about half a million characters or 3,500 tweets (including spaces). At 5+1 characters per word, about 80,000 words (I have a 7,500-word blogpost, written in an afternoon). At about 250 words per minute, about five hours of reading. This book is listed at 19€ (about 27CAD).
There’s no direct way to do any “postprocessing” with the text: no speech synthesis for the visually impaired, no concordance analysis, no machine translation; even a simple search for occurrences of “Sarkozy” is impossible. Not to mention sharing quotes with students or annotating in an easy-to-retrieve fashion (à la Diigo).
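Incidentally, the back-of-the-envelope figures above hold up. A quick sketch, using only the numbers from this paragraph (integer arithmetic, 140-character tweets):

```shell
# All inputs come from the paragraph above.
chars_per_line=50
lines_per_page=30
pages=327

chars=$((chars_per_line * lines_per_page * pages))  # ≈ half a million characters
tweets=$((chars / 140))                             # ≈ 3,500 tweets
words=$((chars / 6))                                # 5+1 characters per word
minutes=$((words / 250))                            # ≈ five hours of reading

echo "$chars characters, $tweets tweets, $words words, $minutes minutes"
# → 490500 characters, 3503 tweets, 81750 words, 327 minutes
```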

Like any book, it’s impossible to read in the dark, and I actually have a hard time finding a spot where I can read with appropriate lighting.

Flipping through the book, I get the impression that there are some valuable things to spark discussions, but there’s also a whole lot of redundancy with frequent discussions on the topic (the Future of Journalism, or #FoJ, as a matter of fact). My guesstimate is that, out of five hours of reading, I’d get at most 20 pieces of insight that I’d have no way to find elsewhere. Comparable books to which I listened as audiobooks, recently, contained even fewer. In other words, I’d have at most 20 tweets’ worth of things to say from the book. Almost a 200:1 compression.
Direct discussion with the authors could produce much more insight. The radio interviews with these authors already contained a few insight hints, which predisposed me to look for more. But, so many months later, without the streams of thought which animated me at the time, I end up with something much less valuable than what I wanted to get, back in November.

Bottom line: books aren’t necessarily “broken” as a tool. They just don’t fit my life, anymore.


Why I Need an iPad

I’m one of those who feel the iPad is the right tool for the job.

This is mostly meant as a reply to this blogthread. But it’s also more generally about my personal reaction to Apple’s iPad announcement.

Some background.

I’m an ethnographer and a teacher. I read a fair deal, write a lot of notes, and work in a variety of contexts. These days, I tend to spend a good amount of time in cafés and other public places where I like to work without being too isolated. I also commute using public transit, listen to lots of podcasts, and create my own. I’m also very aural.

I’ve used a number of PDAs, over the years, from a Newton MessagePad 130 (1997) to a variety of PalmOS devices (until 2008). In fact, some people readily associated me with PDA use.

As soon as I learnt about the iPod touch, I needed one. As soon as I heard about the SafariPad, I wanted one. I’ve been an intense ‘touch user since the iPhone OS 2.0 release and I’m a happy camper.

(A major reason I never bought an iPhone, apart from price, is that it requires a contract.)

In my experience, the ‘touch is the most appropriate device for all sorts of activities which are either part of another activity (reading during a commute) or are simply too short in duration to constitute an actual “computer session.” You don’t “sit down to work at your ‘touch” the way you might sit in front of a laptop or desktop screen. This works great for “looking up stuff” or “checking email.” It also makes a lot of sense during commutes in crowded buses or metros.

In those cases, the iPod touch is almost ideal. Ubiquitous access to Internet would be nice, but that’s not a deal-breaker. Alternative text-input methods would help in some cases, but I do end up being about as fast on my ‘touch as I was with Graffiti on PalmOS.

For other tasks, I have a Mac mini. Sure, it’s limited. But it does the job. In fact, I have no intention of switching for another desktop and I even have an eMachines collecting dust (it’s too noisy to make a good server).

What I miss, though, is a laptop. I used an iBook G3 for several years and loved it. A little while later, I was able to share a MacBook with somebody else and it was a wonderful experience. I even got to play with the OLPC XO for a few weeks. That one was not so pleasant an experience, but it did give me a taste for netbooks. And it made me think about other types of iPhone-like devices, especially in educational contexts. (As I mentioned, I’m a teacher.)

I’ve been laptop-less for a while, now. And though my ‘touch replaces it in many contexts, there are still times when I’d really need a laptop. And these have to do with what I might call “mobile sessions.”

For instance: liveblogging a conference or meeting. I’ve used my ‘touch for this very purpose on a good number of occasions. But it gets rather uncomfortable, after a while, and it’s not very fast. A laptop is better for this, with a keyboard and a larger form factor. But the iPad will be even better because of lower risks of RSI. A related example: just imagine TweetDeck on iPad.

Possibly my favourite example of a context in which the iPad will be ideal: presentations. Even before learning about the prospect of getting iWork on a tablet, presentations were a context in which I really missed a laptop.

Sure, in most cases, these days, there’s a computer (usually a desktop running XP) hooked to a projector. You just need to download your presentation file from Slideshare, show it from Prezi, or transfer it through USB. No biggie.

But it’s not the extra steps which change everything. It’s the uncertainty. Even if it’s often unfounded, I usually get worried that something might just not work along the way. The slides might not show up the same way you designed them because something is missing on that computer, or because that computer is simply using a different version of the presentation software. In fact, that software is typically Microsoft PowerPoint which, while convenient, fits much less in my workflow than does Apple Keynote.

The other big thing about presentations is the “presenter mode,” allowing you to get more content than (or different content from) what the audience sees. In most contexts where I’ve used someone else’s computer to do a presentation, the projector was mirroring the computer’s screen, not using it as a different space. PowerPoint has this convenient “presenter view” but very rarely did I see it as an available option on “the computer in the room.” I wish I could use my ‘touch to drive presentations, which I could do if I installed software on that “computer in the room.” But it’s not something that is likely to happen, in most cases.

A MacBook solves all of these problems, and it’s an obvious use for laptops. But how, then, is the iPad better? Basically because of the interface. Switching slides on a laptop isn’t hard, but it’s more awkward than we realize. Even before watching the demo of Keynote on the iPad, I could simply imagine the actual pleasure of flipping through slides using a touch interface. The fit is “natural.”

I sincerely think that Keynote on the iPad will change a number of things, for me. Including the way I teach.

Then, there’s reading.

Now, I’m not one of those people who just can’t read on a computer screen. In fact, I even grade assignments directly from the screen. But I must admit that online reading hasn’t been ideal, for me. I’ve read full books as PDF files or dedicated formats on PalmOS, but it wasn’t so much fun, in terms of the reading process. And I’ve used my ‘touch to read things through Stanza or ReadItLater. But it doesn’t work so well for longer reading sessions. Even in terms of holding the ‘touch, it’s not so obvious. And, what’s funny, even a laptop isn’t that ideal, for me, as a reading device. In a sense, this is when the keyboard “gets in the way.”

Sure, I could get a Kindle. I’m not a big fan of dedicated devices and, at least on paper, I find the Kindle a bit limited for my needs. Especially in terms of sources. I’d like to be able to use documents in a variety of formats and put them in a reading list, for extended reading sessions. No, not “curled up in bed.” But maybe lying down in a sofa without external lighting. Given my experience with the ‘touch, the iPad is very likely the ideal device for this.

Then, there’s the overall “multi-touch device” thing. People have already been quite creative with the small touchscreen on iPhones and ‘touches; I can just imagine what may be done with a larger screen. Lots has been said about differences in “screen real estate” in laptop or desktop screens. We all know it can make a big difference in terms of what you can display at the same time. In some cases, a second screen isn’t even a luxury, for instance when you code and display a page at the same time (LaTeX, CSS…). Certainly, the same qualitative difference applies to multitouch devices. Probably even more so, since the display is also used for input. What Han found missing in the iPhone’s multitouch was the ability to use both hands. With the iPad, Han’s vision is finding its space.

Oh, sure, the iPad is very restricted. For instance, it’s easy to imagine how much more useful it’d be if it did support multitasking with third-party apps. And a front-facing camera is something I was expecting in the first iPhone. It would just make so much sense that a friend seems very disappointed by this lack of videoconferencing potential. But we’re probably talking about predetermined expectations, here. We’re comparing the iPad with something we had in mind.

Then, there’s the issue of the competition. Tablets have been released and some multitouch tablets have recently been announced. What makes the iPad better than these? Well, we could all get into the same OS wars as have been happening with laptops and desktops. In my case, the investment in applications, files, and expertise that I have made in a Mac ecosystem made my XP years relatively uncomfortable and made me appreciate returning to the Mac. My iPod touch fits right in that context. Oh, sure, I could use it with a Windows machine, which is in fact what I did for the first several months. But the relationship between the iPhone OS and Mac OS X is such that using devices in those two systems is much more efficient, in terms of my own workflow, than anything I could get while using XP and iPhone OS. There are some technical dimensions to this, such as the integration between iCal and the iPhone OS Calendar, or even the filesystem. But I’m actually thinking more about the cognitive dimensions of recognizing some of the same interface elements. “Look and feel” isn’t just about shiny and “purty.” It’s about interactions between a human brain, a complex sensorimotor apparatus, and a machine. Things go more quickly when you don’t have to think too much about where some tools are, as you’re working.

So my reasons for wanting an iPad aren’t about being dazzled by a revolutionary device. They are about the right tool for the job.


Installing BuddyPress on a Webhost

[Jump here for more technical details.]

A few months ago, I installed BuddyPress on my Mac to try it out. It was a bit of an involved process, so I documented it:

WordPress MU, BuddyPress, and bbPress on Local Machine « Disparate.

More recently, I decided to get a webhost. Both to run some tests and, eventually, to build something useful. BuddyPress seems like a good way to go at it, especially since it’s improved a lot, in the past several months.

In fact, the installation process is much simpler, now, and I ran into some difficulties because I was following my own instructions (though adapting the process to my webhost). So a new blogpost may be in order. My previous one was very (possibly too) detailed. This one is much simpler, technically.

One thing to make clear is that BuddyPress is a set of plugins meant for WordPress µ (“WordPress MU,” “WPMU,” “WPµ”), the multi-user version of the WordPress blogging platform. BP is meant as a way to make WPµ more “social,” with such useful features as flexible profiles, user-to-user relationships, and forums (through bbPress, yet another one of those independent projects based on WordPress).

While BuddyPress depends on WPµ and does follow a blogging logic, I’m thinking about it as a social platform. Once I build it into something practical, I’ll probably use the blogging features but, in a way, it’s more of a tool to engage people in online social activities. BuddyPress probably doesn’t work as a way to “build a community” from scratch. But I think it can be quite useful as a way to engage members of an existing community, even if this engagement follows a blogger’s version of a Pareto distribution (which, hopefully, is dissociated from elitist principles).

But I digress, of course. This blogpost is more about the practical issue of adding a BuddyPress installation to a webhost.

Webhosts have come a long way, recently. Especially in terms of shared webhosting focused on LAMP (or PHP/MySQL, more specifically) for blogs and content-management. I don’t have any data on this, but it seems to me that a lot of people these days are relying on third-party webhosts instead of their own servers when they want to build on their own blogging and content-management platforms. Of course, there are a lot more people who prefer to use preexisting blog and content-management systems. For instance, it seems that there are more bloggers on WordPress.com than on other WordPress installations. And WP.com blogs probably represent a small number of people in comparison to the number of people who visit these blogs. So, in a way, those who run their own WordPress installations are a minority in the group of active WordPress bloggers which, itself, is a minority of blog visitors. Again, let’s hope this “power distribution” is not a basis for elite theory!

Yes, another digression. I did tell you to skip, if you wanted the technical details!

I became part of the “self-hosted WordPress” community through a project on which I started work during the summer. It’s a website for an academic organization and I’m acting as the organization’s “Web Guru” (no, I didn’t choose the title). The site was already based on WordPress but I was rebuilding much of it in collaboration with the then-current “Digital Content Editor.” Through this project, I got to learn a lot about WordPress, themes, PHP, CSS, etc. And it was my first experience using a cPanel- (and Fantastico-)enabled webhost (BlueHost, at the time). It’s also how I decided to install WordPress on my local machine and did some amount of work from that machine.

But the local installation wasn’t an ideal solution for two reasons: a) I had to be in front of that local machine to work on this project; and b) it was much harder to show the results to the person with whom I was collaborating.

So, in the Fall, I decided to get my own staging server. After a few quick searches, I decided on HostGator, partly because it was available on a monthly basis. Since this staging server was meant as a temporary solution, HG was close to ideal. It was easy to set up as a PayPal “subscription,” wasn’t that expensive (9$/month), had adequate support, and included everything that I needed at that point to install a current version of WordPress and play with theme files (after importing content from the original site). I’m really glad I made that decision because it made a number of things easier, including working from different computers, and sending links to get feedback.

While monthly HostGator fees were reasonable, it was still a more expensive proposition than what I had in mind for a longer-term solution. So, recently, a few weeks after releasing the new version of the organization’s website, I decided to cancel my HostGator subscription. A decision I made without any regret or bad feeling. HostGator was good to me. It’s just that I didn’t have any reason to keep that account or to do anything major with the domain name I was using on HG.

Though only a few weeks have elapsed since I canceled that account, I didn’t immediately set out to find a replacement; I didn’t go straight from HostGator to another webhost.

But having my own webhost still remained at the back of my mind as something which might be useful. For instance, while not really making a staging server necessary, a new phase in the academic website project brought up a sandboxing idea. Also, I went to a “WordPress Montreal” meeting and got to think about further WordPress development/deployment, including using BuddyPress for my own needs (both as my own project and as a way to build my own knowledge of the platform) instead of it being part of an organization’s project. I was also thinking about other interesting platforms which necessitate a webhost.

(More on these other platforms at a later point in time. Bottom line is, I’m happy with the prospects.)

So I wanted a new webhost. I set out to do some comparison shopping, as I’m wont to do. In my (allegedly limited) experience, finding the ideal webhost is particularly difficult. For one thing, search results are cluttered with a variety of “unuseful” things such as rants, advertising, and limited comparisons. And it’s actually not that easy to give a new webhost a try. For starters, these hosting companies don’t necessarily have the most liberal refund policies you could imagine. And switching a domain name between different hosts and registrars is a complicated process through which a name may remain “hostage.” Had I realized what was involved, I might have used a domain name to which I have no attachment, or eschewed the whole domain transition and just tried the webhost without a dedicated domain name.

Doh!
Live and learn. I sure do. Loving almost every minute of it.

At any rate, I had a relatively hard time finding my webhost.

I really didn’t need “bells and whistles.” For instance, all the AdSense, shopping cart, and other business-oriented features which seem to be publicized by most webhosting companies hold no interest for me.

I didn’t even care so much about absolute reliability or speed. What I plan to do with this host is fairly basic stuff. The core idea is to use my own host to bypass some limitations. For instance, WordPress.com doesn’t allow for plugins, yet most of the WordPress fun has to do with plugins.

I did want an “unlimited” host, as much as possible. Not because I expect to have huge resource needs, but because I just didn’t want to have to monitor bandwidth.

I thought that my needs would be basic enough that any cPanel-enabled webhost would fit. As far as I could see, I needed FTP access to something which had PHP 5 and MySQL 5. I expected to install things myself, without using the webhost’s scripts, but I also thought the host would have some useful scripts. Although I had already registered the domain I wanted to use (through Name.com), I thought it might be useful to have a free domain in the webhosting package. Not that domain names are expensive, it’s more of a matter of convenience in terms of payment or setup.

I ended up with FatCow. But, honestly, I’d probably go with a different host if I were to start over (which I may do with another project).

I paid 88$ for two years of “unlimited” hosting, which is quite reasonable. And, on paper, FatCow has everything I need (and a bunch of things I don’t need). The missing parts aren’t anything major but have to do with minor annoyances. In other words, no real deal-breaker, here. But there are a few things I wish I had realized before I committed to FatCow with a domain name I actually want to use.

Something which was almost a deal-breaker for me is the fact that FatCow requires payment for any additional subdomain. And these aren’t cheap: the minimum is 5$/month for five subdomains, up to 25$/month for unlimited subdomains! Even at a “regular” price of 88$/year for the basic webhosting plan, the “unlimited subdomains” feature (included in some webhosting plans elsewhere) is more than three times more expensive than the core plan.

As I don’t absolutely need extra subdomains, this is mostly a minor irritant. But it’s one reason I’ll probably be using another webhost for other projects.

Other issues with FatCow are probably not enough to motivate a switch.

For instance, the PHP version installed on FatCow (5.2.1) is a few minor releases behind the one needed by some interesting web applications. No biggie, especially if PHP is updated in a relatively reasonable timeframe. But it still makes for a slight frustration.

The MySQL version seems recent enough, but FatCow uses non-standard tools to manage it, which makes for some confusion. Attempting to create some MySQL databases with obvious names (say, “wordpress”) fails because the database allegedly exists (even though it doesn’t show up in the MySQL administration). In the same vein, the hostname of the MySQL server is <username>.fatcowmysql.com instead of the localhost most installers seem to expect. Easy to handle once you realize it, but another initial source of confusion.
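To illustrate, here is roughly what the database settings end up looking like in a WordPress wp-config.php on such a host. The constants (DB_NAME, DB_USER, DB_PASSWORD, DB_HOST) are WordPress’s standard settings, but every value below is a hypothetical placeholder; the one real point is that DB_HOST is not “localhost”:

```shell
# Writing out a sample excerpt; substitute your own FatCow username,
# database name, and credentials (all values here are made up).
cat > wp-config-excerpt.php <<'EOF'
<?php
define('DB_NAME', 'wpmu_example');               // as created in "Manage MySQL"
define('DB_USER', 'wpmu_user');
define('DB_PASSWORD', 'example-password');
define('DB_HOST', 'username.fatcowmysql.com');   // not 'localhost'
EOF
grep 'DB_HOST' wp-config-excerpt.php
```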

In terms of Fantastico-like simplified installation of webapps, FatCow uses InstallCentral, which looks like it might be its own Fantastico replacement. InstallCentral is decent enough as an installation tool, and FatCow does provide for some of the most popular blog and CMS platforms. But, in some cases, the application version installed by FatCow is old enough (2005!) that it requires multiple upgrades to get to a current version. Compared to other installation tools, FatCow’s InstallCentral doesn’t seem really efficient at keeping track of installed and released versions.

Something which is partly a neat feature and partly a potential issue is the way FatCow handles Apache-related security. This isn’t something which is so clear to me, so I might be wrong.

Accounts on both BlueHost and HostGator include a public_html directory where all sorts of things go, especially if they’re related to publicly-accessible content. This directory serves as the website’s root, so one expects content to be available there. The “index.html” or “index.php” file in this directory serves as the website’s frontpage. It’s fairly obvious, but it does require that one understand a few things about webservers. FatCow doesn’t seem to create a public_html directory in a user’s server space. Or, more accurately, it seems that the root directory (aka ‘/’) is in fact public_html. In this sense, a user doesn’t have to think about which directory to use to share things on the Web. But it also means that some higher-level directories aren’t available. I’ve already run into some issues with this and I’ll probably be looking for a workaround. I’m assuming there’s one. But it’s sometimes easier to use generally-applicable advice than to find a custom solution.

Further, in terms of access control… It seems that webapps typically make use of diverse directories and .htaccess files to manage some forms of access controls. Unix-style file permissions are also involved but the kind of access needed for a web app is somewhat different from the “User/Group/All” of Unix filesystems. AFAICT, FatCow does support those .htaccess files. But it has its own tools for building them. That can be a neat feature, as it makes it easier, for instance, to password-protect some directories. But it could also be the source of some confusion.
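For reference, a hand-written equivalent of what such a password-protection tool generates might look like the sketch below. The Apache basic-auth directives are standard; the path and realm name are hypothetical, and FatCow’s own tool would write something equivalent for you:

```shell
# Sketch: a minimal .htaccess enabling Apache basic authentication.
# The AuthUserFile path is a made-up example; it must point at a real
# password file on your host, ideally outside the web root.
cat > .htaccess <<'EOF'
AuthType Basic
AuthName "Members only"
AuthUserFile /home/users/web/example/.htpasswd
Require valid-user
EOF
# The matching password file would normally be created with something like:
#   htpasswd -c /home/users/web/example/.htpasswd someuser
cat .htaccess
```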

There are other issues I have with FatCow, but it’s probably enough for now.

So… On to the installation process… 😉

It only takes a few minutes and is rather straightforward. This is the most verbose version of that process you could imagine…

Surprised? 😎

Disclaimer: I’m mostly documenting how I did it and there are some things about which I’m unclear. So it may not work for you. If it doesn’t, I may be able to help but I provide no guarantee that I will. I’m an anthropologist, not a Web development expert.

As always, YMMV.

A few instructions here are specific to FatCow, but the general process is probably valid on other hosts.

I’m presenting things in a sequence which should make sense. I used a slightly different order myself, but I think this one should still work. (If it doesn’t, drop me a comment!)

In these instructions, straight quotes (“”) are used to isolate elements from the rest of the text. They shouldn’t be typed or pasted.

I use “example.com” to refer to the domain on which the installation is done. In my case, it’s the domain name I transferred to FatCow from another registrar, but it could probably be done without a dedicated domain (in which case it would be “<username>.fatcow.com” where “<username>” is your FatCow username).

I started with creating a MySQL database for WordPress MU. FatCow does have phpMyAdmin but the default tool in the cPanel is labeled “Manage MySQL.” It’s slightly easier to use for creating new databases than phpMyAdmin because it creates the database and initial user (with confirmed password) in a single, easy-to-understand dialog box.

So I created that new database, user, and password, noting down this information. Since that password appears in clear text at some point and can easily be changed through the same interface, I used one which was easy to remember but wasn’t one I use elsewhere.
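Under the hood, that dialog box presumably amounts to something like the following SQL; the database name, username, and password here are placeholders:

```sql
-- Create the WordPress MU database and a dedicated user for it.
CREATE DATABASE wpmu;
CREATE USER 'wpmu_user'@'%' IDENTIFIED BY 'memorable-but-not-reused';
GRANT ALL PRIVILEGES ON wpmu.* TO 'wpmu_user'@'%';
FLUSH PRIVILEGES;
```

The same statements could be run through phpMyAdmin’s SQL tab, which is one reason the Manage MySQL tool is mostly a convenience.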
Then, I downloaded the following files to my local machine in order to upload them to my FatCow server space. The upload can be done through either FTP or FatCow’s FileManager. I tend to prefer FTP (via CyberDuck on the Mac or FileZilla on PC). But the FileManager does allow for easy uploads.
(Wish it could be more direct, using the HTTP links directly instead of downloading to upload. But I haven’t found a way to do it through either FTP or the FileManager.)
At any rate, here are the four files I transferred to my FatCow space, using .zip when there’s a choice (the .tar.gz “tarball” versions also work but require a couple of extra steps).
  1. WordPress MU (wordpress-mu-2.9.1.1.zip, in my case)
  2. Buddymatic (buddymatic.0.9.6.3.1.zip, in my case)
  3. EarlyMorning (only one version, it seems)
  4. EarlyMorning-BP (only one version, it seems)

Only the WordPress MU archive is needed to install BuddyPress. The last three files are needed for EarlyMorning, a BuddyPress theme that I found particularly neat. It’s perfectly possible to install BuddyPress without this specific theme. (Although, in that case, you need to install a BuddyPress-compatible theme, if only by moving some folders to make the default theme available, as I explained in point 15 of that previous tutorial.) Buddymatic itself is a theme framework which includes some child themes, so you don’t need to install EarlyMorning. But installing it is easy enough that I’m adding instructions related to that theme.

These files can be uploaded anywhere in my FatCow space. I uploaded them to a kind of test/upload directory, just to keep things clear for myself.

A major FatCow idiosyncrasy is its FileManager (actually called “FileManager Beta” in the documentation but showing up as “FileManager” in the cPanel). From my experience with both BlueHost and HostGator (two well-known webhosting companies), I can say that FC’s FileManager is quite limited. One thing it doesn’t do is uncompress archives. So I have to resort to the “Archive Gateway,” which is surprisingly slow and cumbersome.

At any rate, I used that Archive Gateway to uncompress the four files. WordPress µ first (in the root directory, or “/”), then both Buddymatic and EarlyMorning in “/wordpress-mu/wp-content/themes” (you can choose the output directory for zip and tar files), and finally EarlyMorning-BP (anywhere; individual files are moved later). To uncompress each file, select it in the dropdown menu (it can be located in any subdirectory; Archive Gateway looks everywhere), add the output directory in the appropriate field in the case of Buddymatic or EarlyMorning, and press “Extract/Uncompress”. Wait for a message (in green) at the top of the window saying that the file has been uncompressed successfully.

Then, in the FileManager, the contents of the EarlyMorning-BP directory have to be moved to “/wordpress-mu/wp-content/themes/earlymorning”. (I thought they could be uncompressed there directly, but that created an extra folder.) To move those files in the FileManager, I browse to that earlymorning-bp directory, click on the checkbox to select all, click on the “Move” button (fourth from right, marked with a blue folder), and add the output path: /wordpress-mu/wp-content/themes/earlymorning

These files are tweaks to make the EarlyMorning theme work with BuddyPress.
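For what it’s worth, on a host that does offer shell access, the whole Archive Gateway and FileManager dance above collapses into a few commands. This is only a sketch: it builds a dummy archive in a scratch directory so it can run anywhere, but the paths mirror the ones used in this walkthrough.

```shell
# Scratch area standing in for the server space; on FatCow itself,
# Archive Gateway and the FileManager are the only options.
set -e
root=$(mktemp -d)
cd "$root"

# Dummy stand-in for the downloaded EarlyMorning-BP archive.
mkdir -p earlymorning-bp
echo "<?php /* tweak */" > earlymorning-bp/header.php
tar -czf earlymorning-bp.tar.gz earlymorning-bp
rm -r earlymorning-bp

# "Archive Gateway" step: uncompress the archive.
mkdir -p wordpress-mu/wp-content/themes/earlymorning
tar -xzf earlymorning-bp.tar.gz

# "FileManager" step: move the contents into the earlymorning theme.
mv earlymorning-bp/* wordpress-mu/wp-content/themes/earlymorning/

ls wordpress-mu/wp-content/themes/earlymorning
```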

Then, I had to change two files, through the FileManager (it could also be done with an FTP client).

One change is to EarlyMorning’s style.css:

/wordpress-mu/wp-content/themes/earlymorning/style.css

There, “Template: thematic” has to be changed to “Template: buddymatic” (so, “the” should be changed to “buddy”).

That change is needed because the EarlyMorning theme is a child theme of the “Thematic” WordPress parent theme. Buddymatic is a BuddyPress-savvy version of Thematic and this changes the child-parent relation from Thematic to Buddymatic.
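If you’d rather not hunt for that line in the FileManager’s editor, the same edit can be scripted. A minimal sketch (the style.css here is a stand-in created on the spot; on the server, the real file is at the path above):

```shell
set -e
workdir=$(mktemp -d)

# Stand-in style.css with the relevant header line.
cat > "$workdir/style.css" <<'EOF'
/*
Theme Name: EarlyMorning
Template: thematic
*/
EOF

# Switch the parent theme from Thematic to Buddymatic.
# (GNU sed syntax; on BSD/macOS it would be: sed -i '' ...)
sed -i 's/^Template: thematic$/Template: buddymatic/' "$workdir/style.css"

grep '^Template:' "$workdir/style.css"
```

The grep at the end should print the updated “Template: buddymatic” line.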

The other change is in the Buddymatic “extensions”:

/wordpress-mu/wp-content/themes/buddymatic/library/extensions/buddypress_extensions.php

There, on line 39, “$bp->root_domain” should be changed to “bp_root_domain()”.

This change is needed because of something I’d consider a bug but that a commenter on another blog was kind enough to troubleshoot. Without this modification, the login button in BuddyPress wasn’t working because it was going to the website’s root (example.com/wp-login.php) instead of the WPµ installation (example.com/wordpress-mu/wp-login.php). I was quite happy to find this workaround but I’m not completely clear on the reason it works.

Then, something I did which might not be needed is to rename the “wordpress-mu” directory. Without that change, the BuddyPress installation would sit at “example.com/wordpress-mu,” which seems a bit cryptic for users. In my mind, “example.com/<name>,” where “<name>” is something meaningful like “social” or “community” works well enough for my needs. Because FatCow charges for subdomains, the “<name>.example.com” option would be costly.

(Of course, WPµ and BuddyPress could be installed in the site’s root and the frontpage for “example.com” could be the BuddyPress frontpage. But since I think of BuddyPress as an add-on to a more complete site, it seems better to have it as a level lower in the site’s hierarchy.)

With all of this done, the actual WPµ installation process can begin.

The first thing is to browse to that directory in which WPµ resides, either “example.com/wordpress-mu” or “example.com/<name>” with the “<name>” you chose. You’re then presented with the WordPress µ Installation screen.

Since FatCow charges for subdomains, it’s important to choose the following option: “Sub-directories (like example.com/blog1).” It’s actually by selecting the other option that I realized that FatCow restricted subdomains.

The Database Name, username and password are the ones you created initially with Manage MySQL. If you forgot that password, you can actually change it with that same tool.

An important FatCow-specific point, here, is that “Database Host” should be “<username>.fatcowmysql.com” (where “<username>” is your FatCow username). In my experience, other webhosts use “localhost” and WPµ defaults to that.
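Concretely, the wp-config.php that the installer generates should end up containing something along these lines (with “<username>” being, as before, your FatCow username):

```php
// FatCow-specific database host; most webhosts would have 'localhost' here.
define('DB_HOST', '<username>.fatcowmysql.com');
```

If the installation ever needs to be repaired by hand, this is the line to check first.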

You’re asked to give a name to your blog. In a way, though, if you think of BuddyPress as more of a platform than a blogging system, that name should be rather general. As you’re installing “WordPress Multi-User,” you’ll be able to create many blogs with more specific names, if you want. But the name you’re entering here is for BuddyPress as a whole. As with <name> in “example.com/<name>” (instead of “example.com/wordpress-mu”), it’s a matter of personal opinion.

Something I noticed with the EarlyMorning theme is that it’s a good idea to keep the main blog’s name relatively short. I used thirteen characters and it seemed to fit quite well.

Once you’re done filling in this page, WPµ is installed in a flash. You’re then presented with some information about your installation. It’s probably a good idea to note down some of that information, including the full paths to your installation and the administrator’s password.

But the first thing you should do, as soon as you log in with “admin” as username and the password provided, is probably to change that administrator password. (In fact, frequent advice in the WordPress community is to create a new administrator account, with a username other than “admin,” and delete the “admin” account. Given some security issues with WordPress in the past, it seems like a good piece of advice. But I won’t describe it here. I did do it in my installation and it’s quite easy to do in WPµ.)

Then, you should probably enable plugins here:

example.com/<name>/wp-admin/wpmu-options.php#menu

(From what I understand, it might be possible to install BuddyPress without enabling plugins, since you’re logged in as the administrator, but it still makes sense to enable them and it happens to be what I did.)

You can also change a few other options, but these can be set at another point.

One option which is probably useful is this one:

Allow new registrations:
  - Disabled
  - Enabled. Blogs and user accounts can be created.
  - Only user accounts can be created.

Obviously, it’s not necessary. But in the interest of opening up the BuddyPress installation to the wider world without worrying too much about a proliferation of blogs, it might make sense. You may end up with some fake user accounts, but that shouldn’t be a difficult problem to solve.

Now comes the installation of the BuddyPress plugin itself. You can do so by going here:

example.com/<name>/wp-admin/plugin-install.php

And do a search for “BuddyPress” as a term. The plugin you want was authored by “The BuddyPress Community.” (In my case, version 1.1.3.) Click the “Install” link to bring up the installation dialog, then click “Install Now” to actually install the plugin.

Once the install is done, click the “Activate” link to complete the basic BuddyPress installation.

You now have a working installation of BuddyPress but the BuddyPress-savvy EarlyMorning isn’t enabled. So you need to go to “example.com/<name>/wp-admin/wpmu-themes.php” to enable both Buddymatic and EarlyMorning. You should then go to “example.com/<name>/wp-admin/themes.php” to activate the EarlyMorning theme.

Something which tripped me up, because it’s now much easier than it used to be: forums (provided through bbPress) are now, literally, a one-click install. If you go here:

example.com/<name>/wp-admin/admin.php?page=bb-forums-setup

You can set up a new bbPress install (“Set up a new bbPress installation”) and everything will work wonderfully in terms of having forums fully integrated in BuddyPress. It’s so seamless that I wasn’t completely sure it had worked.

Besides this, I’d advise that you set up a few widgets for the BuddyPress frontpage. You do so through an easy-to-use drag-and-drop interface here:

example.com/<name>/wp-admin/widgets.php

I especially advise you to add the Twitter RSS widget because it seems to me to fit right in. If I’m not mistaken, the EarlyMorning theme contains specific elements to make this widget look good.

After that, you can just have fun with your new BuddyPress installation. The first thing I did was to register a new user. To do so, I logged out of my admin account and clicked on the Sign Up button. Since I “allow new registrations,” it’s a very simple process. In fact, this is one place where I think that BuddyPress shines. Something I didn’t explain is that you can add a series of fields for that registration and the user profile which goes with it.

The whole process really shouldn’t take very long. In fact, the longest parts probably have to do with waiting for the Archive Gateway.

The rest is “merely” to get people involved in your BuddyPress installation. It can happen relatively easily, if you already have a group of people trying to do things together online. But it can be much more complicated than any software installation process… 😉


Development and Quality: Reply to Agile Diary

Former WiZiQ product manager Vikrama Dhiman responded to one of my tweets with a full-blown blogpost, thereby giving support to Matt Mullenweg‘s point that microblogging goes hand-in-hand with “macroblogging.”

My tweet:

enjoys draft æsthetics yet wishes more developers would release stable products. / adopte certains produits trop rapidement. [“adopts some products too quickly.”]

Vikrama’s post:

Good Enough Software Does Not Mean Bad Software « Agile Diary, Agile Introduction, Agile Implementation.

My reply:

“To an engineer, good enough means perfect. With an artist, there’s no such thing as perfect.” (Alexander Calder)

Thanks a lot for your kind comments. I’m very happy that my tweet (and status update) triggered this.

A bit of context for my tweet (actually, a post from Ping.fm, meant as a status update, thereby giving support in favour of conscious duplication, whatever the opponents of duplication may say).

I’ve been thinking about what I call the “draft æsthetics.” In fact, I did a podcast episode about it. My description of that episode was:

Sometimes, there is such a thing as “Good Enough.”

Though I didn’t emphasize the “sometimes” part in that podcast episode, it was an important part of what I wanted to say. In fact, my intention wasn’t to defend draft æsthetics but to note that there seems to be a tendency toward this æsthetic mode. I do situate myself within that mode in many things I do, but it really doesn’t mean that this mode should be the exclusive one used in any context.

That aforequoted tweet was thus a response to my podcast episode on draft æsthetics. “Yes, ‘good enough’ may work, sometimes. But it needs not be applied in all cases.”

As I often get into convoluted discussions with people who seem to think that I condone or defend a position because I take it for myself, the main thing I’d say there is that I’m not only a relativist but I cherish nuance. In other words, my tweet was a way to qualify the core statement I was talking about in my podcast episode (that “good enough” exists, at times). And that statement isn’t necessarily my own. I notice a pattern by which this statement seems to be held as accurate by people. I share that opinion, but it’s not a strongly held belief of mine.

Of course, I digress…

So, the tweet which motivated Vikrama had to do with my approach to “good enough.” In this case, I tend to think about writing but in view of Eric S. Raymond’s approach to “Release Early, Release Often” (RERO). So there is a connection to software development and geek culture. But I think of “good enough” in a broader sense.

Disclaimer: I am not a coder.

The Calder quote remained in my head, after it was mentioned by a colleague who had read it in a local newspaper. One reason it struck me is that I spend some time thinking about artists and engineers, especially in social terms. I spend some time hanging out with engineers but I tend to be more on the “artist” side of what I perceive to be an axis of attitudes found in some social contexts. I do get a fair deal of flack for some of my comments on this characterization and it should be clear that it isn’t meant to imply any evaluation of individuals. But, as a model, the artist and engineer distinction seems to work, for me. In a way, it seems more useful than the distinction between science and art.

An engineer friend with whom I discussed this kind of distinction was quick to point out that, to him, there’s no such thing as “good enough.” He was also quick to point out that engineers can be creative and so on. But the point isn’t to exclude engineers from artistic endeavours. It’s to describe differences in modes of thought, ways of knowing, approaches to reality. And the way these are perceived socially. We could do a simple exercise with terms like “troubleshooting” and “emotional” to be assigned to the two broad categories of “engineer” and “artist.” Chances are that clear patterns would emerge. Of course, many concepts are as important to both sides (“intelligence,” “innovation”…) and they may also be telling. But dichotomies have heuristic value.

Now, to go back to software development, the focus in Vikrama’s Agile Diary post…

What pushed me to post my status update and tweet is in fact related to software development. Contrary to what Vikrama presumes, it wasn’t about a Web application. And it wasn’t even about a single thing. But it did have to do with firmware development and with software documentation.

The first case is that of my Fonera 2.0n router. Bought it in early November and I wasn’t able to connect to its private signal using my iPod touch. I could connect to the router using the public signal, but that required frequent authentication, as annoying as with ISF. Since my iPod touch is my main WiFi device, this issue made my Fonera 2.0n experience rather frustrating.

Of course, I’ve been contacting Fon‘s tech support. As is often the case, that experience was itself quite frustrating. I was told to reset my touch’s network settings which forced me to reauthenticate my touch on a number of networks I access regularly and only solved the problem temporarily. The same tech support person (or, at least, somebody using the same name) had me repeat the same description several times in the same email message. Perhaps unsurprisingly, I was also told to use third-party software which had nothing to do with my issue. All in all, your typical tech support experience.

But my tweet wasn’t really about tech support. It was about the product. Though I find the overall concept behind the Fonera 2.0n router very interesting, its implementation seems to me to be lacking. In fact, it reminds me of several FLOSS development projects that I’ve been observing and, to an extent, benefitting from.

This is rapidly transforming into a rant I’ve had in my “to blog” list for a while about “thinking outside the geek box.” I’ll try to resist the temptation, for now. But I can mention a blog thread which has been on my mind, in terms of this issue.

Firefox 3 is Still a Memory Hog — The NeoSmart Files.

The blogpost refers to a situation in which, according to at least some users (including the blogpost’s author), Firefox uses up more memory than it should and becomes difficult to use. The thread has several comments providing support for statements about the relatively poor performance of Firefox on people’s systems, but it also has “contributions” from an obvious troll, who keeps blaming the problem on the users.

The thing about this is that it’s representative of a tricky issue in the geek world, whereby developers and users are perceived as belonging to two sides of a type of “class struggle.” Within the geek niche, users are often dismissed as “lusers.” Tech support humour includes condescending jokes about “code 6”: “the problem is 6″ from the screen.” The aforementioned Eric S. Raymond wrote a rather popular guide to asking questions in geek circles which seems surprisingly unaware of social and cultural issues, especially from someone with an anthropological background. Following that guide, one should switch their mind to that of a very effective problem-solver (i.e., the engineer frame) to ask questions “the smart way.” Not only is the onus on users, but any failure to comply with these rules may be met with this air of intellectual superiority encoded in that guide. IOW, “Troubleshoot now, ask questions later.”

Of course, many users are “guilty” of all sorts of “crimes” having to do with not reading the documentation which comes with the product or with simply not thinking about the issue with sufficient depth before contacting tech support. And as the majority of the population is on the “user” side, the situation can be described as both a form of marginalization (geek culture comes from “nerd” labels) and a matter of elitism (geek culture as self-absorbed).

This does have something to do with my Fonera 2.0n. With it, I was caught in this dynamic whereby I had to switch to the “engineer frame” in order to solve my problem. I eventually did solve my Fonera authentication problem, using a workaround mentioned in a forum post about another issue (free registration required). Turns out, the “release candidate” version of my Fonera’s firmware does solve the issue. Of course, this new firmware may cause other forms of instability and installing it required a bit of digging. But it eventually worked.

The point is that, as released, the Fonera 2.0n router is a geek toy. It’s unpolished in many ways. It’s full of promise in terms of what it may make possible, but it failed to deliver in terms of what a router should do (route a signal). In this case, I don’t consider it to be a finished product. It’s not necessarily “unstable” in the strict sense that a software engineer might use the term. In fact, I hesitated between different terms to use instead of “stable,” in that tweet, and I’m not that happy with my final choice. The Fonera 2.0n isn’t unstable. But it’s akin to an alpha version released as a finished product. That’s something we see a lot of, these days.

The main other case which prompted me to send that tweet is “CivRev for iPhone,” a game that I’ve been playing on my iPod touch.

I’ve played with different games in the Civ franchise and I even used the FLOSS version on occasion. Not only is “Civilization” a geek classic, but it does connect with some anthropological issues (usually in a problematic way: Civ’s worldview lacks anthro’s insight). And it’s the kind of game that I can easily play while listening to podcasts (I subscribe to a number of those).

What’s wrong with that game? Actually, not much. I can’t even say that it’s unstable, unlike some other items in the App Store. But there are a few things which aren’t optimal in terms of documentation. Not that it’s difficult to figure out how the game works. But the game is complex enough that some documentation is quite useful. Especially since it does change from one version of the game to another. Unfortunately, the online manual isn’t particularly helpful. Oh, sure, it probably contains all the information required. But it’s not available offline, isn’t optimized for the device it’s supposed to be used with, doesn’t contain proper links between sections, isn’t directly searchable, and isn’t particularly well-written. Not to mention that it seems to only be available in English even though the game itself is available in multiple languages (I play it in French).

Nothing tragic, of course. But coupled with my Fonera experience, it contributed to both a slight sense of frustration and this whole reflection about unfinished products.

Sure, it’s not much. But it’s “good enough” to get me started.


Teaching Models: A Response on “Teaching Naked”

Teaching Anthropology: Teaching Naked: Another one of those nothing new here movements.

[Had to split my response into several comments because of Blogger’s character limit. Thought I might as well post it here.]

Thanks for the ping. No problem about the way you do it. In fact, feel free to use my name. I use “Informal Ethnographer” accounts for social media stuff having to do with ethnographic disciplines, but this is more about pedagogy.

The main thing I noticed about this piece is that the author transforms an interesting and potentially insightful story about problems facing a large number of academic institutions “going forward” into one of those sterile debates about the causal relationships between technology and learning.

Apart from all those things we’ve discussed about teaching method (including the fact that I still use the boring PPT-lecture on occasion), there’s a lot of room for discussion about the “educational industry” not getting a hint from the recording and journalism industries. Because we’re academics, it’s great to deconstruct the technological determinism embedded in many of these discussions. But there’s also something rather pressing in terms of social change: the World in which we live is significantly different from the one in which we were born, when it comes to information. It relates to “information technology” but it goes way beyond tools.

And this is where I talk about surfing the wave instead of fighting it (or building windmills instead of shelters).

As I said elsewhere, I’ve only been teaching for ten years. When I started, in Fall 1999 at Indiana University Bloomington, it was both a baptism by fire and a culture shock. Many teachers complain about a “sense of entitlement” they get from their students, or about the consumer-based approach in academic institutions. There are a number of discussions about average class size or student-to-teacher ratios. Some talk about a so-called “me generation.” Others moan about the fact that students bring laptops to class or that teachers are forced to use tools they don’t want to use.

These are really not recent problems. However, they are different problems from the ones for which I was prepared.

I’d still say that they affect some institutions more than others (typically: prestigious universities in the United States). But they’re spreading throughout higher education.

Bowen perceives a specific problem: campus-based universities face competition from inexpensive and even free material online. As a dean, he wants to focus on the added value of campus experience, with a focus on the classroom as a context for discussion. It seems that he was hired precisely as an agent of change, just like some “mercurial CEOs” are hired when a corporation is in trouble.

The plan is relatively creative. Not so much in the restrictions on PPT use, but on the overall approach to differentiate his institution. It’s a marketing ploy, not a PR one.

As for the specifics of people’s concepts of “lecturing”… It seems that the mainstream notion about lecture is for a linear presentation with little or no interaction possible. Other teaching methods may involve some “lecturing,” but it seems that the core notion people are discussing is really this soliloquy mode of the teacher exposing ideas without input from the audience. One way to put it is that it’s a genre of performance, like a “stand-up” or an opera.

As a subgenre, “PowerPoint lectures” may deserve special consideration. As we all know, it’s quite possible to use PPT in ways which are creative, engaging, fun, deep, etc. But there are many <a href="http://www.jstor.org/pss/674535">keys</a> to the “PowerPoint lecture” frame. One is the use of some kind of “visual aid.” Another is the use of different slides as key timeposts in the performance. Or we could think about the fact that control over the actual PPT file strengthens the role differentiation between “lecturer” and “audience.” Not to mention the fact that it’s quite difficult to use PPT slides when everyone is in a circle.

So, yes, I’m giving some credence to the notion that PPT is a significant part of the lecturing model people are discussing <em>ad nauseam</em>.

Much of this discussion may relate to the perception that this performance genre (what I would call “straight lecture” or «cours magistral») is dominant at institutions of higher education. The preponderance of a given teaching style across a wide array of institutions, disciplines, and “levels” would merit careful assessment, but the perception is there. “People” (the general population of the United States, the <em>Chronicle</em>’s readership, English-speakers…?) get the impression that what teachers mostly do is stand in front of a class and talk by themselves for significant amounts of time with, maybe, a few questions thrown in at the end. Some people say that such “lectures” may not be incredibly effective. But the notion is still there. You may call this a “straw man,” but it was built a while ago.

Now… There are many ways to go from this whole concept of “straight lecturing.” One is the so-called “switcharound”: you go from lecturing (as a mode) to discussion or to group activities (as distinct modes). The notion, there, is apparently about the fact that “studies have shown” that, at this point in time, English-speaking students in the United States can’t concentrate for more than 20 minutes at a time. Or some such.

I reacted quite strongly when I heard this. For several reasons, including my personal experience of paying attention during class meetings lasting seven hours or more, some of which involved very limited interaction. I also reacted because I found the 50-minute period very constraining. And I always react to the “studies have shown” stance, which I find deeply problematic at an epistemological level. Is this really how we gain knowledge?

But I digress…

Another way to avoid “straight lectures” is to make lecturing itself more interactive. Many people have been doing this for a while. Chances are, it was done by a number of people during the 19th Century, as the “modern classroom” was invented. It can be remarkably effective and it seems to be quite underrated. An important thing to note: it’s significantly different from what people have in mind when they talk about “lecturing.” In fact, in a workshop I attended, the simple fact that a teacher was moving around the classroom as he was teaching was used as an example of an alternative to lecturing. Seems to me that most teachers do something like this. But it’s useful to think about the implications of using such “alternative methods.” Personally, though I frequently think about those methods and I certainly respect those who use them, I don’t tend to focus so much on this. I do use “alternative lecturing methods” like these on occasion but, when I lecture, I tend to adopt the classical approach.

Common alternatives to lecturing, mentioned in the CHE piece, include “seminars, practical sessions, and group discussions.” These all tend to be quite difficult to do in the… “lecture” hall. Even with smaller classes, a large room may be an obstacle. Though it’s not impossible to have, say, group discussions in an auditorium, few of us really end up doing it on a regular basis. I’m “guilty” of that: I have far fewer small-group discussions in rooms in which desks can’t be moved.

As for seminars, it’s clearly my favourite teaching mode/method and I tend to extend the concept too much. Though I tend to be critical of those rigid “factors” like class size, I keep bumping into a limit to seminar size and I run into major hurdles when I try to get more than 25 students working in a seminar mode.

We could also talk about distance education as an alternative to lecturing, though much of it has tended to be lecture-based. Distance education is interesting in many respects. While it’s really not new, it seems like it has been expanding a lot in the fairly recent past. Regardless of the number of people getting degrees through distance learning, it mostly seems that the concept has become much more accepted by the general population (in English-speaking contexts, at least) and some programmes in distance learning seem to be getting more “cred” than ever before. I don’t want to overstate this expansion but it’s interesting to think about the possible connections with social change. Telecommuting, students working full-time, combining studying with childcare, homestudy, rising tuition costs, customer-based approaches to education, the “me generation,” the ease of transmitting complex data online, etc.

Even when distance learners have to watch lectures, distance education can be conceived as an alternative to the “straight lecture.” Practical details such as scheduling aren’t insignificant, but there are more profound implications to the fact that lectures aren’t “delivered in a lecture hall.” To go back to the performance genre, there’s a difference between a drama piece and a movie. Both can be good, but they have very different implications.

My involvement with distance learning has to do with online learning. Last summer, I began teaching sociology to nursing students in Texas. From Montreal. I had been thinking about online teaching for a while and I’ve always had an online component to my courses. But last year was the first time I was able to teach a course without ever meeting those students.

My impression is that the rise of online education was the main thing Bowen had in mind. He clearly seems to think that this rise will only continue and that it may threaten campus-based institutions if they don’t do anything about it. The part which is surprising about his approach is that he actually advocates blended learning. Though we may disagree with Bowen on several points, it’d be difficult to compare him to an ostrich.

All of these approaches and methods have been known for a while. They all have their own advantages and they all help raise different issues. And they’ve been tested rather extensively by generation upon generation of teachers.

The focus, today, seems to be on a new set of approaches. Most of them have direct ties to well-established teaching models like seminars and distance education. So, they’re not really “new.” Yet they combine different things in such a way that they clearly require experimentation. We can hail them as “the future” or dismiss them as “trendy,” but they still afford some consideration as avenues for experimentation.

Many of them can be subsumed under the umbrella term “blended learning.” That term can mean different things to different people and some use it as a kind of buzzword. Analytically, it’s still a useful term.

Nellie Muller Deutsch is among those people who are currently doing PhD research on blended learning. We’ve had a number of discussions through diverse online groups devoted to learning and teaching. It’s possible that my thinking has been influenced by Nellie, but I was already interested in those topics long before interacting with her.

“Blended learning” implies some combination of classroom and online interactions between learners and teachers. The specific degree of “blending” varies a lot between contexts, but the basic concept remains. One might even argue that any educational context is blended, nowadays, since most teachers end up responding to at least “a few emails” (!) every semester. But the extensible concept of the “blended campus” easily goes beyond those direct exchanges.

What does this have to do with lectures? A lot, actually. Especially for those who have in mind a “monolithic” model for lecture-based courses, often forgetting (as many students do!) the role of office hours and other activities outside of the classroom.

Just as it’s possible but difficult to do a seminar in a lecture hall, it’s possible but difficult to do “straight lecture” in blended learning. Those professors and adjuncts who want to have as little interaction with students as possible may end up complaining about the amount of email they receive. In a sense, they’re “victims” of the move to a blended environment. One of the most convincing ideas I’ve heard in a teaching workshop was about moving email exchanges with individual students to forums, so that everyone can more effectively manage the channels of communication. Remarkably simple and compatible with many teaching styles. And a very reasonable use of online tools.

Bowen was advocating a very specific model for blended learning: students work with required readings on their own (presumably, using coursepacks and textbooks), read/watch/listen to lecture material online, and convene in the classroom to work with the material. His technique for making sure that students don’t “skip class” (which seems important in the United States, for some reason) is to give multiple-choice quizzes. Apart from justifying presence on campus (in the competition with distance learning), Bowen’s main point is about spending as much face-to-face time as possible in discussions. It’s not really an alternative to lectures if there are lectures online, but it’s a clear shift in focus from the “straight lecture” model. Fairly creative and it’s certainly worth some experimentation. But it’s only one among many possible approaches.

At least for the past few years, I’ve been posting material online both after and ahead of class meetings. I did notice a slight decrease in attendance, but that tends to matter very little for me. I also notice that many students tend to be more reluctant to go online to do things for my courses than one would expect from most of the discussions at an abstract level. But it’s still giving me a lot, including in terms of not having to rehash the same material over and over again (and again, <em>ad nauseam</em>).

I wouldn’t really call my approach “blended learning” because, in most of my upper-level courses at least, there’s still fairly little interaction happening online. But I do my part to experiment with diverse methods and approaches.

So…

None of this is meant to be about evaluating different approaches to teaching. I’m really not saying that my approach is better than anybody else’s. But I will say that it’s an appropriate fit with my perspective on learning as well as with my activities outside of the classroom. In other words, it’s not because I’m a geek that I expect anybody else to become a geek. I do, however, ask others to accept me as a geek.

And, Pamthropologist, you provided on my blog some context for several of the comments you’ve been making about lecturing. I certainly respect you and I think I understand what’s going on. In fact, I get the impression that you’re very effective at teaching anthropology and I wish your award-winning blog entry also carried an award for teaching. The one thing I find most useful, in all of this, is that you do discuss those issues. IMHO, the most important thing isn’t to find what the best model is but to discuss learning and teaching in a thoughtful manner so that everyone gets a voice. The fact that one of the most recent comments on your blog comes from a student in the Philippines speaks volumes about your openness.


Social Networks and Microblogging

Microblogging (Laconica, Twitter, etc.) is still a hot topic. For instance, during the past few episodes of This Week in Tech, comments were made about the preponderance of Twitter as a discussion theme: microblogging is so prominent on that show that some people complain that there’s too much talk about Twitter. Given the centrality of Leo Laporte’s podcast in geek culture (among Anglos, at least), such comments are significant.

The context for the latest comments about TWiT coverage of Twitter had to do with Twitter’s financials: during this financial crisis, Twitter is given funding without even asking for it. It may seem surprising at first, given that Twitter hasn’t publicized a business plan and doesn’t appear to be profitable at this time, but such funding says a lot about how prominent Twitter has become.

Along with social networking, microblogging is even discussed in mainstream media. For instance, Médialogues (a media critique on Swiss national radio) recently had a segment about both Facebook and Twitter. Just yesterday, Comedy Central’s The Daily Show with Jon Stewart made fun of compulsive twittering and mainstream media coverage of Twitter (original, Canadian access).

Clearly, microblogging is getting some mindshare.

What the future holds for microblogging is clearly uncertain. Anything can happen. My guess is that microblogging will remain important for a while (at least a few years) but that it will transform itself rather radically. Chances are that other platforms will have microblogging features (something Facebook can do with status updates and something Automattic has been trying to do with some WordPress themes). In these troubled times, Montreal startup Identi.ca received some funding to continue developing its open microblogging platform.  Jaiku, bought by Google last year, is going open source, which may be good news for microblogging in general. Twitter itself might maintain its “marketshare” or other players may take over. There’s already a large number of third-party tools and services making use of Twitter, from Mahalo Answers to Remember the Milk, Twistory to TweetDeck.

Together, these all point to the current importance of microblogging and the potential for further development in that sphere. None of this means that microblogging is “The Next Big Thing.” But it’s reasonable to expect that microblogging will continue to grow in use.

(For those who are trying to grok microblogging, Common Craft’s Twitter in Plain English video is among the best-known descriptions of Twitter and it seems like an efficient way to “get the idea.”)

One thing which is rarely mentioned about microblogging is the prominent social structure supporting it. Like “Social Networking Systems” (LinkedIn, Facebook, Ning, MySpace…), microblogging makes it possible for people to “connect” to one another (as contacts/acquaintances/friends). Like blogs, microblogging platforms make it possible to link to somebody else’s material and get notifications for some of these links (a bit like pings and trackbacks). Like blogrolls, microblogging systems allow for lists of “favourite authors.” Unlike Social Networking Systems but similar to blogrolls, microblogging allows for asymmetrical relations, unreciprocated links: if I like somebody’s microblogging updates, I can subscribe to those (by “following” that person) and publicly show my appreciation of that person’s work, regardless of whether or not this microblogger likes my own updates.

There’s something strangely powerful there because it taps the power of social networks while avoiding tricky issues of reciprocity, “confidentiality,” and “intimacy.”

From the end user’s perspective, microblogging contacts may be easier to establish than contacts through Facebook or Orkut. From a social science perspective, microblogging links seem to approximate some of the fluidity found in social networks, without adding much complexity in the description of the relationships. Subscribing to someone’s updates gives me the role of “follower” with regards to that person. Conversely, those I follow receive the role of “following” (“followee” would seem logical, given the common “-er”/”-ee” pattern). The following and follower roles are complementary but each is sufficient by itself as a useful social link.

Typically, a microblogging system like Twitter or Identi.ca qualifies two-way connections as “friendship” while one-way connections could be labelled as “fandom” (if Andrew follows Betty’s updates but Betty doesn’t follow Andrew’s, Andrew is perceived as one of Betty’s “fans”). Profiles on microblogging systems are relatively simple and public, allowing for low-involvement online “presence.” As long as updates are kept public, anybody can connect to anybody else without even needing an introduction. In fact, because microblogging systems send notifications to users when they get new followers (through email and/or SMS), subscribing to someone’s update is often akin to introducing yourself to that person. 

Reciprocating is the object of relatively intense social pressure. A microblogger whose follower:following ratio is far from 1:1 may be regarded as either a snob (follower:following much higher than 1:1) or as something of a microblogging failure (follower:following much lower than 1:1). As in any social context, perceived snobbery may be associated with sophistication but it also carries opprobrium. Perry Belcher made a video about what he calls “Twitter Snobs” and some French bloggers have elaborated on that concept. (Some are now claiming their right to be Twitter Snobs.) Low follower:following ratios can result from breach of etiquette (for instance, ostentatious self-promotion carried beyond the accepted limit) or even non-human status (many microblogging accounts are associated with “bots” producing automated content).
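As a rough illustration of these mechanics (the usernames and numbers are invented for the sketch), the friend/fan distinction and the follower:following ratio can both be derived from two sets of contacts:

```python
# Hypothetical sketch: classifying microblogging contacts.
# "followers" = people subscribed to my updates;
# "following" = people whose updates I subscribe to.
followers = {"betty", "carlos", "dana", "erik"}
following = {"betty", "carlos", "fatima"}

friends = followers & following   # reciprocated contacts ("friendship")
fans = followers - following      # they follow me, I don't follow back
idols = following - followers     # I follow them, they don't follow back

ratio = len(followers) / len(following)

print(sorted(friends))  # ['betty', 'carlos']
print(sorted(fans))     # ['dana', 'erik']
print(sorted(idols))    # ['fatima']
print(round(ratio, 2))  # 1.33
```

Nothing more than set intersection and difference, which is part of why asymmetrical “following” is so much simpler to model than the confirmed, typed relationships of early Social Networking Systems.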

The result of the pressure for reciprocation is that contacts are reciprocated regardless of personal relations.  Some users even set up ways to automatically follow everyone who follows them. Despite being tricky, these methods escape the personal connection issue. Contrary to Social Networking Systems (and despite the term “friend” used for reciprocated contacts), following someone on a microblogging service implies little in terms of friendship.

One reason I personally find this fascinating is that specifying personal connections has been an important part of the development of social networks online. For instance, the long-defunct SixDegrees.com (one of the earliest Social Networking Systems to appear online) required users to specify the precise nature of their relationship to the users with whom they were connected. Details escape me but I distinctly remember that acquaintances, colleagues, and friends were distinguished. If I remember correctly, only one such personal connection was allowed for any pair of users and this connection had to be confirmed before the two users were linked through the system. Facebook’s method of accounting for personal connections is somewhat more sophisticated despite the fact that all contacts are labelled as “friends” regardless of the nature of the connection. The uniform use of the term “friend” has been decried by many public commentators of Facebook (including in the United States, where “friend” is often applied to any person with whom one is simply on friendly terms).

In this context, the flexibility with which microblogging contacts are made merits consideration: by allowing unidirectional contacts, microblogging platforms may have solved a tricky social network problem. And while the strength of the connection between two microbloggers is left unacknowledged, there are several methods to assess it (for instance through replies and republished updates).

Social contacts are the very basis of social media. In this case, microblogging represents a step towards both simplified and complexified social contacts.

Which leads me to the theme which prompted me to start this blogpost: event-based microblogging.

I posted the following blog entry (in French) about event-based microblogging, back in November.

Microblogue d’événement

I haven’t received any direct feedback on it and the topic seems to have had little echo in the social media sphere.

During the last PodMtl meeting on February 18, I tried to throw my event-based microblogging idea in the ring. This generated a rather lengthy discussion between a friend and myself. (Because I don’t want to put words in the mouth of this friend, who happens to be relatively high-profile, I won’t mention this friend’s name.) This friend voiced several objections to my main idea and those objections got me to think about this basic notion a bit further. At the risk of sounding exceedingly opinionated, I must say that my friend’s objections actually reinforced my conviction that my “event microblog” idea makes a lot of sense.

The basic idea is quite simple: microblogging instances tied to specific events. There are technical issues in terms of hosting and such but I’m mostly thinking about associating microblogs and events.

What I had in mind during the PodMtl discussion has to do with grouping features, which are often requested by Twitter users (including by Perry Belcher who called out Twitter Snobs). And while I do insist on events as a basis for those instances (like groups), some of the same logic applies to specific interests. However, given the time-sensitivity of microblogging, I still think that events are more significant in this context than interests, however defined.

In the PodMtl discussion, I frequently referred to BarCamp-like events (in part because my friend and interlocutor had participated in a number of such events). The same concept applies to any event, including one which is just unfolding (say, assassination of Guinea-Bissau’s president or bombings in Mumbai).

Microblogging users are expected to think about “hashtags,” those textual labels preceded with the ‘#’ symbol which are meant to categorize microblogging updates. But hashtags are problematic on several levels.

  • They require preliminary agreement among multiple microbloggers, a tricky proposition in any social media. “Let’s use #Bissau09. Everybody agrees with that?” It can get ugly and, even if it doesn’t, the process is awkward (especially for new users).
  • Even if agreement has been reached, there might be discrepancies in the way hashtags are typed. “Was it #TwestivalMtl or #TwestivalMontreal, I forgot.”
  • In terms of language economy, it’s unsurprising that the same hashtag would be used for different things. Is “#pcmtl” about Podcamp Montreal, about personal computers in Montreal, about PCM Transcoding Library…?
  • Hashtags are frequently misunderstood by many microbloggers. Just this week, a tweep of mine (a “peep” on Twitter) asked about them after having been on Twitter for months.
  • While there are multiple ways to track hashtags (including through SMS, in some regions), there is no way to further specify the tracked updates (for instance, by user).
  • The distinction between a hashtag and a keyword is too subtle to be really useful. Twitter Search, for instance, lumps the two together.
  • Hashtags take time to type. Even if microbloggers aren’t necessarily typing frantically, the time taken to type all those hashtags seems counterproductive and may even distract microbloggers.
  • Repetitively typing the same string is a very specific kind of task which seems to go against the microblogging ethos, if not the cognitive processes associated with microblogging.
  • The number of characters in a hashtag decreases the amount of text available in every update. When all you have is 140 characters at a time, the thirteen characters in “#TwestivalMtl” constitute almost 10% of your update.
  • If the same hashtag is used by a large number of people, the visual effect can be that this hashtag is actually dominating the microblogging stream. Since there currently isn’t a way to ignore updates containing a certain hashtag, this effect may even discourage people from using a microblogging service.
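To make the character-economy point concrete, here is a small sketch (the sample tweet is invented, and `#\w+` is only a naive approximation of how hashtags are actually parsed) that extracts hashtags from an update and measures how much of the 140-character budget they consume:

```python
import re

LIMIT = 140  # Twitter's original per-update character limit

update = "Great session on open media at #TwestivalMtl, more notes soon"
hashtags = re.findall(r"#\w+", update)  # naive hashtag pattern

# Total characters spent on hashtags, as a share of the limit
overhead = sum(len(tag) for tag in hashtags)
print(hashtags)                          # ['#TwestivalMtl']
print(overhead)                          # 13
print(round(100 * overhead / LIMIT, 1))  # 9.3
```

A single thirteen-character hashtag eats close to a tenth of the update, which is the cost the list above is pointing at.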

There are multiple solutions to these issues, of course. Some of them are surely discussed among developers of microblogging systems. And my notion of event-specific microblogs isn’t geared toward solving these issues. But I do think separate instances make more sense than hashtags, especially in terms of specific events.

My friend’s objections to my event microblogging idea had something to do with visibility. It seems that this friend wants all updates to be visible, regardless of the context. While I don’t disagree with this, I would claim that it would still be useful to “opt out” of certain discussions when people we follow are involved. If I know that Sean is participating in a PHP conference and that most of his updates will be about PHP for a period of time, I would enjoy the possibility to hide PHP-related updates for a specific period of time. The reason I talk about this specific case is simple: a friend of mine has manifested some frustration about the large number of updates made by participants in Podcamp Montreal (myself included). Partly in reaction to this, he stopped following me on Twitter and only resumed following me after Podcamp Montreal had ended. In this case, my friend could have hidden Podcamp Montreal updates and still have received other updates from the same microbloggers.

To a certain extent, event-specific instances are a bit similar to “rooms” in MMORPG and other forms of real-time many-to-many text-based communication such as the nostalgia-inducing Internet Relay Chat. Despite Dave Winer’s strong claim to the contrary (and attempt at defining microblogging away from IRC), a microblogging instance could, in fact, act as a de facto chatroom when such a structure is needed, taking advantage of the work done in microblogging over the past year (which seems to have advanced more rapidly than work on chatrooms has during the past fifteen years). Instead of setting up an IRC channel, a Web-based chatroom, or even a session on MSN Messenger, users could use their microblogging platform of choice and either decide to follow all updates related to a given event or simply not “opt out” of following those updates (depending on their preferences). Updates related to multiple events would be visible simultaneously (which isn’t really the case with IRC or chatrooms) and there could be ways to make event-specific updates more prominent. In fact, there would be easy ways to keep real-time statistics of those updates and get a bird’s eye view of those conversations.

And there’s a point about event-specific microblogging which is likely to both displease “alpha geeks” and convince corporate users: updates about some events could be “protected” in the sense that they would not appear in the public stream in real time. The simplest case for this could be a company-wide meeting during which a backchannel is allowed and even expected “within the walls” of the event. The “nothing should leave this room” attitude seems contradictory to social media in general, but many cases can be made for “confidential microblogging.” Microblogged conversations can easily be archived and these archives could be made public at a later date. Event-specific microblogging allows for some control of the “permeability” of the boundaries surrounding the event. “But why would people use microblogging instead of simply talking to one another?,” you ask. Several quick answers: participants aren’t in the same room, vocal communication is mostly single-channel, large groups of people are unlikely to communicate efficiently through oral means only, several things are more efficiently done through writing, written updates are easier to track and archive…

There are many other things I’d like to say about event-based microblogging but this post is already long. There’s one thing I want to explain, which connects back to the social network dimension of microblogging.

Events can be simplistically conceived as social contexts which bring people together. (Yes, duh!) Participants in a given event constitute a “community of experience” regardless of the personal connections between them. They may be strangers, enemies, relatives, acquaintances, friends, etc. But they all share something. “Participation,” in this case, can be relatively passive and the difference between key participants (say, volunteers and lecturers in a conference) and attendees is relatively moot, at a certain level of analysis. The key, here, is the set of connections between people at the event.

These connections are a very powerful component of social networks. We typically meet people through “events,” albeit informal ones. Some events are explicitly meant to connect people who have something in common. In some circles, “networking” refers to something like this. The temporal dimension of social connections is an important one. By analogy to philosophy of language, the “first meeting” (and the set of “first impressions”) constitute the “baptism” of the personal (or social) connection. In social media especially, the nature of social connections tends to be monovalent enough that this “baptism event” gains special significance.

The online construction of social networks relies on a finite number of dimensions, including personal characteristics described in a profile, indirect connections (FOAF), shared interests, textual content, geographical location, and participation in certain activities. Depending on a variety of personal factors, people may be quite inclusive or rather exclusive, based on those dimensions. “I follow back everyone who lives in Austin” or “Only people I have met in person can belong to my inner circle.” The sophistication with which online personal connections are negotiated, along such dimensions, is a thing of beauty. In view of this sophistication, tools used in social media seem relatively crude and underdeveloped.

Going back to the (un)conference concept, the usefulness of having access to a list of all participants in a given event seems quite obvious. In an open event like BarCamp, it could greatly facilitate the event’s logistics. In a closed event with paid access, it could be linked to registration (despite geek resistance, closed events serve a purpose; one could even imagine events where attendance is free but the microblogging backchannel incurs a cost). In some events, everybody would be visible to everybody else. In others, there could be a sort of ACL for diverse types of participants. In some cases, people could be allowed to “lurk” without being seen while in others radical transparency could be enforced. For public events with all participants visible, lists of participants could be archived and used for several purposes (such as assessing which sessions in a conference are more popular or “tracking” event regulars).

One reason I keep thinking about event-specific microblogging is that I occasionally use microblogging like others use business cards. In a geek crowd, I may ask for someone’s Twitter username in order to establish a connection with that person. Typically, I will start following that person on Twitter and find opportunities to communicate with that person later on. Given the possibility for one-way relationships, it establishes a social connection without requiring personal involvement. In fact, that person may easily ignore me without the danger of a face threat.

If there were event-specific instances from microblogging platforms, we could manage connections and profiles in a more sophisticated way. For instance, someone could use a barebones profile for contacts made during an impersonal event and a full-fledged profile for contacts made during a more “intimate” event. After noticing a friend using an event-specific business card with an event-specific email address, I got to think that this event microblogging idea might serve as a way to fill a social need.

 

More than most of my other blogposts, I expect comments on this one. Objections are obviously welcomed, especially if they’re made thoughtfully (like my PodMtl friend made them). Suggestions would be especially useful. Or even questions about diverse points that I haven’t addressed (several of which I can already think about).

So…

 

What do you think of this idea of event-based microblogging? Would you use a microblogging instance linked to an event, say at an unconference? Can you think of fun features an event-based microblogging instance could have? If you think about similar ideas you’ve seen proposed online, care to share some links?

 

Thanks in advance!


Back in Mac: Low End Edition

Today, I’m buying an old Mac mini G4 1.25GHz. Yes, a low end computer from 2005. It’ll be great to be back in Mac after spending most of my computer life on XP for three years.

This mini is slower than my XP desktop (emachines H3070). But that doesn’t really matter for what I want to do.

There’s something to be said about computers being “fast enough.” Gamers and engineers may not grok this concept, since they always want more. But there’s a point at which computers don’t really need to be faster, for some categories of uses.

Car analogies are often made, in computer discussions, and this case seems fairly obvious. Some cars are still designed to “push the envelope,” in terms of performance. Yet most cars, including some relatively inexpensive ones, are already fast enough to run on highways beyond the speed limits in North America. Even in Europe, most drivers don’t tend to push their cars to the limit. Something vaguely similar happens with computers, though there are major differences. For instance, the difference in cost between fast driving and normal driving is a factor with cars while it isn’t so much of a factor with computers. With computers, the need for cooling and battery power (on laptops) does matter but, even if those issues were completely solved, there would still be a limit to the power needed for casual computer use.

This isn’t contradicting Moore’s Law directly. Transistor counts do increase exponentially, and with them the speed-to-cost ratio. But the effects aren’t felt the same way through all uses of computers, especially if we think about casual use of desktop and laptop “personal computers.” Computer chips in other devices (from handheld devices to cars or DVD players) benefit from Moore’s Law, but these are not what we usually mean by “computer,” in daily use.

The common way to put it is something like “you don’t need a fast machine to do email and word processing.”

The main reason I needed a Mac is that I’ll be using iMovie to do simple video editing. Video editing does push the limits of a slow computer and I’ll notice those limits very readily. But it’ll still work, and that’s quite interesting to think about, in terms of the history of personal computing. A Mac mini G4 is a slug, in comparison with even the current Mac mini Core 2 Duo. But it’s fast enough for even some tasks which, in historical terms, have been processor-intensive.

None of this is meant to say that the “need for speed” among computer users is completely manufactured. As computers become more powerful, some applications of computing technologies which were nearly impossible at slower speeds become easy to do. In fact, there certainly are things we don’t even imagine today which will become easy to do in the future, thanks to improvements in computer chip performance. Those who play processor-intensive games always want faster machines and they certainly feel the “need for speed.” But, it seems to me, the quest for raw speed isn’t the core of personal computing, anymore.

This all reminds me of the Material Culture course I was teaching in the Fall: the Social Construction of Technology, Actor-Network Theory, the Social Shaping of Technology, etc.

So, a low end computer makes sense.

While iMovie is the main reason I decided to get a Mac at this point, I’ve been longing for Macs for three years. There were times during which I was able to use somebody else’s Mac for extended periods of time but this Mac mini G4 will be the first Mac to which I’ll have full-time access since late 2005, when my iBook G3 died.

As before, I’m happy to be “back in Mac.” I could handle life on XP, but it never felt that comfortable and I haven’t been able to adapt my workflow to the way the Windows world works. I could (and probably should) have worked on Linux, but I’m not sure it would have made my life complete either.

Some things I’m happy to go back to:

  • OmniOutliner
  • GarageBand
  • Keynote
  • Quicksilver
  • Nisus Thesaurus
  • Dictionary
  • Preview
  • Terminal
  • TextEdit
  • BibDesk
  • iCal
  • Address Book
  • Mail
  • TAMS Analyzer
  • iChat

Now I need to install some RAM in this puppy.


My Year in Social Media

In some ways, this post is a belated follow-up to my last blogpost about some of my blog statistics:

Almost 30k « Disparate.

In the two years since I published that post, I’ve received over 100,000 visits on this blog and I’ve diversified my social media activities.

Altogether, 2008 has been an important year, for me, in terms of social media. I began the year in Austin, TX and moved back to Quebec in late April. Many things have happened in my personal life and several of them have been tied to my social media activities.

The most important part of my social media life, through 2008 as through any year, is the contact I have with diverse people. I’ve met a rather large number of people in 2008 and some of these people have become quite important in my life. In fact, there are people I have met in 2008 whose impact on my life makes it feel as though we have been friends for quite a while. Many of these contacts have happened through social media or, at least, they have been mediated online. As a “people person,” a social butterfly, a humanist, and a social scientist, I care more about these people I’ve met than about the tools I’ve used.

Obviously, most of the contacts I’ve had through the year were with people I already knew. And my relationship with many of these people has changed quite significantly through the year. As is obvious for anyone who knows me, 2008 has been an important year in my personal life. A period of transition. My guess is that 2009 will be even more important, personally.

But this post is about my social media activities. Especially about (micro)blogging and about social networking, in my case. I also did a couple of things in terms of podcasting and online video, but my main activities online tend to be textual. This might change a bit in 2009, but probably not much. I expect 2009 to be an “incremental evolution” in terms of my social media activities. In fact, I mostly want to intensify my involvement in social media spheres, in continuity with what I’ve been doing in 2008.

So it’s the perfect occasion to think back about 2008.

Perhaps my main highlight of 2008 in terms of social media is Twitter. You can say I’m a late adopter to Twitter. I’ve known about it since it came out and I had joined a while back, but I only really started using it in preparation for SXSWi and BarCampAustin, in early March of this year. As I wanted to integrate Austin’s geek scene and Twitter clearly had some importance in that scene, I thought I’d “play along.” Also, I didn’t have a badge for SXSWi but I knew I could learn about off-festival events through Twitter. And Twitter has become rather important, for me.

For one thing, it allows me to make a distinction between actual blogposts and short thoughts. I’ve probably been posting fewer blog entries since I became active on Twitter and my blogposts are probably longer, on average, than they were before. In a way, I feel it enhances my blogging experience.

Twitter also allows me to “take notes in public,” a practice I find surprisingly useful. For instance, when I go to some kind of presentation (academic or otherwise) I use Twitter to record my thoughts on both the event and the content. This practice is my version of “liveblogging” and I enjoy it. On several occasions, these liveblogging sessions have been rather helpful. Some “tweeps” (Twitter+peeps) dislike this kind of liveblogging practice and claim that “Twitter isn’t meant for this,” but I’ve had more positive experiences through liveblogging on Twitter than negative ones.

The device which makes all of this liveblogging possible, for me, is the iPod touch I received from a friend in June of this year. It has had important implications for my online life and, to a certain extent, the ‘touch has become my primary computer. The iTunes App Store, which opened its doors in July, has changed the game for me as I was able to get a number of dedicated applications, some of which I use several times a day. I’ve blogged about several things related to the iPod touch and the whole process has changed my perspective on social media in general. Of course, an iPhone would be an even more useful tool for me: SMS, GPS, camera, and ubiquitous Internet are all useful features in connection to social media. But, for now, the iPod touch does the trick. Especially through Twitter and Facebook.

One tool I started using quite frequently through the year is Ping.fm. I use it to post to: Twitter, Identi.ca, Facebook, LinkedIn, Brightkite, Jaiku, FriendFeed, Blogger, and WordPress.com (on another blog). I receive the most feedback on Facebook and Twitter but I occasionally get feedback through the other services (including through Pownce, which was recently sold). One thing I notice through this cross-posting practice is that, on these different services, the same activity has a range of implications. For instance, while I’m mostly active on Twitter, I actually get more out of Facebook postings (status updates, posted items, etc.). And reactions on different services tend to be rather different, as the relationships I have with people who provide that feedback tend to range from indirect acquaintance to “best friend forever.” Given my social science background, I find these differences quite interesting to think about.

One thing I’ve noticed on Twitter is that my “ranking among tweeps” has increased very significantly. On Twinfluence, my rank has gone as high as the 86th percentile (though it recently went down to the 79th percentile) while, on Twitter Grader, my “Twitter grade” is now at a rather unbelievable 98.1%. I don’t tend to care much about “measures of influence” but I find these ratings quite interesting. One reason is that they rely on relatively sophisticated concepts from social sciences. Another reason is that I’m intrigued by what causes increases in my ranking on those services. In this case, I think the measures give me way too much credit at this point but I also think that my “influence” is found outside of Twitter.

One “sphere of influence” which remained important for me through 2008 is Facebook. While Facebook had a more central role in my life through 2007, it now represents a stable part of my social media involvement. One thing which tends to happen is that first contacts happen through Twitter (I often use it as the equivalent of a business card during events) and Facebook represents a second step in the relationship. In a way, this distinction foregrounds the obvious concept of “intimacy” in social media. Twitter is public, ties are weak. Facebook is intimate, ties are stronger. On the other hand, there seems to be much more clustering among my tweeps than among my Facebook contacts, in part because my connection to local geek scenes in Austin and Montreal happens primarily through Twitter.

Through Facebook I was able to organize a fun little brunch with a few friends from elementary school. Though this brunch may not have been the most important event of 2008, for me, I’ve learnt a lot about the power of social media through contacting these friends, meeting them, and thinking about the whole affair.

In a way, Twitter and Facebook have helped me expand my social media activities in diverse directions. But most of the important events in my social media life in 2008 have been happening offline. Several of these events were unconferences and informal events happening around conferences.

My two favourite events of the year, in terms of social media, were BarCampAustin and PodCamp Montreal. Participating in (and observing) both events has had some rather profound implications in my social media life. These two unconferences were somewhat different but both proved about equally useful to me. One regret I have is that it’s unlikely that I’ll be able to attend BarCampAustinIV now that I’ve left Austin.

Other events have happened throughout 2008 which I find important in terms of social media. These include regular meetings like Yulblog, Yulbiz, and PodMtl. There are many other events which aren’t necessarily tied to social media but that I find interesting from a social media perspective. The recent Infopresse360 conference on innovation (with Malcolm Gladwell as keynote speaker) and a rather large number of informal meetups with people I’ve known through social media would qualify.

Despite the diversification of my social media life through 2008, blogging remains my most important social media activity. I now consider myself a full-fledged blogger and I think that my blog is representative of something about me.

Simply put, I’m proud to be a blogger. 

In 2008, a few things have happened through my blog which, I think, are rather significant. One is that someone who found me through Google contacted me directly about a contract in private-sector ethnography. As I’m currently going through professional reorientation, I take this contract to be rather significant. It’s actually possible that the Google result this person noticed wasn’t directly about my blog (the ranking of my diverse online profiles tends to shift around fairly regularly) but I still associate online profiles with blogging.

A set of blog-related occurrences which I find significant has to do with the fact that my blog has been at the centre of a number of discussions with diverse people including podcasters and other social media people. My guess is that some of these discussions may lead to some interesting things for me in 2009.

Through 2008, this blog has become more anthropological. For several reasons, I wish to maintain it as a disparate blog, a blog about disparate topics. But it still participates in my gaining some recognition as an anthroblogger. One reason is that anthrobloggers are now more closely connected than before. Recently, anthroblogger Daniel Lende sent out a call for nominations for the best of the anthro blogosphere, which he then posted as both a “round up” and a series of prizes. Before that, Savage Minds had organized an “awards ceremony” for an academic conference. And, perhaps the most important dimension of my own blog being recognized in the anthroblogosphere, I have been discussing a number of things with Concordia-based anthrobloggers Owen Wiltshire and Maximilian Forte.

Still, anthropology isn’t the most prominent topic on this blog. In fact, my anthro-related posts tend to receive relatively little attention, outside of discussions with colleagues.

Since I conceive of this post as a follow-up on posts about statistics, I’ve gone through some of my stats here on Disparate. Upgrades to WordPress.com also allow me to get a more detailed picture of what has been happening on this blog.

Through 2008, I’ve received over 55 131 hits on this blog, about 11% more than in 2007, for an average of 151 hits a day (I actually thought it was more but there are some days during which I receive relatively few hits, especially during weekends). The month I received the most hits was February 2007, with 5 967 hits, but February and March 2008 were relatively close. The day I received the most hits was October 28, 2008, with 310 hits. This was the day after Myriade opened.
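For the curious, those figures check out against each other. A quick back-of-the-envelope sketch, using only the numbers quoted in this post (and remembering that 2008 was a leap year):

```python
# Rough sanity check of the traffic figures above; all inputs are from the post.
hits_2008 = 55131               # total hits received in 2008
avg_per_day = hits_2008 / 366   # 2008 had 366 days

# "about 11% more than in 2007" implies roughly this 2007 total:
implied_2007 = hits_2008 / 1.11

print(round(avg_per_day))   # 151, matching the "151 hits a day" figure
print(round(implied_2007))  # just under 50 000 hits for 2007
```

Nothing deep, but it confirms the daily average was computed over the full year, weekends included.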

These numbers aren’t so significant. For one thing, hits don’t imply that people have read anything on my blog. Since all of my blogs are ad-free, I haven’t tried to increase traffic to this blog. But it’s still interesting to notice a few things.

The most obvious thing is that hits to rather silly posts are much more frequent than hits to posts I actually care about.

For instance, my six blogposts with the most hits:

Facebook Celebs and Fakes: 5 782
emachines Power Supply: 4 800
Recording at 44.1 kHz, 16b with iPod 5G?: 2 834
Blogspot v. WordPress.com, Blogger v. Wo: 2 571
GERD and Stress: 2 377
University Rankings and Diversity: 2 219

And for 2008:

Facebook Celebs and Fakes: 3 984
emachines Power Supply: 2 265
AT&T Yahoo Pro DSL to Belkin WiFi: 1 527
GERD and Stress: 1 430
Blogspot v. WordPress.com, Blogger v. Wo: 1 151
University Rankings and Diversity: 995

The Facebook post is one I wrote very quickly in July 2007, as a quick reaction to something I had heard. Obviously, the post’s title is the single reason for that post’s popularity. I get an average of 11 hits a day on that post, for 4 001 hits in 2008. If I wanted to increase traffic, I’d post as many of these as possible.

The emachines post is my first post on this new blog (but I did import posts from my previous blog), back in January 2006. It seems to have helped a few people and gets regular traffic (six hits a day, in 2008). It’s not my most thoughtful post but it has its place. It’s still funny to notice that traffic to this blogpost increases even though one would assume it’s less relevant.

Rather unsurprisingly, my post about then-upcoming recording capabilities on the iPod 5G, from March 2006, is getting very few hits. But, for a while, it did get a number of hits (six a day in 2006) and I was a bit puzzled by that.

The AT&T post is my most popular post written in 2008. It was a simple troubleshooting session, like the aforementioned emachines post. These posts might be useful for some people and I occasionally get feedback from people about them. Another practical post regularly getting a few hits is about an inflatable mattress with built-in pump which came without clear instructions.

My post about blogging platforms was in fact a repost of a comment I made on somebody else’s blog entry (though the original seems to be lost). From what I can see, it was most popular from June, 2007 through May, 2008. Since it was first posted, WordPress.com has been updated quite a bit and Blogger/Blogspot seems to have pretty much stalled. My comment/blogpost on the issue is fairly straightforward and it has put me in touch with some other bloggers.

The other two blogposts getting the most hits in 2008 are closer to things about which I care. Both entries were written in mid-2006 and are still relevant. The rankings post is short on content, but it serves as an “anchor” for some things I like to discuss in terms of educational institutions. The GERD post is among my most personal posts on this blog, especially in English. It’s one of the posts for which I received the most feedback. My perspective on the issue hasn’t changed much in the meantime.


Privilege: Library Edition

When I came out against privilege, over a month ago, I wasn’t thinking about libraries. But, last week, while running some errands at three local libraries (within an hour), I got to think about library privileges.

During that day, I first started thinking about library privileges because I was renewing my CREPUQ card at Concordia. With that card, graduate students and faculty members at a university in Quebec are able to get library privileges at other universities, a nice “perk” that we have. While renewing my card, I was told (or, more probably, reminded) that the card now gives me borrowing privileges at any university library in Canada through CURBA (Canadian University Reciprocal Borrowing Agreement).

My gut reaction: “Aw-sum!” (I was having a fun day).

It got me thinking about what it means to be an academic in Canada. Because I’ve also spent part of my still short academic career in the United States, I tend to compare the Canadian academe to US academic contexts. And while there are some impressive academic consortia in the US, I don’t think any of them offers as wide a set of library privileges as this one. If my count is accurate, there are 77 institutions involved in CURBA. University systems and consortia in the US typically include somewhere between ten and thirty institutions, usually within the same state or region. Even if members of both the “UC System” and “CalState” have similar borrowing privileges, it would only mean 33 institutions, less than half of CURBA (though the population of California is about 20% more than that of Canada as a whole). Some important university consortia through which I’ve had some privileges were the CIC (Committee on Institutional Cooperation), a group of twelve Midwestern universities, and the BLC (Boston Library Consortium), a group of twenty universities in New England. Even with full borrowing privileges in all three groups of university libraries, an academic would only have access to library material from 65 institutions.
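To make the comparison explicit, here is the tally I have in mind. The CURBA, CIC, and BLC figures are the ones above; the campus counts behind the 33 (10 for the UC System, 23 for CalState) are my own rough estimates for the period and may be off:

```python
# Tallying the US consortium sizes mentioned above.
# CURBA, CIC, and BLC figures are from the post; the UC (10) and
# CalState (23) campus counts are rough period estimates.
curba = 77
uc_and_calstate = 10 + 23   # "UC System" + "CalState", about 33 institutions
cic = 12                    # Committee on Institutional Cooperation
blc = 20                    # Boston Library Consortium

us_total = uc_and_calstate + cic + blc
print(us_total)             # 65 institutions across all three US groups
print(us_total < curba)     # True: still fewer than CURBA alone
```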

Of course, the number of institutions isn’t that relevant if the libraries themselves have few books. But my guess is that the average size of a Canadian university’s library collection is quite comparable to its US equivalents, including in such well-endowed institutions as those in the aforementioned consortia and university systems. What’s more, I would guess that there might be a broader range of references across Canadian universities than in any region of the US. Not to mention that BANQ (Quebec’s national library and archives) is part of CURBA and that its collections overlap very little with those of a typical university library.

So, I was thinking about access to an extremely wide range of references given to graduate students and faculty members throughout Canada. We get this very nice perk, this impressive privilege, and we pretty much take it for granted.

Which eventually got me to think about my problem with privilege. Privilege implies a type of hierarchy with which I tend to be uneasy. Even (or especially) when I benefit from a top position. “That’s all great for us but what about other people?”

In this case, there are obvious “Others” like undergraduate students at Canadian institutions, Canadian non-academics, and scholars at non-Canadian institutions. These are very disparate groups but they are all denied something.

Canadian undergrads are the most direct “victims”: they participate in Canada’s academe, like graduate students and faculty members, yet their access to resources is severely limited by comparison to those of us with CURBA privileges. Something about this strikes me as rather unfair. Don’t undergrads need access as much as we do? Is there really such a wide gap between someone working on an honours thesis at the end of a bachelor’s degree and someone starting work on a master’s thesis that the latter requires much wider access than the former? Of course, the main rationale behind this discrepancy in access to library material probably has to do with sheer numbers: there are many undergraduate students “fighting for the same resources” and there are relatively few graduate students and faculty members who need access to the same resources. Or something like that. It makes sense but it’s still a point of tension, as is any matter of privilege.

The second set of “victims” includes Canadians who happen to not be affiliated directly with an academic institution. While it may seem that their need for academic resources is more limited than that of students, many people in this category have a more unquenchable “thirst for knowledge” than many an academic. In fact, there are people in this category who could probably do a lot of academically-relevant work “if only they had access.” I mostly mean people who have an academic background of some sort but who are currently unaffiliated with formal institutions. But the “broader public” counts, especially when a specific topic becomes relevant to them. These are people who take advantage of public libraries but, as mentioned in the BANQ case, public and university libraries don’t tend to overlap much. For instance, it’s quite unlikely that someone without academic library privileges would have been able to borrow Visual Information Processing (Chase, William 1973), a proceedings book that I used as a source for a recent blogpost on expertise. Of course, “the public” is usually allowed to browse books in most university libraries in North America (apart from Harvard). But, depending on other practical factors, borrowing books can be much more efficient than browsing them in a library. I tend to hear from diverse people who would enjoy some kind of academic status for this very reason: library privileges matter.

A third category of “victims” of CURBA privileges is non-Canadian academics. Since most of them may only contribute indirectly to Canadian society, why should they have access to Canadian resources? Like any social context, the national academe defines insiders and outsiders. While academics are typically inclusive, this type of restriction seems to make sense. Yet many academics outside of Canada could benefit from access to resources broadly available to Canadian academics. In some cases, there are special agreements to allow outside scholars to get temporary access to local, regional, or national resources. Rather frequently, these agreements come with special funding, the outside academic being a special visitor, sometimes with even better access than some local academics. I have very limited knowledge of these agreements (apart from infrequent discussions with colleagues who benefitted from them) but my sense is that they are costly, cumbersome, and restrictive. Access to local resources is even more exclusive a privilege in this case than in the CURBA case.

Which brings me to my main point about the issue: we all need open access.

When I originally thought about how impressive CURBA privileges were, I was thinking through the logic of the physical library. In a physical library, resources are scarce, access to them needs to be controlled, and library privileges have a high value. In fact, it costs an impressive amount of money to run a physical library. The money universities invest in their libraries is relatively “inelastic” and must figure quite prominently in their budgets. The “return” on that investment seems to me a bit hard to measure: is it a competitive advantage, does a better-endowed library make a university more cost-effective, do university libraries ever “recoup” any portion of the amounts spent?

Contrast all of this with a “virtual” library. My guess is that an online collection of texts costs less to maintain than a physical library by any possible measure. Because digital data may be copied at will, the notion of “scarcity” makes little sense online. Distributing millions of copies of a digital text doesn’t make the original text unavailable to anyone. As long as the distribution system is designed properly, the “transaction costs” in distributing a text of any length are probably much less than those associated with borrowing a book.  And the differences between “browsing” and “borrowing,” which do appear significant with physical books, seem irrelevant with digital texts.

These are all well-known points about online distribution. And they all seem to lead to the same conclusion: “information wants to be free.” Not “free as in beer.” Maybe not even “free as in speech.” But “free as in unchained.”

Open access to academic resources is still a hot topic. Though I do consider myself an advocate of “OA” (the “Open Access movement”), what I mean here isn’t so much about OA as opposed to TA (“toll-access”) in the case of academic journals. Physical copies of periodicals may usually not be borrowed, regardless of library privileges, and online resources are typically excluded from borrowing agreements between institutions. The connection between OA and my perspective on library privileges is that I think the same solution could solve both issues.

I’ve been thinking about a “global library” for a while. Like others, I take the Library of Alexandria as a model, though the texts would be online. It sounds utopian but my main notion, there, is that “library privileges” would be granted to anyone. Not only senior scholars at accredited academic institutions. Anyone. Of course, the burden of maintaining that global library would also be shared by anyone.

There are many related models, apart from the Library of Alexandria: French «Encyclopédistes» through the Enlightenment, public libraries, national libraries (including the Library of Congress), Tim Berners-Lee’s original “World Wide Web” concept, Brewster Kahle’s Internet Archive, Google Books, etc. Though these models differ, they all point to the same basic idea: a “universal” collection with the potential for “universal” access. In historical perspective, this core notion of a “universal library” seems relatively stable.

Of course, there are many obstacles to a “global” or “universal” library. Including issues having to do with conflicts between social groups across the Globe or the current state of so-called “intellectual property.” These are all very tricky and I don’t think they can be solved in any number of blogposts. The main thing I’ve been thinking about, in this case, is the implications of a global library in terms of privileges.

Come to think of it, it’s possible that much of the resistance to a global library has to do with privilege: unlike me, some people enjoy privilege.


My Problem With Journalism

I hate having an axe to grind. Really, I do. “It’s unlike me.” When I notice that I catch myself grinding an axe, I “get on my own case.” I can be quite harsh with my own self.

But I’ve been trained to voice my concerns. And I’ve been perceiving an important social problem for a while.

So I “can’t keep quiet about it.”

If everything goes really well, posting this blog entry might be liberating enough that I will no longer have any axe to grind. Even if it doesn’t go as well as I hope, it’ll be useful to keep this post around so that people can understand my position.

Because I don’t necessarily want people to agree with me. I mostly want them to understand “where I come from.”

So, here goes:

Journalism may have outlived its usefulness.

Like several other “-isms” (including nationalism, colonialism, imperialism, and racism) journalism is counterproductive in the current state of society.

This isn’t an ethical stance, though there are ethical positions which go with it. It’s a statement about the anachronistic nature of journalism. As per functional analysis, everything in society needs a function if it is to be maintained. What has been known as journalism is now taking on new functions. Eventually, “journalism as we know it” should, logically, make way for new forms.

What these new forms might be, I won’t elaborate in this post. I have multiple ideas, especially given well-publicised interests in social media. But this post isn’t about “the future of journalism.”

It’s about the end of journalism.

Or, at least, my looking forward to the end of journalism.

Now, I’m not saying that journalists are bad people and that they should just lose their jobs. I do think that those who were trained as journalists need to retool themselves, but this post isn’t about that either.

It’s about an axe I’ve been grinding.

See, I can admit it, I’ve been making some rather negative comments about diverse behaviours and statements, by media people. It has even become a habit of mine to allow myself to comment on something a journalist has said, if I feel that there is an issue.

Yes, I know: journalists are people too, they deserve my respect.

And I do respect them, the same way I respect every human being. I just won’t give them the satisfaction of my putting them on a pedestal. In my mind, journalists are people: just like anybody else. They deserve no special treatment. And several of them have been arrogant enough that I can’t help turning their arrogance back to them.

Still, it’s not about journalists as people. It’s about journalism “as an occupation.” And as a system. An outdated system.

Speaking of dates, some context…

I was born in 1972 and, originally, I was quite taken by journalism.

By age twelve, I was pretty much a news junkie. Seriously! I was “consuming” a lot of media at that point. And I was “into” media. Mostly television and radio, with some print mixed in, as well as lots of literary work for context: this is when I first read French and Russian authors from the late 19th and early 20th centuries.

I kept thinking about what was happening in The World. Back in 1984, the Cold War was a major issue. To a French-Canadian tween, this mostly meant thinking about the fact that there were (allegedly) US and USSR “bombs pointed at us,” for reasons beyond our direct control.

“Caring about The World” also meant thinking about all sorts of problems happening across The Globe. Especially poverty, hunger, diseases, and wars. I distinctly remember caring about the famine in Ethiopia. And when We Are the World started playing everywhere, I felt like something was finally happening.

This was one of my first steps toward cynicism. And I’m happy it occurred at age twelve because it allowed me to eventually “snap out of it.” Oh, sure, I can still be a cynic on occasion. But my cynicism is contextual. I’m not sure things would have been as happiness-inducing for me if it hadn’t been for that early start in cynicism.

Because, you see, The World lost interest quite rapidly in the plight of Ethiopians. I distinctly remember asking myself, after the media frenzy died out, what had happened to Ethiopians in the meantime. I’m sure there had been some report at the time claiming that the famine was over and that the situation was “back to normal.” But I didn’t hear anything about it, and I was looking. As a twelve-year-old French-Canadian with no access to a modem, I had no direct access to information about the situation in Ethiopia.

Ethiopia still remained as a symbol, to me, of an issue to be solved. It’s not the direct cause of my later becoming an africanist. But, come to think of it, there might be a connection, deeper down than I had been looking.

So, by the end of the Ethiopian famine of 1984-85, I was “losing my faith in” journalism.

I clearly haven’t gained a new faith in journalism. And it all makes me feel quite good, actually. I simply don’t need that kind of faith. I was already training myself to be a critical thinker. Sounds self-serving? Well, sorry. I’m just being honest. What’s a blog if the author isn’t honest and genuine?

Flash forward to 1991, when I started formal training in anthropology. The feeling was exhilarating. I finally felt like I belonged. My statement at the time was to the effect that “I wasn’t meant for anthropology: anthropology was meant for me!” And I was learning quite a bit about/from The World. At that point, it already did mean “The Whole Wide World,” even though my knowledge of that World was fairly limited. And it was a haven of critical thinking.

Ideal, I tell you. Moan all you want, it felt like the ideal place at the ideal time.

And, during the summer of 1993, it all happened: I learnt about the existence of the “Internet.” And it changed my life. Seriously, the ‘Net did have a large part to play in important changes in my life.

That event, my discovery of the ‘Net, also has a connection to journalism. The person who described the Internet to me was Kevin Tuite, one of my linguistic anthropology teachers at Université de Montréal. As far as I can remember, Kevin was mostly describing Usenet. But the potential for “relatively unmediated communication” was already a big selling point. Kevin talked about the fact that members of the Caucasian diaspora were able to use the Internet to discuss issues pertaining to the newly independent republics, after the fall of the USSR, with their relatives and friends back in the Caucasus. All this while media coverage was sketchy at best (it sounded like journalism was still having a hard time coping with the new realities).

As you can imagine, I was more than intrigued and I applied for an account as soon as possible. In the meantime, I bought a 2400 baud modem, joined some local BBSes, and got to chat about the Internet with several friends, some of whom already had accounts. I got my first email account just before the semester started, in August, 1993. I can still see traces of that account, but only since April, 1994 (I guess I wasn’t using my address in my signature before this). I’ve been an enthusiastic user of diverse Internet-based means of communication since then.

But coming back to journalism, specifically…

Journalism missed the switch.

During the past fifteen years, I’ve been amazed at how clueless members of mainstream media institutions have been about “the power of the Internet.” Already in 1993, during Wired Magazine’s first year as a print magazine, we (some friends and I) were commenting upon the fact that print journalists should look at what was coming. Eventually, they would need to adapt. “The Internet changes everything,” I thought.

No, I didn’t mean that the Internet would cause any of the significant changes that we have been seeing around us. I tend to be against technological determinism (and other McLuhan tendencies). Not that I prefer sociological determinism, yet I can’t help but think that, from ARPAnet to the current state of the Internet, most of the important changes have been primarily social: if the Internet became something, it’s because people are making it so, not because of some inexorable technological development.

My enthusiastic perspective on the Internet was largely motivated by the notion that it would allow people to go beyond the model from the journalism era. Honestly, I could see the end of “journalism as we knew it.” And I’m surprised, fifteen years later, that journalism has been among the slowest institutions to adapt.

In a sense, my main problem with journalism is that it maintains a very stratified structure which gives too much weight to the credibility of specific individuals. Editors and journalists, who are part of the “medium” in the old models of communication, have taken on a gatekeeping role despite the fact that they rarely are much more proficient thinkers than the people who read them. “Gatekeepers” even constitute a “textbook case” in sociology, especially in conflict theory. Though I can easily perceive how “constructed” that gatekeeping model may be, I can readily relate to what it entails in terms of journalism.

There’s a type of arrogance embedded in journalistic self-perception: “we’re journalists/editors so we know better than you; you need us to process information for you.” Regardless of how much I may disagree with some of his words and actions, I take solace in the fact that Murdoch, a key figure in today’s mainstream media, spoke directly to this arrogance. Of course, he might have been pandering. But the very fact that he can pay lip-service to journalistic arrogance is, in my mind, quite helpful.

I think the days of fully stratified gatekeeping (a “top-down approach” to information filtering) are over. Now that information is easily available and that knowledge is constructed socially, any “filtering” method can be distributed. I’m not really thinking of a “cream rises to the top” model. An analogy with water sources going through multiple layers of mountain rock would be more appropriate to a Swiss citizen such as myself. But the model I have in mind is more about what Bakhtin called “polyvocality” and what has become an ethical position on “giving voice to the other.” Journalism has taken voice away from people. I have in mind a distributed mode of knowledge construction which gives everyone enough voice to have long-distance effects.

At the risk of sounding too abstract (it’s actually very clear in my mind, but it requires a long description), it’s a blend of ideas like: the social butterfly effect, a post-encyclopedic world, and cultural awareness. All of these, in my mind, contribute to this heightened form of critical thinking away from which I feel journalism has led us.

The social butterfly effect is fairly easy to understand, especially now that social networks are so prominent. Basically, it’s the “butterfly effect” from chaos theory applied to social networks. In this context, a “social butterfly” is a node in multiple networks of varying degrees of density and clustering. Because such a “social butterfly” can bring things (ideas, especially) from one such network to another, I argue that her or his ultimate influence (in aggregate) is larger than that of someone who sits at the core of a highly clustered network. Yes, it’s related to “weak ties” and other network classics. But it’s a bit more specific, at least in my mind. In terms of journalism, the social butterfly effect implies that the way knowledge is constructed need not come from a singular source or channel.
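As a toy illustration of that claim (the graph and all node names are hypothetical, not data from any real network), a simple two-hop breadth-first “reach” count shows how a bridging node can touch more of the overall network than a node buried inside one dense cluster:

```python
from collections import deque

def reach(graph, start, max_hops=2):
    """Count nodes reachable from `start` within `max_hops` (BFS)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, hops + 1))
    return len(seen) - 1  # exclude the start node itself

# Two dense clusters (A* and C*), bridged only by the "butterfly" B.
graph = {
    "A1": {"A2", "A3", "B"}, "A2": {"A1", "A3"}, "A3": {"A1", "A2"},
    "B":  {"A1", "C1"},
    "C1": {"C2", "C3", "B"}, "C2": {"C1", "C3"}, "C3": {"C1", "C2"},
}

print(reach(graph, "B"))   # the butterfly reaches both clusters
print(reach(graph, "A2"))  # a core node mostly reaches its own cluster
```

In this toy graph, the bridging node’s two-hop reach covers the whole network while the core node stays confined to its own cluster, which is the intuition behind the aggregate-influence argument.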

The “encyclopedic world” I have in mind is that of our good friends from the French Enlightenment: Diderot and the gang. At that time, there was a notion that the sum of all knowledge could be contained in the Encyclopédie. Of course, I’m simplifying. But such a notion is still discussed fairly frequently. The world in which we now live has clearly challenged this encyclopedic notion of exhaustiveness. Sure, certain people hold on to that notion. But it’s not taken for granted as “uncontroversial.” Actually, those who hold on to it tend to respond rather positively to the journalistic perspective on human events. As should be obvious, I think the days of that encyclopedic worldview are numbered and that “journalism as we know it” will die at the same time. Though it seems to be built on an “encyclopedia” frame, Wikipedia clearly benefits from a distributed model of knowledge management. In this sense, Wikipedia is less anachronistic than Britannica. Wikipedia also tends to be more insightful than Britannica.

The cultural awareness point may sound like an ethnographer’s pipe dream. But I perceive a clear connection between Globalization and a certain form of cultural awareness in information and knowledge management. This is probably where the Global Voices model can come in. One of the most useful representations of that model comes from Chris Lydon’s Open Source conversation with Solana Larsen and Ethan Zuckerman. Simply put, I feel that this model challenges journalism’s ethnocentrism.

Obviously, I have many other things to say about journalism (as well as about its correlate, nationalism).

But I do feel liberated already. So I’ll leave it at that.


Thought Streams about Online Literacy

Interestingly enough, in the last several days, at least five unrelated items of online content have made me think about what I’d call “online literacy.” Not too surprising a co-occurrence, given the feeds I follow, but still interesting, I think. Especially because different perspectives were behind these items and the ways I was led to them.

Here are the five items I most directly connect with my streams of thought about online literacy, during the past few days.

Several items in my streams of thoughts on online literacy have found their way into a Moodle Lounge thread where they were mostly connected with the future of textbooks.

My notion of “online literacy” might be idiosyncratic. The concept, to me, relates to “media literacy” which (as far as I can tell) refers to the efficient use of a set of conceptual tools meant to help in approaching media items from the perspective of critical thinking and intellectual engagement. “Online literacy” would be the same thing applied to the Internet in general. One element specific to online literacy, I would argue, is that some basic principles of the Internet (including its decentralized character) make the critical/engaged approach very prominent. Simply put, the way the ‘Net is set up almost forces people to apply critical thinking to what they read, view, watch, or listen to, online. In something of a “cool medium” sense, the ‘Net also encourages active engagement in the material (though for reasons different from McLuhan’s description of medium coolness).

Furthermore, I tend to associate “book literacy” with modernity while “online literacy” seems quite compatible with orality which is itself typical of both post- and pre-modernity. I’m guessing this last point seems exceedingly weird to a number of people, but it really seems to fit in a larger scheme.

There are ways to discuss these issues which are more tech-friendly or geeky. Synchronous communication, many-to-many relationships, peer-to-peer (file) sharing, distributed processing… But as I think out loud, these concepts are mostly in the background.

My basic claim in all of this is that, regardless of how positive we think the move toward online content and away from mass-produced books may be, it’s important to train ourselves (and others) to gain a level of savviness in the online world. This form of online literacy is especially important with students because of their active engagement in the construction of knowledge.


Enthused Tech

Yesterday, I held a WiZiQ session on the use of online tech in higher education:

Enthusing Higher Education: Getting Universities and Colleges to Play with Online Tools and Services

Slideshare

(Full multimedia recording available here)

During the session, Nellie Deutsch shared the following link:

Diffusion of Innovations, by Everett Rogers (1995)

Haven’t read Rogers’s book but it sounds like a contextually easy-to-understand version of ideas which have been quite clear in Boasian disciplines (cultural anthropology, folkloristics, cultural ecology…) for a while. But, in this sometimes obsessive quest for innovation, it might in fact be useful to go back to basic ideas about the social mechanisms which can be observed in the adoption of new tools and techniques. It’s in fact the thinking behind this relatively recent blogpost of mine:

Technology Adoption and Active Reading

My emphasis during the WiZiQ session was on enthusiasm. I tend to think a lot about occasions in which thinking about the possibilities afforded by technology gets people “psyched up.” In a way, this is exactly how I can define myself as a tech enthusiast: I easily get psyched up in the context of discussions about technology.

What’s funny is that I’m no gadget freak. I don’t care about the tool. I just love to dream up possibilities. And I sincerely think that I’m not alone. We might even guess that a similar dream-induced excitement animates true gadget freaks, who must have the latest tool. Early adopters are a big part of geek culture and geek culture is still a small niche.

Because I know I’ll keep on talking about these things on other occasions, I can “leave it at that,” for now.

RERO‘s my battle cry.

TBC


Crazy App Idea: Happy Meter

I keep getting ideas for apps I’d like to see on Apple’s App Store for iPod touch and iPhone. This one may sound a bit weird but I think it could be fun. An app where you can record your mood and optionally broadcast it to friends. It could become rather sophisticated, actually. And I think it can have interesting consequences.

The idea mostly comes from Philippe Lemay, a psychologist friend of mine and fellow PDA fan. Haven’t talked to him in a while but I was just thinking about something he did, a number of years ago (in the mid-1990s). As part of an academic project, Philippe helped develop a PDA-based research program whereby subjects would record different things about their state of mind at intervals during the day. Apart from the neatness of the data gathering technique, this whole concept stayed with me. As a non-psychologist, I personally get the strong impression that recording your moods frequently during the day can actually be a very useful thing to do in terms of mental health.

And I really like the PDA angle. Since I think of the App Store as transforming Apple’s touch devices into full-fledged PDAs, the connection is rather strong between Philippe’s work at that time and the current state of App Store development.

Since that project of Philippe’s, a number of things have been going on which might help refine the “happy meter” concept.

One is that “lifecasting” became rather big, especially among certain groups of Netizens (typically younger people, but also many members of geek culture). Though the lifecasting concept applies mostly to video streams, there are connections with many other trends in online culture. The connection with vidcasting specifically (and podcasting generally) is rather obvious. But there are other connections. For instance, with mo-, photo-, or microblogging. Or even with all the “mood” apps on Facebook.

Speaking of Facebook as a platform, I think it meshes especially well with touch devices.

So, “happy meter” could be part of a broader app which does other things: updating Facebook status, posting tweets, broadcasting location, sending personal blogposts, listing scores in a Brain Age type game, etc.

Yet I think the “happy meter” could be useful on its own, as a way to track your own mood. “Turns out, my mood was improving pretty quickly on that day.” “Sounds like I didn’t let things affect me too much despite all sorts of things I was going through.”

As a mood-tracker, the “happy meter” should be extremely efficient. For ease of use, I’m thinking of sliders. One main slider for general mood and different sliders for different moods and emotions. It would also be possible to extend the “entry form” on occasion, when the user wants to record more data about their mental state.

Of course, everything would be saved automatically and “sent to the cloud” on occasion. There could be a way to selectively broadcast some slider values. The app could conceivably send reminders to the user to update their mood at regular intervals. It could even serve as a “break reminder” feature. Though there are limitations on OS X iPhone in terms of interapplication communication, it’d be even neater if the app were able to record other things happening on the touch device at the same time, such as music which is playing or some apps which have been used.
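A minimal sketch of that data model, in Python for readability (every name, the sample slider values, and the reminder interval are my own assumptions for illustration, not anything from an actual app):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class MoodEntry:
    """One reading of the sliders, timestamped automatically."""
    mood: float                                   # main "general mood" slider, 0.0-1.0
    details: dict = field(default_factory=dict)   # optional named sliders, e.g. {"energy": 0.7}
    note: str = ""                                # extended "entry form," when desired
    when: datetime = field(default_factory=datetime.now)

class MoodLog:
    def __init__(self, reminder_interval=timedelta(hours=2)):
        self.entries = []                         # saved automatically on each record
        self.reminder_interval = reminder_interval

    def record(self, mood, **details):
        self.entries.append(MoodEntry(mood=mood, details=details))

    def trend(self):
        """Difference between the latest mood and the running average."""
        if not self.entries:
            return 0.0
        average = sum(e.mood for e in self.entries) / len(self.entries)
        return self.entries[-1].mood - average

log = MoodLog()
log.record(0.4, energy=0.3)
log.record(0.6)
log.record(0.8, energy=0.7)
print(round(log.trend(), 2))  # positive value: the latest entry sits above the day's average
```

The `trend` method is the kind of small computation behind observations like “my mood was improving pretty quickly on that day”; broadcasting and reminders would sit on top of a structure like this.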

Now, very obviously, there are lots of privacy issues involved. But what social networking services have taught us is that users can have pretty sophisticated notions of privacy management, if they’re given the chance. For instance, adept Facebook users may seem to indiscriminately post just about everything about themselves but are often very clear about what they want to “let out,” in context. So, clearly, every type of broadcasting should be controlled by the user. No opt-out here.

I know this all sounds crazy. And it all might be a very bad idea. But the thing about letting my mind wander is that it helps me remain happy.


Visualizing Touch Devices in Education

Took me a while before I watched this concept video about iPhone use on campus.

Connected: The Movie – Abilene Christian University

Sure, it’s a bit campy. Sure, some features aren’t available on the iPhone yet. But the basic concepts are pretty much what I had in mind.

Among things I like in the video:

  • The very notion of student empowerment runs at the centre of it.
  • Many of the class-related applications presented show an interest in the constructivist dimensions of learning.
  • Material is made available before class. Face-to-face time is for engaging in the material, not rehashing it.
  • The technology is presented as a way to ease the bureaucratic aspects of university life, relieving a burden on students (and, presumably, on everyone else involved).
  • The “iPhone as ID” concept is simple yet powerful, in context.
  • Social networks (namely Facebook and MySpace, in the video) are embedded in the campus experience.
  • Blended learning (called “hybrid” in the video) is conceived as an option, not as an obligation.
  • Use of the technology is specifically perceived as going beyond geek culture.
  • The scenarios (use cases) are quite realistic in terms of typical campus life in the United States.
  • While “getting an iPhone” is mentioned as a perk, it’s perfectly possible to imagine technology as a levelling factor with educational institutions, lowering some costs while raising the bar for pedagogical standards.
  • The shift from “eLearning” to “mLearning” is rather obvious.
  • ACU already does iTunes U.
  • The video is released under a Creative Commons license.

Of course, there are many directions things can go, from here. Not all of them are in line with the ACU dream scenario. But I’m quite hopeful, judging from some apparently random facts: that Apple may sell iPhones through universities, that Apple has plans for iPhone use on campuses, that many of the “enterprise features” of iPhone 2.0 could work in institutions of higher education, that the Steve Jobs keynote made several mentions of education, that Apple bundles iPod touch with Macs, that the OLPC XOXO is now conceived more as a touch handheld than as a laptop, that (although delayed) Google’s Android platform can participate in the same usage scenarios, and that browser-based computing apparently has a bright future.


Waiting for Other Touch Devices?

Though I’m interpreting Apple’s current back-to-school special to imply that we might not see radically new iPod touch models until September, I’m still hoping that there will be a variety of touch devices available in the not-so-distant future, whether or not Apple makes them.

Turns out, the rumour mill has some items related to my wish, including this one:

AppleInsider | Larger Apple multi-touch devices move beyond prototype stage

This could be excellent news for the device category as a whole and for Apple itself. As explained before, I’m especially enthusiastic about touch devices in educational contexts.

I’ve been lusting over an iPod touch since it was announced. I sincerely think that an iPod touch will significantly enhance my life. As strange as it may sound, especially given the fact I’m no gadget freak, I think frequently about the iPod touch. Think Wayne, in Wayne’s World 2, going to a music store to try a guitar (and being denied the privilege to play Stairway to Heaven). That’s almost me and the iPod touch. When I go to an Apple Store, I spend precious minutes with a touch.

Given my current pattern of computer use, the fact that I have no access to a laptop at this point, and the availability of WiFi connections at some interesting spots, I think an iPod touch will enable me to spend much less time in front of this desktop, spend much more time outside, and focus on my general well-being.

One important feature the touch has, which can have a significant effect on my life, is instant-on. My desktop still takes minutes to wake up from “Stand by.” Several times during the day, the main reason I wake my desktop is to make sure I haven’t received important email messages. (I don’t have push email.) For a number of reasons, what starts out as simple email-checking frequently ends up being a more elaborate browsing session. An iPod touch would greatly reduce the need for those extended sessions and let me “do other things with my life.”

Another reason a touch would be important in my life at this point is that I no longer have access to a working MP3 player. While I don’t technically need any portable media player to be happy, getting my first iPod just a few years ago was an important change in my life. I’ll still miss my late iRiver‘s recording capabilities, but it’s now possible to get microphone input on the iPod touch. Eventually, the iPod touch could become a very attractive tool for fieldwork recordings. Or for podcasting. Given my audio orientation, a recording-capable iPod touch could be quite useful. Even more so than iPod Classic with recording capabilities.

There are a number of other things which should make the iPod touch very useful in my life. A set of them have to do with expected features and applications. One is Omni Group’s intention to release their OmniFocus task management software through the iPhone SDK. As an enthusiastic user of OmniOutliner for most of the time I’ve spent on Mac OS X laptops, I can just imagine how useful OmniFocus could be on an iPod touch. Getting Things Done, the handheld version. It could help me streamline my whole workflow, the way OO used to do. In other words: OF on an iPod touch could be this fieldworker’s dream come true.

There are also applications to be released for Apple’s Touch devices which may be less “utilitarian” but still quite exciting. Including the Trism game. In terms of both “appropriate use of the platform” and pricing, Trism scores high on my list. I see it as an excellent example of what casual gaming can be like. One practical aspect of casual gaming, especially on such a flexible device as the iPod touch, is that it can greatly decrease stress levels by giving users “something to do while they wait.” I’ve had that experience with other handhelds. Whether it’s riding the bus or waiting for a computer to wake up from stand by, having something to do with your hands makes the situation just a tad bit more pleasant.

I’m also expecting some new features to eventually be released through software, including some advanced podcatching features like wireless synchronization of podcasts and, one can dream, a way to interact directly with podcast content. Despite having been an avid podcast listener for years, I think podcasts aren’t nearly “interactive” enough. Software on a touch device could solve this. But that part is wishful thinking. I tend to do a lot of wishlists. Sometimes, my daydreams become realities.

The cool thing is, it looks as though I’ll be able to get my own touch device in the near future. w00t! 😀

Even if Apple does release new Touch devices, the device I’m most likely to get is an iPod touch. Chances are that I might be able to get a used 8GB touch for a decent price. Especially if, as is expected for next Monday, Apple officially announces the iPhone for Canada (possibly with a very attractive data plan). As a friend was telling me, once Canadians are able to get their hands on an iPhone directly in Canada, there’ll likely be a number of used iPod touches for sale. With a larger supply of used iPod touches and a presumably lower demand for the same, we can expect a lower price.

Another reason I might get an iPod touch is that a friend of mine has been talking about helping me with this purchase. Though I feel a bit awkward about accepting this kind of help, I’m very enthusiastic at the prospect.

Watch this space for more on my touch life. 😉


Handhelds for the Rest of Us?

Ok, it probably shouldn’t become part of my habits but this is another repost of a blog comment motivated by the OLPC XO.

This time, it’s a reply to Niti Bhan’s enthusiastic blogpost about the eeePC: Perspective 2.0: The little eeePC that could has become the real “iPod” of personal computing

This time, I’m heavily editing my comments. So it’s less of a repost than a new blogpost. In some ways, it’s partly a follow-up to my “Ultimate Handheld Device” post (which ended up focusing on spatial positioning).

Given the OLPC context, the angle here is, hopefully, a culturally aware version of “a handheld device for the rest of us.”

Here goes…

I think there’s room in the World for a device category more similar to handhelds than to subnotebooks. Let’s call it “handhelds for the rest of us” (HftRoU). Something between a cellphone, a portable gaming console, a portable media player, and a personal digital assistant. Handheld devices exist which cover most of these features/applications, but I’m mostly using this categorization to think about the future of handhelds in a globalised World.

The “new” device category could serve as the inspiration for a follow-up to the OLPC project. One thing about which I keep thinking, in relation to the “OLPC” project, is that the ‘L’ part was too restrictive. Sure, laptops can be great tools for students, especially if these students are used to (or need to be trained in) working with and typing long-form text. But I don’t think that laptops represent the most “disruptive technology” around. If we think about their global penetration and widespread impact, cellphones are much closer to the leapfrog effect about which we all have been writing.

So, why not just talk about a cellphone or smartphone? Well, I’m trying to think both more broadly and more specifically. Cellphones are already helping people empower themselves. The next step might be to add selected features which bring them closer to the OLPC dream. Also, since cellphones are widely distributed already, I think it’s important to think about devices which may complement cellphones. I have some ideas about non-handheld tools which could make cellphones even more relevant in people’s lives. But they will have to wait for another blogpost.

So, to put it simply, “handhelds for the rest of us” (HftRoU) are somewhere between the OLPC XO-1 and Apple’s original iPhone, in terms of features. In terms of prices, I dream that it could be closer to that of basic cellphones which are in the hands of so many people across the globe. I don’t know what that price may be but I heard things which sounded like a third of the price the OLPC originally had in mind (so, a sixth of the current price). Sure, it may take a while before such a low cost can be reached. But I actually don’t think we’re in a hurry.

I guess I’m just thinking of the electronics (and global) version of the Ford T. With more solidarity in mind. And cultural awareness.

Google’s Open Handset Alliance (OHA) may produce something more appropriate to “global contexts” than Apple’s iPhone. In comparison with Apple’s iPhone, devices developed by the OHA could be better adapted to the cultural, climatic, and economic conditions of those people who don’t have easy access to the kind of computers “we” take for granted. At the very least, the OHA has good representation on at least three continents and, like the old OLPC project, the OHA is officially dedicated to openness.

I actually care fairly little about which teams will develop devices in this category. In fact, I hope that new manufacturers will spring up in some local communities and that major manufacturers will pay attention.

I don’t care about who does it, I’m mostly interested in what the devices will make possible. Learning, broadly speaking. Communicating, in different ways. Empowering themselves, generally.

One thing I have in mind, and which deviates from the OLPC mission, is that there should be appropriate handheld devices for all age-ranges. I do understand the focus on 6-12 year-olds the old OLPC had. But I don’t think it’s very productive to only sell devices to that age-range. Especially not in those parts of the world (i.e., almost anywhere) where generation gaps don’t imply that children are isolated from adults. In fact, as an anthropologist, I react rather strongly to the thought that children should be the exclusive target of a project meant to empower people. But I digress, as always.

I don’t tend to be a feature-freak but I have been thinking about the main features the prototypical device in this category should have. It’s not a rigid set of guidelines. It’s just a way to think out loud about technology’s integration in human life.

The OS and GUI, which seem like major advantages of the eeePC, could certainly be of the mobile/handheld type instead of the desktop/laptop type. The usual suspects: Symbian, NewtonOS, Android, Zune, PalmOS, Cocoa Touch, embedded Linux, Playstation Portable, WindowsCE, and Nintendo DS. At a certain level of abstraction, there are so many commonalities between all of these that it doesn’t seem very efficient to invent a completely new GUI/OS “paradigm,” like OLPC’s Sugar was apparently trying to do.

The HftRoU require some form of networking or wireless connectivity feature. WiFi (802.11*), GSM, UMTS, WiMAX, Bluetooth… Doesn’t need to be extremely fast, but it should be flexible and it absolutely cannot be cost-prohibitive. IP might make much more sense than, say, SMS/MMS, but a lot can be done with any kind of data transmission between devices. XO-style mesh networking could be a very interesting option. As VoIP has proven, voice can efficiently be transmitted as data so “voice networks” aren’t necessary.

My sense is that a multitouch interface with an accelerometer would be extremely effective. Yes, I’m thinking of Apple’s Touch devices and MacBooks. As well as about the Microsoft Surface, and Jeff Han’s Perceptive Pixel. One thing all of these have shown is how “intuitive” it can be to interact with a machine using gestures. Haptic feedback could also be useful but I’m not convinced it’s “there yet.”

I’m really not sure a keyboard is very important. In fact, I think that keyboard-focused laptops and tablets are the wrong basis for thinking about “handhelds for the rest of us.” Bear in mind that I’m not thinking about devices for would-be office workers or even programmers. I’m thinking about the broadest user base you can imagine. “The Rest of Us” in the sense of those not already using computers very directly. And that user base isn’t that invested in (or committed to) touch-typing. Even people who are very literate don’t tend to be extremely efficient typists. If we think about global literacy rates, typing might be one thing which needs to be leapfrogged. After all, a cellphone keypad can be quite effective in some hands and there are several other ways to input text, especially if typing isn’t too ingrained in you. Furthermore, keyboards aren’t that convenient in multilingual contexts (i.e., in most parts of the world). I say: avoid the keyboard altogether, make it available as an option, or use a virtual one. People will complain. But it’s a necessary step.

If the device is to be used for voice communication, some audio support is absolutely required. Even if voice communication isn’t part of it (and I’m not completely convinced it’s the one required feature), audio is very useful, IMHO (I’m an aural guy). In some parts of the world, speakers are much favoured over headphones or headsets. But I personally wish that at least some HftRoU could have external audio inputs/outputs. Maybe through USB or an iPod-style connector.

A voice interface would be fabulous, but there still seem to be technical issues with both speech recognition and speech synthesis. I used to work in that field and I keep dreaming, like Bill Gates and others do, that speech will finally take the world by storm. But maybe the time still hasn’t come.

It’s hard to tell what size the screen should be. There probably needs to be a range of devices with varying screen sizes. Apple’s Touch devices prove that you don’t need a very large screen to have an immersive experience. Maybe some HftRoU screens should in fact be larger than that of an iPhone or iPod touch. Especially if people are to read or write long-form text on them. Maybe the eeePC had it right. Especially if the devices’ form factor is more like a big handheld than like a small subnotebook (i.e., slimmer than an eeePC). One reason form factor matters, in my mind, is that it could make the devices “disappear.” That, and the difference between having a device on you (in your pocket) and carrying a bag with a device in it. Form factor was a big issue with my Newton MessagePad 130. As the OLPC XO showed, cost and power consumption are also important issues regarding screen size. I’d vote for a range of screens between 3.5 inch (iPhone) and 8.9 inch (eeePC 900) with a rather high resolution. A multitouch version of the XO’s screen could be a major contribution.

In terms of both audio and screen features, some consideration should be given to adaptive technologies. Most of us take for granted that “almost anyone” can hear and see. We usually don’t perceive major issues in the fact that “personal computing” typically focuses on visual and auditory stimuli. But if these devices truly are “for the rest of us,” they could help empower visually- or hearing-impaired individuals, who are often marginalized. This is especially relevant in the logic of humanitarianism.

HftRoU need as much autonomy from a power source as possible. Both in terms of the number of hours devices can be operated without needing to be connected to a power source and in terms of flexibility in power sources. Power management is a major technological issue with portable, handheld, and mobile devices. Engineers are hard at work, trying to find as many solutions to this issue as they can. This was, obviously, a major area of research for the OLPC. But I’m not even sure the solutions they have found are the only relevant ones for what I imagine HftRoU to be.

GPS could have interesting uses, but doesn’t seem very cost-effective. Other “wireless positioning systems” (à la Skyhook) might represent a more rational option. Still, I think positioning systems are one of the next big things. Not only for navigation or for location-based targeting. But for a set of “unintended uses” which are the hallmark of truly disruptive technology. I still remember an article (probably in the venerable Wired magazine) about the use of GPS/GIS for research into climate change. Such “unintended uses” are, in my mind, much closer to the constructionist ideal than the OLPC XO’s unified design can ever get.

Though a camera seems to be a given in any portable or mobile device (even the OLPC XO has one), I’m not yet that clear on how important it really is. Sure, people like taking pictures or filming things. Yes, pictures taken through cellphones have had a lasting impact on social and cultural events. But I still get the feeling that the main reason cameras are included on so many devices is for impulse buying, not as a feature to be used so frequently by all users. Also, standalone cameras probably have a rather high level of penetration already and it might be best not to duplicate this type of feature. But, of course, a camera could easily be a differentiating factor between two devices in the same category. I don’t think that cameras should be absent from HftRoU. I just think it’s possible to have “killer apps” without cameras. Again, I’m biased.

Apart from networking/connectivity uses, Bluetooth seems like a luxury. Sure, it can be neat. But I don’t feel it adds that much functionality to HftRoU. Yet again, I could be proven wrong. Especially if networking and other inter-device communication are combined. At some abstract level, there isn’t that much difference between exchanging data across a network and controlling a device with another device.

Yes, I do realize I pretty much described an iPod touch (or an iPhone without camera, Bluetooth, or cellphone fees). I’ve been lusting over an iPod touch since September and it does colour my approach. I sincerely think the iPod touch could serve as an inspiration for a new device type. But, again, I care very little about which company makes that device. I don’t even care about how open the operating system is.

As long as our minds are open.


Well-Rounded Bloggers

While I keep saying journalists have a tough time putting journalism in perspective, it seems that some blogging journalists are able to do it.

Case in point, ZDNet Editor in Chief Larry Dignan:

Anatomy of a ‘Blogging will kill you’ story: Why I didn’t make the cut | Between the Lines | ZDNet.com

I didn’t read the original NYT piece. On purpose. As I’ve tried to establish, I sometimes run away from things “everybody has read.” Typically, in the U.S., this means something which appeared in the NYT. To the extent that, for some people, “if it’s not in the Times, it didn’t happen.” (Such an attitude is especially tricky when you’re talking about, say, parts of Africa which aren’t at war.)

This time, I’m especially glad I read Dignan’s piece instead of the NYT one because I get the gist of the “story” and Dignan provides the kind of insight I enjoy.

Basic message: blogging can be as stressful as any job yet it’s possible to have a well-balanced life as a blogger.

Simple, useful, personal, insightful, and probably more accurate than the original piece.

Oh, sure. It’s nothing new. It’s not a major revelation for most people that it’s important to think about work/life balance.

Still… As it so happens, this specific piece helped me think about my own blogging activities in a somewhat different light. No, it’s not my job (though I do wish I had a writing job). And I don’t typically stress over it. I’m just thinking about where blogging fits in my life. And that’s helpful.

Even if it means yet another blogpost about blogging.


Product Differentiation

A chart meant to show the merits of Participatory Culture’s Miro video platform over a competing product from Joost.

Miro vs. Joost – Head to Head Comparison

Interesting approach. After the obligatory feature and content comparisons, details about the organizational structure for Participatory Culture (creators of Miro) and Joost N.V. (creators of the Joost program, with links to Skype and Kazaa). Even the “technological” comparison focuses on Miro’s openness.

It seems that social and cultural factors may be important for folks at Participatory Culture. Seems like members of their target audience should care about enterprise size (11 people for Miro vs. 100 for Joost) and about connections to the Open Source community (Participatory Culture is claiming that Joost uses Open Source without making its own source code available). However, this chart makes it a bit difficult to distinguish statements about these factors from typical marketing hype.


Readership to Comments Conversion

As mentioned recently (among other times), I’d like to get more reader comments than I do now. Haven’t been really serious about it as I’m not using any of the several methods I know to get more comments. For instance, I realise shorter, quick-and-dirty posts are likely to get me more comments than my longer ramblings. Everybody knows that inflammatory (Dvorak-like) posts get more comments. I also know that commenting on other people’s blog entries is the best way to receive comments from fellow bloggers. Not to mention generating something of a community aspect through my blog. And I could certainly ask more questions in my blog posts. So I guess I’m not doing my part here.

It’s not that I care so much about getting more comments. It’s just that I do like receiving comments on blog posts. Kind of puts me back into mailing-list mode. So I (frequently) end up wondering out loud about blog comments. I don’t really want to make more of an effort. I just want to have my cake and eat it too. (Yes, this one is an egotistical entry.)

One thing I keep noticing is that I get more comments when I get fewer readers. It’s a funny pattern. Sounds like the ice-cream/crime (ice-crime?) correlation but I’m not sure what the shared cause may be. So my tendency is to think that I might get more comments if I get lower readership. I know, I know: sounds like wishful thinking. But there’s something fun about this type of thinking.
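The shared-cause idea behind the ice-crime correlation can be illustrated with a tiny simulation (a toy sketch with made-up numbers, not actual traffic or comment data): a hidden factor drives both measures, so they end up strongly correlated even though neither causes the other.

```python
import random

# Toy illustration of a shared-cause ("ice-crime") correlation:
# a hidden confounder drives both variables, so they correlate
# even though neither causes the other. All numbers are made up.
random.seed(42)

hidden = [random.gauss(0, 1) for _ in range(1000)]  # e.g. "summer heat"
ice_cream = [h + random.gauss(0, 0.5) for h in hidden]
crime = [h + random.gauss(0, 0.5) for h in hidden]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = correlation(ice_cream, crime)
print(round(r, 2))  # strongly positive, despite no direct causal link
```

Remove the hidden factor and the correlation collapses, which is the whole point: correlation between readership and comments may say nothing about one causing the other.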

Now, how can I decrease my readership? Well, since a lot of readers seem to come to this blog through Web searches and Technorati links, I guess I could decrease my relevance in those contexts. Kind of like reverse-SEO.

As my unseemly large number of categories might be responsible for at least some of that search/Technorati traffic, getting rid of some of those categories might help.

On this WordPress.com blog, I’ve been using categories like tags. The Categories section of WordPress.com’s own blog post editor makes this use very straightforward. Instead of selecting categories, I just type them in a box and press the Add button. Since WordPress.com categories are also Technorati links, this categories-as-tags use made some sense. But I ended up with some issues, especially with standalone blog editors. Not only do most standalone blog editors have categories as selection items (which makes my typical tagging technique inappropriate) but the incredibly large number of categories on this blog makes it hard to use any blog editor which fetches those categories.

Not too long ago, WordPress.com added a tagging system to their (limited) list of supported features. Given my issues with categories, this seemed like a very appropriate feature for me to use. So the few posts I wrote since that feature became available have both tags and categories.

But what about all of those categories? Well, in an apparent effort to beef up the tagging features, WordPress.com added a Category to Tag Converter. (I do hope tagging and categories development on WordPress.com will continue at a steady pace.)

This converter is fairly simple. It lists all the categories used on your blog and you can either select which categories you want to convert into tags or press the Convert All button to get all categories transformed into tags. Like a neat hack, it does what it should do. I still had to spend quite a bit of time thinking about which categories I wanted to keep. Because tags and categories perform differently and because I’m not completely clear on how tags really work (are they also Technorati tags? Can we get a page of tags? A tag cloud?), it was a bit of a shot in the dark. I pretty much transformed into tags most categories which I had only applied to one post. It still leaves me with quite a few categories which aren’t that useful but I’ll sort these out later, especially after I see the effects tags may have on my blogging habits and on my blog itself.
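The rule of thumb I applied by hand (convert every category applied to only one post into a tag) is simple enough to sketch. This is just an illustration of that rule, not WordPress.com’s actual converter; the data structures here are hypothetical.

```python
from collections import Counter

def split_singletons(posts):
    """Given posts as lists of category names, return (keep, to_tags):
    categories used on more than one post stay categories; those
    applied to a single post get converted to tags."""
    counts = Counter(cat for cats in posts for cat in cats)
    keep = {c for c, n in counts.items() if n > 1}
    to_tags = {c for c, n in counts.items() if n == 1}
    return keep, to_tags

# Hypothetical example: each inner list is one post's categories.
posts = [
    ["blogging", "wordpress"],
    ["blogging", "comments"],
    ["music"],
]
keep, to_tags = split_singletons(posts)
print(sorted(keep))     # ['blogging']
print(sorted(to_tags))  # ['comments', 'music', 'wordpress']
```

In practice I still had to eyeball the borderline cases, but this captures the bulk of the sorting decision.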

As luck would have it, this change to my blog may end up decreasing my readership and increasing my comments. My posts might become more relevant for the categories and tags with which they are associated. And I might end up with sufficiently few categories that I could, in fact, use blog editors on this blog.

While I was converting these categories into tags, I ended up changing some categories. Silly me thought that by simply changing the name of a category to be the same as the name of another category, I’d end up with “merged categories” (all blog posts in the category I changed being included in the list for the new categories). Turns out, it doesn’t really work like that and I ended up with duplicate categories. Too bad. Just one of those WordPress.com quirks.

Speaking of WordPress.com itself. I do like it as a blogging system. It does/could have a few community-oriented features. I would probably prefer it if it were more open, like a self-hosted WordPress installation. But the WordPress.com team seems to mostly implement features they like or that they see as being advantageous for WordPress.com as a commercial entity. Guess you could say WordPress.com is the Apple of the blogging world! (And I say this as a Macaholic.)

Just found out that WordPress.com has a new feature called AnswerLinks, which looks like it can simplify the task of linking to some broad answers. Like several other WordPress.com features, this one looks like it’s meant more as cross-promotion than as a user-requested feature, but it still sounds interesting.

Still, maybe the development of tag features is signalling increased responsiveness on the part of WordPress.com. As we all know, responsiveness is a key to success in the world of online business ventures.


Facebook for Teaching and Learning

My friend Jay Pottharst has created a Facebook group for a section he’s teaching. Thought about doing the same thing myself but I still prefer Moodle for learning and teaching contexts.

One thing which could be quite useful is Jay’s Tips for people who are concerned about joining Facebook. Though he wrote those three tips for his students, they could apply more widely. They’re quite straightforward and sensible. (Which shouldn’t be surprising as Jay’s in math at Harvard. If he were to not make sense, the world might collapse.) Summarised (from Jay’s already brief tips): use privacy settings, think about using a pseudonym, get a friend to register for you.

Personally, I’d say that it’s probably best to heed the first of the three tips. While Fb does encourage members to post all sorts of potentially sensitive information, it’s good practice to treat carefully any information you may provide online. Despite the ongoing media coverage of privacy concerns on Facebook and elsewhere, the main point here is that there are varying degrees of privacy which can be applied to information distributed on- or offline.

There’s a lot more to say about learning/teaching uses of Fb.

Of course, there’s a Facebook group about Teaching & Learning with Facebook. And I created a moderated group for passionate teachers on Facebook.

One thing I like about Fb in educational contexts is that it encourages a type of candour or, at least, some amount of transparency. Public information about members of a class (registered students, instructors, assistants, auditors…) can be very helpful as a course progresses. In fact, I’ve been thinking quite a bit about Fb-like features in Moodle, such as elaborate profiles, ability to build links across courses, ad hoc groups, etc. Moodle and Facebook share several features and there could be a rich integration of features from both.