Monthly Archives: June 2008

Another Point for Wikipedia: Rousseau’s Citizenship

Compare the following two articles on Jean-Jacques Rousseau.

Jean-Jacques Rousseau — Britannica Online Encyclopedia
Jean-Jacques Rousseau – Wikipedia, the free encyclopedia

At the outset of the first entry, Rousseau is described unequivocally as a “French philosopher.” In the second entry, Rousseau is first described through his contributions to philosophy, literature, and music. The beginning of the biography section of that second entry contains a clear, straightforward, and useful statement about Rousseau’s citizenship. As this Wikipedia entry explains, and as is clear in Rousseau’s work, the well-known French-speaking thinker considered himself a citizen of Geneva throughout his life (which ended during the Old Swiss Confederacy, before Geneva became a Canton of Switzerland). While Rousseau’s connections to France are clearly mentioned, nowhere in the body of this Wikipedia article is Rousseau himself called “French.” The article has been classified in diverse Wikipedia categories which do contain the word “French,” but this association is fairly indirect. Though it may sound like the same thing, there’s a huge difference between putting Rousseau in a list of “French philosophers” or “French memoirists” and describing Rousseau as a “French philosopher.” In fact, Rousseau is also listed among “Swiss educationists” and “Swiss music theorists.” These classifications aren’t inaccurate as classifications, but they wouldn’t be very precise as descriptions.

As a dual Swiss/Canadian citizen myself, I’m quick to react to this type of imprecision, especially in formal contexts.

The Encyclopædia Britannica carries quite a bit of prestige and one would expect such issues as citizenship to be treated with caution. Seeing Rousseau mentioned in the “On This Day” bulletin, I accessed the Britannica entry on Rousseau via a single click. The first word of this entry was “French,” which seemed quite inappropriate to me. In fact, I hoped that the rest of the entry would contain an explanation of this choice. Maybe I had missed the fact that Rousseau became a naturalized French citizen at some point. Or maybe they just meant “French-speaker.” Or the descriptor was meant as a connection to philosophical trends associated with France…

Nope! Nothing like that.

Instead, a narrative on Rousseau’s life with lots of anecdotes, a few links to other entries, and some “peacock terms.” But no explanation of what is meant by “French philosopher.” This isn’t about accuracy as an absolute. The description could be accurate if it had been explained. But it wasn’t. Oh, there are some mentions of Rousseau’s “rights as a citizen” of Geneva, in connection with The Social Contract. But these statements are rather confusing, especially in the artificial context of an encyclopedia entry.

The Britannica entry was written by the late British political philosopher Maurice Cranston. Given the fact that Cranston died in 1993, one is led to believe that the Britannica entry on Rousseau has been left unmodified in the past 15 years. The Wikipedia version has been modified hundreds of times in the last year. Now, many of these modifications were probably trivial, some are likely to have been inappropriate, and (without looking at the details of the changes) there’s no guarantee that the current version is the best possible one. The point here isn’t about the rate of change. It’s about the opportunities for modifying an encyclopedia entry. One would think that, during the last fifteen years, the brilliant people at Britannica might have had the time to include a clarification as to Rousseau’s citizenship. In fact, a good deal of research on Rousseau’s work has probably happened in the meantime, and the Britannica entry on the scholar could have integrated some elements of that research.

Notice that I’m not, in fact, talking about factual accuracy as an abstract concept. I’m referring to the effects of encyclopedia entries on people’s understanding. In my mind, the Wikipedia entry on Jean-Jacques Rousseau makes it easy for readers to exercise their critical thinking. The Britannica entry on the same person makes it sound as though everything which could be said about Jean-Jacques Rousseau can be contained in a single narrative.

My guess is, Rousseau and his «Encyclopédistes» friends would probably prefer Wikipedia over Britannica.

But that’s just a guess.


Note-Taking on OSX iPhone

Attended Dan Dennett’s “From Animal to Person : How Culture Makes Up our Minds” talk, yesterday. An event hosted by UQAM’s Cognitive Science Institute. Should blog about this pretty soon. It was entertaining and some parts were fairly stimulating. But what surprised me the most had nothing to do with the talk: I was able to take notes efficiently using the onscreen keyboard on my iPod touch (my ‘touch).

As I blogged yesterday, in French, it took me a while to realize that switching keyboard language on the ‘touch also changed the dictionary used for text prediction. Quite sensible, but I hadn’t realized it. Writing in English with French dictionary predictions was rather painful. I basically had to tap to bypass the dictionary predictions on most words. Even “to” was transformed into “go” by the predictive keyboard, and I didn’t necessarily notice all the substitutions it made. Really, it was a frustrating experience.

It may seem weird that it would take me a while to realize that I could get an English predictive dictionary in a French interface. One reason for the delay is that I expect some degree of awkwardness in some software features, even with some Apple products. Another reason is that I wasn’t using my ‘touch for much text entry, as I’m pretty much waiting for OSX iPhone 2.0 which should bring me alternative text entry methods such as Graffiti, MessagEase and, one can dream, Dasher. If these sound like excuses for my inattention and absent-mindedness, so be it. 😀

At any rate, I did eventually find out that I could switch back and forth between French and English dictionaries for predictive text entry on my ‘touch’s onscreen keyboard. And I’ve been entering a bit of text through this method, especially answers to a few emails.

But, last night, I thought I’d give my ‘touch a try as a note-taking device. I’ve been using PDAs for a number of years and note-taking has been a major component of my PDA usage pattern. In fact, my taking notes on a PDA has been so conspicuous that some people seem to associate me quite directly with it. It may even have helped me garner a gadget-freak reputation, even though my attitude toward gadgets tends to be quite distinct from the gadget-freak pattern.

For perhaps obvious reasons, I’ve typically been able to train myself to use handheld text entry methods efficiently. On my NewtonOS MessagePad 130, I initially “got pretty good” at using the default handwriting recognition. This surprised a lot of people because human beings usually have a very hard time deciphering my handwriting. Still on the Newton, after switching to Graffiti, I became rather proficient at entering text using this shorthand method. On PalmOS devices (a HandSpring Visor and a series of Sony Clié devices), I usually doubled up on Graffiti and MessagEase. In all of these cases, I was typically able to take rather extensive notes during different types of oral presentations or simply when I thought about something. Though I mostly used paper to take notes in class during most of my academic coursework, PDA text entry was usually efficient enough that I could write down some key things in realtime. In fact, I’ve used PDAs rather extensively to take notes during ethnographic field research.

So, note taking was one of the intended uses for my iPod touch. But, again, I thought I would have to wait for text entry alternatives to the default keyboard before I could do it efficiently. So that’s why I was so surprised, yesterday, when I found out that I was able to efficiently take notes during Dennett’s talk using only the default OSX iPhone onscreen keyboard.

The key, here, is pretty much what someone at Apple was describing during some keynote session (might have been the “iPhone Roadmap” event): you need to trust the predictions. Yes, it sounds pretty “touchy-feely” (we’re talking about “touch devices,” after all 😉 ). But, well, it does work better than you would expect.

The difference is even more striking for me because I really was “fighting” the predictions. I couldn’t trust them because most of them were in the wrong language. But, last night, I noticed how surprisingly accurate the predictions could be, even with a large number of characters being mistyped. Part of it has to do with the proximity part of the algorithm. If I type “xartion,” the algorithm guesses that I’m trying to type “cartoon” because ‘x’ is close to ‘c’ and ‘i’ is close to ‘o’ (not an example from last night but one I just tried). The more confident you are that the onscreen keyboard will accurately predict what you’re trying to type, the more comfortably you can enter text.  The more comfortable you are at entering text, the more efficient you become at typing, which begins a feedback loop.
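
Out of curiosity, here’s a minimal sketch of how that kind of proximity-aware matching can work. This is not Apple’s actual algorithm (which is proprietary and certainly far more sophisticated); it’s a toy, with a simplified keyboard grid and a made-up cost function, just to show why a string like “xartion” can still point to “cartoon.”

```python
# Toy illustration of proximity-aware word prediction. Not Apple's algorithm;
# it only shows how adjacent-key slips can be "forgiven" when matching typed
# text against a dictionary.

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def neighbours(ch):
    """Keys roughly adjacent to `ch` on a (simplified, unstaggered) QWERTY grid."""
    for r, row in enumerate(QWERTY_ROWS):
        if ch in row:
            c = row.index(ch)
            near = set()
            for rr in range(max(0, r - 1), min(len(QWERTY_ROWS), r + 2)):
                for cc in range(max(0, c - 1), min(len(QWERTY_ROWS[rr]), c + 2)):
                    near.add(QWERTY_ROWS[rr][cc])
            near.discard(ch)
            return near
    return set()

def mismatch_cost(typed, candidate):
    """0 for an exact character match, 0.5 for an adjacent-key slip, 2 otherwise."""
    if len(typed) != len(candidate):
        return float("inf")
    cost = 0.0
    for t, c in zip(typed, candidate):
        if t != c:
            cost += 0.5 if c in neighbours(t) else 2.0
    return cost

def suggest(typed, dictionary):
    """Pick the dictionary word the typed string was most plausibly aiming at."""
    return min(dictionary, key=lambda word: mismatch_cost(typed, word))

print(suggest("xartion", ["caption", "carting", "martian", "cartoon"]))
# -> "cartoon": 'x'->'c' and 'i'->'o' are both adjacent-key slips
```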

Because I didn’t care that specifically about the content of Dennett’s talk, it was an excellent occasion to practise entering text on my ‘touch. The stakes of “capturing” text were fairly low. It almost became a game. When you add characters to a string which is bringing up the appropriate suggestion and then delete those extra characters, the suggestion is lost. In other words, using the example above, if I type “xartion,” I get “cartoon” as a suggestion and simply need to type a space or any non-alphabetic character to accept that suggestion. But if I go on typing “xartionu” and go back to delete the ‘u,’ the “cartoon” suggestion disappears. So I was playing a kind of game with the ‘touch as I was typing relatively long strings and trying to avoid extra characters. I lost a few accurate suggestions and had to retype these, but the more I trusted the predictive algorithm, the less frequently I had to retype.

During a 90-minute talk, I entered about 500 words. While it may not sound like much, I would say that it captured the gist of what I was trying to write down. I don’t think I would have written down much more if I had been writing on paper. Some of these words were the same as the ones Dennett uttered, but the bulk of those notes were my own thoughts on what Dennett was saying. So there were different cognitive processes going on at the same time, which greatly slows down each specific process. I would still say that I was able to follow the talk rather closely and that my notes are pretty much appropriate for the task.

Now, I still have some issues with entering text using the ‘touch’s onscreen keyboard.

  • While it makes sense to make it the default that all suggestions are accepted, there could be an easier way to refuse suggestions than tapping the box where that suggestion appears.
  • It might also be quite neat (though probably inefficient) if the original characters typed by the user were somehow kept in memory. That way, one could correct inaccurate predictions using the original string.
  • The keyboard is both very small for fingers and quite big for the screen.
  • Switching between alphabetic characters and numbers is somewhat inefficient.
  • While predictions have some of the same effect, the lack of a “spell as you type” feature makes it harder to be confident about avoiding typos.
  • Dictionary-based predictions are still inefficient in bilingual writing.
  • The lack of copy-paste changes a lot of things about text entry.
  • There’s basically no “command” or “macro” available during text entry.
  • As a fan of outliners, I miss being able to structure my notes directly as I enter them.
  • A voice recorder could do wonders in conjunction with text entry.
  • I really just wish Dasher were available on OSX iPhone.

All told, taking notes on the iPod touch is more efficient than I thought it’d be but less pleasant than I wish it could become.


Bilingualism on OSX iPhone

Perhaps a bit silly of me, but I hadn’t understood that by switching keyboards on my iPod touch, I was also switching dictionaries for the predictions.

Since the Canadian French keyboard works just as well in English as in French, I had only configured that keyboard. But I write more often in English than in French, and all sorts of French suggestions made writing in English very difficult.

Recently, I wanted to type the dollar sign (“$”) on my iPod touch but, every time I pressed that sign on the virtual keyboard, the euro sign (“€”) appeared instead. Very odd, especially since it really is a Canadian French keyboard (QWERTY, with “é” at the bottom right), and not a French keyboard (AZERTY, digits requiring the shift key…). So I added a U.S. keyboard to the configuration and not only can I now type the dollar sign, but the suggestions are now in U.S. English. Still not ideal, but very different from getting French suggestions while writing in English. Incidentally, I imagine there is also a personalized dictionary which doesn’t depend on a specific language, since some terms I type frequently show up in one language as in the other.

I really hope the OSX iPhone 2.0 update will bring various improvements on the “text input” side. Multilingual support already seems to be built in, especially for East Asian languages. But I also hope there will be new options for entering text. Personally, because I’m comfortable with these systems, I would love Graffiti, MessagEase and, wonder of wonders, Dasher. I’m hopeful for the first two, since they already exist on iPhone. As for Dasher, since it’s an open-source project, it might “only” take an OSX iPhone developer interested in Dasher to port it from Mac OS X to OSX iPhone. If that works, entering text on an iPod touch could become pleasant, efficient, and useful. In my opinion, Dasher would be very appropriate for iPhone-type devices (what I like to call “touch devices,” including devices made by companies other than Apple).


Visualizing Touch Devices in Education

Took me a while before I watched this concept video about iPhone use on campus.

Connected: The Movie – Abilene Christian University

Sure, it’s a bit campy. Sure, some features aren’t available on the iPhone yet. But the basic concepts are pretty much what I had in mind.

Among things I like in the video:

  • The very notion of student empowerment runs at the centre of it.
  • Many of the class-related applications presented show an interest in the constructivist dimensions of learning.
  • Material is made available before class. Face-to-face time is for engaging in the material, not rehashing it.
  • The technology is presented as a way to ease the bureaucratic aspects of university life, relieving a burden on students (and, presumably, on everyone else involved).
  • The “iPhone as ID” concept is simple yet powerful, in context.
  • Social networks (namely Facebook and MySpace, in the video) are embedded in the campus experience.
  • Blended learning (called “hybrid” in the video) is conceived as an option, not as an obligation.
  • Use of the technology is specifically perceived as going beyond geek culture.
  • The scenarios (use cases) are quite realistic in terms of typical campus life in the United States.
  • While “getting an iPhone” is mentioned as a perk, it’s perfectly possible to imagine technology as a levelling factor with educational institutions, lowering some costs while raising the bar for pedagogical standards.
  • The shift from “eLearning” to “mLearning” is rather obvious.
  • ACU already does iTunes U.
  • The video is released under a Creative Commons license.

Of course, there are many directions things can go, from here. Not all of them are in line with the ACU dream scenario. But I’m quite hopeful, judging from some apparently random facts: that Apple may sell iPhones through universities, that Apple has plans for iPhone use on campuses, that many of the “enterprise features” of iPhone 2.0 could work in institutions of higher education, that the Steve Jobs keynote made several mentions of education, that Apple bundles the iPod touch with Macs, that the OLPC XOXO is now conceived more as a touch handheld than as a laptop, that (although delayed) Google’s Android platform can participate in the same usage scenarios, and that browser-based computing apparently has a bright future.


Chilling Effect and Consensus

This write-up may sound a bit strong but the issue should, in fact, be discussed.

Making Light: The Associated Press wants to charge you $12.50 to quote five words from them

There are different ways to look at this, whether or not people take sides. My personal perspective is that these rules the AP is trying to set may contribute to a significant chilling effect and that, in the long run, AP publications will suffer. I also think that we should strive to reach some form of agreement as to rules involving copyright. Laws don’t come in a vacuum.


Judging Eastern Canadian Espresso

For an ethnographer, it’s always a treat to gain entry into a new group. These past few days, I was given a glimpse into an espresso scene which includes dedicated coffee professionals from diverse regions, and possibly even some new contacts within it.

I was acting as a sensory judge for the Eastern Regional competition of the Canadian Barista Championship, right here in Montreal.

Part of this event was blogged:

» Blog Archive » Bravo Montreal!

Though the event was held on Sunday (June 15) and Monday (June 16), I haven’t been able to report back on the experience until today. And I still haven’t completely debriefed with myself about this.

A general comment I can make is that there does seem to be a move toward an enhanced espresso scene in Eastern Canada. And although this recent competition’s first place was given to a barista from Ottawa (sincere congratulations, Laura!), I maintain that Montreal can be at the centre of a coffee renaissance.

Of course, I’m completely biased. And I’ve been talking about this same issue for a while. What is new, for me, is direct experience in Montreal’s espresso scene. Participant-observation in a very literal sense.

As a personal aside: though it’s the furthest thing from what I try to be, some people tend to find me intimidating, in daily life. As a judge, I was apparently quite intimidating, even to people who already knew me. I usually feel weird when people find me intimidating but, given the context, the reaction seems quite appropriate. I had to maintain a straight face and to refrain from interacting with competitors throughout the competition. Though it was a bit hard to do at first, it seems to have worked. And I felt very consistent, fair, and impartial throughout the competition.

Also somewhat personal, but more directly related to the task at hand, being a judge required me to temporarily change my perspective on espresso. Specifically, I had to separate my personal taste from the competition calibration. This barista championship has some strict guidelines, taken from the World Barista Championship. We weren’t judging whether or not the espresso was flavourful or complex. We were assessing the degree to which baristas were able to produce espresso which responded to some very specific criteria. To this ethical hedonist, it was a challenge. But it wasn’t as difficult a challenge as I expected.

Since my approach to food and beverages is based on reflective olfaction, the fact that aromas weren’t part of the judging calibration seemed especially surprising to me.

Obviously, I observed a lot more. And I could blog about my perception of the competitors. Yet because I was acting as a judge, talking about specific competitors would seem unethical. On the other hand, I will have occasions to talk with some former competitors and give them my impression of their work. This should be quite fun.

So, overall, I’m quite grateful to everyone involved for an occasion to get a glimpse into a part of Eastern Canada’s espresso scene.

Should be fun during the national competition of the Canadian Barista Championship, which will be held on October 21 and 21, during the Canadian Coffee & Tea Show. Not sure I’ll be a judge then, but I’m convinced it’ll be a fine event.


The Need for Social Science in Social Web/Marketing/Media (Draft)

[Been sitting on this one for a little while. Better RERO it, I guess.]

Sticking My Neck Out (Executive Summary)

I think that participants in many technology-enthusiastic movements which carry the term “social” would do well to learn some social science. Furthermore, my guess is that ethnographic disciplines are very well-suited to the task of teaching participants in these movements something about social groups.

Disclaimer

Despite the potentially provocative title and my explicitly stating a position, I mostly wish to think out loud about different things which have been on my mind for a while.

I’m not an “expert” in this field. I’m just a social scientist and an ethnographer who has been observing a lot of things online. I do know that there are many experts who have written many great books about similar issues. What I’m saying here might not seem new. But I’m using my blog as a way to at least write down some of the things I have in mind and, hopefully, discuss these issues thoughtfully with people who care.

Also, this will not be a guide on “what to do to be social-savvy.” Books, seminars, and workshops on this specific topic abound. But my attitude is that every situation needs to be treated in its own context, that cookie-cutter solutions often fail. So I would advise people interested in this set of issues to train themselves in at least a little bit of social science, even if much of the content of the training material seems irrelevant. Discuss things with a social scientist, hire a social scientist in your business, take a course in social science, and don’t focus on advice but on the broad picture. Really.

Clarification

Though they are all different, enthusiastic participants in “social web,” “social marketing,” “social media,” and other “social things online” do have some commonalities. At the risk of angering some of them, I’m lumping them all together as “social * enthusiasts.” One thing I like about the term “enthusiast” is that it can apply to both professionals and amateurs, to geeks and dabblers, to full-timers and part-timers. My target isn’t a specific group of people. I just observed different things in different contexts.

Links

Shameless Self-Promotion

A few links from my own blog, for context (and for easier retrieval):

Shameless Cross-Promotion

A few links from other blogs, to hopefully expand context (and for easier retrieval):

Some raw notes

  • Insight
  • Cluefulness
  • Openness
  • Freedom
  • Transparency
  • Unintended uses
  • Constructivism
  • Empowerment
  • Disruptive technology
  • Innovation
  • Creative thinking
  • Critical thinking
  • Technology adoption
  • Early adopters
  • Late adopters
  • Forced adoption
  • OLPC XO
  • OLPC XOXO
  • Attitudes to change
  • Conservatism
  • Luddites
  • Activism
  • Impatience
  • Windmills and shelters
  • Niche thinking
  • Geek culture
  • Groupthink
  • Idea horizon
  • Intersubjectivity
  • Influence
  • Sphere of influence
  • Influence network
  • Social butterfly effect
  • Cog in a wheel
  • Social networks
  • Acephalous groups
  • Ego-based groups
  • Non-hierarchical groups
  • Mutual influences
  • Network effects
  • Risk-taking
  • Low-stakes
  • Trial-and-error
  • Transparency
  • Ethnography
  • Epidemiology of ideas
  • Neural networks
  • Cognition and communication
  • Wilson and Sperber
  • Relevance
  • Global
  • Glocal
  • Regional
  • City-State
  • Fluidity
  • Consensus culture
  • Organic relationships
  • Establishing rapport
  • Buzzwords
  • Viral
  • Social
  • Meme
  • Memetic marketplace
  • Meta
  • Target audience

Let’s Give This a Try

The Internet is, simply, a network. Sure, technically it’s a meta-network, a network of networks. But that is pretty much irrelevant, in social terms, as most networks may be analyzed at different levels as containing smaller networks or being parts of larger networks. The fact remains that the ‘Net is pretty easy to understand, sociologically. It’s nothing new, it’s just a textbook example of something social scientists have been looking at for a good long time.

Though the Internet mostly connects computers (in many shapes or forms, many of them being “devices” more than the typical “personal computer”), the impact of the Internet is through human actions, behaviours, thoughts, and feelings. Sure, we can talk ad nauseam about the technical aspects of the Internet, but these topics have been covered a lot in the last fifteen years of intense Internet growth and a lot of people seem to be ready to look at other dimensions.

The category of “people who are online” has expanded greatly, in different steps. Here, Martin Lessard’s description of the Internet’s Six Cultures (Les 6 cultures d’Internet) is really worth a read. Martin’s post is in French but we also had a blog discussion in English, about it. Not only are there more people online but those “people who are online” have become much more diverse in several respects. At the same time, there are clear patterns on who “online people” are and there are clear differences in uses of the Internet.

Groups of human beings are the very basic object of social science. Diversity in human groups is the very basis for ethnography. Ethnography is simply the description of (“writing about”) human groups conceived as diverse (“peoples”). As simple as ethnography can be, it leads to a very specific approach to society which is very compatible with all sorts of things relevant to “social * enthusiasts” on- and offline.

While there are many things online which may be described as “media,” comparing the Internet to “The Mass Media” is often the best way to miss “what the Internet is all about.” Sure, the Internet isn’t about anything (apart from connecting computers which, in turn, connect human beings). But to get actual insight into the ‘Net, one probably needs to free herself/himself of notions relating to “The Mass Media.” Put bluntly, McLuhan was probably a very interesting person and some of his ideas remain intriguing, but fallacies abound in his work and the best thing to do with his ideas is to go beyond them.

One of my favourite examples of the overuse of “media”-based concepts is the issue of influence. In blogging, podcasting, or selling, the notion often is that, on the Internet as in offline life, “some key individuals or outlets are influential and these are the people by whom or channels through which ideas are disseminated.” Hence all the Technorati rankings and other “viewer statistics.” Old techniques and ideas from the times of radio and television expansion are used because it’s easier to think through advertising models than through radically new models. This is, in fact, when I tend to bring back my explanation of the “social butterfly effect”: quite frequently, “influence” online doesn’t go through specific individuals or outlets; even when it does, those people are influential by virtue of connecting to diverse groups, not because of the number of people they know. There are ways to analyze those connections, but “measuring impact” ultimately misses the point.

Yes, there is an obvious “qual. vs. quant.” angle, here. A major distinction between non-ethnographic and ethnographic disciplines in social sciences is that non-ethnographic disciplines tend to be overly constrained by “quantitative analysis.” Ultimately, any analysis is “qualitative” but “quantitative methods” are a very small and often limiting subset of the possible research and analysis methods available. Hence the constriction and what some ethnographers may describe as “myopia” on the part of non-ethnographers.

Gone Viral

The term “viral” is used rather frequently by “social * enthusiasts” online. I happen to think that it’s a fairly fitting term, even though it’s used more by extension than by literal meaning. To me, it relates rather directly to Dan Sperber’s “epidemiological” treatment of culture (see Explaining Culture) which may itself be perceived as resembling Dawkins’s well-known “selfish gene” ideas made popular by different online observers, but with something which I perceive to be (to use simple semiotic/semiological concepts) more “motivated” than the more “arbitrary” connections between genetics and ideas. While Sperber could hardly be described as an ethnographer, his anthropological connections still make some of his work compatible with ethnographic perspectives.

Analysis of the spread of ideas does correspond fairly closely with the spread of viruses, especially given the nature of contacts which make transmission possible. One need not do much to spread a virus or an idea. This virus or idea may find “fertile soil” in a given social context, depending on a number of factors. Despite the disadvantages of extending analogies and core metaphors too far, the type of ecosystem/epidemiology analysis of social systems embedded in uses of the term “viral” does seem to help some specific people make sense of different things which happen online. In “viral marketing,” the type of informal, invisible, unexpected spread of recognition through word of mouth does relate somewhat to the spread of a virus. Moreover, the metaphor of “viral marketing” is useful in thinking about the lack of control the professional marketer may have on how her/his product is perceived. In this context, the term “viral” seems useful.

The Social

While “viral” seems appropriate, the even more simple “social” often seems inappropriately used. It’s not a ranty attitude which makes me comment negatively on the use of the term “social.” In fact, I don’t really care about the use of the term itself. But I do notice that use of the term often obfuscates what is the obvious social character of the Internet.

To a social scientist, anything which involves groups is by definition “social.” Of course, some groups and individuals are more gregarious than others, some people are taken to be very sociable, and some contexts are more conducive to heightened social interactions. But social interactions happen in any context.
As an example I used (in French) in reply to this blog post, something as common as standing in line at a grocery store is representative of social behaviour and can be analyzed in social terms. Any Web page which is accessed by anyone is “social” in the sense that it establishes some link, however tenuous and asymmetric, between at least two individuals (someone who created the page and the person who accessed that page). Sure, it sounds like the minimal definition of communication (sender, medium/message, receiver). But what most people who talk about communication seem to forget (unlike Jakobson), is that all communication is social.

Sure, putting a comment form on a Web page facilitates a basic social interaction, making the page “more social” in the sense of “making that page easier to use for explicit social interaction.” And, of course, adding some features which facilitate the act of sharing data with one’s personal contacts is a step above the comment form in terms of making certain types of social interaction straightforward and easy. But, contrary to what Google Friend Connect implies, adding those features doesn’t suddenly make the site social. The site itself isn’t really social and, assuming some people visited it, there was already a social dimension to it. I’m not nitpicking on word use. I’m saying that using “social” in this way may blind some people to social dimensions of the Internet. And the consequences can be pretty harsh, in some cases, for overlooking how social the ‘Net is.

Something similar may be said about the “Social Web,” one of the many definitions of “Web 2.0” which is used in some contexts (mostly, the cynic would say, “to make some tool appear ‘new and improved’”). The Web as a whole was “social” by definition. Granted, it lacked the ease of social interaction afforded by such venerable Internet classics as Usenet and email. But it was already making some modes of social interaction easier to perceive. No, this isn’t about “it’s all been done.” It’s about being oblivious to the social potential of tools which already existed. True, the period in Internet history known as “Web 2.0” (and the onset of the Internet’s sixth culture) may be associated with new social phenomena. But there is little evidence that the association is causal, that new online tools and services created a new reality which suddenly made it possible for people to become social online. This is one reason I like Martin Lessard’s post so much. Instead of postulating the existence of a brand new phenomenon, he talks about the conditions for some changes in both Internet use and the form the Web has taken.

Again, this isn’t about terminology per se. Substitute “friendly” for “social” and similar issues might come up (friendship and friendliness being disconnected from the social processes which underlie them).

Adoptive Parents

Many “social * enthusiasts” are interested in “adoption.” They want their “things” to be adopted. This is especially visible among marketers but even in social media there’s an issue of “getting people on board.” And some people, especially those without social science training, seem to be looking for a recipe.

Problem is, there probably is no such thing as a recipe for technology adoption.

Sure, some marketing practices from the offline world may work online. Sometimes, adapting a strategy from the material world to the Internet is very simple and the Internet version may be more effective than the offline version. But it doesn’t mean that there is such a thing as a recipe. It’s a matter of either having some people who “have a knack for this sort of thing” (say, based on sensitivity to what goes on online) or of pure luck. Or it’s a matter of measuring success in different ways. But it isn’t based on a recipe. Especially not in the Internet sphere, which is changing so rapidly (despite some remarkably stable features).

Again, I’m partial to contextual approaches (“fully-customized solutions,” if you really must). Not just because I think there are people who can do this work very efficiently. But because I observe that “recipes” do little more than sell “best-selling books” and other items.

So, what can we, as social scientists, say about “adoption?” That technology is adopted based on the perceived fit between the tools and people’s needs/wants/goals/preferences. Not the simple “the tool will be adopted if there’s a need.” But a perception that there might be a fit between an amorphous set of social actors (people) and some well-defined tools (“technologies”). Recognizing this fit is extremely difficult and forcing it is extremely expensive (not to mention completely unsustainable). But social scientists do help in finding ways to adapt tools to different social situations.

Especially ethnographers. Because instead of surveys and focus groups, we challenge assumptions about what “must” fit. Our heads and books are full of examples which sound, in retrospect, as common sense but which had stumped major corporations with huge budgets. (Ask me about McDonald’s in Brazil or browse a cultural anthropology textbook, for more information.)

Recently, while reading about issues surrounding the OLPC’s original XO computer, I was glad to read the following:

John Heskett once said that the critical difference between invention and innovation was its mass adoption by users. (Niti Bhan, The emperor has designer clothes)

Not that this is a new idea, for social scientists. But I was glad that the social dimension of technology adoption was recognized.

In marketing and design spheres especially, people often think of innovation as individualized. While some individuals are particularly adept at leading inventions to mass adoption (Steve Jobs being a textbook example), “adoption comes from the people.” Yes, groups of people may be manipulated to adopt something “despite themselves.” But that kind of forced adoption is still dependent on a broad acceptance, by “the people,” of even the basic forms of marketing. This is very similar to the simplified version of the concept of “hegemony,” so common in both social sciences and humanities. In a hegemony (as opposed to a totalitarian regime), no coercion is necessary because the logic of the system has been internalized by people who are affected by it. Simple, but effective.

In online culture, adept marketers are highly valued. But I’m quite convinced that pre-online marketers already knew that they had to “learn society first.” One thing with almost anything happening online is that “the society” is boundless. Country boundaries usually make very little sense and the social rules of every local group will leak into even the simplest occasion. Some people seem to assume that the end result is a cultural homogenization, thereby not necessitating any adaptation besides the move from “brick and mortar” to online. Others (or the same people, actually) want to protect their “business models” by restricting tools or services based on country boundaries. In my mind, both attitudes are ineffective and misleading.

Sometimes I Feel Like a Motherless Child

I think the Cluetrain Manifesto can somehow be summarized through concepts of freedom, openness, and transparency. These are all very obvious (in French, the book title is something close to “the evident truths manifesto”). They’re also all very social.

Social scientists often become activists based on these concepts. And among social scientists, many of us are enthusiastic about the social changes which are happening in parallel with Internet growth. Not because of technology. But because of empowerment. People are using the Internet in their own ways, the one key feature of the Internet being its lack of centralization. While the lack of centralized control may be perceived as a “bad thing” by some (social scientists or not), there’s little argument that the ‘Net as a whole is out of the control of specific corporations or governments (despite the large degree of consolidation which has happened offline and online).

Especially in the United States, “freedom” is conceived as a basic right. But it’s also a basic concept in social analysis. As some put it: “somebody’s rights end where another’s begin.” But social scientists have a whole apparatus to deal with all the nuances and subtleties which are bound to come from any situation where people’s rights (freedom) may clash or even simply be interpreted differently. Again, not that social scientists have easy, ready-made answers on these issues. But we’re used to dealing with them. We don’t interpret freedom as a given.

Transparency is fairly simple and relates directly to how people manage information itself (instead of knowledge or insight). Radical transparency is giving as much information as possible to those who may need it. Everybody has a “right to learn” a lot of things about a given institution (instead of “right to know”), when that institution has a social impact. Canada’s Access to Information Act is quite representative of the move to transparency and use of this act has accompanied changes in the ways government officials need to behave to adapt to a relatively new reality.

Openness is an interesting topic, especially in the context of the so-called “Open Source” movement. Radical openness implies participation by outsiders, at least in the form of verbal feedback. The cluefulness of “opening yourself to your users” is made obvious in the context of successes by institutions which have at least portrayed themselves as open. What’s unfortunate, to my mind, is that many institutions now attempt to position themselves on the openness end of the “closed/proprietary to open/responsive” scale without doing much work to really open themselves up.

Communitas

Mottoes, slogans, and maxims like “build it and they will come,” “there’s a sucker born every minute,” “let them have cake,” and “give them what they want” all fail to grasp the basic reality of social life: “they” and “we” are linked. We’re all different and we’re all connected. We all take part in groups. These groups are all associated with one another. We can’t simply behave the same way with everyone. Identity has two parts: sense of belonging (to an “in-group”) and sense of distinction (from an “out-group”). “Us/Them.”

Within the “in-group,” if there isn’t any obvious hierarchy, the sense of belonging can take the form that Victor Turner called “communitas,” which happens in situations giving real meaning to the notion of “community.” “Community of experience,” “community of practice.” Eckert and Wittgenstein brought to online networks. In a community, contacts aren’t always harmonious. But people feel they fully belong. A network isn’t the same thing as a community.

The World Is My Oyster

Despite the so-called “Digital Divide” (or, more precisely, the maintenance online of global inequalities), the ‘Net is truly “Global.” So is the phone, now that cellphones are accomplishing the “leapfrog effect.” But this one Internet we have (i.e., not Internet2 or other such specialized meta-network) is reaching everywhere through a single set of compatible connections. The need for cultural awareness is increased, not alleviated by online activities.

Release Early, Release Often

Among friends, we call it RERO.

The RERO principle is a multiple-pass system. Instead of waiting for the right moment to release a “perfect product” (say, a blogpost!), the “work in progress” is provided widely, garnering feedback which will be integrated in future “product versions.” The RERO approach can be unnerving to “product developers,” but it has proved its value in online-savvy contexts.

I use “product” in a broad sense because the principle applies to diverse contexts. Furthermore, the RERO principle helps shift the focus from “product,” back into “process.”

The RERO principle may imply some “emotional” or “psychological” dimensions, such as humility and the acceptance of failure. At some level, differences between RERO and “trial-and-error” methods of development appear insignificant. Those who create something should not expect the first try to be successful and should recognize mistakes to improve on the creative process and product. This is similar to the difference between “rehearsal” (low-stakes experimentation with a process) and “performance” (with responsibility, by the performer, for evaluation by an audience).

Though applications of the early/often concept to social domains are mostly satirical, there is a social dimension to the RERO principle. Releasing a “product” implies a group, a social context.

The partial and frequent “release” of work to “the public” relates directly to openness and transparency. Frequent releases create a “relationship” with human beings. Sure, many of these are “Early Adopters” who are already overrepresented. But the rapport established between an institution and people (users/clients/customers/patrons…) can be transferred more broadly.

Releasing early seems to shift the limit between rehearsal and performance. Instead of being able to make mistakes on your own, your mistakes are shown publicly and your success is directly evaluated. Yet a somewhat reverse effect can occur: evaluation of the end-result becomes a lower-stakes rating at different stages of the project because expectations have shifted to the “lower” end. This is probably the logic behind Google’s much discussed propensity to call all its products “beta.”

While the RERO principle does imply a certain openness, the expectation that each release might integrate all the feedback “users” have given is not fundamental to releasing early and frequently. The expectation is set by a specific social relationship between “developers” and “users.” In geek culture, especially when users are knowledgeable enough about technology to make elaborate wishlists, the expectation to respond to user demand can be quite strong, so much so that developers may perceive a sense of entitlement on the part of “users” and grow some resentment out of the situation. “If you don’t like it, make it yourself.” Such a situation is rather common in FLOSS development: since “users” have access to the source code, they may be expected to contribute to the development project. When “users” not only fail to fulfil expectations set by open development but even have the gumption to ask developers to respond to demands, conflicts may easily occur. And conflicts are among the things which social scientists study most frequently.

Putting the “Capital” Back into “Social Capital”

In the past several years, “monetization” (transforming ideas into currency) has become one of the major foci of anything happening online. Anything which can be a source of profit generates an immediate (and temporary) “buzz.” The value of anything online is measured through typical currency-based economics. The relatively recent movement toward “social” whatever is not only representative of this tendency, but might be seen as its climax: nowadays, even social ties can be sold directly, instead of being part of a secondary transaction. As some people say, “The relationship is the currency” (or “the commodity,” or “the means to an end”). Fair enough, especially if these people understand what social relationships entail. But still strange, in context, to see people “selling their friends,” sometimes in a rather literal sense, when social relationships are conceived as valuable. After all, “selling the friend” transforms that relationship, diminishes its value. Ah, well, maybe everyone involved is just cynical. Still, even their cynicism contributes to the system. But I’m not judging. Really, I’m not. I’m just wondering…
Anyhoo, the “What are you selling anyway” question makes as much sense online as it does with telemarketers and other greed-focused strangers (maybe “calls” are always “cold,” online). It’s just that the answer isn’t always so clear when the “business model” revolves around creating, then breaking a set of social expectations.
Me? I don’t sell anything. Really, not even my ideas or my sense of self. I’m just not good at selling. Oh, I do promote myself and I do accumulate social capital. As social butterflies are wont to do. The difference is, in the case of social butterflies such as myself, no money is exchanged and the social relationships are, hopefully, intact. This is not to say that friends never help me or never receive my help in a currency-friendly context. It mostly means that, in our cases, the relationships are conceived as their own rewards.
I’m consciously not taking the moral high ground, here, though some people may easily perceive this position as the morally superior one. I’m not even talking about a position. Just about an attitude to society and to social relationships. If you will, it’s a type of ethnographic observation from an insider’s perspective.

Makes sense?


Addressing Issues vs. Assessing Unproblematic Uses (Rant)

A good example of something I tend to dislike, in geek culture. Users who are talking about issues they have are getting confronted with evidence that they may be wrong instead of any acknowledgment that there might be an issue. Basically, a variant of the “blame the victim” game.

Case in point, a discussion about memory usage by Firefox 3, started by Mahmoud Al-Qudsi, on his NeoSmart blog:

Firefox 3 is Still a Memory Hog

Granted, Al-Qudsi’s post could be interpreted as a form of provocation, especially given its title. But thoughtful responses are more effective than “counterstrike,” in cases in which you want to broaden your user base.

So the ZDNet response, unsurprisingly, is to run some benchmark tests. Under their conditions, they get better results for Firefox 3 than for other browsers, excluding Firefox 2 which was the basis of Al-Qudsi’s comment. The “only logical conclusion” is that the problem is with the user. Not surprising in geek culture which not only requires people to ask questions in a very specific way but calls that method the “smart” one.

How to ask a question the Smart Way

One issue with a piece like the ZDNet one is that those who are, say, having issues with a given browser are less likely to get those issues addressed than if the reaction had been more thoughtful. It’s fine to compare browsers or anything else under a standardized set of experimental conditions. But care should be taken to troubleshoot the issues users are saying they have. In other words, it’s especially important for tech journalists and developers to look at what users are actually saying, even if the problem is in the way they use the software. Sure, users can say weird things. But if developers don’t pay attention to users, chances are that they won’t pay attention to the tools the developers build.

The personal reason I’m interested in this issue is that I’ve been having memory issues with both Firefox 2 and Flock 1.4 (my browser of choice, at least on Windows). I rarely have the same issues in Internet Explorer 7 or Safari 3. It might be “my problem,” but it still means that, as much as I love Mozilla-based browsers, I feel compelled to switch to other browsers.

A major “selling point” for Firefox 3 (Ff3) is a set of improvements in memory management. Benchmarks and tests can help convince users to give Ff3 a try but those who are having issues with Ff3’s memory management should be the basis of a thoughtful analysis in terms of patterns in memory usage. It might just be that some sites take up more memory in Ff than in other browsers, for reasons unknown. Or that there are settings and/or extensions which are making Ff more memory hungry. The point is, this can all be troubleshot.

Helping users get a more pleasant experience out of Ff should be a great way to expand Ff’s userbase. Even if the problem is in the way they use Ff.

Ah, well…

Browser memory usage – the good, the bad, and the down right ugly! | Hardware 2.0 | ZDNet.com


Bubbling Wildly

Lag time on last night’s batches was quite short. In fact, the “Mighty S-04” lives up to its reputation: in a large bucket with lots of headspace and a lid that wasn’t sealing properly, the “So Far” was churning away less than two hours after pitching. I had just started to sleep and the airlock was bubbling very vigorously. As that fermenter is in my bedroom, the noise woke me up. I loosened the lid on top of that bucket to make sure I wouldn’t get a spill. There wasn’t that much kräusen but fermentation was clearly vigorous, already. This morning, it has that distinctive S-04 smell and a good, thick kräusen.

The “Lo Five,” which I left in the basement, is also showing clear signs of fermentation. Thick kräusen, frequent bubbles, yeast smell… As it’s the first batch in which I use US-05 yeast, I didn’t know what to expect. The smell is actually fairly similar to the S-04 but less assertive. The fact that I’m using this yeast for the first time is also a reason I couldn’t attest to its fault tolerance. Judging from the smell, at least, I’d say that it’s as “robust” as S-04 and Ringwood but that it might still make for a cleaner profile which doesn’t hide small flaws really well.

My intention is to “drop” the fermenting beer away from the yeast pretty quickly on the “So Far” to intensify diacetyl. Can’t find the reference, but some British brewery is still doing this, with special equipment. Maybe it’s called “dumping” or “crashing.” By racking the beer early, the yeast isn’t able to “chew up” the diacetyl, so more of it is left in the finished beer. Yeah, I know. Diacetyl isn’t typical of mild ales. But I tend to like some level of diacetyl in British-style beers. In this case, if the beer is complex enough despite its low ABV, a bit of diacetyl could round off the finished beer.

As is often the case in homebrewing, I’m already thinking about other batches I might want to do. One could be a doubled-up version of “So Far” (twice the grainbill, maybe twice the hops). Another would be a light weizen, brewed with Lallemand’s Munich strain.

It’s too much fun.


Brewing Mildly

[This is one of my geekier posts, here. As a creative generalist, I typically “write for a general audience.” Whether or not I have an actual audience in mind, my default approach to blog writing is to write as generally as possible. But this post is about homebrewing. As these things go, it’s much easier to write when you assume that “people know what you’re talking about.” In this case, some basic things about all-grain brewing at home. Not that I’m using that obscure a terminology, here. But it’s a post which could leave some people behind, scratching their heads. If it’s your case, sorry. But, you know, “it’s my blog day and I’ll geek if I want to.” As for the verb tenses and reference to time, I’m writing much of this as I go along.]

So…

Generated a recipe for a Mild (11A) using BeerTools and posted it in their recipe library.

So Far (So Good)/Lo Five

That generated recipe is serving as the basis for two recipes I’m mashing right now (June 8, 2008). One, “So Far (So Good),” close to this generated one, is a single-step (infusion) mash with a simple grainbill, to be fermented with S-04 (so it’s “British style”). The other (“Lo Five”) is a multi-step mash with a more complex grainbill (added Carastan, Caramel 80, and chocolate). “Lo Five” is to be fermented with US-05 (so it’s something of an “American style”).

I’ve copied the main details of this generated recipe in the standalone BeerTools Pro desktop application (version 1.5.9 beta, WinXP; also available on Mac OS X). I then tweaked that generated recipe a tiny bit to suit my needs for the “So Far.”

Here’s the version I used, before I started brewing:

So Far (So Good)

11-A Mild

BeerTools Pro Color Graphic

Size: 5.0 gal
Efficiency: 75.0%
Attenuation: 75.0%
Calories: 114.99 kcal per 12.0 fl oz

Original Gravity: 1.035 (1.030 – 1.038)
|=================#==============|

Terminal Gravity: 1.009 (1.008 – 1.013)
|==========#=====================|

Color: 13.81 (12.0 – 25.0)
|==========#=====================|

Alcohol: 3.41% (2.8% – 4.5%)
|=============#==================|

Bitterness: 16.9 (10.0 – 25.0)
|===============#================|

Ingredients:

1.0 ea Fermentis S-04 Safale S-04
2.5 kg 6-Row Brewers Malt
150 g Brown Malt
150 g Vienna Malt
150 g Munich TYPE I
200 g Special B – Caramel malt
0.5 oz Challenger (8.0%) – added during boil, boiled 90 min
11.0 g Strisselspalt (3.3%) – added during boil, boiled 10 min

Schedule:

Ambient Air: 70.0 °F
Source Water: 130.0 °F
Altitude: 0.0 m

00:13:00 Mash In: Mash Water: 1.91 gal; Strike: 173.56 °F; Target: 158.0 °F
01:33:00 Sacc 1: Rest: 80 min; Final: 158.0 °F
02:13:00 Fly Sparge: Sparge Volume: 4.0 gal; Sparge Temperature: 168.0 °F; Runoff: 3.48 gal

Notes

Trying for a relatively simple mild. May skip the Strisselspalt. Doing two similar recipes.

Results generated by BeerTools Pro 1.5.9b
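
As a quick sanity check on those numbers, the predicted original gravity can be roughly reproduced from the grainbill with the usual points-per-pound-per-gallon arithmetic. The extract potentials in this sketch are my own assumed, generic figures, not whatever BeerTools has in its ingredient database:

```python
# Back-of-the-envelope check of the predicted OG for "So Far (So Good)".
# The extract potentials (points/lb/gal) below are assumed, generic values.

GRAIN_BILL = [
    # (kilograms, assumed points/lb/gal)
    (2.5, 35),    # 6-row brewers malt
    (0.150, 32),  # brown malt
    (0.150, 35),  # Vienna malt
    (0.150, 35),  # Munich type I
    (0.200, 30),  # Special B
]

KG_TO_LB = 2.2046
BATCH_GALLONS = 5.0
EFFICIENCY = 0.75

points = sum(kg * KG_TO_LB * ppg for kg, ppg in GRAIN_BILL) * EFFICIENCY / BATCH_GALLONS
print(f"Estimated OG: {1 + points / 1000:.3f}")  # about 1.036, same ballpark as the 1.035 above
```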

I then cloned that recipe and tweaked it a bit more to get my “Lo Five” recipe. Here’s the recipe I used before I started brewing:

Lo Five

11-A Mild

BeerTools Pro Color Graphic

Size: 5.0 gal
Efficiency: 75.0%
Attenuation: 75.0%
Calories: 106.17 kcal per 12.0 fl oz

Original Gravity: 1.032 (1.030 – 1.038)
|============#===================|

Terminal Gravity: 1.008 (1.008 – 1.013)
|========#=======================|

Color: 17.05 (12.0 – 25.0)
|==============#=================|

Alcohol: 3.15% (2.8% – 4.5%)
|===========#====================|

Bitterness: 15.0 (10.0 – 25.0)
|=============#==================|

Ingredients:

1.0 ea Fermentis US-05 Safale US-05
2 kg 6-Row Brewers Malt
150 g Brown Malt
150 g Vienna Malt
150 g Munich TYPE I
200 g Special B – Caramel malt
105 g Light Carastan
125 g 2-Row Caramel Malt 80L
50 g 2-Row Chocolate Malt
0.5 oz Challenger (8.0%) – added during boil, boiled 90 min

Schedule:

Ambient Air: 70.0 °F
Source Water: 130.0 °F
Altitude: 0.0 m

00:03:00 Mash In: Mash Water: 2.02 gal; Strike: 130.09 °F; Target: 122.0 °F
00:21:36 Ramp 1: Heat: 18.6 min; Target: 150 °F
00:41:36 Sacc 1: Rest: 20 min; Final: 150.0 °F
01:00:12 Ramp 2: Heat: 18.6 min; Target: 158 °F
01:20:12 Sacc 2: Rest: 20 min; Final: 158.0 °F
01:38:49 Mash Out: Heat: 18.6 min; Target: 170.0 °F
02:18:49 Fly Sparge: Sparge Volume: 3.72 gal; Sparge Temperature: 168.0 °F; Runoff: 3.23 gal

Notes

Trying for a relatively simple mild. May skip the Strisselspalt. Doing two similar recipes.

Results generated by BeerTools Pro 1.5.9b

On a whim (and because I just found out I had some), I switched the hops on the Lo Five to Crystal at 4.9% A.A.: one half-ounce plug as first wort hopping, and another half-ounce of pellets for the full boil. According to BeerTools Pro, it brings my bitterness level either below the BJCP-sanctioned 10–25 IBUs for a Mild (11A), if I don’t count the boil time (FWH isn’t supposed to contribute much bitterness), or to about two-thirds of the way up that range if I count the full boil time.
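
For anyone who wants to check those numbers, here’s a minimal sketch of a Tinseth-style estimate. The batch size, gravity, and the choice of the Tinseth formula are my own assumptions for illustration, not necessarily how BeerTools Pro does its math.

```python
import math

def tinseth_ibu(oz, alpha_acid, boil_min, gallons, gravity):
    """Tinseth IBU estimate for a single hop addition."""
    mg_per_l = alpha_acid * oz * 7490 / gallons            # alpha acids delivered, mg/L
    bigness = 1.65 * 0.000125 ** (gravity - 1.0)           # correction for wort gravity
    boil_factor = (1 - math.exp(-0.04 * boil_min)) / 4.15  # utilization as a function of time
    return mg_per_l * bigness * boil_factor

# Hypothetical "Lo Five" figures: 5 gal at roughly 1.032, with two
# half-ounce additions of Crystal at 4.9% alpha acid.
boil_addition = tinseth_ibu(0.5, 0.049, 90, 5.0, 1.032)  # the 90-minute boil addition
fwh_addition = tinseth_ibu(0.5, 0.049, 90, 5.0, 1.032)   # FWH counted as a full-boil addition

print(f"FWH counted as zero bitterness: {boil_addition:.1f} IBU")                  # ~11 IBU
print(f"FWH counted as a 90-minute boil: {boil_addition + fwh_addition:.1f} IBU")  # ~21 IBU
```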

The “brown malt” is actually some 6-row I roasted in a corn popper. The one in the “Lo Five” is darker (roasted longer) than the one in the “So Far.”

I’ve been having a hard time with the temperature for the “So Far.” I ended up doing a pseudo-decoction and adding some hot water to raise the temperature to something closer to even the low end of the optimal range for saccharification. This is the first batch I’ve mashed in a mash tun I got from friends in Austin. I used the strike temperature from the BeerTools Pro software (174 °F) but I hadn’t set the “heat capacity” and “heat transfer coefficient.” As I usually mash in my Bruheat, I rarely think about heat loss.
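
Incidentally, that suggested strike temperature is essentially the textbook initial-infusion formula, which assumes the tun itself soaks up no heat. Here’s a minimal sketch, with the grain weight and water-to-grist ratio estimated from the recipe above (my estimates, not BeerTools Pro’s internals):

```python
def strike_temp(target_f, grain_temp_f, ratio_qt_per_lb):
    """Textbook initial-infusion formula; ignores heat absorbed by the tun."""
    return (0.2 / ratio_qt_per_lb) * (target_f - grain_temp_f) + target_f

# Rough "So Far" numbers: ~6.9 lb of grain, 1.91 gal (about 7.6 qt) of
# mash water, grain at room temperature, aiming for a 158 °F rest.
ratio = 7.6 / 6.9  # roughly 1.1 qt/lb
print(round(strike_temp(158.0, 70.0, ratio), 1))  # ~174 °F, close to the suggested strike
```

A cold tun absorbs several degrees’ worth of that heat, which would explain why the mash landed below target.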

On the other hand, I largely overshot the strike temperature on the Lo Five. Two main reasons, AFAICT. First, the Bruheat I mashed the Lo Five in had also served to preheat some sparge water, so it was still pretty hot. Second, the hot water from the sink was around 140 °F instead of the 130 °F I had expected. In this case, overshooting the strike temperature wasn’t very problematic. The idea was to do a kind of protein rest, but that’s really not important with the well-modified malts we all use.

Sparging the Lo Five’s mash was a real treat. Very smooth flow from HLT to MT to vessel. Plus, the smell of the Crystal hops in the first runnings was just fabulous. Because I “recirculate” during the mash with my Bruheat, I didn’t have to recirculate at runoff, yet the runnings were quite clear. My sparge water was pretty much at the perfect temperature and I had just a bit of extra sparge water (which I’m using in the So Far). I poured off the liquid from beneath the false bottom and it looked really nice: rather clear and dark. After this, the grain bed was almost completely dry.

I did have to recirculate the So Far quite a bit. A good four litres, maybe even more. Still, it remains cloudier than most batches I’ve seen, possibly because some starch remains. I didn’t check conversion, but I mashed long enough that I assumed it was complete. Still, with a very low mash temperature, I may have needed an even longer mash. Actually, the mash is much cloudier than the runnings, which eventually became relatively clear.

May need to heat a bit of sparge water. Stopping the sparge in the meantime. A bit like a late mashout, it may help convert some of the remaining starch.

Oops! Was waiting for the Lo Five to boil. Thought it was taking a long time. Usually, the Bruheat gets to boiling pretty quickly. Eventually noticed that the element wasn’t running. The thermostat was still working (clicking sound when the wort reached the set temperature) but I wasn’t hearing the sound of the element heating the wort. Unplugged the Bruheat, pressed the reset button, etc. Still wasn’t working. Was getting ready to try the clothes dryer to see if there was electricity coming through when I noticed it was in fact the dryer that I had plugged back in, not the Bruheat.

Ah, well…

Ok…

One thing I found interesting is that while the wort was cooling off instead of heating up, the hop aroma wasn’t as pleasant as before. As the wort heated up again, the pleasant profile from the Crystal came right back.

These are actually old hops, kept in a vacuum-sealed package. This specific package was a bit loose, as if the vacuum-seal hadn’t worked. I expected the hops to smell old. But they smelled really nice and fresh. Their colour is a bit off, but I trust their smell more than their colour.

Eventually got a rolling boil. Because of FWH, I didn’t skim the break material from the wort, which makes for a very different boil. In fact, when I added some of the runnings I had left on the side (to avoid a boilover), the effect was quite interesting.

Added back the rest of the runnings, waited to get a rolling boil again, then added the hops. Had to be careful not to get a boilover as the hops in the unskimmed wort created a lot of foam.

At the same time, I’m heating some sparge water for the So Far. Yes, once again. Guess I really underestimated how much sparge water I needed for this one. Strange.

It should be hot enough, now.

This time, I had let the top of the grain bed dry up a bit. This might not be so good. So I added enough water to cover the grain bed and I’ll wait a bit before I resume running off.

Overall, I’ve been much less careful with the “So Far” than with the “Lo Five.” My reasoning has to do with the fact that I perceive S-04 (Whitbread; PDF) to be a “stronger” yeast than the US-05 (PDF; aka US-56; the Sierra Nevada strain, apparently). For one thing, the optimal temperature range for the S-04 is somewhat higher than for the US-05: although Fermentis rates both at 15 °C to 24 °C, S-04 is known to sustain higher fermentation temperatures than most other strains (apart from some Belgian strains like Chouffe’s yeast). The flavour profile from S-04 also tends to be more estery than that of US-05, so it can increase the complexity of the finished beer and even cover up some small flaws. Plus, “Mighty S-04” is one of those strains which can be (and has been) used in open fermentation and continually repitched. A bit like the “famously robust” Ringwood yeast. So, unlike a lager strain, this is not a yeast strain that Peter McAuslan would likely call “wimpy.” The “strength” of this strain is obvious in the fact that it ferments very vigorously and quickly.

Besides, I have a slurry of S-04 (graciously donated by a friend) and only a rehydrated pack of US-05. With more yeast comes a certain safety.

Another whim: I added a plug of Crystal to the “Lo Five” about two minutes before the end of boil. Shouldn’t get any bitterness from this, may get some flavour and, quite likely, some nice hop aroma.

So I was somewhat careless with the “So Far.” Conversely, I was rather careful with the “Lo Five.” Not more than for the typical homebrew batch, but more than with the “So Far.” At every step, I started with the “Lo Five” and let the “So Far” wait for its turn, when I wasn’t too busy with the “Lo Five.” For instance, I only started boiling the “So Far” once the “Lo Five” had been pitched. I didn’t even rinse the Bruheat after transferring the “Lo Five,” so the “So Far” started heating up with some hops from the “Lo Five.”

What I expect as a difference between the two is that the “Lo Five” will be rather clean and crisp while the “So Far” will be a chewier and grainier beer with some fruit notes.

More specifically, I’d like the “Lo Five” to have a clearly delineated malt profile and a perceivable hoppiness, in both nose and flavour. Though I wasn’t very specific about trying to emulate it, I guess my inspiration for that one was Three Floyds’ Mild, as brewed for Legends of Notre Dame. That was one of my favourite beers, from one of my favourite breweries, served at one of my favourite places. I don’t think “Lo Five” will be anywhere near as tasty as that beer, but I was probably aiming for that kind of profile.

Tasting the “Lo Five” wort once it was cool, I thought it was decidedly bitter. Given the fact that this beer may finish rather low, it might be unbalanced. Unfermented wort is typically sweeter than the finished beer, so I’m assuming the perceived bitterness will only intensify. What’s somewhat sad is that a Mild can’t really age, so it’s not like I’ll be able to wait for the bitterness to smooth out.

Ah, well… We’ll see.

On the other hand, I seem to have overshot my OG. Not by much, and I probably had a reading error. Given how well the sparge went, it’d make sense that I overshot my efficiency (I usually get 75%–78%). But BeerTools Pro is giving me an efficiency of 92%, which is pretty much impossible on a homebrew scale without pulverizing the grain (and leaching lots of tannins). At any rate, if the actual OG is higher than expected, it might balance the beer a bit.
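
As a sanity check, brewhouse efficiency is just the measured gravity points divided by the grain bill’s theoretical maximum. Here’s a rough sketch; the extract potentials are generic published figures and the 1.040 reading is a hypothetical stand-in, not my actual measurement:

```python
def brewhouse_efficiency(og, volume_gal, grist):
    """Measured gravity points divided by the grist's theoretical maximum."""
    points = (og - 1.0) * 1000 * volume_gal
    max_points = sum(lb * ppg for lb, ppg in grist)
    return points / max_points

# Approximate "Lo Five" grist as (pounds, points/lb/gal) pairs; the
# potentials are typical catalogue values, not BeerTools Pro's own.
grist = [(4.4, 34),                # 6-row
         (0.33, 32),               # home-roasted "brown"
         (0.33, 35), (0.33, 35),   # Vienna, Munich
         (0.44, 30), (0.23, 33),   # Special B, Carastan
         (0.28, 33), (0.11, 28)]   # Caramel 80L, chocolate

print(f"{brewhouse_efficiency(1.032, 5.0, grist):.0%}")  # ~74%: the predicted OG
print(f"{brewhouse_efficiency(1.040, 5.0, grist):.0%}")  # ~92%: a hypothetically high reading
```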

Yes, clearly there’s a problem with my thermometer.

Just finished cleaning up (2:36, June 9, 2008). When I took the OG on the “So Far,” the wort was barely lukewarm yet the thermometer was indicating 120 °F. Using this temperature for hydrometer correction, the OG for the “So Far” would be exactly the same as that for the “Lo Five” (which seemed cooler). That would mean an efficiency of 83%, which is not impossible but kind of high. In fact, the volume in the primary seems to be more than 5 gallons, so the efficiency would go through the roof.
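
The temperature correction itself is straightforward. Below is a minimal sketch using one of the polynomial fits common in homebrew calculators, with a hypothetical raw reading plugged in since I didn’t note the exact figure:

```python
def correct_sg(measured_sg, sample_temp_f, calibration_f=60.0):
    """Correct a hydrometer reading for sample temperature (60 °F calibration)."""
    def water_density_factor(t):
        return (1.00130346 - 1.34722124e-4 * t
                + 2.04052596e-6 * t ** 2 - 2.32820948e-9 * t ** 3)
    return measured_sg * water_density_factor(sample_temp_f) / water_density_factor(calibration_f)

# A hypothetical reading of 1.021 with the sample at 120 °F corrects
# to roughly 1.032, i.e. about 11 points higher.
print(round(correct_sg(1.021, 120.0), 3))
```

If the sample was actually much cooler than 120 °F, the correction shrinks and so does the computed efficiency, which fits my suspicion about the thermometer.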

I did skip the Strisselspalt, but the “So Far” was run through the hops from the “Lo Five” at knockout. It was boiled for about an hour, instead of 90 minutes.


A Dream Filled with Hope

The thing about transition periods is that you can go from worry to hope. Frequently, this shift comes with easily recognizable signs.


Pricing Applications for OS X iPhone

AppleInsider has a rumour about Apple trying to get developers to charge for applications they release on the AppStore for OS X iPhone devices.

As is often the case, some reactions are more interesting than the article.

Wanted to add a comment but I’m getting tired of sites which use their own authentication (instead of OpenID), so I thought I’d use Diigo to comment on the discussion. I’m sure other comments are being added “as we speak,” but the conversation is veering off into trolling so I stopped.

As I read those comments, I kept thinking about DemiForce’s Trism and about one of my own posts about no-cost software.

Other random notes about the AppStore.

  • If the store is built appropriately, it can give a lot of exposure to developers who may then capitalize on their work.
  • I’m still hoping that there will eventually be a way to have customized AppStores for universities and other institutions.
  • The AppStore process does exclude FLOSS, at least in principle.
  • It’s quite likely that some developers for OS X iPhone will also develop for OS X Leopard.
  • Some OS X iPhone applications are likely meant to accompany a service or a desktop application, say with synchronization.

So, commenting on the AppleInsider discussion…

  • tags: toblog

    • Interesting thread as a whole. The issue hasn’t been discussed extensively, AFAICT. – post by enkerli
    • added nuisance for iPod touch owners
      • Really? As a would-be touch owner, I don’t see how app pricing would make a difference beyond the firmware pricing. – post by enkerli
    • Out of spite I would charge 1¢.
      • Testing micropayment? My guess is that Apple set a minimum. – post by enkerli
    • Free 7 day trial
      • Seems quite likely. Demiforce.com already announced a free demo (presumably w/o time limits). – post by enkerli
    • becoming only a cost center
      • Breaking even could be nice but Apple probably sees the AppStore as a selling point for its Touch devices. Also, developers on OSX iPhone can enhance the halo effect. – post by enkerli
    • Free apps imply "no" support.
      • Under certain conditions only. – post by enkerli
    • Apple promised the option of Free Apps.

      its part of my contract with them.

      • Expected but it’s good to see confirmation. – post by enkerli
    • I refuse to make costing apps at this point.
      • Thoughtfully put. – post by enkerli
    • There are TONS of FREE online games. We’re not talking about games with much complexity, but they’re still games.
      • Casual gaming makes a lot of sense on OSX iPhone. – post by enkerli
    • Another misleading article title. "Pushing" is not the same as "encouraging".
      • Agreed! The article doesn’t make it sound like the push is very aggressive and, even if it is, Apple can’t really coax developers into selling their apps if they don’t want to. – post by enkerli
    • clearly point to earnings as a metric
      • Not a bad point. But, hopefully, there are other ways to achieve this. – post by enkerli
    • Ahhhhh, the voice of reason. So refreshing
      • Agreed. But it’s not so unique. Just masked by some loud voices. – post by enkerli
    • you don’t need to charge a lot of money for the apps
      • Economies of scale should work well for some OSX iPhone apps. At the same time, there are other ways to make money than to charge for the app itself, especially in these days of online services. – post by enkerli
    • what about ad-supported apps?
      • Not a bad question. I actually hope there won’t be too many of those, but it’s likely that there will be some. It can be more subtle, using an app to lead people to a site. – post by enkerli
    • Spaz out much?
      • Snarky but respectful. Nice! – post by enkerli
    • It’s actually 30% of the REVENUE
      • Excellent point. – post by enkerli
    • in the long run, an App Store with lots of freeware will get more traffic
      • Interesting way to put it. OTOH, it’s probably not traffic that Apple cares most about, for the AppStore. – post by enkerli
    • iTunes gets you to the iTunes Store.
      TextWrangler is really BBEdit Lite. It can be considered to be a gateway drug for BBEdit.
      Skype is a front-end to get you to buy some paid services.
      Sketch-up has an expensive paid version.
      • These are useful examples of the "economy of free" because they show how diverse no-cost software can be in business model. Of course, there are many other models based on no-cost software. – post by enkerli
    • My apps are free because they will be Christianity related and I don’t believe that anyone should be charged to get a bible in the medium they want.
      • It’s hard not to respect the argument and I appreciate the honesty. Also, because the apps aren’t forced on anyone, the position seems quite open. – post by enkerli
    • Given the quality of your writing
      • Feeding the troll doesn’t help. – post by enkerli

Waiting for Other Touch Devices?

Though I’m interpreting Apple’s current back-to-school special to imply that we might not see radically new iPod touch models until September, I’m still hoping that there will be a variety of touch devices available in the not-so-distant future, whether or not Apple makes them.

Turns out, the rumour mill has some items related to my wish, including this one:

AppleInsider | Larger Apple multi-touch devices move beyond prototype stage

This could be excellent news for the device category as a whole and for Apple itself. As explained before, I’m especially enthusiastic about touch devices in educational contexts.

I’ve been lusting over an iPod touch since it was announced. I sincerely think that an iPod touch will significantly enhance my life. As strange as it may sound, especially given the fact that I’m no gadget freak, I think frequently about the iPod touch. Think Wayne, in Wayne’s World, going to a music store to try a guitar (and being denied the privilege of playing Stairway to Heaven). That’s almost me and the iPod touch. When I go to an Apple Store, I spend precious minutes with a touch.

Given my current pattern of computer use, the fact that I have no access to a laptop at this point, and the availability of WiFi connections at some interesting spots, I think an iPod touch will enable me to spend much less time in front of this desktop, spend much more time outside, and focus on my general well-being.

One important feature the touch has, which can have a significant effect on my life, is instant-on. My desktop still takes minutes to wake up from “Stand by.” Several times during the day, the main reason I wake my desktop is to make sure I haven’t received important email messages. (I don’t have push email.) For a number of reasons, what starts out as simple email-checking frequently ends up being a more elaborate browsing session. An iPod touch would greatly reduce the need for those extended sessions and let me “do other things with my life.”

Another reason a touch would be important in my life at this point is that I no longer have access to a working MP3 player. While I don’t technically need a portable media player to be happy, getting my first iPod just a few years ago was an important change in my life. I’ll still miss my late iRiver’s recording capabilities, but it’s now possible to get microphone input on the iPod touch. Eventually, the iPod touch could become a very attractive tool for fieldwork recordings. Or for podcasting. Given my audio orientation, a recording-capable iPod touch could be quite useful. Even more so than an iPod Classic with recording capabilities.

There are a number of other things which should make the iPod touch very useful in my life. A set of them have to do with expected features and applications. One is Omni Group’s intention to release their OmniFocus task management software through the iPhone SDK. As an enthusiastic user of OmniOutliner for most of the time I’ve spent on Mac OS X laptops, I can just imagine how useful OmniFocus could be on an iPod touch. Getting Things Done, the handheld version. It could help me streamline my whole workflow, the way OO used to do. In other words: OF on an iPod touch could be this fieldworker’s dream come true.

There are also applications to be released for Apple’s Touch devices which may be less “utilitarian” but still quite exciting. Including the Trism game. In terms of both “appropriate use of the platform” and pricing, Trism scores high on my list. I see it as an excellent example of what casual gaming can be like. One practical aspect of casual gaming, especially on such a flexible device as the iPod touch, is that it can greatly decrease stress levels by giving users “something to do while they wait.” I’ve had that experience with other handhelds. Whether it’s riding the bus or waiting for a computer to wake up from stand by, having something to do with your hands makes the situation just a tad bit more pleasant.

I’m also expecting some new features to eventually be released through software, including some advanced podcatching features like wireless synchronization of podcasts and, one can dream, a way to interact directly with podcast content. Despite having been an avid podcast listener for years, I think podcasts aren’t nearly “interactive” enough. Software on a touch device could solve this. But that part is wishful thinking. I tend to do a lot of wishlists. Sometimes, my daydreams become realities.

The cool thing is, it looks as though I’ll be able to get my own touch device in the near future. w00t! 😀

Even if Apple does release new Touch devices, the device I’m most likely to get is an iPod touch. Chances are that I might be able to get a used 8GB touch for a decent price. Especially if, as is expected for next Monday, Apple officially announces the iPhone for Canada (possibly with a very attractive data plan). As a friend was telling me, once Canadians are able to get their hands on an iPhone directly in Canada, there’ll likely be a number of used iPod touches for sale. With a larger supply of used iPod touches and a presumably lower demand for them, we can expect lower prices.

Another reason I might get an iPod touch is that a friend of mine has been talking about helping me with this purchase. Though I feel a bit awkward about accepting this kind of help, I’m very enthusiastic at the prospect.

Watch this space for more on my touch life. 😉


Edupunk Manifesto?

Noticed, just yesterday, that a number of unusual suspects of some online educational circles were using Edupunk as a way to identify a major movement toward openness in educational material. This video doesn’t “say it all” but it can help.

Changing Expectations

Like Lindsea, I wish more diverse voices were heard. Bakhtin FTW!

Unlike Lindsea, I don’t see it as mainly a generational thing or a “teacher vs. student” issue. In fact, I’m hoping that the social movements labelled by the term “edupunk” will move beyond those issues into a broader phenomenon.

The age/generation component is still interesting, to a Post-Buster like me. Baby Boomers are still the primary target of Punk. Lindsea even talks about Boomer classics:

Don’t you teachers remember when you were young? Hippies? Protesters? Implementers of change? Controllers of the cool, anti-establishment, nonconformist underground culture?

Baby Busters (the earlier part of the so-called “GenX”) have long been anti-Boomers. Not that everyone born during those years readily identifies with that “Generation.” But in terms of identity negotiation, the “Us/Them” distinction often follows a concept of generational divide.

But I sincerely hope we can go way beyond age and generation. After all, there are learners of all ages, some of them older than their “teachers” (formally named or not).

Call me a teacher, if you really must. But, please, could we listen to diverse voices without labelling their sources?


Culture and Health: Contact and Coverage

It’s late in the game, as the story has already made the rounds, but I guess I was under a rock.

FUNAI, a Brazilian foundation which aims to help indigenous groups, has released pictures of a relatively isolated group in the Amazon region. Apparently, the purpose of those pictures was to show how healthy these people seemed to be, contrary to folk beliefs about indigenous groups. These folk beliefs are widespread in post-industrial societies and seem to relate to basic ethnocentrism.

Some major media outlets released those same pictures with captions and other comments about allegedly “uncontacted tribes.” Through the “telephone game,” the same images became part of an awkwardly anachronistic coverage of cultural diversity, many comments being made from a resolutely neo-evolutionist perspective. A whole debacle ensued. Several anthropologists have been contacted to comment on the situation.

So far, the most thoughtful piece of writing I’ve seen about the whole situation is this one:

‘Uncontacted Indians?!’ — contact an anthropologist! « Culture Matters

Wouldn’t it be wonderful if media debacles such as this one could be avoided? One would hope that a good dose of critical thinking and some thoughtful blogging might help.


Educational Touch: Back-to-School

Apple just launched a new back-to-school special. Like last year’s program, it’s a Mac+iPod special for a mail-in rebate. But unlike last year’s special, it can be used to get an iPod touch.

AppleInsider | Apple’s free 8GB iPod touch Back-to-School Promo now official

One reason I find this interesting is that I think Touch devices make a lot of sense in educational contexts, especially if educational institutions take advantage of them. And it would be even more interesting if, as I keep dreaming, the device category for Touch devices were to expand.

But this Back-to-School special seems to imply that Apple will not release a new iPod touch model when it unveils the new 3G-capable iPhone next Monday. Which is not to say that they won’t release anything else besides the “iPhone 3G” (with rumoured features such as videoconferencing and GPS). But it does make a complete revamp of the Touch line less likely.

In fact, if last year’s pattern is to be repeated (like it seems to be, with the iPhone), it’s possible that Apple will refresh the iPod line (including the iPod touch) in September. In other words, just at the end of the back-to-school special…

Of course, the touch devices to use in educational contexts don’t have to be manufactured by Apple. Given the OLPC project’s official sanction of competing devices to its XOXO (a device I’ve been dreaming about), it’d be fun to see Asus or Toshiba release some kind of touch device before long. And, maybe, the Open Handset Alliance will release something in the meantime. The recent demo was intriguing.

Guess we’ll just have to wait and see.


Bookish Reference

Thinking about reference books, these days.

Are models inspired by reference books (encyclopedias, dictionaries, phonebooks, atlases…) still relevant in the context of almost-ubiquitous Internet access?

I don’t have an answer but questions such as these send me on streams of thought. I like thought streaming.

One stream of thought relates to a discussion I’ve had with fellow Yulblogger Martin Lessard about “trust in sources.” IIRC, Lessard was talking more specifically about individuals but I tend to react the same way about “source credibility” whether the source is a single human being, an institution, or a piece of writing. Typically, my reaction is a knee-jerk one: “No information is to be trusted, regardless of the source. Critical thinking and the scientific method both imply that we should apply the same rigorous analysis to any piece of information, regardless of the alleged source.” But this reasoned stance of mine is confronted with the reality of people (including myself and other vocal proponents of critical thinking) acting, at least occasionally, as if we did “trust” sources differentially.

I still think that this trusting attitude toward some sources needs to be challenged in contexts which give a lot of significance to information validity. Conversely, maybe there’s value in trust, because information doesn’t always have to be that valid and because it’s often more expedient to trust some sources than to “apply the same rigorous analysis to information coming from any source.”

I also think that there are different forms of trust. From a strong version which relates to faith, all the way to a weak version, tantamount to suspension of disbelief. It’s not just a question of degree as there are different origins for source-trust, from positive prior experiences with a given source to the hierarchical dimensions of social status.

A basic point, here, might be that “trust in source” is contextual, nuanced, changing, constructed… relative.

Second stream of thought: popular reference books. I’m still afraid of groupthink, but there’s something deep about some well-known references.

Just learnt, through the most recent issue of Peter Suber’s SPARC Open Access newsletter, some news about the French reference-book publisher Larousse (now part of Hachette, which is owned by Lagardère) making a move toward Open Access. Through their Larousse.fr site, Larousse is not only making some of its content available for open access but also adding user-contributed content. As an Open Access enthusiast, I do find the OA angle interesting. But the user-content angle leads me in another direction, having to do with reference books.

What may not be well-known outside of Francophone contexts is that Larousse is pretty much a “household name” in many French-speaking homes. Larousse dictionaries have been commonly used in schools and they have sold quite well through much of the publisher’s history. Not to mention that some specialized reference books published by Larousse are quite unique.

To make this more personal: I pretty much grew up on Larousse dictionaries. In my mind, Larousse dictionaries were typically less “stuffy” and more encyclopedic in approach than other well-known French dictionaries. Not only did Larousse’s flagship Petit Larousse illustré contain numerous images, but some aspects of its supplementary content, including Latin expressions and proverbs, were very useful and convenient. At the same time, Larousse’s fairly extensive line of reference books could retain some of the prestige afforded its stuffier and less encyclopedic counterparts in the French reference-book market. Perhaps because I never enjoyed stuffiness, I pretty much associated my view of erudition with Larousse dictionaries. Through a significant portion of my childhood, I spent countless hours reading disparate pieces of Larousse dictionaries. Just for fun.

So, for me, freely accessing and potentially contributing to Larousse feels strange. Can’t help but think of our battered household copies of Petit Larousse illustré. It’s a bit as if a comics enthusiast were not only given access to a set of Marvel or DC comics but could also go on the drawing board. I’ve never been “into” comics but I could recognize my childhood self as a dictionary nerd.

There’s a clear connection in my mind between my Larousse-enhanced childhood memories and my attitude toward using Wikipedia. Sure, Petit Larousse was edited in a “closed” environment, by a committee. But there was a sense of discovery with Petit Larousse that I later found with CD-ROM and online encyclopedias. I used a few of these, over the years, and I eventually stuck with Wikipedia for much of this encyclopedic fun. Like probably many others, I’ve spent some pleasant hours browsing through Wikipedia, creating in my head a more complex picture of the world.

Which is not to say that I perceive Larousse as creating a new Wikipedia. Describing the Larousse.fr move toward open access and user-contributed content, the Independent mostly compares Larousse with Wikipedia. In fact, a Larousse representative seems to have made some specific statements about trying to compete with Wikipedia. Yet, the new Larousse.fr site is significantly different from Wikipedia.

As Suber says, Larousse’s attempt is closer to Google’s knols than to Wikipedia. In contrast with the Wikipedia model but as in Google’s knol model, content contributed by users on the Larousse site preserves an explicit sense of authorship. According to the demo video for Larousse.fr, some specific features have been implemented on the site to help users gather around specific topics. Something similar has happened informally with some Wikipedians, but the Larousse site makes these features rather obvious and, as some would say, “user-friendly.” After all, while many people do contribute to Wikipedia, some groups of editors function more like tight-knit communities or aficionados than like amorphous groups of casual users. One interesting detail about the Larousse model is that user-contributed and Larousse contents run in parallel to one another. There are bridges in terms of related articles, but the distinction seems clear. Despite my tendency to wait for prestige structures to “just collapse, already,” I happen to think this model is sensible in the context of well-known reference books. Larousse is “reliable, dependable, trusty.” Like comfort food. Or like any number of items sold in commercials with an old-time feel.

So, “Wikipedia the model” is quite different from the Larousse model but both Wikipedia and Petit Larousse can be used in similar ways.

Another stream of thought, here, revolves around the venerable institution known as Encyclopædia Britannica. Britannica recently made it possible for bloggers (and other people publishing textual content online) to apply for an account giving them access to the complete online content of the encyclopedia. With this access comes the possibility to make specific articles available to our readers via simple linking, in a move reminiscent of the Financial Times model.

Since I received my “blogger accreditation to Britannica content,” I did browse some articles on Britannica.com. I receive Britannica’s “On This Day” newsletter of historical events in my inbox daily and it did lead me to some intriguing entries. I did “happen” on some interesting content and I even used Britannica links on my main blog as well as in some forum posts for a course I teach online.

But, I must say, Britannica.com is just “not doing it for me.”

For one thing, the site is cluttered and cumbersome. Content is displayed in small chunks, extra content is almost dominant, links to related items are often confusing and, more sadly, many articles just don’t have enough content to make visits satisfying or worthwhile. Not to mention that it is quite difficult to link to a specific part of the content as the site doesn’t use page anchors in a standard way.

To be honest, I was enthusiastic when I first read about Britannica.com’s blogger access. Perhaps because of the (small) thrill of getting “privileged” access to protected content, I thought I might find the site useful. But time and again, I had to resort to Wikipedia. Wikipedia, like an old Larousse dictionary, is dependable. Besides, I trust my sense of judgement not to be too affected by inaccurate or invalid information.

One aspect of my disappointment with Britannica relates to the fact that, when I write things online, I use links as a way to give readers more information, to help them exercise critical thinking, to get them thinking about some concepts and issues, and/or to play with some potential ambiguity. In all of those cases, I want to link to a resource which is straightforward, easy to access, easy to share, clear, and “open toward the rest of the world.”

Britannica is not it. Despite all its “credibility” and perceived prestige, Britannica.com isn’t providing me with the kind of service I’m looking for. I don’t need a reference book in the traditional sense. I need something to give to other people.

After waxing nostalgic about Larousse and ranting about Britannica, I realize how funny some of this may seem, from the outside. In fact, given the structure of the Larousse.fr site, I already think that I won’t find it much more useful than Britannica for my needs and I’ll surely resort to Wikipedia, yet again.

But, at least, it’s all given me the opportunity to stream some thoughts about reference books. Yes, I’m enough of a knowledge geek to enjoy it.


Actively Reading Open Access

Open Access

I’ve been enthusiastic about OA (open access to academic texts) for a number of years. I don’t tend to be extremely active in the OA milieu but I do use every opportunity I can to talk about OA, both in formal academic contexts and in more casual and informal conversation.

My own views about Open Access are that it should be plain common sense, for both scholars and “the public.” Not that OA is an ultimate principle, but it seems so obvious to me that OA can be beneficial in a large range of contexts. In fact, I tend to conceive of academia in terms of Open Access. In my mind, a concept related to OA runs at the very core of the academic enterprise and helps distinguish it from other types of endeavours. Simply put, academia is the type of “knowledge work” which is oriented toward openness in access and use.

Historically, this connection between academic work and openness has allegedly been the source of the so-called “Open Source movement” with all its consequences in computing, the Internet, and geek culture.

Quite frequently, OA advocates focus (at least in public) on specific issues related to Open Access. The way one OA advocate put it made me think this narrow focus might be a precaution, used by OA advocates and activists to avoid scaring off potential OA enthusiasts. Since I never involved myself as a “fighter” in OA-related discussions, I rarely found a need for such precautions.

I now see signs that the Open Access movement is finally strong enough that some of these precautions might not even be needed. Not that OA advocates “throw caution to the wind.” But I really sense that it’s now possible to openly discuss broader issues related to Open Access because “critical mass has been achieved.”

Suber’s Newsletter

Case in point, for this sense of a “wind of change,” the latest issue of Peter Suber’s SPARC Open Access Newsletter.

Suber’s newsletter is frequently a useful source of information about Open Access and I often get inspired by it. But because my involvement in the OA movement is rather limited, I tend to skim those newsletter issues, more than I really read them. I kind of feel bad about this but “we all need to choose our battles,” in terms of information management.

But today’s issue “caught my eye.” Actually, it stimulated a lot of thoughts in me. It provided me with (tasty) intellectual nourishment. Simply put: it made me happy.

It’s all because Suber elaborated an argument about Open Access that I find particularly compelling: the epistemological dimension of Open Access. Because of my perspective, I respond much more favourably to this epistemological argument than to most practical or ethical arguments. Maybe that’s just me. But it still works.

So I read Suber’s newsletter with much more attention than usual. I savoured it. And I used this new method of actively reading online texts, based on the Diigo.com social bookmarking service.

Active Reading

What follows is a slightly edited version of my Diigo annotations on Suber’s text.

Peter Suber, SPARC Open Access Newsletter, 6/2/08

Annotated

June 2008 issue of Peter Suber’s newsletter on open access to academic texts (“Open Access,” or “OA”).

tags: toblog, Suber, Open Access, academia, publishing, wisdom of crowds, crowdsourcing, critical thinking

General comments

  • Suber’s newsletters are always on the lengthy side of things but this one seems especially long. I see this as a good sign.
  • For several reasons, I find this issue of Suber’s newsletter particularly stimulating. Part of my personal anthology of literature about Open Access.

Quote-based annotations and highlights.

Items in italics are Suber’s, those in roman are my annotations.

  • Open access and the self-correction of knowledge

    • This might be one of my favourite arguments for OA. Yes, it’s close to ESR’s description of the “eyeball” principle. But it works especially well for academia.
  • Nor is it very subtle or complicated
    • Agreed. So, why is it so rarely discussed or grokked?
  • John Stuart Mill in 1859
    • Nice way to tie the argument to something which may provoke thought among scholars in the Humanities and Social Sciences.
  • OA facilitates the testing and validation of knowledge claims
    • Neat, clean, simple, straightforward… convincing. Framing it as hypothesis works well, in context.
  • science is self-correcting
    • Almost like “talking to scientists’ emotions.” In an efficient way.
  • reliability of inquiry
    • Almost lingo-like but resonates well with academic terminology.
  • Science is special because it’s self-correcting.
    • Don’t we all wish this were more widely understood?
  • scientists eventually correct the errors of other scientists
    • There’s an important social concept, here. Related to humility as a function of human interaction.
  • persuade their colleagues
  • new professional consensus
  • benefit from the perspectives of others
    • Tying humility, intellectual honesty, critical thinking, ego-lessness, and even relativist ways of knowing.
  • freedom of expression is essential to truth-seeking
  • opening discussion as widely as possible
    • Perhaps my favourite argument ever for not only OA but for changes in academia generally.
  • when the human mind is capable of receiving it
    • Possible tie-in with the social level of cognition. Or the usual “shoulders of giants.”
  • public scrutiny
    • Emphasis on “public”!
  • protect the freedom of expression
    • The problem I have with the way this concept is applied is that people rely on pre-established institutions for this protection and seem to assume that, if the institution is maintained, so is the protection. Dangerous!
  • If the only people free to speak their minds are people like the author, or people with a shared belief in current orthodoxy, then we’d rarely hear from people in a position to recognize deficiencies in need of correction.
    • This, I associate with “groupthink” in the “highest spheres” (sphere height being given through social negotiation of prestige).
  • But we do have to make our claims available to everyone who might care to read and comment on them.
    • Can’t help but think that *some* of those who oppose or forget this mainly fear the social risks associated with our positions being questioned or invalidated.
  • For the purposes of scientific progress, a society in which access to research is limited, because it’s written in Latin, because authors are secretive, or because access requires travel or wealth, is like a society in which freedom of expression is limited.
  • scientists who are free to speak their minds but lack access to the literature have no advantage over scientists without the freedom to speak their minds
  • many-eyeballs theory
  • many voices from many perspectives
  • exactly what scientists must do to inch asymptotically toward certainty
  • devil’s advocate
  • enlisting as much help
  • validate knowledge claims in public
  • OA works best of all
    • My guess is that those who want to argue against this hypothesis are reacting in a knee-jerk fashion, perhaps based on personal motives. Nothing inherently wrong there, but it remains as a potential bias.
  • longevity in a free society
    • Interesting way to put it.
  • delay
  • the friction in a non-OA system
    • The academic equivalent of cute.
  • For scientific self-correction, OA is lubricant, not a precondition.
    • Catalyst?
  • much of the scientific progress in the 16th and 17th centuries was due to the spread of print itself and the wider access it allowed for new results
    • Neat way to frame it.
  • Limits on access (like limits on liberty) are not deal-breakers, just friction in the system
    • “See? We’re not opposed to you. We just think there’s a more efficient way to do things.”
  • OA can affect knowledge itself, or the process by which knowledge claims become knowledge
  • pragmatic arguments
    • Pretty convincing ones.
  • The Millian argument for OA is not the “wisdom of crowds”
    • Not exclusively, but it does integrate the diversity of viewpoints made obvious through crowdsourcing.
  • without attempting to synthesize them
    • If “wisdom of crowds” really is about synthesis, then it’s nothing more than groupthink.
  • peer review and the kind of empirical content that underlies what Karl Popper called falsifiability
    • I personally hope that a conversation about these will occur soon. What OA makes possible, in a way, is to avoid the dangers which come from the social dimension of “peerness.” This was addressed earlier, and I see a clear connection with “avoiding groupthink.” But the assumption that peer-review, in its current form, has reached some ultimate and eternal value as a validation system can be questioned in the context of OA.
  • watchdogs
  • Such online watchdogs were among those who first identified problems with images and other data in a cloning paper published in Science by Woo Suk Hwang, a South Korean researcher. The research was eventually found to be fraudulent, and the journal retracted the paper….
    • Not only is it fun as a “success story” (CHE’s journalistic bent), but it may help some people understand that there is satisfaction to be found in fact-checking. In fact, verification can be self-rewarding, in an appropriate context. Seems obvious enough to many academics but it sounds counterintuitive to those who think of academia as waged labour.

Round-up

Really impressive round-up of recent news related to Open Access. What I tend to call a “linkfest.”

What follows is my personal selection, based on diverse interests.


Crowdsourcing in Africa: Cellphones and Web Innovation

Not only do I like the concept of crowdsourcing in crisis situations but, at first blush, it seems like a culturally appropriate approach to the issue of transmitting information about such a crisis.

Now, if we could prevent such crisis situations…

Maybe we can prevent some of them through thoughtfulness and cultural awareness.

Ushahidi.com Blog » The NetSquared Slideshow Loop


Backpack Picnic: Farmer Dinner