Category Archives: Web 2.0

I don’t wanna seem crude

So there I was in W.H. Smith’s, queuing up with my Radio Times, when… actually I wasn’t buying anything, I was hanging around the magazine racks waiting for my wife and daughter to get finished in Build-A-Bear; I just thought that would take too long to explain. In any case it’s only a bit of scene-setting, I might as well have been getting the Radio Times. Shall we start this again?

I was in W.H. Smith’s – that much is true – when my attention was snagged by a display stand opposite the tills. There, where you might expect to see something by Bill Bryson or an Ordnance Survey road atlas or a new variety of chocolate orange, was this:

Just Kate Moss with no clothes on. Move along, nothing to see here.

Whoa. Tracks, stopped in.

Now, I’m a man of the world; the idea of a magazine printing pictures of Kate Moss naked doesn’t shock me. I have long been aware of the existence of pictures of Kate Moss in the nude; I know that more than one photographer has been granted the opportunity to take pictures of Kate Moss starkers, and more than one of the resulting pictures of Kate Moss in the buff has escaped onto that Internet. I’m quite relaxed about the idea of pictures of Kate Moss letting it all hang out; pictures of a bare Kate Moss are fine by me.

(And people pay consultants to get hits on their Web pages! Piece of cake.)

Kate Moss nue, Kate Moss nackt or Kate Moss desnuda (see what I did there?), it doesn’t bother me. Or indeed surprise me – the model in question has been notably relaxed about doing the whole nude bit. But it was a bit of a jolt to see that image displayed in my face, or rather around waist height. For a moment it took me back thirty-odd years, when I used to get the train home from school every afternoon and hang around the magazine stall furtively glancing at the covers of Der Spiegel and Stern. For some reason German news magazines in the 1970s quite often put topless models on the front cover, which was more than English top-shelf mags did; once or twice Stern even featured a flash of bush, which left the teenage me simultaneously aroused and genuinely shocked (on the cover! can they even do that?). Transgressive stuff there from Gruner+Jahr. (NB “shocking” and “subversive” – not the same thing.) My German isn’t great, but de.wikipedia seems to be saying that a group of women sued G+J in 1978 over the sexist objectification of women in Stern, and frankly I’m not at all surprised. The next time I saw anything like that I was in Schiphol airport, having a drink at a café completely surrounded by hard-core pr0n and thanking the Lord I didn’t have any children with me (“Daddy, what’s ‘hot wet pink action’?”).

It was a striking display, anyway – and a cursory examination confirmed what the visual grammar of that cover rather strongly suggests, i.e. that there are pictures without the masking tape inside. (And I do mean cursory – there are times and places for studying pictures of naked women, and standing opposite the till in W.H. Smith’s while waiting for one’s wife and daughter is neither.) A more leisured investigation later confirmed that Ms Moss is one of eight models featured in the issue; that Love, although it’s essentially a fashion magazine, prints rather a lot of elegant monochrome nudity; and that it’s not the only one – there’s a howlingly expensive mag called Purple which seems to specialise in naked female celebrities, while still ostensibly appealing to well-off women who like looking at posh clothes rather than well-off men who like looking at bare ladies. (I guess it’s possible that Purple‘s core audience is well-off women who like looking at bare ladies and posh clothes, but that seems too small a niche.)

There’s been a two-way traffic between fashion photography and the classier end of soft pr0nography for some time, with several people working both sides of the street; they both involve posing impossibly elegant women to look attractive, after all. Classy soft pr0n as fashion photography seems new, and rather odd – although it’s a trend that may have been brewing for a while: take this (NSFW) from a 2008 issue of W magazine, originally captioned “Christopher Kane’s cashmere sweater with polyester paillettes and glass beads”. Hands up anyone who thinks that’s a picture of Christopher Kane’s sweater.

So what’s going on? I considered the possibility that (to rework the saying about music) “if it looks too rude, you’re too old”. Back in the 1970s, when I wasn’t gawping at Stern from a safe distance, I did occasionally buy my very own copy of Mayfair or something – sometimes accompanying it with a copy of New Society or Omni, research purposes you understand…. Back then the combination of (a) a nice-looking woman and (b) no clothes was all a young lad would ask for from his top-shelf mag – which was just as well, as that was all he was going to get. But that’s a long time ago; maybe Kids These Days demand action sequences and extreme closeups, and anything short of that just doesn’t qualify as pr0n. Conversely, maybe nudity’s a tired old Anglo-Saxon taboo, and we’re all relaxed and European now. I don’t think that’s it, though – the reaction to those photos has been far from ho-hum (NSFW). I guess it’s partly a case of “pushing the boundaries” (yawn), getting attention by doing something slightly more outrageous than the last time – and what Love did the last time was a nude Beth Ditto photoshoot, so you can see the logic of going for the multiple-supermodel approach. In the case of American magazines like W and Interview, there may also be a bit of a transatlantic cultural cringe (directed our way for once), with the perception that the Europeans are so cool about nudity and Americans need to stop being so prudish – and massive over-compensation as a result. (That comparison is valid to some extent, but it’s pretty hypocritical either way round. I don’t think American men feel any differently than French or German men about looking at naked women – they all like doing it and think they have a fundamental right to go on doing it. It’s just that one way of putting naked women on display gets labelled as relaxed (or exhibitionistic), while another gets labelled moral (or uptight).)

I think there’s also something going on about the status of professional photographers, in this age of Internet-enabled mass amateurism, and the status of printed magazines. Which is, after all, something of vital interest to a shop like W.H. Smith’s: anything that makes printed magazines seem a bit less dispensable is good news for a printed magazine shop. (I initially wrote ‘physical magazine’, but if you write ‘physical magazine’ over and over again it starts to get distracting. Whatever did happen to Health and Efficiency?)

I think what caught my eye at the weekend was somebody’s USP. (No, not Kate Moss’s. Settle down.) Sure, you can take pictures of what you want when you want, and sure, you can download pictures of more or less anything you can imagine, but have you got a picture of Kate Moss, dressed in nothing but a pair of high heels, artistically lit and printed on large-format glossy paper? You haven’t? Well, isn’t this your lucky day – look what we’ve got here. Right here, just by the checkout.

(Title courtesy of Stuart, cutting to the chase in his inimitable way.

I saw a lady and she was naked!
I saw a lady, she had no clothes on!

Great song; the S/M imagery is particularly appropriate, bringing out how compelling and overpowering this kind of experience can feel (“Why she want to pick on me?”). It’s a hard life being a man, you know…)


But you don’t know me

I don’t know Tilda Swinton. At all.

There are, of course, many people I don’t know; the list could be extended more or less indefinitely, potentially forming the basis for a rather unchallenging game (“Yeah? Well, I don’t know Charles Kennedy, Jason Orange or Hufty from the Word…”). The point about Tilda Swinton in particular is that, if you stopped me in the street and asked me if I knew her, I’ve got a horrible feeling I’d say Yes. (At least, I used to… Well, when I say ‘know’, I met… actually no, I never actually met… sorry, what was the question?)

Obviously, the image of anyone you’ve seen a lot on the screen can get painted on the back of your mind, to the point where they seem as familiar as a friend or neighbour (“In the street people come up to Rita/It’s Barbara Knox really but they’re still glad to meet her” – Kevin Seisay). I suppose something similar’s going on here, assisted in this case by the fact that I was at the same university as Tilda Swinton for at least one year; I even saw her in a college theatre production once, playing opposite a friend of a friend of mine. (I think. It may have been someone else.)

I’ve never even had any contact with Tilda Swinton, if it comes to that. I did once try to get in touch with her, for a series of brief interviews we were running in Red Pepper at the time. A friend gave me the number of a friend, who she thought had known her and might be able to put me in touch. I duly phoned the friend’s friend, who was a bit taken aback and suggested that if I wanted to speak to Tilda Swinton I should probably go through Tilda Swinton’s management. Nothing ever came of it.

In short, whatever fantasies I may half-consciously harbour, the real world is unanimous on this one: I don’t know Tilda Swinton, at all. I’ve got a friend who’s got a friend who may once have known her, and I had a friend at college who had a friend who may once have acted with her, but none of that adds up to anything.

Or it didn’t, until LinkedIn.

LinkedIn is a social networking site for people who want to make their social network work; it’s designed to enable members to exploit “the professional relationships you already have”. You join LinkedIn by writing a ‘profile’ (a c.v., more or less). You then ‘build your network’ by exchanging emails with existing members of LinkedIn who you already know; the software helpfully provides lists of LinkedIn members who are, or were, at your workplace, former workplace or university. When your emailed invitation has been accepted, the user you invited becomes one of your ‘connections’, while you become one of theirs. Ultimately you end up with a network “consist[ing] of your connections, your connections’ connections, and the people they know, linking you to thousands of qualified professionals”. ‘Thousands’ is no exaggeration: after a month’s membership I’ve got 41 ‘trusted friends and colleagues’, and many LinkedIn users have five or ten times as many. It adds up, or rather multiplies out: if you count “[my] connections’ connections, and the people they know”, I’m connected to over 200,000 people. Woohoo.
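(For what it’s worth, the arithmetic isn’t mysterious. Here’s a back-of-the-envelope sketch in Python – the average of seventy connections per person is an invented assumption, and real networks overlap heavily, so treat the result as an upper bound rather than a head-count.)

```python
# Back-of-the-envelope estimate of LinkedIn-style "network reach": my
# connections, their connections, and the people *they* know. The average
# figure is an invented assumption, and because real networks overlap the
# result is an upper bound, not a count of distinct people.

def network_reach(first_degree, avg_connections, degrees=3):
    """Add up the (non-deduplicated) number of people within `degrees` hops."""
    total, level = 0, first_degree
    for _ in range(degrees):
        total += level
        level *= avg_connections
    return total

# 41 direct connections (the figure quoted above), assuming ~70 connections each.
print(network_reach(41, 70))  # 41 + 2,870 + 200,900 = 203,811
```

Forty-one direct connections and an average of around seventy each is all it takes to clear 200,000 by the third level – which is presumably roughly how the LinkedIn figure is arrived at.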

There are two main ways to make money out of social software – adding advertising or charging a fee for a premium service – and I’m generally in favour of the latter. This is the route LinkedIn have chosen. Annoyingly, the result in this case is not simply that fee-paying users benefit but that free riders are penalised. The profiles of users outside your network are only shown in full if you’ve got a paid-for account, which can be frustrating. Worse, the highest echelons of power-networking users can opt out of receiving common-or-garden email invitations, so that they can only be contacted using the network’s ‘InMail’ facility – which is, of course, only available on paid-for accounts. There’s being linked in, and then there’s being linked in. I suppose this says something about the nature of the service they’re providing: a professional social network is one with lots of people excluded from it.

The bigger question is what LinkedIn actually provides (apart from the warm glow of knowing that somebody else has been excluded). I wrote last year that tagging, for me, is more an elaborate way of building a mind-map than anything to do with bookmarking pages and finding them again; I’m interested to see that Philipp has reached a similar conclusion (“Let’s put it straight: Using tags to find my bookmarks later just doesn’t work. I give up.”) Similarly, I suspect that one of the main benefits of LinkedIn – at least for us non-power-networkers – is the capacity it gives you to contemplate the scale and plenitude of your own network: all those people I know, sort of! I mean, I know someone who knows them, or else there’s a friend of a friend who knows them… So I sort of know them, really, don’t I, just a bit?

But Tilda Swinton’s not on LinkedIn. So I don’t know her at all.

Wrapped in paper (2)

More about blogging from iSeries NEWS UK (or System i News UK as it now is), this time from April this year. (Reverse chronological order?)

SINCE BLOGGING exploded onto the national consciousness about a year ago, around the time that I first wrote about it, the phenomenon has grown exponentially. It is now estimated that, out of any given class of fifteen-year-olds, half have a MySpace account, a third have a personal blog and one in ten are using Facebook, while the other two haven’t been online since they got the ASBO. But what are the perils and pitfalls of this new medium? Can we safely entrust our deepest personal secrets to the Web, blithely trusting in the good intentions of everyone who reads our uncensored outpourings? Or not?

Here are some tips for would-be voyagers in the blogosphere. Careful now.

Q: I’m writing a blog. Should I be worried?

A: Very probably. Let’s face it, writing about whatever comes into your head for the benefit of a few dozen readers is no kind of occupation for an adult – not like being a columnist, for example! Perhaps you should get out more. Unless you’re one of those fifteen-year-olds, in which case you probably get out quite enough. Isn’t there some homework you should be doing?

Q: No, I mean, should I be worried about getting sacked?

A: There have been a couple of high-profile cases recently of bloggers being sacked or suspended, on the general grounds that holding a responsible position in society is incompatible with writing about whatever comes into your head for the benefit of a few dozen readers – particularly if you’re doing it in work time. But let’s keep it in proportion. Before blogging, it was not unknown for employees occasionally to use the Web for personal purposes at work, particularly when Big Brother was on. Before the Web, work computer facilities could be used for employees’ personal ends just as easily, if not quite so entertainingly. Even before PCs, employees sometimes used work facilities for their own purposes, generally by having long telephone conversations with friends, lovers or relatives, often with little or no work content. Where this was not possible, employees often had workplace affairs. Blogging is just one form of workplace timewasting, and by no means the most prevalent (or the most messy).

Q: Good heavens! Can people really be so irresponsible?

A: Yes, I’m afraid so. (You are one of those fifteen-year-olds, aren’t you?)

Q: Any tips for safe blogging?

A: Think about who’s going to be reading your blog. Once it’s up there on the Web, anyone at all could read it – and it’ll stay there for years to come! On the other hand, in practice hardly anyone will read your blog, and most of those who do won’t look beyond the front page, so it’s probably not worth getting too worked up about. But do think about first impressions, and about the effect you’re having on casual visitors, and about printouts and employment tribunals. Don’t call your blog “Notes from a wage slave” or “My boss is a crook”, even if the title accurately describes its content.

Q: Shouldn’t employers actually embrace blogging, along with other forms of social networking software such as tagging, podcasts, vodcasts, wikis and mashups?

A: OK, you’ve had your fun. I’ll answer this one question, but after that I’m going to insist on talking to a grown-up. The answer is, no, they shouldn’t. The factor you’re overlooking here is that blogs are only partly to do with social networking. What they’re very largely to do with is writing about whatever comes into your head for the benefit of a few dozen readers. Which is fine if you’ve got a workforce consisting of egotistical narcissists who only want to hear the sound of their own voice and don’t understand the concept of dialogue.

Q: Many bloggers have gone on to land book contracts and TV appearances.

A: Wait a minute, I hadn’t finished. Encouraging workplace blogging is fine if your employees are all egotistical narcissists, but – let me stress this – not otherwise. What were you saying?

Q: Many bloggers have gone on to land book contracts and TV appearances. Will my blog change my life?

A: Call it “My boss is a crook” and you’ll soon find out.

Wrapped in paper (1)

A propos of not very much, here’s a magazine column about blogging. Regular readers of iSeries NEWS UK may recognise it, as it appeared in that estimable magazine last year.

BLOGGING – it’s the new thing! Everyone’s blogging these days – at least, everyone except you! But what is blogging all about? What are the do’s and don’ts of this new medium – what does it take to be a good citizen of the blogosphere? And that MySpace thing that the kids are doing – what’s that all about? Let’s find out.

Q: Reverse chronological order?

A: That’s right – you’ll see the latest posts at the top and earlier ones lower down. It’s easy to get used to – just imagine that you’re living life backwards, perhaps as the result of exposure to a top-secret military experiment that warped the very fabric of reality itself. Or that you’re reading one of those chain emails where people add their replies at the top.

Q: What about developing a coherent argument?

A: Many blogs have a continuing theme or an argument to which they frequently return. Bloggers whose writing has a particularly clear focus are sometimes referred to as ‘subject experts’, and sometimes as ‘nutters’. You may prefer to avoid being regarded as a nutter; in this case, your best strategy is to have opinions which people agree with. Otherwise, building an extended argument on a blog is no different from doing it in any other situation: cross-examining defence witnesses in a fraud trial, say, or ascertaining whether that bloke in the taxi queue did in fact want some. The only difference with blogging is that you write it all down – that, and the fact that what you write appears in reverse chronological order.

Q: But what would I write about?

A: Whatever you like – the sky is quite literally your oyster. To get some ideas, try browsing some IT blogs. The tech blogosphere is a happy hunting ground for lovers of rare, obscure and historic technology – from the LEO to the One Per Desk, from the Osborne to the Sinclair QL… The iSeries hasn’t been neglected, either – at last count there are as many as two dedicated iSeries blogs, which sometimes feature code! But it’s up to you: you can write about whatever crosses your mind, and goodness knows most people do.

Q: So who writes this stuff?

A: According to popular stereotypes, the typical blogger is a twenty-something American Unix enthusiast who lives with his parents and compensates for his lack of a social life by hunching over a keyboard for hour after lonely hour, conducting tediously pointless contests of geek one-upmanship and exchanging incomprehensibly elaborate in-jokes, pausing only for a swig of Mountain Dew or a bite of cold pizza. This stereotype is far removed from reality – Mountain Dew’s more of a skater thing, apart from anything else. In reality, the range of bloggers is as broad as the range of blogs – and that’s pretty broad. There are blogs out there devoted to every topic under the sun – computing, cult films, Dungeons and Dragons, beer, you name it! It is believed that there are also blogs written by women, although the subject matter of these has yet to be ascertained. That’s the great thing about blogging: anyone can do it. You could be a blogger, if you put your mind to it.

Q: OK, so what is blogging?

A: Blogging is the activity of keeping a blog. A blog is a personal Website, updated regularly by the user; you can think of it as a kind of online journal or commonplace book or advertisement for oneself. The word ‘blog’ may derive from ‘Web log’, a type of Web site consisting of a ‘log’ of other interesting sites. It may also derive from ‘backlog’, a term for the mass of blog-worthy material which dedicated bloggers tend to build up, and the mass of work which doesn’t get done while they’re blogging about it. Alternatively, it may be a cross between ‘brag’ and ‘slog’, encapsulating the experience of reading a blog for (a) the author and (b) everyone else.

Q: Blogs – are they something to do with that MySpace thing that the kids seem to be doing these days? What’s that all about?

A: God knows. Shall we talk about blogging?

The vagaries of science

The slightly oxymoronic Britannica Blog has recently hosted a series of posts on Web 2.0, together with responses from Clay Shirky, Andrew Keen and others. The debate’s been of very variable quality, on both the pro- and the anti- side; reading through it is a frustrating experience, not least because there’s some interesting stuff in among the strawman target practice (on both sides) and the tent-preaching (very much on both sides). As I said in response to a (related) David Weinberger post recently, it’s not always clear whether the pro-Web 2.0 camp are talking about how things are (what knowledge is like & how it works) or about how things are changing – or about how they’d like things to change. The result is that developments with the potential to be hugely valuable (like, say, Wikipedia) are written about as if they had already realised their potential, and attempts to point out flaws or collateral damage are dismissed as naysaying. On the anti- side, the danger is of an equally unthinking embrace of how things are – or how they were before all this damn change started happening.

All this is by way of background to some comments I left on danah boyd‘s contribution (which is well worth reading in full), and may explain (if not excuse) the impatient tone. danah, then me:

Why are we telling our students not to use Wikipedia rather than educating them about how Wikipedia works?

Because I could give a 20-credit course on ‘how Wikipedia works’ and not get to the bottom of it. It’s complex. It’s interesting. I happen to believe it’s an almighty mess, but it’s a very complex and interesting mess. For practical purposes “Don’t cite it” is quicker.

Wikipedia is not perfect. But why do purported experts spend so much time arguing against it rather than helping make it a better resource?

This is a false opposition: two different activities with different timescales, different skillsets and different rewards. I get an idea, I write it down – generally it won’t let me go until I’ve written it down. I look at what I’ve written down, and I want to rewrite it – quite often it won’t let me go until I’ve rewritten it. All of this takes slabs of time, but they’re slabs of time spent engrossed with ideas and language, my own and other people’s – and the result is a real and substantial contribution to a conversation, by an identifiable speaker.

I look at a bad Wikipedia article [link added] and I don’t know where to start. What I’d like to do is delete the whole thing and put in the stub of a decent article that I can come back to later, but I sense that this will be regarded as uncool. What I don’t want to do is clamber through the existing structure of an entry I think shouldn’t have been written in the first place correcting an error here or there, because that’s a long-drawn-out task that’s both tedious and unrewarding. And what I particularly don’t want to do is return to the article again and again over a period of weeks because my edits are getting reverted by someone hiding behind a pseudonym.

(I think what Wikipedia anonymity has shown, incidentally, is that people really don’t like anonymity. Wikipedia has produced its own stable identities – and its own authorities, based on the reputation particular Wikipedia editors have established within the Wikipedia community.)

Is it really worth that much prestige to write an encyclopedia article instead of writing a Wikipedia entry?

Well, yes. If I get a journal article accepted or I’m commissioned to write an encyclopedia article, I’m joining an established conversation among fellow experts. What I’ve written stays written and gets cited – in other words, it contributes to the conversation, and hence to the formation of the cloud of knowledge within the discipline. And it goes on my c.v. – because it can be retrieved as part of a reviewable body of work. If I write for Wikipedia I don’t know who I’m talking to, nobody else knows who’s writing, and what I’ve written can be unwritten at any moment. And it would look ridiculous on my c.v. – because they’ve only got my word that it is part of my body of work, assuming it still exists in the form in which I wrote it.

The way things are now, knowledge lives in domain-sized academic conversations, which are maintained by gatekeepers and authorities. Traditional encyclopedias make an effort to track those conversations, at least in their most recently crystallised (serialised?) form. Wikipedia is its own conversation with its own authorities and its own gatekeepers. For the latest state of the Wikipedia conversation to coincide with the conversation within an established domain of knowledge is a lucky fluke, not a working assumption.

Update The other big difference between traditional encyclopedias and Wikipedia (as someone known only as ‘bright’ reminded me, in comments over here) is that the latter gets much more use. From my response:

Comparisons with the Britannica are interesting as far as they go – and I don’t believe they do Wikipedia any favours – but they don’t address the way that Wikipedia is used, essentially as an extension of Google. When I google for information I’m not hoping to find an encyclopedia article. Generally, Britannica articles used to appear on the first page of hits, but not right at the top; usually you’d see fan sites, hobby sites, school sites, scholarly articles and domain-specific reference works on the same page, and usually the fan sites, etc., would be just as good. (I stopped using the Britannica altogether as soon as it went paywalled.) If all that had happened was that Britannica results had been pushed down from number 8 to number 9, with their place being taken by Wikipedia, I doubt we’d be having this conversation. What’s happened is that, for topic after topic, Wikipedia is number 1; the people who would have run all those fan sites and hobby sites are either writing for Wikipedia instead or they’re not bothering, since after all Wikipedia is already there. (Or else the sites are still out there, but they’re way down the search result list because they’re not getting the traffic.) It’s a monoculture; it’s a single point of failure, in a way that encyclopedias aren’t. And it’s the last thing that should have happened on the Web. (I’ll own up to a lingering Net idealism. Internet 0.1, I think it was.)

Alright, yeah

Stephen Lewis (via Dave) has a good and troubling post about the limits of the Web as a repository of knowledge.

while the web might theoretically have the potential of providing more shelf space than all libraries combined, in reality it is quite far from being as well stocked. Indeed, only a small portion of the world’s knowledge is available online. The danger is that as people come to believe that the web is the be-all and end-all source of information, the less they will consult or be willing to pay for the off-line materials that continue to comprise the bulk of the world’s knowledge, intellectual achievement, and cultural heritage. The outcome: the active base of knowledge used by students, experts, and ordinary people will shrink as a limited volume of information, mostly culled from older secondary sources, is recycled and recombined over and again online, leading to an intellectual dark-age of sorts. In this scenario, Wikipedia entries will continue to grow uncontrolled and unverified while specialized books, scholarly journals and the world’s treasure troves of still-barely-explored primary sources will gather dust. Present-day librarians, experts in the mining of information and the guidance of researchers, will disappear. Scholarly discourse will slow to a crawl while the rest of us leave our misconceptions unquestioned and the gaps in our knowledge unfilled.

The challenge is either – or both – to get more books, periodicals, and original source materials online or to prompt people to return to libraries while at the same time ensuring that libraries remain (or become) accessible. Both tasks are dauntingly expensive and, in the end, must be paid for, whether through taxes, grants, memberships, donations, or market-level or publicly-subsidized fees.

Lewis goes on to talk about the destruction of the National and University Library in Sarajevo, among other things. Read the whole thing.

But what particularly struck me was the first comment below the post.

I think you’re undervaluing the new primary sources going up online, and you’re undervaluing the new connections that are possible which parchment can’t compete with like this post I’m making to you. I definitely agree that there is a ton of great knowledge stored up in books and other offline sources, but people solve problems with the information they have, and in many communities – especially rural third world communities, offline sources are just as unreachable, if not more, than online sources.

This is a textbook example of how enthusiasts deal with criticism. (I’m not going to name the commenter, because I’m not picking on him personally.) It’s a reaction I’ve seen a lot in debates around Wikipedia, but I’m sure it goes back a lot further. I call it the “your criticism may be valid but” approach – it starts by formally conceding the criticism, thus avoiding the need to refute or even address it. Counter-arguments can then be deployed at will, giving the rhetorical effect of debate without necessarily addressing the original point. It’s a very persuasive style of argument. In this case there are three main strategies. The criticism may be valid…

I think you’re undervaluing the new primary sources going up online

but (#1) things are getting better all the time, and soon it won’t be valid any more! (This is a very common argument among ‘social software’ fans. Say something critical about Wikipedia on a public forum, then start your stopwatch. See also Charlie Stross’s ‘High Frontier’ megathread.)

you’re undervaluing the new connections that are possible which parchment can’t compete with like this post I’m making to you. … in many communities – especially rural third world communities, offline sources are just as unreachable, if not more, than online sources

but (#2) you’re just looking at the negatives and ignoring the positives, and that’s wrong! Look at the positives, never mind the negatives! (Also very common out on the Web 2.0 frontier.)

I definitely agree that there is a ton of great knowledge stored up in books and other offline sources, but people solve problems with the information they have

but (#3) …hey, we get by, don’t we? Does it really matter all that much?

I’m not a fan of Richard Rorty, but I believe that communities have conversations, and that knowledge lives in those conversations (even if some of them are very slow conversations that have been serialised to paper over the decades). I also believe that knowledge comes in domains, and that each domain follows the shape of the overall cloud of knowledge constituted by a conversation. But I’ve been in enough specialised communities (Unix geeks, criminologists, folk singers, journalists…) to know that there’s a wall of ignorance and indifference around each domain; there probably has to be, if we’re not to keel over from too much perspective. Your stuff, you know about and you know that you don’t know all that much; you know you’re not an expert. Their stuff, well, you know enough; you know all you need to know, and anyway how complicated can it be?

Enthusiasts are good people to have around; they hoard the knowledge and keep the conversation going, even when there’s a bit of a lull. The trouble is, they tend to keep the wall of ignorance and apathy in place while they’re doing it. The moral is, if your question is about something just outside a particular domain of knowledge, don’t ask an enthusiast – they’ll tell you there’s nothing there. (Or: there’s something there now, but it won’t be there for long. Or: there’s something there, but look at all the great stuff we’ve got here!)

I call that education

It became apparent that most of them hadn’t heard of Twitter.

Tim Bray misjudges his audience. What’s interesting is that the audience in question was at something called Web Design World. This leads Tim to wonder just how small the ‘Internet in-crowd’ really is – and, conversely, if it is that small, how come it makes so much noise.

I wrote about this last year, and I think some of what I wrote then is worth repeating:

When I first started using the Internet, about ten years ago, there was a geek Web, a hobbyist Web, an academic Web (small), a corporate Web (very small) and a commercial Web (minute) – and the geek Web was by far the most active. Since then the first four sectors have grown incrementally, but the commercial Web has exploded, along with a new sixth sector – the Web-for-everyone of AOL and MSN and MySpace and LiveJournal (and blogs), whose users vastly outnumber those of the other five. But the geek Web is still where a lot of the new interesting stuff is being created, posted, discussed and judged to be interesting and new.

Add social software to the mix – starting, naturally, within the geek Web, as that’s where it came from – and what do you get? You get a myth which diverges radically from the reality. The myth is that this is where the Web-for-everyone comes into its own, where millions of users of what was built as a broadcast Web with walled-garden interactive features start talking back to the broadcasters and breaking out of their walled gardens. The reality is that the voices of the geeks are heard even more loudly – and even more disproportionately – than before. Have a look at the ‘popular’ tags on del.icio.us: as I write, six of the top ten (including all of the top five) relate directly to programmers, and only to programmers. (Number eight reads: “LinuxBIOS – aims to replace the normal BIOS found on PCs, Alphas, and other machines with a Linux kernel”. The unglossed reference to Alphas says it all.) Of the other four, one’s a political video, two are photosets and one is a full-screen animation of a cartoon cat dancing, rendered entirely in ASCII art. (Make that seven of the top ten.)

[2007 del.icio.us/popular update: still six out of ten, albeit only two out of the top five]

Yes, ‘insiders’ do make a disproportionate amount of noise. And yes, the in-crowd does look bigger on the inside than it does from the outside – so does any crowd once you’re in it. The mistake is to assume that your crowd is the only crowd there is – but it’s a mistake that every crowd makes. An old post about Technorati (this time from 2005) makes this point better than I could paraphrase it:

The equation of authority with ‘popularity’ is, in one sense, neither inappropriate nor avoidable … the distinction between the knowledge produced in academic discourse and the knowledge produced in conversation is ultimately artificial: in both cases, there’s a cloud of competing and overlapping arguments and definitions; in both cases, each speaker – or each intervention – draws a line around a preferred constellation of concepts. At some level, all knowledge is ‘cloudy’. Moreover, in both cases, the outcome of interactions depends in large part on the connections which speakers can make between their own arguments and those of other speakers, particularly those who speak with greater authority. (Hence controversy: your demonstration that an established writer is wrong about A, B and C will interest a lot more people – and do more for your reputation – than your utterly original exposition of X, Y and Z.) You may not like the internationally-renowned scholar who’s agreed to look in on your workshop – you may resent his refusal to attend the whole thing and disapprove of his attitude to questioners; you may not even think his work’s that great – but you still invite him: he’s popular, which means he’s authoritative, which means he reflects well on you. Domain by domain, authority does indeed track popularity.

But there’s the rub – and here begins the argument against Technorati. Domain by domain, authority tracks popularity, but not globally: it makes a certain kind of sense to say that the Sun is more authoritative than the Star, but to say that it’s more authoritative than the Guardian would be absurd. (Perverse rankings like this are precisely an indicator of when two distinct domains are being merged.) Similarly, it’s easy to imagine somebody describing either the Daily Kos or Instapundit as the most ‘authoritative’ site on the Web; what’s impossible to imagine is the mindset which would say that Kos was almost the most authoritative source, second only to Glenn Reynolds. But this is what drops out if we use Technorati’s (global) equation of popularity with authority. … This effect has been masked up to now by the prevalence of a single domain among Technorati tags (and, indeed, Technorati users): it’s a design flaw which has been compensated by an implementation flaw.

Some final brief thoughts. Blogging tends towards conversation. Conversation routes around gatekeepers (Technorati is, precisely, a gatekeeper – but an avoidable gatekeeper). Conversations happen within domains. People cross domains, but domains don’t overlap. Every domain thinks it’s the only one.

Except, of course, the domain shared by readers of this blog, which is plural and open to a high degree. A uniquely high degree, in fact…

Great big bodies

I think the thing that really irritates me about the Long Tail is just how basic the statistical techniques underlying it are. If you’ve got all that data, why on earth wouldn’t you do something more interesting and more informative with it? It’s really not hard. (In fact it’s so easy that I can’t help feeling the Long Tail image must have some other appeal – but more on that later.)

As you may have noticed, this weblog hasn’t been updated for a while. In fact, when I compared it with the other blogs in my RSS reader I found it was a bit of an outlier:

[Chart: number of blogs by days since last update, in 10-day columns]

The Y axis is ‘number of blogs’ and the X axis is days since the last update: two blogs updated today (zero days ago), eleven in the previous ten days, one in the ten-day period before that, and so on until you get to the 71-80 column. Note that each column is a range of values, and that the columns are touching; technically this is a histogram rather than a bar chart.

You can do something similar with ‘posts in last 100 days’:

[Chart: number of blogs by posts in the last 100 days]

This shows that the really heavy posters are in the minority in this sample; twelve out of the eighteen have 30 or fewer posts in the last 100 days.

So it looks as if I’m reading a lot of reasonably regular but fairly light bloggers, and a few frequent fliers. If you put the two series together you can see the two groups reflected in the way the sample smears out along the X and Y axes without much in the middle:

[Chart: days since last update plotted against posts in the last 100 days]
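I don’t know what the original charts were drawn with, but here’s a rough sketch of how you might knock out all three in Python with matplotlib. The eighteen pairs of numbers (days since the last post, posts in the last hundred days) are invented, and the ten-day bins are my guess at matching the columns above:

```python
# A rough sketch of the three charts above: two histograms and a scatter plot,
# built from one (days_since_last_post, posts_in_last_100_days) pair per blog.
# The eighteen data points below are invented for illustration.
import matplotlib.pyplot as plt

blogs = [
    (0, 88), (0, 64), (1, 52), (2, 45), (3, 40), (4, 34),   # frequent fliers
    (5, 25), (6, 22), (7, 17), (8, 14), (9, 12), (10, 11),  # lighter bloggers
    (10, 9), (15, 6), (34, 5), (48, 3), (62, 1), (77, 1),
]
days_since_post = [d for d, _ in blogs]
posts_in_100 = [p for _, p in blogs]

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3.5))

# Histogram 1: number of blogs by days since last update, in 10-day bins.
ax1.hist(days_since_post, bins=range(0, 91, 10))
ax1.set_xlabel("days since last post")
ax1.set_ylabel("number of blogs")

# Histogram 2: number of blogs by posts in the last 100 days.
ax2.hist(posts_in_100, bins=range(0, 101, 10))
ax2.set_xlabel("posts in last 100 days")
ax2.set_ylabel("number of blogs")

# Scatter: the two series together - heavy posters sit up the Y axis,
# the long-lapsed string out along the X axis, not much in the middle.
ax3.scatter(days_since_post, posts_in_100)
ax3.set_xlabel("days since last post")
ax3.set_ylabel("posts in last 100 days")

plt.tight_layout()
plt.show()
```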

My question is this. If you can produce readable and informative charts like this quickly and easily (and I assure you that you can – we’re talking an hour from start to finish, and most of that went on counting the posts), what on earth would make you prefer this:

[Chart: the same data as a Long Tail-style list, ranked in descending order]

or this:

[Chart: a second Long Tail-style ranking of the same data]

I can only think of two reasons. One is that it looks kind of like a power law distribution, and that’s a cool idea. Except that it isn’t a power law distribution, or any kind of distribution – it’s a list ranked in descending order, and, er, that’s it. The same criticism applies, obviously, to the classic ‘power law’ graphic ranking weblogs in descending order of inbound links.

DIGRESSION
You can compute a distribution of inbound links across weblogs using very much the techniques I’ve used here – so many weblogs with one link, so many with two and so forth. Oddly enough, what you end up with then is a curve which falls sharply then tapers off – there are far fewer weblogs with two links than with only one, but not so much of a difference between the ’20 links’ and ’21 links’ categories. However, even that isn’t a power law distribution, for reasons explained here and here (reasons which, for the non-mathematician, can be summed up as ‘a power law distribution means something specific, and this isn’t it’).
END DIGRESSION
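If you’d rather see the difference on the screen than take my word for it, here’s a five-minute sketch – the inbound-link counts are invented – contrasting the ranked list the Long Tail charts show with the frequency distribution described in the digression:

```python
# The difference between a ranked list (the "Long Tail" picture) and a
# frequency distribution (so many weblogs with one inbound link, so many
# with two, and so forth). The link counts below are invented.
from collections import Counter

inbound_links = [1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 8, 12, 20, 21, 95, 240]

# Long Tail view: the same numbers, sorted in descending order - and that's it.
print("ranked:", sorted(inbound_links, reverse=True))

# Distribution view: how many weblogs have each number of inbound links.
for links, n_blogs in sorted(Counter(inbound_links).items()):
    print(f"{n_blogs} weblog(s) with {links} inbound link(s)")
```

Whether the second view fits a power law is another question entirely (see the links above); the point is simply that it is a distribution, which the ranked list isn’t.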

The other reason – and, I suspect, the main reason – is that the Long Tail privileges ranking: the question it suggests isn’t how many of which are doing what? but who’s first? A histogram might give more information, but it wouldn’t tell me who’s up there in the big head, or how far down the tail I am.

People want to be on top; failing that, they want to fantasise about being on top and identify with whoever’s up there now. Not everyone, but a lot of people. The popularity of the Long Tail image has a lot in common with the popularity of celebrity gossip magazines.

They don’t know about us

Some dystopian thoughts on data harvesting, usage tracking, recommendation engines and consumer self-expression. First, here’s Tom, then me:

“This is going to be one of the great benefits of ambient/pervasive computing or everyware – not the tracking of objects but the tracking and collating of you yourself through objects.”

This sentence works just as well with the word ‘benefits’ replaced by ‘threats’. It all depends who gets to do the tracking and collating, I suppose.

Now here’s Max Levchin, formerly of Paypal, and his new toy Slide (via Thomas):

If Slide is at all familiar, it’s as a knockoff of Flickr, the photo-sharing site. Users upload photos, which are displayed on a running ticker or Slide Show, and subscribe to one another’s feeds. But photos are just a way to get Slide users communicating, establishing relationships, Levchin explains.

The site is beginning to introduce new content into Slide Shows. It culls news feeds from around the Web and gathers real-time information from, say, eBay auctions or Match.com profiles. It drops all of this information onto user desktops and then watches to see how they react.

Suppose, for example, there’s a user named YankeeDave who sees a Treo 750 scroll by in his Slide Show. He gives it a thumbs-up and forwards it to his buddy – we’ll call him Smooth-P. Slide learns from this that both YankeeDave and Smooth-P have an interest in a smartphone and begins delivering competing prices. If YankeeDave buys the item, Slide displays headlines on Treo tips or photos of a leather case. If Smooth-P gives a thumbs-down, Slide gains another valuable piece of data. (Maybe Smooth-P is a BlackBerry guy.) Slide has also established a relationship between YankeeDave and Smooth-P and can begin comparing their ratings, traffic patterns, clicks and networks.

Based on all that information, Slide gains an understanding of people who share a taste for Treos, TAG Heuer watches and BMWs. Next, those users might see a Dyson vacuum, a pair of Forzieri wingtips or a single woman with a six-figure income living within a ten-mile radius. In fact, that’s where Levchin thinks the first real opportunity lies – hooking up users with like-minded people. “I started out with this idea of finding shoes for my girlfriend and hotties on HotOrNot for me,” Levchin says with a wry smile. “It’s easy to shift from recommending shoes to humans.”

If this all sounds vaguely creepy, Levchin is careful to say he’s rolling out features slowly and will only go as far as his users will allow. But he sees what many others claim to see: Most consumers seem perfectly willing to trade preference data for insight. “What’s fueling this is the desire for self-expression,” he says.

Nick:

I’m not sure that I see, in today’s self-portraits on MySpace or YouTube or Flickr, or in the fetishistic collecting of virtual tokens of attention, the desire to mark one’s place in a professional or social stratum. What they seem to express, more than anything, is a desire to turn oneself into a product, a commodity to be consumed. And since, as I wrote earlier, “self-commoditization is in the end indistinguishable from self-consumption,” the new portraiture seems at its core narcissistic. The portraits are advertisements for a commoditized self

Granny Weatherwax:

“And sin, young man, is when you treat people as things. Including yourself. That’s what sin is. … People as things, that’s where it starts.”

More precisely, that’s where some extraordinarily unequal and dishonest social relationships can start.

Everything new is old again

Printed in iSeries NEWS UK, February 2006

Everybody’s talking about Web 2.0! Web 2.0 offers a whole new way of looking at the Web, a whole new way of developing applications and a whole new way of making enough money to retire on for some irritating bunch of American students who dream up applications you can’t see the point of anyway! Web 2.0 is different because it’s a whole new departure from the old ways of doing things – and what makes it new is that it’s so different.

Web 2.0 breaks all the rules. The rigid document-based format of HTML became a universal computing standard in the early days of the Internet, some time around Web 0.9 [Can we check this? – Ed]. Web 2.0 emerged when a few pioneering developers broke with this orthodoxy, insisting that a page-based document markup language like HTML was better adapted to marking up page-based documents than to running high-volume transaction processing systems. With the industry still reeling from the shockwaves of this revelation, an alternative approach was unveiled. The key Web 2.0 methodology of AJAX – Asynchronous Javascript And XML – breaks the dominance of the HTML page. Now, applications can be built using pages which are dynamically reshaped, driven by back-end databases and the program logic defined by developers. Screen input fields can even be highlighted or prompted individually, without needing to refresh the entire screen! It’s this kind of innovation that makes Web 2.0 so different.

What’s more, it’s new. Web 2.0 is not in any way old – it’s not even similar to anything old! Some people have compared the excitement about Web 2.0 with the dotcom boom of the late 1990s. It’s true that Web 2.0 is likely to involve the proliferation of new companies which you’ve never heard of, and most of which you’ll never hear of again. However, there are three significant differences. First, the typical dotcom company raised big money from investors, spent it, then got bought out for small change by an established business. By contrast, the typical Web 2.0 company raises small change from investors, spends it, then gets bought out for big money by an established dotcom business. Secondly, dotcoms usually had a speculative long-term business case and a meaningless name interspersed with capital letters; they also used buzzwords beginning with a lower-case e. By contrast, Web 2.0 companies generally have a speculative short-term business case and a meaningless name interspersed with extraneous punctuation marks; also, their buzzwords tend to begin with a lower-case i. Finally, Web 2.0 is quite different from the dotcom boom, which took place in the late 1990s and so is now quite old. Web 2.0, on the other hand, is new, which in itself makes it different.

Above all, Web 2.0 is here to stay. In the wake of the dotcom boom, dozens of unprepared startups crashed and burned. As the painful memories of WebVan and boo.com faded, little remained of the brave new world of e-business: these days there are only a couple of major players in each of the main e-business niche areas, and some of them are subsidiaries of bricks-and-mortar businesses, which is cheating. By contrast, the big names of Web 2.0 are all around us. In the field of tagging and social networking alone, there’s the innovative picture tagging and social networking company Flickr (now owned by Yahoo!); there’s the groundbreaking bookmark tagging and social networking company del.icio.us (now owned by Yahoo!); and let’s not forget the unprecedented social network tagging company Dodgeball (now owned by Google). Meanwhile blogging, that quintessential Web 2.0 tool, guarantees that fresh new voices will continue to be heard, thanks in no small part to quick-and-easy blog hosting companies like Blogger (now owned by Google) and the new kid on the block, Myspace (now owned by Rupert Murdoch).

Web 2.0 is new, it’s different, and above all, it’s here – and it’s here to stay! So get down and get with it and get hep to the Web 2.0 scene, daddy-o! [Can we check this as well? – Ed] Don’t say ‘programming’, say ‘scripting’! Don’t say ‘directory’, say ‘tags’! Don’t say ‘DoubleClick’, say ‘Google AdSense’!

And don’t say ‘hype’. Please don’t say that.

Got a web between his toes

Now that Nick has read the last rites for Web 2.0, perhaps it’s safe to return to a question that’s never quite been resolved.

To wit: what is Web 2.0? (We’ve established that it’s not a snail.) Over at What I wrote, I’ve just put up a March 2003 article called “In Godzilla’s footprint“. In it, I asked similar questions about e-business, taking issue with the standard rhetoric of ‘efficiency’ and ’empowerment’. I suggested that e-business wasn’t – or rather isn’t – a phenomenon in its own right, but the product of three much larger trends: standardisation, automation and externalisation of costs. (Read the whole thing.)

Assuming for the moment that I called this one correctly – and I find my arguments pretty persuasive – what of Web 2.0? More of the same, only featuring the automation of income generation (AdSense) and the externalisation of payroll costs (‘citizen journalism’)? Or is there more going on – and if so, what?

Update 16/11

It would be remiss of me not to give any pointers to my own thinking on Web 2.0. So I’m republishing another column at What I wrote, this time from February of this year. Most of you will probably have seen it the first time round, when it appeared in iSeries NEWS UK, but I think it’s worth giving it another airing. Have a gander.

Simplify, reduce, oversimplify

An interesting post on ‘folksonomies’ at Collin Brooke’s blog prompted this comment, which I thought deserved a post of its own.

I think Peter Merholz‘s coinage ‘ethnoclassification’ could be useful here. As I’ve argued elsewhere, I think we can see all taxonomies (and ultimately all knowledge) as the product of an extended conversation within a given community: in this respect a taxonomy is simply an accredited ‘folksonomy’.

However, I think there’s a dangerous (but interesting) slippage here between what folksonomies could be and what folksonomies are: between the promise of the project of ‘folksonomy’ (F1) and what’s delivered by any identifiable folksonomy (F2). (You can get into very similar arguments about Wikipedia 1 and Wikipedia 2 – sometimes with the same people.) Compared to the complexity and exhaustiveness of any functioning taxonomic scheme, I don’t believe that any actually-existing ‘folksonomy’ is any more than an extremely sketchy work in progress.

For this reason (among others), I believe we need different words for the activity and the endpoint. So we could contrast classification with Peterme’s ‘ethnoclassification’, on one hand, and note that the only real difference between the two is that the former takes place within structured and credentialled communities. On the other hand, we could contrast actual taxonomies with ‘folksonomies’. The latter could have very much the same relationship with officially-credentialled taxonomies as ethnoclassification does with classification – but they aren’t there yet.

The shift from ‘folksonomy’ to ‘ethnoclassification’ has two interesting side-effects, which I suspect are both fairly unwelcome to folksonomy boosters (a group in which I don’t include Thomas Vander Wal, ironically enough). On one hand, divorcing process and product reminds us that improvements to one don’t necessarily translate as improvements in the other. The activity that goes into producing a ‘folksonomy’, as distinct from a taxonomy, may give more participants a better experience (more egalitarian, more widely distributed, more chatty, more fun) but you wouldn’t necessarily expect the end product to show improvements as a result. (You’d expect it to be a bit scrappy, by and large.) On the other hand, divorcing process from technology reminds us that ethnoclassification didn’t start with del.icio.us; the aggregation of informal knowledge clouds is something we’ve been doing for a long time, perhaps as long as we’ve been human.

The people with the answers

Nick:

Larry Sanger, the controversial online encyclopedia’s cofounder and leading apostate, announced yesterday, at a conference in Berlin, that he is spearheading the launch of a competitor to Wikipedia called The Citizendium. Sanger describes it as “an experimental new wiki project that combines public participation with gentle expert guidance.” The Citizendium will begin as a “fork” of Wikipedia, taking all of Wikipedia’s current articles and then editing them under a new model that differs substantially from the model used by what Sanger calls the “arguably dysfunctional” Wikipedia community. “First,” says Sanger, in explaining the primary differences, “the project will invite experts to serve as editors, who will be able to make content decisions in their areas of specialization, but otherwise working shoulder-to-shoulder with ordinary authors. Second, the project will require that contributors be logged in under their own real names, and work according to a community charter. Third, the project will halt and actually reverse some of the ‘feature creep’ that has developed in Wikipedia.”

I’ve been thinking about Wikipedia, and about what makes a bad Wikipedia article so bad, for some time – this March 2005 post took off from some earlier remarks by Larry Sanger. I’m not attempting to pass judgment on Wikipedia as a whole – there are plenty of good Wikipedia articles out there, and some of them are very good indeed. But some of them are bad. Picking on an old favourite of mine, here’s the first paragraph of the Wikipedia article on the Red Brigades, with my comments.

The Red Brigades (Brigate Rosse in Italian, often abbreviated as BR) are

The word is ‘were’. The BR dissolved in 1981; its last successor group gave up the ghost in 1988. There’s a small and highly violent group out there somewhere which calls itself “Nuove Brigate Rosse” – the New Red Brigades – but its continuity with the original BR is zero. This is a significant disagreement, to put it mildly.

a militant leftist group located in Italy. Formed in 1970, the Marxist Red Brigades

‘Marxist’ is a bizarre choice of epithet. Most of the Italian radical left was Marxist, and almost all of it declined to follow the BR’s lead. Come to that, the Italian Communist Party (one of the BR’s staunchest enemies) was Marxist. Terry Eagleton’s a Marxist; Jeremy Hardy’s a Marxist; I’m a Marxist myself, pretty much. The BR had a highly unusual set of political beliefs, somewhere between Maoism, old-school Stalinism and pro-Tupamaro insurrectionism. ‘Maoist’ would do for a one-word summary. ‘Marxist’ is both over-broad and misleading.

sought to create a revolutionary state through armed struggle

Well, yes. And no. I mean, I don’t think it’s possible to make any sense of the BR without acknowledging that, while they did have a famous slogan about portare l’attacco al cuore dello stato (‘attacking at the heart of the state’), their anti-state actions were only a fairly small element of what they did. To begin with they were a factory-based group, who took action against foremen and personnel managers; in their later years – which were also their peak years – the BR, like other armed groups, got drawn into what was effectively a vendetta with the police, prioritising revenge attacks over any kind of ‘revolutionary’ programme. You could say that the BR were a revolutionary organisation & consequently had a revolutionary programme throughout, even if their actions didn’t always match it – but how useful would this be?

and to separate Italy from the Western Alliance

Whoa. I don’t think the BR were particularly in favour of Italy’s NATO membership, but the idea that this was one of their key goals is absurd. If the BR had been a catspaw for the KGB, intent on fomenting subversion so as to destabilise Italy, then this probably would have been high on their list. But they weren’t, and it wasn’t.

In 1978, they kidnapped and killed former Prime Minister Aldo Moro under obscure circumstances.

Remarkably well-documented circumstances, I’d have said.

After 1984’s scission

This is just wrong – following growing and unresolvable factionalism, the BR formally dissolved in October 1981.

Red Brigades managed with difficulty to survive the official end of the Cold War in 1989

This is both confused and wrong. Given that there was a split, how would the BR have survived beyond 1981 (or 1984), let alone 1989? As for the BR’s successor groups, the last of them was last heard from in 1988.

even though it is now a fragile group with no original members.

Or rather, even though the name is now used by a small group about which very little is known, but which is not believed to have any connection to the original group (whose members are after all knocking on a bit by now).

Throughout the 1970’s the Red Brigades were credited with 14,000 acts of violence.

Good grief. Credited by whom? According to the sources I’ve seen, between 1970 and 1981 Italian armed struggle groups were responsible for a total of 3,258 actions, including 110 killings; the BR’s share of the total came to 472 actions, including 58 killings. (Most ‘actions’ consisted of criminal damage and did not involve personal violence.) I’d be the first to admit that the precision of these figures is almost certainly spurious, but even if we doubled that figure of 472 we’d be an awful long way short of 14,000.

I’m not even going to look at the body of the article.

I think there are two main problems here; the good news is that Larry’s proposals for the neo-Wikipedia (Nupedia? maybe not) would address both of them.

Firstly, first mover advantage. The structure of Wikipedia creates an odd imbalance between writers and editors. Writing a new article is easy: the writer can use whatever framework he or she chooses, in terms both of categories used to structure the entry and of the overall argument of the piece. Making minor edits to an article is easy: mutter 1984? no way, it was 1981!, log on, a bit of typing and it’s done. But making major edits is hard – you can see from the comments above just how much work would be needed to make that BR article acceptable, starting from what’s there now. It would literally be easier to write a new article. What’s more, making edits stick is hard; I deleted one particularly ignorant falsehood from the BR article myself a few months ago, only to find my edit reverted the next day. (Of course, I re-reverted it. So there!)

Larry’s suggestion of getting experts on board is very much to the point here. Slap my face and call me a credentialled academic, but I don’t believe that everyone is equally qualified to write an encyclopedia article about their favourite topic – and I do think it matters who gets the first go.

Secondly, gaming the system. Wikipedia is a community as well as an encyclopedia. I’ll pass over Larry’s suggestion that Wikipedia is dysfunctional as a community, but I do think it’s arguable that some behaviours which work well for Wikipedia-the-community are dysfunctional for Wikipedia-the-resource. It’s been suggested, for instance, that what really makes Wikipedia special is the ‘history’ pages, which take the lid off the debate behind the encyclopedia and let us see knowledge in the process of formation. It follows from this that to show the world a single, ‘definitive’ version of an article on a subject would actually be a step backwards: “The discussion tab on Wikipedia is a great place to point to your favorite version … Does the world need a Wikipedia for stick-in-the-muds?” W. A. Gerrard objects:

Of what value is publicly documenting the change history of an encyclopedia entry? How can something that purports to be authoritative allow the creation of alternative versions which readers can adopt as favorites?

If an attempt to craft a wiki that strives for accuracy, even via a flawed model, is considered something for “stick-in-the-muds”, then it’s apparent that many of Wikipedia’s supporters value the dynamics of its community more than the credibility of the product they deliver.

I think this is exactly right: the history pages are worth much more to members of the Wikipedia community than to Wikipedia users. People like to form communities and communities like to chat – and edits and votes are the currency of Wikipedia chat. And gaming the system is fun (hence the word ‘game’). Aaron Swartz quotes comments about Wikipedia regulars who delete your newly[-]create[d] article without hesitation, or revert your changes and accuse you of vandalis[m] without even checking the changes you made, or who “edited” thousands of articles … [mostly] to remove material that they found unsuitable. This clearly suggests the emergence of behaviours which are driven more by social expectations than by a concern for Wikipedia. The second writer quoted above continues: Indeed, some of the people-history pages contained little “awards” that people gave each other — for removing content from Wikipedia.

Now, all systems can be gamed, and all communities chat. The question is whether the chatting and the gaming can be harnessed for the good of the encyclopedia – or, failing that, minimised. I’m not optimistic about the first possibility, and I suspect Larry Sanger isn’t either. Larry does, however, suggest a very simple hack which would help with the second: get everyone to use their real name. This would, among other things, make it obvious when a writer had authority in a given area. I don’t entirely agree with Aaron’s conclusion:

Larry Sanger famously suggested that Wikipedia must jettison its anti-elitism so that experts could feel more comfortable contributing. I think the real solution is the opposite: Wikipedians must jettison their elitism and welcome the newbie masses as genuine contributors to the project, as people to respect, not filter out.

This is half right: Wikipedia-the-community has produced an elite of ‘regulars’, whose influence over Wikipedia-the-resource derives from their standing in the community rather than from any kind of claim to expertise. I agree with Aaron that this is an unhealthy situation, but I think Larry was right as well. The artificial elitism of the Wikipedia community doesn’t only marginalise the ‘masses’ who contribute most of the original content; it also sidelines the subject-area experts who, within certain limited domains, have a genuine claim to be regarded as an elite.

I don’t know if the Citizendium is going to address these problems in practice; I don’t know if the Citizendium is going anywhere full stop. But I think Larry Sanger is asking the right questions. It’s increasingly clear that Wikipedia isn’t just facing in two directions at once, it’s actually two different things – and what’s good for Wikipedia-the-community isn’t necessarily good for Wikipedia-the-resource.

Back in the garage

I have begun to see what I think is a promising trend in the publishing world that may just transform the industry for good.

Paul Hartzog‘s Many-to-Many post on publishing draws some interesting conclusions from the success of Charlie Stross’s Accelerando (nice one, Charlie), but makes me a bit nervous, partly because of the liberal use of excitable bolding.

What I am suggesting is happening is the reversal of traditional publishing, i.e. the transformation of the system in which authors create and distribute their work. In the old system, it is assumed that the publishing process acts as a quality control filter … but it ends up merely being a profit-capturing filter.
[…]
Conversely, in the new system, the works are made available, and it is up to the community-at-large to pass judgement on their quality. In the emerging system, authors create and distribute their work, and readers, individually and collectively, including fans as well as editors and peers, review, comment, rank, and tag, everything.

Setting aside the formatting – and the evangelistic tone, something which never fails to set my teeth on edge – this is all interesting stuff. My problem is that I’m not sure about the economics of it. It’s not so much that writers won’t write if they don’t get paid – writers will write, full stop – as that writers won’t eat if they don’t get paid: some money has to change hands some time. If the kind of development Paul is talking about takes hold, I can imagine a range of more-or-less unintended consequences, all with different overtones but few of them, to this jaundiced eye, particularly desirable:

  1. Mass amateurisation means that nobody pays for anything, which in turn means that nobody makes a living from writing; this is essentially the RIAA/BPI anti-filesharing nightmare scenario, transposed to literature
  2. Mass amateurisation doesn’t touch the Dan Brown/Katie Price market, but gains traction in specialist areas of literature to the point where nobody can make a living from writing unless they’re writing for the mass market; this is Charlie Gillett’s argument for keeping CDs expensive (and the line the BPI would use against filesharing if they had any sense)
  3. Downloads like Accelerando function essentially as tasters and people end up buying just as many actual books, if not more; this scenario will also be familiar from filesharing arguments, as it’s the line generally used to counter the previous two
  4. Mass amateur production becomes a new sphere of economic activity, linked in with and subordinate to the major mainstream operators: this is the MySpace scenario (at least, the MySpace makes money for Murdoch scenario)
  5. Mass amateur production becomes a new sphere of non-economic activity, with a few star authors subsidised by publishing companies for the sake of the cachet they bring: the open source scenario
  6. Mass amateur production becomes a new sphere of economic activity, existing on the margins and in the shadows, out of the reach of the major mainstream operators: the punk scenario (or, for older readers, the hippie scenario)

We can dismiss the first, RIAA-nightmare scenario. The third (‘tasters’) would be bearable, although it wouldn’t go halfway to justifying Paul’s argument. Most of the rest look pretty ghastly to me. Perhaps Paul is thinking in terms of the last scenario or something like it – but in that case I’d have to say that his optimism is just as misplaced, for different but related reasons, as the pessimism of the first scenario (although a new wave of garage literature would be a fine thing to see).

The trouble with making your own history is that you don’t do it in circumstances of your own choosing. The participatory buzz of Web 2.0 tends to eat away at the structural and procedural walls that stop people getting their hands on stuff – but that can just mean that only the strongest and highest walls are left standing. Besides, walls can be useful, particularly if you want to keep a roof over your head.

We’re all together now, dancing in time

Ryan Carson:

I’d love to add friends to my Flickr account, add my links to del.icio.us, browse digg for the latest big stories, customise the content of my Netvibes home page and build a MySpace page. But you know what? I don’t have time and you don’t either…

Read the whole thing. What’s particularly interesting is a small straw poll at the end of the article, where Ryan asks people who actually work on this stuff what social software apps they use on a day-to-day basis. Six people made 30 nominations in all; Ryan had five of his own for a total of 35.

Here are the apps which got more than one vote:

Flickr (four votes)
Upcoming (two)
Wikipedia (two)

And, er, that’s it.

Social software looks like very big news indeed from some perspectives, but when it’s held to the standard of actually helping people get stuff done, it fades into insignificance. I think there are three reasons for this apparent contradiction. First, there’s the crowd effect – and, since you need a certain number of users before network effects start taking off, any halfway-successful social software application has a crowd behind it. It can easily look as if everyone‘s doing it, even if the relevant definition of ‘everyone’ looks like a pretty small group to you and me.

Then there’s the domain effect: tagging and user-rating are genuinely useful and constructive, in some not very surprising ways, within pre-defined domains. (Think of a corporate intranet app, where there is no need for anyone to specify that ‘Dunstable’ means one of the company’s offices, ‘Barrett’ means the company’s main competitor and ‘Monkey’ means the payroll system.) For anyone who is getting work done with tagging, in other words, tagging is going to look pretty good – and, thanks to the crowd effect, it’s going to look like a good thing that everyone‘s using.
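For the sake of concreteness, here is roughly what I mean, as a toy Python sketch. The documents and helper names are invented, and nothing in the code is meant to describe any real intranet app; the point is that nothing here defines what ‘Monkey’ means. Inside the firewall, shared context does the disambiguating that a public folksonomy can’t.

from collections import defaultdict

# A toy tag index of the kind an intranet tagging app might keep: tag -> document ids.
tag_index = defaultdict(set)

def tag_document(doc_id, *tags):
    """Free tagging, exactly as on a public site - no controlled vocabulary."""
    for tag in tags:
        tag_index[tag.lower()].add(doc_id)

def documents_for(tag):
    """Look a tag up; within one organisation this comes back unambiguous."""
    return set(tag_index.get(tag.lower(), set()))

# Hypothetical documents: inside this firm, 'monkey' only ever means the payroll
# system, 'barrett' the main competitor, 'dunstable' the Dunstable office.
tag_document("memo-042", "monkey", "payroll")
tag_document("sales-q3", "barrett", "pricing")
tag_document("floorplan-2006", "dunstable", "facilities")

print(documents_for("monkey"))   # {'memo-042'} - and nobody has to ask which monkey

Run the same few lines over tags drawn from the whole Web and documents_for("monkey") stops meaning anything in particular – which is really all the ‘domain effect’ amounts to.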

Thirdly, social software is new, different, interesting and fun, as something to play with. It’s a natural for geeks with time to play with stuff and for commentators who like writing about new and interesting stuff – let alone geek commentators. The hype generates itself; it’s the kind of development that’s guaranteed to look bigger than it is.

Put it all together – and introduce feedback effects, as the community of geek commentators starts to find social software apps genuinely useful within its specialised domain – and social software begins to look like a Tardis in reverse: much, much bigger on the outside than it is on the inside.

That’s not to say that social software isn’t interesting, or that it isn’t useful. But I think that in the longer term those two facets will move apart: useful and productive applications of tagging will be happening under the commentator radar, often behind organisational firewalls, while the stuff that’s interesting and fun to play with will remain… interesting and fun to play with.

The users geeks don’t see

Nick writes, provocatively as ever, about the recent ‘community-oriented’ redesign of the netscape.com portal:

A few days ago, Netscape turned its traditional portal home page into a knockoff of the popular geek news site Digg. Like Digg, Netscape is now a “news aggregator” that allows users to vote on which stories they think are interesting or important. The votes determine the stories’ placement on the home page. Netscape’s hope, it seems, is to bring Digg’s hip Web 2.0 model of social media into the mainstream. There’s just one problem. Normal people seem to think the entire concept is ludicrous.

Nick cites a post titled Netscape Community Backlash, from which this line leapt out at me:

while a lot of us geeks and 2.0 types are addicted to our own technology (and our own voices, to be honest), it’s pretty darn obvious that A LOT of people want to stick with the status quo

This reminded me of a minor revelation I had the other day, when I was looking for the Java-based OWL reasoner ‘pellet’. I googled for
pellet owl
– just like that, no quotes – expecting to find a ‘pellet’ link at the bottom of forty or fifty hits related to, well, owls and their pellets. In fact, the top hit was “Pellet OWL Reasoner”. (To be fair, if you google
owl pellet
you do get the fifty pages of owl pellets first.)

I think it’s fair to say that the pellet OWL reasoner isn’t big news even in the Web-using software development community; I’d be surprised if everyone reading this post even knows what an OWL reasoner is (or has any reason to care). But there’s enough activity on the Web around pellet to push it, in certain circumstances, to the top of the Google rankings (see for yourself).

Hence the revelation: it’s still a geek Web. Or rather, there’s still a geek Web, and it’s still making a lot of the running. When I first started using the Internet, about ten years ago, there was a geek Web, a hobbyist Web, an academic Web (small), a corporate Web (very small) and a commercial Web (minute) – and the geek Web was by far the most active. Since then the first four sectors have grown incrementally, but the commercial Web has exploded, along with a new sixth sector – the Web-for-everyone of AOL and MSN and MySpace and LiveJournal (and blogs), whose users vastly outnumber those of the other five. But the geek Web is still where a lot of the new interesting stuff is being created, posted, discussed and judged to be interesting and new.

Add social software to the mix – starting, naturally, within the geek Web, as that’s where it came from – and what do you get? You get a myth which diverges radically from the reality. The myth is that this is where the Web-for-everyone comes into its own, where millions of users of what was built as a broadcast Web with walled-garden interactive features start talking back to the broadcasters and breaking out of their walled gardens. The reality is that the voices of the geeks are heard even more loudly – and even more disproportionately – than before. Have a look at the ‘popular’ tags on del.icio.us: as I write, six of the top ten (including all of the top five) relate directly to programmers, and only to programmers. (Number eight reads: “LinuxBIOS – aims to replace the normal BIOS found on PCs, Alphas, and other machines with a Linux kernel”. The unglossed reference to Alphas says it all.) Of the other four, one’s a political video, two are photosets and one is a full-screen animation of a cartoon cat dancing, rendered entirely in ASCII art. (Make that seven of the top ten.)

I’m not a sceptic about social software: ranking, tagging, search-term-aggregation and the other tools of what I persist in calling ethnoclassification are both new and powerful. But they’re most powerful within a delimited domain: a user coming to del.icio.us for the first time should be looking for the ‘faceted search’ option straight away (“OK, so that’s the geek cloud, how do I get it to show me the cloud for European history/ceramics/Big Brother?”). The fact that there is no ‘faceted search’ option is closely related, I’d argue, to the fact that there is no discernible tag cloud for European history or ceramics or Big Brother: we’re all in the geek Web. (Even Nick Carr.) (Photography is an interesting exception – although even there the only tags popular enough to make the del.icio.us tag cloud are ‘photography’, ‘photo’ and ‘photos’. There are 40 programming-related tags, from ajax to xml.)
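Purely by way of illustration, the ‘faceted search’ I have in mind isn’t technically demanding: it’s just the popular-tags calculation run over the subset of bookmarks that already carry the facet tag you care about. A rough Python sketch, with made-up bookmarks (del.icio.us offers no such option, which is rather the point):

from collections import Counter

# Invented (url, tags) pairs standing in for a bookmark collection.
bookmarks = [
    ("http://example.com/ajax-tips",   {"programming", "ajax", "javascript"}),
    ("http://example.com/css-hacks",   {"programming", "css", "webdesign"}),
    ("http://example.com/rails-intro", {"programming", "ruby", "rails"}),
    ("http://example.com/raku-glazes", {"ceramics", "glazes", "howto"}),
    ("http://example.com/wedgwood",    {"ceramics", "history"}),
]

def tag_cloud(facet=None, top=10):
    """Most-used tags; if a facet tag is given, count only bookmarks carrying it."""
    counts = Counter()
    for url, tags in bookmarks:
        if facet is None or facet in tags:
            counts.update(tags - {facet})
    return counts.most_common(top)

print(tag_cloud())             # the global cloud: programming tags dominate
print(tag_cloud("ceramics"))   # the ceramics cloud: glazes, history, howto...

The interesting question isn’t whether something like this could be built – obviously it could – but whether there’s anything outside the geek cloud for it to show.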

Social software wasn’t built for the users of the Web-for-everyone. Reaction to the Netscape redesign tells us (or reminds us) that there’s no reason to assume they’ll embrace it.

Update: Have a look at Eszter Hargittai‘s survey of Web usage among 1,300 American college students, conducted in February and March 2006. MySpace is huge, and Facebook’s even huger, but Web 2.0 as we know it? It’s not there. 1.9% use Flickr; 1.6% use Digg; 0.7% use del.icio.us. Answering a slightly different question, 1.5% have ever visited Boingboing, and 1% Technorati. By contrast, 62% have visited CNN.com and 21% bbc.co.uk. It’s still, very largely, a broadcast Web with walled-garden interactivity. Comparing results like these with the prophecies of tagging replacing hierarchy, Long Tail production and mashups all round, I feel like invoking the story of the blind men and the elephant – except that I’m not even sure we’ve all got the same elephant.

We hear the sound of machines

Sooner or later, the Internet will need to be saved from Google. Because Google – which appears to be an integral part of the information-wants-to-be-free Net dream, the search engine which gives life to the hyperlinked digital nervous system of a kind of massively-distributed Xanadu project – is nothing of the sort. Google is a private company; Google’s business isn’t even search. Google’s business is advertising – and, whatever we think about how well search goes together with tagging and folksonomic stumbling-upon, search absolutely doesn’t go with advertising. (Update 15th June: this is a timely reminder that Google is a business, and its business is advertising. Mass personalisation, online communities, interactive rating and ranking, it’s all there – and it’s all about the advertising.)

I had thought that, in the context of plain vanilla Web search, Google actually had this cracked – that the prominence of ‘sponsored links’, displayed separately from search results, allowed them to deliver an unpolluted service and still make money. I hadn’t reckoned with AdSense. AdSense doesn’t in itself pollute Google’s search results. What it does is far worse: it encourages other people to pollute the Net. Which will mean, ultimately, that Google will paint (or choke) itself into a corner – but that, if we’re not careful, an awful lot of users will be stuck in that corner with them.

For a much fuller and more cogent version of this argument, read Seth Jayson (via Scott). One point in particular stood out: “Google (Nasdaq: GOOG) insiders are continuing to drop shares on the public at a rate that boggles the mind.” It’s true. Over the last year, as far as published records show, Sun insiders have sold $50,000 worth of shares, net. In the same period, IBM insiders have sold $6,500,000; Microsoft insiders have sold $1,500,000,000; and Google insiders have sold $5,000,000,000. See for yourself. That’s a lot of shares.

I couldn’t make it any simpler

I hate to say this – I’ve always loathed VR boosters and been highly sceptical about the people they boost – but Jaron Lanier’s a bright bloke. His essay Digital Maoism doesn’t quite live up to the title, but it’s well worth reading (thanks, Thomas).

I don’t think he quite gets to the heart of the current ‘wisdom of the crowds’ myth, though. It’s not Maoism so much as Revivalism: there’s a tight feedback loop between membership of the collective, collective activity and (crucially) celebration of the activity of the collective. Or: celebration of process rather than end-result – because the process incarnates the collective.

Put it this way. Say that (for example) the Wikipedia page on the Red Brigades is wildly wrong or wildly inadequate (which is just as bad); say that the tag cloud for an authoritative Red Brigades resource is dominated by misleading tags (‘kgb’, ‘ussr’, ‘mitrokhin’…). Would a wikipedian or a ‘folksonomy’ advocate see this situation as a major problem? Not being either I can’t give an authoritative answer, but I strongly suspect the answer would be No: it’s all part of the process, it’s all part of the collective self-expression of wikipedians and the growth of the folksonomy, and if the subject experts don’t like it they should just get their feet wet and start tagging and editing themselves. And if, in practice, the experts don’t join in – perhaps, in the case of Wikipedia, because they don’t have the stomach for the kind of ‘editing’ process which saw Jaron Lanier’s own corrections get reverted? Again, I don’t know for sure, but I suspect the answer would be another shrug: the wiki’s open to all – and tagspace couldn’t be more open – so who’s to blame, if you can’t make your voice heard, but you? There’s nothing inherently wrong with the process, except that you’re not helping to improve it. There’s nothing inherently wrong with the collective, except that you haven’t joined it yet.

Two quotes to clarify (hopefully) the connection between collective and process. Michael Wexler:

our understanding of things changes and so do the terms we use to describe them. How do I solve that in this open system? Do I have to go back and change all my tags? What about other people’s tags? Do I have to keep in mind all the variations on tags that reflect people’s different understanding of the topics?

The social connected model implies that the connections are the important part, so that all you need is one tag, one key, to flow from place to place and discover all you need to know. But the only people who appear to have time to do that are folks like Clay Shirky. The rest of us need to have information sorted and organized since we actually have better things to do than re-digest it.

What tagging does is attempt to recreate the flow of discovery. That’s fine… but what taxonomy does is recreate the structure of knowledge that you’ve already discovered. Sometimes, I like flowing around and stumbling on things. And sometimes, that’s a real pita. More often than not, the tag approach involves lots of stumbling around and sidetracks.

It’s like Family Feud [a.k.a. Family Fortunes – PJE]. You have to think not of what you might say to a question, you have to guess what the survey of US citizens might say in answer to a question. And that’s really a distraction if you are trying to just answer the damn question.

And our man Lanier:

there’s a demonstrative ritual often presented to incoming students at business schools. In one version of the ritual, a large jar of jellybeans is placed in the front of a classroom. Each student guesses how many beans there are. While the guesses vary widely, the average is usually accurate to an uncanny degree.

This is an example of the special kind of intelligence offered by a collective. It is that peculiar trait that has been celebrated as the “Wisdom of Crowds,”

The phenomenon is real, and immensely useful. But it is not infinitely useful. The collective can be stupid, too. Witness tulip crazes and stock bubbles. Hysteria over fictitious satanic cult child abductions. Y2K mania. The reason the collective can be valuable is precisely that its peaks of intelligence and stupidity are not the same as the ones usually displayed by individuals. Both kinds of intelligence are essential.

What makes a market work, for instance, is the marriage of collective and individual intelligence. A marketplace can’t exist only on the basis of having prices determined by competition. It also needs entrepreneurs to come up with the products that are competing in the first place. In other words, clever individuals, the heroes of the marketplace, ask the questions which are answered by collective behavior. They put the jellybeans in the jar.
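The jellybean example is easy enough to check for yourself. The little Python sketch below uses invented numbers and makes the standard assumption that individual guesses are wildly off but not systematically biased in one direction; that is precisely the assumption that fails in tulip crazes and stock bubbles, where everybody’s errors point the same way.

import random

random.seed(1)
TRUE_COUNT = 850   # jellybeans actually in the jar (an invented figure)

# Each student is individually way off (anywhere from 40% to 160% of the truth),
# but the errors are independent and centred on the true count.
guesses = [TRUE_COUNT * random.uniform(0.4, 1.6) for _ in range(200)]

crowd_average = sum(guesses) / len(guesses)
worst_miss = max(abs(g - TRUE_COUNT) for g in guesses)

print(f"worst individual guess is off by about {worst_miss:.0f}")
print(f"crowd average comes out at {crowd_average:.0f} (true count {TRUE_COUNT})")
# Typically the average lands within a few per cent of the true count,
# even though plenty of individual guesses are off by hundreds.

None of which tells you anything about who decided to fill a jar with jellybeans in the first place – and that, as Lanier says, is the part that takes individual intelligence.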

To illustrate this, once more (just the once) with the Italian terrorists. There are tens of thousands of people, at a conservative estimate, who have read enough about the Red Brigades to write that Wikipedia entry: there are a lot of ill-informed or partially-informed or tendentious books about terrorism out there, and some of them sell by the bucketload. There are probably only a few hundred people who have read Gian Carlo Caselli and Donatella della Porta’s long article “The History of the Red Brigades: Organizational structures and Strategies of Action (1970-82)” – and I doubt there are twenty who know the source materials as well as the authors do. (I’m one of the first group, obviously, but certainly not the second.) Once the work’s been done anyone can discover it, but discovery isn’t knowledge: the knowledge is in the words on the pages, and ultimately in the individuals who wrote them. They put the jellybeans in the jar.

This is why (an academic writes) the academy matters, and why academic elitism is – or at least can be – both valid and useful. Jaron:

The balancing of influence between people and collectives is the heart of the design of democracies, scientific communities, and many other long-standing projects. There’s a lot of experience out there to work with. A few of these old ideas provide interesting new ways to approach the question of how to best use the hive mind.

Scientific communities … achieve quality through a cooperative process that includes checks and balances, and ultimately rests on a foundation of goodwill and “blind” elitism — blind in the sense that ideally anyone can gain entry, but only on the basis of a meritocracy. The tenure system and many other aspects of the academy are designed to support the idea that individual scholars matter, not just the process or the collective.

I’d go further, if anything. Academic conversations may present the appearance of a collective, but it’s a collective where individual contributions are preserved and celebrated (“Building on Smith’s celebrated critique of Jones, I would suggest that Smith’s own analysis is vulnerable to the criticisms advanced by Evans in another context…”). That is, academic discourse looks like a conversation – which wikis certainly can do, although Wikipedia emphatically doesn’t.

The problem isn’t the technology, in other words: both wikis and tagging could be ways of making conversation visible, which inevitably means visualising debate and disagreement. The problem is the drive to efface any possibility of conflict, effectively repressing the appearance of debate in the interest of presenting an evolving consensus. (Or, I could say, the problem is the tendency of people to bow and pray to the neon god they’ve made, but that would be a bit over the top – and besides, Simon and Garfunkel quotes are far too obvious.)

Update 13th June

I wrote (above): It’s not Maoism so much as Revivalism: there’s a tight feedback loop between membership of the collective, collective activity and (crucially) celebration of the activity of the collective. Or: celebration of process rather than end-result – because the process incarnates the collective.

Here’s Cory Doctorow, responding to Lanier:

Wikipedia isn’t great because it’s like the Britannica. The Britannica is great at being authoritative, edited, expensive, and monolithic. Wikipedia is great at being free, brawling, universal, and instantaneous.

If you suffice yourself with the actual Wikipedia entries, they can be a little papery, sure. But that’s like reading a mailing-list by examining nothing but the headers. Wikipedia entries are nothing but the emergent effect of all the angry thrashing going on below the surface. No, if you want to really navigate the truth via Wikipedia, you have to dig into those “history” and “discuss” pages hanging off of every entry. That’s where the real action is, the tidily organized palimpsest of the flamewar that lurks beneath any definition of “truth.” The Britannica tells you what dead white men agreed upon, Wikipedia tells you what live Internet users are fighting over.

The Britannica truth is an illusion, anyway. There’s more than one approach to any issue, and being able to see multiple versions of them, organized with argument and counter-argument, will do a better job of equipping you to figure out which truth suits you best.

Quoting myself again, There’s nothing inherently wrong with the process, except that you’re not helping to improve it. There’s nothing inherently wrong with the collective, except that you haven’t joined it yet.

When there is no outside

Nick Carr’s hyperbolically-titled The Death of Wikipedia has received a couple of endorsements and some fairly vigorous disagreement, unsurprisingly. I think it’s as much a question of tone as anything else. When Nick reads the line

certain pages with a history of vandalism and other problems may be semi-protected on a pre-emptive, continuous basis.

it clearly sets alarm bells ringing for him, as indeed it does for me (“Ideals always expire in clotted, bureaucratic prose”, Nick comments). Several of his commenters, on the other hand, sincerely fail to see what the big deal might be: it’s only a handful of pages, it’s only semi-protection, it’s not that onerous, it’s part of the continuing development of Wikipedia editing policies, Wikipedia never claimed to be a totally open wiki, there’s no such thing as a totally open wiki anyway…

I think the reactions are as instructive as the original post. No, what Nick’s pointing to isn’t really a qualitative change, let alone the death of anything. But yes, it’s a genuine problem, and a genuine embarrassment to anyone who takes the Wikipedian rhetoric seriously. Wikipedia (“the free encyclopedia that anyone can edit”) routinely gets hailed for its openness and its authority, only not both at the same time – indeed, maximising one can always be used to justify limits on the other. As here. But there’s another level to this discussion, which is to do with Wikipedia’s resolution of the openness/authority balancing-act. What happens in practice is that the contributions of active Wikipedians take precedence over both random vandals and passing experts. In effect, both openness and authority are vested in the group.

In some areas this works well enough, but in others it’s a huge problem. I use Wikipedia myself, and occasionally drop in an edit if I see something that’s crying out for correction. Sometimes, though, I see a Wikipedia article that’s just wrong from top to bottom – or rather, an article where verifiable facts and sustainable assertions alternate with errors and misconceptions, or are set in an overall argument which is based on bad assumptions. In short, sometimes I see a Wikipedia article which doesn’t need the odd correction, it needs to be pulled and rewritten. I’m not alone in having this experience: here’s Tom Coates on ‘penis envy’ and Thomas Vander Wal (!) on ‘folksonomy’, as well as me on ‘anomie’.

It’s not just a problem with philosophical concepts, either – I had a similar reaction more recently to the Wikipedia page on the Red Brigades. On the basis of the reading I did for my doctorate, I could rewrite that page from start to finish, leaving in place only a few proper names and one or two of the dates. But writing this kind of thing is hard and time-consuming work – and I’ve got quite enough of that to do already. So it doesn’t get done.

I don’t think this is an insurmountable problem. A while ago I floated a cunning plan for fixing pages like this, using PledgeBank to mobilise external reserves of peer-pressure; it might work, and if only somebody else would actually get it rolling I might even sign up. But I do think it’s a problem, and one that’s inherent to the Wikipedia model.

To reiterate, both openness and authority are vested in the group. Openness: sure, Wikipedia is as open to me as any other registered editor d00d, but in practice the openness of Wikipedia is graduated according to the amount of time you can afford to spend on it. As for authority, I’m not one, but (like Debord) I have read several good books – better books, to be blunt, than those relied on by the author[s] of the current Red Brigades article. But what would that matter unless I was prepared to defend what I wrote against bulk edits by people who disagreed – such as, for example, the author[s] of the current article? On the other hand, if I was prepared to stick it out through the edit wars, what would it matter whether I knew my stuff or not? This isn’t just random bleating. When I first saw that Red Brigades article I couldn’t resist one edit, deleting the completely spurious assertion that the group Prima Linea was a Red Brigades offshoot. When I looked at the page again the next day, my edit had been reverted.

Ultimately Wikipedia isn’t about either openness or authority: it’s about the collective activity of editing Wikipedia and being a Wikipedian. From that, all else follows.

Update 2/6/06 (in response to David, in comments)

There are two obvious problems with the Wikipedia page on the Brigate Rosse, and one that’s larger but more diffuse. The first problem is that it’s written in the present tense; it’s extremely dubious that there’s any continuity between the historic Brigate Rosse and the gang who shot Biagi, let alone that they’re simply, unproblematically the same group. This alone calls for a major rewrite. Secondly, the article is written very much from a police/security-service/conspiracist stance, with a focus on questions like whether the BR was assisted by the Czech security services or penetrated by NATO. But this tends to reinforce an image of the BR as a weird alien force which popped up out of nowhere, rather than an extreme but consistent expression of broader social movements (all of which has been documented).

The broader problem – which relates to both of the specific points – goes back to a problem with the amateur-encyclopedia format itself: Wikipedia implicitly asks what a given topic is, which prompts contributors to think of their topic as having a core, essential meaning (I wrote about this last year). The same problem can arise in a ‘proper’ encyclopedia, but there it’s generally mitigated by expertise: somebody who’s spent several years studying the broad Italian armed struggle scene is going to be motivated to relate the BR back to that scene, rather than presenting it as an utterly separate thing. The motivation will be still greater if the expert on the BR has also been asked to contribute articles on Prima Linea, the NAP, etc. This, again, is something that happens (and works, for all concerned) in the kind of restricted conversations that characterise academia, but isn’t incentivised by the Wikipedia conversation – because the Wikipedia conversation doesn’t go anywhere else. Doing Wikipedia is all about doing Wikipedia.

Some day this will all be yours

Scott Karp:

What if dollars have no place in the new economics of content?

In media 1.0, brands paid for the attention that media companies gathered by offering people news and entertainment (e.g. TV) in exchange for their attention. In media 2.0, people are more likely to give their attention in exchange for OTHER PEOPLE’S ATTENTION. This is why MySpace can’t effectively monetize its 70 million users through advertising — people use MySpace not to GIVE their attention to something that is entertaining or informative (which could thus be sold to advertisers) but rather to GET attention from other users.

MySpace can’t sell attention to advertisers because the site itself HAS NONE. Nobody pays attention to MySpace — users pay attention to each other, and compete for each other’s attention — it’s as if the site itself doesn’t exist.

You see the same phenomenon in blogging — blogging is not a business in the traditional sense because most people do it for the attention, not because they believe there’s any financial reward. What if the economics of media in the 21st century begin to look like the economics of poetry in the 20th century? — Lots of people do it for their own personal gratification, but nobody makes any money from it.

Pedantry first: it’s inconceivable that we’ll reach a point where nobody makes any money from the media, at least this side of the classless society. Even the hard case of blogging doesn’t really stand up – I could name half a dozen bloggers who have made money or are making money from their blogs, without pausing to think.

It’s a small point, but it’s symptomatic of the enthusiastic looseness of Karp’s argument. So I welcomed Nicholas Carr’s counterblast, which puts Karp together with some recent comments by Esther Dyson:

“Most users are not trying to turn attention into anything else. They are seeking it for itself. For sure, the attention economy will not replace the financial economy. But it is more than just a subset of the financial economy we know and love.”

Here’s Carr:

I fear that to view the attention economy as “more than just a subset of the financial economy” is to misread it, to project on it a yearning for an escape (if only a temporary one) from the consumer culture. There’s no such escape online. When we communicate to promote ourselves, to gain attention, all we are doing is turning ourselves into goods and our communications into advertising. We become salesmen of ourselves, hucksters of the “I.” In peddling our interests, moreover, we also peddle the commodities that give those interests form: songs, videos, and other saleable products. And in tying our interests to our identities, we give marketers the information they need to control those interests and, in the end, those identities. Karp’s wrong to say that MySpace is resistant to advertising. MySpace is nothing but advertising.

Now, this is good, bracing stuff, but I think Carr bends the stick a bit too far the other way. I know from my own experience that there’s a part of my life labelled Online Stuff, and that most of my reward for doing Online Stuff is attention from other people doing Online Stuff. Real-world payoffs – money, work or just making new real-world friends – are nice to get, but they’re not what it’s all about.

The real trouble is that Karp has it backwards. Usenet – where I started doing Online Stuff, ten years ago – is a model of open-ended mutual whuffie exchange. (A very imperfect model, given the tendency of social groups to develop boundaries and hierarchies, but at least an unmonetised one.) Systematised whuffie trading came along later. The model case here is eBay, where there’s a weird disconnect between meaning and value. Positive feedback doesn’t really mean that you think the other person is a “great ebayer” – it doesn’t really mean anything, any more than “A+++++” means something distinct from “A++++” or “A++++++”. What it does convey is value: it makes it that much easier for the other person to make money. It also has attention-value, making the other person feel good for no particular real-world reason, but even this is quantifiable (“48! I’m up to 48!”).

Ultimately Dyson and Carr are both right. The ‘attention economy’ of Online Stuff is new, absorbing and unlike anything that went before – not least because of the way in which it gratifies fantasies of being truly appreciated, understood, attended to. But, to the extent that the operative model is eBay rather than Usenet, it is nothing other than a subset of the financial economy. Karp may be right about the specific case of MySpace, but I can’t help distrusting his exuberance – not least because, in my experience, the suffix ‘2.0’ is strongly associated with a search for new ways to cash in.
