Category Archives: computing

Wrapped in paper (4)

Finally (for now), here’s another one from a defunct print publication, in this case one that wasn’t even available on this side of the Atlantic. The magazine was called ePro and it was aimed at IBM users. IBM what users, you ask. That was the clever part – ePro was for users of IBM ‘eservers’, in other words any of IBM’s four (or thereabouts) server platforms. (That was ‘eserver’ with that squiggly at-sign ‘e’. You do remember the squiggly ‘e’, don’t you? Alex? Anyone?)

Anyway, I got the WebSphere-related commentary gig, which involved sounding knowledgeable once a month without making too many jokes. Most of the columns are pretty damn geeky, to be honest, as well as tending to slip into the corporate-breathless mode (I’m guessing here, but if IBM have successfully developed the philosopher’s stone – and that is a big if…) Some of the less technical ones still read pretty well, I think. For example, this one, from March 2003.

MONSTER MOVIES never give you a good view of the monster until halfway through. Representing Godzilla through one enormous footprint — or even one enormous foot — is a good way of building up suspense. It’s also realistic: if Godzilla came to town, one scaly foot would be all that most people ever saw.

Some things are so big they’re hard to see. Although e-business is making some huge changes to the way we live and work, we don’t often think about where it’s coming from and why. Asked to identify trends driving e-business, analysts tend to resort to general statements about business efficiency or customer empowerment. Alternatively, we get the circular argument which identifies e-business as a response to competitive pressures—pressures which are intensified by the growth of e-business.

The real trends driving the evolution of e-business are at once more specific and more far-reaching. Moreover, these trends affect everyone from the B2C customer at home to the IBM board of directors, taking in the hard-pressed WebSphere developer on the way.

The first trend is standardization. On the client side, there is now only one ‘standard’ browser. A friend of mine recently complained about a site which was not rendering properly (in Navigator 7.0). The Webmaster — presumably a person of some technical smarts — replied, “This is not a problem with our site, but your browser. I am running Windows 98 with IE 5.50 and everything displays perfectly.” At the back end, conversely, the tide of standards rolls on—from CORBA to XML to SOAP to ebXML. Interoperability between servers is too important for any company, even Microsoft, to stand in its way.

Whether standards are set by mutual agreement or by the local 800-pound gorilla is secondary; however it’s achieved, standardization has fostered the development of e-business, and continues to do so. The effect is to commoditize Web application servers and development tools; this in turn promotes the development of a single standard application platform, putting ‘non-standard’ platforms and environments under competitive pressure. From OS/400 to Windows 2000, platforms which diverge from the emerging Intel/Linux/Apache norm are increasingly being forced to justify themselves.

The second trend is automation. Since the dawn of business computing, payroll savings have been an ever-present yardstick in justifying IT projects. E-business continues this trend with a vengeance. Whether you’re balancing your bank account or making a deal for office supplies in a trading exchange, you’re interacting with an IT system where once — only a few years ago — you would have had to deal with a human being. The word processor was the end of the line for shorthand typists; e-business is having a similar effect on growing numbers of skilled clerical employees. The next step, promised by Microsoft and IBM alike, is an applications development framework so comprehensive that business analysts and end users will be able to generate entire systems: even application development will be automated. (No, I don’t believe it either, but are you going to bet against IBM and Microsoft?)

The third trend is externalization of costs. Not long ago, if you asked a shop to deliver to your home, you could expect to see a van with the name of the shop on the side. Place an order online today, and your goods may well be delivered by a self-employed driver working with a delivery service contracted to an order fulfillment specialist. Talk of ‘disintermediation’ as a trend in e-business is wide of the mark. By offering more agile, flexible and transparent inter-business relationships, e-business makes it possible for intermediaries to proliferate, each contracting out its costly or inconvenient functions. On the B2C front, meanwhile, operating costs are increasingly passed on to the customer: I sometimes spend far longer navigating a series of Web forms than it would take to give the same details to a skilled employee.

A drive for standardization, forcing all platforms into a single generic framework; automation for all, cutting jobs among bank tellers and programmers alike; businesses concentrating ruthlessly on core functions, passing on costs to partners and customers. These trends have had a huge impact on IT and society at large — and there’s more to come. In the e-business world, we’re all in Godzilla’s footprint.


All those numbers

I like a good fallacy; I managed to get the Base Rate Fallacy, the Hawthorne Effect and Goodhart’s Law into one lecture I gave recently. So I was intrigued to run across this passage in Jock Young’s 2004 essay “Voodoo Criminology and the numbers game” (you can find a draft in pdf form here):

Legions of theorists from Robert K Merton through to James Q Wilson have committed Giffen’s paradox: expressing their doubts about the accuracy of the data and then proceeding to use the crime figures with seeming abandon, particularly in recent years when the advent of sophisticated statistical analysis is, somehow, seen to grant permission to skate over the thin ice of insubstantiality.

I like a good fallacy, but paradoxes are even better. So, tell me more about Giffen’s paradox:

Just as with Giffen’s paradox, where the weakness of the statistics is plain to the researchers yet they continue to force-feed inadequate data into their personal computers

Try as I might, I wasn’t seeing the paradox there. A footnote referenced

Giffen, P. (1965), ‘Rates of Crime and Delinquency’ in W. McGrath (ed.), Crime Treatment in Canada

I didn’t have W. McGrath (ed.), Crime Treatment in Canada by me at the time, so I did the next best thing and Googled. I rapidly discovered that Giffen’s paradox is also known as the Giffen paradox, that it’s associated with Giffen goods, and that it’s got nothing to do with Giffen, P. (1965):

Proposed by Scottish economist Sir Robert Giffen (1837-1910) from his observations of the purchasing habits of the Victorian poor, the Giffen paradox states that demand for a commodity increases as its price rises.

Raise the price of bread when there are people on the poverty line – ignoring for the moment the fact that this makes you the rough moral equivalent of Mengele – and those people will buy more bread, to substitute for the meat they’re no longer able to afford. It’s slightly reassuring to note that, notwithstanding Sir Robert’s observations of the Victorian poor, economists have subsequently questioned whether the Giffen paradox has ever actually been observed.
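If you want to see the mechanism at work, here’s a toy sketch of it in Python – every number in it is invented, and it illustrates the textbook story rather than anything Sir Robert actually measured. A household must buy ten units of food a week on a fixed budget; it buys as much meat as it can afford and fills up on bread:

    # Toy Giffen arithmetic: all prices and quantities are made up.
    def basket(bread_price, meat_price=2.0, budget=10.0, units_needed=10):
        # Buy as much meat as possible, then cover the rest of the
        # food requirement with bread, without breaking the budget.
        for meat in range(units_needed, -1, -1):
            bread = units_needed - meat
            if bread * bread_price + meat * meat_price <= budget:
                return bread, meat
        return None  # the requirement can't be met at any mix

    print(basket(bread_price=0.50))   # (7, 3): 7 bread, 3 meat
    print(basket(bread_price=0.80))   # (9, 1): dearer bread, *more* bread

Bread goes up, meat gets squeezed out, and the bread bought rises from seven units to nine: demand increasing with price.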

But none of this cast much light on those researchers force-feeding their personal computers with inadequate data. Eventually I tracked down W. McGrath (ed.), Crime Treatment in Canada. It turns out that the less famous Giffen did in fact describe the willingness of researchers to rely on statistics, after having registered caveats about their quality, as a paradox (albeit “one of the less important paradoxes of modern times”). I still can’t see that this rises to the level of paradox: surely being upfront about the quality of the data you’re processing is what a statistical analyst should do. If initial reservations don’t carry through into the conclusion that’s another matter – but that’s not a paradox, that’s just misrepresentation.

Paradoxical or not, Giffen’s observation accords with Young’s argument in the paper, which is that criminologists, among other social scientists, place far too much trust in statistical analysis: statistics are only as good as the methods used to produce them, methods which in many cases predictably generate gaps and errors.

It’s a good argument but not a very new or surprising one (perhaps it was newer in 1965). Moreover, Young pushes it in some odd directions. The paper reminded me of Robert Martinson’s 1974 study of rehabilitation programmes, “What Works?” – or rather, of how that paper was received. Martinson demonstrated that no study had conclusively shown any form of rehabilitation to work consistently, and that very few studies of rehabilitation showed any clear result; his paper was seized on by advocates of imprisonment and invoked as proof that nothing worked. This was unjustified on two levels. Firstly, while Martinson’s negatives would justify scepticism about a one-size-fits-all rehabilitation panacea, the detail of his research did suggest that some things worked for some people in some settings. Subsequent research – some of it by Martinson himself – bore out this suggestion, showing reasonably clear evidence that tailored, flexible and multiple interventions can actually do some good. Secondly, if Martinson was sceptical about rehabilitation, he wasn’t any less sceptical about imprisonment: his conclusion was that ex-offenders could be left alone, not that they should be kept locked up (“if we can’t do more for (and to) offenders, at least we can safely do less”). For Martinson, rehabilitation couldn’t cut crime by reforming bad people, because crime wasn’t caused by bad people in the first place. Sadly, the first part of this message was heard much more clearly than the second.

Like Martinson, Young is able to present a whole series of statistical analyses which seem obviously, intuitively wrong. However, what his examples suggest is that statistics from different sources require different types and levels of wariness: some are dependably more trustworthy than others, and some of the less trustworthy are untrustworthy in knowably different ways. But rather than deal individually with the different types of scepticism, levels of scepticism and reasons for scepticism which different analyses provoke, Young effectively concludes that nothing works, or very little:

Am I suggesting an open season on numbers? Not quite: there are … numbers which are indispensable to sociological analysis. Figures of infant mortality, age, marriage and common economic indicators are cases in point, as are, for example, numbers of police, imprisonment rates and homicide incidences in criminology. Others such as income or ethnicity are of great utility but must be used with caution. There are things in the social landscape which are distinct, definite and measurable; there are many others that are blurred because we do not know them – some because we are unlikely ever to know them, others, more importantly, because it is their nature to be blurred. … There are very many cases where statistical testing is inappropriate because the data is technically weak – it will simply not bear the weight of such analysis. There are many other instances where the data is blurred and contested and where such testing is simply wrong.

(In passing, that’s a curious set of solid, trustworthy numbers to save from the wreckage – it’s hard to think of an indicator more bureaucratically produced, socially constructed and culture-bound than “infant mortality”, unless perhaps it’s “marriage”.)

I’ve spent some time designing a system for cataloguing drug, alcohol and tobacco statistics – an area where practically all the data we have is constructed using “blurred and contested” concepts – so I sympathise with Young’s stance here, up to a point. Police drug seizure records, British Crime Survey drug use figures and National Treatment Agency drug treatment statistics are produced in different ways and tell us about different things, even when they appear to be talking about the same thing. (In my experience, people who run archives know about this already and find it interesting, people who use the statistics take it for granted, and IT people don’t know about it and want to fix it.) But: such testing is simply wrong? (Beware the persuasive adverb – try re-reading those last two sentences with the word ‘simply’ taken out.) We know how many people answered ‘yes’ to a question with a certain form of words; we know how many of the same people answered ‘yes’ to a different question; and we know the age distribution of these people. I can’t see that it would be wrong to cross-tabulate question one against question two, or to calculate the mean age of one sub-sample or the other. Granted, it would be wrong to present findings about the group which answered Yes to a question concerning activity X as if they were findings about the group who take part in activity X – but that’s just to say that it’s wrong to misrepresent your findings. Young’s broader sceptical claim – that figures constructed using contested concepts should not or cannot be analysed mathematically – seems… well, wrong.
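To make that concrete, here’s the sort of computation I’m defending, sketched in Python with pandas and a handful of invented respondents:

    import pandas as pd

    # Invented survey data: two yes/no questions, plus an age.
    survey = pd.DataFrame({
        "q1":  ["yes", "yes", "no", "yes", "no", "no", "yes", "no"],
        "q2":  ["yes", "no",  "no", "yes", "yes", "no", "no",  "no"],
        "age": [19, 23, 31, 22, 45, 52, 27, 38],
    })

    # Cross-tabulate question one against question two...
    print(pd.crosstab(survey["q1"], survey["q2"]))

    # ...and the mean age of the sub-sample who said yes to question one.
    print(survey.loc[survey["q1"] == "yes", "age"].mean())

Nothing in that is mathematically improper; the impropriety, if any, comes later, when the ‘yes to question one’ group is passed off as the ‘takes part in activity X’ group.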

Young then repeats the second of the errors of Martinson’s audience: if none of that works, then we can stick with what we know. In this case that means criminology reconceived as cultural ethnography: “a theoretical position which can enter in to the real world of existential joy, fear, false certainty and doubt, which can seek to understand the subcultural projects of people in a world riven with inequalities of wealth and uncertainties of identity”. Fair enough – who’d want a theoretical position which couldn’t enter in to the real world? But the question to ask about creeds is not what’s in them but what they leave out. Here, the invocation of culture seems to presage the abandonment not only of statistical analysis but of materialism.

The usual procedure … is to take the demographics and other factors which correlate with crime in the past and attempt to explain the present or predict the future levels of crime in terms of changes in these variables. The problem here is that people (and young people in particular) might well change independently of these variables. For in the last analysis the factors do not add up and the social scientists begin to have to admit the ghost in the machine.

People … might well change independently of these variables – how? In ways which don’t find any expression in phenomena that might be measured (apart from a drop in crime)? It seems more plausible to say that, while people do freely choose ways to live their lives, they do not do so in circumstances of their own choosing – and that those choices in turn have material effects which create constraints as well as opportunities, for themselves and for others. To put it another way, if the people you’re studying change independently of your variables, perhaps you haven’t got the right variables. Young’s known as a realist, which is one way of being a materialist these days; but the version of criminology he’s proposing here seems, when push comes to shove, to be non- or even anti-materialist (“the ghost in the machine”). That’s an awfully big leap to make, and I don’t think it can be justified by pointing out that some statisticians lie.

What arguments based on statistics need – and crime statistics are certainly no exception – is scepticism, but patient and attentive scepticism: it’s not a question of declaring that statistics don’t tell us anything, but of working out precisely what particular uses of statistics don’t tell us. A case in point is this story in last Friday’s Guardian:

An 8% rise in robberies and an 11% increase in vandalism yesterday marred the latest quarterly crime figures, which showed an overall fall of 2% across all offences in England and Wales.

The rise in street crime was accompanied by British Crime Survey indicators showing that public anxiety about teenagers on the streets, noisy neighbours, drug dealing, drunkenness and rowdiness has continued to increase despite the government’s repeated campaigns against antisocial behaviour. … But police recorded crime figures for the final three months of 2006 compared with 12 months earlier showed that violent crime generally was down by 1%, including a 16% fall in gun crime and an 11% fall in sex offences.

The more authoritative British Crime Survey, which asks 40,000 people about their experience of crime each year, reported a broadly stable crime rate, including violent crime, during 2006. … The 11% increase in vandalism recorded by the BCS and a 2% rise in criminal damage cases on the police figures underlined the increase in public anxiety on five out of seven indicators of antisocial behaviour.

Confused? You should be. Here it is again:

                Police      BCS
All crime       down 2%     stable (up 1%*)
Violent crime   down 1%     stable
Robbery         up 8%       stable (down 1%*)
Vandalism       up 2%       up 11%

* Figures in brackets are from the BCS but weren’t in the Guardian story.

Earlier on in this post I made a passing reference to statistical data being bureaucratically produced, socially constructed and culture-bound. Here’s an example of what that means in practice. Police crime figures are a by-product of the activities of the police in dealing with crime, and as such are responsive to changes in the pattern of those activities: put a lot more police resources into dealing with offence X, or change police procedure so that offences of type X are less likely to go by unrecorded, and the crime rate for offence X will appear to go up (see also cottaging). Survey data, on the other hand, is produced by asking people questions; as such, it’s responsive to variations in the type of people who answer questions and to variations in those people’s memory and mood, not to mention variations in the wording of the questions, the structure of the questionnaire, the ways in which answers are coded up and so on. The two sets of indicators are associated with different sets of extraneous influences; if they both show an increase, the chances are that they’ve both been affected by the same influence. The influence in question may be a single big extraneous factor which affects both sets of figures – for example, a massively-publicised crackdown on particular criminal offences will give them higher priority both in police activities and in the public consciousness. But it may be a genuine increase in the thing being measured – and, more to the point, the chances of it being a genuine increase are much higher than if only one indicator shows an increase.
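Here’s a back-of-the-envelope simulation of that last point, in Python; every parameter in it is made up, but the shape of the result isn’t. Each indicator is modelled as a true change plus its own, independent extraneous noise:

    import random

    random.seed(1)
    genuine = {"both up": 0, "one up": 0}
    seen = {"both up": 0, "one up": 0}

    for _ in range(100_000):
        rise = random.random() < 0.3              # a genuine increase?
        signal = 1.0 if rise else 0.0
        police = signal + random.gauss(0, 1)      # recording practice etc.
        survey = signal + random.gauss(0, 1)      # sampling, memory, mood etc.
        ups = (police > 0.5) + (survey > 0.5)
        if ups == 0:
            continue
        key = "both up" if ups == 2 else "one up"
        seen[key] += 1
        genuine[key] += rise

    for key in seen:
        print(key, round(genuine[key] / seen[key], 2))

With these made-up numbers, a genuine rise lies behind roughly two-thirds of the cases where both indicators move up, but only about three in ten of the cases where just one does. (The model leaves out the shared-influence case – the massively-publicised crackdown – which would weaken the inference accordingly.)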

In this case, the police have robberies increasing by 8%; the BCS has theft from the person dropping by 1%. That’s an odd discrepancy, and suggests that something extraneous is involved in the police figure; it’s not clear what that might be, though. Vandalism, on the other hand, goes up by 2% if you use police figures but by all of 11% if you use the BCS. Again, this discrepancy suggests that something other than an 11% rise in the actual incidence of vandalism might be involved, and in this case the story suggests what this might be:

British Crime Survey indicators showing that public anxiety about teenagers on the streets, noisy neighbours, drug dealing, drunkenness and rowdiness has continued to increase despite the government’s repeated campaigns against antisocial behaviour

Presumably the government’s repeated campaigns against antisocial behaviour have raised the profile of anti-social behaviour as an issue. Perhaps this has made it more likely that people will feel that behaviour of this type is something to be anxious about, and that incidents of vandalism will be talked about and remembered for weeks or months afterwards (the BCS asks about incidents in the past twelve months).

That’s just one possible explanation: the meaning of figures like these is all in the interpretation, and the interpretation is up to the interpreter. The more important point is that there are things that these figures will and won’t allow you to do. You can say that police figures, unlike the BCS, are a conservative but reliable record of things that have actually happened, and that robbery has gone up by 8% and criminal damage by 2%. You can say that victim surveys, unlike police figures, are an inclusive and valid record of things that people have actually experienced, and that vandalism has gone up by 11% while robbery has gone down by 1%. What you can’t do is refer to An 8% rise in robberies and an 11% increase in vandalism – there is no way that the data can give you those two figures.

But that’s not a paradox or even a fallacy – it’s just misuse of statistics.

Hello, I’m a reject

I got my first PC in 1986; it was the upmarket model with the colour screen and the 40 MB hard disk (which I could only access as a single drive by running a non-standard version of DOS). I couldn’t get a PC that took the old floppies as well as the 3.5″ kind, but not for want of asking. I like backward compatibility.

I got my second PC in 1996, mainly to get online with. A 1 GB hard drive and a 100 MHz Pentium seemed pretty whizzy at the time, but by 2005 it was creaking badly. So I upgraded, this time to a Mac.

I’d never used a Mac before, but I found the switch surprisingly easy. I got used to a single-button mouse – and to pressing the Ctrl key when I wanted a right-click – quite quickly. Not being able to Alt- to the menu bar was more irritating, and I couldn’t work out why I couldn’t delete files with the key labelled ‘delete’. Mostly good, though.

Some time later: the file-deleting thing was still bugging me, so I poked around a bit. OS X Help says you delete files by dragging them to Trash. Cheers. Some page somewhere suggested splat-delete (⌘-delete). I tried it. It didn’t work. I asked around among Mac-using friends. Everyone told me it did work. Oh well, maybe I’ve got a duff keyboard.

Last month, the bottom row of the numeric keypad stopped working, probably owing to coffee, toast crumbs etc. I was pleasantly surprised to find my AppleCare cover entitles me to get a new one delivered (and maybe splat-delete will work on the new one!).

The new keyboard arrived two days ago. The numeric keypad works perfectly. Splat-delete doesn’t.

I do some serious poking around. (Maybe Apple are so keen on getting people to drag files to Trash that they’ve disabled splat-delete in the latest release?)

I notice that the Finder’s ‘File’ menu shows a key combination with a hollow arrow with an X in it. I’ve only got one key with a hollow arrow with an X in it; it’s the one labelled ‘delete’. The arrow points the other way, though. Funny.

I’m mystified by a page which advises newbies to use command+delete to delete files, then adds ‘the delete key, NOT the del key’. I’ve got a key labelled ‘delete’ – it’s the one I’ve been trying to use all this time – but there is no ‘del’ key.

I find an Apple page which makes a similar distinction, only this one refers to the ‘delete’ key and the ‘delfwd’ key. It further explains that the ‘delete’ key deletes the character to the left of the cursor. Light dawns.

So: the key labelled with the word ‘delete’, which is in a similar position to and acts exactly the same way as the ‘delete’ key on a PC keyboard, is not the ‘delete’ key. The ‘delete’ key is the big key with the long left-pointing arrow, which looks the same, is in the same position and has exactly the same function as the BACKSPACE key on a PC keyboard.

I don’t know why I didn’t realise that before.

I call that education

It became apparent that most of them hadn’t heard of Twitter.

Tim Bray misjudges his audience. What’s interesting is that the audience in question was at something called Web Design World. This leads Tim to wonder just how small the ‘Internet in-crowd’ really is – and, conversely, if it is that small, how come it makes so much noise.

I wrote about this last year, and I think some of what I wrote then is worth repeating:

When I first started using the Internet, about ten years ago, there was a geek Web, a hobbyist Web, an academic Web (small), a corporate Web (very small) and a commercial Web (minute) – and the geek Web was by far the most active. Since then the first four sectors have grown incrementally, but the commercial Web has exploded, along with a new sixth sector – the Web-for-everyone of AOL and MSN and MySpace and LiveJournal (and blogs), whose users vastly outnumber those of the other five. But the geek Web is still where a lot of the new interesting stuff is being created, posted, discussed and judged to be interesting and new.

Add social software to the mix – starting, naturally, within the geek Web, as that’s where it came from – and what do you get? You get a myth which diverges radically from the reality. The myth is that this is where the Web-for-everyone comes into its own, where millions of users of what was built as a broadcast Web with walled-garden interactive features start talking back to the broadcasters and breaking out of their walled gardens. The reality is that the voices of the geeks are heard even more loudly – and even more disproportionately – than before. Have a look at the ‘popular’ tags on del.icio.us: as I write, six of the top ten (including all of the top five) relate directly to programmers, and only to programmers. (Number eight reads: “LinuxBIOS – aims to replace the normal BIOS found on PCs, Alphas, and other machines with a Linux kernel”. The unglossed reference to Alphas says it all.) Of the other four, one’s a political video, two are photosets and one is a full-screen animation of a cartoon cat dancing, rendered entirely in ASCII art. (Make that seven of the top ten.)

[2007 del.icio.us/popular update: still six out of ten, albeit only two out of the top five]

Yes, ‘insiders’ do make a disproportionate amount of noise. And yes, the in-crowd does look bigger on the inside than it does from the outside – so does any crowd once you’re in it. The mistake is to assume that your crowd is the only crowd there is – but it’s a mistake that every crowd makes. An old post about Technorati (this time from 2005) makes this point better than I could paraphrase it:

The equation of authority with ‘popularity’ is, in one sense, neither inappropriate nor avoidable … the distinction between the knowledge produced in academic discourse and the knowledge produced in conversation is ultimately artificial: in both cases, there’s a cloud of competing and overlapping arguments and definitions; in both cases, each speaker – or each intervention – draws a line around a preferred constellation of concepts. At some level, all knowledge is ‘cloudy’. Moreover, in both cases, the outcome of interactions depends in large part on the connections which speakers can make between their own arguments and those of other speakers, particularly those who speak with greater authority. (Hence controversy: your demonstration that an established writer is wrong about A, B and C will interest a lot more people – and do more for your reputation – than your utterly original exposition of X, Y and Z.) You may not like the internationally-renowned scholar who’s agreed to look in on your workshop – you may resent his refusal to attend the whole thing and disapprove of his attitude to questioners; you may not even think his work’s that great – but you still invite him: he’s popular, which means he’s authoritative, which means he reflects well on you. Domain by domain, authority does indeed track popularity.

But there’s the rub – and here begins the argument against Technorati. Domain by domain, authority tracks popularity, but not globally: it makes a certain kind of sense to say that the Sun is more authoritative than the Star, but to say that it’s more authoritative than the Guardian would be absurd. (Perverse rankings like this are precisely an indicator of when two distinct domains are being merged.) Similarly, it’s easy to imagine somebody describing either the Daily Kos or Instapundit as the most ‘authoritative’ site on the Web; what’s impossible to imagine is the mindset which would say that Kos was almost the most authoritative source, second only to Glenn Reynolds. But this is what drops out if we use Technorati’s (global) equation of popularity with authority. … This effect has been masked up to now by the prevalence of a single domain among Technorati tags (and, indeed, Technorati users): it’s a design flaw which has been compensated by an implementation flaw.

Some final brief thoughts. Blogging tends towards conversation. Conversation routes around gatekeepers (Technorati is, precisely, a gatekeeper – but an avoidable gatekeeper). Conversations happen within domains. People cross domains, but domains don’t overlap. Every domain thinks it’s the only one.

Except, of course, the domain shared by readers of this blog, which is plural and open to a high degree. A uniquely high degree, in fact…

Everything new is old again

Printed in iSeries NEWS UK, February 2006

Everybody’s talking about Web 2.0! Web 2.0 offers a whole new way of looking at the Web, a whole new way of developing applications and a whole new way of making enough money to retire on for some irritating bunch of American students who dream up applications you can’t see the point of anyway! Web 2.0 is different because it’s a whole new departure from the old ways of doing things – and what makes it new is that it’s so different.

Web 2.0 breaks all the rules. The rigid document-based format of HTML became a universal computing standard in the early days of the Internet, some time around Web 0.9 [Can we check this? – Ed]. Web 2.0 emerged when a few pioneering developers broke with this orthodoxy, insisting that a page-based document markup language like HTML was better adapted to marking up page-based documents than to running high-volume transaction processing systems. With the industry still reeling from the shockwaves of this revelation, an alternative approach was unveiled. The key Web 2.0 methodology of AJAX – Asynchronous Javascript And XML – breaks the dominance of the HTML page. Now, applications can be built using pages which are dynamically reshaped, driven by back-end databases and the program logic defined by developers. Screen input fields can even be highlighted or prompted individually, without needing to refresh the entire screen! It’s this kind of innovation that makes Web 2.0 so different.

What’s more, it’s new. Web 2.0 is not in any way old – it’s not even similar to anything old! Some people have compared the excitement about Web 2.0 with the dotcom boom of the late 1990s. It’s true that Web 2.0 is likely to involve the proliferation of new companies which you’ve never heard of, and most of which you’ll never hear of again. However, there are three significant differences. The typical dotcom company raised big money from investors, spent it, then got bought out for small change by an established business. By contrast, the typical Web 2.0 company raises small change from investors, spends it, then gets bought out for big money by an established dotcom business. Secondly, dotcoms usually had a speculative long-term business case and a meaningless name interspersed with capital letters; they also used buzzwords beginning with a lower-case e. By contrast, Web 2.0 companies generally have a speculative short-term business case and a meaningless name interspersed with extraneous punctuation marks; also, their buzzwords tend to begin with a lower-case i. Finally, Web 2.0 is quite different from the dotcom boom, which took place in the late 1990s and so is now quite old. Web 2.0, on the other hand, is new, which in itself makes it different.

Above all, Web 2.0 is here to stay. In the wake of the dotcom boom, dozens of unprepared startups crashed and burned. As the painful memories of WebVan and boo.com faded, little remained of the brave new world of e-business: these days there are only a couple of major players in each of the main e-business niche areas, and some of them are subsidiaries of bricks-and-mortar businesses, which is cheating. By contrast, the big names of Web 2.0 are all around us. In the field of tagging and social networking alone, there’s the innovative picture tagging and social networking company Flickr (now owned by Yahoo!); there’s the groundbreaking bookmark tagging and social networking company del.icio.us (now owned by Yahoo!); and let’s not forget the unprecedented social network tagging company Dodgeball (now owned by Google). Meanwhile blogging, that quintessential Web 2.0 tool, guarantees that fresh new voices will continue to be heard, thanks in no small part to quick-and-easy blog hosting companies like Blogger (now owned by Google) and the new kid on the block, Myspace (now owned by Rupert Murdoch).

Web 2.0 is new, it’s different, and above all, it’s here – and it’s here to stay! So get down and get with it and get hep to the Web 2.0 scene, daddy-o! [Can we check this as well? – Ed] Don’t say ‘programming’, say ‘scripting’! Don’t say ‘directory’, say ‘tags’! Don’t say ‘DoubleClick’, say ‘Google AdSense’!

And don’t say ‘hype’. Please don’t say that.


Simplify, reduce, oversimplify

An interesting post on ‘folksonomies’ at Collin Brooke’s blog prompted this comment, which I thought deserved a post of its own.

I think Peter Merholz’s coinage ‘ethnoclassification’ could be useful here. As I’ve argued elsewhere, I think we can see all taxonomies (and ultimately all knowledge) as the product of an extended conversation within a given community: in this respect a taxonomy is simply an accredited ‘folksonomy’.

However, I think there’s a dangerous (but interesting) slippage here between what folksonomies could be and what folksonomies are: between the promise of the project of ‘folksonomy’ (F1) and what’s delivered by any identifiable folksonomy (F2). (You can get into very similar arguments about Wikipedia 1 and Wikipedia 2 – sometimes with the same people.) Compared to the complexity and exhaustiveness of any functioning taxonomic scheme, I don’t believe that any actually-existing ‘folksonomy’ is any more than an extremely sketchy work in progress.

For this reason (among others), I believe we need different words for the activity and the endpoint. So we could contrast classification with Peterme’s ‘ethnoclassification’, on one hand, and note that the only real difference between the two is that the former takes place within structured and credentialled communities. On the other hand, we could contrast actual taxonomies with ‘folksonomies’. The latter could have very much the same relationship with officially-credentialled taxonomies as classification does with ethnoclassification – but they aren’t there yet.

The shift from ‘folksonomy’ to ‘ethnoclassification’ has two interesting side-effects, which I suspect are both fairly unwelcome to folksonomy boosters (a group in which I don’t include Thomas Vander Wal, ironically enough). On one hand, divorcing process and product reminds us that improvements to one don’t necessarily translate as improvements in the other. The activity that goes into producing a ‘folksonomy’, as distinct from a taxonomy, may give more participants a better experience (more egalitarian, more widely distributed, more chatty, more fun) but you wouldn’t necessarily expect the end product to show improvements as a result. (You’d expect it to be a bit scrappy, by and large.) On the other hand, divorcing process from technology reminds us that ethnoclassification didn’t start with del.icio.us; the aggregation of informal knowledge clouds is something we’ve been doing for a long time, perhaps as long as we’ve been human.

We’re all together now, dancing in time

Ryan Carson:

I’d love to add friends to my Flickr account, add my links to del.icio.us, browse digg for the latest big stories, customise the content of my Netvibes home page and build a MySpace page. But you know what? I don’t have time and you don’t either…

Read the whole thing. What’s particularly interesting is a small straw poll at the end of the article, where Ryan asks people who actually work on this stuff what social software apps they use on a day-to-day basis. Six people made 30 nominations in all; Ryan had five of his own for a total of 35.

Here are the apps which got more than one vote:

Flickr (four votes)
Upcoming (two)
Wikipedia (two)

And, er, that’s it.

Social software looks like very big news indeed from some perspectives, but when it’s held to the standard of actually helping people get stuff done, it fades into insignificance. I think there are three reasons for this apparent contradiction. First, there’s the crowd effect – and, since you need a certain number of users before network effects start taking off, any halfway-successful social software application has a crowd behind it. It can easily look as if everyone’s doing it, even if the relevant definition of ‘everyone’ looks like a pretty small group to you and me.

Then there’s the domain effect: tagging and user-rating are genuinely useful and constructive, in some not very surprising ways, within pre-defined domains. (Think of a corporate intranet app, where there is no need for anyone to specify that ‘Dunstable’ means one of the company’s offices, ‘Barrett’ means the company’s main competitor and ‘Monkey’ means the payroll system.) For anyone who is getting work done with tagging, in other words, tagging is going to look pretty good – and, thanks to the crowd effect, it’s going to look like a good thing that everyone’s using.

Thirdly, social software is new, different, interesting and fun, as something to play with. It’s a natural for geeks with time to play with stuff and for commentators who like writing about new and interesting stuff – let alone geek commentators. The hype generates itself; it’s the kind of development that’s guaranteed to look bigger than it is.

Put it all together – and introduce feedback effects, as the community of geek commentators starts to find social software apps genuinely useful within its specialised domain – and social software begins to look like a Tardis in reverse: much, much bigger on the outside than it is on the inside.

That’s not to say that social software isn’t interesting, or that it isn’t useful. But I think that in the longer term those two facets will move apart: useful and productive applications of tagging will be happening under the commentator radar, often behind organisational firewalls, while the stuff that’s interesting and fun to play with will remain… interesting and fun to play with.

The users geeks don’t see

Nick writes, provocatively as ever, about the recent ‘community-oriented’ redesign of the netscape.com portal:

A few days ago, Netscape turned its traditional portal home page into a knockoff of the popular geek news site Digg. Like Digg, Netscape is now a “news aggregator” that allows users to vote on which stories they think are interesting or important. The votes determine the stories’ placement on the home page. Netscape’s hope, it seems, is to bring Digg’s hip Web 2.0 model of social media into the mainstream. There’s just one problem. Normal people seem to think the entire concept is ludicrous.

Nick cites a post titled Netscape Community Backlash, from which this line leapt out at me:

while a lot of us geeks and 2.0 types are addicted to our own technology (and our own voices, to be honest), it’s pretty darn obvious that A LOT of people want to stick with the status quo

This reminded me of a minor revelation I had the other day, when I was looking for the Java-based OWL reasoner ‘pellet’. I googled for
pellet owl
– just like that, no quotes – expecting to find a ‘pellet’ link at the bottom of forty or fifty hits related to, well, owls and their pellets. In fact, the top hit was “Pellet OWL Reasoner”. (To be fair, if you google
owl pellet
you do get the fifty pages of owl pellets first.)

I think it’s fair to say that the pellet OWL reasoner isn’t big news even in the Web-using software development community; I’d be surprised if everyone reading this post even knows what an OWL reasoner is (or has any reason to care). But there’s enough activity on the Web around pellet to push it, in certain circumstances, to the top of the Google rankings (see for yourself).

Hence the revelation: it’s still a geek Web. Or rather, there’s still a geek Web, and it’s still making a lot of the running. When I first started using the Internet, about ten years ago, there was a geek Web, a hobbyist Web, an academic Web (small), a corporate Web (very small) and a commercial Web (minute) – and the geek Web was by far the most active. Since then the first four sectors have grown incrementally, but the commercial Web has exploded, along with a new sixth sector – the Web-for-everyone of AOL and MSN and MySpace and LiveJournal (and blogs), whose users vastly outnumber those of the other five. But the geek Web is still where a lot of the new interesting stuff is being created, posted, discussed and judged to be interesting and new.

Add social software to the mix – starting, naturally, within the geek Web, as that’s where it came from – and what do you get? You get a myth which diverges radically from the reality. The myth is that this is where the Web-for-everyone comes into its own, where millions of users of what was built as a broadcast Web with walled-garden interactive features start talking back to the broadcasters and breaking out of their walled gardens. The reality is that the voices of the geeks are heard even more loudly – and even more disproportionately – than before. Have a look at the ‘popular’ tags on del.icio.us: as I write, six of the top ten (including all of the top five) relate directly to programmers, and only to programmers. (Number eight reads: “LinuxBIOS – aims to replace the normal BIOS found on PCs, Alphas, and other machines with a Linux kernel”. The unglossed reference to Alphas says it all.) Of the other four, one’s a political video, two are photosets and one is a full-screen animation of a cartoon cat dancing, rendered entirely in ASCII art. (Make that seven of the top ten.)

I’m not a sceptic about social software: ranking, tagging, search-term-aggregation and the other tools of what I persist in calling ethnoclassification are both new and powerful. But they’re most powerful within a delimited domain: a user coming to del.icio.us for the first time should be looking for the ‘faceted search’ option straight away (“OK, so that’s the geek cloud, how do I get it to show me the cloud for European history/ceramics/Big Brother?”) The fact that there is no ‘faceted search’ option is closely related, I’d argue, to the fact that there is no discernible tag cloud for European history or ceramics or Big Brother: we’re all in the geek Web. (Even Nick Carr.) (Photography is an interesting exception – although even there the only tags popular enough to make the del.icio.us tag cloud are ‘photography’, ‘photo’ and ‘photos’. There are 40 programming-related tags, from ajax to xml.)

Social software wasn’t built for the users of the Web-for-everyone. Reaction to the Netscape redesign tells us (or reminds us) that there’s no reason to assume they’ll embrace it.

Update Have a look at Eszter Hargittai’s survey of Web usage among 1,300 American college students, conducted in February and March 2006. MySpace is huge, and Facebook’s even huger, but Web 2.0 as we know it? It’s not there. 1.9% use Flickr; 1.6% use Digg; 0.7% use del.icio.us. Answering a slightly different question, 1.5% have ever visited Boingboing, and 1% Technorati. By contrast, 62% have visited CNN.com and 21% bbc.co.uk. It’s still, very largely, a broadcast Web with walled-garden interactivity. Comparing results like these with the prophecies of tagging replacing hierarchy, Long Tail production and mashups all round, I feel like invoking the story of the blind men and the elephant – except that I’m not even sure we’ve all got the same elephant.

Who’s there?

At Many-to-Many, Ross Mayfield reports that Clay Shirky and danah boyd have been thinking about “the lingering questions in our field”, viz. the field of social software. I was a bit surprised to see that

How can communities support veterans going off topic together and newcomers seeking topical information and connections?

still qualifies as a ‘lingering question’; I distinctly remember being involved in thrashing this one out, together with Clay, the best part of nine years ago. But this was the one that really caught my eye, if you’ll pardon the expression:

What level of visual representation of the body is necessary to trigger mirror neurons?

Uh-oh. Sherry Turkle (subscription-only link):

a woman in a nursing home outside Boston is sad. Her son has broken off his relationship with her. Her nursing home is taking part in a study I am conducting on robotics for the elderly. I am recording the woman’s reactions as she sits with the robot Paro, a seal-like creature advertised as the first ‘therapeutic robot’ for its ostensibly positive effects on the ill, the elderly and the emotionally troubled. Paro is able to make eye contact by sensing the direction a human voice is coming from; it is sensitive to touch, and has ‘states of mind’ that are affected by how it is treated – for example, it can sense whether it is being stroked gently or more aggressively. In this session with Paro, the woman, depressed because of her son’s abandonment, comes to believe that the robot is depressed as well. She turns to Paro, strokes him and says: ‘Yes, you’re sad, aren’t you. It’s tough out there. Yes, it’s hard.’ And then she pets the robot once again, attempting to provide it with comfort. And in so doing, she tries to comfort herself.

What are we to make of this transaction? When I talk to others about it, their first associations are usually with their pets and the comfort they provide. I don’t know whether a pet could feel or smell or intuit some understanding of what it might mean to be with an old woman whose son has chosen not to see her anymore. But I do know that Paro understood nothing. The woman’s sense of being understood was based on the ability of computational objects like Paro – ‘relational artefacts’, I call them – to convince their users that they are in a relationship by pushing certain ‘Darwinian’ buttons (making eye contact, for example) that cause people to respond as though they were in relationship.

Further reading: see Kathy Sierra on mirror neurons and the contagion of negativity. See also Shelley’s critique of Kathy’s argument, and of attempts to enforce ‘positive’ feelings by manipulating mood. And see the sidebar at Many-to-Many, which currently reads as follows:

Recent Comments

viagra on Sanger on Seigenthaler’s criticism of Wikipedia

hydrocodone cheap on Sanger on Seigenthaler’s criticism of Wikipedia

viagra on Sanger on Seigenthaler’s criticism of Wikipedia

alprazolam online on Sanger on Seigenthaler’s criticism of Wikipedia

Timur on Sanger on Seigenthaler’s criticism of Wikipedia

Timur on Sanger on Seigenthaler’s criticism of Wikipedia

Recent Trackbacks

roulette: roulette

jouer casino: jouer casino

casinos on line: casinos on line

roulette en ligne: roulette en ligne

jeux casino: jeux casino

casinos on line: casinos on line

Some day this will all be yours

Scott Karp:

What if dollars have no place in the new economics of content?

In media 1.0, brands paid for the attention that media companies gathered by offering people news and entertainment (e.g. TV) in exchange for their attention. In media 2.0, people are more likely to give their attention in exchange for OTHER PEOPLE’S ATTENTION. This is why MySpace can’t effectively monetize its 70 million users through advertising — people use MySpace not to GIVE their attention to something that is entertaining or informative (which could thus be sold to advertisers) but rather to GET attention from other users.

MySpace can’t sell attention to advertisers because the site itself HAS NONE. Nobody pays attention to MySpace — users pay attention to each other, and compete for each other’s attention — it’s as if the site itself doesn’t exist.

You see the same phenomenon in blogging — blogging is not a business in the traditional sense because most people do it for the attention, not because they believe there’s any financial reward. What if the economics of media in the 21st century begin to look like the economics of poetry in the 20th century? — Lots of people do it for their own personal gratification, but nobody makes any money from it.

Pedantry first: it’s inconceivable that we’ll reach a point where nobody makes any money from the media, at least this side of the classless society. Even the hard case of blogging doesn’t really stand up – I could name half a dozen bloggers who have made money or are making money from their blogs, without pausing to think.

It’s a small point, but it’s symptomatic of the enthusiastic looseness of Karp’s argument. So I welcomed Nicholas Carr’s counterblast, which puts Karp together with some recent comments by Esther Dyson:

“Most users are not trying to turn attention into anything else. They are seeking it for itself. For sure, the attention economy will not replace the financial economy. But it is more than just a subset of the financial economy we know and love.”

Here’s Carr:

I fear that to view the attention economy as “more than just a subset of the financial economy” is to misread it, to project on it a yearning for an escape (if only a temporary one) from the consumer culture. There’s no such escape online. When we communicate to promote ourselves, to gain attention, all we are doing is turning ourselves into goods and our communications into advertising. We become salesmen of ourselves, hucksters of the “I.” In peddling our interests, moreover, we also peddle the commodities that give those interests form: songs, videos, and other saleable products. And in tying our interests to our identities, we give marketers the information they need to control those interests and, in the end, those identities. Karp’s wrong to say that MySpace is resistant to advertising. MySpace is nothing but advertising.

Now, this is good, bracing stuff, but I think Carr bends the stick a bit too far the other way. I know from my own experience that there’s a part of my life labelled Online Stuff, and that most of my reward for doing Online Stuff is attention from other people doing Online Stuff. Real-world payoffs – money, work or just making new real-world friends – are nice to get, but they’re not what it’s all about.

The real trouble is that Karp has it backwards. Usenet – where I started doing Online Stuff, ten years ago – is a model of open-ended mutual whuffie exchange. (A very imperfect model, given the tendency of social groups to develop boundaries and hierarchies, but at least an unmonetised one.) Systematised whuffie trading came along later. The model case here is eBay, where there’s a weird disconnect between meaning and value. Positive feedback doesn’t really mean that you think the other person is a “great ebayer” – it doesn’t really mean anything, any more than “A+++++” means something distinct from “A++++” or “A++++++”. What it does convey is value: it makes it that much easier for the other person to make money. It also has attention-value, making the other person feel good for no particular real-world reason, but even this is quantifiable (“48! I’m up to 48!”).

Ultimately Dyson and Carr are both right. The ‘attention economy’ of Online Stuff is new, absorbing and unlike anything that went before – not least because the way in which it gratifies fantasies of being truly appreciated, understood, attended to. But, to the extent that the operative model is eBay rather than Usenet, it is nothing other than a subset of the financial economy. Karp may be right about the specific case of MySpace, but I can’t help distrusting his exuberance – not least because, in my experience, the suffix ‘2.0’ is strongly associated with a search for new ways to cash in.

Cloudbuilding (3)

By way of background to this post – and because I think it’s quite interesting in itself – here’s a short paper I gave last year at this conference (great company, shame about the catering). It was co-written with my colleagues Judith Aldridge and Karen Clarke. I don’t stand by everything in it – as I’ve got deeper into the project I’ve moved further away from Clay’s scepticism and closer towards people like Carole Goble and Keith Cole – but I think it still sets out an argument worth having.

Mind the gap: Metadata in e-social science

1. Towards the final turtle

It’s said that Bertrand Russell once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the centre of our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: “What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.”

Russell smiled and replied, “What is the tortoise standing on?”

“You’re very clever, young man, very clever,” said the old lady. “But it’s turtles all the way down.”

The Russell story is emblematic of the logical fallacy of infinite regress: proposing an explanation which is just as much in need of explanation as the original fact being explained. The solution, for philosophers (and astronomers), is to find a foundation on which the entire argument can be built: a body of known facts, or a set of acceptable assumptions, from which the argument can follow.

But what if infinite regress is a problem for people who want to build systems as well as arguments? What if we find we’re dealing with a tower of turtles, not when we’re working backwards to a foundation, but when we’re working forwards to a solution?

WSDL [Web Services Description Language] lets a provider describe a service in XML [Extensible Markup Language]. […] to get a particular provider’s WSDL document, you must know where to find them. Enter another layer in the stack, Universal Description, Discovery, and Integration (UDDI), which is meant to aggregate WSDL documents. But UDDI does nothing more than register existing capabilities […] there is no guarantee that an entity looking for a Web Service will be able to specify its needs clearly enough that its inquiry will match the descriptions in the UDDI database. Even the UDDI layer does not ensure that the two parties are in sync. Shared context has to come from somewhere, it can’t simply be defined into existence. […] This attempt to define the problem at successively higher layers is doomed to fail because it’s turtles all the way up: there will always be another layer above whatever can be described, a layer which contains the ambiguity of two-party communication that can never be entirely defined away. No matter how carefully a language is described, the range of askable questions and offerable answers make it impossible to create an ontology that’s at once rich enough to express even a large subset of possible interests while also being restricted enough to ensure interoperability between any two arbitrary parties.
(Clay Shirky)

Clay Shirky is a longstanding critic of the Semantic Web project, an initiative which aims to extend Web technology to encompass machine-readable semantic content. The ultimate goal is the codification of meaning, to the point where understanding can be automated. In commercial terms, this suggests software agents capable of conducting a transaction with all the flexibility of a human being. In terms of research, it offers the prospect of a search engine which understands the searches it is asked to run and is capable of pulling in further relevant material unprompted.

This type of development is fundamental to e-social science: a set of initiatives aiming to enable social scientists to access large and widely-distributed databases using ‘grid computing’ techniques.

A Computational Grid performs the illusion of a single virtual computer, created and maintained dynamically in the absence of predetermined service agreements or centralised control. A Data Grid performs the illusion of a single virtual database. Hence, a Knowledge Grid should perform the illusion of a single virtual knowledge base to better enable computers and people to work in cooperation.
(Keith Cole et al)

Is Shirky’s final turtle a valid critique of the visions of the Semantic Web and the Knowledge Grid? Alternatively, is the final turtle really a Babel fish — an instantaneous universal translator — and hence (excuse the mixed metaphors) a straw person: is Shirky setting the bar impossibly high, posing goals which no ‘semantic’ project could ever achieve? To answer these questions, it’s worth reviewing the promise of automated semantic processing, and setting this in the broader context of programming and rule-governed behaviour.

2. Words and rules

We can identify five levels of rule-governed behaviour. In rule-driven behaviour, firstly, ‘everything that is not compulsory is forbidden’: the only actions which can be taken are those dictated by a rule. In practice, this means that instructions must be framed in precise and non-contradictory terms, with thresholds and limits explicitly laid down to cover all situations which can be anticipated. This is the type of behaviour represented by conventional task-oriented computer programming.

A higher level of autonomy is given by rule-bound behaviour: rules must be followed, but there is some latitude in how they are applied. A set of discrete and potentially contradictory rules is applied to whatever situation is encountered. Higher-order rules or instructions are used to determine the relative priority of different rules and resolve any contradiction.

Rule-modifying behaviour builds on this level of autonomy, by making it possible to ‘learn’ how and when different rules should be applied. In practice, this means that priority between different rules is decided using relative weightings rather than absolute definitions, and that these weightings can be modified over time, depending on the quality of the results obtained. Neither rule-bound nor rule-modifying behaviour poses any fundamental problems in terms of automation.
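
To make the distinctions concrete, here is a minimal sketch of the three automatable levels in Python. The rules, weights and the incident-classification example are invented for illustration; they are not drawn from any real system.

    # Level 1 - rule-driven: one explicit rule per anticipated case;
    # anything not covered by a rule is 'forbidden'.
    def classify_rule_driven(report):
        if report["contact"] and report["property_taken"]:
            return "robbery"
        if report["property_taken"]:
            return "theft"
        return "unclassified"

    # Level 2 - rule-bound: several discrete, potentially contradictory
    # rules, with a higher-order priority list to resolve conflicts.
    RULES = [
        ("robbery", lambda r: r["contact"] and r["property_taken"]),
        ("theft",   lambda r: r["property_taken"]),
        ("assault", lambda r: r["contact"]),
    ]
    PRIORITY = ["robbery", "assault", "theft"]

    def classify_rule_bound(report):
        matches = [name for name, rule in RULES if rule(report)]
        for name in PRIORITY:       # higher-order rule resolves contradiction
            if name in matches:
                return name
        return "unclassified"

    # Level 3 - rule-modifying: priorities become relative weightings,
    # adjusted over time by feedback on the quality of past results.
    WEIGHTS = {"robbery": 1.0, "assault": 1.0, "theft": 1.0}

    def classify_rule_modifying(report):
        matches = [name for name, rule in RULES if rule(report)]
        return max(matches, key=WEIGHTS.get) if matches else "unclassified"

    def feedback(label, correct, step=0.1):
        # 'learning' how and when a given rule should be applied
        WEIGHTS[label] += step if correct else -step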

Rule-discovering behaviour, in addition, allows the existing body of rules to be extended in the light of previously unknown regularities which are encountered in practice (“it turns out that many Xs are also Y; when looking for Xs, it is appropriate to extend the search to include Ys”). This level of autonomy — combining rule observance with reflexive feedback — is fairly difficult to envisage in the context of artificial intelligence, but not impossible.

The level of autonomy assumed by human agents, however, is still higher, consisting of rule-interpreting behaviour. Rule-discovery allows us to develop an internalised body of rules which corresponds ever more closely to the shape of the data surrounding us. Rule-interpreting behaviour, however, enables us to continually and provisionally reshape that body of rules, highlighting or downgrading particular rules according to the demands of different situations. This is the type of behaviour which tells us whether a ban is worth challenging, whether a sales pitch is to be taken literally, whether a supplier is worth doing business with, whether a survey’s results are likely to be useful to us. This, in short, is the level of Shirky’s situational “shared context” — and of the final turtle.

We believe that there is a genuine semantic gap between the visions of Semantic Web advocates and the most basic applications of rule-interpreting human intelligence. Situational information is always local, experiential and contingent; consequently, the data of the social sciences require interpretation as well as measurement. Any purely technical solution to the problem of matching one body of social data to another is liable to suppress or exclude much of the information which makes it valuable.

We cannot endorse comments from e-social science advocates such as this:

variable A and variable B might both be tagged as indicating the sex of the respondent where sex of the respondent is a well defined concept in a separate classification. If Grid-hosted datasets were to be tagged according to an agreed classification of social science concepts this would make the identification of comparable resources extremely easy.
(Keith Cole et al)

Or this:

work has been undertaken to assert the meaning of Web resources in a common data model (RDF) using consensually agreed ontologies expressed in a common language […] Efforts have concentrated on the languages and software infrastructure needed for the metadata and ontologies, and these technologies are ready to be adopted.
(Carole Goble and David de Roure; emphasis added)

Statements like these suggest that semantics is being treated as a technical or administrative matter rather than as a problem in its own right; in short, that meaning is being treated as an add-on.

3. Google with Craig

To clarify these reservations, let’s look at a ‘semantic’ success story.

The service, called “Craigslist-GoogleMaps combo site” by its creator, Paul Rademacher, marries the innovative Google Maps interface with the classifieds of Craigslist to produce what is an amazing look into the properties available for rent or purchase in a given area. […] This is the future….this is exactly the type of thing that the Semantic Web promised
(Joshua Porter)

‘This’ is an application which calculates the location of properties advertised on the ‘Craigslist’ site and then displays them on a map generated from Google Maps. In other words, it takes two sources of public-domain information and matches them up, automatically and reliably.

That’s certainly intelligent. But it’s also highly specialised, and there are reasons to be sceptical about how far this approach can be generalised. On one hand, the geographical base of the application obviates the issue of granularity. Granularity is the question of the ‘level’ at which an observation is taken: a town, an age cohort, a household, a family, an individual? a longitudinal study, a series of observations, a single survey? These issues are less problematic in a geographical context: in geography, nobody asks what the meaning of ‘is’ is. A parliamentary constituency; a census enumeration district; a health authority area; the distribution area of a free newspaper; a parliamentary constituency (1832 boundaries) — these are different ways of defining space, but they are all reducible to a collection of identifiable physical locations. Matching one to another, as in the CONVERTGRID application (Keith Cole et al) — or mapping any one onto a uniform geographical representation — is a finite and rule-bound task. At this level, geography is a physical rather than a social science.
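
To see why the geographical case is tractable, here is a minimal sketch (the area names and unit-postcode identifiers are invented): once every areal unit is reduced to a set of atomic locations, mapping one classification onto another is a matter of set intersection.

    # Each areal unit, whatever its origin, reduces to a set of
    # atomic locations (here, invented unit postcodes).
    constituency = {
        "Anytown East": {"AT1 1AA", "AT1 2BB", "AT2 3CC"},
        "Anytown West": {"AT3 4DD", "AT4 5EE"},
    }
    health_authority = {
        "Anytown HA": {"AT1 1AA", "AT1 2BB", "AT3 4DD"},
        "Downland HA": {"AT2 3CC", "AT4 5EE"},
    }

    def overlap(area, target_areas):
        """Map one areal unit onto another classification by intersection."""
        return {name: len(area & locs) / len(area)
                for name, locs in target_areas.items() if area & locs}

    # e.g. what proportion of 'Anytown East' falls in each health authority?
    print(overlap(constituency["Anytown East"], health_authority))
    # roughly {'Anytown HA': 0.67, 'Downland HA': 0.33}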

The issue of trust is also potentially problematic. The Craigslist element of the Rademacher application brings the social element to bear, but does so in a way which minimises the risks of error (unintentional or intentional). There is a twofold verification mechanism at work. On one hand, advertisers — particularly content-heavy advertisers, like those who use the ‘classifieds’ and Craigslist — are motivated to provide a (reasonably) accurate description of what they are offering, and to use terms which match the terms used by would-be buyers. On the other hand, offering living space over Craigslist is not like offering video games over eBay: Craigslist users are not likely to rely on the accuracy of listings, but will subject them to in-person verification. In many disciplines, there is no possibility of this kind of ‘real-world’ verification; nor is there necessarily any motivation for a writer to use researchers’ vocabularies, or conform to their standards of accuracy.

In practice, the issues of granularity and trust both pose problems for social science researchers using multiple data sources, as concepts, classifications and units differ between datasets. This is not just an accident that could have been prevented with more careful planning; it is inherent in the nature of social science concepts, which are often inextricably contingent on social practice and cannot unproblematically be recorded as ‘facts’. The broad range covered by a concept like ‘anti-social behaviour’ means that coming up with a single definition would be highly problematic — and would ultimately be counter-productive, as in practice the concept would continue to be used to cover a broad range. On the other hand, concepts such as ‘anti-social behaviour’ cannot simply be discarded, as they are clearly produced within real — and continuing — social practices.

The meaning of a concept like this — and consequently the meaning of a fact such as the recorded incidence of anti-social behaviour — cannot be established by rule-bound or even rule-discovering behaviour. The challenge is to record both social ‘facts’ and the circumstances of their production, tracing recorded data back to its underlying topic area; to the claims and interactions which produced the data; and to the associations and exclusions which were effectively written into it.

4. Even better than the real thing

As an approach to this problem, we propose a repository of content-oriented metadata on social science datasets. The repository will encompass two distinct types of classification. Firstly, those used within the sources themselves; following Barney Glaser, we refer to these as ‘In-Vivo Concepts’. Secondly, those brought to the data by researchers (including ourselves); we refer to these as ‘Organising Concepts’. The repository will include:

• relationships between Organising Concepts
‘theft from the person’ is a type of ‘theft’

• associations between In-Vivo Concepts and data sources
the classification of ‘Mugging’ appears in ‘British Crime Survey 2003’

• relationships between In-Vivo Concepts
‘Snatch theft’ is a subtype of the classification of ‘Mugging’

• relationships between Organising Concepts and In-Vivo Concepts
the classification of ‘Snatch theft’ corresponds to the concept of ‘theft from the person’

The combination of these relationships will make it possible to represent, within a database structure, a statement such as

Sources of information on Theft from the person include editions of the British Crime Survey between 1996 and the present; headings under which it is recorded in this source include Snatch theft, which is a subtype of Mugging
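
As a rough illustration, the relationships above might be held as typed triples. This is a sketch only: the relation names, layout and query are invented for this example, and a real repository would need more than in-memory lists.

    # Invented typed triples covering the four relationship kinds above.
    ORGANISING = [("theft from the person", "is_a", "theft")]
    IN_VIVO = [("Snatch theft", "subtype_of", "Mugging")]
    IN_VIVO_SOURCES = [
        ("Mugging", "appears_in", "British Crime Survey 2003"),
        ("Snatch theft", "appears_in", "British Crime Survey 2003"),
    ]
    BRIDGE = [("Snatch theft", "corresponds_to", "theft from the person")]

    def sources_for(concept):
        """Find data sources recording a given Organising Concept."""
        in_vivo = {s for s, _, o in BRIDGE if o == concept}
        # follow subtype links upwards as well
        parents = {o for s, _, o in IN_VIVO if s in in_vivo}
        return {src for s, _, src in IN_VIVO_SOURCES
                if s in in_vivo | parents}

    print(sources_for("theft from the person"))
    # {'British Crime Survey 2003'}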

The structure of the proposed repository has three significant features. Firstly, while the relationships between concepts are hierarchical, they are also multiple. In English law, the crime of Robbery implies assault (if there is no physical contact, the crime is recorded as Theft). The In-Vivo Concept of Robbery would therefore correspond both to the Organising Concept of Theft from the person and that of Personal violence. Since different sources may share categories but classify them differently, multiple relationships between In-Vivo Concepts will also be supported. Secondly, relationships between concepts will be meaningful: it will be possible to record that two concepts are associated as synonyms or antonyms, for example, as well as recording one as a sub-type of the other. Thirdly, the repository will not be delivered as an immutable finished product, but as an open and extensible framework. We shall investigate ways to enable qualified users to modify both the developed hierarchy of Organising Concepts and the relationships between these and In-Vivo Concepts.

In the context of the earlier discussion of semantic processing and rule-governed behaviour, this repository will demonstrate the ubiquity of rule-interpreting behaviour in the social world by exposing and ‘freezing’ the data which it produces. In other words, the repository will encode shifting patterns of correspondence, equivalence, negation and exclusion, demonstrating how the apparently rule-bound process of constructing meaning is continually determined by ‘shared context’.

The repository will thus expose and map the ways in which social data is structured by patterns of situational information. The extensible and modifiable structure of the repository will facilitate further work along these lines: the further development of the repository will itself be an example of rule-interpreting behaviour. The repository will not — and cannot — provide a seamless technological bridge over the semantic gap; it can and will facilitate the work of bridging the gap, but without substituting for the role of applied human intelligence.

This is the new stuff

Thomas criticises Wikipedia’s entry on folksonomy – a term which was coined just over a year ago by, er, Thomas. As of today’s date, the many hands of Wikipedia say:

Folksonomy is a neologism for a practice of collaborative categorization using freely chosen keywords. More colloquially, this refers to a group of people cooperating spontaneously to organize information into categories, typically using categories or tags on pages, or semantic links with types that evolve without much central control. … In contrast to formal classification methods, this phenomenon typically only arises in non-hierarchical communities, such as public websites, as opposed to multi-level teams and hierarchical organization. An example is the way in which wikis organize information into lists, which tend to evolve in their inclusion and exclusion criteria informally over time.

Thomas:

Today, having seen an new academic endeavor related to folksonomy quoting the Wikipedia entry on folksonomy, I realize the definition of Folksonomy has become completely unglued from anything I recognize (yes, I did create the word to define something that was undefined prior). It is not collaborative, it is not putting things into categories, it is not related to taxonomy (more like the antithesis of a taxonomy), etc. The Wikipedia definition seems to have morphed into something that the people with Web 2.0 tagging tools can claim as something that can describe their tool

I’m resisting the temptation to send Thomas the All-Purpose Wikipedia Snark Letter (“Yeah? Well, if you don’t like the wisdom of the crowds, Mr So-Called Authority…”). In fact, I’m resisting the temptation to say anything about Wikipedia; that’s another discussion. But I do want to say something about the original conception of ‘folksonomy’, and about how it’s drifted.

Firstly, another quote from Thomas’s post from today:

Folksonomy is the result of personal free tagging of information and objects (anything with a URL) for one’s own retrival. The tagging is done in a social environment (shared and open to others). The act of tagging is done by the person consuming the information.

There is tremendous value that can be derived from this personal tagging when viewing it as a collective, when you have the three needed data points in a folksonomy tool: 1) the person tagging; 2) the object being tagged as its own entity; and 3) the tag being used on that object. … [by] keeping the three data elements you can use two of the elements to find a third element, which has value. If you know the object (in del.icio.us it is the web page being tagged) and the tag you can find other individuals who use the same tag on that object, which may lead (if a little more investigation) to somebody who has the same interest and vocabulary as you do. That person can become a filter for items on which they use that tag. You then know an individual and a tag combination to follow.
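
As a minimal sketch of those three data points, and of using two of the elements to find the third, here is a del.icio.us-style list of (person, object, tag) triples; the data and names are invented.

    bookmarks = [
        ("alice", "http://example.com/page",  "folksonomy"),
        ("bob",   "http://example.com/page",  "folksonomy"),
        ("bob",   "http://example.com/other", "tagging"),
        ("carol", "http://example.com/other", "folksonomy"),
    ]

    def people_for(url, tag):
        """Know the object and the tag: find the other individuals."""
        return {p for p, u, t in bookmarks if u == url and t == tag}

    def objects_for(person, tag):
        """Follow an individual-and-tag combination as a filter."""
        return {u for p, u, t in bookmarks if p == person and t == tag}

    print(people_for("http://example.com/page", "folksonomy"))  # {'alice', 'bob'}
    print(objects_for("bob", "tagging"))  # {'http://example.com/other'}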

Thomas’s account is admirably clear and specific; it also fits rather well with the arguments I was making in two posts earlier this year:

[perhaps] the natural state of knowledge is to be ‘cloudy’, because it’s produced within continuing interactions within groups: knowledge is an emergent property of conversation, you could say … [This suggests that] every community has its own knowledge-cloud – that the production and maintenance of a knowledge-cloud is one way that a community defines itself.

If ‘cloudiness’ is a universal condition, del.icio.us and flickr and tag clouds and so forth don’t enable us to do anything new; what they are giving us is a live demonstration of how the social mind works. Which could be interesting, to put it mildly.

Thomas’s original conception of ‘folksonomy’ is quite close to my conception of a ‘knowledge cloud’: they’re both about the emergence of knowledge within a social interaction (a conversation).

The current Wikipedia version of ‘folksonomy’ is both fuzzier and more closely tied to existing technology. What’s happened seems to be a kind of vicious circle of hype and expectations management. It’s not a new phenomenon – anyone who’s been watching IT for any length of time has seen it happen at least once. (Not to worry anyone, but it happened quite a lot around 1999, as I remember…)

  1. There’s Vision: someone sees genuinely exciting new possibilities in some new technology and writes a paper on – oh, I don’t know, noetic telepresence or virtual speleology or network prosody…
  2. Then there’s Development: someone builds something that does, well, a bit of it. Quite significant steps towards supporting network prosody. More coming in the next release.
  3. Phase three is Hype. Hype, hype, hype. Mm-hmm. I just can’t get enough hype, can you?
  4. The penultimate phase is Dissemination: in which everyone’s trying to support network prosody. Or, at least, some of it. That stuff that those other people did with their tool. There we go, fully network prosody enabled – must get someone to do a writeup.
  5. Finally we’re into Hype II, also known as Marketing: ‘network prosody’ is defined less by the original vision than by the tools which have been built to support it. The twist is that it’s still being hyped in exactly the same way – tools which don’t actually do that much are being marketed as if they realised the original Vision. It’s a bit of a pain, this stage. Fortunately it doesn’t last forever. (Stage 6 is the Hangover.)

What’s to be done? As I said back here, personally I don’t use the term ‘folksonomy’; I prefer Peter Merholz’s term ‘ethnoclassification’. Two of my objections to ‘folksonomy’ were that it appears to denote an end result as well as a process, and that it’s become a term of (anti-librarian) advocacy as well as description; Thomas’s criticisms of Wikipedia seem to point in a similar direction. Where I do differ from Thomas is in the emphasis to be placed on online technologies. Ethnoclassification is – at least, as I see it – something that happens everywhere all the time: it’s an aspect of living in a human community, not an aspect of using the Web. If I’m right about where we are in the Great Cycle of Hype, this may soon be another point in its favour.
