Do you remember when the internet was all new and strange? When being “online” felt like a distinct and separate experience to simply doing normal things?
Then let’s transport you right back. I stumbled across this: a guide to online journalism from 1999, written by Mike Holderness. He provided it as a free resource for students – apparently under the banner of the UK’s National Union of Journalists (NUJ) – and it says so much about how the industry viewed the web back then, and to an extent betrays attitudes still prevalent now. (I spotted this while looking at work by @LibbyGalvin, a student at City University, London, where I teach occasionally.)
The internet is the reporter’s hangout of the future
What’s immediately striking is how much of Mike’s advice is still valid. He opens with the first thing any digital publishing trainer would tell you today:
In order to understand the internet and other on-line services, you have to use them
And then, anticipating the sharp cuts and job losses across the industry in the 2000s, Mike gives his rationale for online journalism as a career choice (emphasis is mine):
At some point in your career you are going to be freelance. Believe me.
When you are a freelance, you will need to specialise. In many specialist areas, the internet is the native form of communication of the people you are writing about. If you refuse to enter bars, it is extremely difficult to be a crime reporter; if you refuse to enter the internet, it will be extremely difficult to cover science – including the social sciences and criminology – technology, and an expanding list of other areas.
Who could argue with that? Looking at this now, the point seems truer than Mike could have envisaged: in order to fully report specialist areas, reporters have no choice but to engage online with the communities that understand them. We’ve even seen general news reporters, with the advent of liveblogging, use online as the principal mode of communication.
Imagine a world without Google
The students graduating from City University this summer probably have no idea what Alta Vista, Lycos, Usenet and telnet are – but anyone interested in online stuff in the 1990s will know exactly what they were.
But almost everyone of sound mind knows what Google is – yet Google was still in beta in 1999 and had yet to revolutionise search. It seems nostalgically quaint when Mike makes the distinction between automated and manual directories. “Machine-generated indexes have the advantage that their contents are not filtered – computers are, so far, not capable of introducing political bias,” he says.
Skills for journalists, via the Wayback Machine
“Often, the best way to find someone’s email address is a phone call”, says Mike. Today we might use Twitter, LinkedIn or Google, but back then none of them existed.
It’s sobering to think that Mike’s best advice for finding a URL – in the absence of a comprehensive real-time all-web catalogue – is just to hit and hope: “Often, the best way to locate such a directory is to make an informed guess at its ‘URL’.”
Mike suggests that the only people you might want to contact using the internet are techies and academic boffins – a reasonable idea 12 years ago. As he puts it:
The Web was, after all, invented by academics tired of answering colleagues’ questions in email. Then you call them or email them for the soundbite, knowing what to ask for.
Mike’s general advice for people baffled by all this is salient too:
If you stay calm and don’t get carried away with the newness of the technology, you’ll find this is just the same as evaluating material you find in the Real World™.
Stuff you pick up in a mailing list has the same value as stuff you pick up in (a conversation in) a bar. (It makes some difference whether it’s the bar of King’s College Cambridge or the Axe, Hackney Road…)
Google fails on press freedom search
In a triumph of historical irony, Mike’s take on the in-beta Google search engine is worth quoting at length:
The Google search engine (which was still in “beta test” in August 1999) is interesting. It ranks pages according to the number of pages which link to them. This makes it very good indeed at finding home pages for organisations and individuals – or at least those which have had Web presences for a while. It did well, too, on Genetically Modified Organism* but averagely for press freedom.
Ten years later Rupert Murdoch would accuse Google of being a parasite, getting rich from the hard work of his editors and journalists, before taking his sites off Google News’ search index. What will be said of Google in another 10 years, I wonder…
p.s. This article from Mike in the Indy from 1992 prefigures the rise of Wikileaks by nearly 20 years