Four years ago, I was editor at my small college newspaper, and we were experimenting with new (for us) forms of online publishing — we’d launched a new site based on Django, after experimenting on WordPress, and I was desperate to find other tech-minded journalists.
Somehow, I got introduced to people like Daniel Bachhuber, Greg Linch, Adam Hemphill, and a few others. Together, we launched CoPress, where we encouraged college media to become more innovative.
At the time, it was an organization seriously at odds with the state of college media. Most major papers were published using College Publisher, a locked-down platform that also severely misaligned incentives for college papers by taking most, if not all, of their online advertising revenue.
Within a year, though, we managed to launch a half-dozen newspapers on WordPress, and I think a critical turning point was when five of our partners won Pacemaker awards for their web presences.
CoPress only lasted two years, but it's had an impressive legacy. One of our former advisors, Bryan Murley, just analyzed the results of the 2012 Pacemaker awards and found that more than half of the winners are on WordPress … and only 4% are hosted on College Publisher.
While driving back from Tahoe with Daniel, we discussed the most recent Carnival of Journalism prompt. Daniel argued that news organizations are missing out on a huge opportunity to serve as data providers, rather than journalism providers–that their role in aggregating and explaining stories is important, but they’d probably be better served by making as much of the underlying data as possible available.
This makes some sense to me. I can imagine lots of organizations that would love reasonably organized data–particularly the more human and local data that news organizations could lead in. Nearly every study I’ve worked on at McKinsey would benefit from this kind of intelligence–discerning trends in consumer behavior is a huge part of being successful in business these days.
However, in the vast majority of situations, the real problem for companies isn’t having too little data. Big databases like StatsCan provide fantastic, detailed data sources. NPD, IPR, and a host of other data-gathering organizations compile and measure reports on huge numbers of subjects, and companies pay tens of thousands of dollars to get access to this information. But once they have it, many of them don’t use it effectively.
I think that the rise of internal consulting / biz-ops groups is an effort to make use of the huge amount of data out there, but it seems that (even when these groups exist) this role is passed to external companies. It used to be that McKinsey was hired to find and compile data–these days, we’re just given access to massive internal databases, and have to make and find meaning.
This kind of work can vary radically from business to business, so I can imagine how it would be very difficult to create some kind of standardized process for finding useful information.
I’m not clear how newspapers can create useful meaning from their data sources for people willing to pay for it without becoming more like consultants than journalists. And I find it hard to imagine my local newspaper acting as a business advisor for a small-town pizza shop, even if it has information that could, with the proper presentation and data-mining, be pretty useful in guiding decision making.