Blogs alone do not suffice

“Journalism is dead. Long live the blog.”


Pundits hail the death of journalism, pointing to newspapers’ declining print advertising revenues, which more than halved in the US, from $46.6bn in 2006 to $20.7bn in 2011 (according to Pew).


To make this claim is to conflate journalism with newspapers. Journalism is not dead. Newspapers are dead.


We all still consume news and op-eds, just increasingly through different media. Newspapers may be dying, but our appetite for content is not: the volume of news we consume is not decreasing. Interestingly, newspapers are dying much more slowly than many expected. They are unlikely ever to fully recover, but Jeff Bezos’s and Warren Buffett’s recent acquisitions of newspapers such as the Washington Post testify to their belief that newspapers are currently undervalued. Newspapers still have a runway of profitability, thanks to a loyal print readership predominantly above 40 years old.


The problem is that despite our high consumption of news, the quality of this content is waning. The reason, as Dean Starkman points out, is that we have an increasingly “network-driven system of journalism” in which content is not so much created as “assembled, shared”. Fewer long-form articles or in-depth documentaries are being funded. “Reporters are disempowered”, buckling under the pressure to publish more articles, with less choice over the subjects they cover. Investigative journalism is being ousted to make way for light entertainment.


Ironically, it is in part our fault as readers. News sites use web analytics tools to track which articles or videos draw the most views, in order to decide what to produce in higher quantities. Inevitably, as a population we prefer the junk food. Kim Kardashian tripping over a puppy draws more than ten times as many views as an article about the US fiscal cliff.


Yet I believe the decline of investigative journalism is not permanent. Rather, the journalistic system is in a state of flux. As Clay Shirky says, in revolutions “the old stuff gets broken faster than the new stuff is put in its place”. As more and more holes appear in the hulls of newspapers, investigative journalists are being used as plugs, asked to publish banalities rather than thoroughly researched stories. Or, more unfortunate still, they are fired and take up jobs in other industries. This shift arises because we have not yet found a new, sustainable model for journalism. Currently, “old institutions seem exhausted while new ones seem untrustworthy” (Shirky).


In time, though, new institutions will become trustworthy. Investigative journalists will be in high demand again, because “society needs journalism” (Shirky). We need journalism in order to expose corporate fraud, human rights violations and consumer abuse. Investigative journalism cannot just be about readers posting on Twitter. Breaking a story of government corruption can involve months, if not years, of research, interviews and even embedding with a group. It is a full-time job.


Is there a need for new institutions if many people avidly read blogs, seeing this netroots journalism as a vital source of opinion? Yes, because, as Peter Daou points out in Salon, the “netroots alone cannot generate the critical mass necessary to alter or create conventional wisdom”. The ideas bloggers propose need broader propagation and their facts need authentication. So blogs inform politicians, reporters and citizens, but rely on politicians and the press to be deemed reliable and to achieve scale. Daou claims this triangle of blogs, media and the political establishment is necessary to create widespread change.


What these media institutions will look like tomorrow is unclear, because they will be very different from those that persist today. Google and Amazon are well positioned to play a role: they should be able to automate fact-checking with algorithms that scan the web for corroborating evidence, and they can integrate distribution into their Android and Kindle platforms. Startups like Circa and Zite have also started providing news highly tailored to consumption on mobiles and tablets, but this is still more a question of improved curation than a reinvention of the model for investigative journalism. They select articles based on whether their algorithms predict that I will personally like them. Zite, for example, almost exclusively shows me articles focused on technology start-ups and foreign affairs.


In short, there will be new winners, but the tournament is only just starting.

Testing whether Wikipedia gets it right

I had a brief look at the Wikipedia article on Brasenose College, Oxford, to see how accurate the crowd-sourced encyclopedia is.

In short, the facts tend to be correct, but there are sometimes glaring omissions of important content. I think this is the result of two phenomena. Firstly, the Wikipedia development process is piecemeal. This is good in that it draws on different perspectives, but it means no contributor is responsible for ensuring the end product covers the topic holistically. Secondly, as contributors, we are much more likely to correct errors than to add new content when we notice some is missing. Errors stab us, forcing a reaction; omissions merely pinch us, and so are easier to brush off.


i) History: The article fails to detail the importance of the Jacobite influence on Brasenose College, particularly during the first half of the seventeenth century.


ii) Location and buildings: While the Wikipedia page provides a decent overview of the buildings used to teach students, it does not note the existence of a second annex residence used to house graduate students. It would also be worth adding that the seventeenth-century kitchen underwent a major renovation between 2010 and 2012, which rebuilt a ceiling that was at risk of collapsing.


iii) Events: Every two years, the college hosts a major ball, with a budget of circa £50,000. In 2009, to mark the 500th anniversary of the founding of Brasenose, the college hosted a particularly large ball. It was the first college to secure exclusive use of Radcliffe Square for an evening, allowing attendees to watch a firework display launched from the gardens of All Souls College. To mark the anniversary, Queen Elizabeth II visited the college.


The Wikipedia article makes good use of Joseph Mordaunt Crook’s 2008 book, ‘Brasenose: The Biography of an Oxford College’, published shortly before the college’s quincentenary celebrations. Crook was appointed the official Brasenose historian, having previously been a professor of history at the college.

Mistakenly ‘zorbing’ through life


Crème brûlée or a side of celery? Most of us opt for what we like: dessert. It’s the same with written content: we consume the information we enjoy and avoid the celery like the plague. However, crème brûlée isn’t that good for us, and neither is informational monoculture.


We typically view the internet as a source of diverse opinion, because any view is permitted and every user has a voice (the topic of my last post). Yet the results we see when we search on Google or scan our Facebook feeds are actually highly tailored to our tastes, based on our previous browsing behavior. These websites feed us crème brûlée. Unlike traditional print newspapers, where editors select articles they deem will have a positive impact on society by challenging readers, algorithms give us more and more of the same dessert. They do us a disservice, because we would gain from exposing ourselves to more diverse opinions. Just as diverse teams perform better, so do individuals with exposure to multiple perspectives.
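To make the feedback loop concrete, here is a minimal sketch of the kind of naive personalization at work. The article data and topic labels are entirely hypothetical; real recommender systems are far more sophisticated, but the self-reinforcing dynamic is the same.

```python
from collections import Counter

def recommend(history, catalog, k=3):
    """Naive recommender: rank articles by how often the user has
    already clicked their topic. It can only reinforce past tastes."""
    topic_counts = Counter(article["topic"] for article in history)
    ranked = sorted(catalog, key=lambda a: -topic_counts[a["topic"]])
    return ranked[:k]

catalog = [
    {"title": "New framework released", "topic": "tech"},
    {"title": "Chip startup funded",    "topic": "tech"},
    {"title": "Budget negotiations",    "topic": "politics"},
    {"title": "Rainfall records",       "topic": "climate"},
]

# A user who has clicked two tech stories...
history = [{"title": "Phone review",    "topic": "tech"},
           {"title": "Laptop teardown", "topic": "tech"}]

# ...is shown mostly tech again: the bubble tightens with every click.
picks = recommend(history, catalog)
print([p["topic"] for p in picks])  # → ['tech', 'tech', 'politics']
```

Every click on a recommended article feeds back into `history`, so the topic skew only grows over time.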


There are two fundamental problems in the way we access information online, as Eli Pariser points out in his striking TED talk. Firstly, we have no control over how information is ranked: we can’t opt for different Google results. Secondly, the filtering is done by algorithms, which have no embedded ethical values.


There are a number of potential online solutions. For one, Google and other sites could change the way they present search results, content displays or news feeds to ensure they include challenging information. Secondly, as Jonathan Stray suggests, we could map the internet so that people can get a more comprehensive idea of what is out there.
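The first proposal can be sketched very simply: re-rank a personalized result list so that at least one item from an under-represented topic is promoted to the top. This is my own illustrative sketch, with made-up scores and topics, not a description of how Google actually ranks.

```python
def rerank_with_diversity(results, n_challenge=1):
    """Promote n_challenge results whose topic falls outside the
    user's usual top topics, so the feed always holds a challenger.

    `results` is a list of (score, topic, title) tuples.
    """
    results = sorted(results, reverse=True)              # best first
    top_topics = {topic for _, topic, _ in results[:3]}  # the bubble
    challengers = [r for r in results if r[1] not in top_topics][:n_challenge]
    rest = [r for r in results if r not in challengers]
    return challengers + rest

personalized = [
    (0.9, "tech",     "New framework released"),
    (0.8, "tech",     "Chip startup funded"),
    (0.7, "tech",     "Phone review"),
    (0.4, "politics", "Budget negotiations"),
    (0.2, "climate",  "Rainfall records broken"),
]

reranked = rerank_with_diversity(personalized)
print(reranked[0][1])  # → politics
```

The trade-off is explicit: one slot of predicted relevance is spent on exposure to a different perspective.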


The online example illustrates a broader point: we constantly need to challenge ourselves to escape our bubbles. There are endless ways to do this, but it is actually remarkably difficult. For me, travelling has been one way, because it is akin to pressing fast forward on the film of life: foreign faces and new places flash past. Moving to Silicon Valley was another, where I kept struggling to learn how to look at the world through a more creative lens. Coming to Boston recently has helped me embed myself among people trying to shape society through governments and non-profits, whereas before I knew little beyond business.


The experiences which have really marked me at both Stanford and Harvard have been the ones that removed me from my comfort zone: the Stanford design school class where we dressed up in Viking costumes to try to come up with solutions to US partisan gridlock (topical, given the US Congress will probably shut down tomorrow!) or the course at the MIT Media Lab where we try to create ventures to solve developing-world problems. In both I have felt as if my brain were being jolted by a taser, because members of the class were neurosurgeons in training or mechanical engineers, and they forced me to rethink the way I looked at problems.


The internet could and should become a way to challenge us by presenting radically different opinions. The opinions are out there in the ether, but they are filtered out. Companies like Google should step up to the task. But we can also help ourselves by going out to find those weird blogs or foreign-language newspapers.

The battle to control the internet

The internet is a battlefield. The uniformed battalions of governments line up in the valley; but all around on the hillsides, cyber guerrillas dart among the trees. The governments’ troops seem overwhelmed, ambushed from every side by guerrillas who can cross all international borders and are impossible to track down. The governments have so far been unable to pass laws to govern these online guerrillas.


Inadvertently, we are all guerrillas in this internet revolution, because we all use it, and it subverts the traditional state and media powers. It is a revolution, in the true sense, “because the goals of the revolutionaries cannot be contained by the institutional structure of the existing society” (Clay Shirky). As Shirky points out, it is not the invention of a new tool which creates societal change, but its widespread adoption; and the internet is obviously ubiquitous.


We all know the internet has revolutionized everything, but what is it that fundamentally changed? The internet changed the nature of communication. Before, large-scale media was “one-way”, in the sense that the audience was addressed but could not respond. Now, we have what Shirky calls “symmetrical participation” where “the former audience” (Dan Gillmor) become creators as well as consumers of content.


The desire of users to create such content is understandable when it relates to individual blogging or posting on social networks: we all have a desire to share aspects of our experiences or thoughts with friends (my blog is a case in point). Much more surprising is people’s willingness to do real work for free, the kind of work that would normally be compensated, and to do it without even receiving credit.


Wikipedia serves as a powerful example, because very few articles remain the work of one contributor. Wikipedia’s inherent self-correction process functions well, as users complement or correct previous versions of an article. What is amazing is that Wikipedia can function with an unmanaged division of labor. Notably, the initial quality of the article does not really matter: poorly-informed or badly-written articles in fact give other users a greater incentive to improve them. This is what Richard Gabriel refers to as the “worse is better” phenomenon. In many other models of user-generated content, such as the question-and-answer site Quora, users do not edit content posted by others, but vote on it, helping to filter for quality.
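Vote-based filtering of the kind Quora uses can be approximated with a few lines. The scoring scheme below (an upvote fraction smoothed with a prior) is one simple, hypothetical choice, not Quora’s actual algorithm; the answers and vote counts are invented.

```python
def score(upvotes, downvotes, prior=1.0):
    """Quality score: upvote fraction smoothed with a prior, so a
    lucky 1-0 answer doesn't outrank a well-tested 90-10 one."""
    return (upvotes + prior) / (upvotes + downvotes + 2 * prior)

answers = [
    {"text": "Short guess",     "up": 1,  "down": 0},
    {"text": "Detailed answer", "up": 90, "down": 10},
    {"text": "Off-topic rant",  "up": 2,  "down": 8},
]

# Readers see the community's best answers first, without any
# editor, and without anyone editing anyone else's words.
ranked = sorted(answers, key=lambda a: -score(a["up"], a["down"]))
print([a["text"] for a in ranked])
```

The point is structural: quality control emerges from many small, independent judgments rather than from a central editor.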


Intuitively, it makes sense to trust users to co-develop or filter content for discrete or simple tasks, such as writing Wikipedia articles or rating Quora responses. Yet surprisingly, co-creation works astonishingly well for very complex tasks which have traditionally required centralized planning. Take the development, by hundreds of different software engineers around the world, of the open-source Hadoop MapReduce software, which enables very rapid processing of vast sets of unstructured data (i.e., data that is not storable in tabular form in SQL databases) and is used by companies like Facebook.
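The MapReduce model that Hadoop implements is itself simple enough to sketch in miniature. Below is the classic word-count example in plain Python, my own toy version of the three phases; Hadoop runs the same pattern across hundreds of machines.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in one document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework
    would do over the network between mappers and reducers."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: combine the grouped values for one key."""
    return key, sum(values)

documents = ["the internet changed communication",
             "the internet changed everything"]

# Each map and reduce call is independent, which is what lets a
# framework like Hadoop parallelize the work across a cluster.
mapped = chain.from_iterable(map_phase(d) for d in documents)
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts["the"], counts["internet"])  # → 2 2
```

What hundreds of distributed contributors built, in other words, is the machinery that makes these three small functions scale to petabytes.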


The constant iteration of Wikipedia or Hadoop is significant, because it hails what Tim O’Reilly calls the “end of the software development and release cycle”. Software updates can now be developed anywhere by multiple users and then be pushed to our devices. This transforms our entire relationship with the software we use: software becomes an iterative process, not a product, because the barriers to development and deployment are radically reduced.


The internet also reduces the barriers to coordination among other groups of people. For example, companies like ResearchGate and Mendeley enable scientists to collaborate better on research. The internet also enables individuals to organize protests against denials of free speech, like those planned invisibly online by activists to bring crowds to Cairo’s Tahrir Square or Minsk’s Oktyabrskaya Square.


The ease of coordination distributes power more evenly in society, but it also creates hazards, because members of groups can pursue their own agendas. The hacker group Anonymous has been praised by many for its members’ efforts to fight child pornography on the internet, dismantling several child pornography networks that use Tor, software which enables online anonymity. Yet Anonymous has also been harshly criticized, as Michael Gross points out in Vanity Fair, for simply “breaking stuff” without any ideological motivation. Joshua Corman, in his excellent security blog, identifies that this dichotomy in Anonymous’s behavior arises from its structure as a “loose collective” rather than a well-organized group. Parmy Olson, who gained remarkable insider access to members of Anonymous, reiterates the disparate nature of the network.


Like many things in life, Anonymous’s lack of a central command-and-control center is both its greatest weakness and its greatest strength. The FBI famously announced it had “chopped the head off” LulzSec, an offshoot of Anonymous responsible for hacking various government agencies. But Anonymous has survived easily, because of the dispersed nature of its members. The FBI didn’t realize it was dealing with a many-headed hydra.


The internet also poses dangers relating to privacy. Ironically, it was the Clinton administration’s decision in the 90s to allow the export of encryption technology which compromised much of the internet’s safety. Clinton authorized this export to enable secure international payments and thereby help commercialize the internet, but it had the unforeseen consequence of providing criminals with a tool to export stolen data in encrypted form.


The internet’s anonymity also makes it easier to commit crimes and other abuse. For example, Reuters recently exposed an uncontrolled online exchange of adopted children. Parents give up problematic adopted children without any paperwork, a process which is clearly massively open to abuse by pedophiles.


In conclusion, the internet has empowered all of us by giving us a voice, the ability to co-create content and the tools to manage groups effortlessly. Yet there is also a need for improved regulation of the internet. The US, China, Russia, Brazil and the European Union are all pulling in different directions, based on varying preferences regarding privacy principles. If the negotiations about the future of the internet prove as arduous and inconclusive as the ones on Syria, perhaps we should turn the process over to the people and create the laws through an open-source process? Wikipedia to the rescue?