The case I chose to study is “The Brian Lehrer Show,” under the subtitle “Wisdom of Crowds in General-interest Reporting by Recruiting a General Audience.” Through reading Muthukumaraswamy’s article, I began to understand how media incorporate crowdsourcing into their practice. The author examined five methods of crowdsourcing employed by media. To be honest, I don’t think this is the best way to categorize crowdsourcing. For me, only two of the subtitles make sense: “General-interest Reporting by Recruiting a General Audience” and “Specialized Reporting by Recruiting an Expert Audience.” In the other cases, I don’t see a need to differentiate an expert audience from a general one. For instance, Muthukumaraswamy listed TPM and The Huffington Post as examples of how media recruit an expert audience for general-interest reporting. For TPM and The Huffington Post, though, I think the “experts” are just ordinary subscribers, people who are interested in political issues. Unlike the Beatblogging case, these outlets are not trying to pick out experts to help them handle specialized problems. As for “The Brian Lehrer Show” case, I think it is a good example of how a media outlet recruited its audience for general-interest reporting. For one thing, the recruiting was not limited to WNYC listeners; I saw the recruiting message on various webpages, both personal and public. For another, the project was closely tied to people’s day-to-day lives: the ratio of SUVs to the total number of cars parked in a neighborhood indicates roughly what percentage of cars owned by New York residents are SUVs. I derived three themes from my case study.
I think the most important factor in making a crowdsourcing project work is the audience’s motivation. The more people participate, the better the result. The first link I found useful is an interview with Jim Colgan, the producer of The Brian Lehrer Show. He thoroughly explained his motives for conducting such a project and what can be learned from the practice. According to Colgan, “the investigation lasted only a week but received 450 comments, far above the average call-in segment.” Colgan specifically noted his surprise at “the level to which people really want to take part. They want to be part of the news.” Being able to perform a journalistic investigation, even one as simple as counting cars, became the motivation for some of the audience. For those who dislike SUVs, being able to expose the problem was their strongest motivation, something Lehrer also mentioned during his show.
As Muthukumaraswamy observed, “the obvious disadvantage in such an exercise is the unreliability of the news obtained.” I browsed the audience’s responses and came up with several findings. First, some people posted more than once. Second, some of the locations the audience reported overlapped. Third, there was confusion about what counts as an SUV. These issues created many problems for the producers when analyzing the data: as Colgan mentioned when sharing the results with the audience, only 405 of the 450 responses were valid. Another problem is the lack of any fact-checking strategy. As one listener pointed out when discussing citizen photo reporting, “How much can we trust people who are not photojournalists to do the reporting? What if someone dodges something?” The second link I found useful is an article that examines crowdsourcing through this case. As the author notes, “because participants tend to be self-selecting, the producer can’t assume they amount to a representative sample and sometimes must actively seek out underrepresented voices.” He points out one of the most obvious disadvantages of crowdsourcing: the crowd is not selected by the producers, so it is not a representative sample. Hence, the author offers suggestions for future projects: “Start small,” “Be relevant,” “Be specific,” and “Verify.” Crowdsourcing may have reliability issues, but the idea is still fascinating. As I was listening to Brian’s show, one listener captured the essence of crowdsourcing: “no one person could ever have an absolute, comprehensive insight into the way things worked, so we have to refer to plurality of actors.” This is also the central idea of democratic deliberation.
The last theme I identified is journalists’ responsibility. No crowdsourcing project can operate without the organization of “real” journalists. As Jeff Howe wrote in his blog, “I think the crowd make excellent sources and additional sets of eyes and ears, but I believe the future lies in carefully cultivated partnerships between professionals and their audiences.” I found his blog extremely useful for anyone trying to understand the term crowdsourcing. He lists many case studies along with his understanding of the relationship between crowdsourcing and journalism. Without Okolloh’s careful planning and organization, the Ushahidi crowdsourcing project would not have been as successful. Returning to The Brian Lehrer Show case, Howe raised the idea of combining Google Maps with the SUV data during the show. It is a brilliant idea and a perfect example of the map mashups discussed in another article this week. My point is that data will remain mere data unless journalists step in and present it in readable forms.