Is Barack Obama a Muslim? The answer, unequivocally, is no: he is a Christian who attends church regularly. But according to some Internet sites – especially white-supremacist Web sites – the man who could be the next president of the United States not only practices Islam but is practically a terrorist.
Obama’s campaign has fought back, launching a Web site – “fightthesmears.com” – to correct the misinformation about the candidate, including false claims that his campaign contributions have come largely from wealthy supporters in the Middle East.
Obama isn’t alone, of course, when it comes to inaccurate information on the Internet.
As millions of people and organizations around the world post information on the Internet, factual mistakes are alarmingly easy to find, and they don’t just come from hate groups or “from shady, anonymous, Internet authors posing as reliable art historians,” according to two historians at George Mason University in Fairfax, Va. Indeed, they say, misinformation often comes from highly reputable institutions.
In a study of Web sites highly ranked in Google searches, they found an incorrect date in a biography of the French Impressionist painter Claude Monet – the date he moved to Giverny, the small village west of Paris where he painted his famous images of water lilies. No less an authority than the Art Institute of Chicago posted an erroneous date (the correct date is 1883) while “the democratic (and some would say preposterously anarchical)” Web site Wikitravel got it right, according to the study.
In the past, a relatively small number of sources produced most of the world’s information, and most readers and viewers took the names of top newspapers, magazines and television networks as a modest guarantee of accuracy.
But as information migrates onto the Internet and newspapers and network TV news outlets see their audiences declining, all that is changing. Today the World Wide Web is a user-driven medium, where teenage videographers and political activists of all stripes can post their messages, often in formats as sophisticated-looking as the sites mounted by television networks and major newspapers. The tidal wave of citizen-generated content has made it much harder to ferret out the most credible sources, which has many people alarmed, including some policy makers.
For example, in May, Sen. Joseph I. Lieberman, I-Conn., asked Google to remove online YouTube videos that he says al Qaeda and other terrorist groups post to spread false and slanted anti-Western information. The company removed some videos but refused to block all videos from certain groups, as Lieberman requested.
Terrorist propaganda aside, “there are fewer signposts” online to signal reliability, such as newspaper brand names, says Larry Pryor, an associate professor of journalism at the University of Southern California’s Annenberg School for Communication.
Wikis – user-generated online publications – like Wikipedia are edited by staff and other users only after they’ve been published online, unlike in traditional media, where editing comes before publication, notes Pryor. Furthermore, while some wiki entries are written by experts, others are contributed by people with no expertise in the subject matter, and it’s difficult or impossible for unwary readers to tell the difference.
In a critique of Wikipedia’s 2005 entry on “haute couture” – high fashion – Vogue magazine Editor Alexandra Shulman wrote that, “broadly speaking, it’s inaccurate and unclear. . . . There are a few correct facts included, but every value judgment it makes is wrong.”
Nevertheless, not all so-called new media is inaccurate, says David Perlmutter, a professor at the University of Kansas’ William Allen White School of Journalism. Take blogs, for example. “While some are merely sock puppets” spouting Republican or Democratic party talking points, “those are not very well-respected,” while the most popular political blogs are the less biased ones, he says.
In fact, online media frequently act as credibility watchdogs for traditional media, says Perlmutter. Many bloggers are experts, such as military officers and technology specialists, who are “big fact-checkers,” using their specialized knowledge to spot false information in areas such as war reporting, he says.
For example, “it was . . . Russ Kick’s Memory Hole, not The New York Times, that first broke pictures of military personnel brought home in [caskets] from Iraq,” said Yochai Benkler, a professor at Yale Law School.
Much online information also contains good clues with which to judge its credibility, says R. David Lankes, an associate professor at Syracuse University’s School of Information Studies. For example, blogs usually contain biographies of their authors, and wikis have a history of the editing changes to posted articles.
Google News and Yahoo! News – sites that aggregate what are supposedly the day’s top news stories – “are more scary” because they don’t disclose the rules on which their rankings are based, says Lankes. But the online world is huge, and there’s usually an alternate voice to consult on any issue, he says, “and that allays my fears a bit” about being misinformed.
The vast store of information available online has a major benefit: “We no longer have to rely on single authorities,” says Lankes. The downside is that “we have to work harder to determine credibility.”
But are Internet users prepared to be critical consumers of information? “The flaws in Wikipedia and other kinds of media are real” and “demonstrate how much we need to update our media literacy in a digital . . . era,” said Dan Gillmor, director of the Center for Citizen Media, a project to support grassroots journalism jointly supported by Harvard and Arizona State universities.
For example, when Wikipedia’s article on Pope Benedict XVI initially appeared – only a few hours after his election on April 19, 2005 – the page “suffered vandalism,” with false statements and accusations popping up that very same day, said Gillmor. “Over time,” the entry “will settle down to something all sides can agree on,” Gillmor blogged later that day, but for the moment, “the vandals are having a good time mucking with the page, I’m sorry to report. What jerks they are.”
“Our internal b.s. meters . . . work, but they’ve fallen into a low and sad level of use in the Big Media world,” Gillmor continued. “Many people tend to believe what they read. Others tend to disbelieve everything. Too few apply appropriate skepticism.”
In fact, some online material can mislead readers into thinking it’s from a more reliable source than it is. For example, “a hospital Web site may not look any different from the herbal remedy store’s Web site – or from an accomplished teenager’s hobby page,” said Frances Jacobson Harris, a professor of library administration at the University of Illinois at Urbana-Champaign. Even “relevancy ranking” – as in Google search results – can mislead, she said. For example, at one time a Google search for “Martin Luther King” pulled up a disguised anti-King hate site as its top result, partly because librarians had linked to the page as an example of untrustworthy information, said Harris.
And despite young people’s reputations as digital natives and Internet gurus, their “skills in effective navigation of today’s information landscape are actually somewhat limited,” Harris wrote. “They always find something when searching for information, just not always the best thing.”
For example, young researchers often “make credibility judgments that rely heavily on design and presentation features rather than content,” she continued.
Others argue that growing up online naturally makes one a savvier Internet user.
“Information overload” can overwhelm older generations, but the younger generation “doesn’t know the phrase,” says Penelope Trunk, a veteran blogger in Madison, Wis., who writes about careers in the Internet Age. Immersed in the online world practically from birth, “they’re just smarter about information.”
But “it’s not how old you are but how long you’ve been online” that improves research skills, says Lankes. While some expect young people to be Internet experts, Lankes says, “I don’t buy it. If we create this monolithic view of kids as technologically literate, we’ll do a great disservice to kids who aren’t.”
Some fear that the double burden of teaching old-fashioned literacy, still vital online, plus the critical thinking required to sort through the vast amount of online information will increase the so-called digital divide, leaving low-income students – those who don’t have computers or have limited computer literacy – further and further behind.
“The industry argues that the digital divide is gone, but that’s not true,” says Erik Bucy, an associate professor of telecommunications at Indiana University. “We have to think of access to digital technology as a cognitive problem and a social problem,” not just an issue of handing out computers, he says.
The Web opened to the public in the early 1990s, and “16 years in the evolution of man is not a long time,” says Lankes. Nevertheless, “already we’re seeing people learning to read it intelligently. Kids understand very well what they’re seeing in Wikipedia,” he says, knowing they must judge credibility “article by article” rather than trusting the site as a whole, as one might do with the Encyclopaedia Britannica.
The rules of collaborative, user-generated media like wikis have been developing for less than a decade, so it’s unrealistic to expect perfection, says Siva Vaidhyanathan, associate professor of media studies and law at the University of Virginia in Charlottesville. One promising approach is typified by Slashdot – a Web site that ranks technology news according to how many of its users rate a story as valuable, he says. Contributors earn “reputation scores” based on votes from other site users, and it becomes clear over time that some “are more credible than others,” he says.
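As a rough illustration of the general idea – not Slashdot’s actual karma system – a vote-weighted reputation score might be computed along these lines, with every name and number below purely hypothetical:

```python
# A toy vote-weighted reputation sketch in the spirit of the systems
# described above (not Slashdot's actual karma code). Votes from
# well-regarded users move a contributor's score more than votes from
# users with a poor track record. All names and numbers are invented.

from collections import defaultdict

def update_reputation(reputation, votes):
    """votes is a list of (voter, contributor, +1 or -1) tuples."""
    delta = defaultdict(float)
    for voter, contributor, vote in votes:
        # Weight each vote by the voter's own standing; a floor of 0.1
        # keeps heavily downvoted users from being silenced entirely.
        weight = max(reputation.get(voter, 1.0), 0.1)
        delta[contributor] += vote * weight
    for contributor, change in delta.items():
        reputation[contributor] = reputation.get(contributor, 1.0) + change
    return reputation

reputation = {"alice": 5.0, "bob": 1.0, "mallory": 0.2}
votes = [("alice", "bob", +1), ("mallory", "bob", -1), ("alice", "carol", +1)]
print(update_reputation(reputation, votes))
# Over many rounds, contributors whom trusted users consistently
# up-vote emerge as "more credible than others."
```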
Google’s PageRank algorithm, which ranks pages based on how many other Web pages link to them, amounts to a public “vote on credibility,” says Lankes. It has turned out to be another kind of reliability test that is fairly accurate and “very powerful.”
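For readers curious about the mechanics, here is a minimal power-iteration sketch of the PageRank idea on a hypothetical four-page web – a simplification for illustration, not Google’s production algorithm:

```python
# A toy PageRank sketch (a simplification, not Google's actual
# system): each page's score is the chance a random surfer lands on
# it, so links from highly ranked pages are worth more. The four-page
# "web" below is entirely hypothetical.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        # Every page gets a small "teleport" share regardless of links.
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "hospital.example":   ["university.example"],
    "university.example": ["hospital.example"],
    "blog.example":       ["hospital.example", "university.example"],
    "spam.example":       [],  # nobody links here, so its score stays low
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

The well-linked pages end up at the top of the list, which is the “vote on credibility” Lankes describes: the link structure itself, rather than any single authority, does the ranking.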
But some analysts call the idea that accurate information can arise from “collective intelligence” – the philosophy behind the Web’s user-generated media and user-based ranking systems – a pipe dream.
“One need only look at the composition of the Internet to understand why the ‘wisdom of crowds’ will never apply,” wrote Andrew Orlowski, a technology columnist for The Register in the United Kingdom. The Internet doesn’t represent society because “only a self-selecting few” have any interest in information projects, which “amplifies groupthink,” Orlowski charged. “Facts that don’t fit beliefs are discarded.”
The full report is available on CQ Researcher Online (subscription required).