
Thursday, January 12, 2012

Why a media company wants to be a tech incubator


Nieman Journalism Lab



Posted: 11 Jan 2012 11:00 AM PST

One side effect of downsizing at most newspapers: a surplus of office space. That may be a cold-blooded way of seeing the empty desks that haunt newsrooms and advertising departments, but in an era when newspapers get bought largely for the value of their underlying real estate, the fact is that’s square footage that could be put to use.
Consider the example of the Philadelphia Media Network, owner of The Philadelphia Inquirer and Philadelphia Daily News, which has welcomed three startups inside its walls with the launch of the Project Liberty Digital Incubator. Thanks to funds from the Knight Foundation, the media company is offering itself up as a rent-free test kitchen for six months to CloudMine, SnipSnap, and ElectNext, early-stage tech companies starting out in Philly.
PMN isn’t offering up a couch to crash on purely out of the kindness of its heart: As a condition of the incubator, it gets an early look at whatever apps, tools, or projects the teams are working on. That would be valuable in itself, particularly because the companies are focused on markets that align with newspapers: SnipSnap is working on a mobile app to scan and save coupons, ElectNext is building an app to better connect voters to candidates, and CloudMine is creating a platform for seamless app development.
But what PMN wants even more is to expose its staff to the world of startups and tech. There are clear lessons for journalism from people whose work emphasizes identifying audiences, monetization, and rapid iteration. If the journalists and geeks can bump into one another, there’s potential for some beneficial cross-pollination, Philadelphia Media Network CEO Gregory Osberg told me. The media network is working on its own digital offerings (remember, this is the same company offering Android tablets to readers), and the best way to speed up that process is to learn from companies operating in markets like e-commerce and mobile, Osberg said. The three companies each signed non-disclosure agreements to gain access to PMN data that might be helpful as they advance their work. That means within the next few months, we could see apps from the three companies branded under the Inquirer, Daily News, or Philly.com.
It’s like having a skunkworks without paying full retail price. In the media world, that’s a bonus considering the length of time it takes to recruit and build a team of developers, producers, and others who want to work in journalism. Even better: After this six-month period, they’ll bring in a fresh group of tech companies for a new round. “This takes us to market much quicker than if we were to staff up, which takes a big investment but takes a long time in the product development cycle,” Osberg said.
One thing Osberg is clear about is that while CloudMine, ElectNext, and SnipSnap are in the building and sharing the elevator with the rest of the staff of the media network, they’re not employees — their work is their property. And that’s a good thing. “We’re rooting for their success,” Osberg said. “We’re not here to absorb their companies or slow them down. We’re here to stimulate and become a catalyst for them.”
Lots of media companies are trying to adopt the methods, philosophy and talent of the independent (read: non-journalism related) tech community. In some cases, it’s through straight-up acquisitions (CNN and Zite, Financial Times and Assanka). Other times, it’s investment, as with Digital First Media, which runs the Journal Register Co. and MediaNews Group, announcing its own plans to invest in startups that align with corners of the journalism business like advertising, content, and audience development.
The Boston Globe has an informal incubator with people from a half-dozen small firms at various stages of development, all working out of the Globe’s headquarters. Jeff Moriarty, vice president of digital products for the Globe, told me over email: “We had extra space here at the Globe and wanted to create an environment around our digital lab and digital development area where we have smart people working on interesting things.” The companies (Twine, Muckrock, and Schedit, among others) work in areas like video and social media, making them a natural fit, and could provide support to the Globe’s own products in the future. “We figure that the more smart people we have in the room, the better our opportunities to test and explore new ideas and also to expand our network of contacts in the digital space in Boston,” he said.
In many cases, media companies are taking a quieter approach, offering hack day events like those at the Globe and The New York Times. Or it’s through grant-funded collaborations like the Knight-Mozilla News Technology Fellows, which dropped developers right in the middle of newsrooms at places like Al Jazeera English, Zeit Online, The Guardian, and the Globe.
When I asked Osberg what the best outcome for the project would be, he talked in terms of the impact on the Philadelphia community, not just his media properties. “Success would be that we would have some of their technology utilized in our product offerings, and that they were able to leverage the success of that offering in the marketplace to take their company to the next level,” he said.
Image by the University of Iowa Libraries used under a Creative Commons license
Posted: 11 Jan 2012 08:00 AM PST

For over 70 years here at the Nieman Foundation for Journalism, we’ve run the Nieman Fellowships, which allow a couple dozen journalists from around the world to spend a year studying here at Harvard. And today, we’re announcing a new fellowship partner that I’m really excited about.
I’m happy to announce the Nieman-Berkman Fellowship in Journalism Innovation. It’s a joint project between us here at Nieman and the Berkman Center for Internet & Society, the primary unit of the university dedicated to understanding our digital present and future. Berkman runs its own awesome fellowship program that brings technologists, social scientists, legal scholars, journalists, and others to Harvard. The Nieman-Berkman Fellow will be a full Nieman Fellow and a full Berkman Fellow, able to draw on both communities and help strengthen connections between technology and journalism.
So what’s this Nieman-Berkman Fellowship all about? We’re looking for someone who has a specific course of research or a project that they’d like to undertake — something that would have a substantial benefit to the larger world of journalism. We’re intentionally keeping the boundaries of that idea wide open — so proposals might deal with social media, with data visualization, with database analysis, with the underlying business models of online journalism, with newsroom structure, with networked journalism, with mobile consumption patterns, or anything else that plays a meaningful part in how digital journalism is evolving. If it’s a subject or field that we write about here at Nieman Lab, it probably makes sense as a proposal for the Nieman-Berkman Fellowship.
This fellowship is also open to a wider range of applicants than the other Nieman Fellowships. For instance, someone who works on the publishing or technology sides of a news organization could be a strong candidate, even if they aren’t reporters or editors.
When the Nieman-Berkman Fellow arrives on campus this fall, he or she will work with Nieman and Berkman to advance the work of the proposal, sharing their work and their findings with readers of Nieman Lab and with the Harvard community. It’s a pretty great gig — one I’d be applying for myself if I weren’t already here!
You can read a lot more about the Nieman-Berkman Fellowship over here, and I’m happy to answer specific questions by email. Because we’re announcing this fellowship a little later than usual, we’ve extended the deadline for applications to February 15 — so you’ve got a little over a month to think up a proposal and apply. This fellowship is open to both U.S. citizens and international applicants; we’ll do interviews with finalists in the spring and, if we find the right person, make an announcement in May. We look forward to seeing your ideas.
(An aside: Americans are also still very much welcome to apply for the traditional Nieman Fellowships, which have a deadline of January 31. Unfortunately, the deadline for international applicants was back in December. I’d strongly encourage any journalist who wants to apply for the Nieman-Berkman Fellowship to also apply for the standard fellowship — that’ll help your odds.)
Posted: 11 Jan 2012 06:45 AM PST

Gizmodo, the popular gadget site and pageview king of Gawker Media, debuted a new look last night that it’s calling HD view, and it’s big. Not big in the grand scheme of things — big in the number of pixels it takes up. Whereas most websites top out at around 1000 pixels in width, Gizmodo HD stretches like Plastic Man, with photos and videos growing wider and wider as the browser window does. On my 1900-pixel-wide monitor, pages like this one (photo-dominant) and this one (video-dominant) both resize all the way to blowout width. Call it the doublewide approach.
(The screenshot above is obviously less than full size; to see its full, 1920-by-1200-pixel glory, click here.)
This is the flip side of responsive design, the web-design idea that BostonGlobe.com’s recent launch brought to the attention of lots of news execs. In the case of the Globe (and in most other responsive efforts), the primary appeal is the ability to get small — to build a website that can look good both on your laptop and on your smartphone without having to build a separate mobile site. (The Globe’s website expands up to 1230 pixels, but not beyond that.) But responsive design works in the other direction too, and Gizmodo’s new look is an attempt to play with that — to give more space to the big photos and big videos that Gawker Media’s been trying to push over the past year.
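To make the contrast concrete, here is a minimal TypeScript sketch of the two behaviors described above: a Globe-style layout that caps its width and a Gizmodo-style fluid layout that tracks the browser window. The element id and the resize handling are illustrative assumptions, not either site’s actual code.

```typescript
// A minimal sketch, not either site's actual code. Assumes a page element
// with id "content" whose width we control directly.
const content = document.getElementById("content") as HTMLElement;

// Globe-style: shrink with the window, but never grow past a fixed cap
// (the article cites 1230 pixels for BostonGlobe.com).
function applyCappedWidth(maxWidthPx: number): void {
  content.style.width = `${Math.min(window.innerWidth, maxWidthPx)}px`;
}

// Gizmodo-HD-style: fill whatever width the browser window offers.
function applyFluidWidth(): void {
  content.style.width = `${window.innerWidth}px`;
}

// Re-apply on every resize; swap in applyFluidWidth() for the doublewide look.
window.addEventListener("resize", () => applyCappedWidth(1230));
applyCappedWidth(1230);
```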
At this point, HD view is very much a beta (it won’t work in all browsers, for instance, and there’s no place for comments), and seems more like a parlor trick than a feature. But why might a news organization be interested in a doublewide view? What might be the use cases for an HD view?
  • There’s still a class of user who (a) uses a desktop computer, where monitor sizes once considered outlandish (24-inch, 27-inch, 30-inch) are becoming more affordable and common, and (b), particularly on Windows, runs browser windows full screen. Those folks are used to seeing a bunch of whitespace to the left and right of their favorite websites, and this could fill that space and build something more immersive. With Gawker Media making bigger investments in video and art, it makes sense to play those as big as the browser will allow.
  • A theme running throughout Gawker’s controversial redesign last year was that it viewed television as both an important competitor and a production-value bar that Gawker Media felt it was approaching. “[W]e increasingly have the scale and production values of — say — cable television,” Nick Denton told us at the time: “[W]e'll compete for audiences with cable groups such as NBC Universal.” Well, Gizmodo HD fits perfectly into a world where screens are shifting and the television might move from the-place-where-you-watch-Mad-Men to, simply, the biggest and best content-agnostic screen in the house. To be fair, previous attempts to bring the web to big-screen televisions haven’t borne much fruit. But with everyone expecting a new TV push from Apple in 2012 — and with companies like The Wall Street Journal moving from web video to TV sets — it makes sense for a big online brand like Gawker Media to prepare for that eventuality.
  • Advertisers are always looking for new ways to draw attention, having soured at least a bit on the efficacy of banner ads. Gawker’s long been willing to push the boundaries with things like sponsored posts and site takeovers. Imagine the impact a site takeover could have when there’s twice as much space to take over.
It’ll probably be a while before the doublewide becomes much more than a novelty, but it’s worth thinking about how a news site might look different if, instead of thinking small (that is, mobile), it thought big.

Wednesday, January 11, 2012

Piano Media wants national paywalls all over Europe


Nieman Journalism Lab



Posted: 10 Jan 2012 06:00 AM PST
Liptov, Slovakia
The expansion of Bratislava-based Piano Media into Slovenia is just the beginning of the company’s efforts to bring national paywalls to five European countries by year’s end.
Eight Slovene media outlets have agreed to unite behind a single paywall starting Jan. 16. It’s the cable TV model: Pay a flat monthly fee for unlimited access to everything inside. I caught up with Piano CEO Tomas Bella to hear how it’s going in Slovakia, where the experiment began seven months ago. He’s not yet willing to share subscriber numbers, but he did share observations — mainly, that Slovak readers are not much different from those in the United States or elsewhere.
He said there are different types of readers. The first group “will sign up no matter what you do,” he said. “They just do it in the first week or first weeks. The price can be almost any price. They will pay, and they will pay for a year.” These are people who trust traditional media institutions and are willing to pay to help them survive.
As for everyone else, the barriers to subscribing range from inertia — some people need lots of naggy “here’s what you’re missing” emails — to philosophical opposition. “People were saying, in principle, I will never pay because the Internet should be free,” Bella said. He said he had expected a strong correlation between socioeconomic status and willingness to subscribe, but the divide turned out to be philosophical.
“The number of subscribers is still going up as more and more people are telling us that they were against the concept at first but now they got used to the idea and already feel comfortable with paying,” he said. “Last week I saw one post on Facebook that literally said: ‘You know what, I have woken up one day and realized that I do not know why I was against Piano all the time, and I have paid.’”
For Bella, a former newspaper editor who has tried and failed to get paywalls off the ground, that’s more satisfying than making money. He is out to reset the way people think about the value of news. He said the subscriber numbers were “not so big” at first but that the Slovak paywall generated €40,000 in its first month.
The biggest mistake, Bella said, was trying to charge users for comments. Five of the Slovak publishers wanted a way — any way — to help manage the daily deluge of comments on news articles. But citizens of a former Communist regime don’t want their free speech impinged upon. “This was a very special central European problem,” Bella said. In Slovenia, the paywall will only cover text and video at launch; publishers will be able to add in more kinds of content down the road.
The price for the Slovene package is just under €5 a month, a couple of euros more than in Slovakia. Piano’s market research found that to be about the most Slovenes are willing to pay for news, Bella said.
“It’s still, of course, not as much as publishers would like to have, and it’s still not, I would say, a finite price. But they understood that we need to start at some level of — look at it from the point of view of the reader, not from the point of view of how much the content costs. The research was really very strong. It said that if we go any higher, then we are losing money.”
Pricing for the Slovak paywall, at €2.90 per month, was based more on intuition than research, he said.
The subscription model in Slovenia remains the same: 40 percent of the proceeds goes to the media organization that initially captured the subscriber, 30 percent is distributed to all partners based on how much time the reader spends on their respective sites, and the remaining 30 percent goes to Piano. So if I sign up for Piano’s paywall at the website of Delo, Slovenia’s national broadsheet, and I spend most of my time at Delo’s website, most of my money goes to Delo. (Tracking time on site has proved to be the most complicated technical challenge, Bella said. That and going after users who avoid the paywall by creating multiple free trials.)
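As a quick illustration of that arithmetic, here is a toy sketch (not Piano’s actual accounting code) of how a single subscriber’s monthly fee would be divided under the 40/30/30 split described above; every outlet name other than Delo is hypothetical.

```typescript
// Toy illustration of the 40/30/30 split described above; not Piano's code.
interface TimeShare {
  outlet: string; // a participating site
  share: number;  // fraction of the reader's time spent there, 0..1 (shares sum to 1)
}

function splitSubscription(fee: number, signupOutlet: string, time: TimeShare[]): Map<string, number> {
  const payouts = new Map<string, number>();
  payouts.set("Piano", fee * 0.3);          // 30% to Piano
  payouts.set(signupOutlet, fee * 0.4);     // 40% to the outlet that captured the subscriber
  for (const t of time) {                   // remaining 30% pooled and divided by time spent
    payouts.set(t.outlet, (payouts.get(t.outlet) ?? 0) + fee * 0.3 * t.share);
  }
  return payouts;
}

// A reader who signs up at Delo and spends 80% of their time there
// (the second outlet name is hypothetical):
console.log(splitSubscription(4.89, "Delo", [
  { outlet: "Delo", share: 0.8 },
  { outlet: "SomeOtherOutlet", share: 0.2 },
]));
// => Piano ≈ €1.47, Delo ≈ €3.13, SomeOtherOutlet ≈ €0.29
```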
Bella said he hopes to sign up 1 percent of the Slovene population, or about 20,000 people. He said he expects to announce two or three new deals with publishers in Slovakia later this month.
Photo of Liptov, Slovakia by Martin Sojka used under a Creative Commons license.

Tuesday, January 10, 2012

Why media outlets team up in an election year


Nieman Journalism Lab



Posted: 09 Jan 2012 11:45 AM PST

We’ve reached the point in journalism where we barely bat an eye when two news organizations say they’re joining forces. Anything less than a merger is just not an earth mover these days, when egos, brands, unique audiences — all of the guarded, proprietary stuff that kept news companies at opposite ends of the sword — seem to matter less in the face of an uncertain journalism marketplace.
In that way the new partnership between NBC News and Newsweek/The Daily Beast to cover the 2012 election shouldn’t be too surprising. It’s a classic partnership of two organizations looking for a Doublemint effect: Double the resources, double the coverage, double the audience. The plan calls for campaign trail reporting from NBC (and a healthy dose of video) to appear in the pages of Newsweek and online at The Daily Beast. [UPDATE: See correction below.] Call it NBCWeekBeast. (NBeastCWeek?)
But there’s something about politics in particular that seems to bring out the hugging and sharing in news organizations. A presidential election brings out the heavy news artillery, and that means a flurry of scooplets coming from all directions — from the networks, from newspapers national and local, from blogs, from campaigns, and everywhere else. All that firepower pointed in the same direction makes the urge to team up more tempting than ever. (Take for example The New York Times’ Election 2012 iPhone app, which is built more on linking and aggregation than any Times product before it — this, despite the fact that the Times devotes enormous resources to its own coverage.)
History backs this instinct. After all, for years outlets — like the Times and CBS News, or ABC News and The Washington Post — have linked up for the purposes of polling. At the same time, debates, from local legislative races up to the presidential level, have long been collaborations across media, whether it’s the local newspaper and public media, or CNN, Politico, and The Los Angeles Times.
What’s interesting is how many of these partnerships derive from cross-media competitors. Pre-web, The New York Times and CBS News had reporters chasing the same stories — but a broadcast nightly news show and a morning newspaper could comfortably share an audience without excluding either. With everyone competing on the same platforms these days — the web, your smartphone — the calculus is different. And it’s unclear how far these partnerships will extend beyond election season — a beat that is both extended (the presidential election will last a lot longer than mega-events like the Oscars or the Super Bowl) and predictable (that once-every-four-years scheduling means there’s time to line up multiple outlets’ interests).
As indicated by the number of media outlets launching (or relaunching) their politics offerings, we also know it’s an area that can spike pageviews and draw a reliable audience. (The New Yorker’s the latest, just today.) Readers are on the hunt for their election coverage earlier than ever, be it tracking polls, candidate gaffes, new endorsements, or daily overviews, and news organizations are jockeying for position. And it doesn’t hurt that once you have a politics vertical it’s that much easier to take advantage of the spending on political ads. But that underlying tension between the journalist’s desire for exclusivity and the brand’s desire to aggregate content will be something to keep watching from here to election day.
Correction: This piece originally said the sharing would go both ways, from Newsbeast to NBC and from NBC to Newsbeast. In fact, it’s only the latter — NBC content flowing to Newsweek and The Daily Beast. Sorry.
Image by Jiheffe used under a Creative Commons license.
Posted: 09 Jan 2012 06:00 AM PST

Piano Media, the company that introduced a unified paywall for all major media in Slovakia last year, is expanding to Slovenia.
Eight Slovene media companies are participating — a group that includes nearly all of the nation’s major daily newspapers, a car magazine, a sports daily, two regional publications, and even a free city newspaper. A subscription to all of the content will cost €4.89 ($6.21) per month, two euros more expensive than the package in Slovakia. Users can opt to pay weekly (€1.99) or yearly (€48.90), but that’s as complex as the pricing gets.
Publishers get to choose what content goes behind the paywall and what remains free. They can also elect to make users pay for premium features, such as commenting, access to archives, or ad-free browsing.
The Slovak paywall generated more than €40,000 in its debut month of May 2011, the company said, which was split with publishers 70/30, Apple-style. We playfully dubbed it “the new Iron Curtain” — and the metaphor seems to hold up now, as Piano expects to reach three more European markets by year’s end. The company says it is negotiating with publishers in 11 countries.
Like Slovakia, Slovenia is a relatively small, monolingual country. The company estimates 1.3 million Slovenes, or 65 percent of the population, are online; about half of those read news on the web. It will be interesting to see how a national paywall might work in much larger European countries with more media choices. Would the non-participating media siphon users who don’t want to pay?
When we talked last year, Piano CEO Tomas Bella told me he hoped to change the attitudes of consumers accustomed to years of free riding. The company offered the simplicity of a single login and a single monthly bill. As he told me then: “We don't think it's a problem of people refusing to pay — we don't think it's a problem of money. It's a problem of convenience.”
Photo of Slovenia’s Julian Alps by Christian Mehlführer used under a Creative Commons license.

Sunday, January 8, 2012

Where to Go to Understand the World in 2012

The New York Times
My Alerts: Yuli Akhmada
January 8, 2012 1:18 AM
--------------------------------------
Travel: Where to Go to Understand the World in 2012
By NICHOLAS D. KRISTOF
Start with China and India, and not just the cities, writes The New York Times columnist.
Full Story:
http://www.nytimes.com/2012/01/08/travel/where-to-go-to-understand-the-world-in-2012.html?emc=tnt&tntemail0=y


Business Day / Mutual Funds: As Asia Slows Down, Investors May Still Want to Dive In
By CONRAD DE AENLLE
After Asian stocks underperformed other markets late last
year, some investment advisers see a buying opportunity.
Full Story:
http://www.nytimes.com/2012/01/08/business/mutfund/as-asia-slows-down-investors-may-still-want-to-dive-in.html?emc=tnt&tntemail0=y
---
Books / Sunday Book Review: Paperback Row
By IHSAN TAYLOR
Paperback books of particular interest.
Full Story:
http://www.nytimes.com/2012/01/08/books/review/paperback-row.html?emc=tnt&tntemail0=y
---

Saturday, January 7, 2012

Lessons from Murdoch on Twitter, and paywalls’ role in 2011-12


Nieman Journalism Lab



Posted: 06 Jan 2012 07:30 AM PST

Murdoch, Twitter, and identity: News Corp.’s Rupert Murdoch had a pretty horrible 2011, but he ended it with a curious decision, joining Twitter on New Year’s Eve. The account was quickly verified and introduced as real by Twitter chairman Jack Dorsey, dousing some of the skepticism about its legitimacy. His Twitter stream so far has consisted of a strange mix of News Corp. promotion and seemingly unfiltered personal opinions: He voiced his support for presidential candidate Rick Santorum (a former paid analyst for News Corp.’s Fox News) and ripped former Fox News host Glenn Beck.
But the biggest development in Murdoch’s Twitter immersion was about his wife, Wendi Deng, who appeared to join Twitter a day after he did and was also quickly verified as legitimate by Twitter. (The account even urged Murdoch to delete a tweet, which he did.) As it turned out, though, the account was not actually Deng, but a fake run by a British man. He said Twitter verified the account without contacting him.
This, understandably, raised a few questions about the reliability of identity online: If we couldn’t trust Twitter to tell us who on its service was who they said they were, the issue of online identity was about to become even more thorny. GigaOM’s Mathew Ingram chastised Twitter for its lack of transparency about the process, and The Washington Post’s Erik Wemple urged Twitter to get out of the verification business altogether: “The notion of a central authority — the Twitterburo, so to speak — sitting in judgment of authentic identities grinds against the identity of Twitter to begin with.” (Twitter has begun phasing out verification, limiting it to a case-by-case basis.)
Eric Deggans of the Tampa Bay Times argued that the whole episode proved that, regardless of what Twitter chooses to do, “the Internet is always the ultimate verification system for much of what appears on it.” Kara Swisher of All Things Digital unearthed the problem that led to the faulty verification in this particular case: a punctuation mix-up in communication with Deng’s assistant.
Columbia’s Emily Bell drew a valuable lesson from the Rupert-joins-Twitter episode: As they wade into the social web, news organizations, she argued, need to do some serious thinking about how much control they’re giving up to third-party groups who may not have journalism among their primary interests. Elsewhere in Twitter, NPR Twitter savant Andy Carvin and NYU prof Clay Shirky spent an hour on WBUR’s On Point discussing Twitter’s impact on the world.

Trend-spotting for 2011 and 2012: I caught the front end of year-in-review season in my last review before the holidays, after the Lab’s deluge of 2012 predictions. But 2011 reviews and 2012 previews kept rolling in over the past two weeks, giving us a pretty thoroughly drawn picture of the year that was and the year to come. We’ll start with 2011.
Nielsen released its list of the most-visited sites and most-used devices of the year, with familiar names — Google, Facebook, Apple, YouTube — at the top. And Pew tallied the most-talked-about subjects on social media: Osama bin Laden on Facebook and Egypt’s Hosni Mubarak on Twitter topped the lists, and Pew noted that many of the top topics were oriented around specific people and led by the traditional media.
The Next Web’s Anna Heim and Mashable’s Meghan Peters reviewed the year in digital media trends, touching on social sharing, personal branding, paywalls, and longform sharing, among other ideas. At PBS MediaShift, Jeff Hermes and Andy Sellars authored one of the most interesting and informative year-end media reviews, looking at an eventful year in media law. As media analyst Alan Mutter pointed out, though, 2011 wasn’t so great for newspapers: Their shares dropped 27 percent on the year.
One of the flashpoints in this discussion of 2011 was the role of paywalls in the development of news last year: Mashable’s Peters called it “the year the paywall worked,” and J-Source’s Belinda Alzner said the initial signs of success for paywalls are great news for the financial future of serious journalism. Mathew Ingram of GigaOM pushed back against those assertions, arguing that paywalls are only working in specific situations, and media prof Clay Shirky reflected on the ways paywalls are leading news orgs to focus on their most dedicated users, which may not necessarily be a bad thing. “The most promising experiment in user support means forgoing mass in favor of passion; this may be the year where we see how papers figure out how to reward the people most committed to their long-term survival,” he wrote.
Which leads us to 2012, and sets of media/tech predictions from the Guardian’s Dan Gillmor, j-prof Alfred Hermida, Mediaite’s Rachel Sklar, Poynter’s Jeff Sonderman, and Sulia’s Joshua Young. Sklar and Sonderman both asserted that news is going to move the needle online (especially on Facebook, according to Sonderman), and while Hermida said social media is going to start to just become part of the background, he argued that that’s a good thing — we’re going to start to find the really interesting uses for it, as Gillmor also said. J-prof Adam Glenn also chimed in at PBS MediaShift with his review of six trends in journalism education, including journo-programming and increased involvement in community news.

SOPA’s generation gap: The debate over Internet censorship and SOPA will continue unabated into the new year, and we’re continuing to see groups standing up for and against the bill, with the Online News Association and dozens of major Internet companies voicing their opposition. One web company that notoriously came out in favor of the bill, GoDaddy, faced the wrath of the rest of the web, with some 37,000 domains being pulled in two days. The web hosting company quickly pulled its support for SOPA, though it isn’t opposing the bill, either.
New York Times media critic David Carr also made the case against the bill, noting that it’s gaining support because many members of Congress are on the other side of a cultural/generational divide from those on the web. He quoted Kickstarter co-founder Yancey Strickler: “It's people who grew up on the Web versus people who still don't use it. In Washington, they simply don't see the way that the Web has completely reconfigured society across classes, education and race. The Internet isn't real to them yet.”
Forbes’ Paul Tassi wrote about the fact that many major traditional media companies have slyly promoted some forms of piracy over the past decade, and GigaOM’s Derrick Harris highlighted an idea to have those companies put some of their own money into piracy enforcement.

Tough times for the Times: It’s been a rough couple of weeks for The New York Times: Hundreds of staffers signed an open letter to Publisher Arthur Sulzberger Jr. expressing their frustration over various compensation and benefits issues. The Huffington Post’s Michael Calderone reported that the staffers’ union had also considered storming Sulzberger’s office or walking out, and Politico’s Dylan Byers noted that the signers covered a broad swath of the Times’ newsroom, cutting across generational lines.
The Atlantic’s Adam Clark Estes gave some of the details behind the union’s concerns about the inequity of the paper’s buyouts. But media consultant Terry Heaton didn’t have much sympathy: He said the union’s pleas represented an outmoded faith in the collective, and that Times staffers need to take more of an everyone-for-themselves approach.
The Times also announced it would sell its 16 regional newspapers for $143 million to Halifax Media Group, a deal that had been rumored for a week or two, and told Jim Romenesko it would drop most of its podcasts this year. To make matters worse, the paper mistakenly sent an email to more than 8 million followers telling them their print subscriptions had been canceled.
Reading roundup: Here’s what else you might have missed over the holidays:
— A few thoughtful postscripts in the debate over PolitiFact and fact-checking operations: Slate’s Dave Weigel and Forbes’ John McQuaid dissected PolitiFact’s defense, and Poynter’s Craig Silverman offered some ideas for improving fact-checking from a recent roundtable. And Greg Marx of the Columbia Journalism Review argued that fact-checkers are over-reaching beyond the bounds of the bold language they use.
— A couple of good pieces on tech and the culture of dissent from Wired: A Sean Captain feature on the efforts to meet the social information needs of the Occupy movement, and the second part of Quinn Norton’s series going inside Anonymous.
— For Wikipedia watchers, a good look at where the site is now and how it’s trying to survive and thrive from The American Prospect.
— Finally, a deep thought about journalism for this weekend: Researcher Nick Diakopoulos’ post reconceiving journalism in terms of information science.
Crystal ball photo by Melanie Cook used under a Creative Commons license.

Friday, January 6, 2012

Hacking consensus: How we can build better arguments online


Nieman Journalism Lab



Posted: 05 Jan 2012 11:30 AM PST
In a recent New York Times column, Paul Krugman argued that we should impose a tax on financial transactions, citing the need to reduce budget deficits, the dubious value of much financial trading, and the literature on economic growth. So should we? Assuming for a moment that you’re not deeply versed in financial economics, on what basis can you evaluate this argument? You can ask yourself whether you trust Krugman. Perhaps you can call to mind other articles you’ve seen that mentioned the need to cut the deficit or questioned the value of Wall Street trading. But without independent knowledge — and with no external links — evaluating the strength of Krugman’s argument is quite difficult.
It doesn’t have to be. The Internet makes it possible for readers to research what they read more easily than ever before, provided they have both the time and the ability to filter reliable sources from unreliable ones. But why not make it even easier for them? By re-imagining the way arguments are presented, journalism can provide content that is dramatically more useful than the standard op-ed, or even than the various “debate” formats employed at places like the Times or The Economist.
To do so, publishers should experiment in three directions: acknowledging the structure of the argument in the presentation of the content; aggregating evidence for and against each claim; and providing a credible assessment of each claim’s reliability. If all this sounds elaborate, bear in mind that each of these steps is already being taken by a variety of entrepreneurial organizations and individuals.

Defining an argument

We’re all familiar with arguments, both in media and in everyday life. But it’s worth briefly reviewing what an argument actually is, as doing so can inform how we might better structure arguments online. “The basic purpose of offering an argument is to give a reason (or more than one) to support a claim that is subject to doubt, and thereby remove that doubt,” writes Douglas Walton in his book Fundamentals of Critical Argumentation. “An argument is made up of statements called premises and a conclusion. The premises give a reason (or reasons) to support the conclusion.”
So an argument can be broken up into discrete claims, unified by a structure that ties them together. But our typical conceptions of online content ignore all that. Why not design content to more easily assess each claim in an argument individually? UI designer Bret Victor is working on doing just that through a series of experiments he collectively calls “Explorable Explanations.”
Writes Victor:
A typical reading tool, such as a book or website, displays the author’s argument, and nothing else. The reader’s line of thought remains internal and invisible, vague and speculative. We form questions, but can’t answer them. We consider alternatives, but can’t explore them. We question assumptions, but can’t verify them. And so, in the end, we blindly trust, or blindly don’t, and we miss the deep understanding that comes from dialogue and exploration.
The alternative is what he calls a “reactive document” that imposes some structure onto content so that the reader can “play with the premise and assumptions of various claims, and see the consequences update immediately.”
Although Victor’s first prototype, Ten Brighter Ideas, is a list of recommendations rather than a formal argument, it gives a feel of how such a document could work. But the specific look, feel and design of his example aren’t important. The point is simply that breaking up the contents of an argument beyond the level of just a post or column makes it possible for authors, editors or the community to deeply analyze each claim individually, while not losing sight of its place in the argument’s structure.

Show me the evidence (and the conversation)

Victor’s prototype suggests a more interesting way to structure and display arguments by breaking them up into individual claims, but it doesn’t tell us anything about what sort of content should be displayed alongside each claim. To start with, each claim could be accompanied by relevant links that help the reader make sense of that claim, either by providing evidence, counterpoints, context, or even just a sense of who does and does not agree.
At multiple points in his column, Krugman references “the evidence,” presumably referring to parts of the economics literature that support his argument. But what is the evidence? Why can’t it be cited alongside the column? And, while we’re at it, why not link to countervailing evidence as well? For an idea of how this might work, it’s helpful to look at a crowd-sourced fact-checking experiment run by the nonprofit NewsTrust. The “TruthSquad” pilot has ended, but the content is still online. One thing that NewsTrust recognized was that rather than just being useful for comment or opinion, the crowd can be a powerful tool for sourcing claims. For each fact that TruthSquad assessed, readers were invited to submit relevant links and mark them as For, Against, or Neutral.
The links that the crowd identified in the NewsTrust experiment went beyond direct evidence, and that’s fine. It’s also interesting for the reader to see what other writers are saying, who agrees, who disagrees, etc. The point is that a curated or crowd-sourced collection of links directly relevant to a specific claim can help a reader interested in learning more to save time. And allowing space for links both for and against an assertion is much more interesting than just having the author include a single link in support of his or her claim.
Community efforts to aggregate relevant links along the lines of the TruthSquad could easily be supplemented both by editor-curators (which NewsTrust relied on) and by algorithms which, if not yet good enough to do the job on their own, can at least lessen the effort required by readers and editors. The nonprofit ProPublica is also experimenting with a more limited but promising effort to source claims in their stories. (To get a sense of the usefulness of good evidence aggregation on a really thorny topic, try this post collecting studies of the stimulus bill’s impact on the economy.)
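To make the structure described above a bit more concrete, here is a minimal sketch of how an argument broken into discrete claims, each carrying TruthSquad-style links tagged For, Against, or Neutral, might be represented. The field names are illustrative assumptions, not any product’s actual schema.

```typescript
// Illustrative schema only; field names are assumptions, not NewsTrust's or
// ProPublica's actual data model.
type Stance = "for" | "against" | "neutral";

interface EvidenceLink {
  url: string;          // a source submitted by readers or curated by editors
  stance: Stance;       // does it support, oppose, or merely contextualize the claim?
  submittedBy: string;  // reader, editor, or algorithm
}

interface Claim {
  text: string;                                     // one discrete premise or conclusion
  evidence: EvidenceLink[];                         // curated links for and against
  assessment?: "compelling" | "disputed" | "weak";  // optional editorial verdict (see next section)
}

interface Argument {
  premises: Claim[];  // the reasons offered in support
  conclusion: Claim;  // the statement they are meant to establish
}
```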

Truth, reliability, and acceptance

While curating relevant links allows the reader to get a sense of the debate around a claim and makes it easier for him or her to learn more, making sense of evidence still takes considerable time. What if a brief assessment of the claim’s truth, reliability or acceptance were included as well? This piece is arguably the hardest of those I have described. In particular, it would require editors to abandon the view from nowhere to publish a judgment about complicated statements well beyond traditional fact-checking. And yet doing so would provide huge value to the reader and could be accomplished in a number of ways.
Imagine that as you read Krugman’s column, each claim he makes is highlighted in a shade between green and red to communicate its truth or reliability. This sort of user interface is part of the idea behind “Truth Goggles,” a master’s project by Dan Schultz, an MIT Media Lab student and Mozilla-Knight Fellow. Schultz proposes to use an algorithm to check articles against a database of claims that have previously been fact-checked by Politifact. Schultz’s layer would highlight a claim and offer an assessment (perhaps by shading the text) based on the work of the fact checkers.
The beauty of using color is the speed and ease with which the reader is able to absorb an assessment of what he or she is reading. The verdict on the statement’s truthfulness is seamlessly integrated into the original content. As Schultz describes the central problem:
The basic premise is that we, as readers, are inherently lazy… It’s hard to blame us. Just look at the amount of information flying around every which way. Who has time to think carefully about everything?
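A rough sketch of the kind of layer Schultz describes might look like the following: match each sentence against a local list of previously fact-checked claims and pick a highlight color from the verdict. The word-overlap similarity, the 0.6 threshold, and the color choices are all assumptions for illustration, not Truth Goggles’ actual method.

```typescript
// Illustrative only; not Truth Goggles' actual matching algorithm.
interface CheckedClaim {
  text: string;
  verdict: "true" | "mostly-true" | "half-true" | "false";
}

// Shades from green to red, as the article imagines.
const verdictColor: Record<CheckedClaim["verdict"], string> = {
  "true": "#2e8b57",
  "mostly-true": "#9acd32",
  "half-true": "#ffd700",
  "false": "#dc143c",
};

// Crude word-overlap similarity; a real system would need fuzzier matching.
function similarity(a: string, b: string): number {
  const words = (s: string) => new Set(s.toLowerCase().match(/[a-z']+/g) ?? []);
  const wa = words(a);
  const wb = words(b);
  const common = [...wa].filter((w) => wb.has(w)).length;
  return common / Math.max(wa.size, wb.size, 1);
}

// Returns a highlight color for a sentence, or null if nothing in the
// fact-check database is a close enough match.
function highlightColor(sentence: string, db: CheckedClaim[]): string | null {
  let best: { claim: CheckedClaim; score: number } | null = null;
  for (const claim of db) {
    const score = similarity(sentence, claim.text);
    if (!best || score > best.score) best = { claim, score };
  }
  return best && best.score >= 0.6 ? verdictColor[best.claim.verdict] : null;
}
```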
Still, the number of statements that PolitiFact has checked is relatively small, and what I’m describing requires the evaluation of messy empirical claims that stretch the limits of traditional fact-checking. So how might a publication arrive at such an assessment? In any number of ways. For starters, there’s good, old-fashioned editorial judgment. Journalists can provide assessments, so long as they resist the view from nowhere. (Since we’re rethinking the opinion pages here, why not task the editorial board with such a role?)
Publications could also rely on other experts. Rather than asking six experts to contribute to a “Room for Debate”-style forum, why not ask one to write a lead argument and the others not merely to “respond,” but to directly assess the lead author’s claims? Universities may be uniquely positioned to help in this, as some are already experimenting with polling their own experts on questions of public interest. Or what if a Quora-like commenting mechanism was included for each claim, as Dave Winer has suggested, so that readers could offer assessments, with the best ones rising to the top?
Ultimately, how to assess a claim is a process question, and a difficult one. But numerous relevant experiments exist in other formats. One new effort, Hypothes.is, is aiming to add a layer of peer review to the web, reliant in part on experts. While the project is in its early stages, its founder Dan Whaley is thinking hard about many of these same questions.

Better arguments

What I’ve described so far may seem elaborate or resource-intensive. Few publications these days have the staff and the time to experiment in these directions. But my contention is that the kind of content I am describing would be of dramatically higher value to the reader than the content currently available. And while Victor’s UI points towards a more aggressive restructuring of content, much could be done with existing tools. By breaking up an argument into discrete claims, curating evidence and relevant links, and providing credible assessments of those claims, publishers would equip readers to form opinions on merit and evidence rather than merely on trust, intuition, or bias. Aggregation sites like The Atlantic Wire may be especially well-positioned to experiment in this direction.
I have avoided a number of issues in this explanation. Notably, I have neglected to discuss counter-arguments (which I believe could be easily integrated) and haven’t discussed the tension between empirical claims and value claims (I have assumed a focus on the former). And I’ve ignored the tricky psychology surrounding bias and belief formation. Furthermore, some might cite the recent PolitiFact Lie of the Year controversy as evidence that this sort of journalism is too difficult. In my mind, that incident further illustrates the need for credible, honest referees.
Returning once more to Krugman’s argument, imagine the color of the text signaling whether his claims about financial transactions and economic growth are widely accepted. Or mousing over his point about reducing deficits to quickly see links providing background on the issue. What if it turned out that all of Krugman’s premises were assessed as compelling, but his conclusion was not? It would then be obvious that something was missing. Perhaps more interestingly, what if his conclusion was rated compelling but his claims were weak? Might he be trying to convince you of his case using popular arguments that don’t hold up, rather than the actual merits of the case? All of this would finally be apparent in such a setup.
In rethinking how we structure and assess arguments online, I’ve undoubtedly raised more questions than I’ve answered. But hopefully I’ve convinced you that better presentation of arguments online is at least possible. Not only that, but numerous hackers, designers, and journalists — and many who blur the lines between those roles — are embarking on experiments to challenge how we think about content, argument, truth, and credibility. It is in their work that the answers will be found.
Image by rhys_kiwi used under a Creative Commons license.
Posted: 05 Jan 2012 07:30 AM PST
Twenty-nine major news organizations have signed on as investors in NewsRight, a newly launched company that aims to make it easier for publishers to license and track their content on the web.
David Westin, the president of NewsRight and former head of ABC News, says news organizations are suffering even though demand for news on the web is exploding, calling it an “imperfection in the marketplace.”
“Much of that digital growth is coming to the benefit of companies who themselves are not hiring reporters, or at least not very many reporters. They are relying on content taken from websites of the traditional news providers,” Westin said.
That’s a polite way of addressing unresolved tension between traditional news organizations and the aggregators, bloggers, and scrapers — ”some of which are perfectly legitimate, some of which are perfectly outrageous, and a fair number of which lie in between and are subject to honest disagreement,” Westin said.
The details are still being worked out, but the company will provide a platform for news organizations to license and distribute clean feeds of their content to third parties. That software includes analytics to help measure the reach of the content — and find out whether it’s being ripped off. NewsRight will provide legal guidance to publishers where necessary.
A little history: Remember the beacon? Back in 2009, the Associated Press took a somewhat more antagonistic approach to protecting its intellectual property on the web. We reported on the AP’s plans to build AP News Registry, “a way to identify, record and track every piece of content AP makes available to its members and other paying customers.” Part of that plan was the beacon, a little bit of JavaScript embedded into the AP’s syndicated news feeds, which helped expose people who, in the AP’s view, were scraping or, well, over-aggregating its material. The AP took a lot of flak in the journalism universe.
The beacon is still very much alight and integrated into the NewsRight platform, which AP spun off into its own separate concern. The company is tracking more than 16,000 websites that use material from almost 900 news sites in its database, Westin said, and the software is measuring more than 160 million unique readers and four billion impressions a month.
Most of the websites that auto-scrape AP news feeds without permission don’t remove the tracking code, Westin said. To hunt down those savvy enough to remove the beacon, the tracking software also scours the web for text that matches the source material and flags anything that’s a 70-percent match or stronger.
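For a sense of how that kind of matching might work, here is a rough sketch under stated assumptions (not a description of NewsRight’s actual software): compare overlapping word shingles between the source article and a candidate page, and flag anything at or above the 70-percent mark.

```typescript
// Rough sketch only; the shingle size and Jaccard measure are assumptions,
// not a description of NewsRight's tracking software.
function shingles(text: string, size = 5): Set<string> {
  const words = text.toLowerCase().match(/[a-z0-9']+/g) ?? [];
  const out = new Set<string>();
  for (let i = 0; i + size <= words.length; i++) {
    out.add(words.slice(i, i + size).join(" "));
  }
  return out;
}

// Jaccard similarity between the source article and a candidate page.
function matchScore(source: string, candidate: string): number {
  const a = shingles(source);
  const b = shingles(candidate);
  const common = [...a].filter((s) => b.has(s)).length;
  const union = a.size + b.size - common;
  return union === 0 ? 0 : common / union;
}

// Flag candidates that match the source at 70 percent or more.
const isFlagged = (source: string, candidate: string): boolean =>
  matchScore(source, candidate) >= 0.7;
```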
Westin, who spent years as a litigator in Washington, said NewsRight is not Righthaven, the aggressive copyright enforcer that has all but folded. “We have not been set up first and foremost as a litigation shop,” Westin said. “Now, that doesn’t mean down the road there won’t be litigation. I hope there’s not. Some people may decide to sue, and we can support that with the data we gather, the information we gather. But…those are very expensive, cumbersome, time-consuming processes.”
NewsRight’s partner news organizations include Advance Publications, A.H. Belo, Community Newspaper Holdings, Gatehouse Media, The Gazette Company, Hearst Newspapers, Journal Communications, McClatchy, MediaNews, The New York Times Co., Scripps, and The Washington Post Co. AP remains on the NewsRight board and is a minority shareholder. Westin said NewsRight is accepting new applications for news organizations and bloggers who want to syndicate their content.
Posted: 05 Jan 2012 07:15 AM PST
It’s an emerging issue of our time and place. They know too much about us, and we know too little about what they know. We do know that what they know about us is increasingly determining what they choose to give us to read. We wonder: What are we missing? And just who is making those decisions?

Today, in 2012, those questions are more pressing in our age of news deluge. We’re confronted at every turn, at every finger gesture, with more to read or view or listen to. It’s not just the web: It’s also the smartphone and especially the tablet, birthing new aggregator products — Google Currents and Yahoo Livestand have joined Flipboard, Pulse, Zite, and AOL Editions — every month. Compare for a moment the “top stories” you get on each side-by-side, and you’ll be amazed. How did they get there? Why are they so different?
Was it some checkbox I checked (or didn’t?!) at sign-in? Using Facebook to sign in seemed so easy, but how is that affecting what I get? Are all those Twitterees I followed determining my story selection? (Or maybe that’s why I’m getting so many Chinese and German stories?) Did I tell the Times to give the sports section such low priority? The questions are endless, a ball of twine we’ve spun in declaring some preferences in our profiles over the years, wound ever wider by the intended (or unintended) social curation of Facebook and Twitter, and multiplied by the unseen but all-knowing algorithms that think they know what we really want to read, more than we do. (What if they are right? Hold that thought.)
The “theys” here aren’t just the digital behemoths. Everyone in the media business — think Netflix and The New York Times as much as Pandora and People — wants to do this simple thing better: serve their customers more of what they are likely to consume so that they’ll consume more — perhaps buying digital subscriptions, services, or goods and providing very targetable eyes for advertisers. It’s not a bad goal in and of itself, but sometimes it feels like it is being done to us, rather than for us.
Our concern, and even paranoia, is growing. Take Eli Pariser’s well-viewed (500,000 times, just on YouTube) May 2011 TED presentation on “filter bubbles,” which preceded his book of the same name, published in June. In the talk, Pariser describes the fickle faces of Facebook and Google, making “invisible algorithmic editing of the web” an issue. He tells the story of how a good progressive like himself, a founder of MoveOn.org, likes to keep in touch with conservative voices and included a number of them in his early Facebook pages.
He then describes how Facebook, as it watched his actual reading patterns — he tended to read his progressive friends more than his conservative ones — began surfacing the conservative posts less and less over time, leaving his main choices (others, of course, are buried deeper down in his datastream, but not easily surfaced on that all-important first screen of his consciousness) those of like-minded people. Over time, he lost the diversity he’d sought.
Citing the 57 unseen filters Google uses to personalize its results for us, Pariser notes that it’s a personalization that doesn’t even seem personalized, or easily comparable: “You can’t see how different your search results are than your friends…We’re seeing a passing of the torch from human gatekeepers to algorithmic ones.”
Pariser’s worries have been echoed by a motley crew we can call algorithmic and social skeptics. Slowly, Fear of Facebook has joined vague grumbles about Google and ruminations about Amazon’s all-knowing recommendations. Ping, we’ve got a new digital problem on our hands. Big Data — now well-advertised in every airport and every business magazine as the new business problem of the digital age to pay someone to solve — has gotten very personal. We are more than the sum of our data, we shout. And why does everyone else know more about me than I do?
The That’s My Datamine Era has arrived.
So we see Personal.com, a capitalist solution to the uber-capitalist usage of our data. I’ve been waiting for a Personal.com (and the similar Singly.com) to come along. What’s more American than having the marketplace harness the havoc that the marketplace hath wrought? So Personal comes along with the bold-but-simple notion that we should individually decide who gets to see our own data, our own preferences, and our own clickstreams — and be paid for the privilege of granting access (with Personal taking 10 percent of whatever bounty we take in from licensing our stuff).
It’s a big, and sensible, idea in and of itself. Skeptics believe the horse has left the barn, saying that so much data about us is already freely available out there to ad marketers as to make such personal databanks obsolete before they are born. They may be forgetting the power of politics. While the FCC, FTC, and others have flailed at the supposed excesses of digital behemoths, they’ve never figured out how to rein in those excesses. Granting consumers some rights over their own data — a Consumer Data Bill of Rights — would be a populist political issue, for either Republicans or Democrats or both. But, I digress.
I think there’s a way for us to reclaim our reading choices, and I’ll call it the News Dial-o-Matic, achievable with today’s technology.
While Personal.com gives us 121 “gem” lockers — from “Address” to “Women’s Shoes,” with data lockers for golf scores, beer lists, books, house sitters, and lock combinations along the way — we want to focus on news. News, after all, is the currency of democracy. What we read, what she reads, what they read, what I read all matter. We know we have more choice than any generation in history. In this age of plenty, how do we harness it for our own good?
Let’s make it easy, and let’s use technology to solve the problem technology has created. Let’s think of three simple news-reading controls that could right the balance among choice, the social whirl, and technology. We can even imagine them as three dials, nicely circular ones, that we can adjust with a flick of the finger or of the mouse, changing them at our whim or time of day.
The three dials control the three converging factors that we’d like to have determine our news diet.

Dial #1: My Sources

This is the traditional title-by-title source list, deciding which titles from global news media to local blogs I want in my news flow.

Dial #2: My Networks

Social curation is one of the coolest ideas to come along. Why should I have to rely only on myself to find what I like (within or in addition to My Sources) when lots of people like me are seeking similar content? My Facebook friends, though, will give me a very different take than those I follow on Twitter. My Gmail contact list would provide another view entirely. In fact, as Google Circles has philosophized, “You share different things with different people. But sharing the right stuff with the right people shouldn’t be a hassle.” The My Networks dial lets me tune my reading of different topics by different social groups. In addition, the just-announced NewsRight — the AP News Registry spin-off intended to market actionable intelligence about news reading in the U.S. — could even play a role here. (More on NewsRight here and here.)

Dial #3: The Borg

The all-knowing, ever-smarter algorithm isn’t going away — and we don’t want it to. We just want to control it — dial it down sometimes. I like thinking of it in sci-fi terms, and the Borg from “Star Trek” well illustrates its potential maniacal drive. (I love the Wikipedia Borg definition: “The Borg manifest as cybernetically-enhanced humanoid drones of multiple species, organized as an interconnected collective, the decisions of which are made by a hive mind, linked by subspace radio frequencies. The Borg inhabit a vast region of space in the Delta Quadrant of the galaxy, possessing millions of vessels and having conquered thousands of systems. They operate solely toward the fulfilling of one purpose: to ‘add the biological and technological distinctiveness of other species to [their] own’ in pursuit of their view of perfection.”) The Borg knows more about our habits than we’d like, and we can use it well, but let’s have us be the ones doing the dialing up and down.
Three simple round dials. They could harness the power of our minds, our relationships, and our technologies. They could utilize the smarts of human gatekeepers and of algorithmic ones. And they would return power to where it belongs, to us.
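As a toy sketch of how those three dials might combine (an illustration under assumptions; only the dial names come from the piece), each story could carry a score from each factor, with the dials acting as weights:

```typescript
// Illustrative only; the weighted-sum scoring is an assumption.
interface Story {
  title: string;
  sourceScore: number;   // how strongly My Sources favors this title, 0..1
  networkScore: number;  // how strongly My Networks (friends, follows) favors it, 0..1
  algoScore: number;     // how strongly The Borg (the algorithm) favors it, 0..1
}

interface Dials {
  mySources: number;   // each dial runs from 0 (off) to 1 (full)
  myNetworks: number;
  theBorg: number;
}

function rankStories(stories: Story[], dials: Dials): Story[] {
  const score = (s: Story) =>
    dials.mySources * s.sourceScore +
    dials.myNetworks * s.networkScore +
    dials.theBorg * s.algoScore;
  return [...stories].sort((a, b) => score(b) - score(a));
}

// Dial The Borg down for a deliberate morning read, up for serendipity later.
```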
Where are the dials? Who powers them? Facebook, the new home page of our time, would love to, but so would Google, Amazon, and Apple, among a legion of others. Personal.com would love to be that center, as it would any major news site (The New York Times, Zite-powered CNN, Yahoo News). We’ll leave that question to the marketplace.
Lastly, what are the newsonomics of the News Dial-o-Matic? As we perfect what we want to read, the data capturing it becomes even more valuable to anyone wanting to sell us stuff. Whether that gets monetized by us directly (through the emerging Personals of the world), or a mix of publishers, aggregators, or ad networks would be a next battleground. And then: What about the fourth wheel, as we dial up and down what we’re in the marketplace to buy right now? Wouldn’t that be worth a tidy sum?

Patch Business Model Flounders


Newspaper Death Watch



Posted: 05 Jan 2012 08:25 AM PST
We’ve posted several positive items about the local Patch operation in our community, a one-person news bureau that has become our favorite – and most timely – source of information about local events. So we feel it’s also important to share the news that AOL’s Patch operation, a constellation of more than 800 hyperlocal news sites, looks like a train wreck.
Business Insider says Patch has generated only about $8 million in revenue in 2011 on an investment of more than $160 million. InvestorPlace says revenues were closer to $20 million, but that Patch still lost $150 million on the year. Some investors are calling for the head of Tim Armstrong, the former Google executive who took the helm at AOL nearly three years ago. Armstrong conceived of Patch in 2007 and funded the first two years of its operations before assuming the top job at AOL in 2009 and buying Patch outright. Since then he’s embarked upon an aggressive expansion program to place hyperlocal news bureaus in as many US locations as possible. He’s also spent lavishly on the acquisitions of Huffington Post and TechCrunch. At this point, critics are calling the strategy a bust.
The problem with Patch is that the hyperlocal revenue model doesn’t work nearly as well as the hyperlocal news model. According to Business Insider, Patch sells advertising through a network of mostly outsourced telesales representatives. It’s clear that these salespeople don’t have their tentacles into the local communities that are the core of Patch’s model. The advertising on our own local outlet is mostly a mix of display ads from big national brands (presumably sold at remainder prices), Google AdSense, and a smattering of classifieds. With that kind of revenue base, it’s not surprising Patch is losing a fortune.
As we’ve argued before, the hyperlocal model needs to work from both the content and revenue perspectives. Patch has clearly succeeded in hiring editors who are closely tied in to their communities, but it isn’t doing that on the sales side. This is a tough problem to solve. Small businesses aren’t big advertisers to begin with, and the cost of deploying dedicated sales reps to 800 local communities would be far higher than the centralized telesales model. On the other hand, the centralized model isn’t exactly killing it.
We hope Patch figures it out, because it’s inventing some creative new ways to report the news. We continue to like the business model of Sacramento Press, which positions itself as an integrated marketing partner rather than an advertising outlet. Addiction to advertising revenue is one of the reasons newspapers are in so much trouble in the first place. In its current iteration, Patch appears to be making the same mistakes.

Miscellany

As if reporters didn’t already like to gripe enough, there’s a new website where they can do it anonymously in public. It’s called Dash30Dash.org, and it was started by a former newspaper reporter who wants “to give reporters, editors and others a chance to post comments about their jobs and their ever-changing profession.” So far, it looks like the commentaries are mostly limited to contributions from the site’s creator, but it’s still early. The writing is lively and pointed, so check it out.

An Australian philanthropist and Internet entrepreneur has pledged more than $15 million to fund a new, nonprofit media venture called The Global Mail. Graeme Wood says he has only one goal in mind: "produce public-interest journalism."
Wood, whose personal fortune is estimated at $337 million, was apparently taken with the example of ProPublica in the U.S. That nonprofit investigative venture was also started with a large grant from a single donor but has been successfully diversifying its support base and now employs 34 editorial staff members. Wood’s commitment to support The Global Mail for at least five years resulted from a dinner party conversation with former Australian Broadcast Corp. journalist Monica Attard, who is now the site’s editor-in-chief. That’s pretty good sales efficiency in our book.