Politicians. Businesses. Banks. All are worried about their reputation.
It affects elections. It can move stock prices. And it can create jobs: think of all those social media positions created to placate disgruntled Twitter followers.
That’s why so many are investing in sentiment analysis technology that promises to hedge against this type of risk.
Technology giants such as IBM and SAS offer clients software packages that comb the Web for clues to consumer sentiment. Those programs parse language and attempt to understand the meaning of words, pairs of words or phrases, in context.
Luckily, rather than having to create a new methodology for this type of analysis, we can use an API published by several Stanford students that takes care of the hard work for us. The software counts the number of positive and negative tweets containing specific keywords (for example, ‘BofA’, ‘SuperStorm’ or ‘Christie’).
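Once the API returns a polarity label for each tweet, tallying the positives and negatives is straightforward. The sketch below assumes a 0/2/4 polarity scale (0 = negative, 2 = neutral, 4 = positive); check the API’s documentation for the actual values it returns, since this scale and the function name are illustrative assumptions:

```python
def tally_sentiment(classified):
    """Count positive and negative tweets from API results.

    classified: list of (tweet_text, polarity) pairs, where polarity
    is assumed to be 0 = negative, 2 = neutral, 4 = positive.
    Returns a (positive_count, negative_count) tuple.
    """
    positive = sum(1 for _, polarity in classified if polarity == 4)
    negative = sum(1 for _, polarity in classified if polarity == 0)
    return positive, negative
```

Neutral tweets are deliberately ignored here, since the pitch is about the balance of positive versus negative chatter.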
The Guardian used the tool to analyze the sentiment around Rupert and James Murdoch in the wake of the British tabloid cell phone hacking scandal.
Develop a site that uses the aforementioned API to track either large publicly traded companies or elections.
At the same time, the visualization would track stock prices and company news, or polling results, to see if any correlation emerges.
I’m open to how this news app might be designed, but I imagine it will have three main elements: some kind of score based on the number of positive and negative tweets; a graphic for each keyword, #hashtag or username; and a metric to measure a topic’s sentiment against, which could be polling results, stock prices or a map showing where the most negative and positive tweets are coming from.
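The “score” element could be computed many ways; a minimal sketch is a net-sentiment ratio on a -1 to +1 scale, which is one illustrative choice rather than anything prescribed by the API:

```python
def sentiment_score(positive, negative):
    """Net sentiment on a -1.0 (all negative) to +1.0 (all positive) scale.

    Returns 0.0 when there are no classified tweets, so a quiet topic
    reads as neutral rather than raising a division error.
    """
    total = positive + negative
    if total == 0:
        return 0.0
    return (positive - negative) / total
```

A topic with 75 positive and 25 negative tweets would score 0.5, which is easy to plot alongside a stock price or a poll number.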
Also, since the tool only counts current tweets, I’d recommend querying the API at a set interval (whether that’s several times an hour, once a day or once a week) to give viewers an idea of how sentiment around a topic has changed over time.
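Tracking change over time just means storing each reading with a timestamp. A minimal sketch, assuming the counts come from the API call described above (the function and field names here are hypothetical):

```python
import time

def record_snapshot(history, keyword, positive, negative, now=None):
    """Append one timestamped sentiment reading for a keyword.

    history: a list (or any append-able store) of past readings.
    now: override the timestamp for testing; defaults to the current time.
    """
    history.append({
        "keyword": keyword,
        "positive": positive,
        "negative": negative,
        "timestamp": now if now is not None else time.time(),
    })
    return history
```

A polling loop would call the API, pass the counts to record_snapshot, then time.sleep() for the chosen interval; in production a cron job or task scheduler is a sturdier choice than a long-running loop.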
There are several different targets for this visualization. I’m open to concentrating on municipal government, polling results or publicly traded companies.
On the weekend of Jan. 25, Hack Jersey will host the first hackathon in the state to invite journalists and coders to work together, competing to build innovative projects that can transform the way we use data and experience news in the Garden State.
The idea of a Jersey-based hackathon began with a conversation between Debbie Galant, director of the NJ News Commons at Montclair State University, and Tom Meagher, data editor at Digital First Media, at the Online News Association conference this September. Since then, dozens of volunteers from news organizations, nonprofits and tech startups across the state have come onto our planning team. And Knight-Mozilla’s OpenNews initiative joined the NJ News Commons as a leading sponsor of our hack weekend.
Participants will meet at our launch party on Friday, Jan. 25 at Fitzgerald’s 1928 in Glen Ridge (RSVP here!). The next morning, through our primary sponsor, the NJ News Commons, our hackathon will begin at University Hall at Montclair State. Participants will break into teams and have 24 hours to create their open source projects. On Sunday afternoon, Jan. 27, a panel of media and tech judges will choose the winners and award prizes to the best projects.
To keep up with the latest news on our hack weekend, follow us on Twitter @hackjersey. You can find more information about preparing for the hackathon on our blog. Want to start brainstorming ideas for your projects? Check out our list of public data where you might want to start and be sure to read our rules for the competition.
Feel free to get hold of me over Twitter, Gmail or LinkedIn. Or, better yet, just leave a comment below.