9/29: The Collapse of the Boston Red Sox

September 29, 2011 by  
Filed under Featured, Jared Goldman

In the wake of one of the strangest nights in my life as a Red Sox fan, I have to ask, “Why?”

The question is not a rhetorical expression of despair, as in “Why is baseball so cruel?”  It’s a real question: “Why did the Red Sox collapse?”

It’s easy to come up with a quick answer.  Injuries, poor conditioning, bad free-agent signings, and a lack of clubhouse leadership are all popular explanations.  Many will propose a combination of causes.

And it is also likely that some people will throw up their hands and declare that the reason cannot be found, because baseball defies reason.  Such is the greatness of baseball, they might say.  I am not one of those people.  In a few weeks, though, once I have entered the acceptance phase, perhaps I will be able to appreciate that perspective.

In the FiveThirtyEight Blog at the New York Times, Nate Silver crunched the numbers to determine the likelihood of the Red Sox missing the playoffs in such agonizing style.  In a calculation that was not “mathematically rigorous,” he determined “a probability of about one chance in 278 million.”
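
To get a feel for how a number that small can arise, here is a toy Python sketch of the underlying arithmetic: the joint probability of several independent long-shot events is simply the product of their individual probabilities. The component values below are invented placeholders, not Silver’s actual inputs.

```python
# Toy illustration only -- these probabilities are hypothetical placeholders,
# not the figures Nate Silver used in his calculation.
component_probs = {
    "long-shot event A": 0.002,
    "long-shot event B": 0.05,
    "long-shot event C": 0.01,
    "long-shot event D": 0.004,
}

joint = 1.0
for event, p in component_probs.items():
    joint *= p  # independence assumption: probabilities multiply

print(f"joint probability: {joint:.1e}")      # 4.0e-09
print(f"roughly 1 chance in {1/joint:,.0f}")  # about 1 in 250,000,000
```

Multiply enough merely unlikely events together and you quickly land in the hundreds-of-millions-to-one territory Silver describes, which is also why the independence assumption behind such a figure deserves scrutiny.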

With odds like those, Silver speculates that some other factors may be involved in the latest Sox meltdown.  I would have to agree.  In this age of advanced statistics, when sabermetricians are ensconced in baseball’s front offices and celebrated in films like “Moneyball,” we should be able to empirically investigate why one team manages to defy all expectations.

I know where to start: stress.  Though it’s not an original explanation, the idea that pressure could be the root of the Red Sox’ woes jibes with their playing environment, where the weight of sports history, regional angst, and the local media can be overwhelming.  It also might explain player underperformance — see the Yerkes-Dodson law — and the large number of broken-down bodies.

How to measure stress?  Blood pressure and cortisol levels come to mind.  Players could also fill out questionnaires assessing anxiety.  Of course, the players’ union may not approve such measures, given how fiercely drug testing has been contested.  Also, athletes may be loath to dignify the notion that stress affects their job performance.  Nonetheless, I still think it would be interesting to compare the subjective experience of playing in Fenway Park or Yankee Stadium with that of playing in, say, St. Petersburg’s Tropicana Field, where the Rays might benefit from the breezy Florida vibe.
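
For what it’s worth, here is a minimal sketch of the kind of comparison such data might support, using invented cortisol readings and a basic two-sample t-test from SciPy. A real study would need repeated measures, proper controls, and far more than two ballparks.

```python
import numpy as np
from scipy import stats

# Hypothetical cortisol readings (arbitrary units) for players in two parks.
# These numbers are invented for illustration; no such data set exists here.
fenway = np.array([18.2, 21.5, 19.8, 22.1, 20.4, 23.0, 19.1, 21.7])
tropicana = np.array([15.9, 17.2, 16.4, 18.0, 15.5, 17.8, 16.9, 16.1])

# A two-sample t-test is one simple way to ask whether the group means differ.
result = stats.ttest_ind(fenway, tropicana)
print(f"Fenway mean: {fenway.mean():.1f}, Tropicana mean: {tropicana.mean():.1f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```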

My point is not to invite pity for the Red Sox, a collection of millionaires, nor to excuse their futility.  The results would be just as interesting if there’s no demonstrable difference in stress.  Maybe there’s some other reason.  Either way, I can’t believe the answer lies in dumb luck or the resurfacing of a curse.  I can only hope that cold, hard facts might alleviate my own stress over the cruelty of the baseball gods.

10/21: On-Demand TV: What’s the Story?

October 21, 2010 by  
Filed under Featured, Jared Goldman

A recent Marist poll suggests our TV viewing habits are undergoing massive changes. 16% of U.S. residents are watching most TV shows using their DVRs, while another 9% are watching most shows on the Web. If demographics are any indication, it won’t be long before these numbers climb even higher: only 56% of people under 45 watch most TV shows in real time compared with 77% of their elders. The implications are straightforward: many of us are enjoying the flexibility of the digital age, which doesn’t require us to be in our living rooms on a certain day at a certain time to catch our favorite programs.

A more intriguing question might have to do with what we’re watching rather than how we’re watching. Our evolving habits could alter (and may have already altered) the structure and content of television shows.

It’s not hard to imagine possible changes in structure. Freed from strict programming schedules, shows needn’t be edited into half-hour and hour-long blocks that alternate between content and ads. Distributors can also be more creative with ad placement. Hulu.com, which offers TV shows in full, sometimes allows users to choose to view a long advertisement before their show starts instead of experiencing the traditional interruptions. Many shows resort to narrative devices that pump up suspense prior to commercials — what better way to make viewers sit through the beer and insurance ads? — and on-demand formats may give writers the confidence to ditch these tired conventions.

On-demand technology also allows us to start at the beginning of each series. Traditional TV shows, eager to pick up viewers in the first season, the third season, or whatever season, usually aren’t structured so that knowledge of past episodes is crucial to understanding the show. Instead, mainstream programs are designed to deliver their thrills or laughs in a short period of time, followed by satisfying closure.  Conflict is established in the first minute and resolved prior to the end of the half-hour or hour. Law & Order has mastered this form, hooking us before the credits with a crime scene, often drenched in blood, and then rewarding us one hour later by bringing the depraved perp to justice.  House thrives on the same trick, although the mystery is medical rather than criminal (the amount of blood being roughly the same). In both shows, the characters have histories, but knowing their back stories isn’t essential to following the action.

I’m sure there will always be a place for such tactics, but I also hope the new technology could spur writers to be more inventive when organizing plots. Individual episodes should still be self-contained, but they can also expand the less obvious narrative threads planted in earlier shows, as well as continue thematic and visual motifs. One of the common compliments lavished on shows such as The Wire and The Sopranos was that they told stories with novelistic complexity; each episode functioned as a book chapter, not only advancing another increment of plot, but also contributing, in a less linear way, to the narrative whole that stretched from the first episode to the last, many seasons later.

This could all be wishful thinking; the money-making requirements of on-demand content could shape our new media stories in ways that aren’t especially respectful of narrative integrity. I’ve encountered plenty of three-minute comedy and sports highlight clips that start off with pre-roll ads, boasting a content-to-commercial ratio that traditional TV advertisers could only dream of. But, here’s hoping that advances in technology will promote advances in TV shows.

One final thought — to the 7% of U.S. residents who don’t watch TV at all, I say … wow. I’m not sure what you’re doing with your free time, but I have a feeling it’s more productive than watching TV, no matter what format.

9/17: Novelists vs. Beeping Devices

September 17, 2010 by  
Filed under Featured, Jared Goldman

Jonathan Franzen, whose novel Freedom recently hit bookstore shelves, has an interesting idea about the role of novelists in the digital age.

In a video interview, the author discusses his earlier struggles with the question, “Why should I write?” and says he found at least one justification: a novel is a portal into a world in which the reader no longer feels alone.  While a reader’s reality may be ruled by unjust people and dominated by forbidding customs, a book provides a connection to an author who might be equally appalled at the state of things.  Reader and writer are united in their solitude, which is made much more bearable as a result.  In this way, solitude serves an important purpose in that it compels a reader to read and a writer to write, requiring them to forge a connection that transcends the constraints of their lives.

Franzen goes on to say that in the digital age our opportunities for solitude are rapidly disappearing. Chained to our communication devices, we have nonstop access to our co-workers, friends and family. Certainly, this access diminishes our sense of aloneness.  But, Franzen raises the point that the “beeping devices” in our lives may only provide superficial relief, leading us to endlessly check our messages as though we’re just one click away from a satisfying connection.

In that sense, Franzen articulates a battle that may have occurred in the minds of many writers who feel their craft is becoming obsolete.  On one side, we have technology capturing all individuals in an expanding communication net, while on the other, we have practitioners of the “old” forms of media – novelists, playwrights, nonfiction writers, etc. – whose work isn’t easily integrated into this digital grid.  Novelists, Franzen says, are tasked with enticing people away from their linked-up lives.

The entire interview is worth watching, but for Franzen’s comments on life in the digital age, fast-forward to the 9:06 mark:

7/14: Multitasking Bad for the Brain?

July 14, 2010 by  
Filed under Featured, Jared Goldman

One could argue that digital technology has helped make us better multitaskers.  These days, we can simultaneously check our e-mails, monitor our Twitter feeds and listen to a podcast, all while eating our breakfast.  Wouldn’t it make sense that such a capacity for divided attention is making our brains stronger?

Unfortunately, that might not be the case.  Experiments comparing heavy multitaskers – designated as such based on self-reports about their technology use – with non-multitaskers found that the latter group actually performs better on certain cognitive tasks. In a Stanford study, cited in a recent New York Times article, subjects participated in a test that required them to ignore extraneous inputs, a measure of their ability to filter out distractions.  (You can take a test on ignoring distractions here.)  In another test, participants had to switch between tasks, showing their ability to adjust to new information and task demands on the fly.  (Take a task-switching test here.)  In both cases, the non-multitaskers performed better than the heavy multitaskers.  Based on these and other experiments, the scientists surmised that multitaskers are more responsive to new incoming information.  On the positive side, one might say the multitaskers are more alert to new stimuli; on the negative side, one could claim the multitaskers’ focus is more easily disrupted.
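
As a rough illustration of what the task-switching measure captures, here is a small Python sketch that computes a “switch cost” from invented reaction times. The numbers and trial structure are hypothetical, not data from the Stanford study.

```python
import statistics

# Hypothetical reaction times (ms) from a task-switching test of the kind
# described above: "repeat" trials use the same rule as the previous trial,
# "switch" trials require applying a new rule. The data are invented.
trials = [
    ("repeat", 612), ("repeat", 598), ("switch", 744), ("repeat", 605),
    ("switch", 731), ("repeat", 590), ("switch", 768), ("repeat", 601),
]

repeat_rts = [rt for kind, rt in trials if kind == "repeat"]
switch_rts = [rt for kind, rt in trials if kind == "switch"]

# The "switch cost" -- the extra time needed when the task changes -- is one
# common summary of how easily attention is redirected.
switch_cost = statistics.mean(switch_rts) - statistics.mean(repeat_rts)
print(f"mean repeat RT: {statistics.mean(repeat_rts):.0f} ms")
print(f"mean switch RT: {statistics.mean(switch_rts):.0f} ms")
print(f"switch cost:    {switch_cost:.0f} ms")
```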

As with many scientific studies, the tests in this case might not truly reflect real world situations.  A cognitive test in a laboratory could fall short of replicating the experience of juggling computer applications.  As always, more study is needed to examine, among other things, how different amounts of multitasking affect performance on cognitive tasks, and whether the recency of one’s immersion in technology affects the ability to direct attention.  Nonetheless, it would appear that heavy use of gadgets and computers is influencing our brain function.

On the plus side, there is also evidence that screen technology benefits certain cognitive skills. (Click here for a list of such articles.)  It has been demonstrated in the laboratory that playing action video games improves visual attention in several ways.  Gamers show the ability to process more visual inputs than non-gamers, the ability to process inputs across a greater field of view, and a better ability to process inputs presented in rapid succession.  Considering the deficits shown by people with disabilities and the demonstrated erosion of certain cognitive skills among the elderly, perhaps action video games – or programs that mimic them – could be used therapeutically.

Above all else, the experiments reveal the apparent power of technology to mold our brains, for better and for worse. The question, however, may be whether we can harness our gadgets’ power to maximize the benefits and minimize the harm.

3/15: Hopping on the Bandwagon? The Internet’s Impact on Intelligence

March 15, 2010 by  
Filed under Featured, Jared Goldman

With the news from The Marist Poll that an overwhelming 68% of U.S. residents believe the Internet is making us smarter, I’m beginning to think I should just hop on the bandwagon and see where it takes me.  Still, I can’t help asking why people are so optimistic.

The general argument linking smarts to the Web seems to go like this: the Web is a vast online memory store, so parts of our minds that would have been tied up in the dark days preceding it are freed to accomplish new tasks.  With the Web harboring all the data we need, we know finding an answer is as simple as typing a query into a search engine, and this certainty alters our approach to any task that requires information we lack.  Now, we don’t have to spend time and effort acquiring such knowledge; the Internet holds it for us, and we are more productive under this lightened load.

Some people characterize the Internet as an extension of our brains.  In his Atlantic article “Get Smarter,” Jamais Cascio discusses the rise of computers and devices dubbed “exocortical technology,” which allow us to perform tasks we never dreamed of.  He writes: “As the digital systems we rely upon become faster, more sophisticated, and (with the usual hiccups) more capable, we’re becoming more sophisticated and capable too.”  The article is fascinating, and I encourage you to read it – among other things, it suggests that in addition to computers, drugs will be developed that help us perform cognitive tasks better.

But I can’t stop myself from protesting that the Web, one of these “sophisticated” systems, has spawned a certain amount of unpleasantness: paparazzi-fueled “news,” silly viral videos, a huge number of scams … the list goes on.  While the Web can be seen as a tool to help us achieve things, it also appears to be able to distract us, sell us things we don’t need, and lead us down fruitless paths as we seek information.  One could argue that the Web is still in its infancy, and guides will emerge to point us in the right directions.  But one could also argue that powerful entities who see the medium as a piggy bank waiting to be smashed don’t want that to happen.

Nicholas Carr, whose Atlantic article “Is Google Making Us Stupid?” created quite a buzz among tech pundits, points out that for all of the Internet’s innovative power, it could be altering something fundamental about the way we read.  Carr writes: “In the quiet spaces opened up by the sustained, undistracted reading of a book … we make our own associations, draw our own inferences and analogies, foster our own ideas.”  Such deep reading, he says, isn’t encouraged by the Web’s architecture, which is designed to accommodate shallow, fast processing: the more we click, the more some company stands to sell us something.

I doubt Carr was surprised when a survey from the Pew Internet & American Life Project revealed 81% of experts believe “Nicholas Carr was wrong: Google does not make us stupid.” He knows as much as anyone that the bandwagon is alluring and swift, with some authority figures at the wheel.  So while the Web skeptics and evangelists will go back and forth (the evangelists enjoying the majority position), one thing is abundantly clear: most people trust the Web to propel them into the future.  If that’s the case, then regulation, analysis, and organization are in order.  Perhaps we need the skeptics to keep the bandwagon from tipping over.

This Is Your Brain on Social Networks … Any Questions?

December 18, 2009 by  
Filed under Blog, Featured, Jared Goldman

Have you ever fallen into a tech-hole?

You’re sitting at your computer, logged into your Facebook, Twitter and other social networking accounts, immersed in the links, videos, comments and other digital flotsam shooting down the info streams.  Meanwhile, a person, real flesh and blood, walks into the room and wants your attention.  You don’t hear his words; you mindlessly wave him away.  You’re busy … with your virtual friends.

Perhaps that’s never happened to you.  As for me, I’ve spent a serious number of hours in the tech-hole.  Based on a recent Marist poll, the number of Web users with social networking accounts, and perhaps susceptible to this experience, is growing rapidly.  This furious growth has led some to question whether the effects of spending so much time on Facebook, Twitter and their ilk could be harmful.

In the U.K., neuroscientist Susan Greenfield took her concerns about social networks to the House of Lords, suggesting that the use of the sites could affect the human brain — especially a child’s brain — in profound ways. One of her more frightening points was that using the sites could yield a generation of grown-ups with the emotional depth and cognitive abilities of big babies.  The social networks provide experiences that are “devoid of cohesive narrative and long-term significance,” said Greenfield.  “As a consequence, the mid-21st century mind might almost be infantilized, characterized by short attention spans, sensationalism, inability to empathize and a shaky sense of identity.”  Among other things, she called for an investigation into whether the overuse of screen technologies could be linked to a recent spike in diagnoses of attention-deficit hyperactivity disorder.  People who spend formative years surfing the Internet, an environment characterized by “fast action and reaction,” could come to expect similar instant gratification in the non-virtual world, said Greenfield.

Her concerns have probably resonated with Web skeptics because she’s homed in on recognizably annoying online behavior. For example, if you’ve ever been irritated when a friend updates his or her status message to broadcast a favorite kind of toothpaste – e.g., “[Person X] is contemplating the different colors of AquaFresh” — Greenfield sympathizes. “Now what does this say about how you see yourself?” she asks of those prone to posting personal trivia. “Does this say anything about how secure you feel about yourself? Is it not marginally reminiscent of a small child saying ‘Look at me, look at me mummy!  Now I’ve put my sock on. Now I’ve got my other sock on.’”

Not everyone is receptive to Greenfield’s concerns.  Ben Goldacre, a British writer, broadcaster and doctor, and author of a Guardian column called Bad Science, says Greenfield is irresponsibly using her position as head of the Royal Institution of Great Britain — a body devoted to improving the public’s knowledge of science — because she doesn’t have any empirical evidence backing up her fears.  If Greenfield wants to promote awareness of the scientific method, says Goldacre, she shouldn’t be spending so much time airing her qualms about untested hypotheses.  Greenfield’s caveats that her purpose is to  raise questions, not give answers, aren’t enough for Goldacre; he says she’s recklessly generating scary headlines that frighten a Web-loving populace. “It makes me quite sad,” he writes, “when the public’s understanding of science is in such a terrible state, that this is one of our most prominent and well funded champions.”  In a heated BBC debate on the social networking controversy, you can see Goldacre square off against Dr. Aric Sigman, who says we should be wary about the time we spend in front of screens subtracting from the time we spend talking to people.

Despite the squabbling, it’s probably safe to say that thinkers on both sides of the issue would agree that more research is needed. To that end, various studies and polls have been published on the social networks in particular and increased Web use in general.  For example, the USC Annenberg Center for the Digital Future reported that households connected to the Internet were experiencing less “face-to-face family time, increased feelings of being ignored by family members using the Web, and growing concerns that children are spending too much time online.” On the other hand, a poll conducted by the Pew Internet & American Life Project suggests that use of cell phones and the Internet has not, generally speaking, contributed to social isolation (I urge you to view their conclusions for a much more precise explanation).

In the meantime, the tech-hole always beckons, so much so that Web addiction treatment centers have emerged to help people who can’t prioritize the real world over the virtual one.  While weighing in on the controversy, Maggie Jackson, the author of “Distracted: The Erosion of Attention and the Coming Dark Age,” offers this advice to Web users: “Going forward, we need to rediscover the value of digital gadgets as tools, rather than elevating them to social and cognitive panacea. Lady Greenfield is right: we need to grow up and take a more mature approach to our tech tools.” In other words, technology exists to support our relations with other human beings, not replace them.

In theory, it’s easy to remember that.  In practice, we might find ourselves sacrificing hours to the digital ether, convincing ourselves that we’re connected to everyone, but in reality being connected to no one.

Related Stories:

12/18: The Twitter “Craze:” Not So Much

12/18: Social Networks Grow in Popularity Among U.S. Residents

12/18: Technology’s Impact on Relationships

The Future of Technology and Journalism

10/7: Offensive Language

October 7, 2009 by  
Filed under Blog, Featured, Jared Goldman

Initially, I was going to write about all the annoying phrases asked about in the recent Marist survey.  I was going to discuss why they irritate me (or why they don’t — Caroline Kennedy would have been happy to see that I have no problem with compulsive use of the phrase, “you know”).  But, then I realized I use all of those expressions myself.  At the end of the day (there’s one right there!), I have no business taking on the role of guardian of the English language, at least when it comes to the way other people speak.

Instead, I will turn the critical lens on my own irksome verbal tics.  Do you ever use a certain word, immediately wish you hadn’t, and pause in disgust with yourself, much to the confusion of your partner in conversation?  Well, these are the words that give me that feeling.

Amazing, brilliant

These terms are brothers in the family of inappropriately strong compliments.  I use “amazing” so often that I am beginning to wonder if I really am so easily amazed. If you take me literally, I’ve been amazed by a sunset, the sound of a motorcycle engine, a soft plane landing, and a dog’s ability to catch a Frisbee. As for “brilliant,” I often use it when referring to a film, book or TV show I enjoyed.  Did I really need to describe that Stephen Colbert sketch as “brilliant”? Couldn’t I have gone with “clever” or “well-done”?  Obviously, the word loses a bit more of its luster every time it’s used.  On a related note, another word used way too much is “genius.” It’s part of our culture of extreme flattery.  This article from the Atlantic Monthly illustrates the overuse of “genius” by pointing out its prevalence in discussions about football coaches.  Do Shakespeare, Einstein, and Bill Belichick really belong in the same category?

That’s funny

Is it, really?  Then why do I say this only when I’m not laughing?  It’s a polite impulse, but it really results in a double insult: not only do I think the joke is not funny, but I think the joke-cracker is gullible enough to believe that I express mirth not with laughter, but by declaring it outright. Anyone who truly wants to be polite should teach himself or herself a convincing fake laugh.

You know what I mean?

When I use this in relation to a complex topic, it’s perfectly acceptable.  If I’m explaining, say, how a neuron works (for the record, I don’t actually know how it works), then I am permitted to end a sentence with, “you know what I mean?”  But, I should refrain if I’m explaining how to work the TV remote.  I once had a supervisor who ended his critiques of my work with arched eyebrows, a cocked head and a conciliatory “you know what I mean?” I think he was trying to draw an affirmative response from me in place of explicit acceptance of his criticism. Every time, I was tempted to say, “Yes, of course I know what you mean, and I disagree completely.”

What’re ya gonna do?

I use this expression too much.  When I say it, I feel as though I’m trying to channel an overworked detective from a TV police drama — bags under his eyes, discussing a problem that just won’t go away. “It’s a tough situation” works better for me.

Or something like that

When I don’t know the exact answer, I’ll end my response with “or something like that,” just so you can’t hold me to it.  It’s often replaced by the equally vague “or something to that effect.”  Or, I’ll begin my sentence with “I think” or “I guess.” There should be an expression for expressions like these — perhaps “envagueners” or “imprecisioners.” You know, something in that vein.

I mean

This might be the most useless expression in my arsenal.  Rarely do I use “I mean” to clarify some point that has been misunderstood — in that case, it would be acceptable.  Instead, I use it as a completely unnecessary introduction to what I’m about to say.  If you ask me if I think it’s going to rain, I might say, “I mean, there are clouds in the sky, and the air has that rainy smell… “.  But, I have no idea how to define the role “I mean” has in that sentence.  I mean, there’s just no reason for it to be there.

You might be wondering if there’s really any reason for me to eliminate these expressions from my conversational repertoire.  It’s true that most discussions are informal and most interlocutors, if put under scrutiny, would be guilty of these small verbal crimes.  But as any English teacher will tell you, while thought affects speech, the converse is true, too.  When I say something is “amazing,” for example, I may be failing to process what I really think about it.  The lack of precision and nuance in my spoken words diminishes the quality of my thoughts, which, when expressed, come out equally pedestrian.  It can be a negative spiral of empty language.

So, while I would never tell someone to stop using a certain word or phrase, I would encourage them, in general, to put more effort into expressing themselves as best they can.  They might start here, a site that provides the meanings and origins of scads of commonly used phrases, such as “take umbrage” and “cock and bull story.” Not only might they add to their stable of sayings, but they’d gain more insight into how the meanings of words and phrases evolve in unlikely, unpredictable ways.

You know what I mean?

Related Links:

10/7: “Whatever” Takes Top Honors as Most Annoying

Our Genius Problem (from the Atlantic Monthly)

Oxford’s Top Ten List of Irritating Expressions

Meanings and Origins of Phrases, Sayings and Idioms

Twitter: Can’t Beat the Tweet

June 12, 2009 by  
Filed under Blog, Jared Goldman, Science & Tech

Even if you don’t use Twitter, you’ve probably been inundated with news about the social networking site.  That’s because the sharp minds behind Twitter managed to create a perfect media storm.  Not only does their product have an insanely catchy name — isn’t it fun to say “Twitter” and “tweet”? — but it also provides mainstream media outlets with another way to reach an audience whose technology I.Q. is growing every day (Pebbles and Pundits also has a Twitter account). As a result, talking heads have been giving Twitter endless free publicity, promoting their own Twitter accounts and cracking each other up with Twitter-related banter (in a much-publicized gaffe on “The Today Show,” Stephen Colbert rendered Meredith Vieira speechless when he attempted to coin the past-tense variation of “tweet”).

Personally, I was skeptical when I first heard about Twitter. After the ascent of Facebook, MySpace, and many other social networking sites, why did the world need another one?  What’s more, Twitter only allows messages of up to 140 characters in length.  How much significance could be conveyed in a sentence or two?  Twitter struck me as another nail in the coffin of the average American’s attention span.

A recent Marist poll suggests that, despite all the publicity, many people may share my skepticism — only 6% of Americans have personal Twitter accounts.  Moreover, a study by Nielsen found that a majority of Twitter users stop tweeting within a month of signing up.  Is it possible that Twitter is a passing fad?

That’s doubtful.  The aforementioned Nielsen study caused such a furor among Twitter users that an addendum was posted acknowledging their complaints (though not retracting the original findings).  Comscore, a company that measures consumers’ surfing habits, named Twitter the fastest-growing Web property for the month of March; in April, Twitter surpassed The New York Times and the Wall Street Journal in unique visits.  While Twitter may not be able to maintain its astronomical growth rate — a 1382% boost in unique visitors from February 2008 to February 2009, according to Nielsen — it seems to have become a staple in the lives of many people who use it to trade information and stay in touch.
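
As an aside on what a “1382% boost” actually implies: a percent increase of that size means the later figure is nearly fifteen times the earlier one. A quick sketch, with invented base numbers rather than Nielsen’s underlying counts, which the post does not give:

```python
def percent_growth(old: float, new: float) -> float:
    """Percent change from old to new: (new - old) / old * 100."""
    return (new - old) / old * 100.0

# Hypothetical visitor count chosen only to illustrate the arithmetic.
feb_2008 = 475_000
feb_2009 = feb_2008 * (1 + 1382 / 100)  # a 1382% boost means ~14.8x the base

print(f"Feb 2009 visitors (implied by the hypothetical base): {feb_2009:,.0f}")
print(f"growth: {percent_growth(feb_2008, feb_2009):.0f}%")
```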

You may be wondering, “What about money?  Even though Twitter is popular, that doesn’t mean it’s generating any revenue — which means it may not be sustainable.”  That’s a good point, but Twitter doesn’t appear to be stressing over finances.  In November, its owners rebuffed Facebook after the social networking rival offered to take over Twitter for stock worth $500 million.  And, on its website, Twitter claims that it’s more interested in improving its service than boosting its bottom line.  Meanwhile, speculation abounds over potential revenue streams, with one possibility being the sale of commercial accounts to businesses.  One can imagine the benefit a company might draw if it can find out, via Twitter, who’s tweeting about its product, who else is receiving those tweets, and what, specifically, those people need in terms of customer care or innovation.  What’s more, some of that information can be found on a real-time basis, which could help inform business decisions that need to be made sooner rather than later.  Recently, the ability of Twitter’s search engine to deliver data in real time earned praise from no less than an online eminence — the co-founder of Google.

In other words, thanks to shrewd marketing and cutting-edge technology, Twitter appears to have built a sturdy nest in the tree of online media.  For Twitter die-hards, that’s great news.  For the rest of us, it means enduring a lot more Twitter hype — or joining the growing ranks of tweeters.

Related Stories:

6/12: 6% of Americans Have Twitter Accounts

6/12: Keeping In Touch Online

Sources/Additional Information:

How Will Twitter Make Money?

Google ‘Falling Behind Twitter’

Twitter’s Tweet Smell of Success

Twitter Quitters Post Roadblock to Long-Term Growth

ComScore Media Metrix Ranks Top 50 U.S. Web Properties for March 2009

When Twitter Met Facebook: The Acquisition Deal That Fail-Whaled

Twitter to Offer Business Accounts, at a Price