Americans perceive technological advances to be a double-edged sword and are divided over whether technology makes us smarter or dumber, according to an Exclusive Point Taken-Marist Poll, commissioned by WGBH Boston for its new late-night, multi-platform PBS debate series Point Taken. Despite the perceived technological pitfalls, a slim majority of Americans consider the benefits of technology to outweigh the privacy risks associated with it.
The national survey was conducted by The Marist Poll in advance of this week’s Point Taken episode, airing Tuesday, June 21st at 11pmET (check local listings) and streaming on pbs.org/pointtaken. The series is hosted by Carlos Watson, Emmy Award winning journalist and OZY Media co-founder and CEO.
When assessing the overall effect of technology, 49% of Americans report it makes people dumber while 46% say technology makes individuals smarter. A slim majority, 51%, says the benefits to society outweigh the privacy risks of technological advances.
On the question of intelligence, Millennials, 53%, and Gen X, 53%, are more likely than Baby Boomers, 48%, and the Silent-Greatest generation, 38%, to say technology makes us dumber.
Generational differences also exist when assessing the risks of technology. Gen X, 50%, is the most likely to report the risks outweigh the benefits while the Silent-Greatest generation is the least likely to stress the potential privacy implications of technology. Also noteworthy, men, 55%, and college graduates, 58%, are more likely than women, 46%, and those without a college degree, 47%, to say technology’s benefits trump the risks.
“If you think younger people are all in for technological revolution, think again,” says Dr. Lee M. Miringoff, Director of The Marist College Institute for Public Opinion. “This national survey shows surprising differences among generations and their appreciation for innovation.”
“Digital and other technologies are central and definitional aspects of contemporary life,” says Denise DiIanni, series creator and Executive-in-Charge. “This week our panelists — and audiences — debate whether tech makes us smarter — or dumber.”
What are the benefits of technology? Many Americans, 74%, say technology improves education, and a majority, 54%, believes it makes individuals more productive.
Differences based on education and generation exist here, too. College graduates, 80%, are more likely than those without a college degree, 70%, to think technology makes education better. Generationally, Millennials, 82%, and members of Gen X, 80%, are more likely than Baby Boomers, 65%, and those in the Silent-Greatest generation, 65%, to report technology improves education.
College graduates, 65%, are also more likely than those without a degree, 47%, to say technology makes us more productive. The Silent-Greatest generation, 64%, is more likely than other generations to say technology improves productivity. Americans who earn $50,000 or more, 58%, are more inclined than those who make less, 47%, to report technology aids productivity.
However, many Americans perceive technology to be detrimental to humanity and to personal relationships. 71% of residents nationally believe technology is making people less human, and 54% report it makes individuals less connected to family and friends.
Many residents think technology is making people less human. Members of Generation X, 77%, and Baby Boomers, 74%, are the most likely to have this view. Those in the Silent-Greatest generation, 60%, are the least likely to agree with this premise.
Those in the Silent-Greatest generation, 33%, are also the least likely of the generations to say technology enhances our connections to family and friends. Millennials, 48%, are the most likely to report technology keeps us connected but still less than a majority. Looking at gender, men, 48%, are more likely than women, 39%, to believe technology makes people more connected to their loved ones. In fact, a majority of women, 58%, report technology is detrimental to personal relationships.
This survey of 622 adults was conducted March 29th through March 31st, 2016 by The Marist Poll sponsored and funded in partnership with WGBH’s Point Taken. Adults 18 years of age and older residing in the contiguous United States were contacted on landline or mobile numbers and interviewed in English by telephone using live interviewers. Results are statistically significant within ±3.9 percentage points. The error margin was not adjusted for sample weights and increases for cross-tabulations.
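The reported ±3.9-point margin of error is consistent with the stated sample size. As a quick sanity check, here is a sketch of the standard calculation, assuming a simple random sample, a 95% confidence level, and the worst-case proportion p = 0.5 (none of which are spelled out in the methodology note beyond the sample size):

```python
import math

n = 622   # sample size reported in the survey
p = 0.5   # worst-case proportion, which maximizes the margin of error
z = 1.96  # z-score for a 95% confidence level

margin = z * math.sqrt(p * (1 - p) / n)
print(f"±{margin * 100:.1f} percentage points")  # ±3.9 percentage points
```

The note that the margin "increases for cross-tabulations" follows directly from this formula: each generational or gender subgroup has a smaller n, so its margin of error is larger than ±3.9 points.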
It can unite us, help us with mundane tasks, and entertain us. Technology is wonderful. That is, when it’s used appropriately.
The abuse of technology is widespread. Perhaps the most recent shocking incident occurred last week, when four middle school students taunted Karen Klein, a school bus monitor in Greece, New York. As if the boys' behavior wasn't abhorrent enough, one of them posted the video on YouTube under the title "Making the Bus Monitor Cry." Of course, the video went viral and prompted an outcry of support for Klein, including a collection to send her on a dream vacation. (As of this writing, the fund has topped $650,000 and is still growing.)
As if their bullying wasn't bad enough, the boys' actions prompted death threats against their families, which in turn cost taxpayer dollars to address.
For a moment, let’s just focus on their use of technology. Did they really not get it? Did they really not understand that the very same technology that allows them to interact with their friends has the power to illuminate their bad behavior?
Children are taught from a very young age that actions have consequences. If they touch a hot stove, they will burn their hand. If they talk back to their parents, they will be reprimanded. So what makes technology, specifically social media, different? For starters, perhaps it has to do with the nature of the technology itself. Because electronic media removes the need to be physically present with the person on the other end of the conversation, people can detach from their everyday persona and become more brazen. In fact, this tendency predates social media. (Think back to the early days of email and chat rooms.) Unfortunately, it persists. (Think not so far back to the Anthony Weiner scandal.)
But that still doesn't solve the problem at hand. Why can't many young people grasp that the use of social media isn't all fun and games? Perhaps that's part of the answer: because these kids have grown up with social media, they have mostly been exposed to its "good" side. Let's face it, when it comes to sensitive issues, many parents aren't eager to have a heart-to-heart with their kid.
Perhaps, as part of their education, students should be required from a young age to participate in forums with those who have been victims of online bullying. For those who believe it’s not the role of the schools to play parent, tools should be available for parents to teach their children the downside of social media.
Simply put, America’s youth needs to be educated and guided about the nature of technology.
One could argue that digital technology has helped make us better multitaskers. These days, we can simultaneously check our e-mails, monitor our Twitter feeds and listen to a podcast, all while eating our breakfast. Wouldn’t it make sense that such a capacity for divided attention is making our brains stronger?
Unfortunately, that might not be the case. Experiments comparing heavy multitaskers – designated as such based on self-reports about their technology use – with non-multitaskers found that the latter group actually performs better on certain cognitive tasks. In a Stanford study, cited in a recent New York Times article, subjects took a test that required them to ignore extraneous inputs, a measure of their ability to filter out distractions. (You can take a test on ignoring distractions here.) In another test, participants had to switch between tasks, showing their ability to adjust to new information and task demands on the fly. (Take a task-switching test here.) In both cases, the non-multitaskers performed better than the heavy multitaskers. Based on these and other experiments, the scientists surmised that multitaskers are more responsive to new incoming information. On the positive side, one might say the multitaskers are more alert to new stimuli; on the negative side, one could claim their focus is more easily disrupted.
As with many scientific studies, the tests in this case might not truly reflect real world situations. A cognitive test in a laboratory could fall short of replicating the experience of juggling computer applications. As always, more study is needed to examine, among other things, how different amounts of multitasking affect performance on cognitive tasks, and whether the recency of one’s immersion in technology affects the ability to direct attention. Nonetheless, it would appear that heavy use of gadgets and computers is influencing our brain function.
On the plus side, there is also evidence that screen technology benefits certain cognitive skills. (Click here for a list of such articles.) Laboratory studies have demonstrated that playing action video games improves visual attention in several ways: gamers can process more visual inputs than non-gamers, process inputs across a greater field of view, and better process inputs presented in rapid succession. Considering the deficits shown by people with disabilities and the demonstrated erosion of certain cognitive skills among the elderly, perhaps action video games – or programs that mimic them – can be used therapeutically.
Above all else, the experiments reveal the apparent power of technology to mold our brains, for better and for worse. The question, however, may be whether we can harness our gadgets’ power to maximize the benefits and minimize the harm.
A few months ago, you may have thought Twitter was taking over the country, but that doesn't appear to be the case. A recent national Marist poll found only 8% of U.S. residents have accounts with the social networking site. When Marist last conducted a poll on Twitter use in June, 6% of U.S. residents reported having an account.
Not surprisingly, if you’re younger, you’re more likely to be a member of the Twitterati. 15% of Americans 18 to 29 and 13% of those 30 to 44 report they tweet. This is compared with 6% of residents 45 to 59 and 3% of those 60 or older. The proportion of younger Americans who use Twitter has grown since Marist last asked about Twitter use in June. Use by older Americans is little changed.
Social networks such as Facebook, MySpace, and LinkedIn have yet to convince the majority of U.S. residents to sign up, but if a recent national Marist Poll on the topic is any indication, it’s only a matter of time before they do.
The survey found that 41% of U.S. residents have a profile on a social networking site, a 9 percentage point jump since Marist last asked about social networking in June.
Notably, the social networking generation gap may be shrinking. Although those under age 45 still outnumber the proportion of older Americans who stay connected online, more Americans age 45 and older have discovered the interactive joys of trading witticisms, sharing photos, and swapping links. 23% of people in that age group now report having an account compared with 14% when Marist last asked this question in June.
Growth continues for people under 45, as well. 65% of residents under 45 years old say they have a social networking profile while 59% said the same in the last poll.
Americans who are employed are also more likely to appreciate the advantages social networking affords. Nearly half — 48% — of people with a job have a profile compared with only three in ten adults who are not working.
When it comes to having a social networking profile, women are more likely than men to connect with family, friends, and colleagues online. 45% of women report having a profile compared with 36% of men.
Relationships and Social Networking
The substantial increase may be explained by the general perception that social networks are a good way to strengthen connections to friends and family. 68% of U.S. residents with profiles say the sites help their personal relationships while 12% say they hurt them. 20% are unsure.
Age is also a factor on this question. More younger Americans with a social networking profile think using this form of communication helps relationships compared with those who are older. 71% of those under age 45 think this is the case compared with 63% who are 45 and older.
Have you ever fallen into a tech-hole?
You’re sitting at your computer, logged into your Facebook, Twitter and other social networking accounts, immersed in the links, videos, comments and other digital flotsam shooting down the info streams. Meanwhile, a person, real flesh and blood, walks in the room and wants your attention. You don’t hear his words; you mindlessly wave him away. You’re busy … with your virtual friends.
Perhaps that’s never happened to you. As for me, I’ve spent a serious number of hours in the tech-hole. Based on a recent Marist poll, the number of Web users with social networking accounts, and perhaps susceptible to this experience, is growing rapidly. This furious growth has led some to question whether the effects of spending so much time on Facebook, Twitter and their ilk could be harmful.
In the U.K., neuroscientist Susan Greenfield took her concerns about social networks to the House of Lords, suggesting that the use of the sites could affect the human brain — especially a child's brain — in profound ways. One of her more frightening points was that using the sites could yield a generation of grown-ups with the emotional depth and cognitive abilities of big babies. The social networks provide experiences that are "devoid of cohesive narrative and long-term significance," said Greenfield. "As a consequence, the mid-21st century mind might almost be infantilized, characterized by short attention spans, sensationalism, inability to empathize and a shaky sense of identity." Among other things, she called for an investigation into whether the overuse of screen technologies could be linked to a recent spike in diagnoses of attention-deficit hyperactivity disorder. People who spend formative years surfing the Internet, an environment characterized by "fast action and reaction," could come to expect similar instant gratification in the non-virtual world, said Greenfield.
Her concerns have probably resonated with Web skeptics because she’s homed in on recognizably annoying online behavior. For example, if you’ve ever been irritated when a friend updates his or her status message to broadcast a favorite kind of toothpaste – e.g., “[Person X] is contemplating the different colors of AquaFresh” — Greenfield sympathizes. “Now what does this say about how you see yourself?” she asks of those prone to posting personal trivia. “Does this say anything about how secure you feel about yourself? Is it not marginally reminiscent of a small child saying ‘Look at me, look at me mummy! Now I’ve put my sock on. Now I’ve got my other sock on.’”
Not everyone is receptive to Greenfield’s concerns. Ben Goldacre, a British writer, broadcaster and doctor, and author of a Guardian column called Bad Science, says Greenfield is irresponsibly using her position as head of the Royal Institution of Great Britain — a body devoted to improving the public’s knowledge of science — because she doesn’t have any empirical evidence backing up her fears. If Greenfield wants to promote awareness of the scientific method, says Goldacre, she shouldn’t be spending so much time airing her qualms about untested hypotheses. Greenfield’s caveats that her purpose is to raise questions, not give answers, aren’t enough for Goldacre; he says she’s recklessly generating scary headlines that frighten a Web-loving populace. “It makes me quite sad,” he writes, “when the public’s understanding of science is in such a terrible state, that this is one of our most prominent and well funded champions.” In a heated BBC debate on the social networking controversy, you can see Goldacre square off against Dr. Aric Sigman, who says we should be wary about the time we spend in front of screens subtracting from the time we spend talking to people.
Despite the squabbling, it’s probably safe to say that thinkers on both sides of the issue would agree that more research is needed. To that end, various studies and polls have been published on the social networks in particular and increased Web use in general. For example, the USC Annenberg Center for the Digital Future reported that households connected to the Internet were experiencing less “face-to-face family time, increased feelings of being ignored by family members using the Web, and growing concerns that children are spending too much time online.” On the other hand, a poll conducted by the Pew Internet & American Life Project suggests that use of cell phones and the Internet has not, generally speaking, contributed to social isolation (I urge you to view their conclusions for a much more precise explanation).
In the meantime, the tech-hole always beckons, so much so that Web addiction treatment centers have emerged to help people who can’t prioritize the real world over the virtual one. While weighing in on the controversy, Maggie Jackson, the author of “Distracted: The Erosion of Attention and the Coming Dark Age,” offers this advice to Web users: “Going forward, we need to rediscover the value of digital gadgets as tools, rather than elevating them to social and cognitive panacea. Lady Greenfield is right: we need to grow up and take a more mature approach to our tech tools.” In other words, technology exists to support our relations with other human beings, not replace them.
In theory, it’s easy to remember that. In practice, we might find ourselves sacrificing hours to the digital ether, convincing ourselves that we’re connected to everyone, but in reality being connected to no one.