Robots and Social Media Networks

Recent analyses show that the average Internet user spends a little over two hours on social media each day. Over the course of a lifetime, that adds up to more than five years spent watching YouTube videos and checking Instagram followers. But who is posting all this content, and why? Could your attention really be so valuable that companies might vie for control of it?

From marketing executives to local researchers, everyone is interested in how social media is influencing today’s consumers and driving tomorrow’s decisions. Dr. Daniel Faltesek, an assistant professor of New Media Communications at Oregon State University, specifically looks at how companies and the economy drive design decisions for social media platforms.

“I’m trying to connect the ways that logistical factors, legal and economic, affect the behavior and appearance of social networks,” Faltesek began. “For instance, I look at how the stock market affects Facebook’s layout and what features they provide.”

Faltesek focuses on how the platforms are changing, what drives that change, and how that affects individual users’ experience. However, much of his current research looks at the impact tweets had on the 2016 presidential election.

“I’m wondering how much of the tweet cascades were real,” Faltesek said. “Week by week, as Twitter confirms what researchers are seeing, so much was bought, noise, or just fake.”

Faltesek’s research points to how online robots, automated programs often called bots, were used to magnify certain points of view or messages during the 2016 election cycle, making some perspectives seem much more widespread and representative of the population than they actually were.

Robots Magnify Messages
Imagine if someone tweeted that bananas cause cancer. Now imagine someone created a computer program that generated thousands of social media profiles and accounts, each repeating that same message. The claim would saturate social media networks, journalists would pick it up and report on it, and its reach would grow further still.

Many more people would hear about those troublesome bananas, and perhaps start to believe the claim, than if that one tweet had stayed just one, quickly lost in the cacophony of other messages.
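The arithmetic of that saturation is easy to sketch. The toy simulation below, written in Python with entirely invented numbers, mixes one repeated claim into a pool of unrelated organic posts and samples what a single user might scroll past:

```python
# A toy illustration of amplification; every number here is invented.
# One repeated claim is mixed into a pool of unrelated organic posts.
import random

organic_posts = [f"organic message {i}" for i in range(10_000)]
bot_posts = ["bananas cause cancer"] * 3_000  # one message, echoed 3,000 times

feed_pool = organic_posts + bot_posts
sample = random.sample(feed_pool, 100)  # roughly what one user scrolls past

echoed = sum(post == "bananas cause cancer" for post in sample)
print(f"{echoed} of 100 sampled posts carry the single amplified claim")
```

Even a modest swarm of synthetic accounts makes the single claim a large share of whatever a user happens to see, which is the effect Faltesek describes.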

“I’ve been looking at election data for years,” Faltesek said. “This past one, so many stories misunderstood what they were seeing. They assumed ties between people, but the reality was they saw robots.”

“The reality of the online world is that much of it is synthetic,” he said. “I don’t think much is intended to be hurtful, but it is meant to amplify the message so people can’t hear anyone else.”

The difficulty with this research is the fast-paced and saturated nature of the content. Faltesek sifts through thousands of social media posts, using programs that visually map the connections between people and their posts. One telltale shape is the pendant: an account connected to the rest of the graph by a single link. Clusters made up largely of pendants, with very few crosslinks between clusters, suggest that a message is being amplified by robots or by people strategically targeting one account.
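Faltesek’s specific tools aren’t named in the interview, but a minimal sketch of this kind of pendant-and-crosslink analysis can be written with the open-source networkx library; the accounts and retweet edges below are invented for illustration:

```python
# A minimal sketch of pendant-and-crosslink analysis on a retweet graph,
# using the open-source networkx library. Accounts and edges are invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Nodes are accounts; an edge means one account retweeted the other.
G = nx.Graph()
G.add_edges_from(("hub_account", f"bot_{i}") for i in range(50))  # a star of echoes
G.add_edges_from([("alice", "bob"), ("bob", "carol"), ("carol", "alice")])  # organic cluster
G.add_edge("alice", "hub_account")  # a single crosslink between the two groups

# Pendants: accounts with exactly one connection. A cluster that is mostly
# pendants hanging off one hub is the shape of automated amplification.
pendants = [n for n in G if G.degree(n) == 1]
print(f"{len(pendants)} of {G.number_of_nodes()} accounts are pendants")

# Partition the graph into communities and count the edges that cross
# between them; very few crosslinks is the pattern described above.
communities = list(greedy_modularity_communities(G))
membership = {node: i for i, group in enumerate(communities) for node in group}
crosslinks = sum(membership[u] != membership[v] for u, v in G.edges)
print(f"{len(communities)} clusters joined by {crosslinks} crosslink(s)")
```

On this toy graph, the bot swarm appears as fifty pendants hanging off one hub, joined to the small organic cluster by a single crosslink, the sparse pattern described above.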

As Faltesek points out, he is still looking at content from the 2016 election to make sense of what actually happened. Meanwhile, social media companies themselves sometimes change as fast as the content they host.

“In the heyday of newspapers, the 1970s to the 2000s, it was a steady business,” Faltesek said. “But social networks came in hot and fast, also cooling off fast.”

“Facebook claimed that video would be the future, but they were wrong. Folks don’t want videos playing a bunch in their feeds. I was expecting them to pivot more slowly, and that’s a problem with research because peer review expects some consistency,” Faltesek said. 

“I tell people, don’t expect stability. I try to focus on things that are interesting and relevant in the world and less on the specific platforms. In the last 10 years, people haven’t been researching social media like they did with cinema or other forms of media,” explained Faltesek. “We have to create methods for research and then when people learn the tools, they do research on it. We focus on the enduring questions.”

The Need for a Sift
Faltesek’s work demonstrates the need for a method anyone can use to sift through the noise on social media and tell which perspectives are representative of a large portion of the population and which messages are just echoed nonsense.

After describing his research, Faltesek expressed frustration with how many journalists reported on fake tweet storms during the election, comments that were likely magnified by robots rather than real people: “The question is how do we deal with any of this? How do we deal with norms in journalism and reportage? What kind of basic rules can we give someone for examining a Twitter scenario to determine if it is real or fake? How can we help journalists see if they are being ‘false started’ or know what is real?”

But it’s not all doom and gloom. Faltesek warned against the all-too-common response of condemning all social media use.

“There is still a lot of strategic communication on these networks,” Faltesek said. “People need to find ways to have a meaningful online experience. Maybe it means Snapchatting with distant relatives or using these platforms more wisely. Unfortunately, now, they are more crass and full of propaganda.”

Faltesek noted the changes at Facebook and how the platform’s stock has been slowly declining. “I would like it if people could go back to using social networks to connect, but I don’t think it will get better. Facebook was a Titanic three years ago, seemingly indestructible. Now they are making huge changes and taking publisher content off the feeds, but they aren’t making changes to make the platform better.”

Should Social Networks Be More Moral?
One common critique of Facebook in particular is its insular nature: it traps people in echo chambers where all they are exposed to is content similar to what they already post or “like,” instead of presenting a wide range of perspectives in their daily feed.

When asked whether he thinks Facebook, and social media networks in general, have a moral responsibility to make their platforms more beneficial for the populace, Faltesek gave an example: the Spider-Man principle. He argues that people have a responsibility to make positive, moral decisions.

“In Spider-Man, Uncle Ben said ‘with great power comes great responsibility,’ and that motto can help you understand how to be a good journalist, teacher, or network manager. We have First Amendment protections because we expect that professionals will engage in meaningful conduct to make the world better, or show they have some discretion or judgment.”

“If you are operating under the theory that you don’t need to take responsibility, that is not how the public experience works,” Faltesek warned.

“If you abuse your power, it will go away. There was poor network management at Twitter, the same with Facebook, and now people are saying that they aren’t interested in it. Facebook does have a responsibility, and they haven’t lived up to it. They need to make sure that the content is emotionally engaging, that they do what is right,” Faltesek said.

When considering whether this kind of moral modulation would itself be ethical, Faltesek, like many other researchers and intellectuals, posits that there is no such thing as an objective social media network. Many people assume Facebook has no responsibility to show everyone multiple perspectives, and underlying that assumption is the idea that Facebook has simply allowed the platform to develop of its own accord.

No Such Thing as an Objective Platform
The reality is that Facebook, and every other website or social media network, has curated a certain type of experience. They decide which stories to show users. They create the algorithms that filter content. They choose how frequently to refresh the home page and how often to show users how many “likes” they got.
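To make that concrete, here is a toy feed-ranking sketch in Python; the fields, weights, and example posts are all invented, and no real platform’s formula is implied. The point is only that any scoring rule is a curatorial choice:

```python
# A toy feed-ranking sketch; the fields, weights, and posts are all
# invented, and no real platform's formula is implied. Any choice of
# score is an editorial decision: there is no "neutral" ordering.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    minutes_old: int

def engagement_score(post: Post) -> float:
    # Weighting shares twice as much as likes, and decaying by age,
    # are both choices; change them and the feed changes with them.
    return (post.likes + 2 * post.shares) / (1 + post.minutes_old / 60)

feed = [
    Post("measured policy analysis", likes=120, shares=10, minutes_old=120),
    Post("outrage bait", likes=90, shares=40, minutes_old=10),
]
feed.sort(key=engagement_score, reverse=True)
print([post.text for post in feed])  # the platform decided what comes first
```

Swap the weights or change the decay and a different post wins the top of the feed; no weighting counts as neutral.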

Faltesek said, “Facebook has always manipulated the feed. For the last seven years, either with researchers or partners, they decide what content people see and how they see it. There was never a time where it was not manipulated.”

“Facebook and social media networks aren’t the government. Facebook has always been a customized experience. They expected it would create an open space,” Faltesek said. 

“Censorship arguments ring hollow,” he added. “They don’t have an obligation to provide equal access to every person. They could absolutely get too aggressive, or too hard, and it would crash, but if you go too soft, it will drive off a cliff.”

In an interview on the Waking Up podcast, Tristan Harris, previously a design ethicist at Google and known as the ‘closest thing Silicon Valley has to a conscience,’ spoke about the attention economy that all media thrives on. He describes how, instead of selling a product, companies gain worth and money from advertisers based on how long they retain their audience’s attention.

From the newspaper to the television and now to social media, people’s attention is being split across so many places that companies are determined to keep it at all costs. Harris describes the phenomenon as an “arms race for human attention,” with every company using as many ploys as possible to lure people in and keep them hooked.

Harris describes how companies like Facebook and Twitter house these echo chambers because people like their beliefs or thoughts to be reinforced. They enjoy when others agree with them or “like” what they say. As it stands, what is clicked or shared most is not what is true or good. But that clicking and sharing is how the business grows.

Faltesek is no stranger to the idea of an attention economy, but he sees it in a brighter light than the world Harris depicts: “It’s a history lesson. We have had attention economies for 70 years, and it’s all about how the media controls people’s attention. Media is designed to be interesting, but if a media source doesn’t fill a niche for their users, the users go away.”

When asked whether he thought social media has an overall positive or negative impact on its users, his response was careful and noncommittal.

“History is important for this reason. Someone can come back when things cool off to see what happened. We don’t really know what the end result is going to be.”

If you found this story interesting, you may also like a forum titled “Beyond Fake News: How We Find Accurate Information About the World,” scheduled for 2:00 pm on Saturday, March 3, at the Corvallis-Benton County Library and sponsored by Oregon Humanities and the Friends of the Library.

By Kristen Edge

Do you have a story for The Advocate? Email editor@corvallisadvocate.com