When University of Wisconsin-Madison professor Young Mie Kim and her team set out to research divisive political ads on Facebook during the 2016 election, they embarked on a first-of-its-kind study of how groups try to target and influence voters. What they found — that more than half of these ads came from "suspicious" groups with little to no identifiable information — has led Kim to spend six months at the Washington, D.C.-based Campaign Legal Center, where she will research and advocate federal solutions to the issue of digital political advertising.
Kim's research has attracted a flood of national attention, particularly for its findings that one-sixth of ads run by "suspicious," untraceable groups had ties to Russia. She spoke with the Cap Times about her team's process and findings, and the changes she recommends.
What was your plan when you started this research?
I have been studying passionate publics who care about an issue based on their identity, values or self-interest … and their election behavior for almost my entire career. But now that political leaders, especially non-party groups like advocacy groups, have data and targeting capacity, I was wondering about how political campaigns would reach out to these people. I expected to see some dark money groups involved in these issue campaigns, because we have seen that political campaigns are increasingly more issue-based rather than candidate-based. Given that we don’t have regulatory guidance for digital political campaigns, there will be a lot of unknown actors including dark money groups, and there might be some foreign groups. That was my expectation.
When did you realize you were also observing Russian involvement in U.S. elections?
We found a lot of suspicious groups, and one out of six suspicious groups turned out to be Kremlin-linked Russian groups, verified by data from the House Intelligence Committee.
This study is not just about Russian propaganda. The remaining suspicious groups, that's the question I am dying to answer. Basically, suspicious groups are groups that are unidentifiable and untraceable beyond a certain point. We can't find a public footprint for those groups. We looked at FEC data and IRS data, and did our own research, and we couldn't find any information about them.
What kind of content did you see in these ads?
The study was focused only on issue-based campaigns, so all of the ads we were looking at were campaigns designed to sow division within the public. We looked at ads on wedge issues like abortion, LGBT rights, nationalism, the alt-right and terrorism, and we also included candidate scandals like Donald Trump's Access Hollywood tape and the Hillary Clinton email server scandal. Some of the ads seemed to be coordinated, because we saw some of the exact same ads run by seemingly very different groups.
What did these efforts look like in Wisconsin?
We found clear geographic and demographic targeting of divisive issue campaigns by these anonymous groups. The most heavily targeted states were Wisconsin and Pennsylvania. Generally, anonymous groups target battleground states, but it is notable that Pennsylvania and Wisconsin were the states most targeted by divisive issue campaigns. There has been a lot of discussion about whether Russian groups or Cambridge Analytica influenced outcomes, and it is really difficult, almost impossible, to pin down how much influence these groups exerted on election outcomes, whether they successfully persuaded people.
The average audience reach of divisive issue campaigns in targeted states was about 8 percent in Pennsylvania and 3 percent in Wisconsin — so 3 percent of all Wisconsin voters were exposed to the divisive issues we examined. If you think about the Wisconsin vote margin, the state used to be a Democratic stronghold, but it turned to support Trump by a razor-thin margin. The vote margin was 0.8 percent in Wisconsin and 0.7 percent in Pennsylvania. I cannot claim that the ads directly influenced the election outcomes, but I think it's notable that these two states were especially targeted.
Can you do more research to determine whether the ads did influence outcomes?
Yes. We developed an ad tracking tool that worked like an ad blocker, but instead of blocking ads, it automatically detected each ad and transferred it to our research server. Our technique is a sort of reverse engineering. We have the ad data and metadata about where each ad appeared, when, and to whom. We have anonymized user IDs, and we also surveyed the demographics and political profiles of the nearly 10,000 users who participated in this study.
You asked whether we are trying to pin down whether these ads influenced election outcomes: turnout, willingness to vote for a particular candidate, or general preferences. One way to make these causal inferences is panel data analysis. In other words, we ask the same questions over time, observe the changes, and then map the changes in candidate preferences, for example, onto ad exposure patterns. That way we come one step closer to causal inference.
Among those 10,000 people, 1,200 participated in a four-wave panel study that ran through Election Day. We are now working on that data set. We haven't analyzed anything yet.
Do you have policy recommendations based on what you observed?
The reason I'm in D.C. is to have a direct, immediate impact with my research on the policymaking community.
People still don't know that there is no law that adequately addresses digital platforms and digital political campaigns. I think the digital loophole explains a lot. This is not about content regulation or censorship. This is about transparency. Disclose all the groups that engage, just like on television or broadcast media: clearly identify who you are, and report your activities to the Federal Election Commission. That's it.
The Honest Ads Act is a bicameral, bipartisan effort that would extend broadcast media's political advertising disclosure and disclaimer rules to digital. It is a very important first step that would provide rules for digital platforms. Another component of the bill is that large tech companies' platforms would need to archive political ads that touch on national political issues.
Facebook also announced new transparency measures, and I think this is a first step in the right direction, but it still shows some lack of understanding of what exactly happened in the first place. For example, Facebook said it will verify the identities and addresses of top groups posting political ads, but my research shows that these kinds of campaigns are run by a number of small niche groups. It's not like a big campaign run by a top page. If we only verify top pages, we might not be able to effectively fight back against foreign actors.
Should the government step up its regulatory efforts? Should it be up to tech companies?
There should be some effort made at both levels, but I don't have a lot of trust in self-regulation. Now we have the Cambridge Analytica scandal, exposed by whistleblowers. If that hadn't happened, Facebook would not have come up with these transparency measures. Do we want to hand over our rights to gigantic tech companies? Political ads and issue ads have been a subject of research for decades, and we still debate what a political ad is and what an issue ad is. Do we really want tech companies to determine those things? Probably not.
Is there more that voters should do to evaluate the messages they're getting?
I wouldn't blame voters. We are living in an information flood, and it's really hard to verify the sources of information. I think there must be clear policy guidelines on disclosure and disclaimers so voters can make an informed decision, and then tech platforms should follow those guidelines, archive the ads and make their best effort to build a more transparent platform. A lot of people are saying voters have a responsibility to verify information, but I personally think we are asking too much. Especially since we are living in an algorithmic age, voters don't know why they are getting particular messages or particular ads.
Sometimes when academic research gets a lot of publicity, it can get a little watered down. Is there anything about your study that you'd like to clarify?
A lot of people are interested in the Russian ads, and I totally get that. But I'm trying to put this study into a larger context. One out of six of our suspicious groups were Russian groups, and we are fortunate that we now know they are Russian groups. But what about the rest of the groups that still remain unidentified? They could be foreign actors; they could be domestic actors. The more important issue is what kind of transparency measures we have so researchers, journalists and lawmakers can track and investigate foreign involvement in campaigns, and who is trying to influence voters' decisions.
The second issue I want to emphasize is that this is peer-reviewed research, the first of its kind, so it took a lot of time to get published. This is different from other types of big data research, and we are very grateful to the volunteers, the participants, who helped us discover these important things.