Features

XXVIII.6 November - December 2021

A study of ballot anomaly detection with a transparent voting machine


Authors:
Juan Gilbert, Isabel Laurenceau, Jean Louis


Many components contribute to a safe and secure election, but experts broadly agree that one of the most important is risk-limiting audits on voter-verified paper ballots. Risk-limiting audits (RLAs) are performed on physical paper ballots after an election to confirm the results: the "audits examine individual randomly selected paper ballots until there is sufficient statistical assurance to demonstrate that the chance that an incorrect reported outcome escaping detection and correction is less than a predetermined risk limit" [1]. Ballot-marking devices (BMDs) allow voters to select candidates on a machine and print a physical ballot summary that can be tallied and used for a later audit. The paper ballots must be voter-verified before being cast so that if a vote has been tampered with, or if the voter has made an error, it can be rectified. The voter, after all, is the only one who knows their intention and whom they meant to vote for. A threat to the validity of the election arises if a vote printed on the ballot differs from what was selected on the machine. If the paper ballot contains an error, whether due to the fault of the voter, the machine, or a malicious actor, an RLA is moot: it may confirm the recorded results to the desired statistical threshold even though the voters' true intentions were not captured.
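RLA procedures vary (ballot polling, ballot-level comparison), but the sequential sampling logic is easy to see in miniature. Below is a minimal sketch in the style of the BRAVO ballot-polling audit for a two-candidate contest; the function name and the simplifications (a single contest, no invalid ballots) are ours, not part of any production audit tool.

```python
import random

def ballot_polling_audit(ballots, reported_winner_share, risk_limit=0.05):
    """Sketch of a BRAVO-style ballot-polling risk-limiting audit.

    Samples ballots in random order, updating a likelihood-ratio
    statistic T. If T reaches 1/risk_limit, the reported outcome is
    confirmed with risk at most risk_limit; otherwise the audit
    escalates to a full hand count.
    """
    T = 1.0
    threshold = 1.0 / risk_limit
    for n, i in enumerate(random.sample(range(len(ballots)), len(ballots)), 1):
        if ballots[i] == "winner":
            T *= reported_winner_share / 0.5       # ballot supports reported winner
        else:
            T *= (1 - reported_winner_share) / 0.5
        if T >= threshold:
            return f"outcome confirmed after {n} ballots"
    return "escalate to full hand count"

# Example: a 60/40 contest usually confirms after a few dozen ballots.
print(ballot_polling_audit(["winner"] * 600 + ["loser"] * 400, 0.6))
```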


To enable RLAs and increase voter verification while preserving the accessibility of BMDs, we propose a transparent voting machine (TVM) that uses a transparent interactive printing interface, combining the accessibility of a BMD with the security of a voter-verified paper ballot. The fundamental idea is to let users vote on an accessible BMD and view their printed ballot at the same time. The touchscreen interface of the BMD is transparent and sits directly above where the ballot summary is printed (see Figure 1). The screen's transparency allows users to interact with both the digital and physical ballots, making selections on the BMD and verifying what is printed as they vote, rather than only at the end.


Figure 1. Transparent voting machine showing confirmation and print screen.

The ballot summary is printed after the user selects a candidate in a contest. Between contests, the machine instructs participants to "Confirm your selection is correct. Touch your selection to continue." To verify that the printed ballot selection is correct, participants have to touch an area on the screen. Bernhard et al. conducted a study with a BMD and found that "[w]ithout intervention, only 40 percent of participants reviewed their printed ballots at all, and only 6.6 percent told a poll worker something was wrong" [2]. Kortum et al. found an overall detection rate of 17.6 percent; however, 76 percent of those who attempted to find an anomaly were successful, leading them to conclude that voters can detect changes "if they will only attempt to do so" [3]. Our findings indicate a 77 percent notice rate, a significant improvement over those previous results. They suggest that in an election using a transparent voting machine, it is unlikely that tampering could go unnoticed.
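As a concrete illustration of this select-print-confirm loop, here is a hypothetical sketch in Python. Console prompts stand in for the touchscreen and printer; none of these names come from the actual TVM software.

```python
def run_contest(title, candidates):
    """One contest on the TVM: select on screen, watch the line print,
    then confirm by touching the printed selection (Enter, here)."""
    for i, name in enumerate(candidates, 1):
        print(f"  {i}. {name}")
    choice = candidates[int(input(f"{title} - enter a number: ")) - 1]
    print(f"[printer] {title}: {choice}")  # summary line appears beneath the glass
    input('"Confirm your selection is correct. '
          'Touch your selection to continue." [Enter]')
    return choice

if __name__ == "__main__":
    ballot = [("U.S. President", ["Candidate A", "Candidate B"]),
              ("Governor", ["Candidate C", "Candidate D"])]
    summary = [run_contest(title, field) for title, field in ballot]
    print("Ballot summary:", summary)
```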

Method of Study

Participants were told the study's purpose was to evaluate whether voter sentiment had changed since the 2018 midterm and 2020 presidential elections. Once they agreed to participate and gave their consent, they completed a combined 2018/2020 Florida ballot on the transparent voting machine. Participants were not debriefed on the study's true nature until after they had completed the voting scenario. As in previous studies, this deception was deliberate, to test whether flipped votes would be noticed.

Each participant had one of their 12 contest votes flipped, with the first four contests set to be flipped more often than the other eight. Therefore, more participants saw votes flipped in the U.S. president, U.S. senator, U.S. Congress representative, or governor contests than in contests farther down the ballot; a sketch of this weighted assignment follows.
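The exact weights used to bias flips toward the top of the ballot are not reported. A sketch of how such weighted assignment might look, with an assumed 3:1 weight that roughly matches the 97-of-151 share of flips that landed in the first four contests:

```python
import random

def pick_contest_to_flip(n_contests=12, first_four_weight=3.0):
    """Pick the contest whose vote will be flipped, biased toward the
    first four contests. The 3:1 weight is our assumption, not the
    study's reported parameter."""
    weights = [first_four_weight if i < 4 else 1.0 for i in range(n_contests)]
    return random.choices(range(n_contests), weights=weights, k=1)[0]
```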

As participants voted, if they noticed a flip via the transparent voting machine and said so to the researcher, they stopped voting and were classified as Vocal. Participants who did not say anything about the flip completed the entire ballot before a brief post-interview, in which they were asked two questions. The first was, "Did you notice one of your votes was flipped?" This was followed by, "Could you tell me which one?"

Figure 2 shows the classification methodology. If a participant responded in the affirmative to the first question and correctly identified the flipped contest, they were classified as Quiet. Participants who answered no to the first question but could identify the flipped contest on the ballot summary were marked as Paper. Participants who answered no to the first question and could not identify the change were marked as None. Participants who answered yes to the first question but could not correctly identify the flipped contest were classified as Incorrect.

Figure 2. Classification methodology. The flowchart classifications are as follows: 1) noticed when the flip occurred; 2) noticed, but didn't say anything; 3) identified change on paper after prompt; 4) did not notice or identify change; and 5) said they noticed the change, but didn't correctly identify the change.
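The decision logic in Figure 2 can be written directly as a small function; the parameter names are ours, chosen to mirror the two post-interview questions.

```python
def classify(spoke_up_while_voting, said_yes_when_asked, identified_flipped_contest):
    """Map a participant's responses to the study's five classes (Figure 2)."""
    if spoke_up_while_voting:
        return "Vocal"        # 1) noticed when the flip occurred
    if said_yes_when_asked:
        return "Quiet" if identified_flipped_contest else "Incorrect"  # 2) / 5)
    return "Paper" if identified_flipped_contest else "None"           # 3) / 4)
```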

Participants. One hundred fifty-one adults ranging in age from 18 to 78 participated. The average age was 27, with a standard deviation of 13.3 years. Ninety-five participants were between the ages of 18 and 23; 31 were between 24 and 37; and 25 were 40 or older, including six participants who were 50 or older and eight who were 60 or older. Of the participants, 54 percent identified as women and 46 percent as men. Twelve identified as Asian, 24 as Black/African American, nine as Hispanic, and 105 as white/Caucasian; one did not specify a race.

Participants were recruited through email and by the researcher on-site. Five locations were used to diversify the participant pool. Eight participants were recruited from a community barbershop, 48 participants from a health and fitness club in the community, 73 participants from a library on a college campus, seven participants from an office suite on a college campus, and 15 participants from a student union on a college campus. To participate, the individuals had to be 18 or older and speak English. Each participant received a $20 gift card for participating in the study.

Results per classification. The largest group of participants (41.1 percent) was classified as Quiet. The next largest was Vocal, at 35.8 percent. These two groups are combined for further analysis as the Noticed group (77 percent). Twenty-four participants (15.9 percent) were placed in the Paper category, and 10 (6.6 percent) were classified as None. Only one participant fell into the Incorrect classification (see Table 1).

Table 1. Results per classification.

Results per contest. A chi-square test showed a statistically significant association between which contest was flipped and Noticed (p=0.0019). A Pearson's correlation analysis yielded a moderate negative correlation of -0.5261 between contest position and Noticed. These results imply that a participant's likelihood of noticing depends on which contest was flipped: the farther down the ballot the flipped contest is, the less likely the flip is to be noticed.

Though contests were flipped randomly, those in the first third of the ballot were more likely to be flipped, and the first contest was most likely of all. It is therefore worth looking at correlations within sections of the ballot, specifically thirds. Of the 97 participants whose flip fell in contests 1 to 4, 85 (88 percent) noticed. Contests 5 through 8 were flipped 29 times, and 15 participants (52 percent) noticed. Twenty-five participants had contests 9 to 12 flipped, and 16 (64 percent) noticed. Within the first third (contests 1 to 4), the Pearson's correlation was -0.8913; within the second third, -0.9774; and within the final third, 0.2730. A sketch of these analyses appears below.
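For readers who want to reproduce this style of analysis, the sketch below runs a chi-square test and a Pearson correlation with SciPy on the per-third counts quoted above. The published statistics were computed at the contest level, so this aggregated version illustrates the method rather than reproducing the exact values.

```python
from scipy.stats import chi2_contingency, pearsonr

# Noticed vs. not noticed, aggregated by ballot third (from the text):
# contests 1-4: 85 of 97; contests 5-8: 15 of 29; contests 9-12: 16 of 25.
table = [[85, 12],
         [15, 14],
         [16,  9]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

# Correlation of ballot position with notice rate (thirds only here;
# the study correlates individual contests).
thirds = [1, 2, 3]
rates = [85 / 97, 15 / 29, 16 / 25]
r, p_r = pearsonr(thirds, rates)
print(f"Pearson r = {r:.4f}")
```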

Even within the first third of contests alone, there is a strong negative correlation: noticing becomes less likely farther down the ballot. One could argue from this that an adversary could tamper with an election by flipping votes near the end of the ballot. This is not necessarily the case, however; when only the last eight contests are considered, the correlation is 0.1474. This weaker correlation and change of sign indicate that contests at the end of the ballot are not especially easy to tamper with. This particular ballot comprised a mix of contests from the 2018 and 2020 Florida elections and followed the relative order in which contests would appear in a general election. The last contest was a binary contest for an amendment. It had fewer options than the other contests, which can make a change easier to detect. In fact, the notice rate for contest 12 is 57 percent, above the 50 percent that could be attributed to chance for a binary contest, and it is not the contest with the lowest notice rate (that is contest 8, at 29 percent; see Figure 3).

Figure 3. Percent of participants within each contest who noticed the flip. Noticed consists of both the Vocal and Quiet groups. Chance is shown as a constant 50 percent, treating noticing and not noticing as equally likely for every contest.

Results by location and demographics. Five locations were used during the study. Fifty-seven (78 percent) of the library participants noticed the change, while only three (38 percent) noticed at the barbershop. Thirty-six participants (75 percent) noticed at the gym, 13 (87 percent) at the student union, and seven (100 percent) at the office suite. A chi-square analysis reveals a statistically significant association between location and Noticed (p=0.0399). This may be due to the different distraction levels at the locations. Gender, age, and race/ethnicity were not significantly associated with Noticed (p=0.0596, 0.492, and 0.1389, respectively).

Time results. The average time for participants in None was 3 minutes and 48 seconds; participants in Paper took 3 minutes and 20 seconds; and participants in Quiet took 3 minutes and 26 seconds. Participants in Vocal stopped the voting process early, so their times are not included in the overall average. Time data for those who noticed and completed the ballot (Paper and Quiet) versus those who did not notice (None and Incorrect) appear in Figure 4.

Figure 4. Time comparison of Noticed versus Did Not Notice. Noticed and completed the ballot consists of Paper and Quiet. Did Not Notice consists of None and Incorrect.

To determine whether there was any significant difference in voting time between those who noticed and completed the ballot and those who did not notice, a Mann-Whitney U test with a null hypothesis of no difference was run between the two groups. It returned U=388.5, z=0.956, and p=0.339. The null hypothesis was therefore not rejected: there is no statistical difference in completion time between the groups. These results show that even with a novel technology such as the transparent voting machine, the verification process does not add extra time; indeed, those who took the longest to vote on average were those who never identified the flip (None).
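A minimal sketch of this test with SciPy, using made-up completion times since the study's raw timing data are not published:

```python
from scipy.stats import mannwhitneyu

# Placeholder times in seconds; only the group averages (roughly 3:23
# vs. 3:48) are constrained by the article.
noticed_and_completed = [200, 206, 195, 212, 198]   # Paper + Quiet
did_not_notice = [228, 215, 240, 231]               # None + Incorrect
u, p = mannwhitneyu(noticed_and_completed, did_not_notice,
                    alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")   # p > 0.05: fail to reject the null
```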

Implications and Impacts

Given the overall notice rate of 77 percent (excluding notices on paper), it is unlikely an election could be hacked such that the hack would go unnoticed. Recall that 77 percent of participants noticed the flip on the machine; this includes participants who spoke up (Vocal) and participants who later told researchers of the flip (Quiet). The main takeaway for the community, then, is that tampering that flips votes on the paper of a transparent voting machine implementing this interface would be noticed by voters. For this analysis, we did not count those who noticed on paper toward our minimum detection rate. In a general election, however, voters would be handed their physical ballot summary once they finished on the BMD. When asked, 24 participants (15.9 percent) noticed the flip once they had the physical ballot. When all participants who noticed on paper are included, the detection rate rises to 92.7 percent. It has been stated that to avoid rerunning an election, a detection rate of 80 percent is needed with a 0.5 percent error margin [2]. Without any intervention, only 40 percent of participants in Bernhard et al.'s study reviewed their printed ballots. If we follow the same distribution and count only 40 percent of our participants who noticed on paper (our protocol included the intervention of the question), that leaves 9.6, or about 10, people who would have noticed. This increases the Noticed population to 126 of the 151 participants, or 83.4 percent. The TVM, paired with physical ballot review, thus surpasses the 80 percent threshold needed to avoid rerunning an election as defined in [2]. The arithmetic is spelled out below.
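Here is the calculation in a few lines of Python; the Vocal and Quiet counts (54 and 62) are back-computed from the percentages in Table 1.

```python
total = 151
noticed_on_machine = 54 + 62          # Vocal + Quiet
noticed_on_paper = 24                 # Paper

print(noticed_on_machine / total)     # 0.768 -> the 77 percent machine rate
print((noticed_on_machine + noticed_on_paper) / total)   # 0.927 -> 92.7 percent

# Assume only 40 percent review their paper ballot, per Bernhard et al. [2]:
reviewers = round(0.40 * noticed_on_paper)               # 9.6 -> 10 voters
print((noticed_on_machine + reviewers) / total)          # 0.834 -> 83.4 percent
```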


More than 40 percent of participants noticed a flip but did not say anything. They reported not speaking up because it was only a study; they did not believe the stakes were high enough to warrant telling the researchers unprompted, but said they would have spoken up in an actual election. This behavior, in which subjects change their behavior "due to their awareness of being observed," is known as the Hawthorne effect [4,5].

The transparent voting machine is easy to use across demographics. Although a TVM is a novel technology, a transparent screen paired with a printer to produce a ballot, only five participants (3.3 percent) asked how to use the machine. After brief instructions that the voting machine is a touchscreen, all participants successfully and independently used the TVM. This, coupled with the fact that there was no significant increase in time to use the TVM, points to its user-friendly design. In Byrne's observation of the 2019 VSAP mock election, voters who appeared to verify their ballots took 2 minutes and 10 seconds longer than peers who did not [6]. Though participants who noticed the flip on the TVM and spoke up (Vocal) did not complete the entire ballot, participants who stayed quiet until after completing the entire ballot (Quiet) still finished faster than those who never noticed the flip (None).

Conclusion

We continue to iterate on the transparent voting machine. Future versions and studies will change the print-confirmation prompt from "Confirm your selection is correct. Touch your selection to continue" to "Confirm your selection is correct. If it is not, please notify an official. Touch your selection to continue." This change draws on what we learned here about the Hawthorne effect, where subjects who noticed a discrepancy were still reluctant to speak up. Future studies will also vary how often voters are asked to confirm their printed ballots; for example, variations may have voters confirm after every two or three contest selections. This may help where ballots are very long, but further study is needed.

Voting security is a critical area of research. However, security cannot come at the cost of accessibility. A 2011 study of voting experiences among legally blind users found that because many participants had to use separate accessible machines, poll workers were not trained to set up the audio or otherwise adequately assist legally blind voters in operating them. Of the participants, 24.4 percent said that poll workers' attitudes were "an obstacle that they feel makes it difficult for them to vote" [7]. This can depress future voter turnout in disabled communities: one survey found that people with disabilities were less likely to vote than their peers without disabilities [8]. And when accessible technologies are separated from the standard-use technology, they are not maintained and updated as frequently, which can create security risks. The TVM presented here is built on the open-source accessible Prime III technology discussed in Gilbert et al. [9].

Though the risks of current technological methods deserve consideration, the need for technological interventions is just as important. We therefore propose more research into technological solutions that are safe, secure, and usable by all. To this end, we introduced a transparent voting machine. We believe the findings show it is possible to provide a ballot-marking device for all voters, independent of their abilities, and to secure our elections.

Acknowledgments

This material is based upon work supported by the U.S. Election Assistance Commission (EAC). Opinions or points of view expressed in this document are those of the authors and do not necessarily reflect the official position of, or a position that is endorsed by, the EAC or the federal government. This material is also based in part upon work supported by the National Science Foundation under grant numbers IIS-0738175, DGE-1315138, and DGE-1842473. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

References

1. National Academies of Sciences, Engineering, and Medicine. Securing the Vote: Protecting American Democracy. The National Academies Press, Washington, DC, 2018; https://doi.org/10.17226/25120

2. Bernhard, M. et al. Can voters detect malicious manipulation of ballot marking devices? Proc. of IEEE Symposium on Security and Privacy. IEEE, 2020, 679–694; https://doi.org/10.1109/SP40000.2020.00118

3. Kortum, P., Byrne, M.D., and Whitmore, J. Voter verification of BMD ballots is a two-part question: Can they? Mostly, they can. Do they? Mostly, they don't. Election Law Journal: Rules, Politics, and Policy, ahead of print, 2020; https://doi.org/10.1089/elj.2020.0632

4. Landsberger, H.A. Hawthorne Revisited: A Plea for an Open City. Cornell University, 1957.

5. Oxford Dictionary. Definition of Hawthorne effect. Lexico.

6. Byrne, M.D. VSAP Mock Election Observation Report to the Los Angeles County Registrar-Recorder/County Clerk. 2020; http://chil.rice.edu/research/pdf/VSAP_MockElectionReport.pdf

7. Piner, G.E. and Byrne, M.D. The experience of accessible voting: Results of a survey among legally-blind users. Proc. of the Human Factors and Ergonomics Society Annual Meeting 55, 1 (2011), 1686–1690; https://doi.org/10.1177/1071181311551351

8. Schur, L., Ameri, M., and Adya, M. Disability, voter turnout, and polling place accessibility. Social Science Quarterly 98 (2017), 1374–1390; https://doi.org/10.1111/ssqu.12373

9. Gilbert, J.E. et al. Universal access in e-voting for the blind. Univ Access Inf Soc 9 (2010), 357–365; https://doi.org/10.1007/s10209-009-0181-0

Authors

Juan E. Gilbert is the Andrew Banks Family Preeminence Endowed Professor and chair of the Computer & Information Science & Engineering Department at the University of Florida, where he leads the Human-Experience Research Lab. juan@ufl.edu

Isabel Laurenceau is a Ph.D. student in computer science at the University of Florida. She works in the Human-Experience Research Lab under Juan Gilbert. She received a master's degree in computer engineering from the University of Florida in 2019. Her research focuses on affective computing, wearable devices, and tech policy. isalau@ufl.edu

Jean Louis is a Ph.D. student in computer science at the University of Florida studying under Juan Gilbert in the Human-Experience Research Lab. In 2019 he became a National Science Foundation graduate research fellow and a Graduate Education for Minorities fellow. His research interests include brain-computer interfaces, human-computer/robotic interaction, and the Internet of Things. jeandlouis1@ufl.edu


©2021 ACM  1072-5520/21/11  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
