FAQs


How does AuditEngine work?

AuditEngine analyzes ballot images captured by the voting system scanners and provides a detailed, independent tabulation of the results. AuditEngine then reconciles this independent tabulation with the results of the voting system to find the exact ballots where the two evaluations differ, and produces comprehensive reports.

What data is needed to run audits?

AuditEngine requires only a few data items to be exported by the election system. This makes our auditing solution more independent than options that require more data. For ballot image audits, the data we need is, at a minimum, the following:
  • Ballot images, combined into ZIP archives of up to about 50K ballots each, provided as soon as possible after election day. AuditEngine can then read the votes from these ballots to produce an independent tabulation. If there are no images, AuditEngine can scan the ballots if access is allowed.
  • (Desirable) Hash values of image files -- A hash value is a short code calculated from a file; any change to the file gives it a different hash value. Hashes are created by election machines when they scan ballots. If the hash values of image files are made public, on paper and in a time-stamped archive, as soon as the image files are created, they prevent hidden changes to the image files at any later stage.
  • (Desirable) Cast Vote Record (CVR) files -- These provide the voting system results for each ballot, typically in xlsx format (ES&S) or JSON format (Dominion). Only after AuditEngine completes its independent tabulation are we provided with the voting system's official evaluation of each ballot, and AuditEngine then produces reports of any disagreements.
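As a simplified illustration of that ballot-by-ballot comparison, the sketch below matches an independent tabulation against a CVR export. The file names, JSON layout, and field names are hypothetical; real CVR exports (especially Dominion's nested JSON) are considerably more involved.

    import json

    # Load the voting system's cast vote records (hypothetical simplified
    # layout: a list of ballots, each with an id and a votes mapping).
    with open("cvr.json") as f:
        cvr = {b["ballot_id"]: b["votes"] for b in json.load(f)["ballots"]}

    # Load the independent tabulation (same hypothetical layout).
    with open("independent_tabulation.json") as f:
        audit = {b["ballot_id"]: b["votes"] for b in json.load(f)["ballots"]}

    # List every ballot where the two evaluations differ.
    for ballot_id, votes in audit.items():
        official = cvr.get(ballot_id)
        if official is None:
            print(f"{ballot_id}: ballot missing from CVR")
        elif official != votes:
            print(f"{ballot_id}: audit={votes} cvr={official}")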
If we can get ballot layouts before election day, we call it a "Cooperative Workflow" methodology. Layouts allow us to provide quick-turnaround results, usually within 24 hours, even before receiving the CVRs.
  • Ballot Style Masters (BSMs) are the files delivered to printers to print blank ballots. We need these as PDFs in searchable format, with all timing marks and barcodes. They are used to create a "target map" of the ovals to be marked by the voter on each ballot style. We can process these prior to the election in a nearly fully automated workflow, with review of proofs to ensure our mapping is correct.
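For illustration only, here is a minimal sketch of the kind of information a target map might hold; the field names and coordinates are hypothetical, not AuditEngine's actual format.

    # Hypothetical structure of a "target map" for one ballot style:
    # each entry records where an oval sits on the page and which
    # contest and option a mark at that location would select.
    target_map = {
        "style_0012": [
            {"contest": "Mayor", "option": "Jane Smith",
             "page": 1, "x": 212, "y": 845, "w": 34, "h": 18},
            {"contest": "Mayor", "option": "John Doe",
             "page": 1, "x": 212, "y": 905, "w": 34, "h": 18},
        ],
    }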

If we do not get ballot layouts before election day, we call it a "Public Oversight" workflow. If we don't get the Ballot Style Masters at all, it may take longer to map the ovals on all ballot styles, but we have a special app for that step. In addition to this core operational mode, AuditEngine can also support:
  • Verification Images -- Someone chooses specific contests, precincts, or batches, which are scanned on non-voting-system scanners and compared with the same groups as reported in the CVR and aggregated reports.
  • Digital Poll Tapes Audit -- For Election Systems & Software (ES&S), the vendor with the most deployed systems, we can also run a "Digital Poll Tapes Audit", which parses the batch-by-batch summaries ("digital poll tapes") that can be exported from the ES&S election management system for each machine used in early voting or on election day, and compares these with the aggregated totals.
  • Hand-count Result Comparisons -- Hand-counted results can be imported and included in the same comparison tables.
  • Image Repeats Detection -- AuditEngine can find exact repeats of images, which likely means they were loaded twice; this is now a standard part of all our audits (a minimal sketch follows this list).
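A minimal sketch of how exact image repeats can be detected, assuming a hypothetical folder of TIFF images: hashing the raw bytes of each file means any byte-for-byte duplicate lands in the same group.

    import hashlib
    from collections import defaultdict
    from pathlib import Path

    # Group ballot image files by the hash of their raw bytes; any group
    # holding more than one file is an exact repeat, most likely an
    # image that was loaded twice.
    groups = defaultdict(list)
    for path in Path("ballot_images").rglob("*.tif"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups[digest].append(path)

    for digest, paths in groups.items():
        if len(paths) > 1:
            print(f"repeat x{len(paths)}: {[p.name for p in paths]}")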

What reports does AuditEngine generate?

There are three categories of reports:
  1. Reports useful for members of the public, candidates, campaigns, and election officials to quickly compare the overall results of AuditEngine with the voting system results.
  2. Reports useful for diagnosing issues with the results and digging into specific cases of ballots where AuditEngine disagrees with the voting system. This category also provides an overview of all contests and highlights any that are close.
  3. Reports primarily to document and provide proof of our results.
For most people looking at the results, the "Totals Report by Contest" will be the most important, because it lists, for each contest and each candidate or option, the vote count as provided by the voting system next to the vote count independently generated by AuditEngine.
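To illustrate the idea (with made-up counts and a hypothetical layout, not the actual report format), such a comparison simply lines up the two counts for each contest and option and flags any variance:

    # Made-up counts in a hypothetical layout, keyed by (contest, option).
    voting_system = {("Mayor", "Jane Smith"): 10412,
                     ("Mayor", "John Doe"):   9876}
    audit_engine  = {("Mayor", "Jane Smith"): 10415,
                     ("Mayor", "John Doe"):   9874}

    for contest, option in sorted(voting_system):
        vs = voting_system[(contest, option)]
        ae = audit_engine.get((contest, option), 0)
        flag = "" if vs == ae else "  <-- variance"
        print(f"{contest:10} {option:12} system={vs:6} audit={ae:6}{flag}")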
For a full list of reports, click here

How can we trust the result of the audit by AuditEngine?

We don't expect observers to automatically trust our results. Trust is earned, and we realize it can take time. If we do find issues, however, they will be readily accepted, because the issues can easily be checked and proven to be true.
In some cases, AuditEngine identified issues in the voting system results and officials ordered a hand count. AuditEngine's independent result was within +/-3 votes of each item in the hand count (Monmouth County, NJ, 2022 election).
It is harder to prove there are no issues. But we do provide detailed reports and supporting files, so we can prove each step we perform for any ballot chosen for further review. We don't claim that the election results are correct; we can only prove whether the voting system results are consistent with the ballot images.
The AuditEngine auditing system is simple in concept. We read the votes from each ballot image and create an independent tabulation. Our system provides complete transparency, so you can take any ballot and follow it through the system.
The system will find "disagreements", where AuditEngine interpreted the marks on the ballot differently from the original voting system. We can manually inspect the ballot images with disagreements and confirm how those ballots should be interpreted, and, if the jurisdiction supports it, dig into the original paper ballots and find those exact ballots.
Our case studies show that when AuditEngine disagrees with the voting system about how the marks should be interpreted, AuditEngine correctly interprets the marks about 93% of the time; in the remaining instances, it is generally the case that neither system can correctly interpret them. If the voting system was programmed correctly, disagreements amount to fewer than 0.25% of ballots (a quarter of one percent), depending on how heavily the voting system results were manually adjudicated.
AuditEngine also raises "Gray Flags" for any vote where we made an educated guess or where the vote is ambiguous.
AuditEngine tends to find incorrectly interpreted undervotes, where the voter made a mark that was intended for the candidate but was not sufficiently inside the target oval for the voting system to accept it as a valid mark.
AuditEngine can also usually correctly interpret "hesitation marks" and "scratch-outs", which election systems interpret as overvotes (votes for too many candidates) even though the voter's choice of the right number of candidates is clear to AuditEngine and to human review.
AuditEngine uses an "adaptive threshold" method, which evaluates each mark relative to the other marks on the same ballot, the overall darkness or lightness of the ballot itself, and the habits of that particular voter.
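The actual algorithm is not reproduced here, but the following greatly simplified sketch conveys the adaptive-threshold idea: each oval is judged relative to the lightest and darkest ovals on the same ballot rather than against one fixed cutoff, and ambiguous marks get a gray flag for human review.

    # Greatly simplified: judge each oval's darkness relative to the
    # lightest (background) and darkest (a real mark) ovals on the same
    # ballot, rather than against one fixed cutoff for every ballot.
    def classify_marks(darkness_by_option, margin=0.25):
        values = sorted(darkness_by_option.values())
        background, darkest = values[0], values[-1]
        span = max(darkest - background, 1e-6)
        results = {}
        for option, darkness in darkness_by_option.items():
            score = (darkness - background) / span  # 0 = blank, 1 = full mark
            if score >= 1 - margin:
                results[option] = "marked"
            elif score <= margin:
                results[option] = "unmarked"
            else:
                results[option] = "gray flag"  # ambiguous: human review
        return results

    print(classify_marks({"Smith": 0.82, "Doe": 0.08, "Roe": 0.10}))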

How do we know the ballot images have not been altered?

The proper ballot images from the election department, as exported by the "election management system" (EMS), must be uploaded to the secure cloud data center used by AuditEngine. Once the files are uploaded, their hash values can be read directly from the listing of each file without any further processing. These hash values can be compared with the values produced by the same calculation performed by election officials, to confirm that the image files are identical. The use of these secure hashes is commonplace and a well-respected methodology.
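For example, anyone can perform this calculation with a few lines of code; the sketch below uses SHA-256 (one common choice of secure hash) on a hypothetical archive name:

    import hashlib

    # Compute the SHA-256 hash of a file in chunks; changing even one
    # byte of the file produces a completely different hex string.
    def sha256_of_file(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                h.update(chunk)
        return h.hexdigest()

    # A matching value computed independently by election officials
    # confirms the two copies are byte-for-byte identical.
    print(sha256_of_file("ballot_images_batch_001.zip"))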
For additional detail, click here

Does AuditEngine ever fail to process ballot images?

Yes. We find that some (very few) ballot images are distorted or poorly created by the voting system. This is particularly true with some older ES&S equipment. In such a case, the ballot image is tagged by AuditEngine for manual review using our AdjudiTally App to determine how the voter voted. However, the number of ballots that are this badly distorted is usually quite small. If there are many distorted ballots, we can flag the particular voting system scanners involved for retirement or maintenance.

How much time do you need in advance of the election to set up AuditEngine?

For audits conducted by the public using publicly available information, AuditEngine is typically deployed after the election, when the results and ballot images have been finalized or at least semi-final results have been published. In this case, we don't set up before the election at all. Experience from prior audits with the details and nuances of a given area is helpful, as is familiarity with the specific methods used in any given jurisdiction. This is called the Public Oversight Workflow.
Using our "Cooperative Workflow", we can expedite our results by doing more work before the election. Specifically, we need to receive the Ballot Style Masters in advance so that the mapping process can be completed prior to the election and AuditEngine can quickly process the ballot images within the tight time constraints after the election.
Only after AuditEngine produces its independent evaluation of the election is the CVR provided, so the two results can be compared on a ballot-by-ballot basis and any discrepancies highlighted. The election office may want to use these results to fine-tune the accuracy of the election prior to certification.

Does AuditEngine also audit "Ballot Marking Device" (BMD) ballot summary sheets?

Yes. AuditEngine "reads" the printed "voter verifiable" text rather than relying on the unverifiable barcodes. BMD ballots are printed by systems that use touch screens to let the voter make selections, then print a voted-selection summary card. This card, or sheet, includes barcodes that provide a machine-readable representation of the voter's selections. These barcodes are essentially impossible for voters to verify; voters can only verify their selections as printed in the summary text. Thus, the part verified by the voter is not the part read by voting systems.
It is possible for a BMD system to be misconfigured so that it shows one vote in the readable summary while the barcodes represent a different vote.
AuditEngine stands alone in the field of ballot image auditing because we perform OCR on the printed selections to determine the vote on the ballot, rather than relying on the barcodes or skipping the BMD summary cards altogether. Because we compare that result with the official result in the Cast Vote Record, this puts a check on the possibility that the barcodes say one thing while the text says another.
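As a conceptual sketch only (AuditEngine's production pipeline is not shown here), the text-versus-barcode check can be illustrated with the open-source Tesseract OCR engine; the file name, crop coordinates, and CVR entry below are hypothetical:

    from PIL import Image   # pillow
    import pytesseract      # Python wrapper for the Tesseract OCR engine

    # OCR the printed, voter-verifiable selection text on a BMD summary
    # card; the file name and crop coordinates here are hypothetical.
    card = Image.open("bmd_summary_card_0042.png")
    selections = card.crop((100, 400, 1100, 1600))   # region with the text
    printed_text = pytesseract.image_to_string(selections)

    # Compare what the voter could verify (the printed text) with what
    # the voting system tallied from the barcode (via the CVR).
    cvr_choice = "Jane Smith"   # hypothetical CVR entry for this contest
    if cvr_choice not in printed_text:
        print("disagreement: printed text does not match the CVR choice")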

What voting systems can you analyze images from?

Currently, we support the three leading voting system vendors: Election Systems & Software (ES&S), Dominion Voting Systems, and Hart InterCivic. We prefer working with the latest generations of these systems, which provide a ballot-by-ballot cast vote record (CVR) report of the voting system results, so we can compare with the voting system down to the individual ballot. The older Dominion and ES&S systems do not provide that level of reporting even when they provide ballot images; although we can process the images to produce an overall tabulation, we can't compare on a ballot-by-ballot basis.

How many people are involved in doing an audit?

This can vary greatly depending on how "clean" the data is and how consistent contest names and options are among their different representations, such as in the CVR, printed on the ballot, and on BMD summary cards. We do, however, have a way to process nearly any set of data, even if it is not optimally prepared.
We need at least one auditor to oversee each audit, plus a number of workers who can help with the mapping and adjudication processes, to the extent those are required, and any number of observers. The amount of work is highly dependent on how clean the data is. If the margin of victory is large and we find a relatively small number of disagreements, we may not need to review them all to conclude that the result is consistent.
On the other hand, with a very close margin of victory, every disagreement will need to be reviewed. If there are many write-ins, this can also increase the amount of work involved, because some voters write in a candidate’s name even when the candidate is printed on the ballot, so analyzing the write-ins can find the last few votes.
With that said, we encourage each stage of the audit to be witnessed by a set of interested parties in an observers' panel, so they can have all their questions answered. Additionally, the process can be live-streamed to the public.

Is AuditEngine “open source”?

Although AuditEngine uses a great deal of open-source software and we endorse standardization, AuditEngine is not at this time fully open-source software. We believe the most important aspect is providing "open data" transparency, so that anyone can check the data at each stage of the process. Open-source software works best when the users of the software modules are programmers who can actively work to improve them, and when there is a benefit to sharing the code.
The audiences for AuditEngine are not programmers, and so providing open-source software would not help them verify the accuracy of the audit result.
Our philosophy is that it is more important that the data is open and that it can be checked at intermediate locations along the way.
Additionally, one of the reasons people use open source is so the software can be shared among different independent users. In this case, however, using the same software would defeat the purpose of having multiple independent auditors. We encourage campaigns and people with doubts to have skilled programmers independently analyze the ballot images for doubtful results.

Shouldn’t we examine the paper ballots, rather than scanned images?

Paper ballots have occasionally been changed in storage to create inaccurate results. Locks and seals are imperfect, and no state has standards for them. Few states even require multiple locks with keys held by different people. As a result, many people mistrust results from paper ballots after they have been in storage.
If hashes of the image files are made public, on paper and in a time-stamped archive, as soon as the image files are created, they prevent hidden changes to the image files at any later stage. There is no similar way to detect changes to paper ballots.
Ideally, election offices and observers (from parties, candidates, and the public) would check a random sample of images against the paper ballots right after scanning, before the ballots disappear into storage, to detect any flaws in creating the images. Ideally, they would also use multiple certified locks for paper ballots, such as UL-437, not all keyed by the same locksmith, and test how hard they are to bypass or defeat.
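One illustrative way to make such a sample verifiable (a suggestion, not a described AuditEngine procedure) is to seed the random selection with a value announced publicly, so anyone can reproduce the same sample:

    import hashlib
    import random

    # Seed the selection with a value announced in public (for example,
    # dice rolls at an open meeting) so anyone can reproduce the sample.
    public_seed = "2024-11-08 public dice rolls: 4 1 6 6 2 3 5 1 4 2"
    rng = random.Random(hashlib.sha256(public_seed.encode()).hexdigest())

    ballot_ids = [f"ballot_{i:06d}" for i in range(1, 50001)]  # hypothetical
    sample = rng.sample(ballot_ids, k=100)   # images to pull and compare
    print(sample[:5])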
Paper ballots can be trusted and used if they match the image files and if the image files have the same hashes that were made public when the image files were created.