United States Election Assistance Committee


Chapter 1: Introduction

This part of the VVSG, Testing Requirements, contains requirements applying to the conformity assessment to be conducted by test labs. It is intended primarily for use by test labs.

This part contains 5 chapters, organized as follows:

  • Chapter 2: an overview of the conformity assessment process and related requirements;
  • Chapter 3: overview of general testing approaches;
  • Chapter 4: requirements for documentation and design reviews; and
  • Chapter 5: requirements for different methods for testing.

NOTE: Requirements in Part 3 do not contain "Test Reference:" fields, as the testing reference is implied by the requirement and its context within Part 3.


Comment by Donna Mummery (Advocacy Group)

The testing should be to the same standard as used by the Secretary of State in California, which showed that most electronic voting machines already in use in California electorates were easily corrupted. It is impossible to have an electronic voting machine where software is counting votes be anything but easily corrupted. Whoever writes the software determines the election. There is no point in testing any electronic voting machine. They should not even be considered, much less tested for voters to use.

Comment by U.S. Public Policy Committee of the Association for Computing Machinery (USACM) (None)

Testing and Certification

Testing to specific requirements, while necessary, is only one of the steps needed to ensure that a voting system is worthy of certification. USACM recommends that the processes of testing and certification maximize the opportunities for independent review by qualified individuals prior to approval of any voting system. Review and testing by a range of qualified evaluators will increase the likelihood that systems will perform as needed. The transparency provided by such testing will strengthen trust in the voting system — something that is process dependent, not technology dependent.

When we think about testing requirements, we should consider the overall testing strategy and how it fits in with the voting process. We start with development of voting equipment (hardware and software) by a vendor who may or may not be trustworthy (see Appendix A for a definition). There are a few different ways to check the vendor's trustworthiness, including: process quality improvements — such as Capability Maturity Model Integration — as part of the certification process; the use of independent test labs (with a mix of testing techniques) for certifying software; holding the company to a higher liability standard; public or outside expert review of the software; or some combination of these and other methods (not all of which can be implemented through the VVSG).

If there are concerns about the feasibility, practicality, or expense of particular methods, adjustments should be made with emphasis on preserving the process of demonstrating the trustworthiness of the voting systems — that the underlying systems are worthy of certification. The testing requirements and processes should always be focused on ensuring accurate, reliable, accessible, secure, and verifiable elections. To the extent that logistical concerns become blocks to effective testing and/or certification, the burden should be on the voting systems to demonstrate (much as it is in the innovation class requirements) that they will not pose significant logistical burdens in the testing and voting processes.

Another important part of the testing process is to conduct tests that reflect the conditions voting systems will experience in the field, including tests for accessibility, usability, security, and reliability. Systems that pass tests in idealized laboratory conditions may not fare as well in field conditions. Vendors and testing personnel may be too close to voting systems to understand how accessible or usable they may be for the average voter or poll worker. If tests are restricted to lab conditions only, or are narrowly constrained to focus on the guidelines and nothing else, testing authorities and test labs risk the equivalent of teaching to the test — worried only about what is in the VVSG, regardless of the impact a flaw or error could have on elections. USACM recommends that voting systems be tested, and benchmarks met, in conditions most likely to be found in polling places.

Comment by U.S. Public Policy Committee of the Association for Computing Machinery (USACM) (None)

USACM Comment #23. Lack of Accessibility Testing Specification [incomplete] In our review of the testing specifications, we have found the accessibility and usability testing requirements either lacking or in need of development to ensure conformance with accessibility and usability standards. USACM recommends that the TGDC develop testing requirements and procedures for accessibility and usability for inclusion in the VVSG.

Comment by Cem Kaner (Academic)

VVSG does not consider the case in which a voting system passes certification testing but later use reveals flaws that, if found during testing, would have caused a failure of testing. VVSG should explicitly address this case, providing a process for decertification of the voting equipment and also a reconsideration of the qualification of the independent test lab that missed the problem.

A majority of respondents, in a poll of the IEEE SCC38 Voting Standards committee, supported the recommendation that: "a section be added to the VVSG that pertains to the decertification of voting systems. This section will provide details regarding the process that would occur if evidence of a flaw has been revealed in a certified voting product or component such that it should not have passed the ITA examination and/or would be in violation of the VVSG requirements. Upon receipt of this evidence by the EAC, the ITA that performed the original certification testing of this product or component shall be immediately contacted and required to perform additional testing with a results report issued to the EAC within 10 days. If the product or component fails the test, the EAC must immediately rescind the certification, and all entities (states, counties, municipalities) that have this product deployed for use must immediately be notified (at the vendor's expense) of the decertification. Certification can later be made to a new version of the product that has corrected the failure if it subsequently passes both component and integration testing by the ITA."

(Affiliation Note: IEEE representative to TGDC)

1.1 Changes from VVSG 2005 and Previous Versions of the Standards

1.1.1 Reorganization of testing-related material

Part 3, Testing Requirements, focuses on test methods and avoids repetition of requirements from Parts 1 and 2. By contrast, VVSG 2005's Volume II contained voting equipment-related requirements as well as testing information.

The hardware testing vs. software testing distinction is no longer a guiding principle in the organization of the Guidelines. Although different testing specialties are likely to be subcontracted to different laboratories, the prime contractor must report to the certification authority on the conformity of the system as a whole.


Comment by Frank Padilla (Voting System Test Laboratory)

Paragraph 2 states that "Although different testing specialties are likely to be subcontracted to different laboratories, the prime contractor must report to the certification authority on the conformity of the system as a whole." The EAC needs to ensure that this conforms with the VSTL manual and process.

1.1.2 Applicability to COTS and borderline COTS products

To clarify the treatment of components that are neither manufacturer-developed nor unmodified COTS, and to allow different levels of scrutiny to be applied depending on the sensitivity of the components being reviewed, new terminology has been introduced: application logic, border logic, configuration data, core logic, COTS (revised definition), hardwired logic, and third-party logic. Part 3: Table 1-1 describes the resulting categories.

Table 1-1 Levels of scrutiny

  Category                                              Scrutiny
  third-party logic, border logic, configuration data
  application logic                                     Coding standards
  core logic                                            Logic verification


COTS may be tested as a black-box (i.e., exempted from source code inspections). Whether it is exempted from specific tests depends on whether the certifications and scrutiny that it has previously received suffice for voting system certification purposes. This determination is made by the test lab and justified in the test plan as described in Requirement Part 2: 5.1-D.

Notably, the distinction between software, firmware, and hardwired logic does not impact the level of scrutiny that a component receives; nor are the requirements applying to application logic relaxed in any way if that logic is realized in firmware or hardwired logic instead of software.

By requiring "many different applications," the definition of COTS deliberately prevents any application logic from receiving a COTS designation.

Finally, the conformity assessment process has been modified to increase assurance that what is represented as unmodified COTS is in fact COTS (Part 3: "Unmodified COTS verification").



Comment by Howard Jow (Voter)

This is a general comment that applies to all future planned electronic voting. I believe that electronic voting machines should be certified to DO-178B Level A standards. These are the highest standards the FAA places on avionics used in flight. It requires extensive requirements review, independence, source code reviews, MCDC testing, etc. The requirements and code should be publicly available for scrutiny in case reviews did not catch all issues. Development of a voting machine to this level of assurance would be expensive; however, it would foster voter confidence in electronic voting machines.

Comment by Terrill Bouricius, FairVote: the Center for Voting and Democracy (Advocacy Group)

My concern is the broad definition of "voting systems" as it applies to COTS hardware and software, and the apparent narrowing of the COTS exemption. A voting system includes a tool used just for generating a report, or just for auditing, even if it is not used for any vote casting, tallying, etc. Then a town clerk just using Excel software or even a calculator to generate a report could meet the definition of "voting system." But the proposed COTS exemption for such tools would only exempt them at the discretion of the test labs, and it "depends on whether the certifications and scrutiny that it has previously received suffice for voting system certification purposes." Clearly that language was intended to cover voting machine manufacturers incorporating COTS into systems they would sell as packages. But this could also be interpreted to require testing labs to sign off on the use of a calculator by a town clerk, if that state's laws follow federal certification guidelines. I would propose that the COTS exemption be broadened to state that COTS used by election administrators (rather than voting machine vendors, or incorporated into full systems) receive a blanket exemption from federal testing. Thus if a jurisdiction (the end user) chooses to use COTS spreadsheet software to tally the final vote totals, or to use a COTS commercial scanner hooked to a COTS PC to scan all paper ballots to aid the auditing of the election, such use of COTS tools as a way of reporting, or even possibly of avoiding the need to purchase "voting systems" from voting machine vendors, should not be complicated by the implied need for federal testing or express testing lab permission (through a formal exemption).

1.1.3 New and revised inspections

Source code review for workmanship and security

In harmony with revisions to the requirements in Part 1: 6.4 "Workmanship", the source code review for workmanship now focuses on coding practices with a direct impact on integrity and transparency, and on adherence to published, credible coding conventions, in lieu of coding conventions embedded within the standard itself. A separate section has been added to focus on source code reviews for security controls, networking-related code, and code used in ballot activation.

Logic verification

This version of the VVSG adds logic verification to the testing campaign to achieve a higher level of assurance that the system will count votes correctly.

Traditionally, testing methods have been divided into black-box and white-box test design. Neither method has universal applicability; they are useful in the testing of different items.

Black-box testing is usually described as focusing on testing functional requirements, these requirements being defined in an explicit specification. It treats the item being tested as a "black-box," with no examination being made of the internal structure or workings of the item. Rather, the nature of black-box testing is to develop and utilize detailed scenarios, or test cases. These test cases include specific sets of input to be applied to the item being tested. The output produced by the given input is then compared to a previously defined set of expected results.
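As an illustration (not taken from the Guidelines), a black-box test case pairs a specific set of inputs with a previously defined set of expected results, exercising the item under test purely through its interface. The `tally` function below is a hypothetical stand-in for the item being tested:

```python
# Hypothetical stand-in for the item under test. A black-box test makes
# no examination of these internals; it only supplies inputs and
# compares outputs to expected results derived from the specification.
def tally(ballots):
    totals = {}
    for choice in ballots:
        totals[choice] = totals.get(choice, 0) + 1
    return totals

# Test case: specific inputs and the expected output, defined in advance.
ballots = ["Adams", "Burr", "Adams", "Adams", "Burr"]
expected = {"Adams": 3, "Burr": 2}
assert tally(ballots) == expected
```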

White-box testing (sometimes called clear-box or glass-box testing to suggest a more accurate metaphor) allows one to peek inside the "box," and focuses specifically on using knowledge of the internals of the item being tested to guide the testing procedure and the selection of test data. White-box testing can discover extra non-specified functions that black-box testing would not know to look for and can exercise data paths that would not have been exercised by a fixed test suite. Such extras can only be discovered by inspecting the internals.
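To make the contrast concrete, here is a minimal sketch (all names invented for illustration) in which reading the internals reveals a branch that a fixed, specification-only test suite might never exercise:

```python
# White-box sketch: inspecting the code shows three distinct paths, so
# test data is chosen to drive execution down each one. All function and
# value names here are hypothetical, not from any voting system.
def record_vote(selections, max_allowed=1):
    if len(selections) == 0:
        return "undervote"   # path 1: no selection made
    if len(selections) > max_allowed:
        return "overvote"    # path 2: only reachable with too many selections
    return "counted"         # path 3: valid ballot

# One test per discovered path.
assert record_vote([]) == "undervote"
assert record_vote(["A"]) == "counted"
assert record_vote(["A", "B"]) == "overvote"
```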

Complementary to any kind of operational testing is logic verification, in which it is shown that the logic of the system satisfies certain constraints. When it is impractical to test every case in which a failure might occur, logic verification can be used to show the correctness of the logic generally. However, verification is not a substitute for testing because there can be faults in a proof just as surely as there can be faults in a system. Used together, testing and verification can provide a high level of assurance that a system's logic is correct.

A commonly raised objection to logic verification is the observation that, in the general case, it is exceedingly difficult and often impractical to verify any nontrivial property of software. Voting system logic, however, need not be the general case. While these Guidelines try to avoid constraining the design, all voting system designs must preserve the ability to demonstrate that votes will be counted correctly. If a voting system is designed in such a way that it cannot be shown to count votes correctly, then that voting system does not satisfy Requirement Part 1: 6.1-B.
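One way to picture the difference between sampling test cases and showing correctness of the logic generally is to check a counting invariant over an entire, bounded input space rather than over a handful of chosen inputs. This sketch is purely illustrative; the Guidelines do not prescribe this technique, and the names and bounds are assumptions:

```python
# Exhaustively check a counting invariant over every possible ballot
# sequence up to a small length, instead of a sampled test suite.
from itertools import product

CANDIDATES = ["A", "B"]

def tally(ballots):
    totals = {c: 0 for c in CANDIDATES}
    for b in ballots:
        totals[b] += 1
    return totals

# Invariant: for every sequence of up to 3 ballots, the totals sum to
# the number of ballots cast, and no vote is lost or invented.
for n in range(4):
    for ballots in product(CANDIDATES, repeat=n):
        totals = tally(ballots)
        assert sum(totals.values()) == n
        assert all(totals[c] == ballots.count(c) for c in CANDIDATES)
```

For real systems the input space is far too large to enumerate, which is why logic verification works from constraints on the design rather than enumeration; the exhaustive check above only conveys the shape of the guarantee.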

1.1.4 New and revised test methods

End-to-End testing

The testing specified in [VSS2002] and [VVSG2005] is not required to be end-to-end but may bypass portions of the system that would be exercised during an actual election ([VVSG2005] II).

The use of test fixtures that bypass portions of the system may lower costs and/or increase convenience, but the validity of the resulting testing is difficult to defend. If a discrepancy arose between the results reported by test labs and those found in state acceptance tests, it would likely be attributable to this practice.

Language permitting the use of simulation devices to accelerate the testing process has been tightened to prohibit bypassing portions of the voting system that would be exercised in an actual election, with few exceptions (Part 3: 2.5.3 "Test fixtures"), and a volume test analogous to the California Volume Reliability Testing Protocol [CA06] has been specified (Requirement Part 3: 5.2.3-D).


Comment by Cem Kaner (Academic)

While much testing should certainly be done end-to-end, it is not necessary to run every test end-to-end. Independent testing imposes an enormous cost on the voting equipment. It is important to consider the cost-benefit considerations for individual tests. The more expensive we make each test, the smaller the set of tests we can rationally impose on the vendor. Allowing the public to obtain these systems and run their own tests will create an important additional source of end-to-end tests. Given public and researcher interest in these systems, such testing will probably be far more varied (and thus a richer sample) than can be achieved by the test lab anyway.

(Affiliation Note: IEEE representative to TGDC)

Reliability, accuracy, and probability of misfeed

Previous versions of these Guidelines specified a Probability Ratio Sequential Test [Wald47][Epstein55][MIL96] for assessment of reliability and accuracy. No test was specified for assessment of probability of misfeed, though it would have been analogous.

The Probability Ratio Sequential Tests for reliability and accuracy ran concurrent with the temperature and power variation test. There was no specified way to assess errors and failures observed during other portions of the test campaign.
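For context, a probability ratio sequential test in the style of [Wald47] accumulates a log likelihood ratio trial by trial and stops as soon as it crosses an acceptance or rejection boundary. The sketch below is illustrative only; the failure rates, risk levels, and function names are assumptions, not values drawn from the cited references or these Guidelines:

```python
# Illustrative sketch of a sequential probability ratio test for a
# failure rate. All parameter values are hypothetical.
import math

def sprt_step(log_lr, failed, p0=0.001, p1=0.01):
    """Update the log likelihood ratio after one trial.
    p0: acceptable failure probability (H0); p1: rejectable (H1)."""
    if failed:
        log_lr += math.log(p1 / p0)
    else:
        log_lr += math.log((1 - p1) / (1 - p0))
    return log_lr

def sprt_decision(log_lr, alpha=0.05, beta=0.05):
    """Return 'accept', 'reject', or 'continue' per Wald's boundaries."""
    upper = math.log((1 - beta) / alpha)  # crossing above: reject reliability
    lower = math.log(beta / (1 - alpha))  # crossing below: accept reliability
    if log_lr >= upper:
        return "reject"
    if log_lr <= lower:
        return "accept"
    return "continue"

# A long run of failure-free trials drives the statistic toward acceptance.
llr = 0.0
for _ in range(5000):
    llr = sprt_step(llr, failed=False)
print(sprt_decision(llr))  # prints "accept"
```

Because the test is sequential, its sample size is not fixed in advance, which is one reason it was run concurrently with other testing rather than as a standalone campaign.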

Reliability, accuracy, and probability of misfeed are now assessed using data collected through the course of the entire test campaign. This increases the amount of data available for assessment of conformity to these performance requirements without necessarily increasing the duration of testing.

Open-ended vulnerability testing

This version adds Open Ended Vulnerability Testing (OEVT) as a test method. OEVT is akin to penetration testing, conducted by a team of testers in an open-ended fashion not necessarily constrained by a test script. The goal of OEVT is to discover architecture, design, and implementation flaws in the system that may not be detected using systematic functional, reliability, and security testing and that could be exploited to change the outcome of an election, interfere with voters' ability to cast ballots or have their votes counted, or compromise the secrecy of the vote.

OEVT is generally not called out in "Test Reference:" fields; the assumption is that any requirement in the VVSG, and any aspect of voting system operations, is "fair game" for OEVT. In particular, OEVT should be useful for testing those requirements that call for source code inspection as a test method.


Comment by Cem Kaner (Academic)

The more common name for open-ended testing is exploratory testing. This is commonly done in the testing of any attribute of computer software. There is widespread belief among practitioners, including plenty of experience reports at practitioner conferences, that exploratory testing yields many more failures than running the same sequence of regression tests time and again. The need for skilled exploratory testing in the lab will be significantly reduced if public testing is made possible.

The inevitable arguments about whether a test lab's exploratory testing was sufficiently skilled and was run for a sufficient time will also be reduced by public testing, which will be a significant separate source of open-ended tests.

(Affiliation Note: IEEE representative to TGDC)