
Chapter 2

This document contains considerable new material as well as material expanded from previous versions of the voting standards. This section provides an introduction to and overview of the major features of the VVSG:

  • Organization of the VVSG, requirements structure, and classes;
  • Usability performance metrics;
  • Expanded human factors coverage;
  • Software Independence, Independent Voter-Verifiable Records voting systems, and the Innovation Class;
  • Open-ended vulnerability testing and expanded security coverage;
  • Treatment of COTS in voting system testing;
  • End-end testing for accuracy and reliability;
  • New metric for voting system reliability; and
  • Expanded core requirements coverage.

3 Comments

Comment by Marti Cockrell (General Public)

I am a concerned voter, and have no confidence in our voting system as it stands. I will not vote unless there is a paper ballot, because I have no way to know that the vote is recorded accurately, or that there can be a recount, if needed. I don't understand why we can't use the most reliable method of voting and counting votes and use paper ballots and hand count, with random mandatory audits, no matter how long it takes to be sure the outcome is as accurate as possible. If we don't trust the results of our elections, we don't have a democracy.

Comment by Brian V. Jarvis (Local Election Official)

In the 3rd to last bullet in Section 2.0, recommend changing "End-end" to "End-to-end" to be consistent with usage in Sections 2.3, 2.8, & 2.10.

Comment by Electronic Privacy Information Center (Advocacy Group)

The 2007 VVSG draft recommendations are a great improvement over the 1990, 2002, and 2005 versions of the document. The material in the 2007 VVSG draft is clearer and better organized. The presentation of the material in the 2007 draft should provide guidance on the foundational requirements of voting systems by promoting precision, reducing ambiguity, and eliminating repeated requirements. An appendix document that explains the purpose and function of VVSG topic areas might increase the usability of the document for secondary audiences.

2.1 The New Structure of the VVSG

The VVSG structure is markedly different from the structure of previous versions. First, the VVSG should be considered as a foundation for requirements for voting systems; it is a foundation that provides precision, reduces ambiguity, eliminates repeated requirements, and provides an avenue for orderly change, i.e., the addition of new types of voting devices or voting variations.

It was necessary to focus on providing this robust foundation for several reasons. First, previous versions suffered from ambiguity, which resulted in a less-robust testing effort. In essence, it has been more difficult to test voting systems when the requirements themselves are subject to multiple interpretations. This new version should go a long way towards reducing that ambiguity.

Second, there are simply more types of voting devices than previous versions anticipated, and new devices will continue to be marketed as time goes by. The VVSG provides a strong organizational foundation so that existing devices can be unambiguously described and development of new devices can proceed in an orderly, structured fashion.

1 Comment

Comment by ted selker (Academic)

In order to establish a foundation for future voting systems, this document will have to speak to issues of algorithms for alternative election styles like town meetings and Instant Run-off voting, as well as new, experimental kinds of voting such as internet, TV, and phone-based voting. The possibility of new kinds of voting in which information comes in new forms and along new channels will come to matter as well. Whereas the Innovation Class gives the illusion of inviting innovation in voting systems, it might serve to reduce it. All innovations described within it will carry all the constraints of the full VVSG and add their own. This is not an invitation to innovate; it might be a way to ghettoize it.

2.1.1 VVSG Standards Architecture

The VVSG has been reorganized to bring it in line with applicable standards practices of ISO, W3C and other standards-creating organizations. It contains three volumes or "Parts" for different types of requirements:

Part 1, Equipment Requirements, provides guidelines for manufacturers to produce voting systems that are secure, accurate, reliable, usable, accessible, and fit for their intended use. Requirements in VVSG 2005 that were ambiguous have been clarified. In those cases where no precise replacement could be determined and no testing value could be ascribed, requirements have been deleted.

Part 2, Documentation Requirements, is a new section containing documentation requirements separate from functional and performance requirements applying to the voting equipment itself. It contains requirements applying to the Technical Data Package, the Voting Equipment User Documentation, the Test Plan, the Test Report, the Public Information Package, and the data for voting software repositories.

Part 3, Testing, contains requirements that apply to the national certification testing to be conducted by non-governmental certified testing laboratories. It has been reorganized to focus on test methods and to avoid repetition of requirements from the product standard. Although different testing specialties are likely to be subcontracted to different laboratories, the prime contractor must report to the certifying authority on the conformity of the system as a whole.

The requirements in these Parts rely on delimitation and strict usage of certain terms, included in Appendix A, Definition of Words with Special Meanings. This covers terminology for standardization purposes that must be sufficiently precise and formal to avoid ambiguity in the interpretation and testing of the standard. Terms are defined to mean exactly what is intended in the requirements of the standard. Note: Readers may already be familiar with definitions for many of the words in this section, but the definitions here may differ in small or big ways from common usage because the words are used in special ways in the VVSG.

The VVSG also contains a table of requirement summaries, to be used as a quick reference for locating specific requirements within sections/subsections. Appendix B contains references and end notes.

1 Comment

Comment by Matt S. (General Public)

Part 3: non-governmental certified testing laboratories - please clarify/enforce that they are truly an unbiased third party and have not taken money from, are not held in any part by, and have not received any motivation from a lobbying group, the voting machine company, OR its competitors.

2.1.2 Voting System and Device Classes

Voting system and device classes are new to the VVSG. Classes in essence form profiles of voting systems and devices; they are used as fields in requirements to indicate the scope of those requirements. For example, Figure 2-1 shows the high-level device class called vote-capture device. Various requirements apply to vote-capture device, meaning that all vote-capture devices must satisfy those requirements (e.g., for security, usability, etc.). Other requirements apply more specifically to, say, IVVR vote-capture devices and the explicit devices beneath them, such as VVPAT. These devices inherit the requirements that apply to vote-capture device; that is, they must satisfy all the general vote-capture device requirements as well as the more specific requirements that apply to them.

In this way, new types of specific vote-capture devices can be added in the future: they must satisfy the general requirements that all vote-capture devices are expected to satisfy, while also satisfying the specific requirements that apply only to the new device. This structure makes it unambiguous to manufacturers and test labs which requirements apply to ALL vote-capture devices, for example, as opposed to which apply specifically to VVPAT. It also allows new or existing device requirements to be added or modified without affecting the rest of the standard.
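
A minimal sketch of this inheritance, modeled as Python classes, appears below. The requirement identifiers are hypothetical, not the VVSG's; only the class names follow Figure 2-1.

```python
# Illustrative only: device classes as Python classes, so a specific device
# type inherits every requirement scoped to its more general parent class.
# The requirement identifiers ("security-1", etc.) are invented placeholders.

class VoteCaptureDevice:
    # Requirements that apply to ALL vote-capture devices
    requirements = {"security-1", "usability-1"}

class IVVRVoteCaptureDevice(VoteCaptureDevice):
    # IVVR-specific requirements are added on top of the inherited ones
    requirements = VoteCaptureDevice.requirements | {"ivvr-1"}

class VVPAT(IVVRVoteCaptureDevice):
    # A VVPAT must satisfy general, IVVR-specific, and VVPAT-specific requirements
    requirements = IVVRVoteCaptureDevice.requirements | {"vvpat-1"}

# A future device type slots in without changing the rest of the hierarchy:
print(sorted(VVPAT.requirements))
# ['ivvr-1', 'security-1', 'usability-1', 'vvpat-1']
```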

Figure 2-1 Voting device class hierarchy


1 Comment

Comment by ted selker (Academic)

How is it that the only verification in the diagrams and writing is VVPAT when Democracy Systems has made the VoteGuard audio verification system commercially available and Hart InterCivic has demonstrated an audio verification system?

2.1.3 Requirements Structure

Requirements are now very specific to either a type of voting variation or a type of voting device (as stated in the previous section, the voting device can be a general profile of certain types of voting devices or be a profile of a more specific voting device). The requirements contain expanded description text and more precise language to make requirements explicit and to indicate the general test method to be used by the test lab for determining whether the requirement is satisfied in the voting system under test. As appropriate, the requirement also contains a reference to versions of the requirement in previous standards (e.g., VVSG 2005 or the 2002 VSS) so as to show its genesis and to better convey its purpose.
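
A single requirement record of the kind described above could be modeled roughly as follows. This is a sketch only: the field names and example values are hypothetical, not the VVSG's own schema (though requirement "Part 1: 3.2.4-C" is cited later in this chapter).

```python
# Illustrative sketch of the requirement structure described above.
# Field names and example values are hypothetical, not the VVSG's schema.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                 # identifier, e.g., "Part 1: 3.2.4-C"
    applies_to: str             # device class or voting variation in scope
    text: str                   # the normative statement itself
    discussion: str             # expanded description text
    test_method: str            # general test method for the test lab (Part 3)
    genesis: list[str] = field(default_factory=list)  # prior-standard references

plain_language = Requirement(
    req_id="Part 1: 3.2.4-C",
    applies_to="voting device",
    text="Use plain language when the voting system communicates with the voter.",
    discussion="Plain instructions make the system easier to understand.",
    test_method="usability inspection",
    genesis=["VVSG 2005", "2002 VSS"],
)
```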

0 Comments

No Comments

2.1.4 Strict Terminology

The terminology used in the VVSG has been considered carefully and is used strictly and consistently. In this way, requirements language can be made even more clear and unambiguous. Hypertext links are used throughout the VVSG for definitions of terminology to reinforce the importance of understanding and using the terminology in the same way.

However, it is important to understand that the terminology used in the VVSG is specific to the VVSG. An effort has been made to ensure that the terms used in the VVSG mean essentially the same thing as in other contexts; however, at times the definitions in the VVSG may vary in big or small ways.

Figure 2-2 illustrates the relationships and interaction between requirements, device classes, and types of testing from Part 3, all in the framework of strictly used terminology.

Figure 2-2 Interaction between requirements, definitions, and parts of the VVSG


0 Comments

No Comments

2.2 Usability Performance Requirements

Usability is conventionally defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" [ISO98a]. In VVSG 2005, the usability guidelines relied on three assessment methods:

  1. Checking for the presence of certain design features which are believed to support usability, and for the absence of harmful design features;
  2. Checking for the presence of certain functional capabilities which are believed to support usability; and
  3. Requiring manufacturers to perform summative usability testing with certain classes of subjects and to report the results. However, the VVSG 2005 reporting requirements do not specify the details of how the test is designed and conducted.

While all of these methods help to promote usability, methods 1 and 2 are indirect, and even under method 3 the actual "effectiveness, efficiency and satisfaction" of voting systems are never evaluated directly.

This version of the VVSG uses a new method based on summative usability testing that directly addresses usability itself, measured mainly by how accurately voters can cast their ballot choices. The features of this new method include:

  • The definition of a standard testing protocol, including a test ballot, set of tasks to be performed, and demographic characteristics of the test participants. The protocol supports the test procedure as a repeatable controlled experiment.
  • The use of a substantial number of human subjects attempting to perform those typical voting tasks on the systems being tested, in order to achieve statistically significant results.
  • The gathering of detailed data on the subjects' task performance, including data on accuracy, speed, and confidence.
  • The precise definition of the usability metrics to be derived from the experimental data.
  • The definition of effectiveness benchmarks against which systems will be evaluated.

Obviously, the implementation of such complex tests is more difficult than simply checking design features. However, performance-based testing using human subjects yields the most meaningful measurement of usability because it is based on their interaction with the system's voter interface, whereas design guidelines, while useful, cannot be relied upon to discover all the potential problems that may arise. The inclusion of requirements for performance testing in these Guidelines advances the goal of providing the voter with a voting system that is accurate, efficient, and easy to use.
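
As an illustration of the usability metrics and benchmarks mentioned above, the sketch below derives a task-accuracy rate and a 95% confidence interval from summative test data. It is purely illustrative; the VVSG defines its own metrics and benchmarks, and the counts here are invented.

```python
# Illustrative only: one simple effectiveness measure of the kind derived
# from summative usability test data -- proportion of voting tasks performed
# correctly, with a Wilson 95% confidence interval.

import math

def accuracy_with_ci(correct: int, attempts: int, z: float = 1.96):
    """Return task accuracy and its Wilson score confidence interval."""
    p = correct / attempts
    denom = 1 + z**2 / attempts
    center = (p + z**2 / (2 * attempts)) / denom
    half = (z * math.sqrt(p * (1 - p) / attempts
                          + z**2 / (4 * attempts**2))) / denom
    return p, (center - half, center + half)

# Example: 475 of 500 ballot-marking tasks completed correctly across subjects.
p, (lo, hi) = accuracy_with_ci(475, 500)
print(f"accuracy={p:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

A benchmark comparison would then check whether the lower bound of such an interval clears the defined effectiveness threshold.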

2 Comments

Comment by Electronic Privacy Information Center (Advocacy Group)

We strongly support the language and objectives of the Usability Performance Requirements outlined in this section and the establishment of "summative usability testing" as the standard for voting system usability testing. Usability of ballot design was cited as a contributing factor in 2000 and 2006, when election margins of victory fell within the margin of error. It is important, as this section states, to test ballot usability based on the "tasks to be performed, and demographic characteristics of the test participants." We would further add that the typical voting population of the jurisdiction to be served by the voting system or device should be reflected in summative usability testing. Definitions of what usability means in voting environments should be fully investigated and appropriate measures developed for those systems that seek adoption under this standard. This section promotes voter privacy and ballot secrecy by establishing a goal of eliminating constraints to voter intent that may be present in the ballot design. The section also raises the bar on the rigorous testing of ballot design to serve the particular needs of a jurisdiction's voting population.

Comment by Cem Kaner (Academic)

The creation of a testing protocol so standardized that even the test ballot is fully specified runs the risk of design optimization by the vendor in ways that pass the test easily without improving the usability of the system as a whole. A test ballot is only a tiny sample of the population of possible ballots, and it stops being a representative sample when application designers consider it specifically in their designs and test designers know that they will be required to use this ballot in their tests of each version of each voting device they test. Metrics based on such a test are likely to show apparent improvement over time, as systems are optimized toward better performance on this particular test. This improvement may or may not correlate with the underlying level of usability. Usability testing should include both tests against a standard protocol and tests against a larger set of randomly varied ballots. In addition, usability testing should consider ballot designs that the voting equipment enables that are not optimal. In practice, weak design by voting officials occurs (the Florida butterfly ballot is an example; for a non-Florida example, consider this one from California: http://voxexmachina.wordpress.com/2008/02/06/call-shenanigans-on-los-angeles-county/). Some designs are semantically confusing, which is outside the scope of evaluating a system that merely presents what is written. But to the extent that a ballot is confusing partially because of its content but partially because of the equipment design, that is a usability issue worth understanding. Usability evaluators should be asking, from system to system, what are the particular details of this system that might make user confusion more likely, and how serious are those risks? The results of usability testing should be public, and compared from tested version to tested version of the voting device. Trends in these results over time should be carefully considered. (Affiliation Note: IEEE representative to TGDC)

2.3 Expanded Usability and Accessibility Coverage

In addition to usability performance metrics, the treatment of human factors, i.e., usability, accessibility, and privacy, has been expanded considerably. Table 2-1 summarizes the new and expanded material.

Table 2-1 Expanded human factors coverage

Voter-Editable Ballot Device: The VVSG defines a new class of voting station, the Voter-Editable Ballot Device (VEBD). These are voting systems, such as DREs and EBMs, that present voters with an editable ballot (as opposed to manually-marked paper ballots), allowing them to easily change their choices prior to final casting of the ballot. See Part 1: 2.5 and Part 1: 3.1.2.

Ballot Checking and Correction: Requirements for both interactive and optical-scan based ballot checking and correction (so-called "voter's choice" issues). There is also a new requirement for detection and reporting of marginal marks. See Part 1: 3.2.2.

Notification of Ballot Casting: Requirements to notify the voter whether the ballot has been cast successfully. See Requirements Part 1: 3.2.2.1-F and Part 1: 3.2.2.2-F.

Plain Language: Requirements for the use of plain language when the voting system communicates with the voter. The goal is to make the instructions for using the system easier to understand and thus improve usability. See Requirement Part 1: 3.2.4-C.

Icons and Language: New requirement that instructions cannot rely on icons alone; they must also include linguistic labels. See Requirement Part 1: 3.2.4-G.

Adjustability: Clarifies that when the voter can control or adjust some aspect of the voting station, the adjustment can be made throughout the voting session. See Requirement Part 1: 3.2.5-B.

Choice of Font and Contrast: Requirements for the availability of a choice of font size and contrast on VEBDs. See Requirements Part 1: 3.2.5-E and Part 1: 3.2.5-H.

Legibility: Legibility for voters with poor reading vision has been strengthened from a recommendation to a requirement. See Requirement Part 1: 3.2.5-G.

Timing: Requirements on timing for interactive systems. Addresses the system's response time to the user (no undue delay) and mandates that systems issue a warning after lengthy user inactivity. See Section Part 1: 3.2.6.1.

Alternative Languages: This entire section has been expanded and clarified. See Section Part 1: 3.2.7.

Poll Workers: Addresses usability for poll workers as well as for voters. Manufacturers are required to perform usability testing of system setup, operation, and shutdown. System safety is also addressed. See Section Part 1: 3.2.8.

End-to-End Accessibility: New requirement to ensure accessibility throughout the entire voting session. See Requirement Part 1: 3.3.1-A.

Accessibility of Paper Records: Requirements address the need for accessibility when the system uses paper records as the ballot or for verification. In particular, an audio readback mechanism is required to ensure accessibility for voters with vision problems. See Requirement Part 1: 3.3.1-E.

Color Adjustment: Consolidated and clarified material on color adjustment of the voting station. See Requirement Part 1: 3.3.2-B.

Synchronized Audio and Video: Clarifies the availability of synchronized audio and video for the accessible voting station. The voter can choose any of three modes: audio-only, visual-only, or synchronized audio/video. See Requirement Part 1: 3.3.2-D.

3 Comments

Comment by E Smith/P Terwilliger (Manufacturer)

Why is an MMPB deemed uneditable? Voters edit (erase) them all the time.

Comment by U.S. Public Policy Committee of the Association for Computing Machinery (USACM) (None)

Accuracy and Usability: A critical factor in election accuracy is accurately capturing a voter's intent. The voting process always starts with the voter's intent, which must be converted to a selection on the machine through the user interface. Therefore, designing usable interfaces by building upon a body of best practices and knowledge for interface design is a critical first step toward accuracy in elections. The VVSG should reflect the above principle. We present some specific comments below to help clarify and increase focus on the importance of usability in its effect on accuracy.

Section 3.2 of Part 1 of the draft VVSG cites the basic usability standards of HAVA. These requirements define important and fundamental functional capabilities of the voting system, but are incomplete in that they do not specify any goal or mechanism to achieve usability in the initial casting of the ballot. By omitting usability requirements for the primary vote-casting activity, an incorrect impression is given that this is not a point of emphasis. While the standards themselves should reflect the goal of designing usable interfaces to capture voter intent, the law should as well. If the EAC puts forward amendments to the law in the future, we suggest that it address this gap. Specifically, we would recommend the following new clause be added to HAVA: "i. Have a vote-casting mechanism presented, following best practices for user interface design, to enhance the ability of voters to accurately make selections that represent their intent. The design approaches for reaching this goal may include such basic principles as consistency, visibility, feedback, mapping between selections and candidates, and clear visual design."

Accessibility and Usability: While these terms are used separately within the VVSG, accessibility and usability have the same goal: making the voting experience and the voting equipment as easy as possible for the voter (and, in the case of setup, shutdown, and auditing, the poll worker) to use. While it is important to make sure that those with disabilities are able to vote with privacy and the other election guarantees provided to all voters, it is a mistake to restrict accessibility and usability concerns to only those with disabilities. Limiting accessibility features to machines specifically designated for users with disabilities may limit the ability of other voters to benefit from technologies and innovations that could improve their voting experience. Similarly, by restricting features to a limited number of machines, costs for those machines will be greater. They will likely be used more often, and reach their mean time to failure faster than other machines. To the extent feasible, we recommend that accessibility features be included with as many voting machines as possible and practical.

Assistive devices that must be connected to voting systems may raise security concerns. Specifically, devices that must interface with the voting system software may introduce viruses, or the interaction of two disparate systems may prompt unintentional problems with the voting system. We recommend that jurisdictions provide common assistive devices that can be connected to voting systems via industry-standard interfaces (such as USB). This would allow for testing of the interface as part of the certification process. Other assistive devices are either external to the voting system or connect through some mechanism that does not require a software interface (such as the audio devices currently available with some voting systems); voters who need such devices should be allowed to bring them with them to vote.

Comment by Electronic Privacy Information Center (Advocacy Group)

We are in strong support of the expanded section on Usability and Accessibility Coverage.

2.4 Software Independence

Software independence [Rivest06] means that an undetected error or fault in the voting system’s software is not capable of causing an undetectable change in election results. All voting systems must be software independent in order to conform to the VVSG.

There are essentially two issues behind the concept of software independence, one being that it must be possible to audit voting systems to verify that ballots are being recorded correctly, and the second being that testing software is so difficult that audits of voting system correctness cannot rely on the software itself being correct. Therefore, voting systems must be ‘software independent’ so that the audits do not have to trust that the voting system’s software is correct; the voting system must provide proof that the ballots have been recorded correctly, e.g., voting records must be produced in ways in which their accuracy does not rely on the correctness of the voting system’s software.
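
As a toy illustration of this auditing idea (the candidate names and counts are invented, not an example from the VVSG): because voters verify the independent records and auditors count them by hand, a mismatch with the machine totals exposes a fault without placing any trust in the software itself.

```python
# Illustrative sketch: a hand-count audit that checks electronic tallies
# against independently verified paper records, so the audit's conclusion
# does not depend on the voting software being correct.

from collections import Counter

def audit_precinct(electronic_tallies: dict[str, int],
                   hand_counted_ballots: list[str]) -> list[str]:
    """Compare machine-reported totals with a hand count of the paper records."""
    hand_tallies = Counter(hand_counted_ballots)
    discrepancies = []
    for candidate in sorted(set(electronic_tallies) | set(hand_tallies)):
        machine = electronic_tallies.get(candidate, 0)
        paper = hand_tallies.get(candidate, 0)
        if machine != paper:
            discrepancies.append(f"{candidate}: machine={machine}, paper={paper}")
    return discrepancies

# Example: an undetected software fault shifts one vote; the paper trail reveals it.
machine = {"Candidate A": 101, "Candidate B": 99}
paper = ["Candidate A"] * 100 + ["Candidate B"] * 100
print(audit_precinct(machine, paper))
# ['Candidate A: machine=101, paper=100', 'Candidate B: machine=99, paper=100']
```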

This is a major change from previous versions of the VVSG, because previous versions permitted voting systems that are software dependent, that is, voting systems whose audits must rely on the correctness of the software. One example of a software dependent voting system is the DRE, which is now non-conformant to this version of the VVSG.

13 Comments

Comment by Lawrence Wiencke (Academic)

Require that software be open source. Transparency provides public confidence, and any programmer can help find and fix bugs.

Comment by Gabriel Sorrel (General Public)

It is very good that these guidelines work from the position that software can never be completely trusted. Having source code open is helpful, but even that assumes the code is never changed after being checked. I would amend this section to say that testing cannot rely on the correctness of ANY software; we should not assume that, because one computer sends a vote to another, either computer's records can be trusted. There must be a physical representation of the vote that can be counted by hand.

Comment by Charles A. Gaston (Manufacturer)

A requirement for "Software Independence" is based on the unwarranted assumption that people always are more trustworthy and reliable than software. Because this is patently untrue, I submit this substitute for requirement 2.4:

"2.4 Election Official Independence. Election official independence means that an undetected error or fault among the voting system's election officials is not capable of causing an undetectable change in election results. All voting systems must be election official independent.

There are essentially two issues behind the concept of election official independence, one being that it must be possible to monitor election officials on election day to verify that procedures are being followed correctly, and the second being that monitoring election official behavior before and after elections is so difficult that assurance of voting system correctness cannot rely on the election officials themselves being correct. Therefore, voting systems must be 'election official independent' so that the voters do not have to trust that the election officials' behavior is correct; the voting system must provide proof that the ballots have been recorded correctly, e.g., voting records must be produced in ways in which their accuracy does not rely on the correctness of the election officials' actions.

This is a major paradigm change from the VVSG, because this and the previous version permitted voting systems that are election official dependent, that is, voting systems whose results must rely on the correctness of the election officials. Examples of election-official-dependent voting systems include every voting system that involves 'trusted' election officials (or vendors) with keys, combinations or other forms of unique access to voting hardware, software, ballots or results." [In other words, every voting system except SAVIOC.]

Comment by ted selker (Academic)

"Software independence means that an undetected error of fault in the software is not capable of causing an undetectable change in election results." Should be changed to: "Single-agent independence means that an undetected error of fault in any part of the voting system is not capable of causing an undetectable change in election results."

Comment by Jason Good (General Public)

The Rivest definition of software independence is narrow in at least two ways. First, it only considers effects that change the results of the election. A broader definition of software independence would require that the voting system still meet all of its requirements in the face of software malfunction. For example, the implementation of a VVPAT with strong audit controls can still allow a software breach of voter privacy. Second, the Rivest definition requires that result-changing software malfunctions only be detectable, not that they be detected. These guidelines largely ignore Rivest's own warning that his version of software independence can not be implemented simply by providing a VVPAT. Effective procedures for auditing are also required. These guidelines do not define or require the appropriate level of auditing. Consequently, exaggerated claims in these guidelines about the achievement of software independence and the benefits of VVPAT should be adjusted.

Comment by David Beirne (Manufacturer)

The notion of "software independence" as defined is a misnomer. The requirement for a software independent voting system implies that the election results are only verifiable upon the conclusion of a post-election audit. The logic incorporated into this section requires us to accept that no software can be relied upon for accuracy of election results. In turn, this requirement of software independence assumes that a post-election audit is required and is the only means to verify the accuracy of an election. In fact, it can be argued that the use of voter verifiable paper records attached to a DRE voting unit or optical-scan paper ballots are only software independent if a post election manual audit is incorporated. The intent is laudable, but fails in its logic unless there is an additional pursuit for developing assurance tools to verify the use of software for reporting of election results. Assurance tools can be developed for both software dependent and software independent systems and so the overall recommendation would be for the continuing use of two classifications of voting systems, both software independent and software dependent systems as included within the 2005 VVSG.

Comment by Audrey N. Glickman (Advocacy Group)

Thank you for this. This paragraph recognizes something that advocates have been saying for three years and longer: that the purpose of a computer in voting is not to act as our ballot in any way, but to provide (as does any computer in any circumstance) a quick, accurate method of tallying, and a streamlined interface between the voter and the ballot: no more, no less. We note, however, that there have been DREs which, as a part of a larger system, would conform to the provisions of this paragraph, contrary to the last paragraph of the opening of paragraph 2.4.

Comment by E Smith/P Terwilliger (Manufacturer)

This states that a DRE is non-conformant to the standards. If that is so, why are there 134 instances of the term DRE throughout the document, attached to various requirements? Confusing, at best.

Comment by U.S. Public Policy Committee of the Association for Computing Machinery (USACM) (None)

Software Independence: We have mentioned our support for the principle of software independence described in the VVSG. We include with our comments the letter [1] we sent to the then-Chairman of the TGDC, Dr. William Jeffrey, expressing our support for software independence and other recommendations made to the TGDC. Given the shortfalls of security testing, it is our long-standing belief that voting systems should also enable each voter to inspect a physical (e.g., paper) record to verify that his or her vote has been accurately cast and to serve as an independent check on the result produced and stored by the system. We are pleased that the TGDC recommends that voting systems must have an independent way of verifying a voter's selections.

An important part of ensuring a software independent system is developing both an effective test and an effective definition for determining software independence. We find both lacking in this version of the VVSG. We recommend that you define software independence as meaning that an error or fault in the voting system's software is not capable of causing an undetectable change in election results. This will help provide state and local elections officials, as well as vendors, with the knowledge they need to help ensure that their systems are software independent. Without a specific test or a more specific definition, other groups will object to the principle on the grounds that the concept is too vague and indistinct to be effectively implemented. Given that many states currently do not conduct effective post-election audits, there is a need for software independence, together with clear guidance as to what makes a voting system software independent.

We recommend you include in the VVSG a process akin to the hypothetical example we outline in Appendix B, a process that demonstrates both the production of Independent Voter Verifiable Records and Software Independence. http://usacm.acm.org/usacm/PDF/USACMCommentsSTSPaper.pdf

Comment by David B. Aragon (Voter)

The definition of software independence correctly quotes Rivest's meaning, but encourages the misconception that the only flaws of concern are those affecting accuracy -- or even more narrowly, those that change the outcome of an election. Voters have civil rights, privacy rights etc. independent of whether their vote changes the election result. Language should be added to this section stating clearly whether the software independence requirement does or does not apply to system requirements in areas other than accuracy.

Comment by Rebecca Mercuri PhD (Advocacy Group)

The definition of software independence proposed by MIT’s Ron Rivest and NIST’s John Wack allows for computational-based cryptographic systems that do not necessarily include voter verified paper ballots to be certified for use in elections. This provision for the introduction of cryptographic solutions is also evident in the use of the incorrect phrase "voter verifiable" rather than the appropriate term "voter verified" throughout the draft. A "verifiable" ballot can never actually represent the true intention of a voter. Only when a ballot has been "verified" via independent examination and a deliberate casting action, can it contain a legitimate record of the voter’s choices. Cryptographic ballots cannot satisfy these constraints. Nor can a voting system that includes software in any stage, ever be considered "software independent" since it is always vulnerable to a whole host of unresolvable software-related issues, including malware and denial of service attacks, as well as unintentional misprogramming, all of which can alter the outcome of an election (although not necessarily within the Rivest/Wack constraints).

Comment by Electronic Privacy Information Center (Advocacy Group)

We are in agreement with the goals of "software independence." This provision of the standards goes to the heart of the challenges to security, reliability, and accuracy of electronic voting systems. Further, this definition is not closed because it may encompass other types of electronic voting system designs as well as ballot marking configurations so long as they meet the standard of "software independence." This may allow the development and testing of future generations of voting systems under a uniform standard.

Comment by David B. Aragon (Voter)

The "software independence" framework is a considerable improvement over previous requirements based on specific designs. The value of the performance-based logical framework is shown by the change in the treatment of DRE since the 2005 VVSG. There is however a danger that generalizing the requirements will weaken them, i.e. that the framework will allow new solutions to omit qualities taken for granted in existing solutions. This is particularly a concern with regard to transparency, because most uses of the word "transparency" in this draft VVSG refer to software source code, which –- as advocates of software independence appreciate -– is not transparent to anyone not skilled in the art of software development. I have made a specific comment under 2.4.2, but also ask that throughout VVSG, TGDC and EAC frankly and explicitly consider the goal to be transparency and that software independence is one means, but not a substitute, for achieving transparency.

2.4.1 Independent voter-verifiable records

The VVSG requires that, to be software independent, all voting systems include an IVVR vote-capture device, that is, a vote-capture device that uses independent voter-verifiable records (IVVR). IVVR can be audited independently of the voting system software but need not be paper-based. IVVR relies on voter verification: the voter must verify that the electronic record is being captured correctly by examining a copy that is maintained independently of the voting system's software, i.e., the IVVR.

Voter-verifiable paper records (VVPR) are a form of IVVR that is paper-based. Currently, the voting systems that can satisfy the definition of software independence use VVPR, such as manually-marked paper ballots counted by optical scanners, or DREs equipped with VVPAT.

Figure 2-3 illustrates this in a tree-like structure. At the top of the tree is software independence; as stated previously all voting systems that are conformant to the VVSG must be software independent. One route to achieving software independence is to use IVVR. The VVSG contains requirements for IVVR, of which VVPR is one (currently the only) type. If different types of IVVR are developed that do not use paper, systems that use them can also be conformant to the VVSG "as is." In other words, new types of IVVR that do not use paper are already "covered" by the IVVR requirements in the VVSG; new requirements will not necessarily need to be added.

Figure 2-3 Voting systems that can conform to current requirements in the VVSG


7 Comments

Comment by ted selker (Academic)

"Manually marked paper ballots" should be changed to "manually marked ballots." This gives the potential for broader coverage.

Comment by David Beirne, Executive Director, Election Technology Council (Manufacturer)

Although the IVVR does not have to be paper-based, current technology in the marketplace is all paper-based; therefore, this section is too prescriptive. Maintaining a dual process for software independent and software dependent systems will provide greater flexibility within the marketplace and will permit the users to determine the best platform for their needs. Consider the following phrase: "IVVR relies on voter-verification, that is, the voter must verify that the electronic record is being captured correctly by examining a copy that is maintained independently of the voting system's software." This description is incorrect. There is no way for a voter to verify, through the use of a software independent record, the performance of the electronic record being generated. The IVVR in this instance is serving as its own auditing record, but its existence alone provides no level of assurance on the performance of the software. The performance of the software will continue to rely upon the use of software assurance tools.

Comment by Electronic Privacy Information Center (Advocacy Group)

We are in support of this section because it establishes a standard that supports options that may produce physical ballots or audit records allowing for voter verification. A key provision of privacy protection is that the creator of a record not be forced to disclose personally identifiable information to others. For this reason, voting systems should incorporate features that will optimize privacy for the broadest range of voters. Usability and accessibility of IVVRs and CVRs SHALL facilitate the voters' option for final review.

Comment by Rebecca Mercuri PhD (Advocacy Group)

The proper phrase is "voter verified" (not "verifiable").

Comment by Brian Newby (Local Election Official)

Recommend this section either be deleted or expanded to allow for several other methods of electronic voter verifiable records. Many states have decertified equipment that included VVPATs. There is no value in requiring this feature if states can de-certify equipment because of perceived drawbacks in VVPATs. This requirement only increases costs to systems that may or may not still be certified by states, and most if not ALL increased costs (including EAC certification fees) are eventually passed on to taxpayers.

Comment by Cem Kaner (Academic)

An IVVR is not a valid record for this purpose unless it is (a) immutable, and (b) sufficiently durable to be available for audits, reviews of audits, and as raw data for reasonably timely election-related research. For example, an IVVR could be stored electronically if it was stored on a read-only disk, but not on a read-write medium because there is no assurance that the record approved by the voter matches the record stored. Storage via printing on thermal tape probably meets immutability, but not durability. (Affiliation Note: IEEE representative to TGDC)

Comment by Cem Kaner (Academic)

When a voter is presented with an IVVR, the record presented must be read off the storage device, so that the voter confirms that what is in storage matches the voter's expectations. Optical scanners do not do this. Instead, the voter is left to trust that the scanner has read the ballot correctly and stored it accurately.

IVVR might be necessary for software independence but it is not sufficient. Some (probably small) percentage of voters will actually check the accuracy of their individual single records. The election consists of a large number of records that must be stored safely, retrieved completely and accurately, and then read and tabulated accurately, with accurate transmission of the tabulated results and accurate integration of those results into larger pools of results. (Affiliation Note: IEEE representative to TGDC)

2.4.2 The Innovation Class

Use of IVVR is currently the only method specified by requirements in the VVSG for achieving software independence. Manufacturers that produce systems that do not use IVVR must use the Innovation Class as a way of proving and testing conformance to the VVSG. The Innovation Class exists to ensure a path to conformance for new and innovative voting systems that meet the requirement of software independence but for which requirements may not yet exist in the VVSG. Technologies in the Innovation Class must be different enough from other technologies permitted by the VVSG to justify their submission, and they must meet the relevant requirements of the VVSG as well as further the general goals of holding fair, accurate, transparent, secure, accessible, timely, and verifiable elections.

A review panel process, separate from the VVSG conformance process, will review innovation class submissions and make recommendations as to their eventual conformance to the VVSG.

6 Comments

Comment by David Beirne, Executive Director, Election Technology Council (Manufacturer)

"Technologies in the innovation class must meet the relevant requirements of the VVSG as well as further the general goals of holding fair, accurate, transparent, secure, accessible, timely, and verifiable elections." The context of the word "timely" needs to be clarified or stricken. It is impossible for the reader to know if this is a reference to timely tabulation or timely processing during the actual voting process.

Comment by David Beirne, Executive Director, Election Technology Council (Manufacturer)

This review panel process needs to be clearly defined or simply include current EAC models with the use of the TGDC, Standards Board, and Board of Advisors. Since the EAC does not recognize itself as a rule-making agency, no variant procedure should be codified with the inclusion of a separate review panel unless specifically authorized under the Help America Vote Act.

Comment by U.S. Public Policy Committee of the Association for Computing Machinery (USACM) (None)

Innovation Class: USACM supports the concept of the innovation class. However, we note that there has been a substantial amount of confusion about the scope of the innovation class, and the application and process associated with being certified under the innovation class. There is some question as to whether software dependent machines could be certified under the innovation class and whether the class could be applied to other voting devices not strictly related to achieving software independence. We recommend that the VVSG maintain a consistent strategy of only sanctioning voting systems in which a software fault cannot cause an undetectable error in election results, whether the system is evaluated under the well-defined software standard or the more progressive innovation standard. Put another way, innovation class systems should adhere to the software independence requirement.

Regarding whether the class applies to a broader array of voting devices, our understanding is that the innovation class would only be focused on specific devices meant to achieve software independence. If it is the TGDC's and the EAC's contention that the innovation class is broader than that, it should clarify the application of the innovation class and detail the process involved in being certified under it.

We also are concerned that the current testing process for the innovation class is vague. Without a more definitive testing procedure this process runs the risk of being an end-run around the standards. We recommend that the VVSG include a specific test or several kinds of tests to demonstrate that the innovation class submission can produce secure, accessible, reliable, and verifiable election results equal to or better than voting systems that can be tested to the VVSG without the innovation class. In addition to describing these tests, there must also be some description of the experts and process involved in judging two things: whether the device or system in question must apply for certification through the innovation class, and whether that device or system should be certified. To simply refer to a review panel process is insufficient.

Comment by Electronic Privacy Information Center (Advocacy Group)

The innovation class can be an important support for advancing voting system design as outlined by the underlying standards document. It may also increase the options for voters with a wide range of abilities or disabilities to effectively exercise their right to a secret ballot and voter privacy. However, the innovation class may also present a temptation to define voting systems or components under this designation should the path to approval appear to be less rigorous. There should be a separate and thorough investigation of the standard for the innovation class, which should include the definition of hardware and software components or voting systems that might be considered. Treatment of upgrades in firmware and software must be addressed under a unique standard topic area.

Comment by David B. Aragon (Voter)

ISSUE: This requirement sounds good but lacks support from the rest of the VVSG. For example, this is almost the only use of the term "transparency" in the VVSG (outside of the context of software source code, to which this section on software independence presumably does not refer.) The danger is that the high-level goal stated here is subject to so much interpretation that innovators will expect to negotiate it, rather than treating it as a requirement. Even a few specifics here would help protect the "review panel process" from becoming an open back door. PROPOSED SOLUTION: At least, add at end of 2.4.2: "For example, they must support hand-auditing without technological assistance, corresponding to what is required for IVVR under Part I: 4.4.1-A.3."

Comment by Rebecca Mercuri PhD (Advocacy Group)

The concept of an "innovation class" allows for a dangerous fast-track circumvention of the certification process. If a construct is truly innovative, the existing guidelines will not be able to appropriately address it, hence the resulting certification may be flawed or the implementation of the new design may necessarily be impeded by a lack of understanding as to how to properly perform certification. What this indicates is that the guidelines are flawed and need to be rewritten, NOT that a special class that avoids the testing should be allowed. This is because the 2007 draft VVSG (like its predecessors) masquerades as a functional standard, while actually continuing to be predisposed to existing designs. Even the TGDC’s description of the innovation class makes design assumptions, such as its limiting "expect[ation that] most technologies in this class [will] be based on multiple mutually auditing components."

2.5 Open-Ended Vulnerability Testing

The goal of open-ended vulnerability testing (OEVT) is to discover architecture, design, and implementation flaws in the system which may not be detected using systematic functional, reliability, and security testing and which may be exploited to change the outcome of an election, interfere with voters' ability to cast ballots or have their votes counted during an election, or compromise the secrecy of the vote. The goal of OEVT also includes attempts to discover logic bombs, time bombs, or other Trojan horses that may have been introduced into the system hardware, firmware, or software for said purposes. OEVT relies heavily on the experience and expertise of OEVT team members, their knowledge of the system, its component devices and associated vulnerabilities, and the team's ability to exploit those vulnerabilities.

6 Comments

Comment by Brian V. Jarvis (Local Election Official)

Recommend that requirements also be established mandating fully documented verification points during development of the voting system to ensure that work products (at each phase of development) properly reflect the requirements specified for them (rather than waiting until deployment testing to discover a design or implementation flaw or vulnerability).

Comment by David Beirne, Executive Director, Election Technology Council (Manufacturer)

The use of OEVT is laudable from the standpoint of attempting to increase security levels, but multiple problems exist for a voting system provider to design a system to meet security thresholds in which the thresholds are undefined and subjective. This section is too ambiguous. Although assurances have been given that the OEVT will not have "fail" authority, this does appear to be the case especially in light of the discussion in Part III, Section 5.

Comment by Rebecca Mercuri PhD (Advocacy Group)

Safety is not assured via the open-ended testing, since the VVSG provides no method whereby later-detected flaws initiate reexamination. Perversely, there is even a disincentive for vendors to issue corrections to deployed systems, because any changes (even necessary ones) require costly recertification. Nor does the draft address the matter of subsequently identified vulnerabilities in the uninspected COTS components, by requiring ongoing updates and integration testing. One might think that, at least, if a voting system (or any of its components or modules) was found to be defective, or if the testing was discovered to have been improperly performed or deemed inadequate, there would be some process whereby the EAC would be required to withdraw certification. But the 2007 draft VVSG (like its predecessors) continues to leave the methodology whereby certification can be rescinded because of later-discovered flaws to the EAC.

Comment by Cem Kaner (Academic)

"Open-Ended Vulnerability Testing" is an odd name for the activity described here because this testing is not open-ended. It is tightly time-constrained (closed-ended). The current expectation in VVSG is a period of 12 person-weeks. .......... The style of testing described for OEVT is normally called "exploratory testing". One of the critical risks of exploratory testing is that it will be done poorly (at best, superficially or focused only on previously well-understood risks) if the tester has insufficient time to learn the particular system and assess its particular risks. .......... (Affiliation Note: IEEE representative to TGDC)

Comment by Cem Kaner (Academic)

ADD the following text to the end of Section 2.5: "Because of the importance of this reliance, OEVT team members must attest under oath that in their professional opinion (a) they have had sufficient time to develop sufficient knowledge of the system and of tools and techniques to investigate it; and (b) their testing was sufficiently competent and thorough to meet the objectives of this phase of testing."

Rationale: First, we are creating a task that depends on the expertise of the task-performers, in an environment in which it is very difficult to qualify that expertise. There is no widely accepted credential for testing expertise: there are no degree programs in testing, nor broadly recognized certifications, nor professional society fellowships. At a minimum, we must require the people who claim to be experts to certify that their work has been competently done. (Affiliation Note: IEEE representative to TGDC)

Comment by U.S. Public Policy Committee of the Association for Computing Machinery (USACM) (None)

Transparency and Feedback: Given the complex nature of voting systems, and computer systems in general, it is not uncommon for some problems to arise after the testing phase is over and the systems are operational. This is particularly true when systems are scaled up in size, such as the expansion from precinct systems to citywide voting centers that Denver attempted during the 2006 general elections. If voting standards are adjusted once every few years, they likely will not keep pace with changes in technology and new problems that appear after long hours of use or changes in scope and/or scale.

There should be some means for addressing new problems or concerns with voting standards between iterations of a VVSG. While this is handled through the EAC's certification manuals and processes, it is important to incorporate feedback from this certification process into the standards, and a process for doing so should be developed as part of the standards. We recommend that any problems found in the testing processes that are not covered by the standards be addressed quickly, prior to a subsequent iteration of the standards. For instance, if systems are consistently demonstrating a functional problem that could affect the election process and is not covered by the standards, reporting this activity should result in corrective actions that are as binding as standards, approved by the EAC and appropriate advisory groups. With such a process, the decertification and recertification prompted by top-to-bottom reviews such as those held in California could be made less disruptive and more broadly applicable.

2.6 Expanded Security Coverage

In addition to software independence and OEVT, the treatment of security in voting systems has been expanded considerably. There are now detailed sets of requirements for eight aspects of voting system functionality and features, as shown in Table 2-2.

Table 2-2 Expanded security coverage

Cryptography: Requirements relating to the use of cryptography in voting systems, e.g., use of U.S. Government FIPS standards. Voting devices must now contain hardware cryptographic modules to sign election information.

Setup Inspection: Requirements that support the inspection of a voting device to determine that: (a) software installed on the voting device can be identified and verified; (b) the contents of the voting device's storage containing election information can be determined; and (c) components of the voting device (such as touch screens, batteries, power supplies, etc.) are within proper tolerances, functioning properly, and ready for use.

Software Installation: Requirements that support the secure installation of voting system software using digital signatures (see the verification sketch following this table).

Access Control: Requirements that address voting system capabilities to limit and detect access to critical voting system components in order to guard against loss of system and data integrity, availability, confidentiality, and accountability in voting systems.

System Integrity Management: Requirements that address operating system security, secure boot loading, system hardening, etc.

Communications Security: Requirements that address both the integrity of transmitted information and the protection of the voting system from communications-based threats.

System Event Logging: Requirements that address system event logging to assist in voting device troubleshooting, recording a history of voting device activity, and detecting unauthorized or malicious activity.

Physical Security: Requirements that address the physical aspects of voting system security: locks, tamper-evident seals, etc.
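
To make the Software Installation row concrete, here is a minimal sketch of verifying a detached digital signature on an installation package before it is installed. It assumes the third-party Python "cryptography" package and an RSA public key; the file paths, key type, and padding choice are illustrative assumptions, not requirements taken from the VVSG (which instead points to FIPS-approved algorithms).

```python
# Minimal sketch: verify a signed installation package before installing.
# Assumes the third-party "cryptography" package and an RSA public key;
# all paths and algorithm choices here are illustrative, not VVSG-mandated.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_package(package_path: str, signature_path: str, pubkey_path: str) -> bool:
    """Return True only if the package matches its detached signature."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(package_path, "rb") as f:
        package = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, package, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

A device following these requirements would refuse to install any package for which a check like this fails.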

0 Comments

2.7 Treatment of COTS in Voting System Testing

To clarify the treatment of components that are neither manufacturer-developed nor unmodified COTS (commercial off-the-shelf software or hardware), and to allow different levels of scrutiny to be applied depending on the sensitivity of the components being reviewed, different subdivisions of COTS have been identified, with various requirements scoped to the new terminology. For example, a COTS operating system may not require source code review, but the configuration files for that operating system would require test lab review.

The way in which COTS is tested has also changed: the manufacturer must deliver the system for testing without the COTS installed, and the test lab must procure the COTS separately and integrate it. If the integration is successful, the COTS can safely be assumed to be unmodified.
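
One way a test lab might confirm that the COTS it integrated matches the copy it procured is a file-by-file hash comparison, sketched below using only the Python standard library. This check is an illustration of the underlying idea, not a procedure the VVSG prescribes.

```python
# Illustrative check that an installed COTS tree matches the lab-procured
# copy, using only the Python standard library. Not a VVSG-mandated step.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """SHA-256 digest of one file, read in chunks to handle large binaries."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def trees_match(lab_copy: Path, installed_copy: Path) -> bool:
    """Compare every file in the lab-procured tree against its counterpart
    in the integrated system; any missing or altered file fails the check."""
    for src in lab_copy.rglob("*"):
        if src.is_file():
            dst = installed_copy / src.relative_to(lab_copy)
            if not dst.is_file() or sha256_of(src) != sha256_of(dst):
                return False
    return True
```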

4 Comments

Comment by Philip Loughmiller (Voting System Test Laboratory)

Please explain "procure," as this could mean different things to different individuals; one of the definitions in the dictionary is "to obtain." Is it the responsibility of the test lab to purchase the COTS, will the manufacturer provide it to the test lab with the shrink wrap still intact, or will the test lab get the COTS from the manufacturer without the shrink wrap?

Comment by E Smith/P Terwilliger (Manufacturer)

It may not be possible for the test lab to independently procure COTS components. For example, a vendor may have purchased a package at a one-time cost (versus per unit), and that package may since have been upgraded or discontinued, making it impossible to procure legally.

Comment by Rebecca Mercuri PhD (Advocacy Group)

The VVSG, through its perpetuation of the legacy COTS exemption from source code examination, continues to allow voting systems to be shrouded in secrecy while also circumventing salient portions of the testing process via the innovation class. There is no need for the COTS exemption, since operating systems, language compilers and application software (such as databases and spreadsheets) have all existed in the open source libraries for over two decades. As well, vendors have always had the option of protecting their proprietary interests by copyrighting and patenting their intellectual property, rather than insisting on trade secrecy.

Comment by Electronic Privacy Information Center (Advocacy Group)

Documentation regarding the reliability of COTS products for inclusion in systems that require a measurable degree of precision, such as vote recording, aggregation of ballot totals, and reporting of results, SHALL also be included in the review. Should the manufacturer of a voting system or component be aware of a COTS manufacturer's specific warnings regarding the use of its product in applications or processes that require precision, this information SHALL be provided at the time the system is submitted for testing under this standard. In addition, the manufacturer should provide information on how a particular problem is addressed in the product submitted for testing.

2.8 End-to-End Testing

The testing specified in previous versions of the VVSG for accuracy and reliability was not required to be end-to-end and could bypass significant portions of the system that would be exercised during an actual election, such as the touch-screen or keyboard interface. Because this left voting systems incompletely tested for reliability and accuracy, the practice is now prohibited in this version of the VVSG. For example, if a tabulator is specified to count paper ballots that are manually marked with a specific writing utensil, it is not valid to substitute ballots that were mechanically marked by a printer. Devices or software that closely and validly simulate actual election use of the system are permissible.
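
The distinction can be made concrete with a small sketch: a prohibited shortcut injects ballot data directly into the tabulation logic, while an end-to-end test produces each ballot through the marking step (or a close, validated simulation of it) first. Every class and function below is a hypothetical stand-in invented for illustration, not an API from any real voting system.

```python
# Hypothetical stand-ins for illustration only; no real voting-system APIs.

class Tabulator:
    """Stand-in for a tabulator's counting logic."""
    def __init__(self) -> None:
        self.tally: dict[str, int] = {}

    def scan(self, ballot: dict[str, str]) -> None:
        # A real tabulator would recognize marks on paper here; that
        # mark-recognition step is exactly what shortcut tests skip.
        for contest, choice in ballot.items():
            key = f"{contest}/{choice}"
            self.tally[key] = self.tally.get(key, 0) + 1

def mark_ballot(intent: dict[str, str]) -> dict[str, str]:
    """Stand-in for marking a ballot as a voter (or a validated marking
    simulator) would; this is the step that must not be bypassed."""
    return dict(intent)

def end_to_end_run(tabulator: Tabulator, intents: list[dict[str, str]]) -> None:
    """Drive the whole path, one marked ballot at a time."""
    for intent in intents:
        tabulator.scan(mark_ballot(intent))
```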

2 Comments

Comment by Cem Kaner (Academic)

The prohibition against component testing may go too far. Lab test results are poor sources of estimates of a system's reliability (see comments on reliability estimators later), and so I do not accept as valid the concern expressed that partial-system tests limit the thoroughness of testing for reliability. Lab tests of the kind envisioned in the VVSG do not test for reliability whether they are end-to-end or not.

Requiring that every test always be run in the context of an end-to-end task imposes an enormous testing cost on the voting equipment vendor. It is important to consider cost-benefit issues for individual tests: the more expensive we make each test, the smaller the set of tests we can rationally impose on the vendor. Software defects are more like design defects than manufacturing defects: a software error shows up in every manufactured copy of the same product, and discovery of new defects comes from testing the system in a new way, not from testing for the same flaw again in a new copy of the device. Frequent, perfect execution of a narrow range of tests will yield less information than strategies involving broader sets of tests.

This issue becomes moot if we allow the public to obtain these systems and run their own tests. Given public and researcher interest in these systems, such testing will probably be a far more varied (and thus richer) sample of component and end-to-end tests than can be achieved by any test lab or afforded by any vendor.

(Affiliation Note: IEEE representative to TGDC)

Comment by Electronic Privacy Information Center (Advocacy Group)

We strongly support this provision of the 2007 VVSG draft document and recommend that it be included in the final draft of the document.

2.9 Reliability

The metric for reliability has been changed from Mean Time Between Failure (MTBF) to a failure rate, based on volume of use, that varies by device class and severity of failure (failures are equipment breakdowns, including software crashes, after which continued use without service or replacement ranges from worrisome to impossible). This version of the VVSG thus specifies different failure rates for different devices, which permits more refined testing and eliminates the previous "one size fits all" approach.

Additionally, a volume test is now included that is analogous to the California Volume Reliability Testing Protocol. This test simulates actual election conditions and will better assess overall reliability and accuracy.

Reliability, accuracy, and probability of misfeed for optical scanners are now assessed using data collected through the course of the entire test campaign, including the volume testing. This increases the amount of data available for assessment of conformity to these performance requirements without necessarily increasing the duration of testing.
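
As a sketch of how the new metric differs from a single MTBF figure, the snippet below computes an observed failure rate per unit of use volume and compares it against a per-device-class benchmark. The class names and benchmark numbers are invented placeholders, not values taken from the VVSG.

```python
# Illustration of a volume-based failure-rate check. Class names and
# benchmark values are invented placeholders; the VVSG defines its own
# device classes and acceptable rates, which also vary by failure severity.
FAILURE_RATE_BENCHMARKS = {
    "optical_scanner": 0.0002,  # failures per ballot scanned (placeholder)
    "ballot_marker": 0.0005,    # failures per ballot marked (placeholder)
}

def observed_failure_rate(failures: int, volume: int) -> float:
    """Failures per unit of use volume (e.g., per ballot processed)."""
    if volume <= 0:
        raise ValueError("volume must be positive")
    return failures / volume

def within_benchmark(device_class: str, failures: int, volume: int) -> bool:
    """True if the observed rate does not exceed the class benchmark."""
    return observed_failure_rate(failures, volume) <= FAILURE_RATE_BENCHMARKS[device_class]

# Example: 3 failures while scanning 20,000 ballots -> rate 0.00015, passes.
assert within_benchmark("optical_scanner", 3, 20_000)
```

Pooling data across the whole test campaign, as described above, simply increases the volume over which such a rate is estimated.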

2 Comments

Comment by Brian V. Jarvis (Local Election Official)

Throughout this manual, recommend limiting the use of terminology such as "worrisome." Rather, use terms and concepts that are measurable and quantifiable.

Comment by Electronic Privacy Information Center (Advocacy Group)

Getting elections right the first time is of critical importance to public elections. Developing methods that accurately measure the demands placed on voting systems is of critical importance. Reliability shall be defined in such a way that a single failure or series of failures will not prevent the successful completion of the ballot casting process for an individual voter or the successful conclusion of the election administration process. Diminished or malfunctioning states of key vote casting, retention, tabulation, and reporting functions should be readily apparent to each of the following:
• Voter;
• Poll Worker;
• Election Administrator.

2.10 Expanded Core Requirements Coverage

The general core requirements for voting systems have been expanded greatly. In addition to the already noted improvements in COTS coverage, end-to-end testing for accuracy and reliability, and the new reliability metric, the following topics in Table 2-3 have been added or expanded.

Table 2-3 Expanded core coverage in the VVSG

EBMs: Requirements broadened to cover Electronically-assisted Ballot Markers (EBMs) and Electronic Ballot Printers (EBPs).

Early voting: Updates to requirements to handle early voting.

Optical scanner accuracy: Significant changes to accuracy requirements for optical scanners and the handling of marginal marks.

Coding conventions: Major revisions to coding conventions and prohibited constructs in programming languages.

QA and CM: Major revisions to Quality Assurance and Configuration Management requirements for manufacturers.

Humidity: New operating tests for humidity affecting paper and the voting system.

Logic verification: Requirements to show that the logic of the system satisfies certain constraints and correctness properties.

Epollbooks: Requirements on ballot activation involving epollbooks, to protect the integrity and privacy of ballot activation information and to ensure that records on epollbooks do not violate the secrecy of the ballot.

Common data formats: Requirements that make voting device interfaces and data formats transparent and interchangeable and that call for consensus-based, publicly available formats (see the sketch following this table).
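
As an illustration of what a consensus-based, publicly available format enables, the sketch below serializes a minimal results record as plain JSON. The field names are invented for illustration only and do not come from the VVSG or any published schema.

```python
# Hypothetical, minimal results record; field names are invented for
# illustration and are not drawn from the VVSG or any published schema.
import json

results = {
    "device_id": "scanner-042",
    "election": "General",
    "contests": [
        {
            "contest_id": "governor",
            "counts": {"Candidate A": 1523, "Candidate B": 1488, "write-in": 12},
        }
    ],
}

# Because the record is documented, plain JSON, any tool can read it back
# without proprietary software, which is the point of a common data format.
serialized = json.dumps(results, indent=2, sort_keys=True)
assert json.loads(serialized) == results
```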

0 Comments