Chapter 1: Overview

This document represents a recommendation from the Technical Guidelines Development Committee to the Election Assistance Commission for a voting system standard written to address the next generation of voting equipment. It is a complete re-write of the Voluntary Voting System Guidelines (VVSG) of 2005 and contains new and expanded material in many areas, including reliability and quality, usability and accessibility, security, and testing. The requirements are more precise, more detailed, and written to be clearer to voting system manufacturers and test laboratories. The language throughout is written to be readable and usable by other audiences as well, including election officials, legislators, voting system procurement officials, various voting interest organizations and researchers, and the public at large.

24 Comments

Comment by Elisabeth S. Hall (General Public)

I do not trust e-voting machines because the programming cannot be checked by any but the few who are programmers. Someone could mess with the program and no one would even know. What I do like and think is great is the kind where you vote on a paper ballot and then insert it into a machine that reads it, kicks it back out to you if your vote is uncountable for any reason, and then sends the vote to a main computer. This kind can be checked by recounting the paper ballots, and it probably should be checked against those paper ballots at a random sampling of sites. Thank you.
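
A minimal sketch of the random site sampling the commenter describes, under made-up assumptions (200 precincts, a sample of 10); a real audit would use a publicly verifiable source of randomness rather than rand():

    /* Illustrative sketch: pick a random sample of precincts whose
       paper ballots will be hand-counted against the scanner totals.
       Precinct count and sample size are invented for the example;
       assumes sample_size <= num_precincts. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        int num_precincts = 200;   /* assumed jurisdiction size */
        int sample_size   = 10;    /* assumed audit sample */
        int chosen[200] = {0};

        srand((unsigned) time(NULL));  /* placeholder; use a public, verifiable seed in practice */
        for (int picked = 0; picked < sample_size; ) {
            int p = rand() % num_precincts;
            if (!chosen[p]) {          /* sample without replacement */
                chosen[p] = 1;
                printf("Audit precinct %d by hand count\n", p + 1);
                picked++;
            }
        }
        return 0;
    }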

Comment by Vicki Coffman (None)

"Chapter 1: OverviewThis document represents a recommendation from the Technical Guidelines Development Committee to the Election Assistance Commission for a voting system standard written to address the next generation of voting equipment." The "next generation of voting equipment" should be paper ballots, hand counted at the precinct level. These unauditable electronic monstrosities that have been foisted upon us must be done away with.

Comment by Donna Mummery (Advocacy Group)

These new guidelines should be evaluated by independent computer software engineers such as Bo Lipari of New Yorkers for Verified Voting. Otherwise the public will have little confidence in them.

Comment by Matt S. (General Public)

"security, and testing". A bit more about traeability and accountability and tamper-indicators would be nice.

Comment by Geoffrey A. Landis (Academic)

The FAQ section states: "Q: Will the source code be available to the public? A: No. The EAC will make all information available to the public consistent with Federal law. The EAC is prohibited under the Trade Secrets Act (18 U.S.C. §1905) from making the source code information available to the public. However, the test labs will examine the source code to ensure compliance with the voluntary voting system guidelines. Test plans, test reports, and other information about the test labs and the voting system manufacturers are available on the EAC Web site (www.eac.gov)." This is a bad idea. A much better idea is: "No voting machine shall be certified unless the vendor makes the source code available for public inspection." This is fully in compliance with 18 U.S.C. §1905, in that any company retains the right to keep their code proprietary. However, if they do so, they should not be allowed to have their machines used in public elections. Vote counting should never be conducted in secret.

Comment by Zachary Buckholz (General Public)

"...contains new and expanded material in many areas, including reliability and quality, usability and accessibility, security, and testing." Please include "verification" and "accountability". I want to verify my vote was counted.

Comment by Robert Eden (General Public)

I've read through the document and I can't find a place where the source code is released for public review. That should be a requirement. While a certification lab may study the code, they certainly won't go at it with the vigor some members of the general public will. There should also be a place where issues found can be reported (not to the company, who would probably ignore them). Companies may be concerned with others "stealing" their software if the source code is released. Copyright law can protect them from that: no one could copy the code knowing they would have to release it and be easily caught. Companies may also be concerned with their image if the code is sloppy or buggy. Well, tough. Get it right, or don't do it at all. Avoiding the embarrassment factor will increase the quality of the code. If companies want to hide the code, the public has a valid reason to wonder what they have to hide.

Comment by Gabriel Sorrel (General Public)

I second (or third, or fourth) other comments that voter verification is a must, and should be top priority. Voters must be able to see that their votes are accurate and recorded; the only way to do that is for them, at the end of the process, to turn in a legible paper ballot that is then counted.

Comment by Kevin Baas (General Public)

"The EAC is prohibited under the Trade Secrets Act (18 U.S.C. §1905) from making the source code information available to the public." - You've got to be kidding! "Trade secret"?!?! Let me make this moot by revealing all of their "trade secret"s: "int tally[num_races][max_candidatdates]; void vote() { for( int i = 0; i < num_races; i++) tally[i][getVoteForRace(i)]++; }" There. Trade secrets revealed. Bet you didn't think this comment section would be used for such illegal activity! In any case, now you can arrest me and the public can finally see the source code for these machines. Consider me a "sacrificial lamb". Point is that the "Trade secret" doesn't hold any water, and wouldn't hold up in court if the judges/jury had any technical knowledge. The code is so entirely trivial that to a programmer, the idea that it would contain any "trade secret"s is completely laughable. Really, the notion is absolutely ridiculous. I.e. worthy of ridicule. And lots of it. Only problem is, it's not funny. I cannot stress enough how ridiculous the argument is. Did my little vote-counting program not demonstrate this clearly enough? They can all use my code, if they want, by the way. I hereby release it under GPL. It's probably the fastest, most efficient, and most reliable vote-counting code ever written, and they can use it FOR FREE! Really, if they are putting any code in there that they think is worthy of a patent, they're obviously doing something wrong. I can tell you this for sure: any piece of code sophisticated enough to ACTUALLY BE a "trade secret", isn't counting votes. And since the only thing the machine is supposed to do is votes, more likely than not, that code is doing something illegal.

Comment by Carl Hage (General Public)

The Public Comment web-interface for the VVSG is unsuitable-- worse than trying to read through a drinking straw. Viewing each comment of a few sentences requires many clicks. Comments should be aggregated into pages, and/or viewable as a sidebar within the document.

Comment by ted selker (Academic)

Usability problems for voters and system administrators are one of the most common ways that votes are lost. The recommendations' usability benchmarks do not give adequate direction for testing labs, vendors, or election officials to significantly improve this. Experimental methods need to be adequately explained, and usability testing exemplars should be added. The goal is to implement performance metrics that testing labs can use to show that a particular voting machine allows people to find mistakes as easily as they can with some accepted benchmark. The recommendations also repeatedly state the need for instructions; however, what counts as adequate instructions remains undefined. Many improvements can be made to the usability recommendations to increase the effectiveness of the VVSG.

The concept of "Software Independence (SI)" as expressed in the recommendations gives all systems less integrity and accuracy. The term SI has to be changed before the recommendations are worth putting forward, because it is too narrow, fails to cover the entire voting process, and disadvantages particular technologies. Improvements in elections have come from taking away opportunities for some individuals to affect the expression of others' votes. Oversight of the entire process -- most generally by mutual supervision -- has been central. Practices such as requiring a representative of each party to present a key to open a voting records storage area (the double-lock approach) have improved security in many jurisdictions. Technology for more than one assurance of security can be introduced into many parts of the voting process without extravagant research or retooling. I recommend the term SI be replaced by Single Agent Independence throughout the document. This broader concept would cover human agents, hardware, and voting records as well as software.

The recommendations should provide a measurement for each requirement. Performance metrics built on standards drawn from the best real-world performance to date should be added. References to standards and their justifications throughout the document would make it easier to understand and to update. For example, a standard for residuals in experiments could be the mean for a national election, like the 1.2 percent experienced in 2004 [http://www.vote.caltech.edu/journals/ELJ-Stewart_06.pdf]. Similarly, adjacency errors should be less than the .5 percent seen in real-world punch card elections [http://vote.caltech.edu/media/documents/wps/vtp_wp8.pdf].

The document states the goal of coverage of a complete voting system. This goal would require the document to describe in detail the process of creating a voting system, from accurately reflecting the races in the election to reporting the results. The document does not comment adequately on verifying that votes are collected and reported after the election as the voter intended to vote. The introduction of VVPAT into the recommendations is inappropriate given the unreliable performance history of VVPAT. It is also inappropriate not to highlight other solutions, such as the audio verification systems from Hart InterCivic, ES&S, Democracy Systems, and others. It is important to define principles for success instead of choosing specific technologies or applications. Given the fad of no-excuse absentee voting, the recommendations should address issues of its legitimacy and problems. At an extreme, the recommendations could set a standard that would secure offsite voting. Would it be possible to require the mail carrier to act as one of the agents in the voting process?

The VVSG recommendations are full of crucial numbers and information that need references. The recommendations should provide supporting research or documentation at these key points. Many specifications need conceptual explanation and references to have meaning. To the extent possible, the way a calculation was arrived at should be in the language. An example of an adequately justified specification is '150 milliseconds, the amount of time it takes a person to physically respond to a visual stimulus [ref],' or 'A socially comfortable conversational response time of 1/3 to 1 second will be required for responses [ref].'

Many terms are important enough to the document that they should be included in the glossary but don't appear. Some of the terms I would like to see defined include: Audit, Auditor, Cast, Cognitive Disabilities, Human Readable, Ink, Instructions, Receipt, Paper, Private Vote, Privacy, Private, Thermal Paper, and Usable. Some definitions in the existing glossary are inadequate. Many of these terms need specification as well as definition. For example, is paper defined as cellulose, dimensionally stable, capable of being stored for 22 months at 55 degrees Centigrade without changing characteristics, etc.? Another example of a need for further specification is cognitive disabilities, which might include all language, reading, memory, and affectively observable disabilities (such as ADD, Tourette's Syndrome, the autistic spectrum, etc.) that limit the performance of a person below their cognitive functioning level. Cognitive disabilities might also need to address people who cannot act volitionally to make selections in an election.

The recommendations as framed do not help us create the less expensive, more accessible, higher-integrity election systems that other countries are creating and enjoying. By eliminating electronic ballot marking from mainstream acceptability, for example, this document would contribute to a measurable increase in residual votes over what was already achieved by states in 2004 [http://www.vote.caltech.edu/journals/ELJ-Stewart_06.pdf]. The current form of this document is likely to undermine support for the VVSG by states attempting to improve the conduct of their elections. Our comments are extremely important to make the document a viable recommendation to the EAC.
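
The residual-rate benchmark the commenter cites is a simple ratio; a minimal sketch, with illustrative ballot counts and the 1.2 percent figure from the comment:

    #include <stdio.h>

    /* Residual vote rate: fraction of ballots cast that record no valid
       vote in a given contest (undervotes, overvotes, spoiled marks). */
    double residual_rate(long ballots_cast, long valid_votes) {
        return (double)(ballots_cast - valid_votes) / (double) ballots_cast;
    }

    int main(void) {
        long cast = 100000, valid = 98900;   /* invented example numbers */
        double rate = residual_rate(cast, valid);
        printf("Residual rate: %.2f%% (benchmark cited in the comment: 1.2%%)\n",
               rate * 100.0);
        return 0;
    }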

Comment by Phillip Michaels (None)

Designing voting equipment specifications without knowing the process in which the equipment will be working is like designing a car without knowing whether it will be driven only on the interstate, in the city, on country roads, or off-road. As this document goes to some length to point out, these are voluntary guidelines. So why are you not specifying the processes that this equipment must support? Here are the minimum requirements for a free and open voting system that reliably collects and totals voters' opinions:

1 - All voting systems shall generate a paper record of each voter's ballot that will be the official ballot. The equipment producing the paper ballot will not accumulate votes. Any questions of voting totals shall be resolved by referring to the official ballot.

2 - Random audits of sufficient quantity and quality will be conducted to determine with a 95% degree of confidence that the results of an election fairly and honestly reflect the wishes of the voters. These audits shall compare the official paper ballots to the final election totals, total ballots printed versus the number used, procedures for ballot security, procedures for tying out the voter identification verification, chain-of-custody procedures for software, machine testing, and comparisons of precinct vote totals with the numbers of voters signing in. These audits are to be performed and posted for public consumption prior to certification of election results.

3 - Audits must be public. A video and audio record must be made of every audit.

4 - Failure to meet the audit standards shall invalidate the election.

5 - Individual ballot records and precinct-level vote totals must be posted online for at least two weeks of public comment before election results are certified.

6 - No public voting software or equipment can be considered a trade secret. Designs must be open to the public for review and comment.

Now that the basic process voting equipment must support is known, the equipment specifications laid out in this document should be rewritten to support it.
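
One standard way to make the 95% confidence figure in item 2 concrete: size the random audit so that, if some assumed number of precincts were miscounted, the chance of the sample missing all of them is at most 5%. A sketch under invented assumptions (500 precincts, 10 miscounted):

    #include <stdio.h>

    /* Probability that a random sample of n precincts (drawn without
       replacement) misses all b miscounted precincts out of N total. */
    double miss_probability(int N, int b, int n) {
        double p = 1.0;
        for (int i = 0; i < n; i++)
            p *= (double)(N - b - i) / (double)(N - i);
        return p;
    }

    int main(void) {
        int N = 500, b = 10;   /* illustrative assumptions, not a prescription */
        for (int n = 1; n <= N - b; n++) {
            if (miss_probability(N, b, n) <= 0.05) {
                printf("Audit %d of %d precincts for 95%% confidence\n", n, N);
                break;
            }
        }
        return 0;
    }

The required sample grows as the assumed number of miscounted precincts shrinks, which is one reason fixed-percentage audits can fall short of a true confidence target.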

Comment by Peter M Zelechoski, CISSP, MBA-TM, VP Election Systems & Software (Manufacturer)

ES&S Response to the EAC’s Request for Comments On the 2007 Draft VVSG Elections Systems & Software Inc. has taken great care in reviewing the 2007 Draft VVSG proposed by the EAC and takes pleasure in submitting its comments in response to the Election Assistance Commission’s request. Executive Overview Elections Systems & Software (ES&S) lauds the EAC’s efforts to enhance the guidelines used to evaluate the viability of voting systems. Many of the changes in the proposed draft will better clarify and strengthen the voting systems that meet the certification. Nonetheless, many of the requirements in this draft seem to warrant rework before the draft moves forward. ES&S believes that all existing voting systems will become non-compliant under review of the proposed standards. All existing hardware will require replacement, primarily due to the requirement for an embedded hardware encryption module; ES&S believes this is an overly prescriptive mandate that would be better left as a non-prescriptive statement of requirement that could be addressed with other solutions. ES&S wonders when/if any jurisdiction can afford to replace its existing, in many cases relatively new, equipment without a new round of Federal money. A necessary extension, included in the VVSG by reference, is the Usability Performance Benchmarks for the Voluntary Voting System Guidelines. This document is the source of the usability performance benchmark against which voting systems are to be measured and the Voting Performance Protocol (VPP) that is to be used to assess them. ES&S has also reviewed this document in conjunction with the Draft VVSG and has provided some comments on it, in relation to its inclusion. Of primary concern with this document is the disparity between the VVSG requirement for a tester population that matches the "general population" while the benchmark used a much different tester population and calls for such in the VPP. Review of Part 1 Equipment Requirements Calling Part 1 "Equipment Requirements" is misleading. It is requirements for the Voting System (in whole and in part). Chapter 2 Conformance Clause Pollbooks are listed elsewhere in the document but they are not listed in this section, where the various classes of devices are defined. P1-2.4-A ES&S believes this requirement (P1-2.4-A: Implementation Statement) allows for declaration of a system that implements an incremental change to a system such that an existing VVSG-2002 or VVSG-2005 system is extended with a class (or classes) that meet the new VVSG-2007 while the existing/other classes of the system retain their previous conformance. ES&S believes such incremental improvements are in the best interest of all parties and would like to see this declared more specifically. Chapter 3 Usability, Accessibility, and Privacy Requirements Most of the comments ES&S has on this chapter fall into 3 categories: 1) requirements where we believe clarification is required, 2) requirements where there are conflict with state and country requirements, and 3) requirements where there are conflicts between the VVSG and the VPP. P1-3.1.1-1 It is impossible to develop a system that can be accessible to ALL eligible voters. Although a broad principle may be that ALL voters shall be afforded opportunity, ES&S believes this section should note the reality that no one method or device can provide such access to ALL voters. 

P1-3.2.1.1 (1st bullet): ES&S requests clarification on how erroneous votes are to be determined during Usability Performance benchmark testing.

P1-3.2.1.1-A: ES&S would like clarification on the Confidence Intervals and the Adjusted Wald Method discussed and used in the benchmark. Are they mandated? Does falling within the benchmark interval constitute passing?

P1-3.2.1.1-B: ES&S would like clarification on the Confidence Intervals and the Adjusted Wald Method discussed and used in the benchmark. Are they mandated? Does falling within the benchmark interval constitute passing?

P1-3.2.1.1-C: ES&S would like clarification on the Confidence Intervals and the Adjusted Wald Method discussed and used in the benchmark. Are they mandated? Does falling within the benchmark interval constitute passing?

P1-3.2.1.1-D.3: ES&S believes the measure of satisfaction is not being captured accurately by the existing benchmark and would like the VPP to be modified to capture it accurately, so that the satisfaction metric truly represents satisfaction. The VPP did not accurately capture user satisfaction because satisfaction does not correlate to the perception of confidence (ISO satisfaction: freedom from discomfort, and positive attitudes toward the use of the product). The question as it was posed tried to assess the participant's perception of their effectiveness.

P1-3.2.1.2-A: ES&S requests clarification or definition of "general population".

P1-3.2.1.2-A: ES&S asks for clarification on Usability Testing for the General Population; specifically, for units designed for accessibility (ACC VS), the general population is frequently not adept in their use, and ES&S believes general-population usability tests would be contraindicated.

P1-3.2.1.2-A & P1-3.2.8.1-B: ES&S would like to request consistency in regard to usability test participants. The manufacturers' summative tests and the VPP benchmark summative test should not be using different demographic characteristics. This requirement calls for "using individuals who are representative of the general population", while the VPP benchmark deliberately chose not to use individuals who are representative of the general population.

P1-3.2.2.1-A: ES&S has been required by some jurisdictions to allow voters to purposely OVERVOTE. This requirement should be modified to allow for warnings without requiring the system to prohibit overvoting.

P1-3.2.2.1-B: ES&S has been required by some jurisdictions to allow them to disable the UNDERVOTE notification. This requirement should be modified to allow for warnings without requiring them.

P1-3.2.3.1-A.2: ES&S requests a technical specification for meeting the requirement for "low sound leakage" headphones, and the test method.

P1-3.2.3.2-A: ES&S would like clarification on how this would be handled (no indication of alternate language on a CVR) if the Audio Tactile Interface alternative language has a write-in.

P1-3.2.4-C: ES&S requests a specification reference to the required "best practice" for "plain language".

P1-3.2.4-C (5, 6 and 7): ES&S would like to see the Plain Language requirement annotated for the case that non-English languages do not always allow gender-neutral and non-negative phrasing.

P1-3.2.5-B: ES&S believes an additional sub-requirement may be needed for Audio Reset, to cover the case of a fleeing voter and to protect the poll worker from exposure to an abnormally loud setting.

P1-3.2.5-F: ES&S would like the requirement for SANS SERIF fonts clarified to indicate it applies only to rendered text and not to graphics (i.e., the scanned image of a physical ballot would display whatever font was used in the physical printing of that ballot).

P1-3.2.6.1-E: ES&S believes an allowance should be stipulated in the standard for the case where jurisdictional rules require the inactivity time to be outside the range specified (2 to 5 minutes).

P1-3.2.7-A: ES&S believes requirements P1-3.2.3.2-A and P1-3.2.7-A are in conflict when the VVR is also the official CVR.

P1-3.2.7-A.2: ES&S believes it would be good to clarify that information presented to other voters in written form shall, in alternate languages that have no written form, be presented in auditory form.

P1-3.2.7-A.4: ES&S believes the requirement for testers to be "not fluent in English" is ill formed. ES&S would like the phrase "but not fluent in English" to be replaced with "have difficulty with English". The US Census Bureau has already determined a range of "English-speaking ability"; reference U.S. Census Bureau, Census 2000 questionnaire #11c (using the same Likert scale).

P1-3.2.8.1-A: ES&S requests the definition of the level of understanding of a typical poll worker and the criteria for determining whether the documentation complies with this requirement.

P1-3.2.8.1-B: ES&S requests clarification or definition of "general population".

P1-3.2.8.1-C.1: ES&S would like to request the "persona", or the definition of a representative user type, to be recruited for the usability test. Poll workers receive training and therefore fall into an "expert poll worker" category. The discussion must state the relevant characteristics to be used for the poll worker recruiting persona – what they should be, NOT what they cannot be.

P1-3.3.2-B: ES&S requests a specification and test method for determining compliance with the Color Saturation requirement.

P1-3.3.2-C: ES&S requests clarification on the criteria used to determine conformance with the requirement for "distinctive buttons and controls".

P1-3.3.3-F: Requirement P1-3.3.3-F has an erroneous reference to P1-3.2.5-C for more information on "accidental activation".

P1-3.3.4-C: ES&S requests specifications for the minimum level of dexterity that must be supported. Are vendors required to engineer a device to support an individual with no dexterity at all?

Chapter 4: Security and Audit Architecture

P1-4.3.2-A: ES&S is required by several jurisdictions to include provisional ballots in the counts and by several others not to include them. This is a jurisdiction-specific requirement. The requirements must be modified to allow for the handling of this feature based on local jurisdictional rules.

P1-4.3.2-C: ES&S believes that putting the date and time of creation on ballot images would violate secrecy, since it would allow voting order to be determined. ES&S believes the capture of rejected ballot images is pertinent only to the VVPAT environment; in other environments (e.g., optical scan rejects) the ballot is not actually captured.

Chapter 5: General Security Requirements

ES&S's main issue in this section is its over-prescriptive requirement for a hardware change that prematurely renders all voting devices obsolete. Strengthening existing solutions with requirements leading to similar objectives should be given preference over a requirement that forces jurisdictions to choose between replacing all of their voting devices and not complying with the revised VVSG.

P1-5.1.2-A & P1-5.1.2-B: All existing devices now become obsolete. ES&S believes this is an overly prescriptive requirement. It would be possible to provide the same level of authentication and non-repudiation without prescribing the use of a hardware module embedded in each machine.

P1-5.2.1.1-A through P1-5.2.1.2-B.1: ES&S requests clarification on how often validation of the installed software/firmware is required to be executed.

P1-5.3-B: Election offices typically use warehouse staff to perform voting device firmware upgrades, and poll workers run autoloads so that machines do not contain election data until the polls are open. ES&S believes this requirement mandates an unwelcome change to the jurisdictional procedures in use today.

P1-5.5.1-A: The requirement for a Trusted Platform Module (TPM) is another new hardware requirement that makes all existing hardware obsolete; most current hardware cannot access such a device.

P1-5.6.1-A: ES&S requests review of this requirement for the purpose of allowing the use of RFID to validate the ballot as authentic or counterfeit.

ES&S requests clarification of the handling of and requirements for pollbooks. They are briefly referenced in the standards but are not adequately defined, nor are there many direct references to their use or requirements.

P1-5.7.2-J: ES&S believes that local jurisdictions would want an integrated, time-based log that shows ALL activities from election prep, L&A, voting, and collection. We believe that separate machine and election logs should be optional, not a SHALL. [NOTE: the paragraph/requirement numbering gets mixed up in section 5.3.]

Chapter 6: General Core Requirements

P1-6.1-D b: The requirement that alignment marks be outside the marking area seems to assume a particular method of alignment registration, and we believe it is over-prescriptive. It would seem to preclude the use of targets that are self-clocking, such as those used in the "Eagle arrow" ballots, where the head and tail of the arrow serve as the response target clock and are in the area where votes are recorded. Self-clocking techniques provide fully flexible ballot layout ability and eliminate the need for a fixed grid of potential voter response locations.

P1-6.1-F b: The phrase "if needed" in this requirement for separate compartments in ballot boxes provides no reference for establishing when it is needed.

P1-6.3.1.2 Direct Recording Electronic (P182): ES&S believes the use of DREs as Voter Assist Terminals falls closer to the profile shown for EBMs in the estimated volume section (P1-6.3.1.2) and should be listed separately from the standard DRE profile. Inasmuch as vote time is a function of the number of contests and candidates to be voted, ES&S believes that a standard benchmark ballot configuration should be identified and associated with these estimates.

P1-6.3.1.2 Electronically-assisted Ballot Marker (P182): ES&S believes that the use of EBMs as Voter Assist Terminals is a separate profile from their use by others (non-assistive voters) and that each should be listed separately in the estimated volume section (P1-6.3.1.2).

P1-6.3.1.3 Central-count optical scanner: ES&S believes the specification for manageable failures in a central-count optical scanner environment (P1-6.3.1.3) is much too simplistic.
The manageability of processing in this environment depends on multiple factors, including the number of scanners, the processing capacity of each scanner, the number of ballots to be processed, the availability of service technicians, time to service, and mean time between failures. An outage of a single device in a multiple-device deployment, where the processing of all ballots could still be accomplished in the processing window, would be much more manageable than an outage of a single device in a single-device deployment that could not be rectified within the processing window's free time. ES&S believes the specification needs to be modified to take these factors into account.

P1-6.4.2.2-A.2: ES&S requests a definition of what constitutes the "manufacturer's certification information" specified by this tagging requirement (P1-6.4.2.2-A.2). A hardware modification may change the model number; would the date then include the manufacture date and the date of the modification or modifications? If the tag on a device is tamper-resistant, this may present a problem when the manufacturer's information changes.

P1-6.4.3-B: ES&S believes the new requirement for "Suitability of COTS Components" (P1-6.4.3-B) contradicts the EAC interpretation "EAC Decision on Request for Interpretation 2007-05(COTS).doc" of 6-11-2007.

P1-6.6-B.1: ES&S supports the requirement to use EML as the public interchange format.

Chapter 7: Requirements by Voting Activity

It is possible that there are election programming variations other than those enumerated. Provision must be made not to disallow them simply because they are not included in the list of variations provided. The same provision must be applied to the vote-capture device, as a similar attempt at enumerating the variations has been included in this chapter. An EMS may create logical ballot style definitions in EML and may process EML Cast Vote Records but have no physical ballot presentation; yet section 7.2 seems to require all EMSs to produce physical ballots.

P1-7.5.4-C: A voting system that supports the full interoperability premise might not include a casting capability. It might serve only to produce and receive the interoperable messages. That is, it might provide for election definition and produce EML election and ballot definition messages, and then it might receive EML Cast Vote Records and produce EML Results messages. In this scenario, the system does not include a casting capability, as that is left to others.

P1-7.6-B.5: It is common that extenuating circumstances require extending the voting period. Usually that happens before polling locations perform their close. It is quite foreseeable, however, that a polling location may not receive the instruction before completing its closing procedure. In such cases, the polling location would need to re-open polling on its vote-capture device.

Review of Part 2: Documentation Requirements

In practice, the test labs have been requiring a statement of non-applicability for items included in the documentation requirements that have no manifestation in the vendor's system. ES&S believes this is a poor practice that unnecessarily inflates the amount of documentation and makes it less usable for its users. ES&S would like to see guidance included to eliminate this practice.

Chapter 2: Quality Assurance and Configuration Management Data Package (manufacturer)

P2-2.1-A.7: ES&S requests further definition of the phrase "that impacts conformity to the VVSG". Who makes the determination, and how?
Are these the only items that require testing statements?

P2-2.1-A.9: ES&S believes the phrasing in this requirement (P2-2.1-A.9) is much too general. A manufacturer's Configuration Management manual should address its own voting systems, not "a" or "any".

Chapter 3: Technical Data Package (manufacturer)

P2-3.5.4-H through L appear to be duplicates of P2-3.5.4-C through G.

Review of Part 3: Testing Requirements

Chapter 5: Test Methods

P3-5.1.4-A.3: Requiring storage temperature ranges of minus 4 degrees to 140 degrees is entirely appropriate for equipment expected to be stored and used in military field environments, but applying those same requirements to voting equipment can require manufacturers to use very expensive electronic components (LCD screens, processors, etc.) that add no value for 99.9% of users.

Review of Usability Performance Benchmarks for the Voluntary Voting System Guidelines

UPB-3.2: ES&S requests clarification on the conformance test methods to be conducted. Better definition around the 100 participants is needed. Is it per certification, per class of test subjects, per product, or per process?

UPB-5.1: ES&S would like to request that the actual segments be modified on two of the "demographic bands". One of the three "Age bands" and one of the "Education bands" need to be modified, to include "55+" and "no college" respectively, to be consistent with the manufacturers' requirement to test the general population.
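
For readers unfamiliar with it, the Adjusted Wald method that the P1-3.2.1.1 comments above ask about is, in its usual form, the Agresti-Coull interval: z²/2 successes and z² trials are added before computing the standard Wald interval. A sketch for a 95 percent interval, with invented test numbers:

    #include <stdio.h>
    #include <math.h>

    /* Adjusted Wald (Agresti-Coull) confidence interval for a proportion:
       x successes out of n trials, z = 1.96 for 95% confidence. */
    void adjusted_wald(int x, int n, double *lo, double *hi) {
        double z = 1.96;
        double n_adj = n + z * z;                    /* adjusted trial count */
        double p_adj = (x + z * z / 2.0) / n_adj;    /* adjusted proportion  */
        double half  = z * sqrt(p_adj * (1.0 - p_adj) / n_adj);
        *lo = p_adj - half;
        *hi = p_adj + half;
    }

    int main(void) {
        double lo, hi;
        adjusted_wald(92, 100, &lo, &hi);  /* e.g. 92 of 100 test voters succeed */
        printf("95%% CI: [%.3f, %.3f]\n", lo, hi);
        return 0;
    }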

Comment by Jerry Buerge (General Public)

Let there be no question about having a permanent, non-changeable record of each vote cast by every voter, so as to allow an independent hand count or alternate electronic count of each vote cast within each jurisdiction. This will support a detailed audit should one become necessary, in addition to a sample audit of each jurisdiction's machine tabulation to assure that the count has been accurately made with the machinery used. This record is to be created on paper, or within a sealed system that indexes to a separate zone, so that it can first be verified by the voter before the vote is cast to an area of the device that records it to a tabulator, which calculates the total votes cast for each participant or issue of each contest; the record is then held for review at a later date. If that can be done without any question about its accuracy and its ability to be reviewed by people power, that is all that will be necessary to satisfy the public. Any system that does NOT establish a non-changeable record of each vote cast to enable a proper audit will not satisfy that need.

Comment by Jerry Buerge (General Public)

Your system has just proven my point. It has acknowledged my previous comment BUT did not let me audit it by displaying a record of exactly what I said.

Comment by Dr. Alan T. Sherman, Associate Professor of Computer Science, UMBC (Academic)

I prefer to make all of my comments in one place here.

Positive Features:

1. It is positive that there is a process to create guidelines.

2. To the extent that there are testable functional performance standards, that is helpful.

3. The idea of an open-ended review, required for all systems, is desirable. Such review is necessary to catch issues not otherwise tested.

4. It is important that there is some attempt to encourage and accommodate innovations.

5. The concept of "software independence" is a useful notion. It is an example of how to state a performance requirement without micromanaging implementations in particular technologies.

Areas of Concern:

1. Parts of the guidelines mandate technology. The guidelines should mandate performance requirements, not technology. I fundamentally disagree with those who claim that technology mandates are necessary to achieve testability. The philosophy that underlies the current guidelines mistakenly places methods above substance. Much greater emphasis should be placed on performance requirements.

2. The way the guidelines treat innovations risks stifling innovation. First, the separate "innovations class" creates a high-risk, unclear path for new products. I would prefer innovations to have the option of following the same basic path as traditional products, as follows: each criterion should begin with a performance requirement; then there could be one or more standard ways of complying with the requirement, together with an alternative special innovations review for that requirement. That way, an innovative product could limit the number of special reviews. Second, the whole framework assumes a particular way of conducting elections, which stifles radically different approaches. The notion of a separate "innovations class" should be expanded to permit consideration of such fundamentally different approaches. Third, more flexible provisions could be made for an "experimental class" for special testing of new systems in certain real and mock elections.

3. The document is much too long. It needs to be shortened drastically (to at most about 100 pages). The excessive length impedes effectiveness and increases costs. I fundamentally disagree with those who claim such length is required to achieve testability. By focusing on performance requirements, the guidelines could easily be shortened.

4. The guidelines should include standards for common interfaces among election system components. For example, marking devices and scanners should have standard ports for attaching verifiers. Doing so will help encourage better modular engineering design and could permit independent testing of certain components.
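
As a purely hypothetical illustration of the common-interface idea in point 4 under Areas of Concern, a marking device or scanner might expose something like the following verifier port. Every name here is invented for the example; no such standard exists:

    /* Hypothetical interface a marking device or scanner might expose
       so that an independent verifier can be attached. All identifiers
       are invented for illustration only. */
    #ifndef BALLOT_VERIFIER_PORT_H
    #define BALLOT_VERIFIER_PORT_H

    #include <stddef.h>

    typedef struct {
        /* Deliver the voter-verifiable record for the current ballot
           into buf; returns the number of bytes written, or -1 on error. */
        int (*read_record)(void *device, unsigned char *buf, size_t buflen);

        /* Report whether the attached verifier accepted the record;
           returns 0 on success, nonzero on failure. */
        int (*confirm)(void *device, int accepted);
    } verifier_port;

    #endif /* BALLOT_VERIFIER_PORT_H */

A published interface of this kind would let a test lab, or any third party, attach its own verifier without depending on the vendor's internals, which is the modularity benefit the comment describes.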

Comment by Electronic Privacy Information Center (Advocacy Group)

The privacy protections provided in this version of the draft 2007 Voluntary Voting System Guidelines are a great improvement over the 1990, 2002 and 2005 editions of this document. Among the core values of democratic elections are the secret ballot and voter privacy. Federal and state courts and legislatures have historically taken measures to protect the right of voters to vote freely without fear of retaliation. This document's final version should reflect the importance of voter privacy and ballot secrecy in the public election process.

Comment by James Davidson (General Public)

ALL voting machines should leave some sort of software-independent paper record so that a random sampling of precincts can be counted by hand against the computer-tabulated results. This is the only way elections will ever be reliable and trusted by the public. Thank you.

Comment by Verified Voting Foundation (Advocacy Group)

Verified Voting Foundation strongly supports the emphasis on Software Independence using Independent Voter-Verified Records, as well as the recommendation that voting systems hardware and software must provide detailed data in standard formats to support inter-operability and post-election audits that compare hand-eye manual counts of voter-verified records with electronic tabulation results. Appendix A "Definitions of Words with Special Meanings" is also an important contribution in and of itself.

Comment by U.S. Public Policy Committee of the Association for Computing Machinery (USACM) (None)

On behalf of the U.S. Public Policy Committee of the Association for Computing Machinery (USACM), we are submitting the following comments on the Voluntary Voting System Guidelines (VVSG) released by the Election Assistance Commission (EAC). With over 88,000 members, ACM is the world's largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. USACM acts as the focal point for ACM's interaction with the U.S. Congress and government organizations. It seeks to educate and assist policy-makers on legislative and regulatory matters of concern to the computing community. USACM is a standing committee of the ACM. It tracks US public policy initiatives that may impact the membership of ACM and the public at large, and provides expert input to policy-makers. This input is in the form of non-partisan scientific data, educational materials, and technical analyses that enable policy-makers to reach better decisions. Members of USACM come from a wide variety of backgrounds including industry, academia, government, and end users.

Our goal in this review is to ensure technical feasibility and accuracy, best practices, and promotion of voter confidence in election results, while protecting potential vendors from unduly onerous or vague requirements and providing reasonable, actionable statements for local and Federal officials. We have submitted our comments on specific recommendations through the online submission website. Those comments are also included here, following our general comments about the VVSG. We also include a glossary, and a proposed test for determining whether voting systems produce truly independent voter-verifiable records. This test is meant as a hypothetical example, and is not an endorsement of a particular technology or technologies for use in voting systems. We also note that the technology in this field is quite dynamic, and the issues quite complex. Even if all of our suggestions were accepted, there will be issues yet to be addressed in the near future. We encourage the EAC to be proactive in anticipating changes that may present problems for accurate, safe voting, and to revisit these guidelines in a timely fashion.

Introduction: USACM strongly supports efforts to ensure that all voting systems — particularly computer-based electronic voting systems — embody careful engineering, strong safeguards, and rigorous testing in both their design and operation. The development and implementation of comprehensive voting system guidelines — including this effort by the Election Assistance Commission (EAC) and its Technical Guidelines Development Committee (TGDC) — are an important part of ensuring that elections are accurate, reliable, accessible, secure, and verifiable. We applaud the efforts of the EAC and TGDC to develop this edition of the VVSG. It represents a complete rewrite of previous editions, including some provisions that had not been reviewed since the 2002 Voting System Standards developed by the Federal Election Commission. The staffs of the EAC, the TGDC, and the National Institute of Standards and Technology are to be commended for their work. We urge the EAC to adopt the TGDC-recommended text with some modifications and clarifications, as described below and through the online comment system. With the passage of the Help America Vote Act (HAVA), systems were rushed to market without the benefit of a comprehensive set of Federal guidelines.
This new edition of the VVSG is a welcome step forward. Given the nature of current electronic voting systems, where security, accessibility, reliability, usability and privacy were not designed in from the beginning, some tensions in developing and finalizing this VVSG are unavoidable. A good guiding principle is to focus on desired election principles rather than desired election technologies. For instance, maximizing voter participation would be a good election principle, and strong usability and accessibility standards would be a means to support that principle. The goal is to respond to as many constituencies as possible; to show favoritism to one would preclude perfectly reasonable standards because a small percentage of voters are inconvenienced. To focus on specific technologies used in voting narrows the scope of this discussion — and this document — too much to be effective.

The concern over unverifiable voting machines is widespread throughout the computing community. In a 2004 poll that ACM conducted of its members, 95 percent of those responding indicated that they have serious concerns about electronic voting machines — concerns that should be addressed with specific safeguards. In an effort to bring the community's concerns to policymakers, ACM adopted a policy statement in support of physical audit trails, which a voter could inspect to verify their vote. The principle of software independence — as defined in the VVSG — is encouraging because it embraces the notion of being able to verify the results of an election independent of the machines used to cast the ballots. Another development in the VVSG — the innovation class — is also heartening. The VVSG should encourage technological innovation in voting systems, and we welcome the innovation class as a means for ensuring that new devices and new voting systems can be effectively tested to ensure that they can provide accurate, reliable, accessible, secure, usable, auditable, and verifiable elections.

While we appreciate the opportunity to provide comments on the VVSG, we are concerned about the timeliness of implementing this edition of the VVSG. As elections continue to be close in several jurisdictions, criticism — warranted and unwarranted — will be levied against voting systems. When problems or unusual results leave an election in doubt, conducting a transparent and credible recount becomes extremely difficult, leaving election officials with no choice but to conduct a revote or accept the existing results. Audit trails (if they exist) that are not physical may not accurately reflect the votes cast when undetected errors or tampering alter the outcomes of elections. The resulting lack of certainty in the results, especially in close races, not only undermines the accuracy of the vote, but also may serve to diminish citizen confidence in the fairness of the process. If this confidence is not strengthened by the time the next VVSG is implemented, we face the risk that the public will turn away from electronic voting systems — in whole or in part.

Conclusion: We thank the EAC and the TGDC for developing this version of the VVSG, as well as the dedicated NIST staff that helped develop the proposed requirements. In 2002 Congress gave NIST significant new responsibilities and created the TGDC with the specific intent of building much-needed technical expertise into voting guidelines.
At the same time, HAVA appropriated billions of Federal dollars for the purchase of new voting systems based on Federal guidelines that were woefully inadequate. NIST, the TGDC and the EAC were given few resources and little time to develop new standards for equipment that state and local jurisdictions were purchasing. The computing community found that the 2002 standards (which HAVA deemed to be adopted based on previous Federal Election Commission standards) and the 2005 revision were lacking in scope, depth and technical rigor. The evidence for the inadequacy of the standards is clear: numerous independent technical reviews of voting equipment currently in service have found major security, accessibility, and reliability flaws. This draft is a sweeping and fundamental change from the previous standards, and a welcome step toward making voting systems accurate, reliable, accessible, secure, usable, and auditable. These high-level objectives support the most critical factor in elections — that voters have confidence in the results. There is a growing sense that it will be many years before there are any major revisions of these standards, once they are adopted. Therefore, we urge the EAC to resist weakening the critical concepts in the draft that provide for the development of more robust voting systems. Such a weakening would repeat the pattern of inadequate Federal standards. USACM has outlined its support for many of the important principles — such as software independence, independent voter-verified records, the innovation class and vulnerability testing — in this document. Clearly these principles are only meaningful if the requirements behind them are detailed, clear and rigorous. We have recommended many specific improvements to the detailed requirements and urge the EAC to adopt the text approved by the TGDC, incorporating the comments we have submitted. Additionally, we would like to make ourselves available to NIST, the TGDC, and the EAC to provide technical advice and expertise. Our members have contributed to EAC and TGDC meetings in the past, and they look forward to the opportunity to continue to contribute as the EAC deliberates these standards. Please contact our Public Policy Office — 202-659-9711 — with any questions you may have.

Acknowledgments: Finally, ACM wishes to thank the members of USACM for their dedication in drafting and vetting these comments. In particular, ACM thanks the Chair of USACM, Eugene Spafford (Purdue University); the Chairs of USACM's Voting Subcommittee, Alec Yasinsac (Florida State University) and Barbara Simons (retired, formerly with IBM), for their leadership on this project; David Bruggeman (ACM's Public Policy Analyst), serving as editor; and all the members of USACM's voting subcommittee listed below:

Annie Anton (North Carolina State University)
Jean Camp (Indiana University)
Lillie Coney (Electronic Privacy Information Center)
David L. Dill (Stanford University)
Jeremy Epstein (Software AG)
Edward W. Felten (Princeton University)
Harry Hochheiser (Towson University)
Lance Hoffman (George Washington University)
Douglas W. Jones (University of Iowa)
Cem Kaner (Florida Institute of Technology)
Kim Lawson-Jenkins
Vince Lipsio
Peter Neumann (SRI)
Barbara Simons (retired, formerly with IBM)
Eugene Spafford (Purdue University)
David Wagner (University of California, Berkeley)
Alec Yasinsac (Florida State University)

Specific Section by Section Comments on Draft VVSG: Preface to Comments

In August 2007, the Technical Guidelines Development Committee (TGDC) submitted recommended guidelines to the Election Assistance Commission. This draft introduces several new concepts, including a requirement for Software Independence and Open Ended Vulnerability Testing, resulting in significant debate and controversy. To assist with further evaluation of the VVSG draft, the EAC established a public feedback forum to seek an analytical approach that can produce a threat model to assist in attributing risk, defining costs, and estimating return on investment for corresponding VVSG draft provisions. In response to this call for comments, members of the U.S. Public Policy Committee of the Association for Computing Machinery (USACM) reviewed the VVSG draft, discussed many of the pertinent issues, produced a set of comments, reviewed and revised those comments, and provide them here to the Election Assistance Commission for its use. Thus, the comments herein, in total, are those of the USACM. Each comment is tagged in brackets with one or more descriptive terms (e.g., imprecise, incomplete, inaccurate, vague) summarizing the reason for the comment.

Comment by E Smith - Sequoia Voting Systems (Manufacturer)

First, I admire the caliber and amount of effort that went into this VVSG. It takes several steps forward with voting technology and the attempt to gain or regain public trust in voting systems. That said, I fear that we have put the cart before the horse and developed this VVSG prior to the development of threat models and an FMEA for the election process, and before the NIST offices were cleared on Election Day (Early Voting and pre-election setup and LAT, too) so that the authors of the Guidelines could see for themselves what happens in an election cycle. Although it will add further long-term chaos to the market for voting systems by delaying the new VVSG, I recommend a complete re-start of the drafting process AFTER the three items above are completed, or are at least at a usable state. After really reading the VVSG in detail to assist in editing and submitting the Sequoia technical community's comments, I have concluded that in many portions of the Guidelines there exists a solution looking for a problem... and in some portions that problem does not exist or is so poorly understood that the logical extrapolation is the marketing of a voting system that no election jurisdiction wants.

Comment by Rebecca Mercuri PhD (Advocacy Group)

This comment is intended to augment the formal remarks I submitted pertinent to the 2007 Draft VVSG at the April 24, 2008 Voting Advocate Roundtable discussion. By way of introduction, I am a computer scientist/engineer who has researched, written, and testified on the subject of electronic voting since 1989. My testimony on this topic includes appearances before the U.S. House Science Committee, the U.S. Election Assistance Commission, the U.S. Commission on Civil Rights, the U.K. Cabinet, various State Legislative Committees (in CT, MD, PA, VA, NY and NC), and court proceedings (in NJ, FL, OH, CA and MI). I have directly influenced the wording of Federal, State and international election legislation, especially as it pertains to voter-verified ballots and independent auditing of election results; I have also provided comment to the EAC and FEC on the earlier 2002 and 2005 draft VVSGs, and participated in the IEEE voting standards work that was consulted during the construction of the 2005 and 2007 draft VVSG.

The 2007 draft Voluntary Voting System Guidelines (VVSG) represents a significant departure from earlier Federal voting system guidelines (2005 EAC, 2002 and 1990 FEC), while still retaining much of the certification framework that has been increasingly demonstrated to be problematic. Among other changes, it appears to recognize earlier shortcomings of the certification process (especially in the areas of voter verification, transparency, auditability and security) by introducing an innovation class that allows for the submission of novel voting system paradigms for certification, and by providing for the (somewhat related) adoption of a software independence requirement. Unfortunately, these concepts fall short of their intended purpose and instead provide a fast-track backdoor whereby a new generation of experimental, unproven electronic voting systems can be foisted on the voting public without thorough examination. In particular, the definition of software independence proposed by MIT's Ron Rivest and NIST's John Wack allows computational cryptographic systems that do not necessarily include voter-verified paper ballots to be certified for use in elections. This provision for the introduction of cryptographic solutions is also evident in the use of the incorrect phrase "voter verifiable" rather than the appropriate term "voter verified" throughout the draft. A "verifiable" ballot can never actually represent the true intention of a voter; only when a ballot has been "verified" via independent examination and a deliberate casting action can it contain a legitimate record of the voter's choices. Cryptographic ballots cannot satisfy these constraints. Nor can a voting system that includes software at any stage ever be considered "software independent", since it is always vulnerable to a whole host of unresolvable software-related issues, including malware and denial-of-service attacks, as well as unintentional misprogramming, all of which can alter the outcome of an election (although not necessarily within the Rivest/Wack constraints).

Following the revelation of serious equipment shortcomings via independent state-authorized testing of previously federally certified equipment, election integrity advocates and citizens have increasingly and adamantly insisted on transparency, independent auditability, and voter verification in the election process.
But the 2007 draft VVSG, through its perpetuation of the legacy COTS exemption from source code examination, continues to allow voting systems to be shrouded in secrecy, while also circumventing salient portions of the testing process via the innovation class. There is no need for the COTS exemption, since operating systems, language compilers and application software (such as databases and spreadsheets) have all existed in the open source libraries for over two decades. As well, vendors have always had the option of protecting their proprietary interests by copyrighting and patenting their intellectual property, rather than insisting on trade secrecy. One might think that, at least, if a voting system (or any of its components or modules) was found to be defective, or if the testing was discovered to have been improperly performed or deemed inadequate, there would be some process whereby the EAC would be required to withdraw certification. But the 2007 draft VVSG (like its predecessors) continues to leave to the EAC the methodology whereby certification can be rescinded because of later-discovered flaws. Safety is not assured via the open-ended testing, since the VVSG provides no method whereby later-detected flaws initiate reexamination. Perversely, there is even a disincentive for vendors to issue corrections to deployed systems, because any changes (even necessary ones) require costly recertification. Nor does the draft address the matter of subsequently identified vulnerabilities in the uninspected COTS components by requiring ongoing updates and integration testing.

The limitations and flaws of the 2007 draft VVSG (like its predecessors) are primarily due to the fact that it masquerades as a functional standard, while actually continuing to be predisposed to existing designs. But even as a design specification, the draft VVSG falls short of achieving its goals of specifying "how voting systems should perform or be used in certain types of elections and voting environments." This is because the guidelines repeatedly make the erroneous assumption that insiders (i.e., vendors, repair personnel, election officials, etc.) are trusted agents in the highly partisan process of US elections. In reality, insiders have both motive and opportunity to make changes and cover up the fact that they have done so. Nor are the VVSG's specified controls transparent enough to allow verification, by the voter or the election officials, that the election system has been configured properly. Production of a voter-verified paper ballot is utterly moot if vote totals are generated electronically and never checked against the original paper records. Recent literature has suggested random audits (or spot-checks), but since the audit percentages are based on the computer-generated results, they grossly underestimate the number of independent tallies that must be performed to sufficiently validate the election. As well, these checks are not prescriptive as to what to do when anomalies are revealed.

In sum, the 2007 draft VVSG is flawed end-to-end, and is even more dangerous than its inadequate predecessors. It should be scrapped and a complete rewrite performed.
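To make the audit concern above concrete, here is a minimal sketch (in Python, with hypothetical precinct names and vote counts) of the kind of independent tally check Mercuri describes: hand counts of the voter-verified paper ballots from a random sample of precincts are compared against the electronically reported totals, with an explicit rule for what happens when an anomaly is revealed. It is an illustration only, not a prescribed audit procedure.

    import random

    # Hypothetical data: electronically reported per-precinct totals.
    electronic_totals = {
        "P-01": {"A": 412, "B": 388},
        "P-02": {"A": 530, "B": 501},
        "P-03": {"A": 298, "B": 344},
        "P-04": {"A": 610, "B": 577},
    }

    def hand_count(precinct_id):
        # Stand-in for an independent hand tally of the voter-verified
        # paper ballots; in a real audit this number comes from people
        # counting paper, not from the machine being checked.
        return dict(electronic_totals[precinct_id])

    def audit(sample_fraction=0.5, seed=2008):
        rng = random.Random(seed)
        size = max(1, int(len(electronic_totals) * sample_fraction))
        sample = rng.sample(sorted(electronic_totals), size)
        discrepancies = [(p, electronic_totals[p], hand_count(p))
                         for p in sample
                         if hand_count(p) != electronic_totals[p]]
        if discrepancies:
            # The prescriptive step Mercuri finds missing: one defensible
            # rule is that any discrepancy escalates to a full hand count.
            return "FULL HAND COUNT REQUIRED", discrepancies
        return "SAMPLE CONSISTENT", []

    print(audit())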

Comment by Scott Shorter (Manufacturer)

The TGDC's proposed draft of the VVSG requirements represents a step in the right direction for the level of scrutiny and the specification of security requirements needed for voting equipment. I strongly encourage the EAC to resist calls to weaken the requirements because of engineering difficulties in accomplishing them - the requirements should encourage innovation rather than validate the existing state of affairs.

Comment by U.S. Public Policy Committee of the Association for Computing Machinery (USACM) (None)

We have included this appendix of definitions as explanations of terms that we use throughout our comments.

Appendix A. Definitions

  • Capability Maturity Model Integrated (CMMI) — CMMI is a scientifically developed set of process management practices that allows system developers to establish, demonstrate, and achieve certification for mature development processes. CMMI is a product of the Software Engineering Institute.
  • High Assurance — Assurance is a term of art that typically refers to the rightful trust or confidence that a system user can have in the performance of the described system. In systems development theory, system assurance costs increase linearly until they approach an asymptotic turn, where the cost to increase assurance becomes exponential. High assurance systems generally demand assurance levels beyond the asymptotic turn by requiring redundancy and independent mechanisms. See also: trustworthiness.
  • Independent — Two events or items are independent if they are not causally correlated. In its purest sense, independence is Boolean, and two events are either independent or they are not. Generally within the VVSG Draft context, events and processes may be evaluated on a continuum where the level of independence is determined either by the strength of the causal relationship or by the impact of the existing causation.
  • Independent Recording Mechanisms (IRM) — Two mechanisms are independent if they are not controlled by a common mechanism and if they do not operate on the same data. A system employs IRMs if and only if it registers, marks, copies, renders, or enters the voter's selections in two forms that each are: 1. caused by voter actions or reviewed and verified by the voter, and 2. independent in the sense that modifying one form cannot impact the other.
  • Independent Record — In the VVSG Draft context, two records are independent if manipulation of one cannot impact the other. A single record is independent if it cannot be manipulated.
  • Independent System — The term independent systems, in the VVSG Draft context, refers to systems that protect one another against concurrent failure. Thus, purely independent systems' failure modes are unrelated.
  • Mission Critical Systems (MCS) — MCS is a self-defining phrase that refers to systems that carry a particularly high failure cost and thus can endure only a very low failure rate. The connotation is that if an MCS fails, the overall mission it supports is likely to fail as well. MCS are often developed using high assurance processes.
  • Redundancy — Redundant systems provide identical or similar functionality from more than one source. In the VVSG Draft context, system redundancy protects against concurrent, and thus against overall, system failure.
  • Reliability — Reliability refers to a system's overall ability to complete its intended purpose even in the face of unexpected natural events. A de facto interpretation of reliability does include protection against malicious intruders, though in its purest sense susceptibility to malicious manipulation negatively impacts system reliability.
  • Static Analysis — Static Analysis refers to any approach to software review or assessment that does not require the software's execution. As opposed to testing techniques that observe the program's execution, Static Analysis considers the code's structure, logic, data, and semantics based exclusively on the code's syntactic representation.
  • Trustworthiness — Trustworthiness refers to the rightful trust or confidence that a person can have in a process or person. See also: High Assurance.
  • Verification — Verification refers to a process of checking a result for correctness. It naturally requires redundancy and is best accomplished via independent mechanisms.
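As an aside on the last definition: verification via redundancy can be illustrated with a minimal sketch (Python, hypothetical ballot data) in which the same records are tallied by two separately written implementations and the result is accepted only when both agree. Note that two functions in one program fall short of USACM's strict sense of independence, since they share a common runtime; the sketch shows only the cross-checking pattern.

    from collections import Counter

    # Hypothetical ballot data: one recorded selection per ballot.
    ballots = ["A", "B", "A", "A", "B", "A"]

    def tally_by_counter(records):
        return dict(Counter(records))

    def tally_by_loop(records):
        totals = {}
        for choice in records:
            totals[choice] = totals.get(choice, 0) + 1
        return totals

    def verified_tally(records):
        # Accept a result only when two separately written tallies agree.
        first = tally_by_counter(records)
        second = tally_by_loop(records)
        if first != second:
            raise RuntimeError("independent tallies disagree: %r vs %r"
                               % (first, second))
        return first

    print(verified_tally(ballots))  # {'A': 4, 'B': 2}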

1.1 Purpose

This document will be used primarily by voting system manufacturers and voting system test labs. Manufacturers will refer to the requirements in this document when they design and build new voting systems; the requirements will inform them as to how voting systems should perform or be used in certain types of elections and voting environments. Test labs will refer to this document when they develop test plans for verifying whether the voting systems have indeed satisfied the requirements. This document, therefore, serves as a very important, foundational tool for ensuring that the voting systems used in U.S. elections will be secure, reliable, and easier for all voters to use accurately.

8 Comments

Comment by paquette (Advocacy Group)

last line, recommend that "easier for all voters" be reworded to "easy for all voters"

Comment by Jane Engelsiepen (General Public)

Electronic voting machines have been systematically hacked in the past two presidential elections and the past two Congressional elections, resulting in a President who was not legally chosen by the American people, and more Republican Congressional representatives than were legally elected. Electronic voting machines have put our very Democracy at risk and must be completely removed from our nation's voting systems. The American people have no faith in our voting system when we know that our votes are not being counted, and those who we vote to represent us are fraudulently kept out of office.

Comment by Donna Mummery (Advocacy Group)

The public will have little confidence in this new document unless the contents are approved by independent computer software engineers prominent in the voting rights movement.

Comment by Rob Richie (Advocacy Group)

As executive director of the non-partisan, non-profit, pro-democracy organization FairVote (www.fairvote.org), I appreciate the opportunity to comment on these standards. I am writing today to make a general point -- one that I hope is appropriate on this section, as it points out that your key goal is to have the VVSG serve "as a very important, foundational tool for ensuring that the voting systems used in U.S. elections will be secure, reliable, and easier for all voters to use accurately." Unfortunately, there is a gaping hole in these guidelines. Across the United States, communities have been adopting ranked voting methods, such as instant runoff voting. Even the United States' two presidential frontrunners this year, John McCain and Barack Obama, have been active proponents of instant runoff voting. As it is, however, your standards do not meet your key goal of serving "as a very important, foundational tool for ensuring that the voting systems used in U.S. elections will be secure, reliable, and easier for all voters to use accurately." The problem is that, under your direction, equipment manufacturers will not need to prepare for the possibility of communities and states adopting ranked voting methods such as instant runoff voting. The result will be that many communities will delay or indefinitely postpone popularly or legislatively adopted election reforms, due to the inability of voting equipment vendors to implement the changes. They may make easily preventable mistakes in implementation, as indeed happened in San Francisco in 2007.

Growing Number of U.S. Jurisdictions Using Ranked Choice Voting Methods -- Moving to ranked voting is not a theoretical issue. In 2006, for example, four cities with a combined population of more than 1.5 million people voted collectively, by landslide, to adopt ranked voting methods. A number of major cities use or shortly will use ranked voting systems, including Minneapolis (MN), Oakland (CA), Pierce County (WA) and San Francisco (CA). States like Arkansas, South Carolina and Louisiana now have many long-distance absentee voters use ranked ballots in elections with runoffs, while Vermont and North Carolina are among states seriously debating statewide use of ranked systems in the near future. More nations now use ranked voting than ever – for example, every single voter in Ireland, New Zealand, Australia, Papua New Guinea, Malta, Scotland, Northern Ireland and London can now vote in a ranked voting governmental election.

Persistent Problems Caused by a Lack of Ranked Voting Equipment Standards -- Preparing for ranked voting methods' adoption in communities and states only makes sense. The general failure of our election administration leadership and voting equipment manufacturers to prepare has caused severe strains and costs in communities adopting these systems. San Francisco was forced to violate its charter by failing to use instant runoff voting in its 2003 mayoral elections, fending off a lawsuit simply because the exasperated judge admitted he couldn't force the city to run elections it hadn't taken the necessary steps to prepare for, in large part because its vendor was not ready for ranked voting elections. 69% of Oakland voters supported instant runoff voting in 2006, but will not use it this year because the city's voting equipment vendor does not yet have certified equipment ready to run ranked voting elections.
Minneapolis voted 65% for ranked voting in 2006, but may not have a system ready for its next elections in November 2009, fully three years later.

Follow the Minnesota Model and Secretary of State Mark Ritchie's Solution -- Minnesota Secretary of State Mark Ritchie has created an advisory committee to establish clear standards for how to run ranked voting elections. So should the EAC, rather than punting the issue to the next round of guidelines and throwing potentially millions of voters into the limbo of passing laws that, for lack of preparation, their governments are unable to execute. This process does not involve re-inventing the wheel. Good standards have been developed in Minnesota (ones I can make available if you contact us at 301-270-4616) and proposed in various other states and cities. It will simply take a focused effort on your part to get this done.

The Need for an EAC Advisory Committee on Ranked Ballot Standards & Ballot Image Capture Requirements -- I request that you form a special advisory committee to develop these standards for ranked ballots. I also urge you to establish very clearly that all optical scan equipment must capture an electronic ballot image of every ballot. This redundancy provides further security to elections and is also the basic requirement that machines must satisfy to be able to run ranked voting elections.
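For readers unfamiliar with the tally method Richie references, a minimal sketch of an instant runoff count follows (Python, hypothetical ballots; jurisdiction-specific tie-breaking rules are ignored). The per-ballot preference lists are exactly the kind of data the electronic ballot images he requests would need to capture.

    from collections import Counter

    def instant_runoff(ballots):
        # Repeatedly eliminate the candidate with the fewest first-choice
        # votes until someone holds a majority of the continuing ballots.
        # Each ballot is a list of candidates in preference order.
        eliminated = set()
        while True:
            counts = Counter()
            for ballot in ballots:
                for choice in ballot:
                    if choice not in eliminated:
                        counts[choice] += 1
                        break  # only the top continuing choice counts
            total = sum(counts.values())
            winner, top = counts.most_common(1)[0]
            if top * 2 > total or len(counts) == 1:
                return winner
            eliminated.add(min(counts, key=counts.get))

    # Hypothetical ballot images of the kind Richie asks scanners to capture.
    ballots = [["C", "A"], ["C", "A"], ["A", "B"], ["A"],
               ["B", "A"], ["B", "A"], ["B", "C"]]
    print(instant_runoff(ballots))  # "C" is eliminated; "A" wins 4-3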

Comment by Phyllis Huster (Advocacy Group)

Please change this statement from "This document, therefore, serves as a very important, foundational tool for ensuring that the voting systems used in US elections will be secure, reliable, and easier for all voters to use accurately" to "This document, therefore, serves as a very important, foundational tool for ensuring that the voting systems used in US elections will be secure, reliable, capable of independent audits, able to maintain the voter's chain of custody of the ballot through to accurate counting of the vote, and easier for elections officials to assure the accuracy of the systems." *** The problem with saying "for all voters to use accurately" is that it implies a voter could make a mistake and use the system inaccurately, putting the burden of accuracy on the voter, when what we need is a foolproof, guaranteed-accurate system that ensures the voter's intent is accurately captured by the system. The only way to do this is to make sure the technology and process allow for independent audit of the votes, and this is known as requiring a paper ballot at some point in the process to allow a clear, non-digital way to validate the actual intent of the voter. Electronic voting will never offer such a guarantee, as no system is 100% secure. I am a 20-year telecomms person who knows that an electronic vote may be manipulated, often without notice, while a paper ballot, once marked, may never be changed except with obvious stickers or other forms of ballot destruction. It bothers me that the EAC, in the first paragraph, is putting the burden of accuracy on the voter, who cannot even get a copy of the Diebold CD which holds the ballots. - Phyllis Huster / Executive Director, "Count Paper Ballots" phyllis@countpaperballots.com

Comment by Audrey N. Glickman (Advocacy Group)

Please consider, as a part of this recommendation, including direct encouragement that all equipment currently in use SHALL be subject to testing anew under any and all new standards, and that no equipment may be grandfathered in as acceptable either for sale or for use. The voting system of a country is only as strong as its weakest link, and if we employ higher standards on an unequal basis we serve an injustice in both directions.

Comment by E Smith/P Terwilliger (Manufacturer)

Overall commentary: These guidelines, as written, will require total replacement of all equipment and systems currently in use in any jurisdiction (state) that adopts them. The EAC should note two things. (1) This will be a vendor gold rush an order of magnitude larger than HAVA, especially given that systems will cost many times more than current ones to procure and to use. (2) Unless a "HAVA 2" is in the works at the federal level, it seems likely that states will balk at the cost, the scrap, and the difficulty for pollworkers due to the new security constraints, and as a result refuse to adopt the draft VVSG.

General notes: There is a general confusion between the terms "test lab" and "accredited test lab". Are they the same? And what happened to "VSTL", which is now used only in the OEVT section? We suggest that the EAC check carefully for usage of the terms "voting system", "voting device", "programmed device", "electronic device" and "voting station". There are numerous places where the incorrect term has been used. For example, Table 2-1 of the Introduction uses "voting system" in the description for VEBD instead of the correct "voting station" (or is it "voting device"?). There are also numerous instances of similar but undefined terms, such as "system" and "electronic voting system".

It is difficult to reconcile the requirements for using published/standard data interchange formats between the electronic devices that comprise a voting system, and the suggestion of a goal of interoperability between systems (i.e., a mix & match situation), with the very explicit statements in multiple places that only an entire voting system can be certified, not specific components. With no provision for a mixed-vendor voting system, there is no point in requiring published data interchange formats.

Numerous sections do not have "shall" in caps. Numerous requirements are shown as "SHALL be capable of", "SHALL provide the capability", "SHALL support a process", "SHALL provide for", etc. These are confusing constructs. Are they all equivalent? Do they mean that the function must be present but does not have to be used?

"Record" versus "report": the VVSG often appears to switch between the terms. Per the VVSG definitions, the two are not equivalent. This needs to be addressed throughout the document. In several places, an "audit device" is introduced, yet there is no overall definition of what such a device is, or of what class it is. A "programmed device"? A "voting device"?

To define "election official" as a conglomeration of three roles, including that of "central election official", is confusing and error-prone. We suggest removing this "virtual role" and instead spelling out which defined roles are meant. There is also general confusion in terms between a "provisional" and a "challenged" ballot.

Comment by Electronic Privacy Information Center (Advocacy Group)

In addition to the points outlined in the draft, this document also makes the agency's work to develop new voting system standards more transparent to the public.

1.2 Scope

The VVSG is described as "Voluntary" and a "Guideline" because individual states and U.S. territories purchase their own voting systems and use them according to state and territory-specific laws and procedures; the Federal Government cannot dictate how elections are to be run. The vast majority of states and territories, however, now require that their voting systems conform to the requirements in the VVSG. Therefore, the VVSG can be considered essentially as a mandatory standard.

This document is titled as "Recommendations to the EAC" because it is not yet the final version that voting systems manufacturers and test labs will follow. The Technical Guidelines Development Committee (TGDC), a committee authorized under the Help America Vote Act (HAVA) of 2002, and researchers at the National Institute of Standards and Technology (NIST) have written this document for the Election Assistance Commission (EAC). The EAC will make this document available to the public for a series of public reviews. After consideration of comments, the EAC will issue a final version and subsequently require its use in testing for Federal voting system certification. Until that occurs, voting system manufacturers and test labs will continue to use the VVSG 2005 and its requirements.

8 Comments

Comment by paquette (None)

recommend that we use the term "EAC certification" rather than "Federal certification". It is more precise and parallels previous terminology of "NASED qualification"

Comment by paquette (State Election Official)

1st paragraph needs to be rewritten to be more precise e.g., name change from "standards" to "guidelines" due to HAVA. The terms "guidelines" and "standards" are synonymous. The fact that most states now require conformance to national standards doesn't make the standards essentially mandatory. They are still only mandatory in those states that have accepted them and not mandatory for those states that haven't. Instead of saying federal govt can't dictate, would be preferable to say Constitution reserves the manner for how elections are to be conducted to the states.

Comment by Richard C. Johnson (Manufacturer)

It is not true that the Guidelines need to be voluntary in Federal Elections; rather, since state elections are under the purview of the state, states must volunteer to observe these guidelines for state elections.

Comment by Donna Mummery (Advocacy Group)

Even if this document is made available to the public for comments, the EAC is still being allowed to issue the final version. After the EAC's performance in allowing Ciber to continue "testing" voting machines even when it had been decertified by the EAC, I have little confidence in letting the EAC determine the final version. The New York State Board of Elections was shocked that the EAC had never told them Ciber was decertified, even though Ciber was under contract to test NYS voting machines. The National Institute of Standards and Technology is much more credible, as their report on voting machines said that no software should ever be allowed to count votes. We should be using paper ballots with optical scanners to count them, they advised.

Comment by Roger Ross (Academic)

I would hope that this section, given that it is voluntary and a guideline for individual states and US territories, would point out that, no matter the guidelines or standards discussed further in this document, Paper Ballots are cheaper, more secure, and more easily understood, used, and monitored by all parties. It should definitely point out the advantages of using the worldwide accepted standard of PAPER BALLOTS as an alternative to electronic voting machines. Thanks,

Comment by Michael Angstadt (General Public)

The following sentence should be removed altogether. I think it's redundant. "Therefore, the VVSG can be considered essentially as a mandatory standard."

Comment by Diane Gray (Voting System Test Laboratory)

"The VVSG is described as 'Voluntary' and a 'guideline'…Therefore, the VVSG can be considered essentially as a mandatory standard." This statement could be misleading because the VVSG is not a required standard to all purchasers.

Comment by Phyllis Huster (Advocacy Group)

1.2 Scope: I don't accept the premise of this section and therefore find the whole EAC VVSG to be a waste of time and taxpayer money. The premise is as follows: the Federal Government cannot dictate how elections are to be run. However, due to the equal protection clause of the Constitution and Federal and state requirements for a ballot, the federal government must protect citizens' rights of equal protection under the law. With states supporting a combination of systems, electronic voting for people who come to the polls and absentee paper-based ballots for people who don't vote at the polls, we create a situation of UNEQUAL PROTECTION for the citizens and violate the 14th Amendment of the Constitution. The Equal Protection Clause, part of the Fourteenth Amendment to the United States Constitution, provides that "no state shall… deny to any person within its jurisdiction the equal protection of the laws." The Equal Protection Clause can be seen as an attempt to secure the promise of the United States' professed commitment to the proposition that "all men are created equal" by empowering the judiciary to enforce that principle against the states. If voters (myself as a sample voter) vote on a Diebold electronic machine with no way to verify the accuracy of the vote, and as a citizen I can't get access to the CD that contains the ballots to validate the votes in my precinct, but folks who vote by absentee ballot get a paper ballot which, through open records, I can get copies of, this is unequal in that I have no way as a citizen to validate the accuracy of the vote counts. *** SO THE PREMISE IS INACCURATE. It is indeed a requirement of the Federal government to guarantee accurate voting. Regardless of HOW voting is implemented state by state, the Federal government must write a standards document for the standardized REQUIREMENTS of voting, to include but not be limited to:

1. Chain of custody suggestions: removing any barriers to the citizen's intent, placing intent on the voting apparatus, the voting apparatus being counted accurately the first time and capable of citizen audit on election night in full view of other citizens, and that vote being secured in a locked ballot box all day until the counting begins after polls close. That is a simple standard, and yet the EAC has not even done the minimum to discuss or outline the voter's chain of custody, which is the critical underpinning to any voting technology.

2. Voting verifiability and auditability: a basic premise that any voting system be verifiable for accuracy through an audit or randomized race count.

3. Citizen counting of ballots: regardless of technology, the supremacy of the ownership must be with the citizen. If we don't own the elections, we don't have a democracy. We don't have to pay taxes because we are not giving our consent of the governed because we do not trust the accuracy of the elections. This basic premise must be covered by the EAC or any voting guidelines are null and void and useless to me as a voter.

4. Mandatory recognition and standard understanding of a ballot: ballots cannot be a PROCESS, as was defined by Kennesaw State College in testimony by Brittain Williams during the deposition by plaintiffs for the VoterGA lawsuit. The ballot is a ballot, and a ballot is physical evidence of voter intent. A ballot cannot be the algorithm that converts your screen choice, or even your lever pull, to a physical or nonphysical representation of that vote. A ballot is the physical way a voter shows intent; without the physical component, you don't have a vote and you don't have a way to guarantee voter intent.

5. Finally, we cannot privatize the importance of elections to corporations who have a profit motive. You have conflicting goals: if elections are run by the government, they are supposedly protected by all arms of government; when you allow a company such as Diebold to dispatch technical folks who serve as pollworkers, and yet don't make them sign pollworker oaths, you have just handed over the security of your election, and the function government should perform, to a private corporation such as Diebold, which has partisan and nongovernmental aims with the outcome of that election. Privatization of the military in the form of Halliburton contracting has been a failure; privatization of our defense contracts, with the Boeing tanker contract awarded to the European contractor Airbus, is not only a violation of taxpayer demand for national security (because you have now outsourced your defense secrets to other countries) but also damages the overall GDP of your country. What happens when a foreign company buys Diebold's election division (or premiere global gets bought)? You now have a major security threat in that any foreign government can control the outcome of US elections. This became clear with Cathy Cox, the anti-democratic governor of Georgia who ushered Diebold into the state, reversing 132 years of Democratic voting patterns: in 4 years Georgia turned fully Republican in house, senate, governor and presidential races. Having started a book called "The Night Democracy Was Lost in Georgia," I'm passionate about this very serious blunder in history, and the EAC helped aid in the crime of Diebold taking over Georgia voting and removing it from the control of Georgia citizens. Privatization breaks down: when I asked 159 counties to give me copies of the ballots, Cathy Cox faxed out 159 letters saying counties could not release the Diebold CD that contained the ballots because it had proprietary software that she was contractually required by Diebold to keep secret, in essence claiming I was a terrorist for asking for a copy of the ballots. I realize none of this will make it into any formal comments, as the EAC is bought and paid for by the electronic voting lobby, but I will say one more thing: the premise that the Federal government does not have a role in mandating aspects of standards regarding elections is ludicrous. You may leave to the states the HOW of elections, but you must be airtight in the WHAT of elections, and in this regard the EAC has failed miserably. - phyllis@countpaperballots.com

1.3 Audience

The VVSG is intended primarily as a critical reference document for:
  • Designers and manufacturers of voting systems;
  • Test labs performing the analysis and testing of voting systems in support of the national certification process;
  • Software repositories designated by the national certification authority or by a state; and
  • Test labs and consultants performing the state certification of voting systems.

5 Comments

Comment by Jacob Johnson (General Public)

I'm a computer programmer in the Air Force, and with that mindset, I'd like to know what the testing agency is doing to make sure the source code is free of anything that can be used to rig elections.

Comment by Richard C. Johnson (Manufacturer)

The Audience also must include:
  • The American public, including voters, registered voters, taxpayers, and all other citizens; and
  • Those Americans concerned enough about voting issues to join associations and campaign for better voting systems.
Not everyone will understand or appreciate the fine points of the document, but it affects everyone, and everyone should be considered at least part of the audience, if not an actor.

Comment by Dennis Lynch (General Public)

The Public should also be considered as part of the audience, because obtaining and maintaining public confidence in our elections is critical to the future of our republic. Voting machine companies, voting machine test labs, and the EAC should produce reports which provide overview and detailed information about testing procedures and testing results that support high levels of confidence in our voting systems.

Comment by Michael Angstadt (General Public)

The following sentence should not be bulleted: "The VVSG is intended primarily as a critical reference document for:"

Comment by Electronic Privacy Information Center (Advocacy Group)

The 2007 VVSG will also serve a secondary audience, which may include:
  • Researchers of voting system technology;
  • Election Reform Advocacy Organizations;
  • Media;
  • Policy makers; and
  • Contestants.

1.4 Structure

The VVSG contains the following sections:

  • Part 1, Equipment Requirements: for requirements that pertain specifically to voting equipment.
  • Part 2, Documentation Requirements: for documentation requirements that must be satisfied by both manufacturers and test labs – the Technical Data Package, user documentation, test lab reports, etc.
  • Part 3, Testing Requirements: information and requirements about testing; the approaches to testing that will be used by test labs; the types of tests that will be used to test conformance to the requirements in Parts 1 and 2.
  • Appendix A, Definitions of Words with Special Meanings: covers terminology used in requirements and informative language.
  • Appendix B, References and End Notes: contains references to documents and on-line documents used in the writing of this standard.

A separate volume of tests will accompany the VVSG in the future. The VVSG contains descriptions for test methods and general protocols for how requirements are to be tested, but does not contain the actual tests themselves.

The following sections contain further introductory and background material, with an overview of the document structure, its high-level contents, the history of the voting system standards, and guidance on how to read the document.

5 Comments

Comment by paquette (None)

Part 3 last line - change "test" conformance to "verify" conformance - tests used to test is a tautology

Comment by paquette (None)

Appendix B - reword to "documents and on-line materials" or "documents and on-line references" or "hardcopy and on-line documents" to distinguish source format "documents and on-line documents" is fuzzy usage

Comment by Diane Gray (Voting System Test Laboratory)

General comments for this Standard:
  • So many cross-references make the manual cumbersome to use.
  • TDP requirements are not exclusive to Part 2, Documentation Requirements. In fact, many TDP requirements are found throughout the entire document.
  • So many acronyms have been used that the reader must frequently refer to Appendix A, Definitions.
  • Hyperlinks take the reader to the cited chapter, but not to the numbered paragraph in that chapter, so that the reader must scroll until the correct paragraph is found. Can the hyperlinks be revised to go to the specific paragraph?

Comment by E Smith/P Terwilliger (Manufacturer)

"voting equipment" is not defined. Is this different from "voting station"? "voting device"?

Comment by Cem Kaner (Academic)

REPLACE the following text: "A separate volume of tests will accompany the VVSG in the future. The VVSG contains descriptions for test methods and general protocols for how requirements are to be tested, but does not contain the actual tests themselves." WITH this text: "The VVSG contains descriptions of some test methods and general protocols for how requirements are to be tested, but does not contain the actual tests themselves. A separate volume of examples of recommended tests will accompany the VVSG in the future." Rationale: It might be useful to create a separate volume of examples of recommended tests, but VVSG should not encourage the notion that any particular finite list of tests of the software would be sufficient to test the voting system. Software defects are more like design defects than manufacturing defects—a software error shows up in every manufactured copy of the same product and discovery of new defects comes from testing the system in a new way, not from testing for the same flaw again in a new copy of the device. Sound software testing practice encourages diversity of testing, rather than focus on a pre-specified set of tests that the software can be optimized to pass. (Affiliation Note: IEEE representative to TGDC)
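Kaner's point about diversity of testing can be illustrated with a minimal sketch (Python; the tallying function is a hypothetical stand-in, not anything from the VVSG or a certified system): properties that must always hold are checked against freshly generated random elections on every run, rather than against a fixed list of tests the software could be tuned to pass.

    import random

    def tally_under_test(ballots):
        # Hypothetical stand-in for the vote-recording logic being tested.
        totals = {}
        for b in ballots:
            totals[b] = totals.get(b, 0) + 1
        return totals

    def check_properties(trials=1000):
        rng = random.Random()  # deliberately unseeded: new tests every run
        for _ in range(trials):
            candidates = ["A", "B", "C", "D"][: rng.randint(1, 4)]
            ballots = [rng.choice(candidates)
                       for _ in range(rng.randint(0, 500))]
            totals = tally_under_test(ballots)
            # Property 1: every ballot is counted exactly once.
            assert sum(totals.values()) == len(ballots)
            # Property 2: no candidate gains votes they did not receive.
            for candidate, count in totals.items():
                assert count == ballots.count(candidate)

    check_properties()
    print("all randomized trials passed")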