Chapter 3: Introduction to General Testing
3.1 Inspection
Inspection is the examination of a product design, product, process, or installation and the determination of its conformity with specific requirements or, on the basis of professional judgment, with general requirements. [ISO04a]
Inspection is indicated when there is no operational test for assessing conformity to a given requirement. Inspection can be as simple as a visual confirmation that a particular design element or function is present, or a review of documentation to ensure the inclusion of specific content, or it can be as complex as a formal evaluation by an accredited specialist.
Logic verification is an example of inspection. Although formal proofs can be checked automatically, the determination that the premises correctly describe the behavior of the system requires professional judgment.
Source code inspections and architecture reviews are also types of inspections.
Comment by Harrine Freeman, IEEE member (Voter)
This information is accurate. I discussed similar information in the IEEE article Software Testing in the Instrumentation & Measurement Magazine.
3.2 Functional Testing
Functional testing is the determination through operational testing of whether the behavior of a system or device in specific scenarios conforms to requirements. Functional tests are derived by analyzing the requirements and the behaviors that should result from implementing those requirements. For example, one could determine through functional testing that a tabulator reports the correct totals for a specific simulated election day scenario.
Functional testing is indicated when the requirements on the behavior of a system or device are sufficiently precise and constraining that conformity can be objectively demonstrated.
Strategies for conducting functional testing are broadly characterized as either "black-box" or "white-box." However, a given test is neither black-box nor white-box. That distinction pertains to the strategy by which applicable tests are developed and/or selected, not to the tests themselves. For example, if a given input is tested because it is a special case in the functional specification of the system, then it is black-box testing; but if that same input is tested because it exercises an otherwise unused block of code found during the review of source code, then it is white-box testing.
Functional testing can be performed using a test suite or it can be open-ended.
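The yes/no character of a functional test can be sketched in a few lines. This is an illustrative example only: the `tabulate()` function, the simulated scenario, and the expected totals are all hypothetical stand-ins, not part of any actual voting system.

```python
def tabulate(ballots):
    """Count votes per candidate (a stand-in for the device under test)."""
    totals = {}
    for choice in ballots:
        totals[choice] = totals.get(choice, 0) + 1
    return totals

def functional_test():
    # Simulated election-day scenario: a known set of cast ballots.
    ballots = ["Alice", "Bob", "Alice", "Alice", "Bob"]
    expected = {"Alice": 3, "Bob": 2}
    # A functional test yields a verdict: the behavior conforms or it does not.
    return tabulate(ballots) == expected

print("PASS" if functional_test() else "FAIL")
```

The test is derived from the requirement (report correct totals) rather than from the implementation, which is what makes it a black-box selection strategy in the sense described above.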
Comment by Andy Podgurski (Academic)
The definition of functional testing given is incorrect. Functional testing involves exercising functional requirements as identified in a requirements specification. It is synonymous with black-box testing and is also called specification-based testing. (See, for example, "Software Testing Techniques" by Boris Beizer, 2nd ed.) Functional testing is *always* indicated. If the requirements aren't precise enough, they should be revised. Each functional requirement should be testable. Functional testing is *not* synonymous with operational testing, which involves testing a product in the field with actual users. Structural testing involves selecting test cases based on the internal structure of the software, e.g., to cover each program statement. It is synonymous with white-box or glass-box testing and is also called coverage testing.
Andy Podgurski, Associate Professor, Associate Chair for Computer Science, Electrical Engineering & Computer Science Dept., Case Western Reserve University, Cleveland, OH 44106, andy at eecs dot case dot edu
Comment by Gail Audette (Voting System Test Laboratory)
Part 3, 3.2: "Functional testing can be performed using a test suite or it can be open-ended." Will the test plan allow for "ad-hoc" test methods, and how do these test methods get validated per the requirements of ISO 17025?
3.3 Performance Testing (Benchmarking)
Performance testing, a.k.a. benchmarking, is the measurement of a property of a system or device in specific scenarios. For example, one could determine through performance testing the amount of time that a tabulator takes to report its totals in a specific simulated election day scenario.
What distinguishes performance testing from functional testing is the form of the experimental result. A functional test yields a yes or no verdict, while a performance test yields a quantity. This quantity may subsequently be reduced to a yes or no verdict by comparison with a benchmark, but in the case of functional testing there is no such quantity to begin with (e.g., there is no concept of "x % conforming" for the requirement to support 1-of-M voting – either it is supported or it is not).
Performance testing is indicated when the requirements supply a benchmark for a measurable property.
Usability testing is an example of performance testing. The property being measured in usability testing involves the behavior of human test subjects.
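The distinction drawn above, a quantity that is only subsequently reduced to a verdict by comparison with a benchmark, can be illustrated with a minimal sketch. The `tabulate()` function, the scenario size, and the benchmark value are hypothetical, chosen only to show the form of the result.

```python
import time

def tabulate(ballots):
    """Stand-in for the tabulator under test (illustrative only)."""
    totals = {}
    for choice in ballots:
        totals[choice] = totals.get(choice, 0) + 1
    return totals

def performance_test(benchmark_seconds):
    ballots = ["Alice", "Bob"] * 50_000  # simulated election-day scenario
    start = time.perf_counter()
    tabulate(ballots)
    elapsed = time.perf_counter() - start
    # The raw experimental result is a quantity (elapsed seconds); only the
    # comparison against the benchmark reduces it to a yes/no verdict.
    return elapsed, elapsed <= benchmark_seconds

elapsed, ok = performance_test(benchmark_seconds=5.0)
print(f"{elapsed:.4f}s", "PASS" if ok else "FAIL")
```

Removing the final comparison leaves a pure measurement; adding it turns the performance test into a conformity check, exactly as the text describes.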
Comment by Harrine Freeman, IEEE member (Voter)
We must also consider usability from another view, as the degree to which a piece of software assists the voter who is trying to accomplish a specific task, e.g., voting for a candidate. Usability testing should also include measures of ease of use and user satisfaction. You can also refer to the HCI (Human-Computer Interaction) standards for additional information on usability testing. Tests should also be conducted for various groups of people based on the height of voters (height-challenged voters who may be 4 ft or under in stature), disabled voters, senior voters, and color-blind voters.
3.4 Vulnerability Testing
Vulnerability testing is an attempt to bypass or break the security of a system or a device. Like functional testing, vulnerability testing can falsify a general assertion (namely, that the system or device is secure), but it cannot verify security (show that the system or device is secure in all cases). Vulnerability testing is also referred to as penetration testing. Vulnerability testing can be performed using a test suite or it can be open-ended. Vulnerability testing involves testing a system or device using the experience and expertise of the tester; knowledge of the system or device design and implementation; the publicly available knowledge base of vulnerabilities in the system or device; the publicly available knowledge base of vulnerabilities in similar systems or devices; the publicly available knowledge base of vulnerabilities in similar and related technologies; and the publicly available knowledge base of vulnerabilities generally found in hardware and software (e.g., buffer overflows, memory leaks).
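A small sketch can show the open-ended, falsifying character of vulnerability testing. Everything here is hypothetical: `parse_ballot_field()` stands in for a component under test, and the probes are drawn from the general knowledge base of common flaws (overlong inputs, control characters, injection-style payloads). An unexpected failure falsifies the claim that the component handles all input safely; surviving every probe verifies nothing.

```python
def parse_ballot_field(raw: str) -> str:
    """Stand-in for a component under test (illustrative only)."""
    if len(raw) > 256:
        raise ValueError("field too long")  # a defensive input limit
    return raw.strip()

# Probes inspired by vulnerabilities generally found in software.
probes = [
    "A" * 10_000,               # overlong input (buffer-overflow style)
    "\x00\x01\x02",             # control characters
    "'; DROP TABLE votes;--",   # injection-style payload
]

findings = []
for probe in probes:
    try:
        parse_ballot_field(probe)
    except ValueError:
        pass  # rejected cleanly: not a finding
    except Exception as exc:
        # Any other failure is a potential vulnerability to investigate.
        findings.append((probe[:20], type(exc).__name__))

print(f"{len(findings)} potential vulnerabilities flagged")
```

In practice the probe list is open-ended and grows with the tester's expertise and the published vulnerability knowledge bases, which is why vulnerability testing cannot be exhausted by any fixed test suite.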
Comment by John Baker (Academic)
I agree with the basic concepts in all of the recommendations; however, from personal experience with the entire e-voting process, we have a thing called reality. That being: who would perform the testing? Most if not all municipalities have no one skilled enough to do OS testing. You would be hard pressed to find a handful of IT people in the whole country that have that experience. Recommendations are needed, but if they are not utilized, what is the point? Many localities and election offices have their own guidelines governed by each state and believe they are sufficient. On another note, you will not find a manufacturer that will give you enough access to their databases or OS to do the proper testing described (they are all proprietary). The recommendations were written by IT experts living in the "perfect world." The bottom line is that the localities using the systems can secure the physical equipment at most, but are at the mercy of the manufacturer for just about everything, down to password resets for applications (that is how they make money post-sales). The type of testing recommended is effectively impossible for the end users. Most of the machines deployed around 2004/2005 are using an MS OS and, since they have never been attached to a real connected network, have never been patched or upgraded. The manufacturers that support these systems are so swamped with new orders and/or deployments that a month or two before elections you would be lucky to get a tech to service a single malfunctioning machine. I am not a basher; I think the technology took a lot of human errors out, but added a plethora of hardware and software issues, among other things. I believe an additional recommendation is that each locality should be required to have an IT system, application, and security expert (with credentials recommended by the EAC) assigned to the elections staff to work specifically on these systems.
Most elections officials have very little knowledge of the systems they use beyond basic operation.
Comment by Brian V. Jarvis (Local Election Official)
Was the repeat of the phrase "using the publicly available knowledge base of vulnerabilities in similar system or device" intentional or an error? Couldn't tell for sure.
Comment by Mike Ahmadi (Voter)
Before performing a vulnerability test a suitable threat model formed via a collaborative method is necessary to determine what precisely is vulnerable, the scalability of the threats, and proposed countermeasures. I am not seeing this anywhere in this document.
3.5 Interoperability Testing
Interoperability testing is the determination through operational testing of whether existing products are able to cooperate meaningfully for some purpose. It consists of bringing together existing products, configuring them to work together, and performing a functional test to determine whether the operation succeeds.
Conformance testing and interoperability testing are fundamentally different. Conformance testing focuses on the relationship of a given product to the standard. As defined in Appendix A, this is what "testing" normally means throughout the VVSG. Interoperability testing, on the other hand, focuses on the practical cooperation of two or more products, irrespective of any standard. Conformance to a standard is neither necessary nor sufficient to achieve interoperability.
Because interoperability testing focuses on practical cooperation, the use of test scaffolding is to be avoided. All of the components should be actual products.
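The three steps named above (bring products together, configure them to cooperate, run a functional test on the combination) can be sketched as follows. Both "products" here are hypothetical stand-ins: a ballot marker that emits records in an assumed text format and a tabulator that consumes them; in a real interoperability test both would be actual products, not scaffolding like this.

```python
def ballot_marker(choice: str) -> str:
    """Product A: emits a cast-vote record as a simple text line."""
    return f"CVR|{choice}"

def tabulator(records):
    """Product B: parses records produced elsewhere and tallies them."""
    totals = {}
    for rec in records:
        tag, choice = rec.split("|", 1)
        if tag != "CVR":
            raise ValueError(f"unrecognized record: {rec!r}")
        totals[choice] = totals.get(choice, 0) + 1
    return totals

def interoperability_test():
    # Product A's output is fed directly to product B; the combined
    # operation either succeeds end-to-end or it does not.
    records = [ballot_marker(c) for c in ["Alice", "Bob", "Alice"]]
    return tabulator(records) == {"Alice": 2, "Bob": 1}

print("PASS" if interoperability_test() else "FAIL")
```

Note that the verdict says nothing about either product's conformance to any standard: two nonconforming products could pass this test, and two conforming ones could fail it, which is the distinction the text draws.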