Abstract: In this issue, we have twelve regular research papers. The first six of these are all connected by the common theme of testing, the next three are related to metrics and benchmarks, and the final three are concerned with process and projects. Testing is a crucially important part of the software life cycle, so it should come as no surprise that half of the papers in this issue are related to software testing. In "A Study Examining Relationships Between Micro Patterns and Security Vulnerabilities," Kazi Zakia Sultana, Byron J. Williams, and Tanmay Bhowmik investigate the correlation between vulnerabilities and code micro patterns. By analyzing Apache Tomcat and three Java web applications, the authors found that certain micro patterns are frequently present in vulnerable classes. This research will help developers and testers detect code vulnerabilities. The paper "A vector table model-based systematic analysis of spectral fault localization techniques" by Chunyan Ma, Chenyang Nie, Weicheng Chao, and Bowei Zhang presents a method to evaluate and compare the reliability and effectiveness of spectral fault localization techniques, i.e., techniques that work with data collected at run-time. As there are a large number of spectral fault localization techniques, this method will be of use to developers who need to choose the optimal method for their system testing. In "Code Coverage Differences of Java Bytecode and Source Code Instrumentation Tools," Ferenc Horváth, Tamás Gergely, Árpád Beszédes, Dávid Tengeri, Gergő Balogh, and Tibor Gyimóthy discuss an empirical study comparing code coverage results provided by a number of bytecode instrumentation tools for Java. The impacts on test prioritization and test suite reduction are also investigated. The results show that significant differences occur between measurements based on bytecode and on source code. The authors suggest that source code-based instrumentation is the correct approach to code coverage measurement. The understandability of documentation has a considerable impact on test development. The paper "Comprehensibility of System Models during Test Design: a Controlled Experiment Comparing UML Activity Diagrams and State Machines" by Michael Felderer and Andrea Herrmann compares the comprehensibility of UML activity diagrams and state machines during test case derivation. The authors performed experiments with 84 student participants