You can open the full PVS-Studio documentation as a single file. In addition, you can print it as a .pdf file using a virtual printer.
We have grouped the diagnostics so that you can get a general idea of what PVS-Studio is capable of.
Since strict grouping is difficult, some diagnostics belong to several groups. For example, the incorrect condition "if (abc == abc)" can be interpreted both as a simple typo and as a security issue, because it can make the program vulnerable when the input data is incorrect.
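To make the "if (abc == abc)" case concrete, here is a minimal, purely illustrative Java sketch (the class and method names are invented for this example and are not part of PVS-Studio): the self-comparison is always true, so the check accepts any input.

```java
public class AlwaysTrueCondition {
    // Intended check: the supplied token must match the expected one.
    // The typo compares 'abc' with itself, so the condition is always true
    // and the validation silently accepts any input.
    static boolean isAuthorized(String abc, String expected) {
        if (abc == abc) { // typo: should be abc.equals(expected)
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // A wrong token is accepted because of the typo.
        System.out.println(isAuthorized("wrong-token", "secret")); // prints "true"
    }
}
```

An analyzer flags such self-comparisons mechanically, which is why the same pattern can appear both in the misprint group and in the security group.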
Some errors, on the contrary, don't fit any of the groups because they are too specific. Nevertheless, this table gives an insight into the functionality of the static code analyzer.
You can find a permanent link to a machine-readable map of all the analyzer's rules in XML format here.
Main PVS-Studio diagnostic abilities

| Diagnostics | Rule codes |
|---|---|
| 64-bit issues | C, C++: V101-V128, V201-V207, V220, V221, V301-V303 |
| Check that addresses of stack memory do not leave the function | C, C++: V506, V507, V558, V758 |
| Arithmetic over/underflow | C, C++: V636, V658, V784, V786, V1012, V1028, V1029, V1033; C#: V3040, V3041; Java: V6011, V6088 |
| Array index out of bounds | C, C++: V557, V582, V643, V781, V1038; C#: V3106; Java: V6025, V6079 |
| Double-free | C, C++: V586, V749, V1002, V1006 |
| Dead code | C, C++: V606, V607 |
| Micro-optimization | C, C++: V801-V829 |
| Unreachable code | C, C++: V551, V695, V734, V776, V779, V785; C#: V3136, V3142; Java: V6018, V6019 |
| Uninitialized variables | C, C++: V573, V614, V679, V730, V737, V788, V1007, V1050; C#: V3070, V3128; Java: V6036, V6050, V6052, V6090 |
| Unused variables | C, C++: V603, V751, V763, V1001; C#: V3061, V3065, V3077, V3117, V3137, V3143; Java: V6021, V6022, V6023 |
| Illegal bitwise/shift operations | C, C++: V610, V629, V673, V684, V770; C#: V3134; Java: V6034, V6069 |
| Undefined/unspecified behavior | C, C++: V567, V610, V611, V681, V704, V708, V726, V736, V1016, V1026, V1032, V1061 |
| Incorrect handling of types (HRESULT, BSTR, BOOL, VARIANT_BOOL, float, double) | C, C++: V543, V544, V545, V716, V721, V724, V745, V750, V676, V767, V768, V772, V775, V1027, V1034, V1046, V1060; C#: V3111, V3121, V3148 |
| Improper understanding of function/class operation logic | C, C++: V518, V530, V540, V541, V554, V575, V597, V598, V618, V630, V632, V663, V668, V698, V701, V702, V717, V718, V720, V723, V725, V727, V738, V742, V743, V748, V762, V764, V780, V789, V797, V1014, V1024, V1031, V1035, V1045, V1052, V1053, V1054, V1057; C#: V3010, V3057, V3068, V3072, V3073, V3074, V3082, V3084, V3094, V3096, V3097, V3102, V3103, V3104, V3108, V3114, V3115, V3118, V3123, V3126, V3145; Java: V6009, V6010, V6016, V6026, V6029, V6049, V6055, V6058, V6064, V6068, V6081 |
| Misprints | C, C++: V501, V503, V504, V508, V511, V516, V519, V520, V521, V525, V527, V528, V529, V532, V533, V534, V535, V536, V537, V539, V546, V549, V552, V556, V559, V560, V561, V564, V568, V570, V571, V575, V577, V578, V584, V587, V588, V589, V590, V592, V600, V602, V604, V606, V607, V616, V617, V620, V621, V622, V625, V626, V627, V633, V637, V638, V639, V644, V646, V650, V651, V653, V654, V655, V660, V661, V662, V666, V669, V671, V672, V678, V682, V683, V693, V715, V722, V735, V747, V754, V756, V765, V767, V787, V791, V792, V796, V1013, V1015, V1021, V1040, V1051; C#: V3001, V3003, V3005, V3007, V3008, V3009, V3011, V3012, V3014, V3015, V3016, V3020, V3028, V3029, V3034, V3035, V3036, V3037, V3038, V3050, V3055, V3056, V3057, V3062, V3063, V3066, V3081, V3086, V3091, V3092, V3107, V3109, V3110, V3112, V3113, V3116, V3122, V3124, V3132, V3140; Java: V6001, V6005, V6009, V6012, V6014, V6015, V6017, V6021, V6026, V6028, V6029, V6030, V6031, V6037, V6041, V6042, V6043, V6045, V6057, V6059, V6061, V6062, V6063, V6077, V6080, V6085, V6091 |
| Missing virtual destructor | C, C++: V599, V689 |
| Coding style not matching the operation logic of the source code | C, C++: V563, V612, V628, V640, V646, V705, V1044; C#: V3018, V3033, V3043, V3067, V3069, V3138, V3150; Java: V6040, V6047, V6086, V6089 |
| Copy-Paste | C, C++: V501, V517, V519, V523, V524, V571, V581, V649, V656, V691, V760, V766, V778, V1037; C#: V3001, V3003, V3004, V3008, V3012, V3013, V3021, V3030, V3058, V3127, V3139, V3140; Java: V6003, V6004, V6012, V6021, V6027, V6032, V6033, V6039, V6067, V6072 |
| Incorrect usage of exceptions | C, C++: V509, V565, V596, V667, V740, V741, V746, V759, V1022; C#: V3006, V3052, V3100, V3141; Java: V6006, V6051 |
| Buffer overrun | C, C++: V512, V514, V594, V635, V641, V645, V752, V755 |
| Security issues | C, C++: V505, V510, V511, V512, V518, V531, V541, V547, V559, V560, V569, V570, V575, V576, V579, V583, V597, V598, V618, V623, V642, V645, V675, V676, V724, V727, V729, V733, V743, V745, V750, V771, V774, V782, V1003, V1005, V1010, V1017; C#: V3022, V3023, V3025, V3027, V3053, V3063; Java: V6007, V6046, V6054 |
| Operation priority | C, C++: V502, V562, V593, V634, V648; C#: V3130, V3133; Java: V6044 |
| Null pointer / null reference dereference | C, C++: V522, V595, V664, V757, V769; C#: V3019, V3042, V3080, V3095, V3105, V3125, V3141, V3145, V3146, V3148, V3149, V3153; Java: V6008, V6060 |
| Unchecked parameter dereference | C, C++: V595, V664, V783, V1004; C#: V3095; Java: V6060 |
| Synchronization errors | C, C++: V712, V1011, V1018, V1025, V1036; C#: V3032, V3054, V3079, V3083, V3089, V3090, V3147; Java: V6070, V6074, V6082 |
| WPF usage errors | C#: V3044-V3049 |
| Resource leaks | C, C++: V701, V773, V1020, V1023 |
| Check for integer division by zero | C, C++: V609; C#: V3064, V3151, V3152; Java: V6020 |
| Serialization / deserialization issues | C, C++: V739, V1024; C#: V3094, V3096, V3097, V3099, V3103, V3104; Java: V6065, V6075, V6076, V6083, V6087 |
| Customized user rules | C, C++: V2001-V2014 |

Table: PVS-Studio functionality.
As you can see, the analyzer is especially useful in such areas as finding bugs caused by Copy-Paste and detecting security flaws.
To see these diagnostics in action, have a look at the error base, where we collect all the errors that we have found while checking various open-source projects with PVS-Studio.
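A typical Copy-Paste defect of the kind collected in the error base looks like the following Java sketch (a purely illustrative example with invented names, not taken from a real project): a line is duplicated and one identifier is left unchanged.

```java
public class CopyPasteBug {
    static int width;
    static int height;

    // The second assignment was copy-pasted from the first one and the
    // right-hand side was not updated, so 'height' receives the width.
    static void setSize(int w, int h) {
        width = w;
        height = w; // copy-paste bug: should be 'height = h;'
    }

    public static void main(String[] args) {
        setSize(640, 480);
        System.out.println(width + "x" + height); // prints "640x640"
    }
}
```

Such bugs compile cleanly and often survive code review, which is exactly why mechanical detection of duplicated expressions pays off.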
Welcome to the PVS-Studio page listing all the ways to activate the license. Most likely, you have just got a license to try out the analyzer, and here you can find out how to use it. The analyzer supports the analysis of C, C++, C# and Java code, and you can run it on Windows, Linux and macOS. Because of this, the way to activate the analyzer may differ from project to project. Please go to the section that applies to your case and follow the instructions.
Note. All actions are performed after the analyzer is installed. You can download it on the "Download PVS-Studio" page.
In Visual Studio, go to the menu PVS-Studio > Options > PVS-Studio > Registration to enter the name and the license number:
Go to the menu of the utility Tools > Options > Registration to enter the name and the license number:
When using the MSBuild projects analyzer, if it is not possible to enter the license information via the GUI (the plugin for Visual Studio or the Compiler Monitoring UI), you can use the analyzer itself in a special mode.
The command line might look as follows (in one line):
PVS-Studio_Cmd.exe credentials
--username NAME --serialNumber XXXX-XXXX-XXXX-XXXX
In this case, the analyzer writes the license information to the settings file in its default location. If the settings file doesn't exist, it will be created. With the --settings flag, you can specify a path to a settings file in a non-standard location.
After the analyzer installation, you can activate it using the following command:
pvs-studio-analyzer credentials NAME XXXX-XXXX-XXXX-XXXX
In IDE, go to the menu File > Settings > PVS-Studio > Registration to enter the name and the license number:
After the analyzer installation, you can activate it using the following command:
mvn pvsstudio:pvsCredentials "-Dpvsstudio.username=USR" "-Dpvsstudio.serial=KEY"
After the analyzer installation, you can activate it using the following command:
./gradlew pvsCredentials "-Ppvsstudio.username=USR" "-Ppvsstudio.serial=KEY"
You can read more about running the analyzer on the following pages:
After downloading the PVS-Studio distribution and requesting a trial key, you'll get a fully functional version that works for one week. This version has no limitations whatsoever: it is a completely full license. When filling out the form, you can choose which type of license you would like to try: Team License or Enterprise License.
Differences between Enterprise and Team Licenses are given on this page.
If a week wasn't enough for you to get acquainted with the tool, just let us know in your reply - we'll send you another key.
PVS-Studio analyzer works under 64-bit systems in Windows, Linux and macOS environments, and can analyze source code intended for 32-bit, 64-bit and embedded ARM platforms.
PVS-Studio requires at least 1 GB of RAM (2 GB or more is recommended) per processor core when running analysis on a multi-core system; for example, an 8-core machine should have at least 8 GB of RAM. The more cores you have, the faster the analysis runs.
The list of programming languages and compilers supported by the analyzer is available here.
Supported versions are Windows Vista, Windows Server 2008, Windows 7, Windows 8, Windows Server 2012, Windows 10, Windows Server 2016 and Windows Server 2019. PVS-Studio works only under 64-bit versions of Windows.
PVS-Studio requires .NET Framework version 4.7.2 or above (it will be installed during PVS-Studio installation if it is not present).
The PVS-Studio plugin can be integrated with Microsoft Visual Studio 2019, 2017, 2015, 2013, 2012, 2010 development environments. For analysis of C and C++ code for embedded systems, the appropriate compiler toolchain should be present in the system.
PVS-Studio works under 64-bit Linux distributions with the Linux kernel versions 2.6.x and above. For analysis of C and C++ code for Linux, cross-platform applications and embedded systems, the appropriate compiler toolchains should be installed in the system.
PVS-Studio works under macOS 10.13.2 High Sierra and above. For analysis of C and C++ code, the appropriate compiler toolchains should be present in the system.
PVS-Studio for Java works on 64-bit Windows, Linux and macOS systems. The minimum Java version required to run the analyzer is Java 8 (64-bit). The project being analyzed can use any Java version.
PVS-Studio is an actively developing analyzer. For example, our team is constantly improving its integration with such systems as PlatformIO, Azure DevOps, Travis CI, CircleCI, GitLab CI/CD, Jenkins, SonarQube, etc. However, the best way to demonstrate the development of analyzer capabilities is to show the graph of the number of diagnostics.
Figure 1. Growth of the number of diagnostics in PVS-Studio
As you can see, we are actively improving the analyzer's ability to detect new error patterns and, at the same time, pushing the development of other tools :). More detailed information about what's new in each analyzer version is presented below.
The release history for older versions is available here.
The current release history is available here.
The PVS-Studio Java static code analyzer consists of two main parts: the analyzer core, which performs the analysis, and plugins for integration into build systems and IDEs.
The plugins extract the project structure (the collection of source files and the classpath) and pass this information to the analyzer core. The plugins are also responsible for deploying the core: it is installed automatically during the first analysis run.
The analyzer has several different ways to integrate into a project.
For projects with Maven build system, you can use the pvsstudio-maven-plugin. To do this, you need to add the following to the pom.xml file:
<pluginRepositories>
  <pluginRepository>
    <id>pvsstudio-maven-repo</id>
    <url>http://files.viva64.com/java/pvsstudio-maven-repository/</url>
  </pluginRepository>
</pluginRepositories>
<build>
  <plugins>
    <plugin>
      <groupId>com.pvsstudio</groupId>
      <artifactId>pvsstudio-maven-plugin</artifactId>
      <version>7.11.44138</version>
      <configuration>
        <analyzer>
          <outputType>text</outputType>
          <outputFile>path/to/output.txt</outputFile>
        </analyzer>
      </configuration>
    </plugin>
  </plugins>
</build>
After that, you can run the analysis:
$ mvn pvsstudio:pvsAnalyze
In addition, the analysis can be included in a project build cycle by adding the execution element:
<plugin>
  <groupId>com.pvsstudio</groupId>
  <artifactId>pvsstudio-maven-plugin</artifactId>
  <version>7.11.44138</version>
  <executions>
    <execution>
      <phase>compile</phase>
      <goals>
        <goal>pvsAnalyze</goal>
      </goals>
    </execution>
  </executions>
</plugin>
To enter the license information you can use the following command:
mvn pvsstudio:pvsCredentials "-Dpvsstudio.username=USR" "-Dpvsstudio.serial=KEY"
After that, the license information will be saved to %APPDATA%/PVS-Studio-Java/PVS-Studio.lic on Windows, or to ~/.config/PVS-Studio-Java/PVS-Studio.lic on macOS and Linux.
Analyzer configuration is performed in the <analyzer> section. A list of analyzer options is given below.
In addition to configuring the <analyzer> block in pom.xml, you can define the analyzer settings via the command line. Definition format:
-Dpvsstudio.<nameSingleParam>=value
-Dpvsstudio.<nameMultipleParam>=value1;value2;value3
Example:
mvn pvsstudio:pvsAnalyze -Dpvsstudio.outputType=text
  -Dpvsstudio.outputFile=path/to/output.txt
  -Dpvsstudio.disabledWarnings=V6001;V6002;V6003
Important! When defining parameters via the command line, keep in mind that parameters explicitly specified in the command line when running the analysis take precedence over those specified in the <analyzer> block in pom.xml.
For projects with the Gradle build system, you can use the pvsstudio-gradle-plugin plugin. To do this, you need to add the following to the build.gradle file:
buildscript {
    repositories {
        mavenCentral()
        maven {
            url uri('http://files.viva64.com/java/pvsstudio-maven-repository/')
        }
    }
    dependencies {
        classpath group: 'com.pvsstudio',
                  name: 'pvsstudio-gradle-plugin',
                  version: '7.11.44138'
    }
}
apply plugin: com.pvsstudio.PvsStudioGradlePlugin
pvsstudio {
    outputType = 'text'
    outputFile = 'path/to/output.txt'
}
After that, you can run the analysis:
$ ./gradlew pvsAnalyze
To enter the license information you can use the following command:
./gradlew pvsCredentials "-Ppvsstudio.username=USR" "-Ppvsstudio.serial=KEY"
After that, the license information will be saved to %APPDATA%/PVS-Studio-Java/PVS-Studio.lic on Windows, or to ~/.config/PVS-Studio-Java/PVS-Studio.lic on macOS and Linux.
The analyzer is configured in the 'pvsstudio' section. A list of analyzer options is given below.
In addition to configuring the 'pvsstudio' block in build.gradle, you can define the analyzer settings via the command line. Definition format:
-Ppvsstudio.<nameSingleParam>=value
-Ppvsstudio.<nameMultipleParam>=value1;value2;value3
Example:
./gradlew pvsAnalyze -Ppvsstudio.outputType=text
  -Ppvsstudio.outputFile=path/to/output.txt
  -Ppvsstudio.disabledWarnings=V6001;V6002;V6003
Important! When defining parameters via the command line, keep in mind that parameters explicitly specified in the command line when running the analysis take precedence over those specified in the 'pvsstudio' block in build.gradle.
The PVS-Studio Java analyzer can also be used as a plugin for IntelliJ IDEA. In this case, the project structure is parsed by the IDE itself, and the plugin provides a convenient graphical interface for working with the analyzer.
The PVS-Studio plugin for IDEA can be installed either from the official JetBrains plugin repository or from the repository on our site. Another way to install the plugin and the analyzer core is the PVS-Studio installer for Windows, which is available on the download page.
The following instructions describe how to install the plugin from our repository.
1) File -> Settings -> Plugins
2) Manage Plugin Repositories
3) Add repository (http://files.viva64.com/java/pvsstudio-idea-plugins/updatePlugins.xml)
4) Install
Then you should enter the license information:
1) Analyze -> PVS-Studio -> Settings
2) Registration tab
And finally, you can run the analysis of the current project.
If none of the above methods of integration into a project is suitable, you can use the analyzer core directly. You can download the analyzer core via the link (http://files.viva64.com/java/pvsstudio-cores/7.11.44138.zip) or with the PVS-Studio installer for Windows, which is available on the download page.
If you install the analyzer via the PVS-Studio installer for Windows, the core will be downloaded to %APPDATA%/PVS-Studio-Java/7.11.44138.
To get information about all available analyzer arguments, run it with the '--help' option:
java -jar pvs-studio.jar --help
Let's look at the main arguments of the analyzer:
The analyzer requires a collection of source files (or directories with source files) for analysis, and classpath information in order to build the program metamodel correctly.
Examples of quick launch:
java -jar pvs-studio.jar -s A.java B.java C.java -e Lib1.jar Lib2.jar -j4
-o report.txt -O text --user-name someName --serial-number someSerial
java -jar pvs-studio.jar -s src/main/java --ext-file classpath.txt -j4
-o report.txt -O text --license-path PVS-Studio.lic
To avoid writing all the necessary parameters in the command line every time, you can use the '--cfg' parameter. To do this, create a file with the following contents:
{
    "src": ["A.java", "B.java", "C.java"],
    "threads": 4,
    "output-file": "report.txt",
    "output-type": "text",
    "username": "someName",
    "serial-number": "someSerial"
    ....
}
Or
{
    "src": ["src/main/java"],
    "threads": 4,
    "ext-file": "classpath.txt",
    "output-file": "report.txt",
    "output-type": "text",
    "license-path": "PVS-Studio.lic"
    ....
}
In this case, the analyzer launch reduces to the following line:
java -jar pvs-studio.jar --cfg cfg.json
Important! When you use a configuration file, keep in mind that arguments explicitly written in the command line take precedence over those from the configuration file.
Any of the described methods of integrating the analysis into a build system can be used for automated analysis in Continuous Integration systems. You can set this up in Jenkins, TeamCity and other CI systems by configuring automatic analysis runs and notifications about detected errors.
It is also possible to integrate PVS-Studio analyzer with the SonarQube continuous quality inspection system using the corresponding PVS-Studio plug-in. Installation instructions are available on this page: "Integration of PVS-Studio analysis results into SonarQube".
There are several ways to suppress analyzer messages.
1. Using special comments:
void f() {
    int x = 01000; //-V6061
}
2. Using a special suppression file
The special 'suppress' file can be generated by the PVS-Studio IDE plug-in for IntelliJ IDEA. The path to the suppress file can be specified as a parameter of the Maven or Gradle analyzer plug-ins, or passed as a parameter to a direct call of the analyzer core.
When messages are suppressed through IDEA, the suppress file is generated in the '.PVS-Studio' directory, which is located in the directory of the project currently opened in the IDE. The name of the suppress file is suppress_base.json.
3. Using @SuppressWarnings(....) annotations
The analyzer recognizes several annotations and skips warnings for code that is already marked with them. For example:
@SuppressWarnings("OctalInteger")
void f() {
    int x = 01000;
}
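The octal literal used in the samples above shows why such code deserves a warning in the first place: a leading zero silently changes the value. A small illustrative sketch (the class and method names are invented for this example):

```java
public class OctalLiteral {
    // A leading zero makes the literal octal: 01000 is 8*8*8 = 512, not 1000.
    static int octalThousand() {
        return 01000;
    }

    public static void main(String[] args) {
        System.out.println(octalThousand()); // prints "512"
        System.out.println(1000);            // decimal 1000 for comparison
    }
}
```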
An insufficient-memory problem can be solved by increasing the amount of memory and stack available to the analyzer.
Plugin for Maven:
<jvmArguments>-Xmx4096m, -Xss256m</jvmArguments>
Plugin for Gradle:
jvmArguments = ["-Xmx4096m", "-Xss256m"]
Plugin for IntelliJ IDEA:
1) Analyze -> PVS-Studio -> Settings
2) Environment tab -> JVM arguments
Typically, the default amount of memory may be insufficient when analyzing generated code with a large number of nested constructs.
It is usually better to exclude such code from the analysis (using the exclude setting), which also speeds the analysis up.
By default, the analyzer runs the core with the java executable found in the PATH environment variable. If you need to run the analysis with a different java, you can specify it manually.
Plugin for Maven:
<javaPath>C:/Program Files/Java/jdk1.8.0_162/bin/java.exe</javaPath>
Plugin for Gradle:
javaPath = "C:/Program Files/Java/jdk1.8.0_162/bin/java.exe"
Plugin for IntelliJ IDEA:
1) Analyze -> PVS-Studio -> Settings
2) Environment tab -> Java executable
If you are unable to run the analysis, please email us (support@viva64.com) and attach text files from the .PVS-Studio directory (located in the project directory).
Development for embedded systems has its own specific characteristics and approaches, but controlling code quality in this sphere is no less important than in other areas. PVS-Studio supports the analysis of projects that use the following compilers:
Supported platforms for development are Windows, Linux and macOS.
After installing the analyzer on Linux or macOS, the pvs-studio-analyzer utility for project analysis becomes available.
The utility detects supported compilers automatically. If a modified or extended development package is used, you can list the names of the embedded compilers in use with the --compiler parameter:
-C [COMPILER_NAME...], --compiler [COMPILER_NAME...]
Filter compiler commands by compiler name
After installing the analyzer, a large set of utilities for the various analyzer working modes becomes available.
Console mode
The project analysis can be automated with successive runs of the following CLMonitor utility commands:
"C:\Program Files (x86)\PVS-Studio\CLMonitor.exe" monitor
<build command for your project>
"C:\Program Files (x86)\PVS-Studio\CLMonitor.exe" analyze ... -l report.plog ...
Note. The 'monitor' command runs the monitoring process in non-blocking mode.
Graphic mode
In the Compiler Monitoring UI utility, switch to build monitoring mode via the menu Tools > Analyze Your Files (C/C++) or by clicking the "eye" icon on the toolbar:
Before you start build monitoring, the following menu for additional analysis configuration is available:
After starting the monitoring, build the project in an IDE or with build scripts. Once the build is complete, click Stop Monitoring in the following window:
The analysis results will be available in the Compiler Monitoring UI utility once the files captured during compilation have been analyzed.
The analyzer report may contain warnings like this one:
V001: A code fragment from 'source.cpp' cannot be analyzed.
Developers of compilers for embedded systems often diverge from the standards and add non-standard extensions to their compilers. In the microcontroller world this is particularly prevalent and is nothing unusual for developers.
However, for a code analyzer such code is non-standard C or C++, which requires additional support. If such warnings come up for your code, please send us an archive with the preprocessed *.i files produced from the problematic source files, and we will add support for the new compiler extensions.
You can enable the mode of saving such files while analyzing in the following way:
The market of development packages for embedded systems is very broad, so if you haven't found your compiler in the list of supported ones, please let us know via the feedback form that you would like to try PVS-Studio, and describe the development tools you use in detail.
To improve code quality and device security in embedded development, developers often follow coding standards such as the SEI CERT Coding Standard and MISRA, and also try to avoid potential vulnerabilities by consulting the Common Weakness Enumeration (CWE) list. PVS-Studio checks code for compliance with such criteria.
To analyze a project for embedded system with PVS-Studio, you can also use PlatformIO cross-platform IDE. It can manage build toolchains, debuggers and library dependencies, and is available under many mainstream operating systems, such as Windows, macOS and Linux.
To enable PVS-Studio analysis, add the following to the configuration file (platformio.ini):
check_tool = pvs-studio
check_flags = pvs-studio: --analysis-mode=4
Then use this command in the terminal:
pio check
More details about PlatformIO static analysis support are available on its project page, as well as on PVS-Studio analyzer configuration page.
This document covers the specifics of running the analyzer and checking projects for embedded systems. In all other respects, the analyzer is run and configured the same way as for other types of projects. Before using the analyzer, we recommend reading the following documentation pages:
PVS-Studio is a static analyzer for C, C++, C# and Java code designed to assist programmers in searching for and fixing a number of software errors of different patterns. The analyzer can be used in Windows, Linux and macOS.
Working under Windows, the analyzer integrates into Visual Studio as a plugin, providing a convenient user interface for easy code navigation and error search. There is also a C and C++ Compiler Monitoring UI (Standalone.exe), which is used independently of Visual Studio and allows analyzing files compiled not only with Visual C++ but also with compilers such as GCC (MinGW) and Clang. The command-line utility PVS-Studio_Cmd.exe performs analysis of MSBuild / Visual Studio projects without running the IDE or the Compiler Monitoring UI, which lets you, for instance, use the analyzer as part of a CI process.
PVS-Studio for Linux is a console application.
This document describes the basics of using PVS-Studio on Windows. To get information about working in Linux environment refer to articles "Installing and updating PVS-Studio on Linux" and "How to run PVS-Studio on Linux and macOS".
A static analyzer does not replace other bug-searching tools; it complements them. Integrating a static analysis tool into the development process helps to eliminate plenty of errors at the moment they are "born", saving the time and resources that their later elimination would cost. As everyone knows, the earlier a bug is found, the easier it is to fix. It follows that a static analyzer should be used regularly, as this is the best way to get the most out of it.
PVS-Studio divides all warnings into three levels of certainty: High, Medium and Low. Some warnings belong to a special Fails category. Let's consider these levels in more detail:
Keep in mind that an error code does not necessarily bind a warning to a particular certainty level; the distribution across levels strongly depends on the context in which the warnings were generated. The diagnostic message output window in the plugin for Microsoft Visual Studio and in the Compiler Monitoring UI has level buttons that allow sorting the warnings as needed.
The analyzer has five groups of diagnostic rules:
Short names of the diagnostic groups (GA, OP, 64, CS, MISRA) together with the certainty level numbers (1, 2, 3) are used as shorthand notation, for example in command-line parameters. Example: GA:1,2.
Switching a certain group of diagnostic rules on or off shows or hides the corresponding messages.
You may find the detailed list of diagnostic rules in the corresponding section of the documentation.
Analyzer messages can be grouped and filtered by various criteria. For more detailed information about working with the list of analyzer warnings, please refer to the article "Handling the diagnostic messages list".
When installing PVS-Studio, you can choose which versions of the Microsoft Visual Studio IDE the analyzer should integrate with.
After deciding on all the necessary options and completing the setup, PVS-Studio will integrate into the IDE's menu. In the figure, you can see that the corresponding command has appeared in Visual Studio's menu, as well as the message output window.
In the settings menu, you can customize PVS-Studio as you need to make it most convenient to work with. For example, it provides the following options:
Most likely, you won't need any of those at your first encounter with PVS-Studio, but later, they will help you optimize your work with the tool.
When installing the analyzer, it is possible to integrate the PVS-Studio plugin into the IntelliJ IDEA, which allows performing the analysis and handling analyzer reports right from the IDE.
After the installation, the plugin will be available in the menu 'Analyze' ('Analyze' > 'PVS-Studio'). The screenshot of IntelliJ IDEA with integrated PVS-Studio plugin is given below.
In the settings menu, you can disable diagnostic rules, exclude files/directories from the analysis, and so on.
The documentation section "How to Run PVS-Studio Java" describes the operating features of the Java analyzer. It also provides alternative installation options, including plugins for Maven and Gradle.
When installing the analyzer, it is possible to integrate the PVS-Studio plugin into the JetBrains Rider, which allows performing the analysis and handling analyzer reports right from the IDE.
The plugin is available in the 'Tools' menu after installation. The current solution / project can be analyzed the following way: 'Tools' > 'PVS-Studio' > 'Check Current Solution/Project'.
The screenshot of JetBrains Rider with integrated PVS-Studio plugin is given below.
You can learn more about the PVS-Studio plugin for the JetBrains Rider IDE in the following documentation section: "Using PVS-Studio with JetBrains Rider".
PVS-Studio can be used independently of the Microsoft Visual Studio IDE. The Compiler Monitoring UI allows analyzing projects while building them. It also supports code navigation through clicking on the diagnostic messages, and search for code fragments and definitions of macros and data types. To learn more about how to work with the Compiler Monitoring UI, see the article "Viewing Analysis Results with C and C++ Compiler Monitoring UI".
PVS-Studio_Cmd.exe is a tool that enables the analysis of Visual Studio solutions (.sln), as well as Visual C++ and Visual C# projects (.vcxproj, .csproj), from the command line. This can be useful, for example, when you need to integrate static analysis on a build server. PVS-Studio_Cmd.exe can perform both a full analysis of the target project and an incremental one (analyzing only the files that have changed since the last build). The utility's return code is a bitmask, which gives you detailed information about the analysis results and helps identify problems if they occur. Thus, using PVS-Studio_Cmd.exe you can configure a static code analysis scenario quite precisely and embed it into a CI process. Using PVS-Studio_Cmd.exe is described in more detail in the section "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line".
PVS-Studio provides an extensive help system for its diagnostic messages. The message database is accessible both from PVS-Studio's interface and on the official site. Each message description is accompanied by code samples illustrating the error, an explanation, and possible fixes.
To open a diagnostic description, just click with the left mouse button on the diagnostic number in the message output window. These numbers are implemented as hyperlinks.
Technical support for PVS-Studio is carried out via e-mail. Since our technical support is delivered by the tool developers themselves, our users can promptly get responses to a wide variety of questions.
PVS-Studio integrates into Microsoft Visual Studio 2019, 2017, 2015, 2013, 2012, 2010 development environments. You may learn about the system requirements for the analyzer in the corresponding section of the documentation.
After you obtain the PVS-Studio installation package, you may start installing the program.
After you approve the license agreement, integration options will be presented for the various supported versions of Microsoft Visual Studio. Integration options that are unavailable on a particular system will be greyed out. If several versions of the IDE, or several different IDEs, are present on the system, the analyzer can be integrated into every available version.
To make sure that the PVS-Studio tool was correctly installed, you may open the About window (Help/About menu item). The PVS-Studio analyzer must be present in the list of installed components.
When working in the Visual Studio IDE, you can run the analysis at different scopes: the whole solution, a project file, selected items, etc. For example, checking the whole solution is run as follows: "PVS-Studio -> Check -> Solution".
After the check is launched, a progress bar will appear with the Pause button (to pause the analysis) and the Stop button (to terminate it). Potentially dangerous constructs are added to the list of detected issues as the analysis proceeds.
The term "potentially dangerous construct" means that the analyzer considers a particular line of code suspicious. Whether this line is a real defect in the application can be determined only by the programmer who knows the application. You must correctly understand this principle of working with code analyzers: no tool can completely replace a programmer in the task of fixing errors in programs. Only the programmer, relying on his knowledge of the application, can do this; but the tool can and must help him with it. That is why the main task of the code analyzer is to reduce the number of code fragments the programmer must look through and decide what to do with.
In real, large projects there will be not dozens but hundreds or even thousands of diagnostic messages, and reviewing them all is a hard task. To make it easier, the PVS-Studio analyzer provides several mechanisms. The first is filtering by error code. The second is filtering by the text of the diagnostic messages. The third is filtering by file paths. Let's examine examples of using these filtering systems.
Suppose you are sure that the diagnostic messages with the code V112 (using magic numbers) are never real errors in your application. In this case you may turn off the display of these diagnostic warnings in the analyzer's settings:
After that, all the diagnostic warnings with the code V112 will disappear from the error list. Note that you do not need to restart the analyzer. If you turn on these messages again, they will appear in the list without relaunching the analysis as well.
Now let's look at another option: filtering diagnostic messages by their text. Consider an example of an analyzer warning and the code it was issued for:
obj.specialFunc(obj);
Analyzer warning: V678 An object is used as an argument to its own method. Consider checking the first actual argument of the 'specialFunc' function.
The analyzer found it suspicious that an object is passed as an argument to its own method. A programmer, unlike the analyzer, may know that such usage of this method is acceptable. Therefore, you might want to filter out all such warnings. You can do this by adding the corresponding filter in the "Keyword Message Filtering" settings.
After that, all the diagnostic messages whose text contains that expression will disappear from the error list, without the need to restart the code analyzer. You can turn them back on by simply deleting the expression from the filter.
The last mechanism of reducing the number of diagnostic messages is filtering by masks of project files' names and file paths.
Suppose your project employs the Boost library. The analyzer will certainly inform you about potential issues in this library. But if you are sure that these messages are not relevant for your project, you may simply add the path to the folder with Boost on the page "Don't check files":
After that, diagnostic messages related to files in this folder will no longer be displayed.
Also, PVS-Studio has the "Mark as False Alarm" function. It enables you to mark those lines in your source code which cause the analyzer to generate false alarms. After marking the code, the analyzer will not produce diagnostic warnings on this code. This function makes it more convenient to use the analyzer permanently during the software development process when verifying newly written code.
Thus, in the following example, we turned off the diagnostic messages with the code V640:
for (int i = 0; i < m; ++i)
  for (int j = 0; j < n; ++j)
    matrix[i][j] = Square(i) + 2*Square(j);
    cout << "Matrix initialization." << endl; //-V640
....
This function is described in more detail in the section "Suppression of false alarms".
There are also other ways to influence the display of diagnostic messages by changing the code analyzer's settings, but they are beyond the scope of this article. We recommend referring to the documentation on the code analyzer's settings.
When you have reviewed all the messages generated by the code analyzer, you will find both real errors and constructs that are not errors. The point is that the analyzer cannot detect all errors with 100% accuracy without producing so-called "false alarms". Only the programmer who knows and understands the program can determine whether there is an error in each particular case. The code analyzer just significantly reduces the number of code fragments the developer needs to review.
So, there is certainly no need to fix every potential issue the code analyzer points out.
Suppression mechanisms of individual warnings and mass analyzer messages suppression are described in the articles "Suppression of false alarms" and "Mass Suppression of Analyzer Messages".
This document covers the usage of command-line utilities for the analysis of MSBuild projects (.vcxproj / .csproj) and Visual Studio solutions.
This document covers the usage of command line utilities. Usage of plugins for Visual Studio and JetBrains Rider is described in the following documentation sections: "Getting acquainted with the PVS-Studio static code analyzer on Windows", "Using PVS-Studio with JetBrains Rider".
The command-line analyzer of MSBuild projects has various names on different platforms supported by the analyzer:
The features described below are relevant for both utilities. Examples with PVS-Studio_Cmd / pvs-studio-dotnet are interchangeable unless explicitly stated otherwise.
Note. To analyze C++ projects that don't use the MSBuild build system, on Windows use the compilation monitoring system or direct integration of the analyzer into the build system. Analysis of C++ projects on Linux / macOS is described in detail in this section of the documentation.
Command line utilities are unpacked to the following directories by default:
The '--help' argument displays all available arguments of the analyzer:
PVS-Studio_Cmd.exe --help
Let's look at the main arguments of the analyzer:
Here is an example of running a check of the files listed in "pvs.txt" from the "mysolution.sln" solution:
PVS-Studio_Cmd.exe --target "mysolution.sln" --platform "Any CPU"
--configuration "Release" --output "mylog.plog"
--sourceFiles "pvs.txt" --progress
The PVS-Studio command-line version supports all the message filtering/disabling settings available in the IDE plugin for Visual Studio. You can either set them manually in the XML file passed through the '--settings' argument, or omit this argument to use the settings specified through the UI plugin. Note that the PVS-Studio IDE plugin uses an individual set of settings for each user in the system.
This is only relevant for PVS-Studio_Cmd. If multiple instances of PVS-Studio of different versions are installed for the current system user, all of them will use the installation directory specified during the last installation. To avoid conflicts in the analyzer's operation, the path to the installation directory (the value of the <InstallDir> element) must be specified in the settings passed with the --settings (-s) argument.
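A minimal sketch of such a settings file is shown below. The <InstallDir> element is the one the note above requires; the root element name shown here is an assumption, so the safest approach is to copy the settings file generated by the IDE plugin and edit it:

```
<!-- Sketch only: the root element name is an assumption -->
<ApplicationSettings>
  <InstallDir>C:\Program Files (x86)\PVS-Studio</InstallDir>
</ApplicationSettings>
```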
PVS-Studio_Cmd allows you to selectively check individual files specified in the list passed with the '--sourceFiles' (-f) flag. The file list is a simple text file containing the paths to the files to be checked, one per line. Relative file paths are expanded relative to the current working directory. You can specify both compiled source files (c/cpp for C++ and cs for C#) and header files (h/hpp for C++).
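For example, a file list passed via '--sourceFiles' might look like this (the paths are illustrative):

```
C:\Projects\Project1\source1.cpp
C:\Projects\Project1\source2.cpp
Project1\include\helpers.h
```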
In this mode, when analyzing C and C++ files, a compilation dependency cache is generated, which will be used for subsequent analysis runs. By default, dependency caches are saved in a special '.PVS-Studio' subdirectory where project files (.vcxproj) are located. If necessary, you can change their storage location using the '--dependencyRoot' (-D) flag. By default, dependency caches keep full paths to source files, and relocation of project files will cause caches' regeneration. You can generate portable caches by specifying the '--dependencyCacheSourcesRoot' (-R) flag, which will cause the source file paths inside caches to be generated relative to it.
To specify the list of analyzed files with path patterns, you need to pass a specially formatted XML file to the '--sourceFiles' (-f) flag. It accepts the list of absolute and relative paths and/or wildcards to analyzed files.
<SourceFilesFilters>
<SourceFiles>
<Path>C:\Projects\Project1\source1.cpp</Path>
<Path>\Project2\*</Path>
<Path>source_*.cpp</Path>
</SourceFiles>
<SourcesRoot>C:\Projects\</SourcesRoot>
</SourceFilesFilters>
The PVS-Studio_Cmd / pvs-studio-dotnet utilities have several non-zero exit codes that don't indicate a problem with the utility itself; i.e. a non-zero exit code does not necessarily mean that the utility crashed. The exit code is a bit mask combining all the states that occurred during the utility's operation. For example, the utility returns a non-zero code if the analyzer finds potential errors in the checked code. This lets you handle that situation separately, for example on a build server whose analyzer usage policy doesn't allow warnings in code committed to the version control system.
Let's look at all possible utility state codes that form the bit mask of the return code.
Here is an example of a Windows batch script for decoding the return code of the PVS-Studio_Cmd utility:
@echo off
"C:\Program Files (x86)\PVS-Studio\PVS-Studio_Cmd.exe" ^
-t "YourSolution.sln" -o "YourSolution.plog"
set /A FilesFail = "(%errorlevel% & 1) / 1"
set /A GeneralException = "(%errorlevel% & 2) / 2"
set /A IncorrectArguments = "(%errorlevel% & 4) / 4"
set /A FileNotFound = "(%errorlevel% & 8) / 8"
set /A IncorrectCfg = "(%errorlevel% & 16) / 16"
set /A InvalidSolution = "(%errorlevel% & 32) / 32"
set /A IncorrectExtension = "(%errorlevel% & 64) / 64"
set /A IncorrectLicense = "(%errorlevel% & 128) / 128"
set /A AnalysisDiff = "(%errorlevel% & 256) / 256"
set /A SuppressFail = "(%errorlevel% & 512) / 512"
set /A LicenseRenewal = "(%errorlevel% & 1024) / 1024"
if %FilesFail% == 1 echo FilesFail
if %GeneralException% == 1 echo GeneralException
if %IncorrectArguments% == 1 echo IncorrectArguments
if %FileNotFound% == 1 echo FileNotFound
if %IncorrectCfg% == 1 echo IncorrectCfg
if %InvalidSolution% == 1 echo InvalidSolution
if %IncorrectExtension% == 1 echo IncorrectExtension
if %IncorrectLicense% == 1 echo IncorrectLicense
if %AnalysisDiff% == 1 echo AnalysisDiff
if %SuppressFail% == 1 echo SuppressFail
if %LicenseRenewal% == 1 echo LicenseRenewal
Note. Since the maximum value of an exit code under Unix is limited to 255, the exit codes of PVS-Studio_Cmd (where the exit code may exceed 255) and pvs-studio-dotnet differ.
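The same bit-by-bit decoding works in a POSIX shell on Linux/macOS. Here is a minimal sketch; the bit values shown follow the PVS-Studio_Cmd convention above, so substitute the pvs-studio-dotnet values for real use, and take the code from $? after the analyzer run:

```shell
# Decode individual status bits from a sample analyzer exit code.
# 9 = 1 (FilesFail) + 8 (FileNotFound) under the PVS-Studio_Cmd convention;
# in a real script, use code=$? right after the pvs-studio-dotnet call.
code=9
FilesFail=$(( code & 1 ))
FileNotFound=$(( (code & 8) / 8 ))
echo "FilesFail=$FilesFail FileNotFound=$FileNotFound"
```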
Let's look at all possible pvs-studio-dotnet state codes that form the bit mask of the return code.
Note. This section is relevant for Windows. Analysis of C++ projects on Linux / macOS is described in the corresponding section of the documentation.
If your C/C++ project doesn't use the standard Visual Studio build systems (VCBuild/MSBuild), or uses its own build system / makefiles via NMAKE Visual Studio projects, you will not be able to check it using PVS-Studio_Cmd.
In this case, you can use the compiler monitoring system, which allows you to analyze projects regardless of their build system, "intercepting" the start of compilation processes. The compilation monitoring system can be used either from the command line or through the user interface of the C and C++ Compiler Monitoring UI application.
You can also embed the command-line launch of the analyzer core directly into your build system. Mind you, this will require writing a call to the PVS-Studio.exe analyzer core for each compiled file, similar to the way the C++ compiler is called.
When you run code analysis from the command line, the default settings are the same as when you run analysis from the IDE (Visual Studio / Rider). You can also specify which settings file to use via the --settings argument, as described above.
For example, the filter system (Keyword Message Filtering and Detectable Errors) is NOT applied during analysis from the command line. This means that the report file will contain all error messages regardless of the parameters you set. However, when you load the results file into the IDE, the filters will be applied, because filters are applied to results dynamically. The same occurs when running from the IDE as well. This is very convenient: when you get a list of messages, you may want to disable some of them (for example, V201). Just disable them in the settings, and the corresponding messages will disappear from the list WITHOUT restarting the analysis.
The analyzer report format isn't intended for direct display or human reading. However, if you need to filter the analysis results in some way and convert them to a "readable" view, you can use the PlogConverter utility distributed with PVS-Studio.
To work with reports in different formats, you need to use different utilities:
The source code of both utilities is open and available for download (PlogConverter, plog-converter), which makes it easy to add support for new formats based on the existing algorithms.
These utilities are described in more detail in the corresponding sections of the documentation:
The PVS-Studio Compiler Monitoring system (CLMonitoring) was designed for "seamless" integration of the PVS-Studio static analyzer into any build system under Windows that employs one of the preprocessors supported by the PVS-Studio.exe command-line analyzer (Visual C++, GCC, Clang, Keil MDK ARM Compiler 5/6, IAR C/C++ Compiler for ARM) for compilation.
To perform correct analysis of C/C++ source files, the PVS-Studio.exe analyzer needs intermediate .i files, which are the preprocessor output containing all the headers included into the source files and all macros expanded. This requirement explains why one can't "just take and check" the source files on the disk: besides the files themselves, the analyzer needs the information necessary for generating those .i files. Note that PVS-Studio doesn't include a preprocessor of its own, so it has to rely on an external preprocessor in its work.
As the name suggests, the Compiler Monitoring system is based on "monitoring" compiler launches during the project build, which allows the analyzer to gather all the information essential for analysis (that is, for generating the preprocessed .i files) of the source files being built. This, in turn, allows the user to check the project by simply rebuilding it, without modifying the build scripts in any way.
This monitoring system consists of a compiler monitoring server (the command-line utility CLMonitor.exe) and UI client (Standalone.exe), and it is responsible for launching the analysis (CLMonitor.exe can be also used as a client when launched from the command line).
In the default mode, the system doesn't analyze the hierarchy of the running processes; instead, it simply monitors all running processes in the system. This means it will also notice when several projects are being built in parallel and monitor all of them.
CLMonitor.exe can also monitor only those compiler runs spawned by a parent process specified by PID. This operational mode is provided for the case when several projects are built simultaneously, but you need to monitor compiler runs only for a specific project or solution. The child process monitoring mode is described below.
The CLMonitor.exe server monitors launches of processes corresponding to the target compiler (for example, cl.exe for Visual C++ and g++.exe for GCC) and collects information about the environment of these processes. The monitoring server intercepts compiler invocations only for the same user it was launched under. The collected information is essential for the subsequent correct launch of static analysis and includes the following data:
Once the project is built, the CLMonitor.exe server must be sent a signal to stop monitoring. This can be done either from CLMonitor.exe itself (if it was launched as a client) or from the Standalone interface.
When the server stops monitoring, it uses the collected information about the processes to generate the corresponding intermediate files for the compiled sources. Only then is the PVS-Studio.exe analyzer itself launched to analyze those intermediate files and output a standard PVS-Studio report, which you can work with both from the Standalone version and from any of the PVS-Studio IDE plugins.
Note: in this section, we discuss how to use CLMonitor.exe to integrate the analysis into an automated build system. If you only need to check some of your projects manually, consider using the UI version of C and C++ Compiler Monitoring (Standalone.exe) as described below.
CLMonitor.exe is a monitoring server directly responsible for monitoring compiler launches. It must be launched prior to the project build process. After launching the server in monitoring mode, it will trace the invocations of supported compilers.
The supported compilers are:
But if you want the analysis to be integrated directly into your build system (or a continuous integration system and the like), you can't "just" launch the monitoring server because its process blocks the flow of the build process while active. That's why you need to launch CLMonitor.exe with the monitor argument in this case:
CLMonitor.exe monitor
In this mode, CLMonitor will launch itself in the monitoring mode and then terminate, while the build system will be able to continue its work. At the same time, the second CLMonitor process (launched from the first one) will stay running and monitoring the build process.
Since no console is attached to the CLMonitor process in this mode, the monitoring server will, in addition to the standard stdin/stdout streams, output its messages into the Windows event log (Event Logs -> Windows Logs -> Application).
You can also monitor only the compiler runs spawned by a specific process identified by PID. To do this, run CLMonitor.exe in monitoring mode with the trace argument and --parentProcessID ('-p' for short). The --parentProcessID argument takes the PID of the process that is expected to be the parent of the compiler processes. The CLMonitor.exe command line might look as follows in this case:
CLMonitor.exe trace --parentProcessID 10256
If you perform the build from the console and you want CLMonitor.exe to monitor only the build, launched from that very console, you can run CLMonitor.exe with the argument --attach (-a):
CLMonitor.exe monitor --attach
In this operational mode, the program will monitor only those compiler instances which are child processes of the console process, from which the build was run.
Keep in mind that the MSBuild build system leaves some MSBuild.exe processes from previous builds running. In this case, CLMonitor.exe, while monitoring child processes, won't be able to track compiler runs spawned by those remaining MSBuild.exe processes, because they most likely aren't included in the hierarchy of the process specified by the --parentProcessID argument. Thus, before running CLMonitor.exe in child process monitoring mode, we recommend terminating the MSBuild.exe processes remaining in the system from the previous build.
Note: for the monitoring server to run correctly, it must be launched with the same privileges as the compiler processes themselves.
To ensure correct logging of messages in the system event logs, you need to launch the CLMonitor.exe process with elevated (administrative) privileges at least once. If it has never been launched with such privileges, it will not be allowed to write the error messages into the system log.
Notice that the server only records messages about its own runtime errors (handled exceptions) into the system logs, not the analyzer-generated diagnostic messages!
Once the build is finished, run CLMonitor.exe in the client mode so that it can generate the preprocessed files and call the static analyzer itself:
CLMonitor.exe analyze -l "c:\ptest.plog"
The '-l' argument takes the full path to the analyzer's log file.
When running as a client, CLMonitor.exe will connect to the already running server and start generating the preprocessed files. The client will receive the information on all of the compiler invocations that were detected and then the server will terminate. The client, in its turn, will launch preprocessing and PVS-Studio.exe analyzer for all the source files which have been monitored.
When finished, CLMonitor.exe will save a log file (c:\ptest.plog) which can be viewed in the PVS-Studio IDE plugin for Visual Studio or in the Compiler Monitoring UI client (Standalone.exe, PVS-Studio|Open/Save|Open Analysis Report).
You can also use the analyzer message suppression mechanism with CLMonitor through the '-u' argument:
CLMonitor.exe analyze -l "c:\ptest.plog" -u "c:\ptest.suppress" -s
The '-u' argument specifies a full path to the suppress file, generated through the 'Message Suppression' dialog in Compiler Monitoring UI client (Standalone.exe, Tools|Message Suppression...). The optional '-s' argument allows you to append the suppress file specified through the -u with newly generated messages from the current analysis run.
For setting additional display parameters and messages filtration you can pass the path to the file of diagnostics configuration (.pvsconfig) using the argument '-c':
CLMonitor.exe analyze -l "c:\ptest.plog" -c "c:\filter.pvsconfig"
CLMonitor.exe allows you to save the information gathered from monitoring a compilation process into a dump file. This makes it possible to re-run the analysis without rebuilding the project and monitoring the build again. To save a dump, first run monitoring in the regular way with either the trace or monitor command, as described above. After the build is finished, stop monitoring and save the dump file by running CLMonitor.exe with the saveDump command:
CLMonitor.exe saveDump -d c:\monitoring.zip
You can also finish monitoring, save the dump file and run the analysis on the files that the monitoring has caught, all in one step. To do this, pass the path to the dump file to the CLMonitor.exe analyze command:
CLMonitor.exe analyze -l "c:\ptest.plog" -d c:\monitoring.zip
Running the analysis from the pre-generated dump file is possible with the following command:
CLMonitor.exe analyzeFromDump -l "c:\ptest.plog"
-d c:\monitoring.zip
The compilation monitoring dump file is a simple zip archive containing, in XML format, the list of parameters of the compiler processes that CLMonitor caught (such as process command-line arguments, environment variables, current working directory, and so on). The analyzeFromDump command supports running the analysis from both the zipped dump file and an unzipped XML.
For the "manual" check of individual projects with CLMonitor, you can use the interface of the Compiler Monitoring UI client (Standalone.exe) which can be launched from the Start menu.
To start monitoring, open the dialog box: Tools -> Analyze Your Files... (Figure 1):
Figure 1 - The compiler monitoring start dialog box
Click the "Start Monitoring" button. The CLMonitor.exe process will be launched, and the main window of the environment will be minimized.
Start building your project, and when it's done, click the "Stop Monitoring" button in the bottom right-hand corner of the window (Figure 2):
Figure 2 - The monitoring management dialog box
If the monitoring server has successfully tracked all the compiler launches, the preprocessed files will be generated first and then they will be analyzed. When the analysis is finished, you will see a standard PVS-Studio's report (Figure 3):
Figure 3 - The resulting output of the monitoring server and the analyzer
The report can be saved as an XML file (a .plog file): File -> Save PVS-Studio Log As...
Convenient navigation through analyzer messages and source code is available in the Visual Studio IDE through the PVS-Studio extension. If the project to be analyzed can be opened in this IDE, but the 'regular' PVS-Studio analysis (i.e. PVS-Studio|Check|Solution) is unavailable (for example, for makefile Visual Studio projects), you can still get all the benefits of Visual Studio by loading the analysis results (plog file) into PVS-Studio with the 'PVS-Studio|Open/Save|Open Analysis Report...' command. This action can also be automated through the Visual Studio automation mechanism by tying it, together with the analysis itself, to the project build event. As an example, let's review the integration of PVS-Studio analysis through compiler monitoring into a makefile project. This type of project is used, for instance, by the build system of Unreal Engine projects under Windows.
As a command to run the build of our makefile project, let's specify the run.bat file:
Figure 4 – configuring makefile project
The contents of the run.bat file are the following:
set slnPath=%1
set plogPath="%~2test.plog"
"%ProgramFiles(X86)%\PVS-Studio\CLMonitor.exe" monitor
waitfor aaa /t 10 2> NUL
nmake
"%ProgramFiles(X86)%\PVS-Studio\CLMonitor.exe" analyze -l %plogPath%
cscript LoadPlog.vbs %slnPath% %plogPath%
As arguments to run.bat, we pass the paths to the solution and the project. Compiler monitoring is launched first with CLMonitor.exe. The 'waitfor' command is used as a delay between launching the monitoring and building the project; without it, monitoring might miss the first compiler invocations. The next step is the build command itself, nmake. After the build is finished, we run the analysis, and once it completes (the analysis results are saved next to the project file), we load the results into Visual Studio with the 'LoadPlog.vbs' script. Here is this script:
Set objArgs = Wscript.Arguments
Dim objSln
Set objSln = GetObject(objArgs(0))
Call objSln.DTE.ExecuteCommand("PVSStudio.OpenAnalysisReport", _
objArgs(1))
Here we use the DTE.ExecuteCommand function from the Visual Studio automation model to access the running Visual Studio instance (in which our solution is currently open) directly from the command line. Running this command is virtually identical to clicking the 'PVS-Studio|Open/Save|Open Analysis Report...' menu item in the UI.
To find a running Visual Studio instance, we use the GetObject method. Please note that this method uses the solution path to identify the running Visual Studio instance. Therefore, when using it, opening the same solution in several instances of Visual Studio is inadvisable: the method could potentially "miss", and the analysis results would be opened in the wrong IDE instance, not the one used to run the build/analysis.
Sometimes, the IAR Embedded Workbench IDE can set the current working directory of the compiler process (iccarm.exe) to 'C:\Windows\System32' during the build. Such behavior can cause issues with the analysis, since the current working directory of the compiler process is where the compiler monitoring system stores its intermediate files.
To avoid writing intermediate files to 'C:\Windows\System32', which in turn can cause insufficient access rights errors, a workspace should be opened by double clicking the workspace file ('eww' extension) in Windows explorer. In this case, intermediate files will be stored in the workspace file's directory.
If you need to perform incremental analysis when using the Compiler Monitoring system, it is enough to "monitor" the incremental build, i.e. the compilation of only the files that have been modified since the last build. This way, only the modified/newly written code will be analyzed.
Such a scenario is natural for the Compiler Monitoring system. Accordingly, the analysis mode (full or analysis of only modified files) depends only on what build is monitored: full or incremental.
Despite the convenience of the "seamless" integration of analysis into the automated build process (through CLMonitor.exe) employed in this mode, one should keep in mind its natural restrictions: in particular, a 100% capture of all compiler launches during the build is not guaranteed. A capture failure may be caused both by the external environment (for example, antivirus software) and by hardware/software specifics (for example, the compiler may terminate too quickly on an SSD while the CPU's performance is too low to "catch" this launch).
That's why, whenever possible, we recommend fully integrating the PVS-Studio static analyzer with your build system (if you use a build system other than MSBuild), or using the corresponding PVS-Studio IDE plugin.
We recommend using the PVS-Studio analyzer through the Microsoft Visual Studio development environments, into which the tool is tightly integrated. But sometimes you face situations when a command-line launch is required, for instance, with a cross-platform build system based on makefiles.
If you have project (.vcproj/.vcxproj) and solution (.sln) files, and command-line execution is required, for instance, for daily code checks, we advise you to examine the article "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line".
In addition, regardless of the build system being utilized, you can use PVS-Studio compiler monitoring system.
So, how does a code analyzer work (be it PVS-Studio or any other tool)?
When the user tells the analyzer to check some file (for example, file.cpp), the analyzer first preprocesses it. As a result, all macros are expanded and all #include files are substituted in.
The preprocessed .i file can then be parsed by the code analyzer. Note that the analyzer cannot parse a file that has not been preprocessed, as it would lack information about the types, functions and classes being used. Thus, the operation of any code analyzer includes at least two steps: preprocessing and the analysis itself.
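For illustration, this is roughly what the preprocessing step looks like with the Visual C++ compiler: the /P switch writes the preprocessor output to a .i file instead of compiling, and /Fi names that file explicitly (the paths and defines here are placeholders):

```
:: Sketch: produce file.i, the preprocessed translation unit the analyzer parses
cl.exe /P /Fi"file.i" /D"WIN32" /I"C:\MyIncludes" file.cpp
```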
C++ sources may have no project files associated with them; this happens, for example, with multiplatform software or old projects built using command-line batch utilities. Various make systems, such as Microsoft NMake or GNU Make, are often employed to control the build process in such cases.
To analyze such projects, it is necessary to embed a direct call to the analyzer into the build process (by default, the executable is located at '%programfiles%\PVS-Studio\x64\PVS-Studio.exe') and to pass it all the arguments required for preprocessing. In fact, the analyzer should be called for the same files for which the compiler (cl.exe in the case of Visual C++) is called.
The PVS-Studio analyzer should be called in batch mode for each C/C++ file or for a whole group of files (files with c/cpp/cxx etc. extensions; the analyzer should not be called for header (h) files) with the following arguments:
PVS-Studio.exe --cl-params %ClArgs%
--source-file %cppFile% --cfg %cfgPath% --output-file %ExtFilePath%
%ClArgs% — arguments passed to the cl.exe compiler during regular compilation, including the path to the source file (or files).
%cppFile% — path to the analyzed C/C++ file or paths to a collection of C/C++ files (the filenames should be separated by spaces).
The %ClArgs% and %cppFile% parameters should be passed to the PVS-Studio analyzer in the same way they are passed to the compiler, i.e. the full path to the source file should be passed twice, once in each parameter.
%cfgPath% — path to the PVS-Studio.cfg configuration file. This file is shared between all C/C++ files and can be created manually (an example is presented below).
%ExtFilePath% — optional argument, a path to the external file in which the analysis results will be stored. If this argument is missing, the analyzer will output its messages to stdout. The results can be viewed in Visual Studio's 'PVS-Studio' toolwindow using the 'PVS-Studio/Open Analysis Report' menu command (selecting 'Unparsed output' as the file type). Note that starting from PVS-Studio version 4.52, the analyzer supports output from multiple PVS-Studio.exe processes into a single file (specified through --output-file) in command-line independent mode. This allows several analyzer processes to be launched simultaneously during a compilation performed by a makefile-based system. The output file will not be overwritten and lost, because a file-locking mechanism is utilized.
Consider this example for starting the analyzer in independent mode for a single file, utilizing the Visual C++ preprocessor (cl.exe):
PVS-Studio.exe --cl-params "C:\Test\test.cpp" /D"WIN32" /I"C:\Test\"
--source-file "C:\Test\test.cpp" --cfg "C:\Test\PVS-Studio.cfg"
--output-file "C:\Test\test.log"
The PVS-Studio.cfg (the --cfg parameter) configuration file should include the following lines:
exclude-path = C:\Program Files (x86)\Microsoft Visual Studio 10.0
vcinstalldir = C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\
platform = Win32
preprocessor = visualcpp
language = C++
skip-cl-exe = no
Let's review these parameters:
You can filter the diagnostic messages generated by the analyzer using the analyzer-errors and analysis-mode parameters (set them in the .cfg file or pass them through the command line). These parameters are optional.
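For instance, a sketch of such filtering in the PVS-Studio.cfg file might look like this (the diagnostic numbers and the mode value are purely illustrative; consult the descriptions of analyzer-errors and analysis-mode for their exact semantics):

```
# Illustrative only: select a diagnostic group via analysis-mode
# and restrict the output to specific rules via analyzer-errors
analysis-mode = 4
analyzer-errors = V501,V517
```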
It is also possible to pass the analyzer a ready-made preprocessed file (i-file), skipping the preprocessing phase and proceeding straight to analysis. To do this, set the skip-cl-exe parameter to 'yes'. In this mode there is no need for the cl-params parameter. Instead, specify the path to the i-file (--i-file) and set the type of the preprocessor used to create it. Specifying the path to the source file (--source-file) is also necessary. Although the i-file already contains the information necessary for analysis, the analyzer may need to compare the i-file with the source code file, for example when it has to look at an unexpanded macro. Thus, a call to the analyzer in independent mode with an i-file produced by the Visual C++ preprocessor (cl.exe) could be:
PVS-Studio.exe --source-file "C:\Test\test.cpp"
--cfg "C:\Test\PVS-Studio.cfg" --output-file "C:\Test\test.log"
The configuration file PVS-Studio.cfg (parameter --cfg) should contain the following lines:
exclude-path = C:\Program Files (x86)\Microsoft Visual Studio 10.0
vcinstalldir = C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\
platform = Win32
preprocessor = visualcpp
language = C++
skip-cl-exe = yes
i-file = C:\Test\test.i
The full list of command line switches will be displayed with this argument:
PVS-Studio.exe --help
It should be noted that when calling PVS-Studio.exe directly, the license information stored in the 'Settings.xml' file is not used. When running PVS-Studio.exe, you should explicitly specify the path to a separate license file. This is a text file in UTF-8 encoding, consisting of two lines: the name and the key.
The path to the license file can be either specified in the PVS-Studio configuration file or passed as a command-line argument. The corresponding parameter is lic-file.
For example, to specify the path to the license file in the .cfg file, you should add the following line:
lic-file = D:\Test\license.lic
For example, let's take a Makefile project which is built using the Visual C++ compiler; the compilation step is declared in the project's makefile like this:
$(CC) $(CFLAGS) $<
The $(CC) macro calls cl.exe, the compilation parameters $(CFLAGS) are passed to it, and all C/C++ files on which the current build target depends are inserted using the $< macro. Thus the cl.exe compiler will be called with the required compilation parameters for all source files.
Let's modify this script in such a way that every file is analyzed with PVS-Studio before the compiler is called:
$(PVS) --source-file $< --cl-params $(CFLAGS) $<
--cfg "C:\CPP\PVS-Studio.cfg"
$(CC) $(CFLAGS) $<
$(PVS) is the path to the analyzer's executable (%programfiles%\PVS-Studio\x64\PVS-Studio.exe). Note that the Visual C++ compiler is called after the analyzer on the next line with the same arguments as before. This is done so that all targets are built correctly and the build does not stop because of missing .obj files.
The PVS-Studio tool has been developed to work within the Visual Studio environment, and launching it from the command line is supplementary to this main working mode. However, all of the analyzer's diagnostic capabilities remain available.
Error messages generated in this mode can easily be redirected into an external file with the help of the --output-file command line switch. This file will contain the unprocessed and unfiltered analyzer output.
Such a file can be viewed in the PVS-Studio IDE extension or in C and C++ Compiler Monitoring UI (Standalone.exe) using the 'Open Analysis Report' menu command (select 'Unparsed output' as the file type), and afterwards it can be saved in the standard PVS-Studio log file (plog) format. This allows you to avoid duplicated error messages and to use all of the standard filtering mechanisms on them.
In addition, the 'raw' unparsed output can be converted to one of the supported formats (xml, html, csv and so on) by using the PlogConverter command line tool.
Users familiar with the PVS-Studio incremental analysis mode within the IDE will naturally miss this feature in the command line mode. Fortunately, almost any build system provides incremental analysis "out of the box": invoking "make" recompiles only the files that were modified, so incremental analysis happens automatically when using the independent command-line version.
Although it is possible to open the unfiltered text file containing the analyzer's diagnostic messages in the PVS-Studio Output window within the IDE (which gives you file navigation and filtering mechanisms), you will only get the plain code text editor inside Visual Studio, because IntelliSense functionality (autocompletion, type declarations, function navigation, etc.) will be unavailable. This is quite inconvenient while handling analysis results, especially on large projects, as it forces you to search for class and method declarations manually, greatly increasing the time required to handle a single diagnostic message.
To solve this issue, create an empty Visual C++ project (a Makefile-based one, for instance) in the same directory as the C++ files being verified by the analyzer (the vcproj/vcxproj file should be created in the root folder above every verified file). After creating the empty project, enable the 'Show All Files' mode for it (the button is in the upper part of the Solution Explorer window), which displays all the underlying files in the Solution Explorer tree view. Then use the 'Include in Project' context menu command to add all the necessary c, cpp, and h files to your project. (You may also have to add include directory paths for some files, for instance the ones containing third-party library includes.) If you include only a fraction of the verified files, keep in mind that IntelliSense may not recognize some of the types used in them, as those types could be defined in the files you did not include.
Figure 1 — including files into the project
The project file we created cannot be used to build or verify the sources with PVS-Studio, but it substantially simplifies handling the analysis results. Such a project can also be saved and reused with the next iteration of analyzer diagnostics in independent mode.
The cl.exe compiler can process source files either one at a time or as a whole group at once. In the first case the compiler is called several times, once per file:
cl.exe ... file1.cpp
cl.exe ... file2.cpp
cl.exe ... file3.cpp
In the second case it is called just once:
cl.exe ... file1.cpp file2.cpp file3.cpp
Both of these modes are supported by the PVS-Studio.exe console version as demonstrated above in the examples.
It may be helpful to understand the analyzer's logic behind these two modes. If launched for a single file, PVS-Studio.exe will first invoke the preprocessor for that file and then analyze the preprocessed result. But when processing several files at once, PVS-Studio.exe will first preprocess all these files, and then separate instances of PVS-Studio.exe will be invoked individually for each of the resulting preprocessed files.
This section describes the analysis of Unreal Engine projects on the Windows operating system. The instructions for checking projects under Linux/macOS are available by this link.
Integration with Unreal Build System is available only under Enterprise PVS-Studio license. You can request a trial Enterprise license at the download page.
A specialized build system called Unreal Build System is used for building Unreal Engine projects. It is integrated on top of the build system used by the Visual Studio / JetBrains Rider environments (MSBuild) through autogenerated makefile MSBuild projects. These are a special type of Visual C++ (vcxproj) projects in which the build is delegated to a command calling a third-party utility, for example (but not necessarily) Make. The use of makefile projects allows working with Unreal Engine source code from the Visual Studio / JetBrains Rider environments while taking advantage of features such as code autocompletion, syntax highlighting, symbol navigation, etc.
Because makefile MSBuild projects themselves do not contain the full information necessary to perform compilation, and therefore preprocessing, of C/C++ source files, PVS-Studio does not support analyzing such projects from within Visual Studio or with the PVS-Studio_Cmd.exe command line tool. Therefore, to check such projects with PVS-Studio, there are two options: monitoring of compiler invocations (Compiler Monitoring) and direct integration of the PVS-Studio.exe C/C++ analyzer into the Unreal Build Tool utility. Let's consider these options in more detail.
Unreal Build System uses the Visual C++ compiler (cl.exe) for building under Windows. This compiler is supported by the PVS-Studio compiler monitoring system on Windows, which can be used either from the C and C++ Compiler Monitoring UI (Standalone.exe) or from the CLMonitor.exe command line tool.
Compiler monitoring can be launched manually from within the Compiler Monitoring UI, or it can be bound to the build start/end events in Visual Studio. The result of the analysis performed by the monitoring system is a plog XML report file, which you can open from the Visual Studio PVS-Studio extension or convert to one of the standard formats (txt, html, csv) using the PlogConverter tool.
A more detailed description of the compiler monitoring system is available in this section of the documentation. We recommend this way of running the analysis when you are checking your project for the first time and getting acquainted with the analyzer, as it is the easiest one to set up.
A general description of how to integrate PVS-Studio C/C++ analyzer into any build system directly is available here.
In the case of Unreal Build System, the developers from Epic Games provide the opportunity to use PVS-Studio through direct integration with the Unreal Build Tool utility, starting from version 4.17.
Before starting the analysis, you should enter your analyzer license. To do this, enter your data in the IDE:
Please note that before Unreal Engine version 4.20, UBT was unable to get the license information from the PVS-Studio common settings file. If UBT does not recognize a license entered via the UI, create a separate license file named PVS-Studio.lic and place it in the '%USERPROFILE%\AppData\Roaming\PVS-Studio' directory.
Unreal Build Tool allows you to run the PVS-Studio analysis by adding the following flag to the command line:
-StaticAnalyzer=PVSStudio
For instance, a full command line of Unreal Build Tool might look as follows:
UnrealBuildTool.exe UE4Client Win32 Debug -WaitMutex -FromMsBuild
-StaticAnalyzer=PVSStudio -DEPLOY
To enable analysis when running from IDE, open the project properties for the chosen configuration:
and add the flag -StaticAnalyzer=PVSStudio in the build and rebuild options (Build Command Line / Rebuild All Command Line).
Note. In this usage scenario only the analysis is performed; a build won't be.
Note 2. PVS-Studio integration with Unreal Build Tool does not support all the analyzer settings available from Visual Studio (PVS-Studio|Options...). At the moment, it supports excluding specific directories through 'PVS-Studio|Options...|Don't Check Files', enabling various diagnostic groups, and filtering of loaded analysis results through 'Detectable Errors'.
If you need to configure a simultaneous build and analysis within one Visual Studio configuration, you can create auxiliary scripts (for our example let's name them BuildAndAnalyze and RebuildAndAnalyze) based on the standard Build and Rebuild scripts.
The main change in the RebuildAndAnalyze script is that it calls the new BuildAndAnalyze.bat script instead of Build.bat.
In the BuildAndAnalyze script you need to add removal of the actions cache and a run of UnrealBuildTool with the analysis flag after a successful build.
Actions performed by UBT (builds, analysis and so on) are saved in cache.
Cache removal is needed before the analysis so that the analysis actions are not cached, which allows a repeated full analysis. If analysis actions are cached, a kind of incremental analysis will be performed on the re-run (only modified files will be analyzed), but the resulting analyzer report will include warnings from all logs, both newly received and old. At the same time, if the analysis is performed by an updated analyzer version (one including new diagnostic rules), the analyzer won't re-check unmodified files.
Restoring the cache from the backup restores the saved build actions. If UBT does not find saved build actions, the build will be re-run.
Removing/restoring the cache is thus needed so that the analysis results are not cached (allowing a full analysis to be performed again), while the project's build actions are not lost.
Note. The changes described above are based on the standard Build script and its standard command line. If a modified script or a non-standard order of arguments is used, additional changes may be required.
First, you need to define a number of variables needed to remove/restore the actions cache file. Operations related to the actions cache are relevant for Unreal Engine version 4.21 and later.
SET PROJECT_NAME=%1%
SET PLATFORM=%2%
SET UPROJECT_FILE=%~5
SET ACTIONHISTORY_FOLDER=....
SET ACTION_HISTORY=....
SET ACTION_HISTORY_BAC=%ACTION_HISTORY%.bac
SET ACTIONHISTORY_PATH="%ACTIONHISTORY_FOLDER%\%ACTION_HISTORY%"
SET ACTIONHISTORY_BAC_PATH="%ACTIONHISTORY_FOLDER%\%ACTION_HISTORY_BAC%"
In the script fragment above, the ACTIONHISTORY_FOLDER and ACTION_HISTORY variables require different values depending on the engine version.
For version 4.21 and 4.22:
SET ACTIONHISTORY_FOLDER=%UPROJECT_FILE%\..\Intermediate\Build\%PLATFORM%\%PROJECT_NAME%
SET ACTION_HISTORY=ActionHistory.bin
For version 4.23 and later:
SET ACTIONHISTORY_FOLDER=%UPROJECT_FILE%\..\Intermediate\Build\%PLATFORM%\%PLATFORM%\%PROJECT_NAME%
SET ACTION_HISTORY=ActionHistory.dat
After calling UnrealBuildTool for building (and the command 'popd') you need to add the following code:
SET "UBT_ERR_LEVEL=!ERRORLEVEL!"
SET "NEED_TO_PERFORM_ANALYSIS="
IF "!UBT_ERR_LEVEL!"=="0" (
SET "NEED_TO_PERFORM_ANALYSIS=TRUE"
)
IF "!UBT_ERR_LEVEL!"=="2" (
SET "NEED_TO_PERFORM_ANALYSIS=TRUE"
)
IF DEFINED NEED_TO_PERFORM_ANALYSIS (
pushd "%~dp0\..\..\Source"
ECHO Running static analysis
IF EXIST %ACTIONHISTORY_PATH% (
ECHO Copying %ACTION_HISTORY% to %ACTION_HISTORY_BAC%
COPY %ACTIONHISTORY_PATH% %ACTIONHISTORY_BAC_PATH%
ECHO Removing %ACTION_HISTORY%: %ACTIONHISTORY_PATH%
DEL %ACTIONHISTORY_PATH%
)
..\..\Engine\Binaries\DotNET\UnrealBuildTool.exe %* -StaticAnalyzer=PVSStudio -DEPLOY
popd
SET "UBT_ERR_LEVEL=!ERRORLEVEL!"
IF EXIST %ACTIONHISTORY_BAC_PATH% (
ECHO Recovering %ACTION_HISTORY%
COPY %ACTIONHISTORY_BAC_PATH% %ACTIONHISTORY_PATH%
ECHO Removing %ACTION_HISTORY_BAC%: %ACTIONHISTORY_BAC_PATH%
DEL %ACTIONHISTORY_BAC_PATH%
)
)
The most important operations from the code above are the cache removal and recovery as well as the run of UnrealBuildTool with the flag -StaticAnalyzer=PVSStudio to perform the analysis.
If needed, use the modified script when working from the IDE. To do this, specify it in the project properties:
Note that when using the modified scripts, you don't need to specify the -StaticAnalyzer=PVSStudio flag in the script launch arguments, as the script already sets it when running UnrealBuildTool for the analysis.
Starting from version 4.25 of Unreal Engine you can enable various diagnostic groups.
To select the needed diagnostic groups, you need to modify the target files of the project.
To use the appropriate options from the PVS-Studio settings file (Settings.xml), set the 'WindowsPlatform.PVS.UseApplicationSettings' property to 'true' (for example, in the constructor) in the target file:
public MyUEProjectTarget(TargetInfo Target) : base(Target)
{
....
WindowsPlatform.PVS.UseApplicationSettings = true;
}
You can also include the necessary diagnostic groups in the target file directly. For example, you can enable diagnostics of micro-optimizations as follows:
WindowsPlatform.PVS.ModeFlags =
UnrealBuildTool.PVSAnalysisModeFlags.Optimizations;
Valid values for enabling the appropriate diagnostic groups are:
To enable several groups of diagnostics, use the '|' operator:
WindowsPlatform.PVS.ModeFlags =
UnrealBuildTool.PVSAnalysisModeFlags.GeneralAnalysis
| UnrealBuildTool.PVSAnalysisModeFlags.Optimizations;
The path to the file with the analysis results will be displayed in the Visual Studio Output (Build) window (or in stdout, if you launched Unreal Build manually from the command line). This results file is unparsed; it can be opened in the IDE:
Alternatively, you can convert the analysis results using the PlogConverter utility in the way described in the section on the XML log above.
You can read more about handling the list of diagnostic warnings in the article "Handling the diagnostic messages list". As for working with the analyzer report - check out the article "Managing XML Analyzer Report (.plog file)".
When working in the IDE, it is more convenient to have the analysis log loaded automatically into the PVS-Studio output window. For this scenario, enable the appropriate option:
The PVS-Studio static analyzer for C/C++ code consists of a console application named pvs-studio and several supporting utilities. For the analyzer to work, the environment for building your project must be configured.
The analyzer is run anew for every source file. The analysis results for several source files can be combined into one analyzer report or output to stdout.
There are three main work modes of the analyzer:
Examples of commands to install the analyzer from the packages and repositories are given on these pages:
You can request a trial license for getting acquainted with PVS-Studio via the feedback form.
To save the license information to a file, use the following command:
pvs-studio-analyzer credentials NAME KEY [-o LIC-FILE]
By default, the PVS-Studio.lic file will be created in the ~/.config/PVS-Studio/ directory. In this case it is not necessary to specify it in the analyzer run parameters; it will be picked up automatically.
The analyzer license key is a text file in UTF-8 encoding.
You can check the license expiration date using this command:
pvs-studio --license-info /path/to/PVS-Studio.lic
The best way to use the analyzer is to integrate it into your build system, next to the compiler call. However, if you want to run the analyzer for a quick test on a small project, use the pvs-studio-analyzer utility.
Important. The project should be successfully compiled and built before analysis.
To check a CMake project, the JSON Compilation Database format is used. To get the compile_commands.json file necessary for the analyzer, add one flag to the CMake call:
$ cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=On <src-tree-root>
CMake supports the generation of a JSON Compilation Database for Unix Makefiles.
The analysis starts with the following commands:
pvs-studio-analyzer analyze -l /path/to/PVS-Studio.lic
-o /path/to/project.log -e /path/to/exclude-path -j<N>
plog-converter -a GA:1,2 -t tasklist
-o /path/to/project.tasks /path/to/project.log
It is important to understand that all files to be analyzed must be compilable. If your project actively uses code generation, it should be built before the analysis; otherwise there may be errors during preprocessing.
To check a Ninja project, the JSON Compilation Database format is used as well. To get the compile_commands.json file necessary for the analyzer, execute the following commands:
cmake -GNinja <src-tree-root>
ninja -t compdb
The analysis is run with the help of the following commands:
pvs-studio-analyzer analyze -l /path/to/PVS-Studio.lic
-o /path/to/project.log -e /path/to/exclude-path -j<N>
plog-converter -a GA:1,2 -t tasklist
-o /path/to/project.tasks /path/to/project.log
To check a project using the Qbs (Qt Build Suite) build system, execute the following command:
qbs generate --generator clangdb
The analysis is run with the help of the following commands:
pvs-studio-analyzer analyze -l /path/to/PVS-Studio.lic
-o /path/to/project.log -e /path/to/exclude-path -j<N>
plog-converter -a GA:1,2 -t tasklist
-o /path/to/project.tasks /path/to/project.log
JSON Compilation Database has to be generated using the xcpretty utility:
xcodebuild [flags] | xcpretty -r json-compilation-database
The analysis is run with the help of the following commands:
pvs-studio-analyzer analyze -l /path/to/PVS-Studio.lic
-f build/reports/compilation_db.json
-o /path/to/project.log -e /path/to/exclude-path -j<N>
plog-converter -a GA:1,2 -t tasklist
-o /path/to/project.tasks /path/to/project.log
This mode requires the strace utility. The project can be built under tracing with the following command:
pvs-studio-analyzer trace -- make
You can use any other build command with all the necessary parameters instead of make, for example:
pvs-studio-analyzer trace -- make debug
After you build your project, you should execute the commands:
pvs-studio-analyzer analyze -l /path/to/PVS-Studio.lic
-o /path/to/project.log -e /path/to/exclude-path -j<N>
plog-converter -a GA:1,2 -t tasklist
-o /path/to/project.tasks /path/to/project.log
Analyzer warnings will be saved into the specified project.tasks file. You may see various ways to view and filter the report file in the section "Filtering and viewing the analyzer report" within this document.
If your project is not a CMake project or you have problems with the strace utility, you can try generating the compile_commands.json file with the Bear utility. This file will let the analyzer check the project successfully only if environment variables do not influence the files' compilation.
When cross compilers are used, the compilers may have special names, and the analyzer will not be able to find them. To analyze such a project, explicitly list the compiler names without paths:
pvs-studio-analyzer analyze ... --compiler COMPILER_NAME
--compiler gcc --compiler g++ --compiler COMPILER_NAME
plog-converter ...
Also, when you use cross compilers, the directory with the compiler's header files will be different. Exclude such directories from the analysis with the -e flag so that the analyzer doesn't issue warnings for those files.
pvs-studio-analyzer ... -e /path/to/exclude-path ...
There shouldn't be any issues with the cross compilers during the integration of the analyzer into the build system.
You can pass a response file to the pvs-studio-analyzer utility. A response file is a file that contains other command-line arguments.
A response file argument on the command line is indicated by the '@' character followed by the path to the response file (e.g. '@/path/to/file.txt'). The arguments in the response file are separated by spaces/tabs/newlines. If you want to pass an argument that contains whitespace, you can escape the whitespace with a backslash (\) character or put the whole argument in single ('') or double ("") quotes. You can't escape quotes inside quotes. There is no difference between single-quoted and double-quoted arguments. Note that the arguments are passed as-is; no other processing such as shell variable expansion or glob expansion takes place. Recursive response files are supported.
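For illustration (the file name and paths are hypothetical), a response file args.txt could contain:

```
-o /path/to/project.log
-e "/path/with spaces/to/exclude"
-j4
```

It could then be referenced on the command line as 'pvs-studio-analyzer analyze @/path/to/args.txt'.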
The pvs-studio-analyzer utility also provides an incremental analysis mode (analysis of changed files only); to use it, run the utility with the --incremental parameter:
pvs-studio-analyzer analyze ... --incremental ...
This mode works independently of the incremental project build. E.g., if your project is fully compiled, the first run of the incremental analysis will still analyze all files; on subsequent runs only changed files will be analyzed.
To track changed files, the analyzer saves service information in a directory named .PVS-Studio in the launch directory. Therefore, to use this mode, you should always run the analyzer from the same directory.
Test projects are available in the official PVS-Studio repository on GitHub:
Figure 1 shows an example of analyzer warnings viewed in CLion:
Figure 1 - PVS-Studio warnings viewed in CLion
Figure 2 demonstrates an example of analyzer warnings viewed in QtCreator:
Figure 2 - PVS-Studio warnings viewed in QtCreator
Figure 3 shows an example of analyzer warnings viewed in Eclipse CDT:
Figure 3 - PVS-Studio warnings viewed in Eclipse CDT
The analyzer checks not the source files, but the preprocessed files. This method allows the analyzer to perform a deeper and higher-quality analysis of the source code.
In this regard, there are several restrictions on the compilation parameters being passed: namely, parameters that prevent the compiler from running in preprocessor mode or that damage the preprocessor output. A number of debugging and optimization flags, for example -O2, -O3, -g3, -ggdb3 and others, create changes that affect the preprocessor output. The analyzer will report invalid parameters when it detects them.
This does not require any changes to the settings of the project being checked, but some of the parameters should be excluded for the analyzer to run properly.
When integrating the analyzer into the build system, you should pass it a settings file (*.cfg). You may choose any name for the configuration file, but it should be passed with the --cfg flag.
A settings file named PVS-Studio.cfg located in the same directory as the analyzer's executable is loaded automatically, without being passed through the command-line parameters.
Possible values for the settings in the configuration file:
An important note:
You don't need to create a new configuration file for every file being checked. Simply reuse the existing one, keeping settings such as lic-file.
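A minimal sketch of such a reusable configuration file (all paths are placeholders; verify each option name against the list of possible values above):

```
# Settings shared between all checked files
lic-file = /path/to/PVS-Studio.lic
output-file = /path/to/project.log
exclude-path = /usr/include
```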
Any of the described methods of integrating the analysis into a build system can be automated in a Continuous Integration system. This can be done in Jenkins, TeamCity and others by setting up an automatic analysis launch and notification about the found errors.
It is also possible to integrate with the SonarQube continuous code quality platform using the PVS-Studio plugin. The plugin ships with the analyzer in the downloadable .tgz archive. Setup instructions are available on this page: "Integration of PVS-Studio analysis results into SonarQube".
To convert the analyzer report to different formats (*.xml, *.tasks and so on) you can use the PlogConverter tool, which is open source.
Enter the following in the command line of the terminal:
plog-converter [options] <path to the file with PVS-Studio log>
All options can be specified in any order.
Available options:
Detailed description of the levels of certainty and sets of diagnostic rules is given in the documentation section "Getting Acquainted with the PVS-Studio Static Code Analyzer".
At this point, the available formats are:
The result of running the utility is a file containing messages in the specified format, filtered by the rules set in the configuration file.
The following example command is suitable for most users and opens the report in QtCreator:
plog-converter -a GA:1,2 -t tasklist
-o /path/to/project.tasks /path/to/project.log
Figure 4 demonstrates an example of a .tasks file viewed in QtCreator:
Figure 4 - A .tasks file viewed in QtCreator
The analyzer report converter can generate HTML reports of two types:
1. FullHtml is a full report for viewing the analysis results. You can search and sort messages by type, file, level, code, and warning text. A distinctive feature of this report is the ability to navigate from an error to its location in the source code file. The source files that triggered the analyzer warnings are themselves converted to HTML and become part of the report. Examples of the report are shown in Figures 4 and 5.
Figure 4 - Example of the Html main page report
Figure 5 - Warning view in code
Example of a command for receiving such a report:
plog-converter -a GA:1,2 -t fullhtml
/path/to/project.log -o /path/to/report_dir
This report is convenient to send as an archive, or to share over a local network using any web server, for example Lighttpd.
2. Html is a lightweight report consisting of a single .html file. It contains brief information about the found warnings and is suitable for email notifications. A report example is shown in Figure 6.
Figure 6 - Simple Html page example
Example of a command for receiving such a report:
plog-converter -a GA:1,2 -t html
/path/to/project.log -o /path/to/project.html
An example of commands to open the report in gVim editor:
$ plog-converter -a GA:1,2 -t errorfile
-o /path/to/project.err /path/to/project.log
$ gvim /path/to/project.err
:set makeprg=cat\ %
:silent make
:cw
Figure 7 shows an .err file viewed in gVim:
Figure 7 - viewing the .err file in gVim
An example of commands to open the report in the Emacs editor:
plog-converter -a GA:1,2 -t errorfile
-o /path/to/project.err /path/to/project.log
emacs
M-x compile
cat /path/to/project.err 2>&1
Figure 8 shows an .err file viewed in Emacs:
Figure 8 - viewing the .err file in Emacs
An example of commands to convert the report to CSV format:
plog-converter -a GA:1,2 -t csv
-o /path/to/project.csv /path/to/project.log
After opening the project.csv file in LibreOffice Calc, you must add the autofilter: Menu Bar --> Data --> AutoFilter. Figure 9 shows a .csv file viewed in LibreOffice Calc:
Figure 9 - viewing a .csv file in LibreOffice Calc
More settings can be saved into a configuration file with the following options:
The option name is separated from its value by the '=' character. Each option is specified on a separate line. Comments are also written on separate lines; insert # at the beginning of the comment.
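As a sketch of this format, a configuration file might look as follows. Note that the option names used here (enabled-analyzers, errors-off) are illustrative placeholders only; check the actual option list in the documentation or in plog-converter's help output.

```shell
# Write a sample plog-converter configuration file. The option names
# 'enabled-analyzers' and 'errors-off' are hypothetical, shown only
# to illustrate the 'option = value' syntax described above.
cat > PVS-Studio.cfg <<'EOF'
# This is a comment on its own line
enabled-analyzers = GA:1,2
errors-off = V501 V502
EOF
# Each option occupies its own line, with '=' between name and value:
grep -c '=' PVS-Studio.cfg
```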
To add your own output format, follow these steps:
Create your own output class derived from the IOutput class and override the virtual method void write(const AnalyzerMessage& msg), implementing message output in the desired format. The fields of the AnalyzerMessage structure are defined in the analyzermessage.h file. The remaining steps are the same as for the existing output classes (XMLOutput, for example).
In OutputFactory::OutputFactory, add your format to m_outputs by analogy with the formats already registered there. Alternatively, add it through the OutputFactory::registerOutput method.
After these steps, the format will be available through the utility's -t option.
The blame-notifier utility automates notifying developers who committed code for which the PVS-Studio analyzer has issued warnings. The analyzer report is passed to blame-notifier along with additional parameters; the utility finds the files that triggered warnings and generates an HTML report for each "guilty" developer. It is also possible to send a full report that contains all warnings related to each "guilty" developer.
The following documentation section describes the ways how to install and use the utility: "Notifying the developer teams (blame-notifier utility)".
Mass warning suppression allows you to easily embed the analyzer into any project and immediately start benefiting from it, i.e. finding new bugs. This mechanism lets you plan the fixing of suppressed warnings for the future without distracting developers from their current tasks.
There are several ways of using this mechanism, depending on the integration of the analyzer.
To suppress all analyzer warnings (both the first time and on subsequent occasions), execute the command:
pvs-studio-analyzer suppress /path/to/report.log
If you want to suppress a warning for a specific file, use the --file(-f) flag:
pvs-studio-analyzer suppress -f test.c /path/to/report.log
In addition to the file itself, you can explicitly specify the line number to suppress:
pvs-studio-analyzer suppress -f test.c:22 /path/to/report.log
This entry suppresses all warnings that are located on line 22 of the 'test.c' file.
This flag can be specified repeatedly, thus suppressing warnings in several files at once.
In addition to explicit file specification, there is a mechanism for suppressing specific diagnostics:
pvs-studio-analyzer suppress -v512 /path/to/report.log
The --warning(-v) flag can also be specified repeatedly:
pvs-studio-analyzer suppress -v1040 -v512 /path/to/report.log
The above-mentioned --file and --warning flags can be combined to suppress warnings more precisely:
pvs-studio-analyzer suppress -f test.c:22 -v512 /path/to/report.log
The above command will suppress all V512 warnings on line 22 of the 'test.c' file.
The project analysis can be run as before; suppressed warnings will be filtered out:
pvs-studio-analyzer analyze ... -s /path/to/suppress.json \
-o /path/to/report.log
plog-converter ...
Direct integration might look as follows:
.cpp.o:
$(CXX) $(CFLAGS) $(DFLAGS) $(INCLUDES) $< -o $@
$(CXX) $(CFLAGS) $< $(DFLAGS) $(INCLUDES) -E -o $@.PVS-Studio.i
pvs-studio --cfg $(PVS_CFG) --source-file $< --i-file $@.PVS-Studio.i \
--output-file $@.PVS-Studio.log
In this mode, the analyzer cannot check source files and filter warnings at the same time, so filtering and warning suppression require additional commands.
To suppress all the warnings, you must also run the command:
pvs-studio-analyzer suppress /path/to/report.log
To filter a new log, you must use the following commands:
pvs-studio-analyzer filter-suppressed /path/to/report.log
plog-converter ...
The file with suppressed warnings has the default name suppress_base.json; you can optionally specify an arbitrary name for it.
1. The strace utility issues the following message:
strace: invalid option -- 'y'
Update the strace program to a newer version. Analyzing a project without integrating into its build system is a complex task; this option lets the analyzer obtain important information about the compilation of the project.
2. The strace utility issues the following message:
strace: umovestr: short read (512 < 2049) @0x7ffe...: Bad address
Such errors occur in system processes and do not affect the project analysis.
3. The strace utility issues the following message:
No compilation units found
The analyzer could not find files for analysis. Perhaps you are using cross compilers to build the project. See the section "If you use cross compilers" in this documentation.
4. The analyzer report has strings like this:
r-vUVbw<6y|D3 h22y|D3xJGy|D3pzp(=a'(ah9f(ah9fJ}*wJ}*}x(->'2h_u(ah
The analyzer saves the report in an intermediate format. To view it, convert it to a readable format with the plog-converter utility, which is installed together with the analyzer.
5. The analyzer issues the following error:
Incorrect parameter syntax:
The ... parameter does not support multiple instances.
One of the analyzer's parameters is incorrectly specified several times.
This can happen if some of the analyzer's parameters are set in the configuration file while others are passed as command line arguments, and one of the parameters was accidentally specified more than once.
If you use pvs-studio-analyzer, almost all parameters are detected automatically, which is why it can work without a configuration file. Duplicating such parameters can also cause this error.
6. The analyzer issues the warning:
V001 A code fragment from 'path/to/file' cannot be analyzed.
If the analyzer is unable to parse some code fragment, it skips it and issues the V001 warning. Such a situation doesn't influence the analysis of other files, but if this code is in a header file, the number of such warnings can be very high. Send us a preprocessed file (.i) for the code fragment causing this issue, so that we can add support for it.
If you have any questions or problems with running the analyzer, feel free to contact us.
Docker is software for automating the deployment and management of applications in environments that support OS-level virtualization (containers). Docker can "pack" an application with its entire environment and dependencies into a container, which can then be deployed on any system with Docker installed.
You can use Dockerfile to build an image with the latest version of PVS-Studio included.
On debian-based systems:
FROM gcc:7
# INSTALL DEPENDENCIES
RUN apt update -yq \
&& apt install -yq --no-install-recommends wget \
&& apt clean -yq
# INSTALL PVS-Studio
RUN wget -q -O - https://files.viva64.com/etc/pubkey.txt | apt-key add - \
&& wget -O /etc/apt/sources.list.d/viva64.list \
https://files.viva64.com/etc/viva64.list \
&& apt update -yq \
&& apt install -yq pvs-studio strace \
&& pvs-studio --version \
&& apt clean -yq
On zypper-based systems:
FROM opensuse:42.3
# INSTALL DEPENDENCIES
RUN zypper update -y \
&& zypper install -y --no-recommends wget \
&& zypper clean --all
# INSTALL PVS-Studio
RUN wget -q -O /tmp/viva64.key https://files.viva64.com/etc/pubkey.txt \
&& rpm --import /tmp/viva64.key \
&& zypper ar -f https://files.viva64.com/rpm viva64 \
&& zypper update -y \
&& zypper install -y --no-recommends pvs-studio strace \
&& pvs-studio --version \
&& zypper clean --all
On yum-based systems:
FROM centos:7
# INSTALL DEPENDENCIES
RUN yum update -y -q \
&& yum install -y -q wget \
&& yum clean all -y -q
# INSTALL PVS-Studio
RUN wget -q -O /etc/yum.repos.d/viva64.repo \
https://files.viva64.com/etc/viva64.repo \
&& yum install -y -q pvs-studio strace \
&& pvs-studio --version \
&& yum clean all -y -q
Note. PVS-Studio for Linux can also be downloaded using the following permalinks:
https://files.viva64.com/pvs-studio-latest.deb
https://files.viva64.com/pvs-studio-latest.tgz
https://files.viva64.com/pvs-studio-latest.rpm
Command to build an image:
docker build -t viva64/pvs-studio:7.11 -f Dockerfile .
Note. A base image and dependencies must be changed according to the target project.
To start the analysis, for example, of a CMake-based project, execute the following command:
docker run --rm -v "$HOME/Project":"/mnt/Project" \
-w "/mnt/Project" viva64/pvs-studio:7.11 \
sh -c 'mkdir build && cd build &&
cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=On .. && make -j8 &&
pvs-studio-analyzer analyze ... -o report.log -j8 ...'
It is recommended that you run the converter of analyzer-generated reports (plog-converter) outside the container to ensure that reports contain correct paths to the source files. The only report type that you may want to generate inside the container is fullhtml (an HTML report file that supports message sorting and code navigation). To have other report types generated, you will need to additionally configure the analyzer.
When checking non-CMake projects in a container using the compiler call tracing mode, you may get this error:
strace: ptrace(PTRACE_TRACEME, ...): Operation not permitted
Error: Command strace returned 1 code.
To eliminate this error, run Docker with extended privileges by executing this command:
docker run ... --security-opt seccomp:unconfined ...
or like this:
docker run ... --cap-add SYS_PTRACE ...
Specifying the license file
Since a container's lifetime is limited, the analyzer license file should either be committed into the image or made available by mounting the directory containing it and specifying the path to it:
pvs-studio-analyzer analyze ... -l /path/to/PVS-Studio.lic ...
Restoring paths to source files in the report
To get a report with correct paths to the source files, specify the path to the project directory first:
pvs-studio-analyzer analyze ... -r /path/to/project/in/container ...
After that, run the report converter outside the container.
On Linux or macOS:
plog-converter ... -r /path/to/project/on/host ...
On Windows:
PlogConverter.exe ... -r /path/to/project/on/host
On Windows, you can also use the Compiler Monitoring UI utility to open the report file without converting it.
Excluding directories from analysis
You can exclude the compiler directory or directories with third-party libraries or tests by adding the -e parameter:
pvs-studio-analyzer analyze ... -e /path/to/tests ... -e /path/to/contrib ...
Specifying the cross compiler
If your container includes a cross compiler or a compiler without an alias (for example, g++-7), its name must be specified explicitly:
pvs-studio-analyzer analyze ... -C g++-7 -C compilerName ...
Installing from an archive
FROM openkbs/ubuntu-bionic-jdk-mvn-py3
ARG PVS_STUDIO_CORE="7.11.44138"
RUN wget "https://files.viva64.com/java/pvsstudio-cores/${PVS_STUDIO_CORE}.zip"\
-O ${PVS_STUDIO_CORE}.zip \
&& mkdir -p ~/.config/PVS-Studio-Java \
&& unzip ${PVS_STUDIO_CORE}.zip -d ~/.config/PVS-Studio-Java \
&& rm -rf ${PVS_STUDIO_CORE}.zip
Command to build an image:
docker build -t viva64/pvs-studio:7.11 -f Dockerfile .
Committing the analyzer layer
The analyzer is unpacked automatically during the first analysis of a project. You can give the container a name and perform the first analysis:
docker run --name analyzer
-v "D:\Project":"/mnt/Project"
openkbs/ubuntu-bionic-jdk-mvn-py3
sh -c "cd /mnt/Project && mvn package
&& mvn pvsstudio:pvsAnalyze -Dpvsstudio.licensePath=/path/to/PVS-Studio.lic"
and then commit to a new image:
docker commit analyzer viva64/pvs-studio:7.11
Note. A base image and dependencies must be changed according to the target project. Make sure you install and launch the analyzer as the same user.
Regular checks should be launched in the same way with the --rm parameter added:
docker run --rm -v "D:\Project":"/mnt/Project"
openkbs/ubuntu-bionic-jdk-mvn-py3
sh -c "cd /mnt/Project
&& mvn package
&& mvn pvsstudio:pvsAnalyze -Dpvsstudio.licensePath=/path/to/PVS-Studio.lic"
All of the parameters are specified in the Maven or Gradle project file, into which the analysis is integrated.
Documentation for this section is under development.
Documentation for this section is under development.
In order to automate the analysis process in CI (Continuous Integration) you have to run the analyzer as a console application.
In Jenkins you can create one of the following build steps:
and write the analysis command (and the command to convert the report in the needed format).
Examples of commands to run and integrate the analyzer into build systems are given on the following pages of documentation:
Download the PVS-Studio plugin from the download page, in the section about Jenkins. The pvs-studio.hpi file will be saved to disk. Jenkins plugins are distributed in such packages for manual installation.
You can install the plugin via UI by going to the menu Manage Jenkins > Manage Plugin > Advanced > Upload Plugin > Choose File, or via Jenkins CLI:
java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin SOURCE ...
The PVS-Studio plugin enables publishing static analysis results. To do this, it uses the analyzer report as an .html file generated in one of the following ways:
Windows: C, C++, C#
PlogConverter.exe ... --renderTypes Html ...
Linux/macOS: C, C++
plog-converter ... --renderTypes html ...
Windows/Linux/macOS: Java
In the settings of the plugins for Maven and Gradle, set the html value in the outputType field.
To publish the analysis results, add the post-build step (Post-build Actions section) Publish PVS-Studio analysis result in the project settings. The mandatory field Path to PVS-Studio analysis report sets the path to the .html file whose contents will be displayed on the build page. Relative paths (relative to the project's workspace) are supported, as are Jenkins environment variables.
If you need to display combined results from several analyzer reports, you can merge them using the converter utilities mentioned above. Additional filtering of analysis results is available in a similar way.
After PVS-Studio analysis results are successfully published, they are displayed on the build pages. The analysis results are saved for each build, which allows you to view the analyzer report corresponding to a specific build.
If the resulting analyzer report is large, a preview of it is displayed on the build page. The full report is available via the View full report links above and below the report preview, or via Full PVS-Studio analysis report in the menu on the left.
Build page look, displaying the results of PVS-Studio analysis:
Warnings NG Plugin supports PVS-Studio analyzer reports, starting from the plugin version 6.0.0. This plugin is designed to visualize the results of various analyzers.
You can install the plugin from the standard Jenkins repository in the menu Manage Jenkins > Manage Plugins > Available > Warnings Next Generation Plugin:
To publish the analysis results, add the post-build step (Post-build Actions section) Record compiler warnings and static analysis results in the project settings. Next, open the Tool list and choose PVS-Studio. In the Report File Pattern field, you can specify a mask or the path to the analyzer report. Reports with the .plog and .xml extensions are supported.
The Report Encoding field specifies the encoding in which the report file will be read. If the field is empty, the encoding of the operating system Jenkins runs on is used. The Custom ID and Custom Name fields override the identifier and the name of the chosen tool in the interface.
Here are some ways to generate a report in the needed format:
Windows: C, C++, C#
Reports with .plog extension are standard for Windows.
Linux/macOS: C, C++
plog-converter ... --renderTypes xml ...
Windows/Linux/macOS: Java
In the settings of the plugins for Maven and Gradle, set the xml value in the outputType field.
After the project is built, a new PVS-Studio Warnings element will appear in the menu on the left. Clicking it opens a page that visualizes the data of the report created by the PVS-Studio analyzer:
Also, when you click a value in the File column, the browser opens the source file at the line where the error was found. If the file doesn't open, the report was generated outside the build directory, or the files mentioned in the report have been moved or deleted.
In other CIs, configuration of the analyzer run and handling reports are performed in the same way.
In order to automate the analysis process in CI (Continuous Integration), you need to run the analyzer as a console application.
You need to create a Build Step in TeamCity with the following parameters:
In the script, write the analysis command (and the command for converting the report in the needed format).
Examples of commands to run the analyzer, handle the analysis results and integrate the analyzer into build systems are given on the following pages of documentation:
In TeamCity, you can attach analyzer reports in HTML format to builds by specifying them in the artifacts.
Here are some ways to generate an HTML report with navigation along the code:
Windows: C, C++, C#
PlogConverter.exe ... --renderTypes FullHtml ...
Linux/macOS: C, C++
plog-converter ... --renderTypes fullhtml ...
Windows/Linux/macOS: Java
In the settings of the plugins for Maven and Gradle, set the 'fullhtml' value in the 'outputType' field.
In the menu Edit Configuration Settings > General Settings > Artifact paths specify the directory with the resulting HTML-report.
After the build completes successfully, the analyzer report in HTML format will be available in the artifacts. To open it, click the index.html file on the 'Artifacts' tab. You can also make the analyzer report appear on a dedicated tab of the build report. To do this, go to the project settings, open 'Report Tabs', and create a new build report tab.
In the window for adding a tab, specify the path to the index.html file in the 'Start page' field relative to the root directory of artifacts. For example, if the content of the 'Artifacts' tab looks something like this:
Then write the path 'fullhtml/index.html' in the 'Start Page' field. After you add a tab, you can view the analysis results on it:
When you click analyzer warnings, additional browser tabs will open:
In other CIs, configuration of analyzer runs and working with reports are performed in the same way.
The converter supports the standard report format for TeamCity - TeamCity Inspections Type. After the report is generated, it has to be printed to stdout at any step of the build.
Such a report can be generated and printed to stdout in the following ways:
Windows: C, C++, C#
PlogConverter.exe ... --renderTypes=TeamCity -o TCLogsDir ...
type TCLogsDir\MyProject.plog_TeamCity.txt
Linux/macOS: C, C++
plog-converter ... -t teamcity -o report_tc.txt ...
cat report_tc.txt
Windows/Linux/macOS: Java
Support will be available soon.
After a successful build, the analyzer report will appear in a new tab with the information on this build:
To navigate through the code, click the line number to the left of the diagnostic rule. Navigation works if the report contains an absolute path to the source file, the project is open in an IDE (Eclipse, Visual Studio, IntelliJ IDEA), and the TeamCity plugin is installed.
The blame-notifier utility automates notifying developers who committed code for which the PVS-Studio analyzer has issued warnings. The analyzer report is passed to blame-notifier along with additional parameters; the utility finds the files that triggered warnings and generates an HTML report for each "guilty" developer. It is also possible to send a full report that contains all warnings related to each "guilty" developer.
The blame-notifier utility is available only with an Enterprise license. To order the license, please write to us.
Note. The utility's name differs between platforms: under Windows it is BlameNotifier.exe, under Linux and macOS - blame-notifier. When a specific OS is not meant, the name "blame-notifier" is used in this document to avoid duplication.
The blame-notifier utility on Linux and macOS requires .NET Core Runtime 3.1.
The BlameNotifier utility can be found in the PVS-Studio installation directory ("C:\Program Files (x86)\PVS-Studio\" by default).
For debian-based systems:
wget -q -O - https://files.viva64.com/etc/pubkey.txt | \
sudo apt-key add -
sudo wget -O /etc/apt/sources.list.d/viva64.list \
https://files.viva64.com/etc/viva64.list
sudo apt-get update
sudo apt-get install blame-notifier
For yum-based systems:
wget -O /etc/yum.repos.d/viva64.repo \
https://files.viva64.com/etc/viva64.repo
yum update
yum install blame-notifier
For zypper-based systems:
wget -q -O /tmp/viva64.key https://files.viva64.com/etc/pubkey.txt
sudo rpm --import /tmp/viva64.key
sudo zypper ar -f https://files.viva64.com/rpm viva64
sudo zypper update
sudo zypper install blame-notifier
Installation:
brew install viva64/pvs-studio/blame-notifier
Update:
brew upgrade blame-notifier
Use the "--help" flag to display basic information about the utility:
blame-notifier --help
An example of using the blame-notifier utility (in one line):
blame-notifier path/to/PVS-Studio.log
--VCS Git
--recipientsList recipients.txt
--server ... --sender ... --login ... --password ...
Here's a quick description of the utility's parameters:
When using the utility, at least one of the flags that set the list of report recipients must be specified: '--recipientsList' or '--vcsBasedRecipientsList'.
If necessary, these flags can be used jointly.
File format with a list of report recipients:
# Recipients of the full report
username_1 *email_1
...
username_N *email_N
# Recipients of individually assigned warnings
username_1 email_1
...
username_N email_N
You can comment out a line with the "#" symbol. For full report recipients, add the "*" symbol at the beginning or end of the email address. The full report will include all warnings, sorted by developer.
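For instance, a recipients list could be assembled and checked like this (the names and addresses below are made up):

```shell
# Build a sample recipients list: the starred address receives the
# full report, the plain address receives only individual warnings.
cat > recipients.txt <<'EOF'
# Recipients of the full report
john_doe *john_doe@example.com
# Recipients of individually assigned warnings
jane_roe jane_roe@example.com
EOF
# Count full-report recipients (non-comment lines carrying '*'):
grep -v '^#' recipients.txt | grep -c '\*'
```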
The filtering masks look like this: MessageType:MessageLevels.
"MessageType" can take one of the following values: GA, OP, 64, CS, MISRA, Fail.
"MessageLevels" can take a value of 1 to 3.
Different masks can be combined with ";" (without spaces), for example:
--analyzer=GA:1,2;64:1
In this case, general analysis (GA) warnings of levels 1 and 2 and 64-bit warnings of level 1 will be handled.
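The mask syntax can be illustrated with a small shell snippet that splits such a value into its groups; this only demonstrates the format and is not part of the utility:

```shell
# Split a filter mask into groups at ';', then each group into the
# message type and its comma-separated levels at ':'.
MASK='GA:1,2;64:1'
echo "$MASK" | tr ';' '\n' | while IFS=':' read -r type levels; do
  echo "type=$type levels=$levels"
done
```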
This article discusses integration of PVS-Studio into the continuous integration process on Windows. Integration into the CI process on Linux is discussed in the article "How to run PVS-Studio on Linux".
Before getting to the subject of this article, note that running PVS-Studio solely on the build server works, but is not the most efficient approach. A better solution is to build a system that performs source code analysis at two levels: locally on the developers' machines and on the build server.
This concept stems from the fact that the earlier a defect is detected, the less expensive and difficult it is to fix. For that reason, you want to find and fix bugs as soon as possible, and running PVS-Studio on the developers' machines makes this easier. We recommend using the incremental analysis mode, which allows you to have analysis automatically initiated only for recently modified code after the build.
However, this solution does not guarantee that defects will never get to the version control system. It is to track such cases that the second security level - regular static analysis on the build server - is needed. Even if a bug does slip in, it will be caught and fixed in time. With the analysis integrated into night builds, you will get a morning report about the errors made the day before and be able to fix the faulty code quickly.
Note. It is not recommended to have the analyzer check every commit on the server, as the analysis process may take quite a long time. If you do need to use it in this way and your project is built with MSBuild build system, use the incremental analysis mode of command line module 'PVS-Studio_Cmd.exe'. For details about this mode, see the section "Incremental analysis in command line module 'PVS-Studio_Cmd.exe'" of this paper. You can also use utility 'CLMonitor.exe' (for C and C++ code only) to analyze your source files in this mode (regardless of the build system). To learn more about the use of 'CLMonitor.exe' utility, see the section "Compiler monitoring system" of this paper.
Preparing for integration of PVS-Studio into the CI process is an important phase that will help you save time in the future and use static analysis more efficiently. This section discusses the specifics of PVS-Studio customization that will make further work easier.
You need administrator privileges to install PVS-Studio. Unattended installation is performed by running the following command from the command line (in one line):
PVS-Studio_setup.exe /verysilent /suppressmsgboxes
/norestart /nocloseapplications
Executing this command will initiate installation of all available PVS-Studio components. Please note that PVS-Studio may require a restart to complete installation if, for example, the files being updated are locked. If you run the installer without the 'NORESTART' flag, it may restart the computer without any prior notification or dialogue.
The package includes the 'PVS-Studio-Updater.exe' utility, which checks for analyzer updates. If updates are available, it downloads and installs them on the local machine. To run the utility in 'silent' mode, use the same options as for installation:
PVS-Studio-Updater.exe /verysilent /suppressmsgboxes
The settings file is generated automatically when you run Visual Studio with the PVS-Studio plugin installed or the C and C++ Compiler Monitoring UI (Standalone.exe); it can then be edited or copied to other machines. The information about the license is also stored in the settings file. The default location of this file is:
%AppData%\PVS-Studio\Settings.xml
To learn more about unattended deployment of PVS-Studio, see the article "Unattended deployment of PVS-Studio".
Before running the analyzer, you need to configure it to optimize handling of the warning list and (if possible) speed up the analysis process.
Note. The options discussed below can be changed by manually editing the settings file or through the settings page's interface of the Visual Studio plug-in or Compiler Monitoring UI.
It may often be helpful to exclude certain files or even entire directories from analysis - this will allow you to keep the code of third party libraries unchecked, thus reducing the overall analysis time and ensuring that you will get only warnings relevant to your project. The analyzer is already configured by default to ignore some files and paths such as the boost library. To learn more about excluding files from analysis, see the article "Settings: Don't Check Files".
At the phase of analyzer integration, you also want to turn off those PVS-Studio diagnostics that are irrelevant to the current project. Diagnostics can be disabled both individually and in groups. If you know which diagnostics are irrelevant, turn them off right away to speed up the check. Otherwise, you can turn them off later. To learn more about disabling diagnostic rules, see the article "Settings: Detectable Errors".
When integrating static analysis into an existing project with a large codebase, the first check may reveal multiple defects in its source code. The developer team may lack the resources required for fixing all such warnings, and then you need to hide all the warnings triggered by the existing code so that only warnings triggered by newly written/modified code are displayed.
To do this, use the mass warning suppression mechanism, described in detail in the article "Mass Suppression of Analyzer Messages".
Note 1. If you need to hide only single warnings, use the false positive suppression mechanism described in the article "Suppression of false alarms".
Note 2. Using SonarQube, you can specify how warnings issued within a certain period are displayed. You can use this feature to have the analyzer display only those warnings that were triggered after the integration (that is, turn off the warnings triggered by old code).
Integrating PVS-Studio into the CI process is relatively easy. In addition, it provides means for convenient handling of analysis results.
Integration of PVS-Studio with the SonarQube platform is possible only if you own an Enterprise license. You can order one by emailing us.
The principles of analyzing projects based on different build systems are described below, as well as the utilities for working with the analysis results.
This section discusses the most effective way of analyzing MSBuild / Visual Studio solutions and projects, i.e. Visual Studio solutions (.sln) and Visual C++ (.vcxproj) and Visual C# (.csproj) projects.
Project types listed above can be analyzed from the command line by running the 'PVS-Studio_Cmd.exe' module, located in PVS-Studio's installation directory. The default location is 'C:\Program Files (x86)\PVS-Studio\'.
You can modify analysis parameters by passing various arguments to 'PVS-Studio_Cmd.exe'. To view the list of all available arguments, enter the following command:
PVS-Studio_Cmd.exe --help
The analyzer has one obligatory argument, '--target', which is used to specify the target object for analysis (a .sln, .vcxproj, or .csproj file). The other arguments are optional; they are discussed in detail in the article "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line".
The following example demonstrates how to start analysis of a .sln file (in one line):
PVS-Studio_Cmd.exe --target "targetsolution.sln" --platform "Any CPU"
--output "results.plog" --configuration "Release"
Executing this command will initiate analysis of .sln file 'targetsolution.sln' for platform 'Any CPU' in 'Release' configuration. The output file ('results.plog') will be created in the directory of the solution under analysis. The check will be performed with the standard analyzer settings since no specific settings have been specified.
The 'PVS-Studio_Cmd.exe' module employs a number of non-zero exit codes, which it uses to report the final analysis status. An exit code is a bit mask representing all states that occurred while the utility was running. In other words, a non-zero exit code does not necessarily indicate an error in the utility's operation. For a detailed description of exit codes, see the above-mentioned article "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line".
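Since the exit code is a bit mask, an individual status can be tested with a bitwise AND. A shell sketch follows; the numeric values here are stand-ins for illustration only, not actual PVS-Studio status codes (the real values are listed in the article mentioned above):

```shell
# Check whether a particular status bit is set in an exit code.
EXIT_CODE=12   # stand-in value, as if returned by PVS-Studio_Cmd.exe
BIT=8          # stand-in flag value for one status
if [ $(( EXIT_CODE & BIT )) -ne 0 ]; then
  echo "status bit $BIT is set in exit code $EXIT_CODE"
fi
```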
If you use the analyzer regularly, you may want it to issue warnings triggered only by newly written/modified code. With nightly builds on the build server, this would allow you to view only those warnings that were triggered by mistakes made on the previous day.
To turn on this mode, run the 'PVS-Studio_Cmd.exe' module with the command line argument '--suppressAll'. When this flag is present, the utility will add all the messages to the database of suppressed warnings (.suppress files of the corresponding projects) after saving the analysis results. This will prevent those messages from appearing at the next check. In case you need to view the old warnings again, the complete analysis log can be found in the same directory where the .plog file with new messages is located.
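Assuming the solution from the earlier example, a baseline-suppression run might look like this (in one line; the other arguments are illustrative and depend on your configuration):

```
PVS-Studio_Cmd.exe --target "targetsolution.sln" --platform "Any CPU"
--configuration "Release" --output "results.plog" --suppressAll
```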
To learn more about the mass warning suppression mechanism, see the article "Mass Suppression of Analyzer Messages".
Note. When using the SonarQube platform, you can keep track of new messages without applying the suppression mechanisms. To do this, configure it to display changes only for the past day.
PVS-Studio's incremental analysis mode allows you to check only those files that have been modified/affected since the last build. This mode is available in both the Visual Studio plug-in and the command line module. With incremental analysis, only warnings triggered by modified code will be displayed, thus reducing the analysis time by excluding unaffected parts of the solution from analysis.
This mode is useful when your continuous integration system is configured to run an automatic incremental build every time changes in the version control system are detected; that is, when the project is built and analyzed on the build server many times during the day.
The use of incremental analysis in the 'PVS-Studio_Cmd.exe' module is controlled by the '--incremental' flag, which accepts one of several modes (for example, 'Scan' to collect information about modified files and 'Analyze' to analyze them).
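For example, an incremental run in the 'Analyze' mode might look like this (in one line; the argument values are illustrative):

```
PVS-Studio_Cmd.exe --target "targetsolution.sln" --incremental Analyze
--output "results.plog"
```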
To learn more about PVS-Studio's incremental analysis, see the article "PVS-Studio's incremental analysis mode".
Note. There are a few details to keep in mind about this mode. Specifically, you could encounter a file locking issue when PVS-Studio uses Visual C++'s preprocessor ('cl.exe'). It has to do with the fact that the 'cl.exe' compiler may lock a file while preprocessing it, causing writing of this file to fail. When the Clang preprocessor is used, this issue is much rarer. Please keep this in mind when configuring the server to run incremental analysis rather than full-fledged analysis at night.
If you need to analyze CMake projects, it is recommended that you convert them into Visual Studio solutions and continue to work with these. This will allow you to use the 'PVS-Studio_Cmd.exe' module's capabilities in full.
If your project uses a build system other than MSBuild, you will not be able to analyze it with the command line module 'PVS-Studio_Cmd.exe'. The package, however, includes utilities to make it possible to analyze such projects too.
The PVS-Studio Compiler Monitoring system, or CLMonitoring, is designed to provide 'seamless' integration of PVS-Studio into any build system under Windows that employs one of the preprocessors supported by the command line module 'PVS-Studio.exe' for compilation.
The monitoring server (CLMonitor.exe) monitors the launches of processes corresponding to the target compiler and collects information about these processes' environment. The server monitors only those processes that run under the same user profile where it has been launched.
Supported compilers:
Before integrating the monitoring server into the build process, start the 'CLMonitor.exe' module with the argument 'monitor':
CLMonitor.exe monitor
This command will tell the monitoring server to launch itself in monitoring mode and terminate, while the build system will be able to continue with its tasks. Meanwhile, the second CLMonitor process (spawned by the first) will still be running and monitoring the build process.
Once the build is complete, you will need to launch the 'CLMonitor.exe' module in client mode to generate preprocessed files and start static analysis proper:
CLMonitor.exe analyze -l "c:\ptest.plog" -u "c:\ptest.suppress" -s
This command contains the following arguments: '-l' specifies the path to the output log file, '-u' specifies the path to the suppress file, and '-s' tells the analyzer to take the suppress file into account when generating the log.
To learn more about the use of the compiler monitoring system, see the article "Compiler Monitoring System in PVS-Studio".
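To summarize, a complete monitoring session might look like this (the build command in the middle is just an example; any build of any system can be traced):

```
CLMonitor.exe monitor
msbuild "C:\Projects\Project.sln" /t:Rebuild
CLMonitor.exe analyze -l "c:\ptest.plog"
```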
Note. The compiler monitoring system has a number of drawbacks stemming from the natural limitations of this approach, namely the impossibility to guarantee a 100% intercept of all the compiler launches during the build process (for example, when the system is heavily loaded). Another thing to remember is that when several build processes are running in parallel, the system may intercept compiler launches related to another build.
Note. In direct integration mode, the analyzer can check only C/C++ code.
Direct integration may be necessary when you can use neither the command line module 'PVS-Studio_Cmd.exe' (since the project is built with a system other than MSBuild) nor the compiler monitoring system (see the note in the corresponding section).
In that case, you need to integrate a direct call of the analyzer ('PVS-Studio.exe') into the build process and provide it with all the arguments required for preprocessing. That is, the analyzer must be called for the same files that the compiler is called for.
To learn more about direct integration into build automation systems, see the article "Direct integration of the analyzer into build automation systems (C/C++)".
Once the check has finished, the analyzer outputs a .plog file in XML format. This file is not intended to be read by the programmer directly. The package, however, includes special utilities that provide convenient ways of working with the .plog file.
The analysis results can be filtered even before the analysis starts by using the No Noise setting. When working on a large code base, the analyzer inevitably generates a large number of warnings, and it is often impossible to fix them all straight away. Therefore, to concentrate on fixing the most important warnings first, you can make the analysis less "noisy" with this option: it completely disables the generation of Low Certainty (level 3) warnings. After the analysis is restarted, messages of this level will disappear from the analyzer's output.
When circumstances allow, and all of the more important messages have been fixed, the 'No Noise' mode can be switched off, and all of the messages that disappeared before will become available again.
To enable this setting, use the Specific Analyzer Settings page.
'PlogConverter.exe' is used to convert the analyzer report into one of the formats that could be read by the programmer:
This example demonstrates how to use 'PlogConverter.exe' utility (in one line):
PlogConverter.exe test1.plog -o "C:\Results" -r "C:\Test"
-a GA:1 -t Html
This command converts the 'test1.plog' file into an .html file that will include the first-level diagnostic messages of the GA (general-analysis) group. The resulting report will be written to 'C:\Results', while the original .plog file will stay unchanged.
To see full help on 'PlogConverter' utility's parameters, run the following command:
PlogConverter.exe --help
Note. The 'PlogConverter' utility comes with its source files (in C#), which can be found in the archive 'PlogConverter_src.zip'. You can adapt the algorithm of parsing a .plog file's structure to create your own output format.
To learn more about 'PlogConverter', see the article "Managing the Analysis Results (.plog file)".
Analysis results can be imported into the SonarQube platform, which performs continuous code quality inspection. To do this, use the 'sonar-pvs-studio-plugin' included in the package. This plugin allows you to add warnings issued by PVS-Studio to the SonarQube server's message database. This, in turn, enables you to view bug occurrence/fixing statistics, navigate the warnings, view the documentation on diagnostic rules, and so forth.
Once added to SonarQube, all PVS-Studio messages are assigned type Bug. SonarQube's interface keeps the same layout of message distribution across diagnostic groups as in the analyzer.
To learn more about integrating analysis results into SonarQube, see the article "Integration of PVS-Studio analysis results into SonarQube".
Sending analysis report copies to developers is an effective way to inform them about the results. It can be done with the help of special utilities such as SendEmail. SonarQube provides this option as well.
Another way to inform the developers is to use the 'BlameNotifier' utility, which also comes with the PVS-Studio package. This application allows you to generate reports in a flexible way. For example, you can configure it to send individual reports to the developers who committed faulty code, while team leaders, development managers, etc. get a complete log with data about all the errors found and the developers responsible for them.
For basic information about the utility, run the following command:
BlameNotifier.exe --help
To learn more about 'BlameNotifier', see the article "Managing the Analysis Results (.plog file)", section "Notifying the developer team".
If you have any questions, please feel free to contact us at support@viva64.com.
The server incremental analysis mode from the command line is available only under the PVS-Studio Enterprise license. You can request a trial Enterprise license on the download page. IDE incremental analysis on a developer's machine is available under all PVS-Studio license types.
It is possible to run analysis on the entire code base regularly – say, once a day during night builds. However, to get the most out of the analyzer, you need to be able to find and fix bugs as early as possible. In other words, the optimal way to use a static analyzer is to run it on freshly written code right away. Of course, having to manually run a check every time you modify a few files and wait for it to finish makes this scenario complicated and incompatible with the idea of intense development and debugging of new code. It's simply inconvenient, after all. However, PVS-Studio has a solution to this problem.
Note that it is advisable to examine all the diagnostic messages generated after the very first full analysis of the code base, and fix any bugs found. As for the remaining warnings, you can either mark them as false positives, turn off irrelevant diagnostics or diagnostic sets, or suppress whatever messages you haven't addressed to get back to them some other time. This approach allows you to keep the warning list uncluttered by meaningless and irrelevant warnings.
To enable the post-build incremental analysis mode, click Extensions > PVS-Studio > Analysis after Build (Modified Files Only):
This option is enabled by default.
Once this mode is activated, PVS-Studio will automatically analyze all recently modified files in the background immediately after the build is finished. When the analysis starts, an animated PVS-Studio icon will appear in the Windows taskbar notification area:
The drop-down menu from the notification area includes commands that allow you to pause or abort the current check.
To keep track of modified files, the analyzer relies on the build system. A complete rebuild will cause it to check all the files comprising the project, so use incremental build to check only the modified files. If any bugs are detected during incremental analysis, their number will be displayed on the tab of the PVS-Studio window in Visual Studio, and a Windows notification will pop up:
Clicking on the icon in the notification area (or on the notification itself) will take you to the PVS-Studio Output window.
When working within Visual Studio, you can set an incremental analysis timeout or the maximum level of analyzer warnings to display. These settings can be tweaked in PVS-Studio > Options > Specific Analyzer Settings > IncrementalAnalysisTimeout and PVS-Studio > Options > Specific Analyzer Settings > IncrementalResultsDisplayDepth.
The incremental analysis mode can also be used with Visual Studio solutions when using the command-line utility (PVS-Studio_Cmd.exe). This practice is good for speeding up analysis on the CI server and employs incremental build approaches similar to those used in MSBuild.
To set up incremental analysis on the server, use the following commands:
PVS-Studio_Cmd.exe ... --incremental Scan ...
MSBuild.exe ... -t:Build ...
PVS-Studio_Cmd.exe ... --incremental Analyze ...
A complete description of all the modes of incremental analysis is given in the article "PVS-Studio's incremental analysis mode".
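For illustration, the same three steps might look like this with concrete (assumed) arguments:

```
PVS-Studio_Cmd.exe --target "Project.sln" --incremental Scan
MSBuild.exe "Project.sln" -t:Build
PVS-Studio_Cmd.exe --target "Project.sln" --incremental Analyze --output "results.plog"
```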
If you need to use incremental analysis along with the compiler monitoring system, you simply need to "trace" the incremental build, i.e. the compilation of files modified since the previous build. This way, you will be able to analyze only modified or new code.
This scenario is natural to the compiler monitoring system: it is based on tracing compiler invocations during the build and thus collects all the information needed to analyze the source files whose compilation was traced. Therefore, the type of analysis performed depends on the type of build being traced: full or incremental.
To learn more about the compiler monitoring system, see the article "Compiler Monitoring System in PVS-Studio".
To check a CMake project, you can use a JSON Compilation Database file. To have the required compile_commands.json file generated, add the following flag to the CMake call:
cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=On <src-tree-root>
To enable incremental analysis for such projects, add the --incremental flag to the analyze command:
pvs-studio-analyzer analyze ... --incremental ...
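Putting the pieces together, a full run over a JSON Compilation Database might look like this (the job count, output names, and converter settings are placeholders):

```shell
cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=On ..
cmake --build .
pvs-studio-analyzer analyze -f compile_commands.json -o project.log -j4 --incremental
plog-converter -a GA:1,2 -t errorfile -o project.err project.log
```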
In this mode, the analyzer's work does not depend on the build system, so file dependencies and modification history are stored in the .PVS-Studio directory. This directory must be preserved for incremental analysis to work.
If your CMake generator doesn't allow generating a compile_commands.json file, or if this file can't be generated conveniently, you may directly integrate PVS-Studio into CMake: using the direct integration module will allow you to run incremental analysis along with incremental build.
You can specify an analyzer invocation command after the compiler command in the scripts of the Make build system or other similar systems:
$(CXX) $(CFLAGS) $< ...
pvs-studio --source-file $< ...
This will let incremental analysis and incremental build run together, with the information about modified files retrieved from the build system.
A collection of examples demonstrating the integration of PVS-Studio into Makefile can be found in the GitHub repository: pvs-studio-makefile-examples.
You can check any project without integrating the analyzer into a build system by running the following commands:
pvs-studio-analyzer trace -- make
pvs-studio-analyzer analyze ...
Instead of make, you can substitute any build command with all the necessary parameters.
In this mode, the analyzer traces and logs the build system's child processes and spots compilation processes among them. If you build the project in incremental build mode, only the modified files will be compiled and, therefore, only they will be analyzed.
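For example, a traced incremental build followed by analysis might look like this (the build command and converter settings are illustrative):

```
pvs-studio-analyzer trace -- make -j4
pvs-studio-analyzer analyze -o project.log
plog-converter -a GA:1,2 -t tasklist -o project.tasks project.log
```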
The incremental analysis mode of C# projects under Linux and macOS is the same as the one described above in the section "Command line analyzer for MSBuild projects (PVS-Studio_Cmd.exe)" except for the following:
To turn on the post-build incremental analysis mode, click Analyze > PVS-Studio > Settings > PVS-Studio > Misc > Run incremental analysis on every build:
Once this mode is activated, PVS-Studio will automatically analyze all recently modified files in the background immediately after the build is finished. All issued warnings will be collected in the PVS-Studio window:
To enable incremental analysis in the maven plugin, set the incremental flag:
<plugin>
<groupId>com.pvsstudio</groupId>
<artifactId>pvsstudio-maven-plugin</artifactId>
....
<configuration>
<analyzer>
....
<incremental>true</incremental>
....
</analyzer>
</configuration>
</plugin>
Once this mode is activated, the pvsstudio:pvsAnalyze command will start the analysis of only those files that have been modified since the last check.
To enable incremental analysis in the gradle plugin, set the incremental flag:
apply plugin: com.pvsstudio.PvsStudioGradlePlugin
pvsstudio {
....
incremental = true
....
}
Once this mode is activated, the pvsAnalyze command will start the analysis of only those files that have been modified since the last check.
SonarQube is an open-source platform developed by SonarSource for continuous inspection of code quality. It performs automatic reviews with static analysis of code to detect bugs, code smells, and security vulnerabilities in 20+ programming languages. SonarQube offers reports on duplicated code, coding standards, unit tests, code coverage, code complexity, comments, bugs, and security vulnerabilities. It can also record metric history and provide evolution graphs.
This page showcases SonarQube's capabilities: sonarqube.org.
To import analysis results into SonarQube, PVS-Studio provides a special plugin, which allows you to add messages produced by PVS-Studio to the message base of the SonarQube server. SonarQube's Web interface allows you to filter the messages, navigate the code to examine bugs, assign tasks to developers and keep track of the progress, analyze bug amount dynamics, and measure the code quality of your projects.
The following plugins for SonarQube are available for PVS-Studio users:
The guide on installing and starting the SonarQube server can be found on the page Installing the Server.
Once the SonarQube server is installed, copy the plugin (sonar-pvs-studio-plugin.jar) to this directory:
SONARQUBE_HOME/extensions/plugins
Depending on what language the analysis results refer to, install the corresponding plugins from the list below (some of them may be installed by default, depending on the SonarQube edition in use):
Restart the SonarQube server.
A Quality Profile is a collection of diagnostic rules to apply during an analysis. You can include PVS-Studio diagnostics into existing profiles or create a new profile. Every profile is bound to a particular programming language, but you can create several profiles with different rule sets. The ability to perform any action on quality profiles is granted to members of the sonar-administrators group.
A new profile is created using the menu command Quality Profiles -> Create:
To include PVS-Studio diagnostics into the active profile, select the desired repository through Rules -> Repository:
After that, click on the Bulk Change button to add all of the diagnostics to your profile, or select the desired diagnostics manually.
Diagnostics activation window:
You can also filter diagnostics by tags before selecting them for your profile:
After creating/tweaking your profiles, set one of them as the default profile:
The default profile is applied automatically to source files written in the corresponding language. You don't necessarily have to group your profiles based on the utilities used: you can create a single profile for your project and add diagnostics from different utilities to it.
When a new PVS-Studio version is released, new diagnostics may appear, so you will have to update the plugin on the SonarQube server and add the new rules to the Quality Profile that uses PVS-Studio diagnostics. One of the sections below describes how to set up automatic updates.
Analysis results can be imported into SonarQube using the SonarQube Scanner utility. It requires a configuration file named sonar-project.properties and stored in the project's root directory. This file contains analysis parameters for the current project, and you can pass all or some of these settings as launch parameters of the SonarQube Scanner utility.
Below we will discuss the standard scanner launch scenarios for importing PVS-Studio analysis results into SonarQube on different platforms. SonarQube Scanner will automatically pick up the configuration file sonar-project.properties in the current launch directory.
MSBuild projects are checked with the PVS-Studio_Cmd.exe utility.
Option 1
By launching PVS-Studio_Cmd.exe once, you can get both the analysis report and the configuration file sonar-project.properties:
PVS-Studio_Cmd.exe ... -o Project.plog --sonarqubedata ...
This is what the scanner launch command looks like:
sonar-scanner.bat ^
-Dsonar.projectKey=ProjectKey ^
-Dsonar.projectName=ProjectName ^
-Dsonar.projectVersion=1.0 ^
-Dsonar.pvs-studio.reportPath=Project.plog
Option 2
If you're using SonarQube Scanner for MSBuild, use the following commands to analyze the project and upload the results to SonarQube:
SonarScanner.MSBuild.exe begin ... /d:sonar.pvs-studio.reportPath=Project.plog
MSBuild.exe Project.sln /t:Rebuild ...
PVS-Studio_Cmd.exe -t Project.sln ... -o Project.plog
SonarScanner.MSBuild.exe end
Add the following lines to the Java project under analysis (depending on the project type):
Maven
<outputType>xml</outputType>
<outputFile>output.xml</outputFile>
<sonarQubeData>sonar-project.properties</sonarQubeData>
Gradle
outputType = 'xml'
outputFile = 'output.xml'
sonarQubeData='sonar-project.properties'
Just like in the previous case, the configuration file will be created automatically once the Java analyzer has finished the check.
The scanner launch command will look like this:
sonar-scanner.bat ^
-Dsonar.projectKey=ProjectKey ^
-Dsonar.projectName=ProjectName ^
-Dsonar.projectVersion=1.0 ^
-Dsonar.pvs-studio.reportPath=output.xml
For C and C++ projects checked on Linux or macOS, you will have to create the configuration file manually after the analysis. As an example, it may include the following parameters:
sonar.projectKey=my:project
sonar.projectName=My project
sonar.projectVersion=1.0
sonar.pvs-studio.reportPath=report.xml
sonar.sources=.
This is what the converter and scanner launch commands look like:
plog-converter ... -t xml -o report.xml ...
sonar-scanner.sh \
-Dsonar.projectKey=ProjectKey \
-Dsonar.projectName=ProjectName \
-Dsonar.projectVersion=1.0 \
-Dsonar.pvs-studio.reportPath=report.xml
To fine-tune the analysis further, you can compose the configuration file manually from the following settings (or edit the automatically created file when checking MSBuild and Java projects):
The other standard scanner configuration parameters are described in the general documentation on SonarQube.
When subproject directories are located at different levels, the standard settings make it impossible to upload the results of several subprojects into one SonarQube project, because such a subproject structure requires additional adjustment of the indexer in the SonarScanner utility.
You can set up such a project correctly by using modules where each module is configured for one subproject:
sonar.projectKey=org.mycompany.myproject
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src
sonar.modules=module1,module2
module1.sonar.projectName=Module 1
module1.sonar.projectBaseDir=modules/mod1
module2.sonar.projectName=Module 2
module2.sonar.projectBaseDir=modules/mod2
There are two ways to specify the path to the PVS-Studio analysis file.
The first way
Specify different reports for modules:
....
sonar.modules=module1,module2
module1.sonar.projectName=Module 1
module1.sonar.projectBaseDir=modules/mod1
module1.sonar.pvs-studio.reportPath=/path/to/report1.plog
module2.sonar.projectName=Module 2
module2.sonar.projectBaseDir=modules/mod2
module2.sonar.pvs-studio.reportPath=/path/to/report2.plog
The second way
Specify one report at the project level:
sonar.projectKey=org.mycompany.myproject
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src
sonar.pvs-studio.reportPath=/path/to/report.plog
sonar.modules=module1,module2
....
In this case, each module will pick up from the report only the warnings relevant to it. Unfortunately, the SonarScanner output will contain warnings (WARN) about missing files for the files from other modules, but all analysis results will be imported correctly.
PVS-Studio's capabilities of detecting potential vulnerabilities are described on the page PVS-Studio SAST (Static Application Security Testing).
Security-related information on the code under analysis provided by PVS-Studio is additionally highlighted by SonarQube in the imported analysis results.
PVS-Studio warnings can be grouped based on different security standards through Issues -> Tag or Rules -> Tag:
You can also select a particular CWE ID if available (when a warning falls into several CWE IDs at once, it will be marked with a single cwe tag; use prefixes in the warning text to filter by IDs):
In SonarQube [7.8, 8.4], a new filter by security categories is available on the Issues and Rules pages. Using this filter, SonarQube lets you classify rules according to security standards, such as:
Rules and issues from PVS-Studio mapped with CWE ID can also be grouped in the following menu (Security Category -> CWE):
Note. Starting from SonarQube 8.5, only security-related Issues/Rules of the 'Vulnerability' or 'Security Hotspot' type get to the Security Category tab.
All the rules in PVS-Studio are of the 'Bug' type by default. If you need to change the rule type from 'Bug' to 'Vulnerability' for rules that have a CWE ID, add the following line to the '$SONARQUBE_HOME\conf\sonar.properties' server configuration file:
sonar.pvs-studio.treatPVSWarningsAsVulnerabilities=active
For the changes to apply, you need to restart the SonarQube server. Once you have done this, rules with a CWE ID will have the 'Vulnerability' type, and newly generated issues will take this change into account.
Note. This change will not affect issues generated earlier; you will need to change their type manually.
The configuration file sonar-project.properties provides the following options:
sonar.pvs-studio.cwe=active
sonar.pvs-studio.misra=active
They are used to enable the inclusion of CWE and MISRA IDs into analyzer warnings:
Warnings can be filtered by tags anytime, regardless of the specified options.
The tab Projects -> Your Project -> Measures shows various code metrics calculated each time a check is launched. All collected information can be visualized as graphs. The Security section allows you to track the number of warnings with CWE and MISRA tags for the current project:
The other, general, metrics of PVS-Studio warnings can be viewed in a separate section, PVS-Studio.
Most actions available to SonarQube users are standard for this platform. These actions include viewing and sorting analysis results, changing warning status, and so on. For this reason, this section will focus only on the additional features that come with the PVS-Studio plugin.
PVS-Studio warnings are divided into several groups, some of which may be irrelevant to the current project. That's why we added an option allowing you to filter diagnostics by the following tags when creating a profile or viewing the analysis results:
PVS-Studio diagnostics group |
SonarQube tag |
General analysis |
pvs-studio#ga |
Micro-optimizations |
pvs-studio#op |
64-bit errors |
pvs-studio#64 |
MISRA |
pvs-studio#misra |
Customers' specific diagnostics |
pvs-studio#cs |
Analyzer fails |
pvs-studio#fails |
These are the standard tags used in PVS-Studio warnings:
Code quality control standards |
SonarQube tag |
CWE |
cwe |
CERT |
cert |
MISRA |
misra |
Unlike the pvs-studio# tag group, the standard SonarQube tags may include, depending on the active quality profile, messages from other tools in addition to those from PVS-Studio.
The tab Projects -> Your Project -> Measures shows various code metrics calculated each time a check is launched. When installing the analyzer plugin, a new section, PVS-Studio, is also added, where you can find useful information on your project and have graphs plotted:
When working with a large code base, the analyzer will inevitably generate a lot of messages, and it's usually impossible to address them all at once. In order to focus on the most important warnings and keep the statistics "uncluttered", you can do some tweaking of the analyzer settings and log filtering before launching SonarQube Scanner. There are several ways to do this.
1. You can have less "noise" in the analyzer's output by using the No Noise option. It allows you to completely turn off messages of the Low Certainty level (which is the third level). After restarting the analysis, all messages of this level will disappear from the analyzer's output. To enable this option, use the settings window "Specific Analyzer Settings" in Windows or refer to the general documentation for Linux and macOS.
2. You can speed up the check by excluding external libraries, test code, etc. from analysis. To add files and directories to the exceptions list, use the settings window "Don't Check Files" in Windows or refer to the general documentation for Linux and macOS.
3. If you need additional control over the output, for example, message filtering by level or error code, use the message filtering and conversion utility (Plog Converter) for the current platform.
4. If you need to change a warning's level, you can do so in the settings of the analyzer itself rather than in SonarQube. PVS-Studio has the following certainty levels: High, Medium, Low, and Fails. The respective levels in SonarQube are Critical, Major, Minor, and Info. See the page "Additional diagnostics configuration" on how to change warnings' levels.
The update procedure can be automated with the SonarQube Web API. Suppose you have set up an automatic PVS-Studio update system on your build server (as described in the article "Unattended deployment of PVS-Studio"). To update the PVS-Studio plugin and add the new diagnostics to the Quality Profile without using the Web interface, perform the following steps (the example below is for Windows; the same algorithm applies to other operating systems):
Suppose your SonarQube server is installed in C:\Sonarqube\ and is running as a service; PVS-Studio is installed in C:\Program Files (x86)\PVS-Studio\. The script which will automatically update the PVS-Studio distribution and sonar-pvs-studio-plugin will then look like this:
set PVS-Studio_Dir="C:\Program Files (x86)\PVS-Studio"
set SQDir="C:\Sonarqube\extensions\plugins\"
rem Update PVS-Studio
cd /d "C:\temp\"
xcopy %PVS-Studio_Dir%\PVS-Studio-Updater.exe . /Y
call PVS-Studio-Updater.exe /VERYSILENT /SUPPRESSMSGBOXES
del PVS-Studio-Updater.exe
rem Stop the SonarQube server
sc stop SonarQube
rem Wait until the server is stopped
ping -n 60 127.0.0.1 >nul
xcopy %PVS-Studio_Dir%\sonar-pvs-studio-plugin.jar %SQDir% /Y
sc start SonarQube
rem Wait until the server is started
ping -n 60 127.0.0.1 >nul
curl http://localhost:9000/api/qualityprofiles/search
-v -u admin:admin
The server's response will be as follows:
{
"profiles": [
{
"key":"c++-sonar-way-90129",
"name":"Sonar way",
"language":"c++",
"languageName":"c++",
"isInherited":false,
"isDefault":true,
"activeRuleCount":674,
"rulesUpdatedAt":"2016-07-28T12:50:55+0000"
},
{
"key":"c-c++-c-pvs-studio-60287",
"name":"PVS-Studio",
"language":"c/c++/c#",
"languageName":"c/c++/c#",
"isInherited":false,
"isDefault":true,
"activeRuleCount":347,
"rulesUpdatedAt":"2016-08-05T09:02:21+0000"
}
]
}
Suppose you want the new diagnostics to be added to your PVS-Studio profile for the languages 'c/c++/c#'. The key for this profile is the value c-c++-c-pvs-studio-60287.
Note that a profile key may contain special characters, so the key must be URL-encoded when passed in the POST request. In our example, the profile key c-c++-c-pvs-studio-60287 must be converted into c-c%2B%2B-c-pvs-studio-60287.
The tags parameter is used to pass the tags of the diagnostics you want activated in your profile. To activate all PVS-Studio diagnostics, pass the pvs-studio tag.
The request for adding all diagnostics to a PVS-Studio profile will look like this (in one line):
curl --request POST -v -u admin:admin --data
"profile_key=c-c%2B%2B-c-pvs-studio-60287&tags=pvs-studio"
http://localhost:9000/api/qualityprofiles/activate_rules
The PVS-Studio analyzer provides a work statistics gathering feature to see the number of detected messages (including suppressed ones) across different certainty levels and rule sets. Gathered statistics can be filtered and represented as a diagram in a Microsoft Excel file, showing the change dynamics for messages in the project under analysis.
PVS-Studio can save launch statistics when analyzing source code through the Microsoft Visual Studio plugin (supported in versions starting with Visual Studio 2010). To enable the statistics saving feature, use the 'Save Solution Statistics' option available on the 'Specific Analyzer Settings' page which can be accessed through the 'PVS-Studio|Options...' menu item of the plugin.
The statistics are saved in the folder '%AppData%/PVS-Studio/Statistics'. For each analyzed Visual Studio solution, an associated subfolder with the same name is created. For each solution analysis launch, once the analysis is over, an individual statistics file is created which contains the analysis results (when analyzing Visual Studio projects from the command line, the statistics are also collected). The statistics file contains information about the number of output messages (both new ones and old ones hidden by means of the message suppression mechanism) in each PVS-Studio rule set (General Analysis, Optimization, 64-bit Analysis), for each error code and certainty level. Messages marked as false positives are not included in the statistics.
Each Visual Studio solution analysis launch is saved into an xml.zip file, which is an ordinary zip archive containing an XML file in a simple format. Thanks to the open format, you can interpret these files on your own or use the PVS-Studio plugin's UI, which is described in detail further in this article.
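As an illustration of the open format, such an archive can be unpacked programmatically. A minimal Python sketch; the exact XML schema and file name are not reproduced in this article, so the code simply parses whatever XML document the archive contains:

```python
import xml.etree.ElementTree as ET
import zipfile

def read_statistics(path):
    """Open a PVS-Studio statistics xml.zip archive and return the XML root."""
    with zipfile.ZipFile(path) as archive:
        # The archive is expected to contain a single XML document.
        inner_name = archive.namelist()[0]
        with archive.open(inner_name) as xml_file:
            return ET.parse(xml_file).getroot()

# Hypothetical usage (path under %AppData%/PVS-Studio/Statistics/<SolutionName>):
# root = read_statistics(r"...\MySolution\launch.xml.zip")
```

From the returned root element you can walk the tree with the usual ElementTree API once you have inspected the schema of your own statistics files.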
PVS-Studio provides an interface to filter the gathered analysis launch statistics and represent them by means of Microsoft Excel.
To use this dialog, you need Microsoft Excel (2007 or later) installed on your computer, as well as the Visual Studio Tools for Office runtime (installed together with the Visual Studio IDE by default).
You can open the statistics filtering dialog by clicking on the 'PVS-Studio|Analysis Statistics...' menu item (also available in C and C++ Compiler Monitoring UI):
Figure 1 - PVS-Studio analyzer launch statistics filtering dialog
The 'Include Suppressed Messages' checkbox allows showing/hiding suppressed analyzer messages. Messages disabled on the Detectable Errors (PVS-Studio|Options...) settings page are also filtered off when making the Excel document (but xml.zip statistics files themselves contain the complete information about all the error codes).
The PVS-Studio statistics filtering dialog includes only the "freshest" data per day. That is, if you ran the analysis several times during one day, only the latest statistics file will be used (the launch time is recorded in the xml statistics file). However, the complete statistics are saved for every launch and, if necessary, can be found in the folder '%AppData%/PVS-Studio/Statistics/%SolutionName%'.
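The same "freshest file per day" selection can be reproduced when processing the statistics folder with your own scripts. A sketch in Python, assuming the file modification time reflects the launch time (the dialog itself relies on the timestamp stored inside the xml statistics file):

```python
import os
from datetime import datetime

def latest_per_day(folder):
    """Return one statistics file path per calendar day: the most recent one."""
    newest = {}
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        ts = datetime.fromtimestamp(os.path.getmtime(path))
        day = ts.date()
        if day not in newest or ts > newest[day][0]:
            newest[day] = (ts, path)
    return [path for _, path in newest.values()]

# Hypothetical usage:
# files = latest_per_day(r"%AppData%\PVS-Studio\Statistics\MySolution")
```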
Once you have selected the required solutions in the list, set up the filters, and specified the time span you want to see the statistics for, an Excel document with the corresponding statistics data is created and can be opened by clicking on the 'Show in Excel' button (Figure 2).
Figure 2 - Statistics across message rule sets
The 'statistics across message rule sets' diagram shows the change dynamics for the total number of messages for each of the analyzer's rule sets, according to the filters set up previously.
Though opened through the PVS-Studio dialog, these diagrams are ordinary Excel documents providing the complete functionality of Excel's interface (filtering, scaling, etc.) and can be saved for further use.
In this article we describe working in the Windows environment. Working in the Linux environment is described in the article "How to run PVS-Studio on Linux".
As with most other software, installing PVS-Studio requires administrative privileges.
Unattended setup is performed by specifying command line parameters, for example:
PVS-Studio_Setup.exe /verysilent /suppressmsgboxes
/norestart /nocloseapplications
PVS-Studio may require a reboot if, for example, files that require an update are locked. To install PVS-Studio without a reboot, use the '/NORESTART' flag. Please also note that if the PVS-Studio installer is started in silent mode without this flag, the computer may be rebooted without any warnings or dialogs.
By default, all available PVS-Studio components will be installed. In case this is undesirable, the required components can be selected by the 'COMPONENTS' switch (following is a list of all possible components):
PVS-Studio_setup.exe /verysilent /suppressmsgboxes
/nocloseapplications /norestart /components= Core,
Standalone,MSVS,MSVS\2010,MSVS\2012,MSVS\2013,MSVS\2015,MSVS\2017,
MSVS\2019,IDEA,JavaCore,Rider
Brief description of components:
During the installation of PVS-Studio, all instances of Visual Studio / IntelliJ IDEA / Rider should be shut down; however, to prevent loss of user data, PVS-Studio does not shut down Visual Studio / IntelliJ IDEA / Rider itself.
The installer will exit with code '100' if it is unable to install the extension (*.vsix) for any of the selected versions of Visual Studio.
PVS-Studio-Updater.exe can check for analyzer updates and, if an update is available, download and install it on the local system. To start the updater tool "silently", the same arguments can be used:
PVS-Studio-Updater.exe /VERYSILENT /SUPPRESSMSGBOXES
If there are no updates on the server, the updater will exit with code '0'. As PVS-Studio-Updater.exe performs a local deployment of PVS-Studio, devenv.exe should not be running during the update either.
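In an automation script it is convenient to branch on these exit codes ('0' from the updater meaning "no updates", '100' from the installer meaning a failed extension install). A minimal Python sketch; the helper simply runs a command and returns its exit code, and the updater invocation is shown commented out since it only exists on a machine with PVS-Studio deployed:

```python
import subprocess

def run_and_get_exit_code(cmd):
    """Run a command and return its exit code."""
    return subprocess.run(cmd).returncode

# On a build server this could be:
# code = run_and_get_exit_code(
#     ["PVS-Studio-Updater.exe", "/VERYSILENT", "/SUPPRESSMSGBOXES"])
# if code == 0:
#     print("No updates on the server")
```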
If you connect to the Internet via a proxy with authentication, PVS-Studio-Updater.exe will prompt you for proxy credentials. If the proxy credentials are correct, PVS-Studio-Updater.exe will save them in the Windows Credential Manager and use them for future update checks. If you want to use the utility with a proxy that does not require authorization, pass the '/proxy=ip:port' flag.
Another possible installation option is to use the Chocolatey package manager. When using this installation option, the package manager itself has to already be installed.
The installation command of the latest available PVS-Studio package version:
choco install pvs-studio
The installation command of a specific PVS-Studio package version:
choco install pvs-studio --version=7.05.35617.2075
When installing the package, you can also set the list of installed components in a similar way to those listed in the section "Unattended deployment" of this document. To specify components, use the flag '--package-parameters'. The components are equivalent to those described above and differ only in the syntax of some parameters:
Only the 'Core' component is installed by default. When listing the installation components, there is no need to specify 'Core'.
Here is an example of a command that installs the analyzer with the 'Core' and 'Standalone' components:
choco install pvs-studio --package-parameters="'/Standalone'"
Different ways to enter the license when using various environments are covered in the documentation section "How to enter the PVS-Studio License and what's the next move".
If you want to deploy PVS-Studio on many computers, you can install the license without entering it manually: place a valid 'Settings.xml' file into a folder under the user's profile.
If several users share one computer, each of them should have their own license.
Default settings location is the following:
%USERPROFILE%\AppData\Roaming\PVS-Studio\Settings.xml
It is a user-editable XML file, but it can also be edited through the PVS-Studio IDE plugin on a target machine. Please note that any settings that should keep their default values can be omitted from the 'Settings.xml' file.
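Distributing the license file to several user profiles can then be scripted. A sketch in Python, assuming a standard Windows user-profile layout; the paths and user names in the usage example are illustrative:

```python
import os
import shutil

def deploy_settings(settings_xml, users_root, users):
    """Copy a prepared Settings.xml into each user's roaming profile."""
    for user in users:
        target_dir = os.path.join(
            users_root, user, "AppData", "Roaming", "PVS-Studio")
        os.makedirs(target_dir, exist_ok=True)
        shutil.copy(settings_xml, os.path.join(target_dir, "Settings.xml"))

# Hypothetical usage:
# deploy_settings(r"\\share\pvs\Settings.xml", r"C:\Users", ["alice", "bob"])
```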
To speed up the analysis, you can use a distributed build system, for example, IncrediBuild. The analysis of C/C++ code in PVS-Studio can be divided into 2 stages: preprocessing and analysis itself. Each of these steps can be executed remotely by the distributed build system. To analyze each C/C++ compiled file, PVS-Studio first launches an external preprocessor, and then the C++ analyzer itself. Each such process can be executed remotely.
Depending on the type of the checked project, the PVS-Studio analysis is launched either through the PVS-Studio_Cmd.exe utility (for MSBuild projects) or using the compiler call monitoring utility, CLMonitor.exe \ Standalone.exe (for any build system). One of these utilities will first run the preprocessor (cl.exe or clang.exe for Visual C++ projects; for the rest, the same process that was used for compilation) for each checked file, and then the C++ analyzer itself, PVS-Studio.exe.
Setting the 'ThreadCount' option to a value greater than '16' (or greater than the number of processor cores, if the processor has more than 16 cores) is available only with a PVS-Studio Enterprise license. Please contact us to order a license.
These processes run concurrently, depending on the 'PVS-Studio|Options...|Common AnalyzerSettings|ThreadCount' setting. By increasing the number of concurrently scanned files, with the help of this setting, and distributing the execution of these processes to remote machines, you can significantly (several times) reduce the total analysis time.
Here is an example of speeding up the PVS-Studio analysis by using the IncrediBuild distributed system. For this, we'll need the IBConsole management utility. We will use the Automatic Interception Interface, which allows remotely executing any process intercepted by this system. Launching the IBConsole utility for distributed analysis using PVS-Studio will look as follows:
ibconsole /command=analyze.bat /profile=profile.xml
The analyze.bat file must contain a launch command for the analyzer, PVS-Studio_Cmd.exe or CLMonitor.exe, with all the necessary parameters for them (more detailed information about this can be found in the relevant sections of analyzer documentation). Profile.xml file contains the configuration for the Automatic Interception Interface. Here is an example of such a configuration for the analysis of the MSBuild project using PVS-Studio_Cmd.exe:
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<Profile FormatVersion="1">
<Tools>
<Tool Filename="PVS-Studio_Cmd" AllowIntercept="true" />
<Tool Filename="cl" AllowRemote="true" />
<Tool Filename="clang" AllowRemote="true" />
<Tool Filename="PVS-Studio" AllowRemote="true" />
</Tools>
</Profile>
Let's see what each record in this file means. We can see that the AllowIntercept attribute with the 'true' value is specified for PVS-Studio_Cmd. This means that a process with such a name will not itself be executed in a distributed manner; instead, the automatic interception system will track the child processes generated by this process.
For the preprocessor cl and clang processes and the C/C++ analyzer PVS-Studio process, the AllowRemote attribute is specified. This means that processes with such names, after being intercepted from the AllowIntercept processes, will be potentially executed on other (remote) IncrediBuild agents.
Before running IBConsole, you must specify the 'PVS-Studio|Options...|Common AnalyzerSettings|ThreadCount' setting, according to the total number of cores available on all of IncrediBuild agents. If it's not done, there will be no effect from using IncrediBuild!
Note: during the analysis of Visual C++ projects, PVS-Studio uses the clang.exe supplied in the PVS-Studio distribution to preprocess C/C++ files before the analysis, instead of the cl.exe preprocessor. This is done to speed up preprocessing, as clang does it faster than cl. Some older versions of IncrediBuild perform a distributed launch of the clang.exe preprocessor not quite correctly, resulting in preprocessing errors. Therefore, clang should not be specified in the IBConsole configuration file if your version of IncrediBuild handles clang incorrectly.
The type of preprocessor used during the analysis is specified with the 'PVS-Studio|Options...|Common AnalyzerSettings|Preprocessor' setting. If you choose the 'VisualCpp' value, PVS-Studio will use only cl.exe for preprocessing: it can be executed in a distributed manner, but is slower than clang, which in this case cannot be distributed. You should choose this setting depending on the type of the project and the number of agents available for the analysis: with a large number of agents, choosing VisualCpp is reasonable; with a small number of agents, local preprocessing with clang might prove to be faster.
For a distributed analysis using CLMonitor / Compiler Monitoring UI (Standalone.exe), you must change the configuration file as follows: replace PVS-Studio_Cmd with CLMonitor or Standalone (depending on whether the check is triggered from the UI or from the command line); cl, if necessary, should be replaced with the type of the preprocessor which is used during build (gcc, clang). For example:
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<Profile FormatVersion="1">
<Tools>
<Tool Filename="CLMonitor" AllowIntercept="true" />
<Tool Filename="gcc" AllowRemote="true" />
<Tool Filename="PVS-Studio" AllowRemote="true" />
</Tools>
</Profile>
When specifying the ThreadCount settings, please note, that the coordinator machine of the analysis (i.e. the one, which runs the PVS-Studio_Cmd/CLMonitor/Standalone) will be responsible for processing the results coming from all of the PVS-Studio.exe processes. This job cannot be distributed - therefore, especially when ThreadCount is set to a very high value (more than 50 processes simultaneously), it is worth thinking about how to "unload" the coordinator machine from the analysis tasks (i.e., from performing the processes of the analyzer and preprocessor). This can be done by using the '/AvoidLocal' IBConsole flag, or in the settings of local IncrediBuild agent on the coordinator machine.
PVS-Studio is distributed as Deb/Rpm packages or as an archive. If you install from the repository, you will receive updates when a new version of the program is released.
The distribution kit includes the following files:
You can install the analyzer using the following methods:
wget -q -O - https://files.viva64.com/etc/pubkey.txt | \
sudo apt-key add -
sudo wget -O /etc/apt/sources.list.d/viva64.list \
https://files.viva64.com/etc/viva64.list
sudo apt-get update
sudo apt-get install pvs-studio
wget -O /etc/yum.repos.d/viva64.repo \
https://files.viva64.com/etc/viva64.repo
yum update
yum install pvs-studio
wget -q -O /tmp/viva64.key https://files.viva64.com/etc/pubkey.txt
sudo rpm --import /tmp/viva64.key
sudo zypper ar -f https://files.viva64.com/rpm viva64
sudo zypper update
sudo zypper install pvs-studio
You can download PVS-Studio for Linux here.
sudo gdebi pvs-studio-VERSION.deb
or
sudo dpkg -i pvs-studio-VERSION.deb
sudo apt-get -f install
sudo dnf install pvs-studio-VERSION.rpm
or
sudo zypper install pvs-studio-VERSION.rpm
or
sudo yum install pvs-studio-VERSION.rpm
or
sudo rpm -i pvs-studio-VERSION.rpm
tar -xzf pvs-studio-VERSION.tgz
sudo ./install.sh
After a successful analyzer installation on your computer, to check a project follow the instructions on this page: "How to run PVS-Studio on Linux".
PVS-Studio is distributed as a graphical installer, archive or via the Homebrew repository. Using installation from a repository, you can get analyzer updates automatically. The distribution kit includes the following files:
You can install the analyzer using the following methods:
Installation:
brew install viva64/pvs-studio/pvs-studio
Update:
brew upgrade pvs-studio
Run the .pkg file and follow the instructions of the installer:
Unpack the archive and place the executables in a directory available in PATH.
tar -xzf pvs-studio-VERSION.tgz
After a successful analyzer installation on your computer, to check a project follow the instructions on this page: "How to run PVS-Studio on Linux and macOS".
Mass suppression of analyzer warnings can be useful in the following scenarios:
In such cases, analyzer warnings can be suppressed in a special way so that they won't get into newly generated reports. This mode doesn't require modification of the project's source files.
The analyzer supports the analysis of source code in C, C++, C# and Java programming languages. The analysis can be performed under Windows, Linux and macOS. In this regard, ways of warning suppression might differ depending on the used platform and projects' type. For this reason, please go to the section that suits you and follow the given instruction.
The warning suppression mechanism is based on using special files, which are added next to the project (or in any specified location). These files contain the messages tagged for this project as "unnecessary". Note that modifying a source file that contains tagged messages, and, in particular, shifting lines, will not lead to the re-emergence of these messages. However, editing the line that triggered an analyzer message can lead to its repeated occurrence, since this message has effectively become "new".
For Microsoft Visual Studio, you can use the PVS-Studio plugin, which conveniently integrates in IDE. It allows you to check the entire solution, specific projects or files, and it also supports incremental analysis.
In PVS-Studio menu, the Suppress Messages section opens a window for working with suppressed analyzer warnings.
In that window, several actions are available:
A special window can be used to view analysis results in Visual Studio.
This window allows navigating along analyzer warnings and jump to the code to fix them. The PVS-Studio window provides a wide range of options for filtering and sorting the results. It is also possible to quickly navigate to the documentation of the selected diagnostic.
Additional actions for each message are available in the context menu by clicking the right mouse button on the message.
The command for suppressing a selected warning is available here. When opening the menu on an already suppressed warning, the option for restoring it will also be available.
In the same way you can also remove the "suppressed warning" mark by using the 'Un-Suppress Selected Messages' context menu item. Selected warnings will be un-suppressed, and they will be removed from the suppress files, provided the corresponding project is opened in the IDE.
After creating a suppress file, you can add it to the project as a non-compiled/text file using the 'Add|Existing Item...' menu command. If a project includes at least one suppress file, then files next to the project file itself will be ignored. This allows keeping suppress and project files in different directories. Only one suppress file per project is supported; the rest will be ignored.
You can add a suppress file to the solution. You can do this by selecting 'Add|New Item...' command. The same as for projects, only one suppress file is supported - the rest will be ignored.
Suppress file of the solution level allows suppressing warnings in all projects of the corresponding solution. If projects have separate suppress files, the analyzer will take into account both warnings suppressed in a suppress file of the solution, and in a suppress file of a project.
When suppressing warnings in cases when a suppress file is added to the solution, the following rules apply:
Warning suppression can also be used right from the command line. The command-line PVS-Studio_Cmd.exe utility automatically picks up existing suppress files when running an analysis. It can also be used to suppress previously generated analyzer warnings saved in a plog file. To suppress warnings from an existing plog file, run PVS-Studio_Cmd.exe with the '--suppressAll' flag. For example (in one line):
"C:\Program Files (x86)\PVS-Studio\PVS-Studio_Cmd.exe"
-t "Solution.sln" -o "results.plog" --suppressAll SuppressOnly
Execution of this command will generate suppress files for all of the projects in the Solution.sln for which warnings in results.plog have been generated.
The '--suppressAll' flag supports two modes. 'SuppressOnly' runs suppression for the given plog without restarting the analysis. 'AnalyzeAndSuppress' first performs the analysis, writes an output plog file, and only then suppresses all of the warnings from it. In this mode, only new analyzer warnings will be shown on every subsequent analysis run (as warnings from previous runs will have been suppressed).
PVS-Studio on Windows can be used not only for MSBuild \ Visual Studio projects. Using the compiler monitoring system, you can run static analysis for all types of projects that use one of the compilers supported by the PVS-Studio C++ analyzer.
When running the analysis after build monitoring, using the command
clmonitor.exe analyze --useSuppressFile %PathToSuppressFile%
you can pass a path to suppress file that will be used during the analysis, via the additional '--useSuppressFile' (-u) flag.
Besides the command line CLMonitor.exe tool, you can also use compiler monitoring through the C and C++ Compiler Monitoring UI tool. This tool allows you to check code regardless of the used compiler or build system, and then lets you work with the analysis results by providing a user interface similar to the PVS-Studio plugin for Visual Studio.
However, if you have a project which can be opened in Visual Studio, we recommend using the PVS-Studio plugin for Visual Studio to view the analysis results. The reason for it is that capabilities of a built-in code editor in Compiler Monitoring UI are far more limited than the code editor of Visual Studio. To open analysis report in Visual Studio, you can save the analyzer report in Compiler Monitoring UI, then reopen it.
The menu for running the analysis and suppressing warnings looks as follows.
After clicking "Analyze Your Files" menu item, you will see the "Compiler Monitoring (C and C++)" window.
To filter analyzer warnings, you need to specify a file with suppressed warnings before starting the analysis. You can create and maintain such file through the "Message Suppression..." menu, which is the same as the one presented in the section about Visual Studio. After the analysis is finished, only new errors will be shown in the PVS-Studio output window. Without specifying the file, the analyzer will show all the results.
Under Linux and macOS, the commands for suppression and filtration of analyzer warnings can only be performed from the command line. If necessary, this process can be automated on a server that performs an automated analyzer launch. There are several ways of using this mechanism, depending on the way of analyzer integration.
To suppress all of the analyzer's warnings (first time and in subsequent cases), you need to execute the command:
pvs-studio-analyzer suppress /path/to/report.log
If you want to suppress a warning for a specific file, use the --file(-f) flag:
pvs-studio-analyzer suppress -f test.c /path/to/report.log
In addition to the file itself, you can explicitly specify the line number to suppress:
pvs-studio-analyzer suppress -f test.c:22 /path/to/report.log
This entry suppresses all warnings that are located on line 22 of the 'test.c' file.
This flag can be specified repeatedly, thus suppressing warnings in several files at once.
In addition to explicit file specification, there is a mechanism for suppressing specific diagnostics:
pvs-studio-analyzer suppress -v512 /path/to/report.log
The --warning(-v) flag can also be specified repeatedly:
pvs-studio-analyzer suppress -v1040 -v512 /path/to/report.log
The above-mentioned --file and --warning flags can be combined to suppress warnings more precisely:
pvs-studio-analyzer suppress -f test.c:22 -v512 /path/to/report.log
So the above command will suppress all v512 diagnostic warnings on line 22 of the 'test.c' file.
Analysis of the project can be performed as always. At the same time, the suppressed warnings will be filtered out:
pvs-studio-analyzer analyze ... -o /path/to/report.log
plog-converter ...
This way, the suppressed warnings will be saved in the current directory, in a file named suppress_base.json, which should be stored with the project. New suppressed warnings will be appended to this file. If there is a need to specify a different name or location of the file, then the commands above may be supplemented by specifying the path to the file with suppressed warnings.
Direct integration of the analyzer might look like this:
.cpp.o:
$(CXX) $(CFLAGS) $(DFLAGS) $(INCLUDES) $< -o $@
pvs-studio --cfg $(CFG_PATH) --source-file $< --language C++ \
--cl-params $(CFLAGS) $(DFLAGS) $(INCLUDES) $<
In this integration mode, the C++ analyzer core is called directly, so the analyzer cannot analyze the source files and filter the results at the same time. Filtering and warning suppression therefore require additional commands.
To suppress all the warnings, you must run the command:
pvs-studio-analyzer suppress /path/to/report.log
To filter a new analysis log according to the previously generated suppression file, you will need to use the following commands:
pvs-studio-analyzer filter-suppressed /path/to/report.log
plog-converter ...
The default name for the file with the suppressed warnings remains as suppress_base.json, and can be changed, if necessary.
You can use a special window to view analysis results in IntelliJ IDEA.
This window allows navigating through the warnings found and jumping to the source code to fix them. The PVS-Studio window provides a wide range of options for filtering and sorting the results. It is also possible to quickly navigate to the documentation of the selected analyzer rule.
Additional options for working with each warning are available in the context menu by right-clicking the warning itself. The command for suppressing a selected warning is also available here.
PVS-Studio plugin for IntelliJ IDEA also allows you to suppress all of the generated messages in one click.
By default, a suppression file is available at {projectPath}/.PVS-Studio/suppress_base.json, but you can change this path in the settings of the plugin.
Whichever suppression method you use, the suppressed warnings will not appear in the subsequent analysis reports.
To suppress all of the warnings, use this command:
./gradlew pvsSuppress "-Ppvsstudio.report=/path/to/report.json"
"-Ppvsstudio.output=/path/to/suppress_base.json"
To suppress all of the warnings, use this command:
mvn pvsstudio:pvsSuppress "-Dpvsstudio.report=/path/to/report.json"
"-Dpvsstudio.output=/path/to/suppress_base.json"
To suppress all of the warnings, use this command:
java -jar pvs-studio.jar --convert toSuppress
--src-convert "/path/to/report.json"
--dst-convert "/path/to/suppress_base.json"
SonarQube (formerly Sonar) is an open source platform designed for continuous inspection and measurement of code quality. SonarQube combines the results of the analysis to a single dashboard, keeping track of the history of previous analysis runs, which allows you to see the overall trend of software quality during development. An additional advantage is the ability to combine results of different analyzers.
So, after getting the analysis results from one or more analyzers, you should go to the list of warnings and click the "Bulk Change" button, which opens the following menu.
In this window, you can mark all of the analyzer's warnings as "won't fix" and then work only with new errors.
Configure static analysis on the build server and developers' computers. Regularly correct new analyzer warnings and do not let them accumulate. It is also worth planning a review to correct suppressed warnings in the future.
Additional control over code quality can be achieved by sending results via mail. It is possible to send warnings to only those developers who had written erroneous code using BlameNotifier tool, which is included in PVS-Studio distribution.
For some users it may be convenient to view results in Jenkins or TeamCity using the PVS-Studio plugin, and send a link to such a page.
This section describes all the currently available ways of suppressing analyzer warnings. The collected material is based on the documentation for the PVS-Studio analyzer, but covers this topic in more detail than the documentation does. The general information may not be very informative for new users, so you should also check out the documentation below.
While handling a large number of messages (on the first-time verification of large-scale projects, when filters have not been set up yet and false positives haven't been marked, the number of generated messages can come close to tens of thousands), it is reasonable to use the navigation, search and filtering mechanisms integrated into the PVS-Studio output window.
The main purpose of PVS-Studio output window is to simplify the analyzed project's source code navigation and reviewing of potentially dangerous fragments in it. Double-clicking any of the messages in the list will automatically open the file corresponding to this message in the code editor, will place the cursor on the desired line and highlight it. The quick navigation buttons (see figure 1) allow for an easy review of the potentially dangerous fragments in the source code without the need of constant IDE windows switching.
Figure 1 — Quick navigation buttons
To present the analysis results, PVS-Studio output window utilizes a virtual grid, which is capable of fast rendering and sorting of generated messages even for huge large-scale projects (virtual grid allows you to handle a list containing hundreds of thousands of messages without any considerable hits to performance). The far left grid column can be used to mark messages you deem interesting, for instance the ones you wish to review later. This column allows sorting as well, so it won't be a problem to locate all the messages marked this way. The "Show columns" context menu item can be used to configure the column display in the grid (figure 2):
Figure 2 — Configuring the output window grid
The grid supports multiline selection with standard Ctrl and Shift hotkeys, while the line selection persists even after the grid is resorted on any column. The "Copy selected messages to clipboard" context menu item (or Ctrl+C hotkey) allows you to copy the contents of all selected lines to a system clipboard.
PVS-Studio output window filtering mechanisms make it possible to quickly find and display either a single diagnostic message or the whole groups of these messages. The window's toolstrip contains several toggle buttons which can be used to turn the display of their corresponding message groups on or off (figure 3).
Figure 3 — Message filtration groups
All of these switches can be subdivided into three sets: filters corresponding to the message certainty levels, filters corresponding to the diagnostic rule sets, and filters corresponding to False Alarm markings within the source code. Turning these filters off will immediately hide all of their corresponding messages in the output list.
Detailed description of the levels of certainty and sets of diagnostic rules is given in the documentation section "Getting Acquainted with the PVS-Studio Static Code Analyzer".
The quick filtering mechanism (quick filters) allows you to filter the analysis report by the keywords that you can specify. The quick filtering panel could be opened with the "Quick Filters" button on the output window's toolstrip (figure 4).
Figure 4 — Quick filtering panel
Quick filtering displays messages according to filters on 3 keywords: the message's code, the message's text, and the file containing the message. For example, it is possible to display all the messages containing the word 'odd' from the 'command.cpp' file. Changes to the output list are applied immediately after the keyword edit box loses focus. The 'Reset Filters' button erases all of the currently applied filtering keywords.
All of the filtering mechanisms described above can be combined, for example filtering by the level of displayed messages and by the file which should contain them at the same time, while simultaneously excluding all the messages marked as false positives.
If you need to navigate to an individual message in the grid, you can use the quick jumping dialog, which is accessible through the "Navigate to ID..." context menu item (figure 5):
Figure 5 - Invoking the quick jumping dialog
Figure 6 - Navigate to ID dialog
Each message in the PVS-Studio output list possesses a unique identifier — the serial number under which the message was added to the grid, displayed in the ID column. The quick navigation dialog allows you to select and auto-focus the message with the designated ID, regardless of the grid's current selection and sorting. Note that the IDs of the messages contained within the grid are not necessarily strictly sequential, as a fraction of them may be hidden by the filtering mechanism; navigation to such hidden messages is impossible.
Large-scale projects are often developed by a distributed team, so a single person is rarely able to judge every message the static analyzer generates for false positives, much less correct all the corresponding sections of the source code. In this case it makes sense to delegate such messages to the developer who is directly responsible for the code fragment in question.
PVS-Studio allows you to automatically generate a special TODO comment containing all the information required to analyze the marked code fragment, and to insert it into the source code. Such a comment immediately appears in the Visual Studio Task List window (in Visual Studio 2010, comment parsing should be enabled in the settings: Tools->Options->Text Editor->C++->Formatting->Enumerate Comment Tasks->true), provided that the 'Tools->Options->Environment->Task List->Tokens' list contains the corresponding TODO token (it is present there by default). The comment can be inserted using the 'Add TODO comments for selected messages' command of the context menu (figure 7):
Figure 7 - Inserting the TODO comment
The TODO comment is inserted into the line responsible for generating the analyzer's message and contains the error code, the analyzer message itself, and a link to the online documentation for this type of error. Such a comment can be easily located by anyone with access to the sources thanks to the Visual Studio Task List. And with the help of the comment's text itself, the potential issue can be examined and corrected even by a developer who does not have PVS-Studio installed or does not possess the analyzer's report for the full project (figure 8).
Figure 8 — Visual Studio Task List
The Task List window can be accessed through the View->Other Windows->Task List menu. The TODO comments are displayed in the 'Comments' section of the window.
This section describes the analyzer's message suppression features. It provides ways to control both separate analyzer messages under specific source code lines and whole groups of messages related, for example, to the use of C/C++ macros. The described method, based on comments of a special format, allows disabling individual analyzer rules or modifying the text of the analyzer's messages.
The features described in the following section are applicable to both the C/C++ and C# PVS-Studio analyzers, unless stated otherwise.
Besides helpful messages, any code analyzer produces a number of so-called "false alarms": situations when it is absolutely obvious to the programmer that the code does not contain an error, but it is not obvious to the analyzer. Consider a sample of code:
obj.specialFunc(obj);
The analyzer finds it suspicious that a method is called on an object while the same object is passed as the argument, so it issues the V678 warning for this code. The programmer, however, may know that using the 'specialFunc' method this way is intended, in which case the warning is a false positive. You can notify the analyzer that the V678 warning issued on this code is a false positive, either manually or using a context menu command.
To suppress a false positive you can add a special comment in code:
obj.specialFunc(obj); //-V678
Now the analyzer will not generate the V678 warning on this line, and after the message is marked as a false alarm, it disappears from the error list. You may enable the display of messages marked as 'False Alarms' in the PVS-Studio error list by changing the value of the 'PVS-Studio -> Options... -> Specific Analyzer Settings -> DisplayFalseAlarms' settings option.
You may add this comment manually, without using the "Mark selected messages as False Alarms" command, but you must follow the comment's format: two slashes, a minus sign (without a space), and the error code. You may also use the special commands provided by PVS-Studio, available from the PVS-Studio context menu (figure 1).
Figure 1 - Commands to work with the mechanism of false alarm suppression
Let's study the available commands concerning False Alarm suppression:
1. Mark selected messages as False Alarms. You may choose one or more false alarms in the list (figure 2) and use this command to mark the corresponding code as safe.
Figure 2 - Choosing warnings before executing the "Mark selected messages as False Alarms" command
2. Remove False Alarm marks from selected messages. This command removes the comment that marks code as safe. This function might be helpful if, for instance, you were in a hurry and marked some code fragment as safe by mistake. Like in the previous case, you must choose the required messages from the list.
We do not recommend marking messages as false alarms without first reviewing the corresponding code fragments, since that contradicts the ideology of static analysis. Only the programmer can determine whether a particular error message is false or not.
Usually compilers employ #pragma-directives to suppress individual error messages. Consider a code sample:
unsigned arraySize = n * sizeof(float);
The compiler generates the following message:
warning C4267: 'initializing' : conversion from 'size_t' to 'unsigned int', possible loss of data
x64Sample.cpp 151
This message can be suppressed with the following construct:
#pragma warning (disable:4267)
To be more exact, it is better to arrange the code in the following way to suppress this particular message:
#pragma warning(push)
#pragma warning (disable:4267)
unsigned arraySize = n * sizeof(float);
#pragma warning(pop)
The PVS-Studio analyzer uses comments of a special kind instead. Suppressing PVS-Studio's message for the same code line looks as follows:
unsigned arraySize = n * sizeof(INT_PTR); //-V103
This approach was chosen to make the target code cleaner. The point is that PVS-Studio can inform about issues in the middle of multi-line expressions as, for instance, in this sample:
size_t n = 100;
for (unsigned i = 0;
i < n; // the analyzer will inform of the issue here
i++)
{
// ...
}
To suppress this message using the comment, you just need to write:
size_t n = 100;
for (unsigned i = 0;
i < n; //-V104
i++)
{
// ...
}
But if we had to add a #pragma-directive into this expression, the code would look much less clear.
Storing the markings in the source code lets you modify the code without the risk of losing information about the lines with errors.
It is also possible to use a separate database storing information in the following approximate pattern: error code, file name, line number. This pattern is implemented in a different PVS-Studio feature known as "Mass Suppression".
It goes without saying that the analyzer can locate potential problems within macros (#define) and produce diagnostic messages accordingly. But these messages are produced at the positions where the macro is used, i.e. where the macro's body is actually expanded into the code. An example:
#define TEST_MACRO \
int a = 0; \
size_t b = 0; \
b = a;
void func1()
{
TEST_MACRO // V1001 here
}
void func2()
{
TEST_MACRO // V1001 here
}
To suppress these messages you can use the "Mark as False Alarm" command. The code containing the suppression comments will then look like this:
#define TEST_MACRO \
int a = 0; \
size_t b = 0; \
b = a;
void func1()
{
TEST_MACRO //-V1001
}
void func2()
{
TEST_MACRO //-V1001
}
But if the macro is used quite frequently, marking it as a False Alarm everywhere is inconvenient. Instead, it is possible to add a special marking to the code manually so that the analyzer marks the diagnostics inside this macro as False Alarms automatically. With this marking the code will look like this:
//-V:TEST_MACRO:1001
#define TEST_MACRO \
int a = 0; \
size_t b = 0; \
b = a;
void func1()
{
TEST_MACRO
}
void func2()
{
TEST_MACRO
}
During the verification of such code, the messages concerning issues within the macro will be immediately marked as False Alarms. It is also possible to list several diagnostics at once, separating them by commas:
//-V:TEST_MACRO:1001, 105, 201
Please note that if the macro contains another nested macro, the name of the top-level macro should be specified for automated marking.
#define NO_ERROR 0
#define VB_NODATA ((long)(77))
size_t stat;
#define CHECK_ERROR_STAT \
if( stat != NO_ERROR && stat != VB_NODATA ) \
return stat;
size_t testFunc()
{
{
CHECK_ERROR_STAT // #1
}
{
CHECK_ERROR_STAT // #2
}
return VB_NODATA; // #3
}
In the example above, the V126 diagnostic appears at three positions. To mark it as a False Alarm automatically at positions #1 and #2, one should add the following comment:
//-V:CHECK_ERROR_STAT:126
To make it work at #3 you should additionally specify this:
//-V:VB_NODATA:126
Unfortunately, it is impossible to simply specify "mark V126 inside the VB_NODATA macro" without specifying anything for the CHECK_ERROR_STAT macro, because of technical specifics of the preprocessing mechanism.
Everything written in this section about macros is also true for any code fragment. For example, if you want to suppress all V103 warnings for calls of the function 'MyFunction', you should add the following line:
//-V:MyFunction:103
Analyzer messages can be manipulated and filtered through comments of a special format. Such comments can be placed either in special diagnostic configuration files (.pvsconfig), supported by all analyzers, or directly inside the source code (C/C++ analyzer only).
The diagnostic configuration files are plain text files which are added to any Visual Studio project or solution. To add the configuration file, select the project or solution in question in the Solution Explorer window inside Visual Studio IDE, and select a context menu item 'Add New Item...'. In the following window, select the 'PVS-Studio Filters File' template (figure 3):
Figure 3 - Adding diagnostic configuration file to a solution.
Because of the specifics of some Visual Studio versions, the 'PVS-Studio Filters File' template may be absent for projects and/or solutions in some versions and editions of Visual Studio. In such a case, it is possible to add the diagnostic configuration file as a simple text file by specifying the 'pvsconfig' extension manually. Make sure that after the file is added, it is set as non-buildable in its compilation properties.
When a configuration file is added to a project, it will be valid for all the source files in this project. A solution configuration file will affect all the source files in all of the projects added to that solution.
In addition, a .pvsconfig file can be placed in the user data folder (%AppData%\PVS-Studio\) - this file will be picked up by the analyzer automatically, without the need to modify any of your project/solution files.
When using the PVS-Studio_Cmd command-line tool, you can specify the path to the .pvsconfig configuration file using the --rulesConfig (-C) parameter, for example, as follows:
PVS-Studio_Cmd.exe -t D:\project\project.sln
-C D:\project\rules.pvsconfig
The '.pvsconfig' files utilize quite a simple syntax. Any line starting with the '#' character is considered a comment and ignored. The filters themselves are written as one-line C++/C# comments, i.e. every filter should start with '//' characters.
In case of C/C++ code, the filters can also be specified directly in the source code. Please note, that this is not supported for C# projects!
Next, let's review different variants of diagnostic configurations and filters.
Let us assume that the following structure exists:
struct MYRGBA
{
unsigned data;
};
Also there are several functions that are utilizing it:
void f1(const struct MYRGBA aaa)
{
}
long int f2(int b, const struct MYRGBA aaa)
{
return int();
}
long int f3(float b, const struct MYRGBA aaa, char c)
{
return int();
}
The analyzer produces three V801 ("Decreased performance. It is better to redefine the N function argument as a reference") messages concerning these functions. For this source code such messages are false positives, as the compiler will optimize the code by itself, negating the issue. Of course it is possible to mark every single message as a False Alarm using the "Mark As False Alarm" option, but there is a better way: adding this line to the sources will suffice:
//-V:MYRGBA:801
For C/C++ projects, we advise adding such a line to the .h file near the declaration of the structure; if this is somehow impossible (for example, the structure is located in a system file), you could add this line to stdafx.h instead.
Then every one of these V801 messages will be automatically marked as a false alarm after re-verification.
The described warning suppression mechanism can be applied to more than just single words, which makes it very useful at times.
Let's examine a few examples:
//-V:<<:128
This comment will suppress the V128 warning in all the lines which contain the << operator.
buf << my_vector.size();
If you want the V128 warning to be suppressed only when writing data into the 'log' object, you can use the following comment:
//-V:log<<:128
buf << my_vector.size(); // Warning untouched
log << my_vector.size(); // Warning suppressed
Note: the comment string itself must not contain spaces.
Correct: //-V:log<<:128
Incorrect: //-V:log <<:128
When searching for the substring, spaces are ignored. But don't worry: a comment like the following one will be treated correctly:
//-V:ABC:501
AB C = x == x; // Warning untouched
AB y = ABC == ABC; // Warning suppressed
The analyzer allows the user to completely disable the output of any warning through a special comment. In this case, you should specify the number of the diagnostic you want to turn off after a double colon. The syntax pattern is as follows:
//-V::(number)
To disable a number of diagnostics, you can list their numbers separating them by commas. The syntax pattern is the following:
//-V::(number1),(number2),...,(numberN)
There is also an option to disable a group of diagnostics. The syntax pattern is the following:
//-V::GA
//-V::X64
//-V::OP
//-V::CS
//-V::MISRA
To disable several groups of diagnostics, you can list them separating by commas. The syntax pattern is the following:
//-V::X64,CS,...
To turn off all the diagnostics of the C++ or C# analyzer, use the following form:
//-V::C++
//-V::C#
For example, if you want to ignore warning V122, insert the following comment at the beginning of a file:
//-V::122
If you want to disable warnings V502, V507, and V525, then the comment will look like this:
//-V::502,507,525
Since the analyzer won't output the warnings you have specified, this can significantly reduce the size of the analysis log when some diagnostic generates too many false positives.
You can exclude files and directories matching specified masks from the analysis. This is convenient, for example, when you need to exclude the code of third-party libraries or automatically generated files.
Several examples of masks:
//V_EXCLUDE_PATH C:\TheBestProject\thirdParty
//V_EXCLUDE_PATH *\UE4\Engine\*
//V_EXCLUDE_PATH *.autogen.cs
The mask syntax is the same as for the 'FileNameMasks' and 'PathMasks' options, described in the document "Settings: Don't Check Files".
There may be situations in which a certain type of diagnostics is not relevant for the analyzed project, or one of the diagnostics produces warnings for source code that you are certain is correct. In this case, you can use group message suppression based on filtering of the analysis results. The list of available filtering modes can be accessed through the 'PVS-Studio -> Options' menu item.
Suppressing multiple messages through filters does not require restarting the analysis; the filtering results appear in the PVS-Studio output window immediately.
First, you may disable diagnostics by their codes using the "Settings: Detectable Errors" tab. On this tab, you may specify the numbers of errors that must not be shown in the analysis report. Sometimes it is reasonable to remove errors with particular codes from the report; for instance, if you are sure that errors related to explicit type conversion (codes V201, V202, V203) are not relevant for your project, you may hide them. The display of errors of a certain type can also be disabled using the "Hide all Vxxx errors" context menu command. To re-enable their display, use the "Detectable Errors" section mentioned above.
Second, you may disable analysis of some parts of the project (folders or project files) on the "Settings: Don't Check Files" tab. On this tab, you may list libraries whose files' inclusions (through the #include directive) must not be analyzed; this can reduce the number of unnecessary diagnostic messages. Suppose your project employs the Boost library. Although the analyzer generates diagnostic messages on some code from this library, you are sure that it is rather safe and well written, so perhaps there is no need to get warnings concerning its code. In this case, you may disable analysis of the library's files by specifying the path to it on the settings page. Besides, you may add file masks to exclude some files from analysis: the analyzer will not check files meeting the mask conditions. For instance, you may use this method to exclude autogenerated files from analysis.
Path masks for files mentioned in the latest generated PVS-Studio report can be appended to the 'Don't Check Files' list using the "Don't check files and hide all messages from..." context menu command for the currently selected message (figure 4).
Figure 4 — Appending path masks through the context menu
This command allows appending either a single selected file or the whole directory mask containing such a file.
Third, you may suppress individual messages by their text. On the "Settings: Keyword Message Filtering" tab, you may set up filtering of errors by their text rather than their code. If necessary, you may hide error messages containing particular words or phrases. For instance, if the report contains errors that refer to the names of the functions printf and scanf, and you think that there cannot be any errors related to them, you can simply add these two words using the editor of suppressed messages.
Sometimes, especially at the stage of introducing static analysis into a large project, you may need to 'suppress' all warnings for the existing code base, since the developers may not have the resources to fix the errors found by the analyzer in the old code. In such a case, it can be useful to 'hide' all warnings issued for existing code and track only newly introduced errors. This can be achieved with the "mass suppression of analyzer messages" mechanism. Its use in a Windows environment is described in the document "Mass Suppression of Analyzer Messages", and in a Linux environment in the relevant section of the document "How to run PVS-Studio on Linux".
In rare cases, automatically placed markers may appear in wrong places. When that happens, the analyzer will produce the same error warnings again because it will fail to find the markers. This issue occurs when the preprocessor encounters multi-line #pragma-directives of a particular type that confuse line numbering. To solve this issue, you should mark the problematic messages manually. PVS-Studio always informs about such errors with the message "V002. Some diagnostic messages may contain incorrect line number".
As with any other procedure involving mass processing of files, you must keep in mind possible access conflicts when marking messages as false alarms. Since some files might be opened in an external editor and modified there during the marking, the result of joint processing of such files cannot be predicted. That is why we recommend either keeping copies of the source code or using a version control system.
This section covers converting logs for Windows. Converting logs for Linux and macOS is described in the document "How to run PVS-Studio on Linux and macOS".
The analysis results that PVS-Studio generates after checking a project (either from the Visual Studio plugin or in command-line batch mode) are typically presented as an XML log file (".plog"). Using direct integration of the C++ analyzer into the build system produces an unparsed 'raw' log file. You can view these files in the PVS-Studio plugin for Visual Studio or in the C and C++ Compiler Monitoring UI (Standalone.exe). These formats, however, are not convenient for viewing directly in a text editor, sending via email, and so on. The PVS-Studio distribution includes, among other things, the PlogConverter utility, which allows you to convert analysis results to other formats.
When opening a log file in a text editor, a user has to deal with XML markup or a 'raw' unreadable log from the analyzer. To convert the analysis results into a more convenient format, use the PlogConverter utility, which comes with PVS-Studio and can be found in the PVS-Studio installation directory ("C:\Program Files (x86)\PVS-Studio" by default). You can also download the source code of the utility.
Use the "--help" option to display the basic information about the utility:
PlogConverter.exe --help
Let's take a closer look at the utility's parameters:
You can combine different format options by separating them with "," (no spaces), for example:
PlogConverter.exe D:\Projct\results.plog --renderTypes=Html,Csv,Totals
or
PlogConverter.exe D:\Projct\results.plog -t Html,Csv,Totals
MessageType:MessageLevels
"MessageType" can be set to one of the following types: GA, OP, 64, CS, MISRA, Fail
"MessageLevels" can be set to values from 1 to 3
You can combine different masks by separating the options with ";" (no spaces), for example (written in one line):
PlogConverter.exe D:\Projct\results.plog --renderTypes=Html,Csv,Totals
--analyzer=GA:1,2;64:1
or
PlogConverter.exe D:\Projct\results.plog -t Html,Csv,Totals -a GA:1,2;64:1
The command format reflects the following logic: convert the ".plog" into Html, Csv, and Totals formats, keeping only the general-analysis warnings (GA) of the first and second levels and the 64-bit warnings (64) of the first level.
PlogConverter.exe D:\Projct\results.plog --renderTypes=Html,Csv,Totals
--excludedCodes=V101,V102,V200
or
PlogConverter.exe D:\Projct\results.plog -t Html,Csv,Totals -d V101,V102,V200
The PlogConverter utility defines several non-zero exit codes, which do not necessarily indicate an issue with the operation of the tool itself; i.e. even when the tool returns something other than zero, it does not always mean that the tool has 'crashed'. Here is a description of all possible exit codes that PlogConverter can return.
The PVS-Studio distribution includes the BlameNotifier tool, which allows you to notify developers who have committed code in the repository that triggered analyzer warnings. You can also configure notifications about all detected warnings for a specific group of people (this can be useful, for example, for managers and team leads).
The following documentation section describes how to use the BlameNotifier tool: "Notifying the developer teams (blame-notifier utility)".
Note. The blame-notifier tool is available only with an Enterprise license. To order a license, please write to us.
When generating diagnostic messages, PVS-Studio by default produces absolute (full) paths to the files where errors have been found, so when the report is saved, these full paths get into the resulting XML plog file. This may cause trouble later, for example when you need to handle the log file on a different computer, where the paths to the source files may differ. In that case you will be unable to open the files or use the integrated code navigation for such a log file.
Although this problem can be solved by editing the paths in the XML report manually, it is much more convenient to have the analyzer generate messages with relative paths right away, i.e. paths specified relative to some fixed directory (for example, the root of the project source tree). This way of path generation lets you obtain a log file with correct paths on any other computer - you will only need to change the root relative to which all the paths in the PVS-Studio log file are expanded. The 'SourceTreeRoot' setting on the "PVS-Studio -> Options -> Specific Analyzer Settings" page tells PVS-Studio to generate such relative paths automatically and to replace their root with a marker.
Let's have a look at an example of how this mechanism is used. The 'SourceTreeRoot' option's field is empty by default, and the analyzer then always generates full paths in its diagnostic messages. Assume the project being checked is located in the "C:\MyProjects\Project1" directory. We can take the path "C:\MyProjects\" as the root of the project source tree, add it into the 'SourceTreeRoot' field, and start the analysis.
When the analysis is over, PVS-Studio will have automatically replaced the root directory we defined with a special marker. That is, in a message for the file "C:\MyProjects\Project1\main.cpp", the path to the file will appear as "|?|Project1\main.cpp". Messages for files outside the specified root directory are not affected: a message for the file "C:\MyCommonLib\lib1.cpp" will still contain the absolute path.
Later, when this log file is handled in the PVS-Studio IDE plugin, the |?| marker is automatically replaced with the value specified in the 'SourceTreeRoot' setting's field - for instance, when using the False Alarm function or message navigation. To handle the log file on another computer, you just need to define a new path to the root of the source tree (for example, "C:\Users\User\Projects\") in the IDE plugin's settings, and the plugin will expand the full paths correctly.
This option can also be used in the Independent mode of the analyzer, when it is integrated directly into a build system (make, msbuild, and so on). It will allow you to separate the process of full analysis of source files and further investigation of analysis results, which might be especially helpful when working on a large project. For example, you can perform a one-time complete check of the whole project on the build server, while analysis results will be studied by several developers on their local computers.
You can also use the 'UseSolutionDirAsSourceTreeRoot' setting described on the same page. It enables or disables the mode in which the path to the folder containing the solution (*.sln) file is used as the 'SourceTreeRoot' value. When this mode is enabled (True), the 'SourceTreeRoot' field displays the value '<Using solution path>'; the actual 'SourceTreeRoot' value saved in the settings file does not change. When 'UseSolutionDirAsSourceTreeRoot' is disabled (False), the previously set value (if any) is displayed in the 'SourceTreeRoot' field again. Thus, 'UseSolutionDirAsSourceTreeRoot' merely changes how the root path is determined: either the value specified in 'SourceTreeRoot' or the path to the folder containing the solution file is used.
PVS-Studio can be used independently of the Visual Studio IDE. The core of the analyzer is a command-line utility that analyzes C/C++ files compilable by Visual C++, GCC, or Clang. For this reason, we developed a standalone application implemented as a shell for the command-line utility, simplifying the work with the analyzer-generated message log.
PVS-Studio provides a convenient plug-in for the Visual Studio environment, allowing "one-click" analysis of this IDE's vcproj/vcxproj projects. There are, however, other build systems that should be supported as well. Although PVS-Studio's analyzer core does not depend on any particular build system format (such as MSBuild, GNU Make, NMake, CMake, ninja, and so on), users would otherwise have to carry out several steps on their own to integrate PVS-Studio's static analysis into a build system other than the VCBuild/MSBuild projects supported by Visual Studio.
These difficulties can be avoided by using the C and C++ Compiler Monitoring UI (Standalone.exe).
Figure 1 - Compiler Monitoring UI
Compiler Monitoring UI enables "seamless" code analysis regardless of the compiler or build system one is using, and then allows you to work with the analysis results through a user interface similar to that implemented in the PVS-Studio plug-in for Visual Studio. The Compiler Monitoring UI also allows the user to work with the analyzer's log obtained through direct integration of the tool into the build system when there is no Visual Studio installed. These features are discussed below.
Compiler Monitoring UI provides a user interface for a compilation monitoring system. The monitoring system itself (the console utility CLMonitor.exe) can be used independently of the Compiler Monitoring UI - for example when you need to integrate static analysis into an automated build system. To learn more about the use of the compiler monitoring system, see this documentation section.
To start monitoring compiler invocations, open the corresponding dialog: Tools -> Analyze Your Files... (Figure 2):
Figure 2 - Build process monitoring start dialog
Click "Start Monitoring". After that, CLMonitor.exe will be launched, and the main window of the tool will be minimized.
Run the build, and after it finishes, click the "Stop Monitoring" button in the window in the bottom right corner of the screen (Figure 3):
Figure 3 - Compiler monitoring dialog
If the monitoring server has successfully tracked the compiler invocations, static analysis will be launched for the source files. When it is finished, you will get a regular PVS-Studio analysis report (Figure 4):
Figure 4 - Results of the monitoring server's and static analyzer's work
The analysis results can be saved into an XML file (with the plog extension) for further use through the menu command 'File -> Save PVS-Studio Log As...'.
Incremental analysis is performed in the same way as analysis of the whole project. The key difference is that you run an incremental build instead of a full one: only the compiler invocations for the modified files will be monitored, so only those files will be checked. The rest of the analysis process is completely identical to the one described above in the section "Analyzing source files with the help of the compiler process monitoring system".
Once you have got the analysis report with the analyzer-generated warnings, you can start viewing the messages and fixing the code. You can also load a report obtained earlier into the Compiler Monitoring UI. To do this, use the menu command 'File|Open PVS-Studio Log...'.
Various message suppression and filtering mechanisms available in this utility are identical to those employed in the Visual Studio plug-in and are available in the settings window 'Tools|Options...' (Figure 5).
Figure 5 - Analysis settings and message filtering mechanisms
In the Analyzer Output window, you can navigate through the analyzer's warnings, mark messages as false positives, and add filters for messages. The message handling interface in the Compiler Monitoring UI is identical to that of the output window in the Visual Studio plug-in. To see a detailed description of the message output window, see this documentation section.
Although the built-in editor of the Compiler Monitoring UI does not provide a navigation and autocomplete system as powerful and comfortable as Microsoft IntelliSense in the Visual Studio environment or other similar systems, Compiler Monitoring UI still offers several search mechanisms that can simplify your work with the analysis results.
Besides regular text search in a currently opened file (Ctrl + F), Compiler Monitoring UI also offers the Code Search dialog for text search in opened files and folders of the file system. This dialog can be accessed through the menu command 'Edit|Find & Replace|Search in Source Files...' (Figure 6):
Figure 6 - Search dialog of Compiler Monitoring UI
The dialog supports search in the current file, all of the currently opened files, or any folder of the file system. You can stop the search at any moment by clicking the Cancel button in the modal window that shows up after the search starts. Once the first match is found, the results will start being output right away into the child window Code Search Results (Figure 7):
Figure 7 - Results of text search in project source files
Of course, regular text search may be inconvenient or slow when you need to find the declarations and/or uses of some identifier or macro. In this case, you can use the mechanism of dependency search and navigation through #include directives.
Dependency search allows you to look for a symbol or macro in exactly those files that participated in compilation - or, more precisely, in the preprocessing performed when the analyzer checked them. To run the dependency search, click on the symbol whose uses you want to find to open the context menu (Figure 8):
Figure 8 - Dependency search for a symbol
The search results, just like with the text search, will be output into a separate child window: 'Find Symbol Results'. You can stop the search at any moment by clicking the Cancel button in the status bar of the Compiler Monitoring UI main window, near the progress indicator.
Navigation through #include directives allows you to open, in the Compiler Monitoring UI code editor, the files included into the current file through such directives. To open an include, also use the editor's context menu (Figure 9):
Figure 9 - Navigation through include directives
Keep in mind that dependency information is not available for every source file opened in Compiler Monitoring UI. When the dependency database is not available to the utility, the context menu items mentioned above will be inactive.
The dependency database is created only when analysis is run directly from the Compiler Monitoring UI itself. When you open an arbitrary C/C++ source file, the utility won't have this information. Note that when the analysis output obtained in the Compiler Monitoring UI is saved as a plog file, a special dpn file, associated with the plog file and containing the dependencies of the analyzed files, is created in the same folder. As long as the dpn file is located next to the plog file, it enables the dependency search when the plog file is viewed in the Compiler Monitoring UI.
Any static code analyzer works slower than a compiler. This is because a compiler must work very quickly, even at the cost of analysis depth, while static analyzers have to store the parse tree to be able to gather more information. Storing the parse tree increases memory consumption, and the many checks turn tree traversal into a resource-intensive and slow process. In practice this is not so crucial, since analysis is a rarer operation than compilation and users can wait a bit. Still, we always want our tools to work faster. This article contains tips on how to significantly increase PVS-Studio's speed.
First, let's enumerate all the recommendations, so that users can see right away how to make the analyzer work faster:
Let's consider all these recommendations in detail, explaining why they allow the tool to work faster.
PVS-Studio has supported multi-threaded operation for a long time (since version 3.00, released in 2009). Parallelization is performed at the file level: if analysis is run on four cores, the tool checks four files at a time. This level of parallelism provides a significant performance boost. By our measurements, there is a marked difference between the four-thread and one-thread analysis modes on test projects: one-thread analysis takes 3 hours and 11 minutes, while four-thread analysis takes 1 hour and 11 minutes (these data were obtained on a four-core computer with 8 Gbytes of memory) - a 2.7-fold difference.
It is recommended that you have at least one Gbyte of memory for each analyzer's thread. Otherwise (when there are many threads and little memory), the swap file will be used, which will slow down the analysis process. If necessary, you may restrict the number of the analyzer's threads in the PVS-Studio settings: Options -> Common Analyzer Settings -> Thread Count (documentation). By default, the number of threads launched corresponds to the number of cores available in the system.
We recommend that you use a computer with four cores and eight Gbytes of memory or better.
Strange as it may seem, a slow hard disk is a bottleneck for the code analyzer. To understand why, consider how the tool works. To analyze a file, it must first preprocess it, i.e. expand all the #define's, include all the #include's, and so on. The preprocessed file has an average size of 10 Mbytes and is written to disk in the project folder. Only then does the analyzer read and parse it. The file grows so large precisely because of the inclusion of the contents of the #include files read from the system folders.
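For reference, the same kind of preprocessed file can be produced manually with the Visual C++ compiler; the /P switch is a standard cl.exe option that writes the preprocessed output to a .i file next to the source (the file name here is illustrative):

```shell
:: Preprocess file.cpp without compiling it: cl.exe expands all
:: #include and #define directives and writes the result to file.i.
:: /C additionally preserves comments in the output.
cl.exe /P /C file.cpp
```

Inspecting the resulting .i file gives a good sense of how much data the analyzer has to read and parse per source file.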
We can't give exact measurements of an SSD's influence on analysis speed, since that would require testing two otherwise identical computers that differ only in their disks. In practice, however, the speed-up is considerable.
Judging by the character of its work, the analyzer looks like a complex and suspicious program from an antivirus's viewpoint. To be clear, we don't mean that the analyzer is recognized as a virus - we check this regularly, and we sign our binaries with a code certificate. Let's look at how the code analyzer works.
For each file being analyzed, a separate analyzer process (the PVS-Studio.exe module) is run. If a project contains 3000 files, the same number of PVS-Studio.exe instances will be launched. PVS-Studio.exe invokes the Visual C++ environment-setup scripts (the vcvars*.bat files) for its purposes. While working, it also creates a lot of preprocessed files (*.i), one for each file being compiled. Auxiliary command (.cmd) files are used as well.
Although none of these actions is virus activity, they still make an antivirus spend many resources on a meaningless check of the same things.
We recommend that you add the following exceptions in the antivirus's settings:
Perhaps this list is excessive, but we give it in this complete form so that, regardless of your particular antivirus, you know which files and processes do not need to be scanned.
Sometimes there may be no antivirus at all (for instance, on a computer intended specifically for building code and running a code analyzer). In this case the speed will be the highest. Even if you have specified the above-mentioned exceptions in your antivirus, it will still spend some time scanning them.
Our test measurements show that an aggressive antivirus might slow down the code analyzer's work twice or more.
An external preprocessor is used to preprocess source files before PVS-Studio analysis. When working from the Visual Studio IDE, the native Microsoft Visual C++ preprocessor, cl.exe, is used by default. Support for the independent Clang preprocessor was added in PVS-Studio 4.50, as it lacks some of the Microsoft preprocessor's shortcomings (although it has issues of its own).
In some of the older versions of Visual Studio (namely, 2010 and 2012), the cl.exe preprocessor is significantly slower than Clang. Using the Clang preprocessor with these IDEs provides a 1.5-1.7x performance increase in most cases.
However, there is an aspect that should be considered. The preprocessor to be used can be specified from within the 'PVS-Studio|Options|Common Analyzer Settings|Preprocessor' field (documentation). The available options are: VisualCPP, Clang and VisualCPPAfterClang. The first two of these are self-evident. The third one indicates that Clang will be used at first, and if preprocessing errors are encountered, the same file will be preprocessed by the Visual C++ preprocessor instead.
If your project is analyzed with Clang without any problems, you may use the default option VisualCPPAfterClang or Clang - it doesn't matter. But if your project can be checked only with Visual C++, you'd better specify the VisualCPP option explicitly, so that the analyzer doesn't launch Clang in vain trying to preprocess your files.
Any large software project uses a lot of third-party libraries such as zlib, libjpeg, Boost, etc. Sometimes these libraries are built separately, and in this case the main project has access only to the header and library (lib) files. And sometimes libraries are integrated very firmly into a project and virtually become part of it. In this case the main project is compiled together with the code files of these libraries.
The PVS-Studio analyzer can be set not to check the code of third-party libraries: even if there are errors there, you most likely won't fix them. Excluding such folders from analysis can significantly improve the overall analysis speed.
It is also reasonable to exclude from analysis any code that surely won't change for a long time.
To exclude some folders or separate files from analysis use the PVS-Studio settings -> Don't Check Files (documentation).
To exclude folders, you can specify in the folder list either one common folder like c:\external-libs, or individual folders: c:\external-libs\zlib, c:\external-libs\libjpeg, etc. You can specify a full path, a relative path, or a mask. For example, you can simply specify zlib and libjpeg in the folder list - these will automatically be treated as the folder masks *zlib* and *libjpeg*. To learn more, please see the documentation.
Let's once again list the methods of speeding up PVS-Studio:
The greatest effect can be achieved when applying a maximum number of these recommendations simultaneously.
PVS-Studio consists of two basic components: the command-line analyzer (PVS-Studio.exe) and an IDE plugin through which the former is integrated into one of the supported development environments (Microsoft Visual Studio). The command-line analyzer operates much like a compiler: each file being analyzed is handled by a separate analyzer instance, which is called with parameters that include, among other things, the original compilation arguments of the source file. The analyzer then invokes the appropriate preprocessor (again, matching the one used to compile the file being analyzed) and analyzes the resulting temporary preprocessed file, i.e. the file in which all of the include and define directives have been expanded.
Thus, the command-line analyzer - just like a compiler (for example, the Visual C++ cl.exe compiler) - is not designed to be used directly by the end user. To continue the analogy, compilers are in most cases employed indirectly, through a build system. Such a build system prepares launch parameters for each of the files to be built and usually optimizes the build by parallelizing it among all the available logical processors. The PVS-Studio IDE plugin operates in a similar fashion.
However, the IDE plug-in is not the only way to employ the PVS-Studio.exe command-line analyzer. As mentioned above, the command-line analyzer is very similar to a compiler in its usage principles. Therefore, it can, if necessary, be integrated directly into a build system alongside the compiler. This can be convenient when dealing with a build scenario not supported by PVS-Studio - for example, a custom-made build system or an IDE other than Visual Studio. Note that PVS-Studio.exe supports analysis of source files intended to be compiled with the gcc, clang, and cl compilers (including support for their specific keywords and constructs).
For instance, if you build your project in the Eclipse IDE with gcc, you can integrate PVS-Studio into your makefile build scripts. The only restriction is that PVS-Studio.exe can only operate under operating systems of the Windows NT family.
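Such direct integration can be sketched as an extra per-file step next to the compile step in a build script. This is only a rough illustration: the switch names (--cfg, --source-file, --cl-params, --output-file) follow the direct-integration section of the documentation and may differ between analyzer versions, and all file paths here are hypothetical:

```shell
:: Compile the file as usual with gcc.
g++ -I./include -O2 -c src/main.cpp -o obj/main.o

:: Then analyze the same file, passing the analyzer the same
:: compilation arguments the compiler received (--cl-params takes
:: everything that follows it).
PVS-Studio.exe --cfg "PVS-Studio.cfg" ^
               --source-file "src/main.cpp" ^
               --output-file "obj/main.cpp.plog" ^
               --cl-params -I./include -O2 -c src/main.cpp
```

The per-file logs produced this way can then be merged and viewed in the Compiler Monitoring UI.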
Besides IDE plugins, our distribution kit also includes a plugin for the Microsoft MSBuild build system which is utilized by Visual C++ projects in the Visual Studio IDE starting with version 2010. Don't confuse it with the plugin for the Visual Studio IDE itself!
Thus, you can analyze projects in Visual Studio (version 2010 or higher) in two different ways: either directly through our IDE plugin, or by integrating the analysis process into the build system (through the plugin for MSBuild). Of course, nothing prevents you, if the need arises, from creating your own static analysis plugin, be it for MSBuild or any other build system, or even integrating PVS-Studio.exe's call directly, if possible, into build scripts like in the case of makefile-based ones.
If the PVS-Studio plug-in generates the message "C/C++ source code was not found" for your file, make sure that the file you are trying to analyze is included in the project build (PVS-Studio ignores files excluded from the build). If you get this message for the whole project, make sure that the type of your C/C++ project is supported by the analyzer. In Visual Studio, PVS-Studio supports only Visual C++ projects of versions 2005 and higher, as well as their corresponding MSBuild Platform Toolsets. Project extensions using other compilers (for example, projects for the Intel C++ compiler) or build parameters (Windows DDK drivers) are not supported. Although the command-line analyzer PVS-Studio.exe itself supports analysis of source code intended for the gcc/clang compilers, IDE project extensions utilizing these compilers are not supported.
If your case is not covered by the ones described above, please contact our support service. If it is possible, please send us the temporary configuration files for the files you are having troubles with. You can get them by setting the option 'PVS-Studio -> Options -> Common Analyzer Settings -> Remove Intermediate Files' to 'False'. After that, the files with the name pattern %SourceFilename.cpp%.PVS-Studio.cfg will appear in the same directory where your project file (.vcxproj) is located. If possible, create an empty test project reproducing your issue and send it to us as well.
If, having checked your file/project, PVS-Studio generates the V008 message and/or a preprocessor error message (by clang/cl preprocessors) in the results window, make sure that the file(s) you are trying to analyze can be compiled without errors. PVS-Studio requires compilable C/C++ source files to be able to operate properly, while linking errors do not matter.
The V008 error means that preprocessor returned a non-zero exit code after finishing its work. The V008 message is usually accompanied by a message generated by a preprocessor itself describing the reason for the error (for example, it failed to find an include file). Note that, for the purpose of optimization, our Visual Studio IDE plugin utilizes a special dual-preprocessing mode: it will first try to preprocess the file with the faster clang preprocessor and then, in case of a failure (clang doesn't support certain Visual C++ specific constructs), launches the standard cl.exe preprocessor. If you get clang's preprocessing errors, try setting the plugin to use only the cl.exe preprocessor (PVS-Studio -> Options -> Common Analyzer Settings -> Preprocessor).
If you are sure that your files can be correctly built by the IDE/build system, perhaps the reason for the issue is that some compilation parameters are incorrectly passed into the PVS-Studio.exe analyzer. In this case, please contact our support service and send us the temporary configuration files for these files. You can get them by setting the option 'PVS-Studio -> Options -> Common Analyzer Settings -> Remove Intermediate Files' to 'False'. After that, files with the name pattern %SourceFilename.cpp%.PVS-Studio.cfg will appear in the same directory where your project file is located. If possible, create an empty test project reproducing your issue and send it to us as well.
If the plugin crashes and shows the dialog box entitled 'PVS-Studio Internal Error', please contact our support service and send us the analyzer's crash stack (you can obtain it from the crash dialog box).
If the issue occurs regularly, then please send us the plugin's trace log together with the crash stack. You can obtain the trace log by enabling the tracing mode through the 'PVS-Studio -> Options -> Specific Analyzer Settings -> TraceMode (Verbose mode)' setting. The trace log will be saved into the default user directory Application Data\Roaming\PVS-Studio under the name PVSTracexxxx_yyy.log, where xxxx is PID of the process devenv.exe / bds.exe, while yyy is the log number for this process.
If you encounter regular crashes of your IDE which are presumably caused by PVS-Studio's operation, please check the Windows system event logs (in the Event Viewer) and contact our support service to provide us with the crash signature and stack (if available) for the application devenv.exe \ bds.exe (the 'Error' message level) which can be found in the Windows Logs -> Application list.
If you encounter regular unhandled crashes of the PVS-Studio.exe analyzer, please repeat the steps described in the section "IDE crashes when PVS-Studio is running", but for the PVS-Studio.exe process.
The V003 error means that PVS-Studio.exe has failed to check a file because of a handled internal exception. If you discover V003 error messages in the analyzer log, please send us the intermediate file (an i-file containing all the expanded include and define directives) generated by the preprocessor for the file that triggers the V003 error (you can find its name in the file field). You can get this file by setting the 'PVS-Studio -> Options -> Common Analyzer Settings -> Remove Intermediate Files' option to 'False'. Intermediate files with the name pattern SourceFileName.i will appear, after restarting the analysis, in the directory of the project that you are checking (i.e. in the same directory where the vcproj/vcxproj/cbproj files are located).
The analyzer may sometimes fail to perform a complete analysis of a source file. This is not always the analyzer's fault - see the documentation section on the V001 error to learn more. Whatever the cause of a V001 message, it is usually not critical: incomplete file parsing is insignificant from the analysis viewpoint. PVS-Studio simply skips the function/class with the error and continues with the analysis, leaving only a very small portion of code unchecked. If this portion contains fragments you consider relevant, you may send us an i-file for this source file as well.
If it seems to you that the analyzer fails to find errors in a code fragment that surely contains them or, on the contrary, generates false positives for a code fragment that you believe to be correct, please send us the preprocessor's temporary file. You can get it by setting the 'PVS-Studio -> Options -> Common Analyzer Settings -> Remove Intermediate Files' option to 'False'. Intermediate files with the name pattern SourceFileName.i will appear, after you restart the analysis, in the directory of the project you are checking (i.e. in the same directory where the vcproj/vcxproj/cbproj files are located). Please also attach the code fragment of the source file that you have issues with.
We will consider adding a diagnostic rule for your sample or revise the current diagnostics to reduce the number of false positives in your code.
If you encounter any issues when handling the analyzer-generated log file within the window of our IDE plugin, namely: navigation on the analyzed source files is performed incorrectly and/or these files are not available for navigation at all; false positive markers or comments are added in wrong places of your code, and the like - please contact our support service to provide us with the plugin's trace log. You can get it by enabling the tracing mode through the 'PVS-Studio -> Options -> Specific Analyzer Settings -> TraceMode' option (Verbose mode). The trace log will be saved into the default user directory Application Data\Roaming\PVS-Studio under the name PVSTracexxxx_yyy.log, where xxxx is PID of the devenv.exe / bds.exe process, while yyy is the log number for this process.
Also, if it is possible, create an empty test project reproducing your trouble and attach it to the letter too.
The PVS-Studio plugin can parallelize code analysis at the level of source files, that is, you can have analysis for any files you need to check (even within one project) running in parallel. The plugin by default sets the number of threads into which the analysis process is parallelized according to the number of processors in your system. You may change this number through the option PVS-Studio -> Options -> Common Analyzer Settings -> ThreadCount.
If it seems to you that not all of the available logical processors in your system are being utilized, you can increase the number of threads used for parallel analysis. But keep in mind that static analysis, unlike compilation, requires a large amount of memory: each analyzer instance needs about 1.5 Gbytes.
If your system, even though possessing a multi-core processor, doesn't meet these requirements, you may encounter a sharp performance degradation caused by the analyzer having to rely on a swap file. In this case, we recommend you to reduce the number of parallel threads of the analyzer to meet the requirement of 1.5 Gbytes per thread, even if this number is smaller than the number of processor cores in your system.
Keep in mind that when you have many concurrent threads, your HDD, which stores temporary preprocessed *.i files, may become a bottleneck itself, as these files may grow in size quite quickly. One of the methods to significantly reduce the analysis time is to utilize SSD disks or a RAID array.
A performance loss may also be caused by poorly configured antivirus software. Because the PVS-Studio plugin launches quite a large number of analyzer and cmd.exe instances, an antivirus may find this behavior suspicious. To optimize the analysis time, we recommend adding PVS-Studio.exe, as well as all of the related directories, to the exceptions list of your antivirus, or disabling real-time protection while the analysis is running.
If you happen to utilize the Security Essentials antivirus (which has become a part of Windows Defender starting with Windows 8), you may face a sharp performance degradation on certain projects/configurations. Please refer to this article on our blog for details concerning this issue.
Projects excluded from the general build in the Configuration Manager window of the Visual Studio environment are not analyzed.
For the PVS-Studio analyzer to analyze C/C++ projects correctly, they must be compilable in Visual C++ and buildable without errors. That's why when checking a group of projects or an individual project, PVS-Studio will check only those projects which are included into the general build.
Projects excluded from the build won't be analyzed. If none of the projects is included into the build or you try to analyze one project that was not included into the build, the message "Files with C or C++ source code for analysis not found" will be generated, and analysis won't start. Use the Configuration Manager for the current Visual Studio solution to see which projects are included and which are excluded from the general build.
If you encounter errors about missing includes, incorrect compiler switches (for example, the /MD switch), or macros while running static analysis on a project that compiles in the Visual Studio IDE without such errors, this behavior may be a manifestation of an incorrect precompiled header file being inserted during preprocessing.
This issue arises because of the divergent behavior of the Visual C++ compiler (cl.exe) in its compiler and preprocessor modes. During a normal build, the compiler operates in the regular mode (i.e. compilation produces binary object files). To perform static analysis, however, PVS-Studio invokes the compiler in the preprocessor mode, in which it expands macros and include directives.
When the compiled file utilizes a precompiled header, the compiler does not process the header itself when it encounters the corresponding #include directive; it uses the previously generated pch file instead. In the preprocessing mode, however, the compiler ignores the precompiled pch entirely and tries to expand such an #include in the regular way, i.e. by inserting the contents of the header file in question.
It is common practice to use precompiled headers with the same name in multiple projects (the most common name being stdafx.h). Because of the disparities in compiler behavior described above, this often leads to the header from the wrong project being included into a source file. There are several reasons why this can happen. For example, the correct pch is specified for a file, but the include paths contain several different stdafx.h files, and the incorrect one has a higher priority (that is, its include path occurs earlier on the compiler's command line). Another possible scenario is when several projects include the same C++ source file. This file may be built with different options in different projects and use different pch files, but since it is a single file in the file system, the stdafx.h from one of the projects it belongs to may be located in the same directory as the source file itself. If that stdafx.h is included into the source file by an #include directive using quotes, the preprocessor will always use the header file from the same directory as the source file, regardless of the include paths passed through the command line.
Insertion of an incorrect precompiled header file will not always lead to preprocessing errors. However, if, for example, one of the projects utilizes MFC and the other one does not, or the projects have different sets of include paths, the precompiled headers will be incompatible, and one of the preprocessing errors described at the beginning of this section will occur. As a result, you will not be able to perform static analysis on such a file.
Unfortunately, it is impossible to work around this issue on the analyzer's side, as it concerns the external preprocessor, cl.exe. If you encounter it on one of your projects, you can solve it in one of the ways described below, depending on its cause.
If the precompiled header was incorrectly inserted because of the position of its include path on the compiler's command line, you can simply move the path to the correct header file to the first position on the command line.
If the incorrect header file was inserted because it is located in the same directory as the source file that includes it, you can use the #include directive with angle brackets, for example:
#include <stdafx.h>
With this form, the compiler ignores the files from the current directory when performing the insertion.
When checking large projects (more than 1000 source files) with PVS-Studio under Windows 8, while using Visual Studio 2010 or newer, errors of the 'Library not registered' kind can sometimes appear, or the analyzer may even halt the analysis altogether with the message 'PVS-Studio is unable to continue due to IDE being busy'.
Such errors can be caused by several factors: an incorrect installation of Visual Studio, or compatibility conflicts between different versions of the IDE present on the system. Even if your system currently has a single IDE installation, a different version that was present in the past may have been uninstalled incorrectly or incompletely. In particular, a compatibility conflict can arise from having one of Visual Studio 2010\2012\2013\2015\2017\2019 installed simultaneously with Visual Studio 2005 and\or 2008.
Unfortunately, PVS-Studio is unable to work around these issues by itself, as they are caused by conflicts in the COM interfaces utilized by the Visual Studio API. If you encounter one of these issues, there are several ways of dealing with it. Using PVS-Studio on a system with a 'clean' Visual Studio installation should resolve the issue. If that is not an option, you can try analyzing your project in several passes, part by part. It is also worth noting that this issue most often arises when PVS-Studio performs analysis simultaneously with some other IDE background operation (for example, when IntelliSense parses #include directives). Waiting for such a background operation to finish may allow you to analyze your whole project.
Another option is to use alternative methods of running the analyzer to check your files. You can check any project by using the compiler monitoring mode from C and C++ Compiler Monitoring UI (Standalone.exe).
After installing Visual Studio IDE on a machine with a previously installed PVS-Studio analyzer, the newly installed Visual Studio version lacks the 'PVS-Studio' menu item
Unfortunately, the specifics of the Visual Studio extensibility implementation prevent PVS-Studio from automatically 'picking up' a newly installed Visual Studio version if it was installed after PVS-Studio itself.
Here is an example of such a situation. Assume that before the installation of PVS-Studio, the machine had only Visual Studio 2013 installed. After installing the analyzer, the Visual Studio 2013 menu will contain the 'PVS-Studio' item (if the corresponding option was selected during installation), which allows you to check your projects in this IDE. Now, if Visual Studio 2015 is installed on this machine next (after PVS-Studio was already installed), the menu of this IDE version will not contain the 'PVS-Studio' item.
In order to add analyzer integration to the newly installed Visual Studio, re-launch the PVS-Studio installer (the PVS-Studio_Setup.exe file). If you no longer have this file, you can download it from our site. The checkbox beside the required IDE version on the Visual Studio selection page of the installer will be enabled once the corresponding Visual Studio version is installed.
There are many system functions, such as malloc, realloc, and calloc, that return a null pointer in certain conditions. They return NULL when they fail to allocate a buffer of the specified size.
Sometimes you may want to change the analyzer's behavior and make it think, for example, that malloc cannot return NULL. This can be done by using the system libraries, where 'out of memory' errors are handled in a specific way.
An opposite scenario is also possible. You may want to help the analyzer by telling it that a certain system or user-made function can return a null pointer.
To help you with that, we added a mechanism that allows you to use special comments to tell the analyzer that a certain function can or cannot return NULL.
Comment format:
//V_RET_[NOT]_NULL, namespace:Space, class:Memory, function:my_malloc
The controlling comment can be written next to the function declaration.
However, you cannot do this for functions such as malloc, because changing system header files is a bad idea.
A possible way out is to add the comment to one of the global headers included into each of the translation units. If you work in Visual Studio, the file stdafx.h would be a good choice.
Another solution is to use the diagnostic configuration file pvsconfig. See "Suppression of false alarms" (section "Mass suppression of false positives through diagnostic configuration files (pvsconfig)").
This is illustrated by the two examples below.
The function does not return NULL:
//V_RET_NOT_NULL, function:malloc
Now the analyzer thinks that the malloc function cannot return NULL and, therefore, will not issue the V522 warning for the following code:
int *p = (int *)malloc(sizeof(int) * 100);
p[0] = 12345; // ok
The function returns a pointer that could be null:
//V_RET_NULL, namespace:Memory, function:QuickAlloc
With this comment, the following code will trigger the warning:
char *p = Memory::QuickAlloc(strlen(src) + 1);
strcpy(p, src); // Warning!
In projects with special quality requirements, you might need to find all functions that return a pointer. To do this, you can use the following comment:
//V_RET_NULL_ALL
We don't recommend using this mode, as it produces a large number of warnings. But if your project really needs it, you can use this special comment to require a check of the returned pointer for every such function.
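Under this mode, the safe pattern is to check every returned pointer before use. A minimal sketch of that pattern (dup_checked is a hypothetical helper, not part of any API):

```cpp
#include <cstdlib>
#include <cstring>

// Hypothetical helper: duplicates a string, checking the allocation
// result as //V_RET_NULL_ALL demands for every pointer-returning call.
char *dup_checked(const char *src)
{
    char *p = static_cast<char *>(std::malloc(std::strlen(src) + 1));
    if (p == nullptr)       // the check the analyzer expects to see
        return nullptr;     // propagate the failure instead of dereferencing
    std::strcpy(p, src);    // safe: p is known to be non-null here
    return p;
}
```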
Analyzer warnings have three levels of certainty: High, Medium, and Low. Depending on the constructs used in the code, the analyzer estimates the certainty of each warning and assigns it an appropriate level in the report. Some warnings may be issued on several levels simultaneously.
In some projects, searching for specific types of errors can be very important regardless of warning certainty. Sometimes the reverse is true: the messages are of little use, but a programmer does not want to disable them altogether. In such cases, you can manually set a diagnostic's level to High/Medium/Low. To do this, use special comments added to the code or to the diagnostics configuration file. Examples of comments:
//V_LEVEL_1::501,502
//V_LEVEL_2::522,783,579
//V_LEVEL_3::773
Upon finding such comments, the analyzer issues the corresponding warnings at the specified level.
You can specify that one or more entities should be replaced with some other one(s) in certain messages. This enables the analyzer to generate warnings taking into account the project's specifics. The control comment has the following format:
//+Vnnn:RENAME:{Aaaa:Bbbb},{<foo.h>:<myfoo.h>},{100:200},......
In all the messages Vnnn, the following replacements will be done:
The working principle of this mechanism is best explained by an example.
When it comes across the number 3.1415 in code, the V624 diagnostic suggests replacing it with M_PI from the <math.h> library. But suppose our project uses a special math library, and it is from this library that mathematical constants must be taken. In that case, the programmer may add the following comment to a global file (for example, StdAfx.h):
//+V624:RENAME:{M_PI:OUR_PI},{<math.h>:"math/MMath.h"}
After that, the analyzer will warn that the OUR_PI constant from the header file "math/MMath.h" should be used.
You can also extend messages generated by PVS-Studio. The control comment has the following format:
//+Vnnn:ADD:{ Message}
The string specified by the programmer will be added to the end of every message with the number Vnnn.
Take diagnostic V2003, for example. The message associated with it is: "V2003 - Explicit conversion from 'float/double' type to signed integer type.". You can reflect some specifics of the project in the message and extend it by adding the following comment:
//+V2003:ADD:{ Consider using boost::numeric_cast instead.}
From now on, the analyzer will generate a modified message: "V2003 - Explicit conversion from 'float/double' type to signed integer type. Consider using boost::numeric_cast instead.".
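The ADD mechanism only changes the message text; the code that triggers V2003 is an ordinary floating-point-to-integer conversion. A minimal illustration (the function name is made up for this example):

```cpp
// Hypothetical function illustrating the conversion V2003 flags; with the
// ADD comment above, the warning it produces would also carry the
// boost::numeric_cast suggestion.
int truncate_to_int(double d)
{
    return (int)d;  // explicit 'double' -> signed integer conversion
}
```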
The analyzer checks code containing the assert() macro in the same way regardless of the project configuration (Debug, Release, ...), and in particular does not take into account that execution of the code is interrupted when the condition is false.
To set another analyzer behavior, use the following comment in code:
//V_ASSERT_CONTRACT
Note that in such a mode the analysis results may differ depending on the way the macro is expanded in the checked project configuration.
Let's look at this example to make it clear:
MyClass *p = dynamic_cast<MyClass *>(x);
assert(p);
p->foo();
The dynamic_cast operator can return nullptr, so in the standard mode the analyzer will warn that a null pointer dereference might occur when foo() is called.
But with the comment in place, the warning goes away.
You can also use the assertMacro option to specify names of macros which the analyzer will handle in the same way it handles assert:
//V_ASSERT_CONTRACT, assertMacro:MY_CUSTOM_MACRO_NAME
MyClass *p = dynamic_cast<MyClass *>(x);
MY_CUSTOM_MACRO_NAME(p);
p->foo();
In order to specify several macro names, you need to add a separate V_ASSERT_CONTRACT comment for each of them.
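A project-specific assert-like macro covered by this option might look as follows. The macro name matches the comment above, but its body is a hypothetical sketch; the only property that matters to the analyzer is that a false condition interrupts execution:

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical contract macro; with
//   //V_ASSERT_CONTRACT, assertMacro:MY_CUSTOM_MACRO_NAME
// the analyzer treats a false condition as interrupting execution,
// just as it does for assert().
#define MY_CUSTOM_MACRO_NAME(expr)                                 \
    do {                                                           \
        if (!(expr)) {                                             \
            std::fprintf(stderr, "contract failed: %s\n", #expr);  \
            std::abort();                                          \
        }                                                          \
    } while (0)

int deref_checked(int *p)
{
    MY_CUSTOM_MACRO_NAME(p != nullptr);
    return *p;  // the analyzer now knows p is non-null here
}
```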
Some projects use custom implementations of various system functions, such as memcpy, malloc, and so on. In this case, the analyzer doesn't understand that such functions behave in the same way as their standard analogues. Using the V_FUNC_ALIAS annotation, you can specify which custom functions correspond to which system ones.
Comment format:
//V_FUNC_ALIAS, implementation:sysf, function:f, namespace:ns, class:c
Consider this example:
//V_FUNC_ALIAS, implementation:memcpy, function:MyMemCpy
Now, the analyzer will process calls to the MyMemCpy function in the same way it processes calls to memcpy. For example, this code snippet will trigger the V512 warning:
int buf[] = { 1, 2, 3, 4 };
int out[2];
MyMemCpy(out, buf, 4 * sizeof(int)); // Warning!
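For completeness, here is a sketch of what such a wrapper and a corrected call could look like. The body of MyMemCpy is an assumption (any memcpy-like implementation qualifies); the fix is to take the copy size from the destination buffer:

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical wrapper that the V_FUNC_ALIAS comment above maps to memcpy.
void *MyMemCpy(void *dst, const void *src, std::size_t n)
{
    return std::memcpy(dst, src, n);
}

int copy_second_element()
{
    int buf[] = { 1, 2, 3, 4 };
    int out[2] = { 0, 0 };
    MyMemCpy(out, buf, sizeof(out));  // size taken from the destination: no V512
    return out[1];
}
```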
Among the numerous filtration and message suppression methods of PVS-Studio analyzer is the PVS_STUDIO predefined macro.
The first case when it might come in handy is when you want to prevent some code from reaching the analyzer at all. For example, the analyzer generates a diagnostic message for the following code:
int rawArray[5];
rawArray[-1] = 0;
However, if you 'wrap' it using this macro, the message will not be generated:
int rawArray[5];
#ifndef PVS_STUDIO
rawArray[-1] = 0;
#endif
The PVS_STUDIO macro is automatically defined when the code is checked from the IDE. But if you use PVS-Studio from the command line, the macro is not passed to the analyzer by default, and you have to define it manually.
The second case is overriding default and custom macros. For example, the following code produces a warning about dereferencing a potentially null pointer:
char *st = (char*)malloc(10);
TEST_MACRO(st != NULL);
st[0] = '\0'; //V522
To tell the analyzer that program execution is interrupted under certain conditions, you can override the macro as follows:
#ifdef PVS_STUDIO
#undef TEST_MACRO
#define TEST_MACRO(expr) if (!(expr)) throw "PVS-Studio";
#endif
char *st = (char*)malloc(10);
TEST_MACRO(st != NULL);
st[0] = '\0';
This method allows you to remove analyzer warnings from code that uses various libraries, as well as from any other macros used for debugging and testing.
See the discussion "Mark variable as not NULL after BOOST_REQUIRE in PVS-Studio" on StackOverflow.com site.
When developing PVS-Studio, we assigned primary importance to simplicity of use, drawing on our experience with traditional lint-like code analyzers. That is why one of the main advantages of PVS-Studio over other code analyzers is that you can start using it immediately. PVS-Studio has been designed so that the developer does not have to configure it at all, and we achieved this goal: the developer gets a powerful code analyzer that requires no setup at the first launch.
But you should understand that a code analyzer is a powerful tool which needs competent use. It is this competent use (via the settings system) that allows you to achieve significant results. Operating the code analyzer implies a tool (a program) that performs the routine work of searching for potentially unsafe constructs in code, and a master (a developer) who makes decisions based on what he knows about the project being verified. Thus, for example, the developer can inform the analyzer that:
Correct setting of these parameters can greatly reduce the number of diagnostic messages produced by the code analyzer. It means that if the developer helps the analyzer and gives it some additional information by using the settings, the analyzer will in its turn reduce the number of places in the code which the developer must pay attention to when examining the analysis results.
PVS-Studio settings can be accessed through the PVS-Studio -> Options command in the IDE main menu. Selecting this command opens the PVS-Studio options dialog.
Each settings page is extensively described in PVS-Studio documentation.
The tab of the analyzer's general settings displays the settings which do not depend on the particular analysis unit being used.
The analyzer can automatically check for updates on viva64.com site. It uses our update module.
If the CheckForNewVersions option is set to True, a small text file is downloaded from the viva64.com site whenever you launch code checking (the Check Current File, Check Current Project, and Check Solution commands in the PVS-Studio menu). This file contains the number of the latest PVS-Studio version available on the site. If the version on the site is newer than the version installed on the user's computer, the user is asked for permission to update the program. If the user agrees, a separate application, PVS-Studio-Updater, is launched to automatically download and install the new PVS-Studio distribution. If CheckForNewVersions is set to False, no update check is performed.
Analysis runs faster on multi-core computers; on a 4-core machine, for example, the analyzer can use all four cores. But if for some reason you need to limit the number of cores used, you can do so by selecting the required number. The number of processor cores is used as the default value.
Setting the 'ThreadCount' option to more than 16 (or more than the number of processor cores, if the processor has more than 16 cores) is available only with a PVS-Studio Enterprise license. Please contact us to order a license.
When running analysis on a single system, we do not advise setting this option's value greater than the number of available processor cores; a larger value could degrade overall analyzer performance. If you wish to run more analysis tasks concurrently, you can use a distributed build system, for example IncrediBuild. This mode of using PVS-Studio is described in more detail in the relevant section of the documentation.
The analyzer creates many temporary command files during its operation: to launch the analysis unit itself, to perform preprocessing, and to manage the whole analysis process. Such files are created for each project file being analyzed. They are usually of no interest to a user and are removed after the analysis completes. In some cases, however, it can be useful to look through these files, so you can tell the analyzer not to remove them; you can then launch the analyzer outside the IDE from the command line.
This settings page allows you to manage the displaying of various types of PVS-Studio messages in the analysis results list.
All the diagnostic messages output by the analyzer are split into several groups. The display (show/hide) of each message type can be handled individually, while the following actions are available for a whole message group:
It may be sometimes useful to hide errors with certain codes in the list. For instance, if you know for sure that errors with the codes V505 and V506 are irrelevant for your project, you can hide them in the list by unticking the corresponding checkboxes.
Please note that you don't need to relaunch the analysis when using the "Show All" and "Hide All" options! The analyzer always generates all the message types found in the project; whether they are shown or hidden in the list is defined by the settings on this page. When you enable or disable the display of a message type, the change takes effect in the analysis results list right away, without re-analyzing the whole project.
Complete disabling of message groups can be used to enhance the analyzer's performance and get the analysis reports (plog-files) of smaller sizes.
You may specify file masks to exclude some files or folders from analysis on the "Don't Check Files" tab. The analyzer will not check files that match these masks.
Using this technique, you may, for instance, exclude autogenerated files from the analysis. Besides, you may define the files to be excluded from analysis by the name of the folder they are located in.
A mask is defined with wildcards. The '*' wildcard (any number of any characters) can be used; the '?' symbol is not supported.
Character case is irrelevant. The '*' wildcard can only appear at the beginning or at the end of a mask, so masks of the 'a*b' kind are not supported. Once exclusion masks are specified, messages from the matching files disappear from the PVS-Studio Output window, and the next time analysis is started these files are excluded from it. Excluding files and directories with masks can therefore substantially decrease the total analysis time for the entire project.
Two types of masks can be specified: path masks and file-name masks. The masks in the FileNameMasks list filter messages by file name only, ignoring the files' location. The masks in the PathMasks list, on the other hand, filter messages by their location in the filesystem and can be used to suppress diagnostics from a single file or from whole directories and their subdirectories. To filter messages from one specific file, add its full path to the PathMasks list; to filter files sharing the same name (or matching a wildcard mask), add the name or mask to the FileNameMasks list.
Valid masks examples for the FileNameMask property:
*ex.c — all files with names ending in "ex" and the "c" extension will be excluded.
*.cpp — all files with the "cpp" extension will be excluded.
stdafx.cpp — every file with this name will be excluded from analysis regardless of its location in the filesystem.
Valid masks examples for the PathMasks property:
c:\Libs\ — all files located in this directory and its subdirectories will be excluded.
\Libs\ or *\Libs\* — all files located in directories whose path contains the Libs subdirectory will be excluded.
Libs or *Libs* — files whose paths contain a subdirectory with 'Libs' in its name will be excluded. Files whose names contain the 'libs' characters will also be excluded, for example 'c:\project\mylibs.cpp'. To avoid confusion, we advise always specifying folders with slash separators.
c:\proj\includes.cpp — a single file located in the c:\proj\ folder with the specified name will be excluded from the analysis.
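The matching rule described above (case-insensitive, '*' allowed only at the ends of a mask) can be modeled with a short function. This is an illustration of the rule, not PVS-Studio's actual implementation:

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Lowercase a copy of the string for case-insensitive comparison.
static std::string lower(std::string s)
{
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    return s;
}

// Model of the mask rule: '*' only at the beginning and/or end.
bool mask_matches(std::string mask, std::string name)
{
    mask = lower(mask);
    name = lower(name);
    bool star_front = !mask.empty() && mask.front() == '*';
    bool star_back  = !mask.empty() && mask.back() == '*';
    if (star_front) mask.erase(0, 1);
    if (star_back && !mask.empty()) mask.pop_back();

    if (star_front && star_back)                 // *libs*  -> substring
        return name.find(mask) != std::string::npos;
    if (star_front)                              // *ex.c   -> suffix
        return name.size() >= mask.size() &&
               name.compare(name.size() - mask.size(), mask.size(), mask) == 0;
    if (star_back)                               // stdafx* -> prefix
        return name.compare(0, mask.size(), mask) == 0;
    return name == mask;                         // exact match
}
```

For instance, "*ex.c" matches "complex.c" but not "complex.cpp", and "*Libs*" matches "c:\project\mylibs.cpp", in line with the examples above.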
In the keyword filtering tab you can filter analyzer messages by the text they contain.
When necessary, you can hide diagnosed errors containing particular words or phrases from the analyzer's report. For example, if the report contains errors mentioning the printf and scanf function names, and you consider that there can be no errors relating to them, just add these two words using the message suppression editor.
Please note: when changing the list of hidden messages, you don't need to restart analysis of the project. The analyzer always generates all the diagnostic messages, and the display of the various messages is managed with this settings tab. When you modify the message filters, the changes appear in the report immediately, without launching analysis of the whole project again.
Open PVS-Studio settings page. (PVS-Studio Menu -> Options...).
Licensing information is entered on the registration tab.
After purchasing the analyzer, you receive registration information: the name and the serial number. Enter these data on this tab. The LicenseType field indicates the licensing mode.
Information on the licensing conditions is located in the ordering page on site.
The "Specific Analyzer Settings" tab contains additional advanced settings.
This setting lets you set the time limit after which the analysis of an individual file is aborted with the 'V006. File cannot be processed. Analysis aborted by timeout' error, or completely disable analysis termination by timeout. We strongly advise consulting the description of this error before modifying the setting. The timeout is often caused by a shortage of RAM; in that case, it is reasonable not to increase the time limit but to decrease the number of parallel threads used. This can substantially increase performance when the processor has many cores but RAM capacity is insufficient.
This setting lets you set a time limit after which incremental analysis is aborted. All warnings detected by the moment the analysis stops are output to the PVS-Studio window, together with a warning that the analyzer didn't have time to process all the modified files and information about the total and analyzed numbers of files.
This option is relevant only for working in Visual Studio IDE.
When working on a large code base, the analyzer inevitably generates a large number of warnings, and it is often impossible to fix them all straight away. To concentrate on fixing the most important warnings first, you can make the analysis less "noisy" with this option. It completely disables the generation of Low Certainty (level 3) warnings. After restarting the analysis, messages from this level disappear from the analyzer's output.
When circumstances allow, and all of the more important messages are fixed, the 'No Noise' mode can be switched off, and all of the messages that disappeared before become available again.
Setting this option to 'true' enables execution of the actions specified in the 'Custom Build Step' section of a Visual Studio project file (vcproj/vcxproj). Note that the analyzer requires fully compilable code to operate correctly. So if, for example, the 'Custom Build Step' contains actions that auto-generate header files, these actions should be executed (by enabling this setting) before starting the project's analysis. If, however, this step performs actions related to linking, for instance, such actions are irrelevant to code analysis. The 'Custom Build Step' actions are specified at the project level and are executed by PVS-Studio during the initial scan of the project file tree. If this setting is enabled and execution results in a non-zero exit code, analysis of the corresponding project file will not be started.
Enabling this option automatically performs a checkout through the Team Foundation Version Control tool when editing files containing suppressed analyzer warnings (.suppress files). It does not affect work with projects that are not managed by the TF version control system or not added to the Visual Studio workspace.
Additional information, in case if it is available (including the information about errors), will be shown in the PVS-Studio window.
This option is relevant only when working from the Visual Studio IDE.
Marking a message as a False Alarm requires modifying the source code files. By default, the analyzer saves each source file after every such mark. However, if such frequent saving is undesirable (for example, if the files are stored on a different machine on the LAN), it can be disabled with this setting.
Exercise caution when modifying this setting: not saving files after marking false alarms in them can lead to a loss of work if the IDE is closed.
Enables the display of messages marked as 'False Alarms' in the PVS-Studio output window. The option takes effect immediately, without re-running the analysis. When it is set to 'true', an 'FA' indicator showing the number of false alarms becomes visible on the output window panel.
This setting lets you select the language used for the integrated help on diagnostic messages (opened by clicking the error code in the PVS-Studio output window) and for the online documentation (the PVS-Studio -> Help -> Open PVS-Studio Documentation (html, online) menu command), which are also available on our site.
This setting will not change the language of IDE plug-in's interface and messages produced by the analyzer.
This setting controls notifications about PVS-Studio analyzer operations. If the PVS-Studio output window contains error messages after the analysis (messages can be concealed by various filters: as false alarms, by the names of the files being verified, and so on; such messages will not appear in the PVS-Studio window), the analyzer informs you about them with a popup message in the Windows notification area (system tray). A single mouse click on this message or on the PVS-Studio tray icon opens the output window containing the messages the analyzer found.
This setting defines the message display levels activated in the PVS-Studio Output window for incremental analysis results. Setting a display level depth here (Level 1 only; Levels 1 and 2; or Levels 1, 2 and 3) automatically activates these display levels on each incremental analysis run. The "Preserve_Current_Levels" value, on the other hand, preserves the existing display settings.
This setting can be handy when periodically combining the incremental and regular analysis modes: accidentally disabling, for example, level 1 diagnostics while reviewing a large analysis log would also conceal part of the subsequent incremental analysis log. Since incremental analysis operates in the background, such a situation could lead to missed positives on existing issues in the project source code.
This setting selects the tracing mode (logging of the program's execution path) for the PVS-Studio IDE extension packages (the plug-ins for Visual Studio). There are several tracing verbosity levels (Verbose being the most detailed). When tracing is enabled, PVS-Studio automatically creates a log file with the 'log' extension in the AppData\PVS-Studio directory (for example, c:\Users\admin\AppData\Roaming\PVS-Studio\PVSTrace2168_000.log). Each running IDE process uses a separate file to store its logging results.
This option enables automatic import of settings (xml files) from the '%AppData%\PVS-Studio\SettingsImports\' directory. The settings are imported on each update of the stored settings, i.e. when Visual Studio or the PVS-Studio command line is started, when the settings are reset, etc. During the import, flag-style options (true\false) and all options containing a single value (a string, for example) are overwritten by the settings from SettingsImports. Options containing several values (for example, excluded directories) are merged.
If the SettingsImports folder contains several xml files, these files will be applied to the current settings in a sequential manner, according to their names.
By default, PVS-Studio offers to save the report file (.plog) in the same folder as the current solution file.
Modifying this setting restores the usual behavior of Windows file dialogs: the dialog remembers the last folder opened in it and uses that folder as the initial one.
This setting specifies whether the 'Save log' confirmation prompt is displayed before starting the analysis or loading another log file when the output window already contains new, unsaved, or modified analysis results. Setting the option to 'Yes' enables automatic saving of analysis results to the current log file (after it has been selected once in the 'Save File' dialog). Setting it to 'No' makes the IDE plug-in discard any analysis results. The default value, 'Ask_Always', displays the save prompt each time, letting the user make the choice himself.
By default, PVS-Studio produces diagnostic messages containing absolute paths to the files being verified. This setting can be used to specify the 'root' part of the path, which is replaced with a special marker when the path to a file in a diagnostic message starts with this 'root'. For example, the absolute path C:\Projects\Project1\main.cpp is replaced with the relative path |?|Project1\main.cpp if 'C:\Projects\' was specified as the 'root'.
When handling a PVS-Studio log containing messages with paths in this relative format, the IDE plug-in automatically replaces |?| with this setting's value. This setting therefore lets you handle a PVS-Studio report on any local machine with access to the verified sources, regardless of where the sources are located in the file system.
A detailed description of the mode is available here.
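The substitution described above can be sketched as two small functions: one strips the configured 'root' when producing a portable path, and one expands the |?| marker back on another machine. This models the behavior only; it is not the plug-in's code:

```cpp
#include <string>

// Replace the configured 'root' prefix with the |?| marker.
std::string to_portable(const std::string &path, const std::string &root)
{
    if (path.compare(0, root.size(), root) == 0)
        return "|?|" + path.substr(root.size());
    return path;  // path is outside the root: left as-is
}

// Expand the |?| marker back using the local machine's root.
std::string from_portable(const std::string &path, const std::string &root)
{
    if (path.compare(0, 3, "|?|") == 0)
        return root + path.substr(3);
    return path;
}
```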
This setting enables or disables using the path to the folder containing the solution (*.sln) file as the 'SourceTreeRoot' parameter.
Controls whether the analysis run statistics will be saved to '%AppData%\PVS-Studio\Statistics' folder. The statistics can be reviewed in the 'PVS-Studio|Analysis Statistics...' dialog.
Enabling this option automatically loads the analyzer report generated by Unreal Engine project analysis into the PVS-Studio output window.
This option is relevant only for working in Visual Studio IDE.
Note. To install the analyzer on Windows operating systems, you can use the installer available on the analyzer download page. Windows installer supports installation in both graphical and unattended (command-line installation) modes.
The PVS-Studio C# analyzer requires a number of additional packages. Depending on how PVS-Studio C# is installed, these dependencies are either installed automatically by the package manager or need to be installed manually.
The analyzer requires .NET Core SDK 3.1 installed on a machine. Instructions for adding the .NET Core repository to various Linux distributions can be found here.
The .NET Core SDK for macOS can be downloaded from this page.
Note. When installing pvs-studio-dotnet via the package manager on Linux, the version of the .NET Core SDK required for the analyzer will be installed automatically, but the .NET Core repository must first be added manually.
The PVS-Studio C# analyzer requires the presence of the PVS-Studio C++ analyzer (pvs-studio) to work.
Note. When installing the PVS-Studio C# analyzer package (pvs-studio-dotnet) via the package manager, the C++ analyzer package (pvs-studio) will be installed automatically and you can skip this step.
When installing the C# analyzer via unpacking the archive, you must also install the C++ analyzer (pvs-studio). The C++ analyzer must be installed in the following directories:
Instructions for installing pvs-studio are available in the corresponding sections of the documentation: Linux; macOS.
Installing from the repository is the recommended method that allows you to automatically install the necessary dependencies and get updates.
wget -q -O - https://files.viva64.com/etc/pubkey.txt | \
sudo apt-key add -
sudo wget -O /etc/apt/sources.list.d/viva64.list \
https://files.viva64.com/etc/viva64.list
sudo apt-get update
sudo apt-get install pvs-studio-dotnet
wget -O /etc/yum.repos.d/viva64.repo \
https://files.viva64.com/etc/viva64.repo
yum update
yum install pvs-studio-dotnet
wget -q -O /tmp/viva64.key https://files.viva64.com/etc/pubkey.txt
sudo rpm --import /tmp/viva64.key
sudo zypper ar -f https://files.viva64.com/rpm viva64
sudo zypper update
sudo zypper install pvs-studio-dotnet
Direct links to download packages / archives are available on the download page. The installation / unpacking commands are given below.
sudo gdebi pvs-studio-dotnet-VERSION.deb
or
sudo apt-get -f install pvs-studio-dotnet-VERSION.deb
sudo dnf install pvs-studio-dotnet-VERSION.rpm
or
sudo zypper install pvs-studio-dotnet-VERSION.rpm
or
sudo yum install pvs-studio-dotnet-VERSION.rpm
or
sudo rpm -i pvs-studio-dotnet-VERSION.rpm
tar -xzf pvs-studio-dotnet-VERSION.tar.gz
Installation commands:
brew install viva64/pvs-studio/pvs-studio
brew install viva64/pvs-studio/pvs-studio-dotnet
Update commands:
brew upgrade pvs-studio
brew upgrade pvs-studio-dotnet
The command to unpack the archive:
tar -xzf pvs-studio-dotnet-VERSION.tar.gz
To enter a license, use the following command:
pvs-studio-analyzer credentials NAME XXXX-XXXX-XXXX-XXXX
Analyzer usage is described in the corresponding section of the documentation.
PVS-Studio analyzer can be used with JetBrains Rider IDE as a plugin providing a convenient GUI for analyzing projects and individual files as well as managing diagnostic messages.
PVS-Studio plugin for Rider can be installed from the official JetBrains plugin repository or from the repository on our website. Another way to install it is by using the PVS-Studio installer for Windows, which is available on our download page.
To install PVS-Studio plugin from the official JetBrains repository, open the settings window by clicking 'File -> Settings -> Plugins', choose the Marketplace tab, and enter 'PVS-Studio' in the search bar. The 'PVS-Studio for Rider' plugin will appear in the search results:
Click 'Install' next to the plugin name. Once the installation is finished, click Restart IDE.
After restarting the IDE, you can use PVS-Studio plugin to analyze your code.
In addition to the official JetBrains repository, PVS-Studio plugin for Rider is also available from PVS-Studio's own repository. To install the plug-in from there, you first need to add this repository to Rider IDE. To do this, click on the 'File -> Settings -> Plugins' command to open the plugin installation window.
In that window, click the gear icon in the top-right corner and select 'Manage Plugin Repositories' in the drop-down list.
In the opened window, add the http://files.viva64.com/java/pvsstudio-rider-plugins/updatePlugins.xml path, and click OK.
The final installation step is the same as in the previous scenario of installing the plugin from the official repository: open the Marketplace tab and enter "PVS-Studio" in the search box. Select the plugin 'PVS-Studio for Rider' in the search results, click 'Install', and restart the IDE.
To be able to use PVS-Studio in the Rider IDE, you will also need to install the kernel of the C# analyzer and its dependencies in addition to the plugin itself.
If you have installed the plugin using the PVS-Studio installer for Windows, then all the required components have been already installed on your system, so you can skip this step.
If you have installed the plugin separately (by adding the repository or from the official JetBrains repository), you first need to download and install the PVS-Studio C# analyzer core for the relevant platform from here.
To enter your PVS-Studio license, open any project in Rider and then open the plugin settings window: 'Tools -> PVS-Studio -> Settings':
Choose the Registration tab.
Fill in the 'User Name' and 'Serial Number' fields with the corresponding values from your license.
If the license you have entered is correct, the 'Invalid License' label will be replaced with 'Valid License' and the license expiration date will appear in the 'Expires' field. Click 'Save' to confirm and save the license.
The Settings window of the PVS-Studio plugin comprises several tabs. Let's discuss each in detail.
Settings – settings of the PVS-Studio analyzer core. Hover the mouse pointer over the option name to see a pop-up tooltip for that option.
Warnings – a list of all the diagnostic rules supported by the analyzer. Unchecking a diagnostic rule prevents all warnings associated with it from being displayed in the analyzer's output window.
Excludes – contains masks for filenames and paths to be excluded from analysis.
Registration – contains information about the current license.
JetBrains Rider can open projects in two modes: the project itself, or the project's source folder. When opening a project, Rider can open both individual 'csproj' files and a solution file containing one or more project files.
With a project or solution opened, you can choose to check:
To analyze the current project or solution, choose the 'Tools -> PVS-Studio -> Check Current Solution/Project' menu item.
To analyze an opened file, choose the 'Tools -> PVS-Studio -> Check Open File' command.
You can also select several items in the 'Explorer' window using Ctrl/Shift + left mouse click and then choose the 'Tools -> PVS-Studio -> Check Selected Items' command:
Another way to do this is to open the drop-down menu by right-clicking in the 'Explorer' window and selecting 'Check Selected Items' menu item:
In the examples above, all of the *.cs and *.csproj files from the folders Core and Controllers, as well as the Startup.cs file will be analyzed.
When a project folder is opened in Rider, PVS-Studio doesn't know which project, file, or solution exactly should be analyzed, so the 'Check Current Solution/Project' and 'Check Open File' menu items are inactive. The only available option is to check the solution through the 'Tools -> PVS-Studio -> Check Selected Items' command:
Another way to achieve this is to open the drop-down menu by right-clicking in the 'Explorer' window and selecting 'Check Selected Items' menu item.
The analysis results produced by PVS-Studio analyzer will appear in the table inside the 'PVS-Studio' window:
The table is made up of 7 columns (from left to right: Favorite, Code, CWE, Message, Position, Projects, False Alarms). The analyzer messages can be sorted by any column; to change the sorting order, click on the column heading. The leftmost column (Favorite) can be used to bookmark warnings so that marked messages can be found quickly by sorting on the Favorite column.
Clicking a warning code in the Code / CWE columns opens a webpage in your browser with a detailed description of the warning or potential vulnerability. The Message column provides brief descriptions of the warnings. The Position column contains a list of files the warning refers to. The Projects column lists the projects containing the file the warning refers to. The rightmost column, False Alarms, contains warnings marked as false positives. Managing false positives is described in detail in the corresponding section below.
Double clicking on a table row opens a file at the line the warning was triggered at:
There are also two arrow buttons above the table – these can be used to move between the warnings and open the associated files in the source code editor. To the right of the arrow buttons, a number of filter buttons are available, which allow you to sort the warnings by severity level: High, Medium, Low, and Fails (failures of the analyzer itself).
Clicking the search icon opens an additional panel with text fields for searching across the Code, CWE, Message, and Position columns. Each field is a string filter allowing you to filter the messages by the text you have entered.
The button with three horizontal lines across it can be found in the top-left corner above the table. Clicking it opens an additional settings panel:
Clicking the gear icon opens the plugin's main settings window, which is also available at 'Tools -> PVS-Studio -> Settings'.
Sometimes you may get a warning pointing out some spot in your code, but you know that there is no error in that spot. Such a warning is called a false positive.
PVS-Studio Rider plugin allows you to mark the analyzer's messages as false positives to prevent them from appearing in future checks.
To mark false positives, select one or more warnings in the 'PVS-Studio' table, right-click on any row to open the drop-down menu, and select the 'Mark selected messages as False Alarms' command:
The analyzer will add a special comment of the '//-Vxxx' pattern to the line the warning was triggered by, where xxx is the number of the PVS-Studio diagnostic. You can also add such comments manually.
To have previously marked false warnings displayed in the table, enable the 'Show False Alarms' option at 'Tools -> PVS-Studio -> Settings':
Use the 'Remove False Alarm marks from selected messages' drop-down menu item to unmark selected warnings as false positives.
To learn more about suppressing analyzer-generated warnings and other ways of suppressing warnings by using configuration files (.pvsconfig) added to the project, see the Suppression of false alarms documentation section.
Getting started with static analysis and using it regularly may be difficult because of the many warnings triggered by legacy code. Such code is typically well tested and stable, so fixing every warning in it isn't necessary, especially since on a large code base that could take a long time. What's more, warnings issued on legacy code distract you from warnings issued on newly written code still in development.
To solve this problem and start using static analysis regularly without delay, PVS-Studio allows you to "turn off" warnings in the legacy code. To do that, select 'Tools -> PVS-Studio -> Suppress All Messages' command or click the 'Suppress All Messages' button on the PVS-Studio window toolbar. After that, all messages will be added to special *.suppress files, which is what the suppression mechanism is based on. The next time you run the analysis, the warnings added to these *.suppress files will be excluded from the analyzer's report. This suppression mechanism is quite flexible and is able to "track" suppressed messages even after you modify or move the involved code fragments.
The *.suppress files are created at the project level, in the same location where the project file is stored, but you can also add them to any project or solution (for example, if you want to use one suppress file for several projects or an entire solution). To get those warnings back in the report, delete the suppress files associated with the affected projects.
To learn more about warning suppression and to see the guide on handling *.suppress files, see the Mass suppression of analyzer warnings documentation section.
Right-clicking on a warning in the PVS-Studio window table opens a drop-down menu, which contains additional items for managing selected warnings.
Clicking the 'Mark selected messages as False Alarms / Remove false alarm masks' item marks selected warnings as false positives by adding a special comment to the lines of code they refer to (see the section above on managing false positives).
The 'Exclude from analysis' item is used to add the full or partial pathname of the file containing a warning to the list of folders excluded from analysis. Every file whose pathname matches the filter will be excluded from the analysis.
Analysis results can be saved or loaded using the items of the 'Tools -> PVS-Studio' submenu:
The 'Open Report' command opens the .json report file and loads its contents into the table in the 'PVS-Studio' output window.
The 'Recent Reports' submenu contains a list of recently opened reports. Clicking an item on this list opens that file (given that it still exists at that location) and loads its contents into the table in the 'PVS-Studio' window.
Selecting the 'Save Report' item saves all the messages from the table (even the filtered ones) to a .json report file. If the current list of messages has never been saved before, you will be prompted for a name and location to store the report file to.
Similarly, the 'Save Report As' item is used to save all the warnings from the table (even the filtered ones) to a .json file and always prompts you to specify the location to store the report file to.
The analyzer sometimes fails to analyze a source code file completely.
There may be three reasons for that:
1) An error in code
Suppose there is a template class or template function with an error. If this function is not instantiated, the compiler fails to detect some errors in it. In other words, such an error does not hamper compilation. PVS-Studio tries to find potential errors even in classes and functions that are not used anywhere. If the analyzer cannot parse some code, it generates the V001 warning. Consider a code sample:
template <class T>
class A
{
public:
void Foo()
{
// the ';' is missing
int x
}
};
Visual C++ will compile this code if the A class is not used anywhere. But it contains an error, which hampers PVS-Studio's work.
2) An error in Visual C++'s preprocessor
The analyzer relies on Visual C++'s preprocessor. From time to time, this preprocessor makes mistakes when generating the preprocessed "*.i" files, and as a result the analyzer receives incorrect data. Here is a sample:
hWnd = CreateWindow (
wndclass.lpszClassName, // window class name
__T("NcFTPBatch"), // window caption
WS_OVERLAPPED | WS_CAPTION | WS_SYSMENU | WS_MINIMIZEBOX,
// window style
100, // initial x position
100, // initial y position
450, // initial x size
100, // initial y size
NULL, // parent window handle
NULL, // window menu handle
hInstance, // program instance handle
NULL); // creation parameters
if (hWnd == NULL) {
...
Visual C++'s preprocessor turned this code fragment into:
hWnd = // window class name// window caption// window style//
initial x position// initial y position// initial x size//
initial y size// parent window handle// window menu handle//
program instance handleCreateWindowExA(0L,
wndclass.lpszClassName, "NcFTPBatch", 0x00000000L | 0x00C00000L |
0x00080000L | 0x00020000L, 100, 100,450, 100, ((void *)0),
((void *)0), hInstance, ((void *)0)); // creation parameters
if (hWnd == NULL) {
...
It turns out that we have the following code:
hWnd = // a long comment
if (hWnd == NULL) {
...
This code is incorrect, and PVS-Studio will inform you about it. Strictly speaking, this is a defect in PVS-Studio, and we will eliminate it in time.
Note that Visual C++ compiles this code successfully, because the algorithms it uses for compilation and for generating the preprocessed "*.i" files are different.
3) Defects inside PVS-Studio
On rare occasions PVS-Studio fails to parse complex template code.
Whatever the reason for generating the V001 warning, it is not crucial. Usually an incomplete parse of a file has little effect on the analysis: PVS-Studio simply skips the function/class with the error and continues analyzing the file, so only a small code fragment is left unanalyzed.
The analyzer can sometimes issue the message "Some diagnostic messages may contain incorrect line number". This occurs when it encounters multiline #pragma directives, on all supported versions of Microsoft Visual Studio.
Any code analyzer works with preprocessed files, i.e. files in which all macros (#define) have been expanded and all included files (#include) have been substituted. The preprocessed file also carries information about the substituted files and their positions, which means it contains the line numbers.
Preprocessing is always carried out, and to the user the procedure is transparent. Sometimes the preprocessor is part of the code analyzer; sometimes (as with PVS-Studio) an external preprocessor is used. PVS-Studio uses the Microsoft Visual C++ or Clang preprocessor: the analyzer starts the command-line compiler (cl.exe/clang.exe) for each C/C++ file being processed and generates a preprocessed file with the ".i" extension.
Here is one situation in which the message "Some diagnostic messages may contain incorrect line number" is issued and diagnostic messages are positioned incorrectly. It happens because of multiline #pragma directives of a special kind. Here is an example of correct code:
#pragma warning(push)
void test()
{
int a;
if (a == 1) // PVS-Studio will inform about the error here
return;
}
If the #pragma directive is written in two lines, PVS-Studio will point to an error in the wrong fragment (there will be a shift of one line):
#pragma \
warning(push)
void test()
{
int a; // PVS-Studio will show the error here,
if (a == 1) // actually, however, the error should be here.
return;
}
However, in another case there will be no error caused by the multiline #pragma directive:
#pragma warning \
(push)
void test()
{
int a;
if (a == 1) // PVS-Studio will inform about the error in this line
return;
}
Our recommendation here is either not to use multiline #pragma directives at all, or to write them in a form that can be processed correctly.
The code analyzer tries to detect failures in line numbering in the processed file. The mechanism is heuristic and cannot guarantee that diagnostic messages are positioned correctly in the program code. However, if it can determine that a particular file contains multiline pragmas and that a positioning error exists, it issues the message "Some diagnostic messages may contain incorrect line number".
This mechanism works in the following way.
The analyzer opens the source C/C++ file and searches for the very last token. Only tokens at least three characters long are considered, in order to ignore closing parentheses and the like. E.g., for the following code the "return" operator will be considered the last token:
01 #include "stdafx.h"
02
03 int foo(int a)
04 {
05 assert(a >= 0 &&
06 a <= 1000);
07 int b = a + 1;
08 return b;
09 }
Having found the last token, the analyzer determines the number of the line that contains it; in this case it is line 8. It then searches for the last token in the preprocessed file. If the two last tokens do not coincide, then most likely a macro at the end of the file was not expanded; the analyzer cannot tell whether the lines are arranged correctly and ignores the situation. However, this occurs very rarely: the last tokens almost always coincide in the source and preprocessed files. When they do, the analyzer determines the line number at which the token is situated in the preprocessed file.
Thus, we have the line numbers at which the last token is located in the source file and in the preprocessed file. If these line numbers do not coincide, there has been a failure in line numbering, and the analyzer notifies the user with the message "Some diagnostic messages may contain incorrect line number".
Note that if a multiline #pragma directive is situated in the file after all the flagged code fragments, the line numbers for the found errors will be correct. Even though the analyzer issues the message "Some diagnostic messages may contain incorrect line number for file", this does not prevent you from examining the diagnostic messages it produces.
Please note that this problem may distort the output of the code analyzer, although it is not an error of PVS-Studio itself.
Message V003 means that a critical error occurred in the analyzer. It is most likely that in this case you will not see any warning messages concerning the file being checked at all.
Although the V003 message is very rare, we would appreciate your help in fixing the issue that caused it. To do this, please send the file with the stack trace ('*.PVS-Studio.stacktrace.txt'), the preprocessed i-file that caused the error ('*.PVS-Studio.i'), and its corresponding configuration file ('*.PVS-Studio.cfg') to support@viva64.com.
Note. A preprocessed i-file is generated from a source file (for example, 'file.cpp') when the preprocessor finishes its work. To get this file you should set the option 'RemoveIntermediateFiles' to 'False' on the tab 'Common Analyzer Settings' in PVS-Studio settings and restart the analysis of this one file.
If the 'pvs-studio-analyzer' utility is used to analyze the project, then the internal files can be obtained using the '--dump-files' flag.
After that you can find the corresponding i-file in the project folder (for example, 'file.PVS-Studio.i') and its corresponding 'file.PVS-Studio.cfg'.
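Sketched as a command sequence (assuming the typical compilation-trace workflow of the pvs-studio-analyzer utility; the 'make' command and log name are placeholders for your own build):

```shell
# Trace the build to learn the compilation parameters, then analyze;
# --dump-files keeps the intermediate *.PVS-Studio.i and
# *.PVS-Studio.cfg files on disk next to the sources.
pvs-studio-analyzer trace -- make
pvs-studio-analyzer analyze --dump-files -o project.log
```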
When detecting 64-bit issues, the analyzer must always check the 64-bit configuration of a project, because it is in the 64-bit configuration that data types have their correct sizes, branches like "#ifdef WIN64" are selected, and so on. Trying to detect 64-bit issues in a 32-bit configuration is incorrect.
Still, checking the 32-bit configuration can sometimes be helpful: for example, when there is no 64-bit configuration yet and you need to estimate the scope of work required to port the code to a 64-bit platform. In this case you can check the project in 32-bit mode, which shows approximately how many diagnostic warnings the analyzer will generate for the 64-bit configuration. Our experiments show that not all diagnostic warnings are generated in 32-bit mode, but about 95% of them coincide with those of the 64-bit mode, which is enough to estimate the necessary scope of work.
Note! Even if you correct all the errors detected when checking the 32-bit configuration of a project, you cannot consider the code fully compatible with 64-bit systems; you still need to perform the final check in the 64-bit configuration.
The V004 message is generated only once per project checked in a 32-bit configuration, and refers to the first file analyzed in the project. This is done to avoid flooding the report with similar warnings.
This issue is caused by a mismatch between the platform configurations declared for the selected project in the solution file (Vault.sln) and the platform configurations declared in the project file itself.
For example, the solution file may contain a line of this kind for the project in question:
{F56ECFEC-45F9-4485-8A1B-6269E0D27E49}.Release|x64.ActiveCfg = Release|x64
However, the project file itself may lack the declaration of the Release|x64 configuration. Therefore, when trying to check this project, PVS-Studio is unable to locate the 'Release|x64' configuration. For such a case, the IDE is expected to automatically generate the following line in the solution file:
{F56ECFEC-45F9-4485-8A1B-6269E0D27E49}.Release|x64.ActiveCfg = Release|Win32
In an automatically generated solution file, the solution's active platform configuration (Release|x64.ActiveCfg) is set equal to one of the project's existing configurations (in this particular case, Release|Win32). Such a situation is expected, and PVS-Studio handles it correctly.
Message V006 is generated when the analyzer cannot process a file within a certain time period and aborts. This can happen for two reasons.
The first reason is an error inside the analyzer that prevents it from parsing some code fragment. This happens rather seldom, yet it is possible. Although the V006 message appears rarely, we would appreciate your help in eliminating the issue that causes it. If you have encountered it on a C/C++ project, please send the preprocessed i-file where the issue occurs and its corresponding launch configuration files (*.PVS-Studio.cfg and *.PVS-Studio.cmd) to support@viva64.com.
Note. A preprocessed i-file is generated from a source file (for example, file.cpp) when the preprocessor finishes its work. To get this file you should set the option RemoveIntermediateFiles to False on the tab "Common Analyzer Settings" in PVS-Studio settings and restart the analysis of this one file. After that you can find the corresponding i-file in the project folder (for example, file.i and its corresponding file.PVS-Studio.cfg and file.PVS-Studio.cmd).
The second possible reason is that, although the analyzer could process the file correctly, it does not have enough time to do so because it gets too few system resources due to high processor load. By default, the number of threads spawned for analysis equals the number of processor cores: for example, on a four-core machine the tool analyzes four files at once. Each instance of the analyzer's process requires about 1.5 GB of memory. If the computer does not have enough memory, the tool starts using the swap file, and the analysis runs slowly and fails to fit into the required time period. You may also encounter this problem when other "heavy" applications are running on the computer simultaneously with the analyzer.
To solve this issue, you may directly restrict the number of cores to be used for analysis in the PVS-Studio settings (ThreadCount option on the "Common Analyzer Settings" tab).
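When using the command-line analyzer instead of the IDE, parallelism can be capped in a similar way; the sketch below assumes the pvs-studio-analyzer utility's usual '-j' option, and the log name is a placeholder:

```shell
# Cap the analysis at two parallel processes (roughly 1.5 GB each),
# leaving memory and CPU headroom for other applications.
pvs-studio-analyzer analyze -j2 -o project.log
```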
The V007 message appears when a project that utilizes Microsoft's C++/CLI (Common Language Infrastructure) specification and contains one of the deprecated /clr compiler switches is selected for analysis. Although you may continue analyzing such a project, PVS-Studio does not officially support these compiler flags, so some analyzer warnings may be incorrect.
PVS-Studio was unable to start the analysis of the designated file. This message indicates that the external C++ preprocessor, started by the analyzer to create a preprocessed source file, exited with a non-zero error code. The preprocessor's stderr output may contain a detailed description of the error, which can be viewed in the PVS-Studio Output window for this file.
There could be several reasons for the V008 error:
1) The source code is not compilable
If the C++ source code is not compilable for some reason (for example, a missing header file), the preprocessor exits with a non-zero error code, and a "fatal compilation error" type message is written to stderr. PVS-Studio cannot start the analysis unless the C++ file has been successfully preprocessed. To resolve this error, ensure that the file being analyzed compiles.
2) The preprocessor's executable file is damaged or locked
This situation is possible when the preprocessor's executable file is damaged or is locked by antivirus software. In this case, the PVS-Studio Output window may also contain error messages of this kind: "The system cannot execute the specified program". To resolve it, verify the integrity of the preprocessor's executable and lower the security policy level of the antivirus software.
3) One of PVS-Studio's auxiliary command files is locked
The PVS-Studio analyzer does not launch the C++ preprocessor directly, but through its own pre-generated command files. Under strict system security policies, antivirus software could block the correct initialization of the C++ preprocessor. This can also be resolved by easing the system security policies toward the analyzer.
4) The file paths contain non-Latin characters that are not displayed properly in the current console code page.
PVS-Studio uses 'preprocessing.cmd' batch file (located in the PVS-Studio installation directory) to start preprocessing. In this batch file, you can set the correct code page (using chcp).
You entered a free license key allowing you to use the analyzer in free mode. To be able to run the tool with this key, you need to add special comments to your source files with the following extensions: .c, .cc, .cpp, .cp, .cxx, .c++, .cs. Header files do not need to be modified.
You can insert the comments manually or by using a special open-source utility available at GitHub: how-to-use-pvs-studio-free.
Types of comments:
Comments for students (academic license):
// This is a personal academic project. Dear PVS-Studio, please check it.
// PVS-Studio Static Code Analyzer for C, C++ and C#: http://www.viva64.com
Comments for open-source non-commercial projects:
// This is an open source non-commercial project. Dear PVS-Studio, please check it.
// PVS-Studio Static Code Analyzer for C, C++ and C#: http://www.viva64.com
Comments for individual developers:
// This is an independent project of an individual developer. Dear PVS-Studio, please check it.
// PVS-Studio Static Code Analyzer for C, C++ and C#: http://www.viva64.com
Some developers might not want additional comment lines unrelated to the project in their files. That is their right, and they can simply choose not to use the analyzer; another option is to purchase a commercial license and use the tool without any limitations. We consider adding these comments your way of thanking us for the granted license and of helping us promote our product.
If you have any questions, please contact our support.
The V010 warning appears upon an attempt to check .vcxproj projects with the configuration type 'makefile' or 'utility'. PVS-Studio doesn't support such projects either via the plugin or via the command-line version of the analyzer, because in makefile/utility projects the build information the analyzer needs (in particular, the compilation parameters) is not available.
If analysis of such projects is needed, please use the compiler monitoring system or direct integration of the analyzer. You can also disable this warning on the PVS-Studio settings page (Detectable errors (C++), Fails list).
A #line directive is generated by the preprocessor and specifies the filename and line number that a particular line in the preprocessed file refers to.
This is demonstrated by the following example.
#line 20 "a.h"
void X(); // Function X is declared at line 20 in file a.h
void Y(); // Function Y is declared at line 21 in file a.h
void Z(); // Function Z is declared at line 22 in file a.h
#line 5 "a.cpp"
int foo; // Variable foo is declared at line 5 in file a.cpp
int X() { // Definition of function X starts at line 6 in file a.cpp
return 0; // Line 7
} // Line 8
#line directives are used by various tools, including the PVS-Studio analyzer, to navigate the file.
Sometimes source files (*.c; *.cpp; *.h, etc.) happen to include #line directives as well. This may happen, for example, when the file is generated automatically by some code-generating software (example).
When preprocessing such a file, those #line directives will be added to the resulting *.i file. Suppose, for example, that we have a file named A.cpp:
int a;
#line 30 "My.y"
int b = 10 / 0;
After the preprocessing, we get the file A.i with the following contents:
#line 1 "A.cpp"
int a;
#line 30 "My.y"
int b = 10 / 0;
This makes correct navigation impossible. On detecting a division by zero, the analyzer will report the error as occurring at line 30 in the My.y file. Technically speaking, the analyzer is correct, as the error is indeed the result of incorrect code in the My.y file. However, with the navigation broken, you will not be able to view the My.y file, since the project may simply have no such file. In addition, you will never know that the division-by-zero error actually occurs at line 3 in the A.cpp file.
To fix this issue, we recommend deleting all #line directives in the source files of your project. These directives typically get there by accident and only hinder the work of various tools, such as code analyzers, rather than help.
The V011 diagnostic was developed to detect such unwanted #line directives in the source code. The analyzer reports the first 10 #line directives in a file; reporting more makes no sense, since you can easily find and delete the remaining ones using the search option of your editor.
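An ordinary text search is enough to locate every such directive; here is a self-contained sketch using grep (the directory and file names are made up for the demonstration):

```shell
# Create a sample generated file containing a #line directive...
mkdir -p /tmp/line_demo
cat > /tmp/line_demo/gen.cpp <<'EOF'
int a;
#line 30 "My.y"
int b;
EOF
# ...and list every #line directive with its file name and line number.
grep -rn '^#line' /tmp/line_demo
```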
This is the fixed code:
int a;
int b = 10 / 0;
After the preprocessing, you get the following *.i file:
#line 1 "A.cpp"
int a;
int b = 10 / 0;
The navigation is fixed, and the analyzer will correctly report that the division by zero occurs at line 2 in the A.cpp file.
Some of the false positive suppression methods allow complete disabling of diagnostics. As a result, such warnings will not be merely marked as false positives in the analysis report but may never appear there in the first place.
To find out which mechanisms exactly were used to disable diagnostics, you can turn on special messages to be included in the log.
pvs-studio-analyzer analyze ... --cfg source.cpp.PVS-Studio.cfg
With this option enabled, the analyzer will include V012 messages in its output to provide information about the exact spots where diagnostics were turned off. PVS-Studio's IDE plugins support navigation by those spots in the source files and rule configuration files (.pvsconfig). The paths to configuration files storing ignore rules will also be added to the log as V012 messages.
A V051 message indicates that the C# project loaded into the analyzer contains compilation errors. These usually involve unknown data types, namespaces, and assemblies (dll files), and generally occur when you try to analyze a project whose dependent assemblies from NuGet packages are absent on the local machine, or whose third-party libraries are absent among the projects of the current solution.
Despite this error, the analyzer will try to scan the part of the code that doesn't contain unknown types, but results of such analysis may be incomplete, as some of the messages may be lost. The reason is that most diagnostics can work properly only when the analyzer has complete information about all the data types contained in the source files to be analyzed, including the types implemented in third-party assemblies.
Even if rebuilding of dependency files is provided for in the build scenario of the project, the analyzer won't automatically rebuild the entire project. That's why we recommend that, before scanning it, you ensure that the project is fully compilable, including making sure that all the dependency assemblies (dll files) are present.
Sometimes the analyzer may mistakenly generate this message on a fully compilable project with all the dependencies present. It may happen, for example, when the project uses a non-standard MSBuild scenario - say, csproj files import some additional .props and .targets files. In this case, you can ignore the V051 message or turn it off in the analyzer settings.
If you wish to learn which compiler errors are causing the V051 error, start the analysis of your projects with the analyzer's cmd version, and add the '--logCompilerErrors' flag to its arguments (in a single line):
PVS-Studio_Cmd.exe -t MyProject.sln -p "Any CPU" -c "Debug"
--logCompilerErrors
The appearance of a V052 message means that a critical error has occurred inside the analyzer. Most likely, several source files will not be analyzed.
You can get additional information about this error from two sources: the analyzer report file (plog) and standard output stream of error messages stderr (when you use the command line version).
If you are using the Visual Studio IDE or the Standalone application, the error stack is displayed in the PVS-Studio window. The stack will be recorded at the very beginning of the plog file. The stack is split into substrings, and each of them is recorded and displayed as a separate error without a number.
If you are working from the command line, you can check the return code of the command-line version to find out whether an exception occurred, and then examine the plog file without opening it in the Visual Studio IDE or the Standalone application. For this purpose, the report can be converted, for example, to a text file using the PlogConverter utility. The return codes of the command-line version are described in the section "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line"; the PlogConverter utility is described in "Managing the Analysis Results (plog file)".
Although the V052 message is quite rare, we would appreciate your help in fixing the issue that caused it. To do so, please send the exception stack from the PVS-Studio output window (or the message from stderr, if the command-line version was used) to support@viva64.com.
A V061 message indicates that an error related to the analyzer's functioning has occurred.
It could be an unexpected exception in the analyzer, failure to build a semantic model of the program, and so on.
In this case, please email us (support@viva64.com) and attach the text files from the .PVS-Studio directory (you can find them in the project directory) so that we could fix the bug as soon as possible.
In addition, you can use the 'verbose' parameter to tell the analyzer to save additional information to the .PVS-Studio directory while running. That information could also be helpful.
Maven plugin:
<verbose>true</verbose>
Gradle plugin:
verbose = true
IntelliJ IDEA plugin:
1) Analyze -> PVS-Studio -> Settings
2) Tab Misc -> uncheck 'Remove intermediate files'
A V062 message means that the plugin has failed to run the analyzer core. This message typically appears when attempting to launch the core with an incorrect Java version. The core can work correctly only with the 64-bit Java version 8 or higher. The analyzer retrieves the path to the Java interpreter from the PATH environment variable by default.
You can also specify the path to the required Java interpreter manually.
Maven plugin:
<javaPath>C:/Program Files/Java/jdk1.8.0_162/bin/java.exe</javaPath>
Gradle plugin:
javaPath = "C:/Program Files/Java/jdk1.8.0_162/bin/java.exe"
IntelliJ IDEA plugin:
1) Analyze -> PVS-Studio -> Settings
2) Tab Environment -> Java executable
If you still cannot launch the analyzer, please email us (support@viva64.com) and attach the text files from the .PVS-Studio directory (you can find it in the project directory). We will try to find a solution as soon as possible.
A V063 message means that the analyzer has failed to check a file in the given time frame (10 minutes by default). Such messages are often accompanied by "GC overhead limit exceeded" messages.
In some cases, this problem can be solved by simply increasing the amount of memory and stack available to the analyzer.
Maven plugin:
<jvmArguments>-Xmx4096m, -Xss256m</jvmArguments>
Gradle plugin:
jvmArguments = ["-Xmx4096m", "-Xss256m"]
IntelliJ IDEA plugin:
1) Analyze -> PVS-Studio -> Settings
2) Tab Environment -> JVM arguments
The amount of memory available by default could be insufficient when analyzing generated code with numerous nested constructs.
You may want to exclude such code from analysis (using the 'exclude' option) so that the analyzer does not waste time checking it.
A V063 message can also appear when the analyzer does not get enough system resources because of high CPU load. It could process the file correctly if given enough time, but the default time frame is too small.
If you are still getting this message, it may be a sign of a bug in the analyzer. In this case, please email us (support@viva64.com) and attach the text files from the .PVS-Studio directory (you can find it in the project directory) together with the code that seems to trigger this error so that we could fix the bug as soon as possible.
The analyzer detected a potential error related to an implicit type conversion performed by the assignment operator "=". The error may consist in an incorrect calculation of the value of the expression to the right of the assignment operator "=".
An example of the code causing the warning message:
size_t a;
unsigned b;
...
a = b; // V101
Converting a 32-bit type to a memsize type is safe in itself, as there is no data loss. For example, you can always store the value of an unsigned variable into a variable of the size_t type. But the presence of this type conversion may indicate a hidden error made earlier.
The first cause of the error on a 64-bit system may be a change in how the expression is evaluated. Consider an example:
unsigned a = 10;
int b = -11;
ptrdiff_t c = a + b; //V101
cout << c << endl;
On a 32-bit system this code will display the value -1, while on a 64-bit system it will display 4294967295. This behavior fully conforms to the C++ type conversion rules, but most likely it indicates an error in real code.
Let's explain the example. According to C++ rules, the expression a + b has the unsigned type and holds the value 0xFFFFFFFFu. On a 32-bit system, ptrdiff_t is a signed 32-bit type; after the value 0xFFFFFFFFu is assigned to a signed 32-bit variable, the variable contains -1. On a 64-bit system, ptrdiff_t is a signed 64-bit type, so the value 0xFFFFFFFFu is represented as is. That is, the value of the variable after the assignment is 4294967295.
The error may be corrected by excluding mixed use of memsize and non-memsize-types in one expression. An example of code correction:
size_t a = 10;
ptrdiff_t b = -11;
ptrdiff_t c = a + b;
cout << c << endl;
A better way to correct the code is to avoid mixing signed and unsigned data types altogether.
The second cause of the error may be an overflow occurring in 32-bit data types. In this case, the error may be located before the assignment operator, and you can detect it only indirectly. Such errors occur in code that allocates large amounts of memory. Consider an example:
unsigned Width = 1800;
unsigned Height = 1800;
unsigned Depth = 1800;
// Real error is here
unsigned CellCount = Width * Height * Depth;
// Here we get a diagnostic message V101
size_t ArraySize = CellCount * sizeof(char);
cout << ArraySize << endl;
void *Array = malloc(ArraySize);
Suppose that we decided to process data arrays larger than 4 GB on a 64-bit system. In this case, the given code allocates a wrong amount of memory. The programmer plans to allocate 5832000000 bytes but gets only 1537032704 instead. This happens because of an overflow occurring while the Width * Height * Depth expression is calculated. Unfortunately, we cannot diagnose the error in the line containing this expression, but we can indirectly indicate its presence by detecting the type conversion in the line:
size_t ArraySize = CellCount * sizeof(char); //V101
To correct the error, you should use types capable of storing the necessary range of values. Note that a correction of the following kind is not appropriate:
size_t CellCount = Width * Height * Depth;
We still have the overflow here. Let's consider two examples of proper code correction:
// 1)
unsigned Width = 1800;
unsigned Height = 1800;
unsigned Depth = 1800;
size_t CellCount =
static_cast<size_t>(Width) *
static_cast<size_t>(Height) *
static_cast<size_t>(Depth);
// 2)
size_t Width = 1800;
size_t Height = 1800;
size_t Depth = 1800;
size_t CellCount = Width * Height * Depth;
You should keep in mind that the error can be located not only earlier in the code but even in another module. Let's give a corresponding example, in which the error consists in an incorrect index calculation when the array's size exceeds 4 GB.
Suppose that the application uses a large one-dimensional array, and the CalcIndex function allows you to address this array as a two-dimensional one.
extern unsigned ArrayWidth;
unsigned CalcIndex(unsigned x, unsigned y) {
return x + y * ArrayWidth;
}
...
const size_t index = CalcIndex(x, y); //V101
The analyzer will warn about the problem in the line: const size_t index = CalcIndex(x, y). But the error is in the incorrect implementation of the CalcIndex function. Taken separately, CalcIndex is absolutely correct: both its input and output values have the unsigned type, and all calculations involve only unsigned types. There are no explicit or implicit type conversions, so the analyzer has no way to detect a logic problem inside CalcIndex itself. The actual error is that the type chosen for the function's result (and possibly for its input values) is wrong: the result must have a memsize type.
Fortunately, the analyzer managed to detect the implicit conversion of the CalcIndex function's result to the size_t type. This allows you to analyze the situation and make the necessary changes to the program. The error may be corrected, for example, as follows:
extern size_t ArrayWidth;
size_t CalcIndex(size_t x, size_t y) {
return x + y * ArrayWidth;
}
...
const size_t index = CalcIndex(x, y);
If you are sure that the code is correct and the array's size will never reach 4 GB, you can suppress the analyzer's warning with an explicit type conversion:
extern unsigned ArrayWidth;
unsigned CalcIndex(unsigned x, unsigned y) {
return x + y * ArrayWidth;
}
...
const size_t index = static_cast<size_t>(CalcIndex(x, y));
In some cases, the analyzer can figure out on its own that an overflow is impossible, and then no message is displayed.
Let's consider a final example related to incorrect shift operations:
ptrdiff_t SetBitN(ptrdiff_t value, unsigned bitNum) {
ptrdiff_t mask = 1 << bitNum; //V101
return value | mask;
}
The expression "mask = 1 << bitNum" is unsafe, because this code cannot set the high-order bits of the 64-bit variable mask to one. If you try to use the SetBitN function to set, for example, the 33rd bit, an overflow occurs during the shift operation and you will not get the expected result.
Additional materials on this topic:
The analyzer found a possible error in pointer arithmetic. The error may be caused by an overflow during the evaluation of the expression.
Let's take up the first example.
short a16, b16, c16;
char *pointer;
...
pointer += a16 * b16 * c16;
The given example works correctly with pointers if the value of the expression "a16 * b16 * c16" does not exceed 'INT_MAX' (2 GB). This code could always work correctly on the 32-bit platform, because the program never allocated large arrays there. On the 64-bit platform, a programmer using this code to work with a large array would be disappointed. Suppose we would like to shift the pointer by 3000000000 bytes, and the variables 'a16', 'b16' and 'c16' hold the values 3000, 1000 and 1000 correspondingly. During the evaluation of the expression "a16 * b16 * c16" all the variables, according to C++ rules, are converted to the int type, and only then is the multiplication performed. The multiplication overflows, and the result is the number -1294967296. This incorrect result is extended to the 'ptrdiff_t' type and applied to the pointer. As a result, we face an abnormal program termination when trying to use the incorrect pointer.
To prevent such errors one should use memsize types. In our case it will be correct to change the types of the variables 'a16', 'b16', 'c16' or to use the explicit type conversion to type 'ptrdiff_t' as follows:
short a16, b16, c16;
char *pointer;
...
pointer += static_cast<ptrdiff_t>(a16) *
static_cast<ptrdiff_t>(b16) *
static_cast<ptrdiff_t>(c16);
It's worth mentioning that pointer arithmetic without memsize types is not always incorrect. Let's examine the following situation:
char ch;
short a16;
int *pointer;
...
int *decodePtr = pointer + ch * a16;
The analyzer does not show a message on it, because the code is correct: there are no calculations that could cause an overflow, and the result of the expression will always be correct both on the 32-bit and on the 64-bit platform.
The analyzer found a possible error related to an implicit conversion of a memsize type to a 32-bit type. The error consists in the loss of the high-order bits of the 64-bit value.
The compiler also diagnoses such type conversions and shows warnings. Unfortunately, these warnings are often switched off, especially when the project contains a great deal of legacy code or uses old libraries. To spare the programmer from looking through hundreds or thousands of such compiler warnings, the analyzer reports only those that may cause incorrect behavior of the code on the 64-bit platform.
The first example.
Our application works with video, and we want to calculate the file size needed to store all the frames kept in memory.
size_t Width, Height, FrameCount;
...
unsigned BufferSizeForWrite = Width * Height * FrameCount *
sizeof(RGBStruct);
Earlier, the total size of the frames in memory could never exceed 4 GB (in practice, 2-3 GB depending on the Windows version). On the 64-bit platform we can store many more frames in memory; let's suppose that their total size is 10 GB. After the result of the expression "Width * Height * FrameCount * sizeof(RGBStruct)" is put into the variable 'BufferSizeForWrite', the high-order bits are truncated, and we end up with an incorrect value.
The correct solution is to change the type of the variable 'BufferSizeForWrite' to 'size_t'.
size_t Width, Height, FrameCount;
...
size_t BufferSizeForWrite = Width * Height * FrameCount *
sizeof(RGBStruct);
The second example.
Saving the result of a pointer subtraction.
char *ptr_1, *ptr_2;
...
int diff = ptr_2 - ptr_1;
If the pointers differ by more than 'INT_MAX' bytes (2 GB), the value is truncated during the assignment. As a result, the variable 'diff' will hold an incorrect value. To store this value, we should use the 'ptrdiff_t' type or another memsize type.
char *ptr_1, *ptr_2;
...
ptrdiff_t diff = ptr_2 - ptr_1;
When you are sure that the code is correct and the implicit type conversion does not cause errors when porting to the 64-bit platform, you may use an explicit type conversion to suppress the error messages on this line. For example:
unsigned BitCount = static_cast<unsigned>(sizeof(RGBStruct) * 8);
If you suspect that the code contains incorrect explicit conversions of memsize types to 32-bit types, which the analyzer does not warn about, you can use the V202 diagnostic.
As said before, the analyzer reports only those type conversions that can cause incorrect behavior of the code on a 64-bit platform. The code given below won't be considered incorrect even though it converts a memsize type to the int type:
int size = sizeof(float);
The analyzer found a possible error inside an arithmetic expression related to an implicit conversion to a memsize type. An overflow error may be caused by a change in the permissible range of values of the variables in the expression.
The first example.
An incorrect comparison expression. Let's examine the code:
size_t n;
unsigned i;
// Infinite loop (n > UINT_MAX).
for (i = 0; i != n; ++i) { ... }
This example shows an error related to the implicit conversion of the 'unsigned' type to the 'size_t' type during the comparison operation.
On the 64-bit platform you can process a larger data size, and the value of the variable 'n' may exceed 'UINT_MAX' (4 GB). As a result, the condition "i != n" will always be true, which causes an infinite loop.
An example of the corrected code:
size_t n;
size_t i;
for (i = 0; i != n; ++i) { ... }
The second example.
char *begin, *end;
int bufLen, bufCount;
...
ptrdiff_t diff = begin - end + bufLen * bufCount;
The implicit conversion of the 'int' type to the 'ptrdiff_t' type often indicates an error. One should note that the conversion takes place not during the "=" operator (the expression "begin - end + bufLen * bufCount" already has the 'ptrdiff_t' type), but inside the expression. According to C++ rules, the subexpression "begin - end" has the 'ptrdiff_t' type, while the right-hand "bufLen * bufCount" has the 'int' type. When porting to the 64-bit platform, the program may begin to process a larger data size, which may result in an overflow when evaluating the subexpression "bufLen * bufCount".
You should change the type of the variables 'bufLen' and 'bufCount' to a memsize type, or use an explicit type conversion, as follows:
char *begin, *end;
int bufLen, bufCount;
...
ptrdiff_t diff = begin - end +
ptrdiff_t(bufLen) * ptrdiff_t(bufCount);
Note that an implicit conversion to a memsize type inside an expression is not always incorrect. Let's examine the following situation:
size_t value;
char c1, c2;
size_t result = value + c1 * c2;
The analyzer does not show an error message here even though the 'int' to 'size_t' conversion takes place, because no overflow is possible when evaluating the subexpression "c1 * c2".
If you suspect that the program may contain errors related to incorrect explicit type conversions in expressions, you may use the V201 diagnostic. Here is an example where an explicit conversion to the 'size_t' type hides an error:
int i;
size_t st;
...
st = size_t(i * i * i) * st;
The analyzer found a possible error inside an arithmetic expression related to an implicit conversion to a memsize type. An overflow error may be caused by a change in the permissible range of values of the variables in the expression. This warning is almost equivalent to V104, with the exception that the implicit type conversion is caused by the '?:' operation.
Let's give an example of an implicit type conversion caused by the '?:' operation:
int i32;
float f = b != 1 ? sizeof(int) : i32;
In the arithmetic expression, the ternary operation '?:' is used, which has three operands: the condition "b != 1"; the second operand "sizeof(int)" of the 'size_t' type; and the third operand "i32" of the 'int' type.
The result of the expression "b != 1 ? sizeof(int) : i32" is a value of the 'size_t' type, which is then converted into a 'float' value. Thus, an implicit type conversion is performed for the third operand of the '?:' operation.
Let's examine an example of the incorrect code:
bool useDefaultVolume;
size_t defaultVolume;
unsigned width, height, depth;
...
size_t volume = useDefaultVolume ?
defaultVolume :
width * height * depth;
Suppose we are developing a computational modeling application that requires a three-dimensional calculation area. The number of calculation elements is determined according to the value of the 'useDefaultVolume' variable and is either assigned by default or computed by multiplying the length, height and depth of the calculation area. On the 32-bit platform, the size of memory that can be allocated cannot exceed 2-3 GB (depending on the Windows version), and as a consequence the result of the expression "width * height * depth" will always be correct. On the 64-bit platform, which makes it possible to deal with a larger memory size, the number of calculation elements may exceed 'UINT_MAX' (4 GB). In this case, an overflow will occur while the expression "width * height * depth" is evaluated, because the result of this expression has the 'unsigned' type.
The code can be corrected by changing the type of the variables 'width', 'height' and 'depth' to a memsize type, as follows:
...
size_t width, height, depth;
...
size_t volume = useDefaultVolume ?
defaultVolume :
width * height * depth;
Or by using an explicit type conversion:
unsigned width, height, depth;
...
size_t volume = useDefaultVolume ?
defaultVolume :
size_t(width) * size_t(height) * size_t(depth);
In addition, we advise reading the description of the similar warning V104, which covers other effects of the implicit conversion to a memsize type.
The analyzer found a possible error related to an implicit conversion of an actual function argument to a memsize type.
The first example.
The program deals with large arrays using the 'CArray' container from the MFC library. On the 64-bit platform, the number of array items may exceed 'INT_MAX' (2 GB), which makes the following code unworkable:
CArray<int, int> myArray;
...
int invalidIndex = 0;
INT_PTR validIndex = 0;
while (validIndex != myArray.GetSize()) {
myArray.SetAt(invalidIndex, 123);
++invalidIndex;
++validIndex;
}
The given code fills all the items of the 'myArray' array with the value 123. It seems absolutely correct, and the compiler won't show any warnings, despite the fact that this code cannot work on the 64-bit platform. The error consists in using the int type for the index variable 'invalidIndex'. When the value of 'invalidIndex' exceeds 'INT_MAX', a signed overflow occurs (formally undefined behavior; in practice the variable wraps around to a negative value). The analyzer diagnoses this error and warns that an implicit conversion of the first argument of the 'SetAt' function to a memsize type (here, 'INT_PTR') occurs. When you see such a warning, you can correct the error by replacing the 'int' type with a more appropriate one.
The given example is significant because it is rather unfair to blame the programmer for this defective code. The reason is that the 'SetAt' function of the 'CArray' class was declared as follows in the previous version of the MFC library:
void SetAt(int nIndex, ARG_TYPE newElement);
And in the new version:
void SetAt(INT_PTR nIndex, ARG_TYPE newElement);
Even the Microsoft developers creating MFC could not foresee all the consequences of using the 'int' type for array indexing, so we can forgive an ordinary developer who has written such code.
Here is the correct variant:
...
INT_PTR invalidIndex = 0;
INT_PTR validIndex = 0;
while (validIndex != myArray.GetSize()) {
myArray.SetAt(invalidIndex, 123);
++invalidIndex;
++validIndex;
}
The second example.
The program determines the necessary size of a data array and then allocates it using the 'malloc' function, as follows:
unsigned GetArraySize();
...
unsigned size = GetArraySize();
void *p = malloc(size);
The analyzer will warn about the line "void *p = malloc(size);". Looking at the definition of the 'malloc' function, we see that its formal argument specifying the size of the allocated memory has the 'size_t' type. But the program uses the variable 'size' of the 'unsigned' type as the actual argument. If your program on the 64-bit platform needs an array of more than 'UINT_MAX' bytes (4 GB), the given code is certainly incorrect, since the 'unsigned' type cannot hold a value greater than 'UINT_MAX'. To correct the program, change the types of the variables and functions used in determining the array size. In this example, replace the 'unsigned' type with one of the memsize types and, if necessary, modify the code of the 'GetArraySize' function.
...
size_t GetArraySize();
...
size_t size = GetArraySize();
void *p = malloc(size);
The analyzer shows warnings about implicit type conversions only if they may cause errors when porting the program to the 64-bit platform. Here is code that contains an implicit type conversion but does not cause errors:
void MyFoo(SSIZE_T index);
...
char c = 'z';
MyFoo(0);
MyFoo(c);
If you are sure that the implicit type conversion of the actual function argument is absolutely correct, you may use an explicit type conversion to suppress the analyzer's warnings, as follows:
typedef size_t TYear;
void MyFoo(TYear year);
int year;
...
MyFoo(static_cast<TYear>(year));
Sometimes an explicit type conversion may hide an error. In this case, you may use the V201 diagnostic.
The analyzer found a possible error related to an implicit conversion of an actual function argument from a memsize type to a 32-bit type.
Let's examine an example of code containing a function that searches for the maximum array item:
float FindMaxItem(float *array, int arraySize) {
float max = -FLT_MAX;
for (int i = 0; i != arraySize; ++i) {
float item = *array++;
if (max < item)
max = item;
}
return max;
}
...
float *beginArray;
float *endArray;
float maxValue = FindMaxItem(beginArray, endArray - beginArray);
This code may work successfully on the 32-bit platform, but it won't be able to process arrays containing more than 'INT_MAX' (2 GB) items on the 64-bit architecture. This limitation is caused by the use of the int type for the 'arraySize' argument. Note that the function code looks absolutely correct not only from the compiler's point of view but also from the analyzer's: there is no type conversion in this function, so the potential problem cannot be found there.
The analyzer will warn about the implicit conversion of a memsize type to a 32-bit type at the invocation of the 'FindMaxItem' function. Let's find out why. According to C++ rules, the result of subtracting two pointers has the 'ptrdiff_t' type. When 'FindMaxItem' is invoked, an implicit conversion of the 'ptrdiff_t' type to the 'int' type occurs, causing the loss of the high-order bits. This may be the reason for incorrect program behavior when processing a large data size.
The correct solution is to replace the 'int' type with the 'ptrdiff_t' type, since it can hold the whole range of values. The corrected code:
float FindMaxItem(float *array, ptrdiff_t arraySize) {
float max = -FLT_MAX;
for (ptrdiff_t i = 0; i != arraySize; ++i) {
float item = *array++;
if (max < item)
max = item;
}
return max;
}
The analyzer tries, as far as possible, to recognize safe type conversions and refrains from reporting them. For example, it won't warn about the 'FindMaxItem' call in the following code:
float Arr[1000];
float maxValue =
FindMaxItem(Arr, sizeof(Arr)/sizeof(float));
When you are sure that the code is correct and the implicit type conversion of the actual function argument does not cause errors, you may use an explicit type conversion to avoid the warnings. An example:
extern int nPenStyle;
extern size_t nWidth;
extern COLORREF crColor;
...
// Call constructor CPen::CPen(int, int, COLORREF)
CPen myPen(nPenStyle, static_cast<int>(nWidth), crColor);
If you suspect that the code contains incorrect explicit conversions of memsize types to 32-bit types, which the analyzer does not warn about, you may use the V202 diagnostic.
The analyzer found a possible error related to indexing large arrays. The error may consist in an incorrect index calculation.
The first example.
extern char *longString;
extern bool *isAlnum;
...
unsigned i = 0;
while (*longString) {
isAlnum[i] = isalnum(*longString++);
++i;
}
The given code is absolutely correct for the 32-bit platform, where it is practically impossible to process arrays larger than 'UINT_MAX' bytes (4 GB). On the 64-bit platform, it is possible to process an array larger than 4 GB, which is sometimes very convenient. The error consists in using a variable of the 'unsigned' type to index the 'isAlnum' array. After the first 'UINT_MAX' items are filled, the variable 'i' overflows and becomes zero. As a result, we begin to overwrite the items at the beginning of the 'isAlnum' array, while some items are left unassigned.
The correction is to change the type of the variable 'i' to a memsize type:
...
size_t i = 0;
while (*longString)
isAlnum[i++] = isalnum(*longString++);
The second example.
class Region {
float *array;
int Width, Height, Depth;
float GetCell(int x, int y, int z) const;
...
};
float Region::GetCell(int x, int y, int z) const {
return array[x + y * Width + z * Width * Height];
}
For computational modeling programs, main memory is an important resource, and the ability to use more than 4 GB of memory on the 64-bit architecture greatly increases computational possibilities. Such programs often use one-dimensional arrays that are then treated as three-dimensional ones. Functions similar to 'GetCell' provide access to the necessary items of the calculation area. But the given code can correctly deal only with arrays containing no more than 'INT_MAX' (2 GB) items. The reason is the use of 32-bit 'int' types in calculating the item's index. If the number of items in the 'array' exceeds 'INT_MAX' (2 GB), an overflow occurs and the index value is determined incorrectly. Programmers often make a mistake trying to correct the code in the following way:
float Region::GetCell(int x, int y, int z) const {
return array[static_cast<ptrdiff_t>(x) + y * Width +
z * Width * Height];
}
They know that, according to the C++ rules, the expression calculating the index will have the 'ptrdiff_t' type and hope thereby to avoid the overflow. Unfortunately, the overflow may still occur inside the subexpressions "y * Width" and "z * Width * Height", because they are evaluated using the 'int' type.
If you want to correct the code without changing the types of the variables in the expression, you should explicitly convert each variable to a memsize type:
float Region::GetCell(int x, int y, int z) const {
return array[ptrdiff_t(x) +
ptrdiff_t(y) * ptrdiff_t(Width) +
ptrdiff_t(z) * ptrdiff_t(Width) *
ptrdiff_t(Height)];
}
Another solution is to change the variables' types to a memsize type:
class Region {
float *array;
ptrdiff_t Width, Height, Depth;
float GetCell(ptrdiff_t x, ptrdiff_t y, ptrdiff_t z) const;
...
};
float Region::GetCell(ptrdiff_t x, ptrdiff_t y, ptrdiff_t z) const
{
return array[x + y * Width + z * Width * Height];
}
If you index with expressions whose type differs from a memsize type but are sure the code is correct, you may use an explicit type conversion to suppress the analyzer's warnings, as follows:
bool *Seconds;
int min, sec;
...
bool flag = Seconds[static_cast<size_t>(min * 60 + sec)];
If you suspect that the program may contain errors related to incorrect explicit type conversions in expressions, you may use the V201 diagnostic.
The analyzer tries, as far as possible, to recognize when using a non-memsize type as an array index is safe, and refrains from displaying warnings in such cases. As a result, the analyzer's behavior can sometimes seem strange. In such situations, do not rush and try to analyze the situation. Let's consider the following code:
char Arr[] = { '0', '1', '2', '3', '4' };
char *p = Arr + 2;
cout << p[0u + 1] << endl;
cout << p[0u - 1] << endl; //V108
This code works correctly in the 32-bit mode and displays the numbers 3 and 1. While checking this code, the analyzer issues a warning only on the line with the expression "p[0u - 1]", and it is absolutely right: if you compile and launch this example in the 64-bit mode, the value 3 will be displayed, and then the program will crash.
The error is that the indexing in "p[0u - 1]" is incorrect on a 64-bit system, and this is what the analyzer warns about. According to the C++ rules, the expression "0u - 1" has the unsigned type and equals 0xFFFFFFFFu. On a 32-bit architecture, adding this number to an index is the same as subtracting 1. On a 64-bit system, however, the value 0xFFFFFFFFu is actually added to the index, and memory is addressed outside the array.
Of course, indexing arrays with types such as 'int' and 'unsigned' is often safe, and in those cases the analyzer's warnings may seem inappropriate. But keep in mind that such code may still become unsafe if it is later adapted to process a different data set. Code with the 'int' and 'unsigned' types may also turn out to be less efficient than is possible on a 64-bit architecture.
If you are sure the indexing is correct, you may use "Suppression of false alarms" or filters, or apply an explicit type conversion in the code:
for (int i = 0; i != n; ++i)
Array[static_cast<ptrdiff_t>(i)] = 0;
Additional materials on this topic:
The analyzer found a possible error related to the implicit conversion of the return value type. The error may consist in the incorrect determination of the return value.
Let's examine an example.
extern int Width, Height, Depth;
size_t GetIndex(int x, int y, int z) {
return x + y * Width + z * Width * Height;
}
...
array[GetIndex(x, y, z)] = 0.0f;
If the code deals with large arrays (more than 'INT_MAX' items), it will behave incorrectly, and we will address items of 'array' other than the ones we intend. Still, the analyzer won't issue a warning on the line "array[GetIndex(x, y, z)] = 0.0f;", because it is absolutely correct. Instead, the analyzer reports a possible error inside the function, and rightly so: the error is located exactly there and is related to an arithmetic overflow. Although we return a value of the 'size_t' type, the expression "x + y * Width + z * Width * Height" is evaluated using the 'int' type.
To correct the error, explicitly convert all the variables in the expression to memsize types:
extern int Width, Height, Depth;
size_t GetIndex(int x, int y, int z) {
return (size_t)(x) +
(size_t)(y) * (size_t)(Width) +
(size_t)(z) * (size_t)(Width) * (size_t)(Height);
}
Another way to correct the error is to use memsize types for the variables in the expression:
extern size_t Width, Height, Depth;
size_t GetIndex(size_t x, size_t y, size_t z) {
return x + y * Width + z * Width * Height;
}
When you are sure that the code is correct and the implicit type conversion does not cause errors when porting to the 64-bit architecture, you may use an explicit type conversion to suppress the warnings on this line. For example:
DWORD_PTR Calc(unsigned a) {
return (DWORD_PTR)(10 * a);
}
If you suspect that the code contains incorrect explicit conversions to memsize types about which the analyzer does not warn, you may use the V201 diagnostic.
Additional materials on this topic:
The analyzer found a possible error related to the implicit conversion of the return value. The error consists in dropping the high bits of a 64-bit type, which causes the loss of the value.
Let's examine an example.
extern char *begin, *end;
unsigned GetSize() {
return end - begin;
}
The result of the "end - begin" expression has the 'ptrdiff_t' type. But since the function returns the 'unsigned' type, an implicit type conversion occurs that drops the high bits of the result. Thus, if the pointers 'begin' and 'end' refer to the beginning and the end of an array larger than 'UINT_MAX' (4 GB), the function will return an incorrect value.
The fix consists in modifying the program so that array sizes are stored and passed around in memsize types. The correct code of the 'GetSize' function then looks as follows:
extern char *begin, *end;
size_t GetSize() {
return end - begin;
}
In some cases, the analyzer won't warn about a type conversion that is obviously correct. For example, it stays silent on the following code: although the result of the sizeof() operator has the 'size_t' type, it can be safely placed into the 'unsigned' type:
unsigned GetSize() {
return sizeof(double);
}
When you are sure that the code is correct and the implicit type conversion does not cause errors when porting to the 64-bit architecture, you may use an explicit type conversion to suppress the warnings. For example:
unsigned GetBitCount() {
return static_cast<unsigned>(sizeof(TypeRGBA) * 8);
}
If you suspect that the code contains incorrect explicit conversions of return value types about which the analyzer does not warn, you may use the V202 diagnostic.
Additional materials on this topic:
The analyzer found a possible error related to passing an actual argument of a memsize type to a function with a variable number of arguments. The possible error consists in the requirements imposed on such a call changing on the 64-bit system.
Let's examine an example.
const char *invalidFormat = "%u";
size_t value = SIZE_MAX;
printf(invalidFormat, value);
This code does not take into account that the 'size_t' type does not coincide with the 'unsigned' type on the 64-bit platform. It will print an incorrect result whenever "value > UINT_MAX". The analyzer warns that a memsize-type value is used as an actual argument, which means you should check the 'invalidFormat' string defining the print format. The correct variant may look as follows:
const char *validFormat = "%Iu";
size_t value = SIZE_MAX;
printf(validFormat, value);
In the code of a real application, this error can occur in the following form, e.g.:
wsprintf(szDebugMessage,
_T("%s location %08x caused an access violation.\r\n"),
readwrite,
Exception->m_pAddr);
The second example.
char buf[9];
sprintf(buf, "%p", pointer);
The author of this careless code did not take into account that the pointer size would eventually exceed 32 bits. As a result, this code causes a buffer overflow on the 64-bit architecture. After checking the code that triggers the V111 warning, you may choose one of two ways: increase the buffer size or rewrite the code using safe constructs.
char buf[sizeof(pointer) * 2 + 1];
sprintf(buf, "%p", pointer);
// --- or ---
std::stringstream s;
s << pointer;
The third example.
char buf[9];
sprintf_s(buf, sizeof(buf), "%p", pointer);
While examining the second example, you could rightly point out that functions with security enhancements should be used to prevent the overflow. In that case the buffer overflow won't occur, but unfortunately the correct result won't be printed either.
If the argument types do not change their bit width, the code is considered correct and no warnings are shown. An example:
printf("%d", 10*5);
CString str;
size_t n = sizeof(float);
str.Format(StrFormat, static_cast<int>(n));
Unfortunately, while diagnosing this kind of error, we often cannot distinguish correct code from incorrect code. This warning will be shown for many calls of variadic functions even when the call is absolutely correct; this reflects the inherent danger of such C++ constructs. The most frequent problems arise with variants of the following functions: 'printf', 'scanf', 'CString::Format'. The generally accepted practice is to abandon them in favor of safe programming methods. For example, you may replace 'printf' with 'cout' and 'sprintf' with 'boost::format' or 'std::stringstream'.
Note. Eliminating false positives when working with formatted output functions
The V111 diagnostic is very simple. When the analyzer has no information about a variadic function, it warns about every case of passing a memsize-type variable to that function. When it does have the information, the more accurate V576 diagnostic takes over, and V111 does not issue a warning. When V576 is disabled, V111 works in any case.
Therefore, you can reduce the number of false positives by providing the analyzer with information about the format functions. The analyzer is already familiar with such typical functions as 'printf', 'sprintf', etc., so it is user-implemented functions that you want to annotate. See the description of the V576 diagnostic for details about annotating functions.
Consider the following example. You may ask: "Why doesn't the analyzer output a V111 warning in case N1, but does so in case N2?"
void OurLoggerFn(wchar_t const* const _Format, ...)
{
....
}
void Foo(size_t length)
{
wprintf( L"%Iu", length ); // N1
OurLoggerFn( L"%Iu", length ); // N2
}
The reason is that the analyzer knows how standard function 'wprintf' works, while it knows nothing about 'OurLoggerFn', so it prefers to be overcautious and issues a warning about passing a memsize-type variable ('size_t' in this case) as an actual argument to a variadic function.
To eliminate the V111 warning, annotate the 'OurLoggerFn' function as follows:
//+V576, function:OurLoggerFn, format_arg:1, ellipsis_arg:2
void OurLoggerFn(wchar_t const* const _Format, ...)
.....
Additional materials on this topic:
The analyzer found the use of a dangerous magic number. The possible error consists in using a numeric literal as a special value or as the size of a memsize type.
Let's examine the first example.
size_t ArraySize = N * 4;
size_t *Array = (size_t *)malloc(ArraySize);
While writing the program, the programmer relied on 'size_t' always being 4 bytes in size and expressed the array size as "N * 4". This code does not take into account that 'size_t' occupies 8 bytes on the 64-bit system, so less memory is allocated than necessary. The fix is to use the 'sizeof' operator instead of the constant 4.
size_t ArraySize = N * sizeof(size_t);
size_t *Array = (size_t *)malloc(ArraySize);
The second example.
size_t n = static_cast<size_t>(-1);
if (n == 0xffffffffu) { ... }
Sometimes the value "-1" is used as an error code or another special marker and is written as "0xffffffff". On the 64-bit platform this comparison is incorrect, and one should explicitly use the value "-1":
size_t n = static_cast<size_t>(-1);
if (n == static_cast<size_t>(-1)) { ... }
The analyzer diagnoses magic numbers that may affect the operability of an application when it is ported to the 64-bit system.
You should study the code thoroughly for magic constants and replace them with safe constants and expressions. For this purpose, you may use the 'sizeof()' operator, special values from <limits.h>, <inttypes.h>, etc.
In some cases magic constants are not considered unsafe. For example, there will be no warning on this code:
float Color[4];
Additional materials on this topic:
The analyzer found a possible error related to the implicit conversion of a memsize type to the 'double' type or vice versa. The possible error consists in the impossibility of storing the whole value range of a memsize type in a variable of the 'double' type.
Let's study an example.
SIZE_T size = SIZE_MAX;
double tmp = size;
size = tmp; // x86: size == SIZE_MAX
// x64: size != SIZE_MAX
The 'double' type is 64 bits in size and conforms to the IEEE-754 standard on both 32-bit and 64-bit systems. Some programmers use the 'double' type to store and work with integer values.
The example above may be justified on a 32-bit system, since the 'double' type has 52 significand bits and can store a 32-bit integer value without loss. But when trying to store a 64-bit integer in a variable of the 'double' type, the exact value can be lost (see picture).
If an approximate value suits your program's algorithm, no corrections are needed. But we would like to warn you about the change in behavior of code like this on 64-bit systems. In any case, mixing integer arithmetic with floating-point arithmetic is not recommended.
Additional materials on this topic:
The analyzer found a possible error related to a dangerous explicit conversion of a pointer of one type to a pointer of another. The error may consist in incorrect handling of the objects to which the pointers refer.
Let's examine an example. It contains an explicit conversion of an 'int' pointer to a 'size_t' pointer.
int array[4] = { 1, 2, 3, 4 };
size_t *sizetPtr = (size_t *)(array);
cout << sizetPtr[1] << endl;
The program's output differs between the 32-bit and 64-bit variants. On the 32-bit system, the access to the array items is correct, because the sizes of the 'size_t' and 'int' types coincide, and we see the output "2". On the 64-bit system, the output is "17179869187", because it is this value that resides in the item 'sizetPtr[1]'.
The fix consists in avoiding dangerous type conversions by modernizing the program. Another option is to create a new array and copy the values of the original array into it.
Of course, not all explicit conversions of pointer types are dangerous. In the following example, the result does not depend on the platform, because the 'enum' and 'int' types have the same size on both 32-bit and 64-bit systems, so the analyzer won't issue any warnings on this code.
int array[4] = { 1, 2, 3, 4 };
enum ENumbers { ZERO, ONE, TWO, THREE, FOUR };
ENumbers *enumPtr = (ENumbers *)(array);
cout << enumPtr[1] << endl;
Additional materials on this topic:
The analyzer found a possible error related to the use of a memsize type for throwing an exception. The error may consist in incorrect exception handling.
Let's examine an example of the code which contains 'throw' and 'catch' operators.
char *ptr1, *ptr2;
...
try {
throw ptr2 - ptr1;
}
catch(int) {
Foo();
}
On a 64-bit system, the exception handler will not work, and the 'Foo()' function will not be called. This is because the "ptr2 - ptr1" expression has the 'ptrdiff_t' type, which on a 64-bit system is not equivalent to the 'int' type.
The fix consists in using the correct type to catch the exception, in this case 'ptrdiff_t', as shown below:
try {
throw ptr2 - ptr1;
}
catch(ptrdiff_t) {
Foo();
}
A better correction is to abandon this programming practice altogether. We recommend using dedicated classes to convey error information.
Additional materials on this topic:
The analyzer found a possible error related to the use of a memsize type for catching an exception. The error may consist in incorrect exception handling.
Let's examine an example of the code which contains 'throw' and 'catch' operators.
try {
try {
throw UINT64(-1);
}
catch(size_t) {
cout << "x64 portability issues" << endl;
}
}
catch(UINT64) {
cout << "OK" << endl;
}
The output on the 32-bit system: OK
The output on the 64-bit system: x64 portability issues
This change in behavior is caused by the fact that on the 64-bit system the 'size_t' type is equivalent to 'UINT64', so the inner handler catches the exception.
The fix consists in changing the code to achieve the intended logic.
A better correction is to abandon this programming practice altogether. We recommend using dedicated classes to convey error information.
Additional materials on this topic:
The analyzer found a possible error related to the use of a memsize type inside a union. The error may occur when working with such unions without taking into account the change in size of memsize types on the 64-bit system.
One should be attentive to the unions which contain pointers and other members of memsize type.
The first example.
Sometimes one needs to work with a pointer as with an integer. The code in the example is convenient because it avoids explicit type conversions when treating the pointer as a number.
union PtrNumUnion {
char *m_p;
unsigned m_n;
} u;
...
u.m_p = str;
u.m_n += delta;
This code is correct on 32-bit systems and incorrect on 64-bit ones: by changing the 'm_n' member on the 64-bit system, we modify only a part of the 'm_p' pointer. One should use a type whose size matches the pointer size, as follows:
union PtrNumUnion {
char *m_p;
size_t m_n; //type fixed
} u;
The second example.
Another frequent use of a union is representing one member as a set of smaller ones. For example, we may need to split a 'size_t' value into bytes to implement a table-based algorithm that counts zero bits in a byte.
union SizetToBytesUnion {
size_t value;
struct {
unsigned char b0, b1, b2, b3;
} bytes;
};
SizetToBytesUnion u;
u.value = value;
size_t zeroBitsN = TranslateTable[u.bytes.b0] +
TranslateTable[u.bytes.b1] +
TranslateTable[u.bytes.b2] +
TranslateTable[u.bytes.b3];
A fundamental algorithmic error is made here, based on the assumption that the 'size_t' type consists of 4 bytes. Automatic detection of algorithmic errors is not possible at the current stage of static analysis development, but Viva64 can find all the unions that contain memsize types. By looking through the list of such potentially dangerous unions, a user can find logical errors. Having found the union from this example, a user can detect the algorithmic error and rewrite the code in the following way:
union SizetToBytesUnion {
size_t value;
unsigned char bytes[sizeof(value)];
};
SizetToBytesUnion u;
u.value = value;
size_t zeroBitsN = 0;
for (size_t i = 0; i != sizeof(u.bytes); ++i)
zeroBitsN += TranslateTable[u.bytes[i]];
This warning message is similar to the warning V122.
Additional materials on this topic:
The analyzer detected a potential error related to a dangerous expression used as an actual argument of the malloc function. The error may lie in incorrect assumptions about type sizes hard-coded as numeric constants.
The analyzer considers suspicious those expressions that contain constant literals that are multiples of four but lack the sizeof() operator.
Example 1.
Incorrect code allocating memory for a 3x3 matrix of size_t items may look as follows:
size_t *pMatrix = (size_t *)malloc(36); // V118
Although this code could work well on a 32-bit system, using the number 36 is incorrect: when compiling the 64-bit version, 72 bytes must be allocated. You may use the sizeof() operator to correct this error:
size_t *pMatrix = (size_t *)malloc(9 * sizeof(size_t));
Example 2.
The following code, based on the assumption that the Item structure is 12 bytes in size, is also incorrect for a 64-bit system:
struct Item {
int m_a;
int m_b;
Item *m_pParent;
};
Item *items = (Item *)malloc(GetArraySize() * 12); // V118
The correction of this error also consists in using the sizeof() operator to calculate the size of the structure correctly:
Item *items = (Item *)malloc(GetArraySize() * sizeof(Item));
These errors are simple and easy to correct. They are nevertheless dangerous and difficult to find in large applications, which is why their diagnosis is implemented as a separate rule.
The presence of a constant in an expression passed to the malloc() function does not necessarily mean that the V118 warning will be shown. If the sizeof() operator participates in the expression, the construct is considered safe. Here is an example of code the analyzer considers safe:
int *items = (int *)malloc(sizeof(int) * 12);
Additional materials on this topic:
The analyzer detected an unsafe arithmetic expression containing several sizeof() operators. Such expressions can potentially contain errors related to incorrect calculation of structure sizes that ignores field alignment.
Example:
struct MyBigStruct {
unsigned m_numberOfPointers;
void *m_Pointers[1];
};
size_t n2 = 1000;
void *p;
p = malloc(sizeof(unsigned) + n2 * sizeof(void *));
To calculate the size of a structure that will hold 1000 pointers, an arithmetic expression is used that looks correct at first sight. The sizes of the base types are obtained with sizeof() operators, which is good but not sufficient for correctly calculating the necessary amount of memory: field alignment must also be taken into account.
This example is correct in the 32-bit mode, because the sizes of a pointer and the unsigned type coincide: both are 4 bytes, and both are aligned on a 4-byte boundary, so the necessary memory size is calculated correctly.
In 64-bit code, the size of a pointer is 8 bytes, and pointers are aligned on an 8-byte boundary as well. As a result, 4 additional padding bytes are placed after the m_numberOfPointers member to align the pointers on an 8-byte boundary.
To calculate the correct size, you should use the offsetof macro:
p = malloc(offsetof(MyBigStruct, m_Pointers) +
n2 * sizeof(void *));
In many cases using several sizeof() operators in one expression is correct and the analyzer ignores such constructions. Here is an example of safe expressions with several sizeof operators:
int MyArray[] = { 1, 2, 3 };
size_t MyArraySize =
sizeof(MyArray) / sizeof(MyArray[0]);
assert(sizeof(unsigned) < sizeof(size_t));
size_t strLen = sizeof(String) - sizeof(TCHAR);
Additional materials on this topic:
The analyzer detected a potential error of working with classes that contain operator[].
Classes with an overloaded operator[] are usually a kind of array, where the operator[] argument is the index of the item being accessed. If operator[] has a formal argument of a 32-bit type but a memsize type is used as the actual argument, it might indicate an error. Let us consider an example leading to the V120 warning:
class MyArray {
int m_arr[10];
public:
int &operator[](unsigned i) { return m_arr[i]; }
} Object;
size_t k = 1;
Object[k] = 44; //V120
This example does not contain an error but might indicate an architecture shortcoming. You should either work with MyArray using 32-bit indexes or modify operator[] so that it takes an argument of the size_t type. The latter is preferable, because memsize types not only make a program safer but sometimes allow the compiler to build more efficient code.
The related diagnostic warnings are V108 and V302.
The analyzer detected a potential error related to calling the new operator. A value of a non-memsize type is passed to the "new" operator as an argument. The new operator takes a value of the size_t type, and passing a 32-bit actual argument may indicate a potential overflow when calculating the amount of memory being allocated.
Here is an example:
unsigned a = 5;
unsigned b = 1024;
unsigned c = 1024;
unsigned d = 1024;
char *ptr = new char[a*b*c*d]; //V121
Here you may see an overflow occurring when calculating the expression "a*b*c*d". As a result, the program allocates less memory than it should. To correct the code, use the type size_t:
size_t a = 5;
size_t b = 1024;
size_t c = 1024;
size_t d = 1024;
char *ptr = new char[a*b*c*d]; //Ok
The error will not be diagnosed if the value of the argument is defined as a safe 32-bit constant value. Here is an example of safe code:
char *ptr = new char[100];
const int size = 3*3;
char *p2 = new char[size];
This warning message is similar to the warning V106.
Additional materials on this topic:
Sometimes you might need to find all the fields in the structures that have a memsize-type. You can find such fields using the V122 diagnostic rule.
The need to review all memsize fields might arise when porting a program that serializes structures, for example, into a file. Consider an example:
struct Header
{
unsigned m_version;
size_t m_bodyLen;
};
...
size_t size = fwrite(&header, 1, sizeof(header), file);
...
This code writes a different number of bytes into the file depending on whether it is compiled in the Win32 or Win64 mode. This might break file format compatibility or cause other errors.
The task of automating the detection of such errors is almost impossible to solve. However, if there are some reasons to suppose that the code might contain such errors, developers can once check all the structures that participate in serialization. It is for this purpose that you may need a check with the V122 rule. By default it is disabled since it generates false warnings in more than 99% of cases.
In the example above, the V122 message will be produced on the line "size_t m_bodyLen;". To correct this code, you may use types of fixed size:
struct Header
{
My_UInt32 m_version;
My_UInt32 m_bodyLen;
};
...
size_t size = fwrite(&header, 1, sizeof(header), file);
...
Let's consider other examples where the V122 message will be generated:
class X
{
int i;
DWORD_PTR a; //V122
DWORD_PTR b[3]; //V122
float c[3][4];
float *ptr; //V122
};
V117 is a related diagnostic message.
Note. If you are sure that structures containing pointers will never serialize, you may use this comment:
//-V122_NOPTR
It will suppress all warnings related to pointers.
This comment should be added into the header file included into all the other files. For example, such is the "stdafx.h" file. If you add this comment into a "*.cpp" file, it will affect only this particular file.
The analyzer found a potential error related to the operation of memory allocation. When calculating the amount of memory to be allocated, the sizeof(X) operator is used. The result returned by the memory allocation function is converted to a different type, "(Y *)", instead of "(X *)". It may indicate allocation of insufficient or excessive amount of memory.
Consider the first example:
int **ArrayOfPointers = (int **)malloc(n * sizeof(int));
In a 64-bit program, this misprint causes half as much memory to be allocated as necessary. In a 32-bit program, the sizes of the "int" type and "pointer to int" coincide, so the program works correctly despite the misprint.
This is the correct version of the code:
int **ArrayOfPointers = (int **)malloc(n * sizeof(int *));
Consider another example where more memory is allocated than needed:
unsigned *p = (unsigned *)malloc(len * sizeof(size_t));
A program with such code will most probably work correctly both in the 32-bit and 64-bit versions. But in the 64-bit version, it will allocate more memory than it needs. This is the correct code:
unsigned *p = (unsigned *)malloc(len * sizeof(unsigned));
In some cases the analyzer does not generate a warning although the types X and Y do not coincide. Here is an example of such correct code:
BYTE *simpleBuf = (BYTE *)malloc(n * sizeof(float));
The analyzer detected a potential error: the size of data being written or read is defined by a constant.
When the code is compiled in the 64-bit mode, the sizes of some data and their alignment boundaries will change. The sizes of base types and their alignment boundaries are shown in the picture:
The analyzer examines code fragments where the size of data being written or read is defined explicitly. The programmer must review these fragments. Here is a code sample:
size_t n = fread(buf, 1, 40, f_in);
The constant 40 may be incorrect from the viewpoint of the 64-bit system. Perhaps you should write it as:
size_t n = fread(buf, 1, 10 * sizeof(size_t), f_in);
The analyzer detected a potential error: the 64-bit code contains definitions of reserved types that declare them as 32-bit ones.
For example:
typedef unsigned size_t;
typedef __int32 INT_PTR;
Such type definitions may cause various errors since these types have different sizes in different parts of the program and libraries. For instance, the size_t type is defined in the stddef.h header file for the C language and in the cstddef file for the C++ language.
References:
This diagnostic message lets you find all the 'long' types used in a program.
Of course, presence of the 'long' type in a program is not an error in itself. But you may need to review all the fragments of the program text where this type is used when you create portable 64-bit code that must work well in Windows and Linux.
Windows and Linux use different data models for the 64-bit architecture. A data model means correlations of sizes of base data types such as int, float, pointer, etc. Windows uses the LLP64 data model while Linux uses the LP64 data model. In these models, the sizes of the 'long' type are different.
In Windows (LLP64), the size of the 'long' type is 4 bytes.
In Linux (LP64), the size of the 'long' type is 8 bytes.
The difference in the 'long' type's size may make file formats incompatible or cause errors when developing code that runs on both Linux and Windows. So, if you wish, you may use PVS-Studio to review all the code fragments where the 'long' type is used.
References:
The analyzer detected a potential error: a 32-bit variable might overflow in a long loop.
Of course, the analyzer cannot find all the possible cases of variables overflowing in loops, but it will help you find some incorrect constructs.
For example:
int count = 0;
for (size_t i = 0; i != N; i++)
{
if ((A[i] & MASK) != 0)
count++;
}
This code works well in a 32-bit program. The variable of the 'int' type is enough to count the number of some items in the array. But in a 64-bit program the number of these items may exceed INT_MAX and an overflow of the 'count' variable will occur. This is what the analyzer warns you about by generating the V127 message. This is the correct code:
size_t count = 0;
for (size_t i = 0; i != N; i++)
{
if ((A[i] & MASK) != 0)
count++;
}
The analyzer also performs several additional checks to reduce the number of false positives. For instance, the V127 warning is not generated for short loops. Here is a sample of code the analyzer considers safe:
int count = 0;
for (size_t i = 0; i < 100; i++)
{
if ((A[i] & MASK) != 0)
count++;
}
The analyzer has detected a potential error related to data incompatibility between the 32-bit and 64-bit versions of an application, when memsize-variables are written to or read from a stream. The error is this: data written to a binary file by the 32-bit version of the program will be read incorrectly by the 64-bit one.
For example:
std::vector<int> v;
....
ofstream os("myproject.dat", ios::binary);
....
os << v.size();
The 'size()' function returns a value of the size_t type whose size is different in 32-bit and 64-bit applications. Consequently, different numbers of bytes will be written to the file.
There exist many ways to avoid the data incompatibility issue. The simplest and crudest one is to strictly define the size of types being written and read. For example:
std::vector<int> v;
....
ofstream os("myproject.dat", ios::binary);
....
os << static_cast<__int64>(v.size());
A strictly defined cast to 64-bit types cannot be called a nice solution, of course. The reason is that this method won't let the program read data written by the old 32-bit program version. On the other hand, if data are defined to be read and written as 32-bit values, we face another problem: the 64-bit program version won't be able to write information about arrays consisting of more than 2^32 items. This may be a disappointing limitation, as 64-bit software is usually created to handle huge data arrays.
A way out can be found through introducing a notion of the version of saved data. For example, 32-bit applications can open files created by the 32-bit version of your program, while 64-bit applications can handle data generated both by the 32-bit and 64-bit versions.
One more way to solve the compatibility problem is to store data in the text format or the XML format.
Note that this compatibility issue is irrelevant in many programs. If your application doesn't create projects and other files to be opened on other computers, you may turn off the V128 diagnostic.
You also shouldn't worry if the stream is used to print values on the screen. PVS-Studio tries to detect these situations and avoid generating the message. False positives are, however, still possible. If you get them, use one of the false positive suppression mechanisms described in the documentation.
Additional features
At users' request, we added the ability to manually specify functions that save or load data. Whenever a memsize-type value is passed to one of these functions somewhere in the code, that code is considered dangerous.
The annotation format is as follows: just above the function prototype (or near its implementation, or in a standard header file), the user should add a special comment. Let us start with a usage example:
//+V128, function:write, non_memsize:2
void write(string name, char);
void write(string name, int32);
void write(string name, int64);
foo()
{
write("zz", array.size()); // warning V128
}
Format:
Warnings triggered by user-annotated functions always have the first severity level.
Finally, here is a full usage example:
// Warns when in method C of class B
// from A namespace memsize-type value
// is put as a second or third argument.
//+V128,namespace:A,class:B,function:C,non_memsize:3,non_memsize:2
This warning informs you about an explicit conversion of a 32-bit integer type to a memsize type, which may hide one of the following errors: V101, V102, V104, V105, V106, V108, V109. Refer to the descriptions of those warnings to find out why the V201 diagnostic message is generated.
Previously, the V201 warning also covered conversions of 32-bit integer types to pointers. Such conversions are rather dangerous, so we singled them out into a separate diagnostic rule, V204.
Keep in mind that most warnings of this type are likely to be generated on correct code. Here are some examples of correct and incorrect code that trigger this warning.
Examples of incorrect code:
int i;
ptrdiff_t n;
...
for (i = 0; (ptrdiff_t)(i) != n; ++i) { //V201
...
}
unsigned width, height, depth;
...
size_t arraySize = size_t(width * height * depth); //V201
Examples of correct code:
const size_t seconds = static_cast<size_t>(60 * 60); //V201
unsigned *array;
...
size_t sum = 0;
for (size_t i = 0; i != n; i++) {
sum += static_cast<size_t>(array[i] / 4); //V201
}
unsigned width, height, depth;
...
size_t arraySize =
size_t(width) * size_t(height) * size_t(depth); //V201
This warning informs you about an explicit conversion of a memsize integer type to a 32-bit type, which may hide one of the following errors: V103, V107, V110. Refer to the descriptions of those warnings to find out why the V202 warning message is generated.
Previously, the V202 warning also covered conversions of pointers to 32-bit integer types. Such conversions are rather dangerous, so we singled them out into a separate rule, V205.
Keep in mind that most warnings of this type are likely to be generated on correct code. Here are some examples of correct and incorrect code that trigger this warning.
Examples of incorrect code:
size_t n;
...
for (unsigned i = 0; i != (unsigned)n; ++i) { //V202
...
}
UINT_PTR width, height, depth;
...
UINT arraySize = UINT(width * height * depth); //V202
Examples of correct code:
const unsigned bits =
unsigned(sizeof(object) * 8); //V202
extern size_t nPane;
extern HICON hIcon;
BOOL result =
SetIcon(static_cast<int>(nPane), hIcon); //V202
The analyzer has found a possible error related to the explicit conversion of a memsize type to the 'double' type or vice versa. The possible error consists in the inability of the 'double' type to store the whole range of memsize-type values.
This error is completely similar to the V113 error. The difference is that an explicit type conversion is used, as in the following example:
SIZE_T size = SIZE_MAX;
double tmp = static_cast<double>(size);
size = static_cast<SIZE_T>(tmp); // x86: size == SIZE_MAX
                                 // x64: size != SIZE_MAX
To study this kind of error, see the description of the V113 diagnostic.
This warning informs you about an explicit conversion of a 32-bit integer type to a pointer type. We previously used the V201 diagnostic rule to diagnose this situation. But an explicit conversion of the 'int' type to a pointer is much more dangerous than a conversion of 'int' to 'intptr_t'. That is why we created a separate rule to search for explicit type conversions when handling pointers.
Here is a sample of incorrect code.
int n;
float *ptr;
...
ptr = (float *)(n);
The 'int' type's size is 4 bytes in a 64-bit program, so it cannot store a pointer whose size is 8 bytes. A type conversion like the one in the sample above usually signals an error.
What is very unpleasant about such errors is that they can hide for a long time before you reveal them. A program might store pointers in 32-bit variables and work correctly for some time as long as all the objects created in the program are located in low-order addresses of memory.
If you need to store a pointer in an integer variable for some reason, you'd better use memsize-types. For instance: size_t, ptrdiff_t, intptr_t, uintptr_t.
This is the correct code:
intptr_t n;
float *ptr;
...
ptr = (float *)(n);
However, there is a specific case when you may store a pointer in 32-bit types. We are speaking about handles, which are used in Windows to work with various system objects. Here are examples of such types: HANDLE, HWND, HMENU, HPALETTE, HBITMAP, etc. Actually, these types are pointers. For instance, HANDLE is defined in header files as "typedef void *HANDLE;".
Although handles are 64-bit pointers, only the least significant 32 bits are used in them for better compatibility (for example, to enable 32-bit and 64-bit processes to interact with each other). For details, see "Microsoft Interface Definition Language (MIDL): 64-Bit Porting Guide" (USER and GDI handles are sign extended 32b values).
Such pointers can be stored in 32-bit data types (for instance, int, DWORD). Special functions are used to cast such pointers to 32-bit types and back:
void * Handle64ToHandle( const void * POINTER_64 h )
void * POINTER_64 HandleToHandle64( const void *h )
long HandleToLong ( const void *h )
unsigned long HandleToUlong ( const void *h )
void * IntToPtr ( const int i )
void * LongToHandle ( const long h )
void * LongToPtr ( const long l )
void * Ptr64ToPtr ( const void * POINTER_64 p )
int PtrToInt ( const void *p )
long PtrToLong ( const void *p )
void * POINTER_64 PtrToPtr64 ( const void *p )
short PtrToShort ( const void *p )
unsigned int PtrToUint ( const void *p )
unsigned long PtrToUlong ( const void *p )
unsigned short PtrToUshort ( const void *p )
void * UIntToPtr ( const unsigned int ui )
void * ULongToPtr ( const unsigned long ul )
This warning informs you about an explicit conversion of a pointer type to a 32-bit integer type. We previously used the V202 diagnostic rule to diagnose this situation. But an explicit conversion of a pointer to the 'int' type is much more dangerous than a conversion of 'intptr_t' to 'int'. That is why we created a separate rule to search for explicit type conversions when handling pointers.
Here is a sample of incorrect code.
int n;
float *ptr;
...
n = (int)ptr;
The 'int' type's size is 4 bytes in a 64-bit program, so it cannot store a pointer whose size is 8 bytes. A type conversion like the one in the sample above usually signals an error.
What is very unpleasant about such errors is that they can hide for a long time before you reveal them. A program might store pointers in 32-bit variables and work correctly for some time as long as all the objects created in the program are located in low-order addresses of memory.
If you need to store a pointer in an integer variable for some reason, you'd better use memsize-types. For instance: size_t, ptrdiff_t, intptr_t, uintptr_t.
This is the correct code:
intptr_t n;
float *ptr;
...
n = (intptr_t)ptr;
However, there is a specific case when you may store a pointer in 32-bit types. We are speaking about handles, which are used in Windows to work with various system objects. Here are examples of such types: HANDLE, HWND, HMENU, HPALETTE, HBITMAP, etc. Actually, these types are pointers. For instance, HANDLE is defined in header files as "typedef void *HANDLE;".
Although handles are 64-bit pointers, only the least significant 32 bits are used in them for better compatibility (for example, to enable 32-bit and 64-bit processes to interact with each other). For details, see "Microsoft Interface Definition Language (MIDL): 64-Bit Porting Guide" (USER and GDI handles are sign extended 32b values).
Such pointers can be stored in 32-bit data types (for instance, int, DWORD). Special functions are used to cast such pointers to 32-bit types and back:
void * Handle64ToHandle( const void * POINTER_64 h )
void * POINTER_64 HandleToHandle64( const void *h )
long HandleToLong ( const void *h )
unsigned long HandleToUlong ( const void *h )
void * IntToPtr ( const int i )
void * LongToHandle ( const long h )
void * LongToPtr ( const long l )
void * Ptr64ToPtr ( const void * POINTER_64 p )
int PtrToInt ( const void *p )
long PtrToLong ( const void *p )
void * POINTER_64 PtrToPtr64 ( const void *p )
short PtrToShort ( const void *p )
unsigned int PtrToUint ( const void *p )
unsigned long PtrToUlong ( const void *p )
unsigned short PtrToUshort ( const void *p )
void * UIntToPtr ( const unsigned int ui )
void * ULongToPtr ( const unsigned long ul )
Let's take a look at the following example:
HANDLE h = Get();
UINT uId = (UINT)h;
The analyzer does not generate the message here, though HANDLE is nothing but a pointer. Values of this pointer always fit into 32 bits. Just make sure you take care when working with them later. Keep in mind that invalid handles are declared in the following way:
#define INVALID_HANDLE_VALUE ((HANDLE)(LONG_PTR)-1)
That's why it would be incorrect to write a check like this:
if (HANDLE(uID) == INVALID_HANDLE_VALUE)
Since the 'uID' variable is unsigned, the pointer's value will equal 0x00000000FFFFFFFF, not 0xFFFFFFFFFFFFFFFF.
The analyzer will generate the V204 warning for the suspicious conversion of an unsigned value into a pointer in such a check.
This warning informs you about an explicit conversion of a 'void *' or 'byte *' pointer to a function pointer or a pointer to a 32/64-bit integer type, or vice versa.
Of course, a type conversion like that is not in itself an error. Let's figure out why we implemented this diagnostic.
It is a pretty frequent situation when a pointer to some memory buffer is passed to another part of the program through a 'void *' or 'byte *' pointer. There may be different reasons for doing so; it usually indicates a poor code design, but this question is out of the scope of this article. Function pointers are often stored as 'void *' pointers, too.
So, assume we have an array/function pointer saved as 'void *' in one part of the program and cast back in another part. When porting such code, you may get unpleasant errors: a type may change in one place but stay unchanged in another.
For example:
size_t array[20];
void *v = array;
....
unsigned* sizes = (unsigned*)(v);
This code works well in the 32-bit mode as the sizes of the 'unsigned' and 'size_t' types coincide. In the 64-bit mode, however, their sizes are different and the program will behave unexpectedly. See also pattern 6, changing an array type.
The analyzer will point out the line with the explicit type conversion, where you will discover an error if you study it carefully. The fixed code may look like this:
unsigned array[20];
void *v = array;
....
unsigned* sizes = (unsigned*)(v);
or like this:
size_t array[20];
void *v = array;
....
size_t* sizes = (size_t*)(v);
A similar error may occur when working with function pointers.
void Do(void *ptr, unsigned a)
{
typedef void (*PtrFoo)(DWORD);
PtrFoo f = (PtrFoo)(ptr);
f(a);
}
void Foo(DWORD_PTR a) { /*... */ }
void Call()
{
Do(Foo, 1);
}
The fixed code: