# PVS-Studio Documentation (single file)

You can open the full PVS-Studio documentation as a single file. You can also print it as a .pdf file using a virtual printer.

## Analyzer Diagnostics

• PVS-Studio Messages.
• General Analysis (C++)
• General Analysis (C#)
• Diagnosis of micro-optimizations (C++)
• Diagnosis of 64-bit errors (C++)
• Customer's Specific Requests (C++)
• Problems related to code analyzer (C++, C#)

# PVS-Studio Messages

## What bugs can PVS-Studio detect?

We have grouped the diagnostics so that you can get a general idea of what PVS-Studio is capable of.

Since strict grouping is difficult, some diagnostics belong to several groups. For example, the incorrect condition "if (abc == abc)" can be interpreted both as a simple typo and as a security issue, because it can make the program vulnerable when the input data is incorrect.

Some errors, on the contrary, do not fit any of the groups because they are too specific. Nevertheless, this table gives an insight into the functionality of the static code analyzer.

## List of all analyzer rules in XML

A permanent link to a machine-readable map of all the analyzer's rules in XML format is available here.

| Main PVS-Studio diagnostic abilities | C, C++ diagnostics | C# diagnostics |
|---|---|---|
| 64-bit issues | V101-V128, V201-V207, V220, V221, V301-V303 | - |
| Check that addresses to stack memory do not leave the function | V506, V507, V558, V758 | - |
| Arithmetic over/underflow | V636, V658, V784, V786, V1012 | V3040, V3041 |
| Array index out of bounds | V557, V582, V643, V781 | V3106 |
| Double-free | V586, V749, V1002, V1006 | - |
| Dead code | V606, V607 | - |
| Microoptimization | V801-V821 | - |
| Unreachable code | V551, V695, V734, V776, V779, V785 | V3136 |
| Uninitialized variables | V573, V614, V679, V730, V737, V788, V1007 | V3070, V3128 |
| Unused variables | V603, V751, V763, V1001 | V3061, V3065, V3077, V3117, V3137 |
| Illegal bitwise/shift operations | V610, V629, V673, V684, V770 | V3134 |
| Undefined/unspecified behavior | V567, V610, V611, V681, V704, V708, V726, V736, V772, V1016 | - |
| Incorrect handling of the types (HRESULT, BSTR, BOOL, VARIANT_BOOL) | V543, V544, V545, V716, V721, V724, V745, V750, V676, V767, V768, V775 | V3111, V3121 |
| Improper understanding of function/class operation logic | V518, V530, V540, V541, V554, V575, V597, V598, V618, V630, V632, V663, V668, V698, V701, V702, V717, V718, V720, V723, V725, V727, V738, V742, V743, V748, V762, V764, V780, V789, V797, V1014 | V3010, V3057, V3068, V3072, V3073, V3074, V3082, V3084, V3094, V3096, V3097, V3102, V3103, V3104, V3108, V3114, V3115, V3118, V3123, V3126 |
| Misprints | V501, V503, V504, V508, V511, V516, V519, V520, V521, V525, V527, V528, V529, V532, V533, V534, V535, V536, V537, V539, V546, V549, V552, V556, V559, V560, V561, V564, V568, V570, V571, V575, V577, V578, V584, V587, V588, V589, V590, V592, V600, V602, V604, V606, V607, V616, V617, V620, V621, V622, V625, V626, V627, V633, V637, V638, V639, V644, V646, V650, V651, V653, V654, V655, V660, V661, V662, V666, V669, V671, V672, V678, V682, V683, V693, V715, V722, V735, V747, V754, V756, V765, V767, V787, V791, V792, V796, V1013, V1015 | V3001, V3003, V3005, V3007, V3008, V3009, V3011, V3012, V3014, V3015, V3016, V3020, V3028, V3029, V3034, V3035, V3036, V3037, V3038, V3050, V3055, V3056, V3057, V3062, V3063, V3066, V3081, V3086, V3091, V3092, V3107, V3109, V3110, V3112, V3113, V3116, V3122, V3124, V3132 |
| Missing virtual destructor | V599, V689 | - |
| Coding style not matching the operation logic of the source code | V563, V612, V628, V640, V646, V705 | V3018, V3033, V3043, V3067, V3069 |
| Copy-Paste | V501, V517, V519, V523, V524, V571, V581, V649, V656, V691, V760, V766, V778 | V3001, V3003, V3004, V3008, V3012, V3013, V3021, V3030, V3058, V3127 |
| Incorrect usage of exceptions | V509, V565, V596, V667, V740, V741, V746, V759 | V3006, V3052, V3100 |
| Buffer overrun | V512, V514, V594, V635, V641, V645, V752, V755 | - |
| Security issues | V505, V510, V511, V512, V518, V531, V541, V547, V559, V560, V569, V570, V575, V576, V579, V583, V597, V598, V618, V623, V642, V645, V675, V676, V724, V727, V729, V733, V743, V745, V750, V771, V774, V782, V1003, V1005, V1010, V1017 | V3022, V3023, V3025, V3027, V3053, V3063 |
| Operation priority | V502, V562, V593, V634, V648 | V3130, V3133 |
| Null pointer / null reference dereference | V522, V595, V664, V757, V769 | V3019, V3042, V3080, V3095, V3105, V3125 |
| Unchecked parameter dereference | V595, V664, V783, V1004 | V3095 |
| Synchronization errors | V712, V1011, V1018 | V3032, V3054, V3079, V3083, V3089, V3090 |
| WPF usage errors | - | V3044-V3049 |
| Resource leaks | V701, V773, V1020 | - |
| Check for integer division by zero | V609 | V3064 |
| Customized user rules | V2001-V2013 | - |

Table – PVS-Studio functionality.

As you can see, the analyzer is especially useful in such areas as finding bugs caused by Copy-Paste and detecting security flaws.

To see these diagnostics in action, have a look at the error base. We collect all the errors that we have found while checking various open source projects with PVS-Studio.

## General Analysis (C++)

• V501. There are identical sub-expressions to the left and to the right of the 'foo' operator.
• V502. Perhaps the '?:' operator works in a different way than it was expected. The '?:' operator has a lower priority than the 'foo' operator.
• V503. This is a nonsensical comparison: pointer < 0.
• V504. It is highly probable that the semicolon ';' is missing after 'return' keyword.
• V505. The 'alloca' function is used inside the loop. This can quickly overflow stack.
• V506. Pointer to local variable 'X' is stored outside the scope of this variable. Such a pointer will become invalid.
• V507. Pointer to local array 'X' is stored outside the scope of this array. Such a pointer will become invalid.
• V508. The use of 'new type(n)' pattern was detected. Probably meant: 'new type[n]'.
• V509. The 'throw' operator inside the destructor should be placed within the try..catch block. Raising exception inside the destructor is illegal.
• V510. The 'Foo' function is not expected to receive class-type variable as 'N' actual argument.
• V511. The sizeof() operator returns size of the pointer, and not of the array, in given expression.
• V512. A call of the 'Foo' function will lead to a buffer overflow or underflow.
• V514. Dividing sizeof a pointer by another value. There is a probability of logical error presence.
• V515. The 'delete' operator is applied to non-pointer.
• V516. Consider inspecting an odd expression. Non-null function pointer is compared to null.
• V517. The use of 'if (A) {...} else if (A) {...}' pattern was detected. There is a probability of logical error presence.
• V518. The 'malloc' function allocates strange amount of memory calculated by 'strlen(expr)'. Perhaps the correct variant is strlen(expr) + 1.
• V519. The 'x' variable is assigned values twice successively. Perhaps this is a mistake.
• V520. The comma operator ',' in array index expression.
• V521. Such expressions using the ',' operator are dangerous. Make sure the expression is correct.
• V522. Dereferencing of the null pointer might take place.
• V523. The 'then' statement is equivalent to the 'else' statement.
• V524. It is odd that the body of 'Foo_1' function is fully equivalent to the body of 'Foo_2' function.
• V525. The code contains the collection of similar blocks. Check items X, Y, Z, ... in lines N1, N2, N3, ...
• V526. The 'strcmp' function returns 0 if corresponding strings are equal. Consider examining the condition for mistakes.
• V527. It is odd that the 'zero' value is assigned to pointer. Probably meant: *ptr = zero.
• V528. It is odd that pointer is compared with the 'zero' value. Probably meant: *ptr != zero.
• V529. Odd semicolon ';' after 'if/for/while' operator.
• V530. The return value of function 'Foo' is required to be utilized.
• V531. It is odd that a sizeof() operator is multiplied by sizeof().
• V532. Consider inspecting the statement of '*pointer++' pattern. Probably meant: '(*pointer)++'.
• V533. It is likely that a wrong variable is being incremented inside the 'for' operator. Consider reviewing 'X'.
• V534. It is likely that a wrong variable is being compared inside the 'for' operator. Consider reviewing 'X'.
• V535. The variable 'X' is being used for this loop and for the outer loop.
• V536. Be advised that the utilized constant value is represented by an octal form.
• V537. Consider reviewing the correctness of 'X' item's usage.
• V538. The line contains control character 0x0B (vertical tabulation).
• V539. Consider inspecting iterators which are being passed as arguments to function 'Foo'.
• V540. Member 'x' should point to string terminated by two 0 characters.
• V541. It is dangerous to print a string into itself.
• V542. Consider inspecting an odd type cast: 'Type1' to 'Type2'.
• V543. It is odd that value 'X' is assigned to the variable 'Y' of HRESULT type.
• V544. It is odd that the value 'X' of HRESULT type is compared with 'Y'.
• V545. Such conditional expression of 'if' statement is incorrect for the HRESULT type value 'Foo'. The SUCCEEDED or FAILED macro should be used instead.
• V546. Member of a class is initialized with itself: 'Foo(Foo)'.
• V547. Expression is always true/false.
• V548. Consider reviewing type casting. TYPE X[][] is not equivalent to TYPE **X.
• V549. The 'first' argument of 'Foo' function is equal to the 'second' argument.
• V550. An odd precise comparison. It's probably better to use a comparison with defined precision: fabs(A - B) < Epsilon or fabs(A - B) > Epsilon.
• V551. The code under this 'case' label is unreachable.
• V552. A bool type variable is being incremented. Perhaps another variable should be incremented instead.
• V553. The length of function's body or class's declaration is more than 2000 lines long. You should consider refactoring the code.
• V554. Incorrect use of smart pointer.
• V555. The expression of the 'A - B > 0' kind will work as 'A != B'.
• V556. The values of different enum types are compared.
• V557. Array overrun is possible.
• V558. Function returns the pointer/reference to temporary local object.
• V559. Suspicious assignment inside the conditional expression of 'if/while/for' statement.
• V560. A part of conditional expression is always true/false.
• V561. It's probably better to assign value to 'foo' variable than to declare it anew.
• V562. It's odd to compare a bool type value with a value of N.
• V563. It is possible that this 'else' branch must apply to the previous 'if' statement.
• V564. The '&' or '|' operator is applied to bool type value. You've probably forgotten to include parentheses or intended to use the '&&' or '||' operator.
• V565. An empty exception handler. Silent suppression of exceptions can hide the presence of bugs in source code during testing.
• V566. The integer constant is converted to pointer. Possibly an error or a bad coding style.
• V567. The modification of a variable is unsequenced relative to another operation on the same variable. This may lead to undefined behavior.
• V568. It's odd that the argument of sizeof() operator is the expression.
• V569. Truncation of constant value.
• V570. The variable is assigned to itself.
• V571. Recurring check. This condition was already verified in previous line.
• V572. It is odd that the object which was created using 'new' operator is immediately cast to another type.
• V573. Uninitialized variable 'Foo' was used. The variable was used to initialize itself.
• V574. The pointer is used simultaneously as an array and as a pointer to single object.
• V575. Function receives an odd argument.
• V576. Incorrect format. Consider checking the N actual argument of the 'Foo' function.
• V577. Label is present inside a switch(). It is possible that these are misprints and 'default:' operator should be used instead.
• V578. An odd bitwise operation detected. Consider verifying it.
• V579. The 'Foo' function receives the pointer and its size as arguments. It is possibly a mistake. Inspect the N argument.
• V580. An odd explicit type casting. Consider verifying it.
• V581. The conditional expressions of the 'if' statements situated alongside each other are identical.
• V582. Consider reviewing the source code which operates the container.
• V583. The '?:' operator, regardless of its conditional expression, always returns one and the same value.
• V584. The same value is present on both sides of the operator. The expression is incorrect or it can be simplified.
• V585. An attempt to release the memory in which the 'Foo' local variable is stored.
• V586. The 'Foo' function is called twice for deallocation of the same resource.
• V587. An odd sequence of assignments of this kind: A = B; B = A;.
• V588. The expression of the 'A =+ B' kind is utilized. Consider reviewing it, as it is possible that 'A += B' was meant.
• V589. The expression of the 'A =- B' kind is utilized. Consider reviewing it, as it is possible that 'A -= B' was meant.
• V590. Consider inspecting this expression. The expression is excessive or contains a misprint.
• V591. Non-void function should return a value.
• V592. The expression was enclosed by parentheses twice: ((expression)). One pair of parentheses is unnecessary or misprint is present.
• V593. Consider reviewing the expression of the 'A = B == C' kind. The expression is calculated as following: 'A = (B == C)'.
• V594. The pointer steps out of array's bounds.
• V595. The pointer was utilized before it was verified against nullptr. Check lines: N1, N2.
• V596. The object was created but it is not being used. The 'throw' keyword could be missing.
• V597. The compiler could delete the 'memset' function call, which is used to flush 'Foo' buffer. The RtlSecureZeroMemory() function should be used to erase the private data.
• V598. The 'memset/memcpy' function is used to nullify/copy the fields of 'Foo' class. Virtual table pointer will be damaged by this.
• V599. The virtual destructor is not present, although the 'Foo' class contains virtual functions.
• V600. Consider inspecting the condition. The 'Foo' pointer is always not equal to NULL.
• V601. An odd implicit type casting.
• V602. Consider inspecting this expression. '<' possibly should be replaced with '<<'.
• V603. The object was created but it is not being used. If you wish to call constructor, 'this->Foo::Foo(....)' should be used.
• V604. It is odd that the number of iterations in the loop equals to the size of the pointer.
• V605. Consider verifying the expression. An unsigned value is compared to the number - NN.
• V606. Ownerless token 'Foo'.
• V607. Ownerless expression 'Foo'.
• V608. Recurring sequence of explicit type casts.
• V609. Divide or mod by zero.
• V610. Undefined behavior. Check the shift operator.
• V611. The memory allocation and deallocation methods are incompatible.
• V612. An unconditional 'break/continue/return/goto' within a loop.
• V613. Strange pointer arithmetic with 'malloc/new'.
• V614. Uninitialized variable 'Foo' used.
• V615. An odd explicit conversion from 'float *' type to 'double *' type.
• V616. The 'Foo' named constant with the value of 0 is used in the bitwise operation.
• V617. Consider inspecting the condition. An argument of the '|' bitwise operation always contains a non-zero value.
• V618. It's dangerous to call the 'Foo' function in such a manner, as the line being passed could contain format specification. The example of the safe code: printf("%s", str);
• V619. An array is being utilized as a pointer to single object.
• V620. It's unusual that the expression of sizeof(T)*N kind is being summed with the pointer to T type.
• V621. Consider inspecting the 'for' operator. It's possible that the loop will be executed incorrectly or won't be executed at all.
• V622. Consider inspecting the 'switch' statement. It's possible that the first 'case' operator is missing.
• V623. Consider inspecting the '?:' operator. A temporary object is being created and subsequently destroyed.
• V624. The constant NN is being utilized. The resulting value could be inaccurate. Consider using the M_NN constant from <math.h>.
• V625. Consider inspecting the 'for' operator. Initial and final values of the iterator are the same.
• V626. Consider checking for misprints. It's possible that ',' should be replaced by ';'.
• V627. Consider inspecting the expression. The argument of sizeof() is the macro which expands to a number.
• V628. It's possible that the line was commented out improperly, thus altering the program's operation logics.
• V629. Consider inspecting the expression. Bit shifting of the 32-bit value with a subsequent expansion to the 64-bit type.
• V630. The 'malloc' function is used to allocate memory for an array of objects which are classes containing constructors/destructors.
• V631. Consider inspecting the 'Foo' function call. Defining an absolute path to the file or directory is considered a poor style.
• V632. Consider inspecting the NN argument of the 'Foo' function. It is odd that the argument is of the 'T' type.
• V633. Consider inspecting the expression. Probably the '!=' should be used here.
• V634. The priority of the '+' operation is higher than that of the '<<' operation. It's possible that parentheses should be used in the expression.
• V635. Consider inspecting the expression. The length should probably be multiplied by the sizeof(wchar_t).
• V636. The expression was implicitly cast from integer type to real type. Consider utilizing an explicit type cast to avoid overflow or loss of a fractional part.
• V637. Two opposite conditions were encountered. The second condition is always false.
• V638. A terminal null is present inside a string. The '\0xNN' characters were encountered. Probably meant: '\xNN'.
• V639. Consider inspecting the expression for function call. It is possible that one of the closing ')' parentheses was positioned incorrectly.
• V640. The code's operational logic does not correspond with its formatting.
• V641. The buffer size is not a multiple of the element size.
• V642. Saving the function result inside the 'byte' type variable is inappropriate. The significant bits could be lost breaking the program's logic.
• V643. Unusual pointer arithmetic. The value of the 'char' type is being added to the string pointer.
• V644. A suspicious function declaration. It is possible that the T type object was meant to be created.
• V645. The function call could lead to the buffer overflow. The bounds should not contain the size of the buffer, but a number of characters it can hold.
• V646. Consider inspecting the application's logic. It's possible that 'else' keyword is missing.
• V647. The value of 'A' type is assigned to the pointer of 'B' type.
• V648. Priority of the '&&' operation is higher than that of the '||' operation.
• V649. There are two 'if' statements with identical conditional expressions. The first 'if' statement contains function return. This means that the second 'if' statement is senseless.
• V650. Type casting operation is utilized 2 times in succession. Next, the '+' operation is executed. Probably meant: (T1)((T2)a + b).
• V651. An odd operation of the 'sizeof(X)/sizeof(T)' kind is performed, where 'X' is of the 'class' type.
• V652. The operation is executed 3 or more times in succession.
• V653. A suspicious string consisting of two parts is used for the initialization. It is possible that a comma is missing.
• V654. The condition of loop is always true/false.
• V655. The strings were concatenated but are not utilized. Consider inspecting the expression.
• V656. Variables are initialized through the call to the same function. It's probably an error or un-optimized code.
• V657. It's odd that this function always returns one and the same value of NN.
• V658. A value is being subtracted from the unsigned variable. This can result in an overflow. In such a case, the comparison operation can potentially behave unexpectedly.
• V659. Declarations of functions with 'Foo' name differ in the 'const' keyword only, but the bodies of these functions have different composition. This is suspicious and can possibly be an error.
• V660. The program contains an unused label and a function call: 'CC:AA()'. It's possible that the following was intended: 'CC::AA()'.
• V661. A suspicious expression 'A[B < C]'. Probably meant 'A[B] < C'.
• V662. Consider inspecting the loop expression. Different containers are utilized for setting up initial and final values of the iterator.
• V663. Infinite loop is possible. The 'cin.eof()' condition is insufficient to break from the loop. Consider adding the 'cin.fail()' function call to the conditional expression.
• V664. The pointer is being dereferenced on the initialization list before it is verified against null inside the body of the constructor function.
• V665. Possibly, the usage of '#pragma warning(default: X)' is incorrect in this context. The '#pragma warning(push/pop)' should be used instead.
• V666. Consider inspecting NN argument of the function 'Foo'. It is possible that the value does not correspond with the length of a string which was passed with the YY argument.
• V667. The 'throw' operator does not possess any arguments and is not situated within the 'catch' block.
• V668. There is no sense in testing the pointer against null, as the memory was allocated using the 'new' operator. The exception will be generated in the case of memory allocation error.
• V669. The argument is a non-constant reference. The analyzer is unable to determine the position at which this argument is being modified. It is possible that the function contains an error.
• V670. An uninitialized class member is used to initialize another member. Remember that members are initialized in the order of their declarations inside a class.
• V671. It is possible that the 'swap' function interchanges a variable with itself.
• V672. There is probably no need in creating a new variable here. One of the function's arguments possesses the same name and this argument is a reference.
• V673. More than N bits are required to store the value, but the expression evaluates to the T type which can only hold K bits.
• V674. The expression contains a suspicious mix of integer and real types.
• V675. Writing into the read-only memory.
• V676. It is incorrect to compare the variable of BOOL type with TRUE.
• V677. Custom declaration of a standard type. The declaration from system header files should be used instead.
• V678. An object is used as an argument to its own method. Consider checking the first actual argument of the 'Foo' function.
• V679. The 'X' variable was not initialized. This variable is passed by a reference to the 'Foo' function in which its value will be utilized.
• V680. The 'delete A, B' expression only destroys the 'A' object. Then the ',' operator returns a resulting value from the right side of the expression.
• V681. The language standard does not define an order in which the 'Foo' functions will be called during evaluation of arguments.
• V682. Suspicious literal is present: '/r'. It is possible that a backslash should be used here instead: '\r'.
• V683. Consider inspecting the loop expression. It is possible that the 'i' variable should be incremented instead of the 'n' variable.
• V684. A value of variable is not modified. Consider inspecting the expression. It is possible that '1' should be present instead of '0'.
• V685. Consider inspecting the return statement. The expression contains a comma.
• V686. A pattern was detected: A || (A && ...). The expression is excessive or contains a logical error.
• V687. Size of an array calculated by the sizeof() operator was added to a pointer. It is possible that the number of elements should be calculated by sizeof(A)/sizeof(A[0]).
• V688. The 'foo' local variable possesses the same name as one of the class members, which can result in a confusion.
• V689. The destructor of the 'Foo' class is not declared as a virtual. It is possible that a smart pointer will not destroy an object correctly.
• V690. The class implements a copy constructor/operator=, but lacks the operator=/copy constructor.
• V691. Empirical analysis. It is possible that a typo is present inside the string literal. The 'foo' word is suspicious.
• V692. An inappropriate attempt to append a null character to a string. To determine the length of a string by 'strlen' function correctly, a string ending with a null terminator should be used in the first place.
• V693. Consider inspecting conditional expression of the loop. It is possible that 'i < X.size()' should be used instead of 'X.size()'.
• V694. The condition (ptr - const_value) is only false if the value of a pointer equals a magic constant.
• V695. Range intersections are possible within conditional expressions.
• V696. The 'continue' operator will terminate 'do { ... } while (FALSE)' loop because the condition is always false.
• V697. A number of elements in the allocated array is equal to size of a pointer in bytes.
• V698. strcmp()-like functions can return not only the values -1, 0 and 1, but any values.
• V699. Consider inspecting the 'foo = bar = baz ? .... : ....' expression. It is possible that 'foo = bar == baz ? .... : ....' should be used here instead.
• V700. Consider inspecting the 'T foo = foo = x;' expression. It is odd that variable is initialized through itself.
• V701. realloc() possible leak: when realloc() fails in allocating memory, original pointer is lost. Consider assigning realloc() to a temporary pointer.
• V702. Classes should always be derived from std::exception (and alike) as 'public'.
• V703. It is odd that the 'foo' field in derived class overwrites field in base class.
• V704. 'this == 0' comparison should be avoided - this comparison is always false on newer compilers.
• V705. It is possible that 'else' block was forgotten or commented out, thus altering the program's operation logics.
• V706. Suspicious division: sizeof(X) / Value. Size of every element in X array does not equal to divisor.
• V707. Giving short names to global variables is considered to be bad practice.
• V708. Dangerous construction is used: 'm[x] = m.size()', where 'm' is of 'T' class. This may lead to undefined behavior.
• V709. Suspicious comparison found: 'a == b == c'. Remember that 'a == b == c' is not equal to 'a == b && b == c'.
• V710. Suspicious declaration found. There is no point to declare constant reference to a number.
• V711. It is dangerous to create a local variable within a loop with a same name as a variable controlling this loop.
• V712. Be advised that compiler may delete this cycle or make it infinity. Use volatile variable(s) or synchronization primitives to avoid this.
• V713. The pointer was utilized in the logical expression before it was verified against nullptr in the same logical expression.
• V714. Variable is not passed into foreach loop by a reference, but its value is changed inside of the loop.
• V715. The 'while' operator has empty body. Suspicious pattern detected.
• V716. Suspicious type conversion: HRESULT -> BOOL (BOOL -> HRESULT).
• V717. It is suspicious to cast object of base class V to derived class U.
• V718. The 'Foo' function should not be called from 'DllMain' function.
• V719. The switch statement does not cover all values of the enum.
• V720. It is advised to utilize the 'SuspendThread' function only when developing a debugger (see documentation for details).
• V721. The VARIANT_BOOL type is utilized incorrectly. The true value (VARIANT_TRUE) is defined as -1.
• V722. An abnormality within similar comparisons. It is possible that a typo is present inside the expression.
• V723. Function returns a pointer to the internal string buffer of a local object, which will be destroyed.
• V724. Converting integers or pointers to BOOL can lead to a loss of high-order bits. Non-zero value can become 'FALSE'.
• V725. A dangerous cast of 'this' to 'void*' type in the 'Base' class, as it is followed by a subsequent cast to 'Class' type.
• V726. An attempt to free memory containing the 'int A[10]' array by using the 'free(A)' function.
• V727. Return value of 'wcslen' function is not multiplied by 'sizeof(wchar_t)'.
• V728. An excessive check can be simplified. The '||' operator is surrounded by opposite expressions 'x' and '!x'.
• V729. Function body contains the 'X' label that is not used by any 'goto' statements.
• V730. Not all members of a class are initialized inside the constructor.
• V731. The variable of char type is compared with pointer to string.
• V732. Unary minus operator does not modify a bool type value.
• V733. It is possible that macro expansion resulted in incorrect evaluation order.
• V734. An excessive expression. Examine the substrings "abc" and "abcd".
• V735. Possibly an incorrect HTML. The "</XX" closing tag was encountered, while the "</YY" tag was expected.
• V736. The behavior is undefined for arithmetic or comparisons with pointers that do not point to members of the same array.
• V737. It is possible that ',' comma is missing at the end of the string.
• V738. Temporary anonymous object is used.
• V739. EOF should not be compared with a value of the 'char' type. Consider using the 'int' type.
• V740. Because NULL is defined as 0, the exception is of the 'int' type. Keyword 'nullptr' could be used for 'pointer' type exception.
• V741. The following pattern is used: throw (a, b);. It is possible that type name was omitted: throw MyException(a, b);.
• V742. Function receives an address of a 'char' type variable instead of pointer to a buffer.
• V743. The memory areas must not overlap. Use 'memmove' function.
• V744. Temporary object is immediately destroyed after being created. Consider naming the object.
• V745. A 'wchar_t *' type string is incorrectly converted to 'BSTR' type string.
• V746. Object slicing. An exception should be caught by reference rather than by value.
• V747. An odd expression inside parenthesis. It is possible that a function name is missing.
• V748. Memory for 'getline' function should be allocated only by 'malloc' or 'realloc' functions. Consider inspecting the first parameter of 'getline' function.
• V749. Destructor of the object will be invoked a second time after leaving the object's scope.
• V750. BSTR string becomes invalid. Notice that BSTR strings store their length before start of the text.
• V751. Parameter is not used inside function's body.
• V752. Creating an object with placement new requires a buffer of large size.
• V753. The '&=' operation always sets a value of 'Foo' variable to zero.
• V754. The expression of 'foo(foo(x))' pattern is excessive or contains an error.
• V755. Copying from unsafe data source. Buffer overflow is possible.
• V756. The 'X' counter is not used inside a nested loop. Consider inspecting usage of 'Y' counter.
• V757. It is possible that an incorrect variable is compared with null after type conversion using 'dynamic_cast'.
• V758. Reference invalidated because of the destruction of the temporary object returned by the function.
• V759. Violated order of exception handlers. Exception caught by handler for base class.
• V760. Two identical text blocks detected. The second block starts with NN string.
• V761. NN identical blocks were found.
• V762. Consider inspecting virtual function arguments. See NN argument of function 'Foo' in derived class and base class.
• V763. Parameter is always rewritten in function body before being used.
• V764. Possible incorrect order of arguments passed to function.
• V765. A compound assignment expression 'X += X + N' is suspicious. Consider inspecting it for a possible error.
• V766. An item with the same key has already been added.
• V767. Suspicious access to element by a constant index inside a loop.
• V768. The variable is of enum type. It is odd that it is used as a variable of a Boolean-type.
• V769. The pointer in the expression equals nullptr. The resulting value is meaningless and should not be used.
• V770. Possible use of a left shift operator instead of a comparison operator.
• V771. The '?:' operator uses constants from different enums.
• V772. Calling the 'delete' operator for a void pointer will cause undefined behavior.
• V773. The function was exited without releasing the pointer/handle. A memory/resource leak is possible.
• V774. The pointer was used after the memory was released.
• V775. It is odd that the BSTR data type is compared using a relational operator.
• V776. Potentially infinite loop. The variable in the loop exit condition does not change its value between iterations.
• V777. Dangerous widening type conversion from an array of derived-class objects to a base-class pointer.
• V778. Two similar code fragments were found. Perhaps, this is a typo and 'X' variable should be used instead of 'Y'.
• V779. Unreachable code detected. It is possible that an error is present.
• V780. The object of non-passive (non-PDS) type cannot be used with the function.
• V781. The value of the variable is checked after it was used. Perhaps there is a mistake in program logic. Check lines: N1, N2.
• V782. It is pointless to compute the distance between the elements of different arrays.
• V783. Dereferencing of invalid iterator 'X' might take place.
• V784. The size of the bit mask is less than the size of the first operand. This will cause the loss of the higher bits.
• V785. Constant expression in switch statement.
• V786. Assigning the value C to the X variable looks suspicious. The value range of the variable: [A, B].
• V787. A wrong variable is probably used as an index in the for statement.
• V788. Review captured variable in lambda expression.
• V789. Iterators for the container, used in the range-based for loop, become invalid upon a function call.
• V790. It is odd that the assignment operator takes an object by a non-constant reference and returns this object.
• V791. The initial value of the index in the nested loop equals 'i'. Consider using 'i + 1' instead.
• V792. The function located to the right of the '|' and '&' operators will be called regardless of the value of the left operand. Consider using '||' and '&&' instead.
• V793. It is odd that the result of the statement is a part of the condition. Perhaps, this statement should have been compared with something else.
• V794. The assignment operator should be protected from the case of this == &src.
• V795. Note that the size of the 'time_t' type is not 64 bits. After the year 2038, the program will work incorrectly.
• V796. A 'break' statement is probably missing in a 'switch' statement.
• V797. The function is used as if it returned a bool type. The return value of the function should probably be compared with std::string::npos.
• V798. The size of the dynamic array can be less than the number of elements in the initializer.
• V799. The variable is not used after memory has been allocated for it. Consider checking the use of this variable.
• V1001. The variable is assigned but is not used by the end of the function.
• V1002. A class, containing pointers, constructor and destructor, is copied by the automatically generated operator= or copy constructor.
• V1003. The macro expression is dangerous, or it is suspicious.
• V1004. The pointer was used unsafely after it was verified against nullptr.
• V1005. The resource was acquired using 'X' function but was released using incompatible 'Y' function.
• V1006. Several shared_ptr objects are initialized by the same pointer. A double memory deallocation will occur.
• V1007. The value from the uninitialized optional is used. Probably it is a mistake.
• V1008. Consider inspecting the 'for' operator. No more than one iteration of the loop will be performed.
• V1009. Check the array initialization. Only the first element is initialized explicitly.
• V1010. Unchecked tainted data is used in expression.
• V1011. Function execution could be deferred. Consider specifying execution policy explicitly.
• V1012. The expression is always false. Overflow check is incorrect.
• V1013. Suspicious subexpression in a sequence of similar comparisons.
• V1014. Structures with members of real type are compared byte-wise.
• V1015. Suspicious simultaneous use of bitwise and logical operators.
• V1016. The value is out of range of enum values. This causes unspecified or undefined behavior.
• V1017. Variable of the 'string_view' type references a temporary object which will be removed after evaluation of an expression.
• V1018. Usage of a suspicious mutex wrapper. It is probably unused, uninitialized, or already locked.
• V1019. Compound assignment expression is used inside condition.
• V1020. Function exited without performing epilogue actions. It is possible that there is an error.
• V1021. The variable is assigned the same value on several loop iterations.
• V1022. An exception was thrown by pointer. Consider throwing it by value instead.
• V1023. A pointer without owner is added to the container by the 'emplace_back' method. A memory leak will occur in case of an exception.
• V1024. The stream is checked for EOF before reading from it, but is not checked after reading. Potential use of invalid data.
• V1025. Rather than creating 'std::unique_lock' to lock on the mutex, a new variable with default value is created.
• V1026. The variable is incremented in the loop. Undefined behavior will occur in case of signed integer overflow.
• V1027. Pointer to an object of the class is cast to unrelated class.
• V1028. Possible overflow. Consider casting operands, not the result.
• V1029. Numeric Truncation Error. Return value of function is written to N-bit variable.
• V1030. The variable is used after it was moved.
• V1031. Function is not declared. The passing of data to or from this function may be affected.
• V1032. Pointer is cast to a more strictly aligned pointer type.
• V1033. Variable is declared as auto in C. Its default type is int.
• V1034. Do not use real-type variables as loop counters.
• V1035. Only values that are returned from fgetpos() can be used as arguments to fsetpos().
• V1036. Potentially unsafe double-checked locking.

## General Analysis (C#)

• V3001. There are identical sub-expressions to the left and to the right of the 'foo' operator.
• V3002. The switch statement does not cover all values of the enum.
• V3003. The use of 'if (A) {...} else if (A) {...}' pattern was detected. There is a probability of logical error presence.
• V3004. The 'then' statement is equivalent to the 'else' statement.
• V3005. The 'x' variable is assigned to itself.
• V3006. The object was created but it is not being used. The 'throw' keyword could be missing.
• V3007. Odd semicolon ';' after 'if/for/while' operator.
• V3008. The 'x' variable is assigned values twice successively. Perhaps this is a mistake.
• V3009. It's odd that this method always returns one and the same value of NN.
• V3010. The return value of function 'Foo' is required to be utilized.
• V3011. Two opposite conditions were encountered. The second condition is always false.
• V3012. The '?:' operator, regardless of its conditional expression, always returns one and the same value.
• V3013. It is odd that the body of 'Foo_1' function is fully equivalent to the body of 'Foo_2' function.
• V3014. It is likely that a wrong variable is being incremented inside the 'for' operator. Consider reviewing 'X'.
• V3015. It is likely that a wrong variable is being compared inside the 'for' operator. Consider reviewing 'X'.
• V3016. The variable 'X' is being used for this loop and for the outer loop.
• V3017. A pattern was detected: A || (A && ...). The expression is excessive or contains a logical error.
• V3018. Consider inspecting the application's logic. It's possible that 'else' keyword is missing.
• V3019. It is possible that an incorrect variable is compared with null after type conversion using 'as' keyword.
• V3020. An unconditional 'break/continue/return/goto' within a loop.
• V3021. There are two 'if' statements with identical conditional expressions. The first 'if' statement contains method return. This means that the second 'if' statement is senseless.
• V3022. Expression is always true/false.
• V3023. Consider inspecting this expression. The expression is excessive or contains a misprint.
• V3024. An odd precise comparison. Consider using a comparison with defined precision: Math.Abs(A - B) < Epsilon or Math.Abs(A - B) > Epsilon.
• V3025. Incorrect format. Consider checking the N format items of the 'Foo' function.
• V3026. The constant NN is being utilized. The resulting value could be inaccurate. Consider using the KK constant.
• V3027. The variable was utilized in the logical expression before it was verified against null in the same logical expression.
• V3028. Consider inspecting the 'for' operator. Initial and final values of the iterator are the same.
• V3029. The conditional expressions of the 'if' statements situated alongside each other are identical.
• V3030. Recurring check. This condition was already verified in previous line.
• V3031. An excessive check can be simplified. The '||' operator is surrounded by opposite expressions 'x' and '!x'.
• V3032. Waiting on this expression is unreliable, as compiler may optimize some of the variables. Use volatile variable(s) or synchronization primitives to avoid this.
• V3033. It is possible that this 'else' branch must apply to the previous 'if' statement.
• V3034. Consider inspecting the expression. Probably the '!=' should be used here.
• V3035. Consider inspecting the expression. Probably the '+=' should be used here.
• V3036. Consider inspecting the expression. Probably the '-=' should be used here.
• V3037. An odd sequence of assignments of this kind: A = B; B = A;
• V3038. The argument was passed to method several times. It is possible that another argument should be passed instead.
• V3039. Consider inspecting the 'Foo' function call. Defining an absolute path to the file or directory is considered a poor style.
• V3040. The expression contains a suspicious mix of integer and real types.
• V3041. The expression was implicitly cast from integer type to real type. Consider utilizing an explicit type cast to avoid the loss of a fractional part.
• V3042. Possible NullReferenceException. The '?.' and '.' operators are used for accessing members of the same object.
• V3043. The code's operational logic does not correspond with its formatting.
• V3044. WPF: writing and reading are performed on different Dependency Properties.
• V3045. WPF: the names of the property registered for DependencyProperty, and of the property used to access it, do not correspond with each other.
• V3046. WPF: the type registered for DependencyProperty does not correspond with the type of the property used to access it.
• V3047. WPF: A class containing registered property does not correspond with a type that is passed as the ownerType.type.
• V3048. WPF: several Dependency Properties are registered with the same name within the owner type.
• V3049. WPF: readonly field of 'DependencyProperty' type is not initialized.
• V3050. Possibly an incorrect HTML. The </XX> closing tag was encountered, while the </YY> tag was expected.
• V3051. An excessive type cast or check. The object is already of the same type.
• V3052. The original exception object was swallowed. Stack of original exception could be lost.
• V3053. An excessive expression. Examine the substrings "abc" and "abcd".
• V3054. Potentially unsafe double-checked locking. Use volatile variable(s) or synchronization primitives to avoid this.
• V3055. Suspicious assignment inside the condition expression of 'if/while/for' operator.
• V3056. Consider reviewing the correctness of 'X' item's usage.
• V3057. Function receives an odd argument.
• V3058. An item with the same key has already been added.
• V3059. Consider adding '[Flags]' attribute to the enum.
• V3060. A value of variable is not modified. Consider inspecting the expression. It is possible that other value should be present instead of '0'.
• V3061. Parameter 'A' is always rewritten in method body before being used.
• V3062. An object is used as an argument to its own method. Consider checking the first actual argument of the 'Foo' method.
• V3063. A part of conditional expression is always true/false if it is evaluated.
• V3064. Division or mod division by zero.
• V3065. Parameter is not utilized inside method's body.
• V3066. Possible incorrect order of arguments passed to method.
• V3067. It is possible that 'else' block was forgotten or commented out, thus altering the program's operation logics.
• V3068. Calling overrideable class member from constructor is dangerous.
• V3069. It's possible that the line was commented out improperly, thus altering the program's operation logics.
• V3070. Uninitialized variables are used when initializing the 'A' variable.
• V3071. The object is returned from inside 'using' block. 'Dispose' will be invoked before exiting method.
• V3072. The 'A' class containing IDisposable members does not itself implement IDisposable.
• V3073. Not all IDisposable members are properly disposed. Call 'Dispose' when disposing 'A' class.
• V3074. The 'A' class contains 'Dispose' method. Consider making it implement 'IDisposable' interface.
• V3075. The operation is executed 2 or more times in succession.
• V3076. Comparison with 'double.NaN' is meaningless. Use 'double.IsNaN()' method instead.
• V3077. Property setter / event accessor does not utilize its 'value' parameter.
• V3078. Original sorting order will be lost after repetitive call to 'OrderBy' method. Use 'ThenBy' method to preserve the original sorting.
• V3079. 'ThreadStatic' attribute is applied to a non-static 'A' field and will be ignored.
• V3080. Possible null dereference.
• V3081. The 'X' counter is not used inside a nested loop. Consider inspecting usage of 'Y' counter.
• V3082. The 'Thread' object is created but is not started. It is possible that a call to 'Start' method is missing.
• V3083. Unsafe invocation of event, NullReferenceException is possible. Consider assigning event to a local variable before invoking it.
• V3084. Anonymous function is used to unsubscribe from event. No handlers will be unsubscribed, as a separate delegate instance is created for each anonymous function declaration.
• V3085. The name of 'X' field/property in a nested type is ambiguous. The outer type contains static field/property with identical name.
• V3086. Variables are initialized through the call to the same function. It's probably an error or un-optimized code.
• V3087. Type of variable enumerated in 'foreach' is not guaranteed to be castable to the type of collection's elements.
• V3088. The expression was enclosed by parentheses twice: ((expression)). One pair of parentheses is unnecessary or misprint is present.
• V3089. Initializer of a field marked by [ThreadStatic] attribute will be called once on the first accessing thread. The field will have default value on different threads.
• V3090. Unsafe locking on an object.
• V3091. Empirical analysis. It is possible that a typo is present inside the string literal. The 'foo' word is suspicious.
• V3092. Range intersections are possible within conditional expressions.
• V3093. The operator evaluates both operands. Perhaps a short-circuit operator should be used instead.
• V3094. Possible exception when deserializing type. The Ctor(SerializationInfo, StreamingContext) constructor is missing.
• V3095. The object was used before it was verified against null. Check lines: N1, N2.
• V3096. Possible exception when serializing type. [Serializable] attribute is missing.
• V3097. Possible exception: type marked by [Serializable] contains non-serializable members not marked by [NonSerialized].
• V3098. The 'continue' operator will terminate 'do { ... } while (false)' loop because the condition is always false.
• V3099. Not all the members of type are serialized inside 'GetObjectData' method.
• V3100. NullReferenceException is possible. Unhandled exceptions in destructor lead to termination of runtime.
• V3101. Potential resurrection of 'this' object instance from destructor. Without re-registering for finalization, destructor will not be called a second time on resurrected object.
• V3102. Suspicious access to element by a constant index inside a loop.
• V3103. A private Ctor(SerializationInfo, StreamingContext) constructor in unsealed type will not be accessible when deserializing derived types.
• V3104. 'GetObjectData' implementation in unsealed type is not virtual, incorrect serialization of derived type is possible.
• V3105. The 'a' variable was used after it was assigned through null-conditional operator. NullReferenceException is possible.
• V3106. Possibly index is out of bounds.
• V3107. Identical expressions to the left and to the right of compound assignment.
• V3108. It is not recommended to return null or throw exceptions from 'ToString()' method.
• V3109. The same sub-expression is present on both sides of the operator. The expression is incorrect or it can be simplified.
• V3110. Possible infinite recursion.
• V3111. Checking value for null will always return false when generic type is instantiated with a value type.
• V3112. An abnormality within similar comparisons. It is possible that a typo is present inside the expression.
• V3113. Consider inspecting the loop expression. It is possible that different variables are used inside initializer and iterator.
• V3114. IDisposable object is not disposed before method returns.
• V3115. It is not recommended to throw exceptions from 'Equals(object obj)' method.
• V3116. Consider inspecting the 'for' operator. It's possible that the loop will be executed incorrectly or won't be executed at all.
• V3117. Constructor parameter is not used.
• V3118. A component of TimeSpan is used, which does not represent full time interval. Possibly 'Total*' value was intended instead.
• V3119. Calling a virtual (overridden) event may lead to unpredictable behavior. Consider implementing event accessors explicitly or use 'sealed' keyword.
• V3120. Potentially infinite loop. The variable in the loop exit condition does not change its value between iterations.
• V3121. An enumeration was declared with 'Flags' attribute, but no initializers were set to override default values.
• V3122. Uppercase (lowercase) string is compared with a different lowercase (uppercase) string.
• V3123. Perhaps the '??' operator works differently from what was expected. Its priority is lower than that of other operators in its left part.
• V3124. Appending an element and checking for key uniqueness is performed on two different variables.
• V3125. The object was used after it was verified against null. Check lines: N1, N2.
• V3126. Type implementing IEquatable<T> interface does not override 'GetHashCode' method.
• V3127. Two similar code fragments were found. Perhaps this is a typo and 'X' variable should be used instead of 'Y'.
• V3128. The field (property) is used before it is initialized in constructor.
• V3129. The value of the captured variable will be overwritten on the next iteration of the loop in each instance of anonymous function that captures it.
• V3130. Priority of the '&&' operator is higher than that of the '||' operator. Possible missing parentheses.
• V3131. The expression is checked for compatibility with type 'A' but is cast to type 'B'.
• V3132. A terminal null is present inside a string. '\0xNN' character sequence was encountered. Probably meant: '\xNN'.
• V3133. Postfix increment/decrement is meaningless because this variable is overwritten.
• V3134. Shift by N bits is greater than the size of type.
• V3135. The initial value of the index in the nested loop equals 'i'. Consider using 'i + 1' instead.
• V3136. Constant expression in switch statement.
• V3137. The variable is assigned but is not used by the end of the function.

## General Analysis (Java)

• V6001. There are identical sub-expressions to the left and to the right of the 'foo' operator.
• V6002. The switch statement does not cover all values of the enum.
• V6003. The use of 'if (A) {...} else if (A) {...}' pattern was detected. There is a probability of logical error presence.
• V6004. The 'then' statement is equivalent to the 'else' statement.
• V6005. The 'x' variable is assigned to itself.
• V6006. The object was created but it is not being used. The 'throw' keyword could be missing.
• V6007. Expression is always true/false.
• V6008. Potential null dereference.
• V6009. Function receives an odd argument.
• V6010. The return value of function 'Foo' is required to be utilized.
• V6011. The expression contains a suspicious mix of integer and real types.
• V6012. The '?:' operator, regardless of its conditional expression, always returns one and the same value.
• V6013. Comparison of arrays, strings, collections by reference. Possibly an equality comparison was intended.
• V6014. It's odd that this method always returns one and the same value of NN.
• V6015. Consider inspecting the expression. Probably the '!='/'-='/'+=' should be used here.
• V6016. Suspicious access to element by a constant index inside a loop.
• V6017. The 'X' counter is not used inside a nested loop. Consider inspecting usage of 'Y' counter.
• V6018. Constant expression in switch statement.
• V6019. Unreachable code detected. It is possible that an error is present.
• V6020. Division or mod division by zero.
• V6021. The value is assigned to the 'x' variable but is not used.
• V6022. Parameter is not used inside method's body.
• V6023. Parameter 'A' is always rewritten in method body before being used.
• V6024. The 'continue' operator will terminate 'do { ... } while (false)' loop because the condition is always false.
• V6025. Possibly index is out of bounds.
• V6026. This value is already assigned to the 'b' variable.
• V6027. Variables are initialized through the call to the same function. It's probably an error or un-optimized code.
• V6028. Identical expressions to the left and to the right of compound assignment.
• V6029. Possible incorrect order of arguments passed to method.
• V6030. The function located to the right of the '|' and '&' operators will be called regardless of the value of the left operand. Consider using '||' and '&&' instead.
• V6031. The variable 'X' is being used for this loop and for the outer loop.
• V6032. It is odd that the body of 'Foo_1' function is fully equivalent to the body of 'Foo_2' function.
• V6033. An item with the same key has already been added.
• V6034. Shift by N bits is inconsistent with the size of type.
• V6035. Double negation is present in the expression: !!x.
• V6036. The value from the uninitialized optional is used.
• V6037. An unconditional 'break/continue/return/goto' within a loop.
• V6038. Comparison with 'double.NaN' is meaningless. Use 'double.isNaN()' method instead.
• V6039. There are two 'if' statements with identical conditional expressions. The first 'if' statement contains method return. This means that the second 'if' statement is senseless.
• V6040. The code's operational logic does not correspond with its formatting.
• V6041. Suspicious assignment inside the conditional expression of 'if/while/do...while' statement.
• V6042. The expression is checked for compatibility with type 'A', but is cast to type 'B'.
• V6043. Consider inspecting the 'for' operator. Initial and final values of the iterator are the same.
• V6044. Postfix increment/decrement is senseless because this variable is overwritten.
• V6045. Suspicious subexpression in a sequence of similar comparisons.
• V6046. Incorrect format. Consider checking the N format items of the 'Foo' function.
• V6047. It is possible that this 'else' branch must apply to the previous 'if' statement.
• V6048. This expression can be simplified. One of the operands in the operation equals NN. Probably it is a mistake.
• V6049. Classes that define 'equals' method must also define 'hashCode' method.
• V6050. Class initialization cycle is present.
• V6051. Use of jump statements in 'finally' block can lead to the loss of unhandled exceptions.
• V6052. Calling an overridden method in parent-class constructor may lead to use of uninitialized data.
• V6053. Collection is modified while iteration is in progress. ConcurrentModificationException may occur.
• V6054. Classes should not be compared by their name.
• V6055. Expression inside assert statement can change object's state.
• V6056. Implementation of 'compareTo' overloads the method from a base class. Possibly, an override was intended.
• V6057. Consider inspecting this expression. The expression is excessive or contains a misprint.
• V6058. Comparing objects of incompatible types.
• V6059. Odd use of special character in regular expression. Possibly, it was intended to be escaped.
• V6060. The reference was used before it was verified against null.
• V6061. The used constant value is represented by an octal form.
• V6062. Possible infinite recursion.
• V6063. Odd semicolon ';' after 'if/foreach' operator.
• V6064. Suspicious invocation of Thread.run().
• V6065. A non-serializable class should not be serialized.
• V6066. Passing objects of incompatible types to the method of collection.

## Diagnosis of micro-optimizations (C++)

• V801. Decreased performance. It is better to redefine the N function argument as a reference. Consider replacing 'const T' with 'const .. &T' / 'const .. *T'.
• V802. On 32-bit/64-bit platform, structure size can be reduced from N to K bytes by rearranging the fields according to their sizes in decreasing order.
• V803. Decreased performance. It is more effective to use the prefix form of ++it. Replace iterator++ with ++iterator.
• V804. Decreased performance. The 'Foo' function is called twice in the specified expression to calculate length of the same string.
• V805. Decreased performance. It is inefficient to identify an empty string by using 'strlen(str) > 0' construct. A more efficient way is to check: str[0] != '\0'
• V806. Decreased performance. The expression of strlen(MyStr.c_str()) kind can be rewritten as MyStr.length().
• V807. Decreased performance. Consider creating a pointer/reference to avoid using the same expression repeatedly.
• V808. An array/object was declared but was not utilized.
• V809. Verifying that a pointer value is not NULL is not required. The 'if (ptr != NULL)' check can be removed.
• V810. Decreased performance. The 'A' function was called several times with identical arguments. The result should possibly be saved to a temporary variable, which then could be used while calling the 'B' function.
• V811. Decreased performance. Excessive type casting: string -> char * -> string.
• V812. Decreased performance. Ineffective use of the 'count' function. It can possibly be replaced by the call to the 'find' function.
• V813. Decreased performance. The argument should probably be rendered as a constant pointer/reference.
• V814. Decreased performance. The 'strlen' function was called multiple times inside the body of a loop.
• V815. Decreased performance. Consider replacing the expression 'AA' with 'BB'.
• V816. It is more efficient to catch exception by reference rather than by value.
• V817. It is more efficient to search for 'X' character rather than a string.
• V818. It is more efficient to use an initialization list rather than an assignment operator.
• V819. Decreased performance. Memory is allocated and released multiple times inside the loop body.
• V820. The variable is not used after copying. Copying can be replaced with move/swap for optimization.
• V821. The variable can be constructed in a lower level scope.

## Diagnosis of 64-bit errors (Viva64, C++)

• V101. Implicit assignment type conversion to memsize type.
• V102. Usage of non memsize type for pointer arithmetic.
• V103. Implicit type conversion from memsize type to 32-bit type.
• V104. Implicit type conversion to memsize type in an arithmetic expression.
• V105. N operand of '?:' operation: implicit type conversion to memsize type.
• V106. Implicit type conversion N argument of function 'foo' to memsize type.
• V107. Implicit type conversion N argument of function 'foo' to 32-bit type.
• V108. Incorrect index type: 'foo[not a memsize-type]'. Use memsize type instead.
• V109. Implicit type conversion of return value to memsize type.
• V110. Implicit type conversion of return value from memsize type to 32-bit type.
• V111. Call of function 'foo' with variable number of arguments. N argument has memsize type.
• V112. Dangerous magic number N used.
• V113. Implicit type conversion from memsize to double type or vice versa.
• V114. Dangerous explicit type pointer conversion.
• V115. Memsize type is used for throw.
• V116. Memsize type is used for catch.
• V117. Memsize type is used in the union.
• V118. malloc() function accepts a dangerous expression in the capacity of an argument.
• V119. More than one sizeof() operator is used in one expression.
• V120. Member operator[] of object 'foo' is declared with 32-bit type argument, but is called with memsize type argument.
• V121. Implicit conversion of the type of 'new' operator's argument to size_t type.
• V122. Memsize type is used in the struct/class.
• V123. Allocation of memory by the pattern "(X*)malloc(sizeof(Y))" where the sizes of X and Y types are not equal.
• V124. Function 'Foo' writes/reads 'N' bytes. The alignment rules and type sizes have been changed. Consider reviewing this value.
• V125. It is not advised to declare type 'T' as 32-bit type.
• V126. Be advised that the size of the type 'long' varies between LLP64/LP64 data models.
• V127. An overflow of the 32-bit variable is possible inside a long cycle which utilizes a memsize-type loop counter.
• V128. A variable of the memsize type is read from a stream. Consider verifying the compatibility of 32 and 64 bit versions of the application in the context of a stored data.
• V201. Explicit conversion from 32-bit integer type to memsize type.
• V202. Explicit conversion from memsize type to 32-bit integer type.
• V203. Explicit type conversion from memsize to double type or vice versa.
• V204. Explicit conversion from 32-bit integer type to pointer type.
• V205. Explicit conversion of pointer type to 32-bit integer type.
• V206. Explicit conversion from 'void *' to 'int *'.
• V207. A 32-bit variable is utilized as a reference to a pointer. A write outside the bounds of this variable may occur.
• V220. Suspicious sequence of types castings: memsize -> 32-bit integer -> memsize.
• V221. Suspicious sequence of types castings: pointer -> memsize -> 32-bit integer.
• V301. Unexpected function overloading behavior. See N argument of function 'foo' in derived class 'derived' and base class 'base'.
• V302. Member operator[] of 'foo' class has a 32-bit type argument. Use memsize-type here.
• V303. The function is deprecated in the Win64 system. It is safer to use the 'foo' function.

## Customer's Specific Requests (C++)

• V2001. Consider using the extended version of the 'foo' function here.
• V2002. Consider using the 'Ptr' version of the 'foo' function here.
• V2003. Explicit conversion from 'float/double' type to signed integer type.
• V2004. Explicit conversion from 'float/double' type to unsigned integer type.
• V2005. C-style explicit type casting is utilized. Consider using: static_cast/const_cast/reinterpret_cast.
• V2006. Implicit type conversion from enum type to integer type.
• V2007. This expression can be simplified. One of the operands in the operation equals NN. Probably it is a mistake.
• V2008. Cyclomatic complexity: NN. Consider refactoring the 'Foo' function.
• V2009. Consider passing the 'Foo' argument as a constant pointer/reference.
• V2010. Handling of two different exception types is identical.
• V2011. Consider inspecting signed and unsigned function arguments. See NN argument of function 'Foo' in derived class and base class.
• V2012. Possibility of decreased performance. It is advised to pass arguments to std::unary_function/std::binary_function template as references.
• V2013. Consider inspecting the correctness of handling the N argument in the 'Foo' function.
• V2014. Don't use terminating functions in library code.

## MISRA errors

• V2501. MISRA. Octal constants should not be used.
• V2502. MISRA. The 'goto' statement should not be used.
• V2503. MISRA. Implicitly specified enumeration constants should be unique – consider specifying non-unique constants explicitly.
• V2504. MISRA. Size of an array is not specified.
• V2505. MISRA. The 'goto' statement shouldn't jump to a label declared earlier.
• V2506. MISRA. A function should have a single point of exit at the end.
• V2507. MISRA. The body of a loop/conditional statement should be enclosed in braces.
• V2508. MISRA. The function with the 'atof/atoi/atol/atoll' name should not be used.
• V2509. MISRA. The function with the 'abort/exit/getenv/system' name should not be used.
• V2510. MISRA. The function with the 'qsort/bsearch' name should not be used.
• V2511. MISRA. Memory allocation and deallocation functions should not be used.
• V2512. MISRA. The macro with the 'setjmp' name and the function with the 'longjmp' name should not be used.
• V2513. MISRA. Unbounded functions performing string operations should not be used.
• V2514. MISRA. Unions should not be used.
• V2515. MISRA. Declaration should contain no more than two levels of pointer nesting.
• V2516. MISRA. The 'if' ... 'else if' construct should be terminated with an 'else' statement.
• V2517. MISRA. Literal suffixes should not contain lowercase characters.
• V2518. MISRA. The 'default' label should be either the first or the last label of a 'switch' statement.
• V2519. MISRA. Every 'switch' statement should have a 'default' label, which, in addition to the terminating 'break' statement, should contain either a statement or a comment.
• V2520. MISRA. Every switch-clause should be terminated by an unconditional 'break' or 'throw' statement.
• V2521. MISRA. Only the first member of enumerator list should be explicitly initialized, unless all members are explicitly initialized.
• V2522. MISRA. The 'switch' statement should have 'default' as the last label.
• V2523. MISRA. All integer constants of unsigned type should have 'u' or 'U' suffix.
• V2524. MISRA. A switch-label should only appear at the top level of the compound statement forming the body of a 'switch' statement.
• V2525. MISRA. Every 'switch' statement should contain non-empty switch-clauses.
• V2526. MISRA. The functions from time.h/ctime should not be used.
• V2527. MISRA. A switch-expression should not have Boolean type. Consider using of 'if-else' construct.
• V2528. MISRA. The comma operator should not be used.
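Many of these MISRA rules forbid constructs that are legal C++ but error-prone. As an illustrative sketch (the function name is made up), the bracing style required by V2507 means a later edit cannot silently fall outside a loop or condition:

```cpp
// MISRA-style bracing (the intent behind V2507): every loop and
// conditional body is a compound statement, so adding a statement later
// cannot accidentally place it outside the condition.
int sumOfPositives(const int *data, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i) {
        if (data[i] > 0) {
            sum += data[i];
        }
    }
    return sum;
}
```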

## Problems related to code analyzer (C++, C#)

• V001. A code fragment from 'file' cannot be analyzed.
• V002. Some diagnostic messages may contain incorrect line number.
• V003. Unrecognized error found...
• V004. Diagnostics from the 64-bit rule set are not entirely accurate without the appropriate 64-bit compiler. Consider utilizing 64-bit compiler if possible.
• V005. Cannot determine active configuration for project. Please check projects and solution configurations.
• V006. File cannot be processed. Analysis aborted by timeout.
• V007. Deprecated CLR switch was detected. Incorrect diagnostics are possible.
• V008. Unable to start the analysis on this file.
• V009. To use free version of PVS-Studio, source code files are required to start with a special comment.
• V010. Analysis of 'Makefile/Utility' type projects is not supported in this tool. Use direct analyzer integration or compiler monitoring instead.
• V011. Presence of #line directives may cause some diagnostic messages to have incorrect file name and line number.
• V051. Some of the references in project are missing or incorrect. The analysis results could be incomplete. Consider making the project fully compilable and building it before analysis.
• V052. A critical error had occurred.

# Getting Acquainted with the PVS-Studio Static Code Analyzer

PVS-Studio is a static analyzer for C/C++/C# code, designed to help programmers find and fix software errors of various patterns. The analyzer works on Windows and Linux.

On Windows, the analyzer integrates into Visual Studio as a plugin, providing a convenient user interface for easy code navigation and error search. There is also a C and C++ Compiler Monitoring UI (Standalone.exe), which is used independently of Visual Studio and allows analyzing files compiled not only with Visual C++ but also with compilers such as GCC (MinGW) and Clang. The command line utility PVS-Studio_Cmd.exe allows you to analyze MSBuild / Visual Studio projects without running the IDE or the Compiler Monitoring UI, which makes it possible, for instance, to use the analyzer as part of a CI process.

PVS-Studio for Linux is a console application.

This document describes the basics of using PVS-Studio on Windows. For information about working in a Linux environment, refer to the articles "Installing and updating PVS-Studio on Linux" and "How to run PVS-Studio on Linux and macOS".

## Pros of using a static analyzer

A static analyzer does not replace other bug-searching tools; it complements them. Integrating a static analysis tool into the development process helps eliminate many errors at the moment they are "born", saving the time and resources you would otherwise spend on fixing them later. As everyone knows, the earlier a bug is found, the easier it is to fix. It follows that a static analyzer should be used regularly, as that is the best way to get the most out of it.

## A brief overview of PVS-Studio's capabilities

### Warning levels and diagnostic rule sets

PVS-Studio divides all the warnings into 3 levels of certainty: High, Medium and Low. Some warnings refer to a special Fails category. Let's consider these levels in more detail:

• High (1) - warnings with the maximum level of certainty. Such warnings often indicate errors that require immediate correction.
• Medium (2) - warnings with a lower degree of certainty, which are still worth paying attention to.
• Low (3) - warnings with the minimum level of certainty, pointing to minor flaws in the code. Warnings of this level usually have a high percentage of false positives.
• Fails - internal warnings of the analyzer reporting problems during its work. These include the analyzer's own error messages (for example, V001, V003 and so on) and any unprocessed stdout/stderr output of the utilities the analyzer runs during analysis (for example, the preprocessor or the command interpreter cmd). Typical Fails messages are preprocessor reports about preprocessing errors in the source code, file access errors (a file does not exist or is locked by an antivirus), and so on.

Keep in mind that a diagnostic's code does not rigidly bind it to a particular level of certainty: the distribution across levels depends heavily on the context in which the warnings were generated. The message output window in the plugin for Microsoft Visual Studio and in the Compiler Monitoring UI provides level buttons that let you sort the warnings as needed.

The analyzer has 5 types of diagnostic rules:

• General (GA) - General Analysis diagnostics. This is the main set of diagnostic rules in PVS-Studio.
• Optimization (OP) - Diagnostics of micro-optimization. These are tips concerning the improvement of efficiency and safety of the code.
• 64-bit (64) - diagnostics, allowing to detect specific errors, related to the development of 64-bit applications and migrating the code from a 32-bit platform to a 64-bit one.
• Customers' Specific (CS) - highly specialized diagnostics, developed by user requests. By default, this set of diagnostics is disabled.
• MISRA - a set of diagnostics, developed according to the MISRA standard (Motor Industry Software Reliability Association). This set of diagnostics is disabled by default.

The short names of the diagnostic groups (GA, OP, 64, CS, MISRA) combined with the certainty-level numbers (1, 2, 3) are used as a shorthand notation, for example in command line parameters. Example: GA:1,2.

Toggling a particular set of diagnostic rules shows or hides the corresponding messages.

Figure 1 - Message output window in Microsoft Visual Studio or in Compiler Monitoring UI (click on the image to enlarge).

You may find the detailed list of diagnostic rules in the corresponding section of the documentation.

Analyzer messages can be grouped and filtered by various criteria. For more detailed information about working with the list of analyzer warnings, refer to the article "Handling the diagnostic messages list".

### PVS-Studio and Microsoft Visual Studio

When installing PVS-Studio, you can choose which versions of the Microsoft Visual Studio IDE the analyzer should integrate with.

After deciding on all the necessary options and completing the setup, PVS-Studio will integrate into the IDE's menu. In Figure 2, you can see that the corresponding command has appeared in Visual Studio's menu, as well as the message output window.

Figure 2 - Microsoft Visual Studio's appearance after PVS-Studio's integration (click on the image to enlarge it)

In the settings menu, you can customize PVS-Studio as you need to make it most convenient to work with. For example, it provides the following options:

• Preprocessor selection;
• Exclusion of files and folders from analysis;
• Selection of the diagnostic message types to be displayed during the analysis;
• Plenty of other settings.

Most likely, you won't need any of those at your first encounter with PVS-Studio, but later, they will help you optimize your work with the tool.

### C and C++ Compiler Monitoring UI (Standalone.exe)

PVS-Studio can be used independently of the Microsoft Visual Studio IDE. The Compiler Monitoring UI allows analyzing projects while building them. It also supports code navigation through clicking on the diagnostic messages, and search for code fragments and definitions of macros and data types. To learn more about how to work with the Compiler Monitoring UI, see the article "Viewing Analysis Results with C and C++ Compiler Monitoring UI".

Figure 3 - C and C++ Compiler Monitoring UI start page (click on the image to enlarge it)

### PVS-Studio_Cmd.exe

PVS-Studio_Cmd.exe is a tool that enables the analysis of Visual Studio solutions (.sln), as well as Visual C++ and Visual C# projects (.vcxproj, .csproj), from the command line. This can be useful, for example, when you need to integrate static analysis on a build server. PVS-Studio_Cmd.exe can perform both a full analysis of the target project and an incremental one (analyzing only the files that have changed since the last build). The utility's return code is a bitmask, which lets you get detailed information on the results of the analysis and identify problems when they occur. Thus, with PVS-Studio_Cmd.exe you can configure a static analysis scenario precisely enough to embed it into a CI process. Using the PVS-Studio_Cmd.exe module is described in more detail in the section "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line".
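Because the return code is a bitmask, a CI wrapper can test individual bits rather than compare against a single value. The sketch below shows the idea only; the flag values are illustrative assumptions, not the documented PVS-Studio_Cmd.exe values, so consult the documentation for the real bit layout:

```cpp
// Decoding a bitmask exit code in a CI wrapper. The flag values below are
// hypothetical placeholders -- the real bit layout is defined in the
// PVS-Studio_Cmd.exe documentation.
enum AnalyzerExitFlags {
    kSomeFilesFailed  = 1 << 0,  // hypothetical value
    kWarningsProduced = 1 << 7   // hypothetical value
};

bool hasFlag(int exitCode, int flag) {
    return (exitCode & flag) != 0;
}
```

A script would then, for instance, fail the build only when the "warnings produced" bit is set, while treating other bits as infrastructure problems to be logged.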

### Help system and technical support

PVS-Studio provides an extensive help system on its diagnostic messages. This message database is accessible both from PVS-Studio's interface and at the official site. The message descriptions are accompanied by code samples with error examples, the error description, and available fixing solutions.

To open a diagnostic description, just click with the left mouse button on the diagnostic number in the message output window. These numbers are implemented as hyperlinks.

Technical support for PVS-Studio is carried out via e-mail. Since our technical support is delivered by the tool developers themselves, our users can promptly get responses to a wide variety of questions.

## System requirements and installation of PVS-Studio

PVS-Studio integrates into Microsoft Visual Studio 2017, 2015, 2013, 2012, 2010 development environments. You may learn about the system requirements for the analyzer in the corresponding section of the documentation.

After you obtain the PVS-Studio installation package, you may start installing the program.

Figure 4 - Installation of PVS-Studio

After approval of the license agreement, integration options will be presented for various supported versions of Microsoft Visual Studio. Integration options which are unavailable on a particular system will be greyed-out. In case different versions of the IDE or several IDEs are present on the system, it is possible to integrate the analyzer into every version available.

Figure 5 - PVS-Studio integration options in various IDEs

To make sure that the PVS-Studio tool was correctly installed, you may open the About window (Help/About menu item). The PVS-Studio analyzer must be present in the list of installed components.

Figure 6 - "About Microsoft Visual Studio" window with PVS-Studio component installed

## The basics of using PVS-Studio

When working in the Visual Studio IDE, you can run the analysis at different scopes: the whole solution, a project file, selected items, and so on. For example, the analysis of a solution is started as follows: "PVS-Studio -> Check -> Solution".

Figure 7 - Analysis run of PVS-Studio

After launching the check, a progress bar appears with the buttons Pause (to pause the analysis) and Stop (to terminate it). Potentially dangerous constructs are added to the list of detected errors as the analysis proceeds.

Figure 8 - A window of code files analysis

The term "a potentially dangerous construct" means that the analyzer considers a particular code line a defect. Whether this line is a real defect in an application or not is determined only by the programmer who knows the application. You must correctly understand this principle of working with code analyzers: no tool can completely replace a programmer when solving the task of fixing errors in programs. Only the programmer who relies on his knowledge can do this. But the tool can and must help him with it. That is why the main task of the code analyzer is to reduce the number of code fragments the programmer must look through and decide what to do with them.

## Work with a list of diagnostic messages

In real, large projects there will be not dozens but hundreds or even thousands of diagnostic messages, and reviewing them all is a hard task. To make it easier, the PVS-Studio analyzer provides several mechanisms. The first is filtering by the error code. The second is filtering by the text of the diagnostic messages. The third is filtering by file paths. Let's examine examples of using these filtering systems.

Suppose you are sure that the diagnostic messages with the code V112 (using magic numbers) are never real errors in your application. In this case you may turn off the display of these diagnostic warnings in the analyzer's settings:

Figure 9 - Filtering diagnostic messages by code

After that, all the diagnostic warnings with the code V112 will disappear from the error list. Note that you do not need to restart the analyzer. If you turn on these messages again, they will appear in the list without relaunching the analysis as well.
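As a reminder of what V112 reports, it flags "magic numbers" that are suspicious in 64-bit code, such as a hardcoded 4 used where a pointer size was meant. A minimal sketch of the pattern (the function names are illustrative):

```cpp
#include <cstddef>

// Hardcoding 4 assumes 32-bit pointers; on a 64-bit platform the computed
// buffer size is too small. V112 draws attention to such magic numbers.
std::size_t pointerTableBytesBad(std::size_t count) {
    return count * 4;              // magic number: assumes 32-bit pointers
}

std::size_t pointerTableBytesGood(std::size_t count) {
    return count * sizeof(void *); // correct on any platform
}
```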

Now let's look at another option - a text-based diagnostic messages filtering. Let's look at an example of analyzer warning and code on which it was issued:

obj.specialFunc(obj);

Analyzer warning: V678 An object is used as an argument to its own method. Consider checking the first actual argument of the 'specialFunc' function.

The analyzer found it suspicious that the object on which the method is called is also passed to it as an argument. Unlike the analyzer, the programmer may know that this usage of the method is acceptable, so you might want to filter out all such warnings. You can do this by adding the corresponding filter in the "Keyword Message Filtering" settings.
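A fuller sketch of the V678 pattern (the class and method names are made up): passing an object as an argument to its own method is often a copy-paste slip, but it can also be intentional, which is why such warnings are candidates for filtering:

```cpp
#include <string>

// Illustrative class: appendFrom() normally merges another buffer, and a
// call like a.appendFrom(a) is the suspicious self-call V678 points out.
struct Buffer {
    std::string data;
    void appendFrom(const Buffer &other) {
        data += other.data;  // std::string handles self-append safely
    }
};
```

If a.appendFrom(a) is legitimate in your code base, filtering the warning text as described above hides the whole family of such messages at once.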

Figure 10 - Filtration of diagnostic messages by a warning text

After that, all the diagnostic messages whose text contains that expression will disappear from the error list without the need to restart the code analyzer. You may turn them back on by simply deleting the expression from the filter.

The last mechanism of reducing the number of diagnostic messages is filtering by masks of project files' names and file paths.

Suppose your project employs the Boost library. The analyzer will certainly inform you about potential issues in this library. But if you are sure that these messages are not relevant for your project, you may simply add the path to the folder with Boost on the page "Don't check files":

Figure 11 - Configuring message filtering by location and file names

After that, diagnostic messages related to files in this folder will no longer be displayed.

Also, PVS-Studio has the "Mark as False Alarm" function. It enables you to mark those lines in your source code which cause the analyzer to generate false alarms. After marking the code, the analyzer will not produce diagnostic warnings on this code. This function makes it more convenient to use the analyzer permanently during the software development process when verifying newly written code.

Thus, in the following example, we turned off the diagnostic messages with the code V640:

for (int i = 0; i < m; ++i)
  for (int j = 0; j < n; ++j)
    matrix[i][j] = Square(i) + 2*Square(j);
    cout << "Matrix initialization." << endl; //-V640
....

This function is described in more detail in the section "Suppression of false alarms".

There are also some other methods to influence the display of diagnostic messages by changing the code analyzer's settings but they are beyond the scope of this article. We recommend you to refer to the documentation on the code analyzer's settings.

## Is it necessary to fix all the potential errors the analyzer informs about?

When you review the messages generated by the code analyzer, you will find both real errors and constructs that are not errors. The point is that the analyzer cannot detect all errors in a program with 100% accuracy without producing so-called "false alarms". Only a programmer who knows and understands the program can determine whether there is an error in each particular case. The code analyzer just significantly reduces the number of code fragments the developer needs to review.

So, there is certainly no need to fix every potential issue the code analyzer reports.

Suppression mechanisms of individual warnings and mass analyzer messages suppression are described in the articles "Suppression of false alarms" and "Mass Suppression of Analyzer Messages".

# System requirements for PVS-Studio analyzer

The PVS-Studio analyzer is intended to work on the Windows or Linux platform.

When used on Windows it integrates into Microsoft Visual Studio 2017, 2015, 2013, 2012, 2010 development environments. System requirements for the analyzer coincide with requirements for Microsoft Visual Studio:

• Development environment: Microsoft Visual Studio 2017, 2015, 2013, 2012, 2010. It is advisable to install the "X64 Compilers and Tools" Visual Studio component for the analysis of 64-bit applications. It is included into all the mentioned versions of Visual Studio and can be installed through Visual Studio Setup.
• Operating system: Windows Vista/2008/7/8/2012/10 x64.

To work on Linux you should have the GCC or Clang compiler installed, the PVS-Studio.lic license file and the config file PVS-Studio.cfg.

PVS-Studio requires at least 1 GB of main memory (2 GB or more is recommended) per processor core; the analyzer supports multi-core operation, and the more cores you have, the faster the code analysis is.

# PVS-Studio's Trial Mode

This section describes the restrictions of PVS-Studio's trial mode on Windows. A license file is needed to enable the analyzer on Linux; if you want to try the analyzer on Linux, please write to us.

After you download, install, and run it for the first time, the PVS-Studio analyzer starts working in trial mode. This mode imposes certain restrictions on the available features of our product. However, it allows you to start using the tool immediately after installation without any additional steps, such as registration on the site or a license key request. The main function of the analyzer, detecting real errors so that you can fix them, is available out of the box.

So, just by downloading the analyzer and running a check, you can benefit your project without worrying about complicated configuration or licensing issues and prices. Of course, it will be only a small fraction of what a full and regular use of all abilities of PVS-Studio will provide.

## Limitations and recommendations

The limitations of the PVS-Studio trial serve two purposes. The first is to show a potential user that the static analyzer is able to find bugs in their code. The second is to prompt the user to contact us by e-mail so that we can help them use the tool correctly.

Let's talk about our recommendations and existing limitations of PVS-Studio Trial mode.

### Features That Don't Work in the Trial Mode

We will list all the restrictions imposed by the trial version on PVS-Studio abilities.

• The number of navigation "clicks" on analyzer messages (opening the source file and placing the cursor on the line containing a potential error) is limited when running the analysis from Visual Studio or the C and C++ Compiler Monitoring UI (Standalone.exe). The trial mode grants 200 such "clicks". Once they are spent, you can get another 200 by filling in a small contact form (your name, company name, and e-mail); we will explain the purpose of this form a bit later. After the trial "clicks" run out, the analyzer continues to work but stops showing the paths to files where potential errors were found.
• Analysis of MSBuild projects using PVS-Studio_Cmd.exe (see "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line") is not available. Such projects can be analyzed in trial mode by opening them directly in Visual Studio, with the "click" limit described in the previous point.
• The command line utility of the PVS-Studio compiler monitoring system is not available. In trial mode, compiler call tracking is available only from the Compiler Monitoring UI, with the "click" limit described above.
• The PVS-Studio plugin for the SonarQube continuous code quality system is not available.
• The BlameNotifier console utility, which notifies developers by e-mail (see "Managing the Analysis Results (.plog file)"), is not available.

As you can see from the list, all limitations of the trial version fall into two groups: limits on the total number of analyzer messages available for viewing, and restrictions on the use of the analyzer's additional functionality. Let's consider these groups in more detail.

### A lot does not mean useful

First, let's talk about the limit on the number of messages that can be viewed in trial mode. As a reminder, the number of clicks on messages in the plugin interface / Compiler Monitoring UI is limited.

The most common wrong behavior pattern: a programmer turns all the warning settings up to the maximum. This is our biggest pain. They enable all types of warnings (general-purpose, 64-bit, micro-optimizations) and all warning levels; some people even manage to find our custom-built warnings and turn them on.

Programmers explain it by saying that they want to see everything the analyzer is capable of. And this is totally wrong. The right aim is to see how the analyzer could benefit the project; that is, first of all, to see that the analyzer can find real errors in the code. By turning all the warnings up to the maximum, you risk drowning in them. Having looked through 20-30 uninteresting warnings, people lose interest, and most likely the acquaintance with the tool ends at this point. However, if the number of warnings the person sees is reduced (for example, by filtering), the probability of discovering serious errors increases. Then the programmer will treat the tool differently: he will try to filter out uninteresting warnings, customize the tool, learn about ways to suppress false positives in macros, and so on.

There is another point concerning a large number of warnings. The programmer may be aware that he is looking at both high- and low-certainty warnings and be ready to look through many messages. The trouble is that his attention quickly wears out: roughly speaking, having looked at 10 warnings, he will most likely miss the eleventh one, which points to a real issue.

Therefore, we recommend checking only High- and Medium-certainty warnings when using the analyzer for the first time.

#### We are here to help

Now, some brief facts about the existing limitations of trial mode. There is a limited number of jump-clicks to the code that the user can perform.

Let's go through these restrictions and look at the reasons behind them. All the stories are based on real cases. These restrictions weren't made up by a marketing manager; they resulted from long communication with potential clients and from observing how people get acquainted with PVS-Studio.

When a user has run out of "jump-clicks", the program offers to fill in a small form with contact details, which we use to find out whether we can help with anything else. After that, the user gets another portion of "clicks".

What's the point of contacting us? First of all, we can give a temporary key for a closer look at PVS-Studio. By this point, the programmer has got used to the tool, has found bugs in his code, and is ready to look at warnings of other levels.

Secondly, and importantly, we want to help the person get familiar with PVS-Studio. You cannot even imagine how many ways there are to use this tool incorrectly. Here are some examples.

Someone may have a "nasty macro" for which the analyzer issues a lot of meaningless warnings, and the person loses his "clicks" jumping to those code fragments. When we then ask, "Is everything fine?", we get something like:

It's awful. How in the world do people use this analyzer? I am sick of looking at the Vxxx warnings.

This is when we help, telling the person about various ways to suppress warnings in macros, or suggesting that they simply turn off this diagnostic for a start.

Another person complains that warnings issued for third-party libraries really bother him.

Then we give a hint that such warnings can be disabled in two clicks. Really, it's just 2 clicks.

In both cases we helped make life easier. Without that communication, those people would go on thinking how terrible the analyzer is, and most likely would never consider buying a license for the tool.

### Don't Drown in Deep Integration

Further, we will consider the limitation in PVS-Studio trial mode on the usage of tools for deep integration of the analyzer into development processes.

After the first acquaintance with the analyzer (when it becomes more or less clear how to run a check and fix detected bugs), many users are tempted to immediately "dig deeper" and start using more advanced features of the product, such as: integration of analysis into build automation or continuous integration systems, automated mailing of newly found messages to developers, etc.

As you can see from the list of restrictions, many of our console utilities are not available in trial mode. This was done for a reason: in our experience, it is very common for a person to start using such "deep" integration tools in a non-optimal way, or with an approach that is wrong altogether.

Here are some examples from real communication with our users, some of whom are our clients. Unfortunately, the general recommendation to "read the documentation beforehand" does not always work: in some cases people are too lazy to read it, and sometimes one section alone is not enough to see the whole picture.

First example: a user runs the PVS-Studio_Cmd command-line utility to check his solution on a build server. The result is a plog report file in XML format. Next, the user attempts to parse the report himself to feed the results into his bug-tracking system (we strongly recommend that users not try to parse the plog on their own!). Here he discovers that the report contains warnings that were suppressed through "Suppression of false alarms", and angrily writes to us about a bug in our tool: "your report contains the error messages that I have already suppressed!". But the user had not taken into account that in the plog file, false alarm messages are marked in a special way. We intentionally preserve such suppressed messages in the plog because, in some cases, you may need to quickly view them without restarting the analysis and without removing the false alarm markup from the code. In this situation, we recommend using our PlogConverter tool to filter the plog report and transform it into a format more suitable for the task.

Another example: a user configures automated mailing of analysis results using the BlameNotifier tool (or SonarQube) while running the analysis of a very large project containing millions of lines of code. On such a huge code base, the analyzer naturally issues thousands of warnings. Some of them may be false alarms and some real errors, but that is not the point: because of the sheer number of notifications pouring down on developers, they start to treat them as spam, since it is impossible to review so many warnings at once. As a result, after several such "spam" mailings, a developer can form a negative impression of the analyzer; then we will not get a license renewal for the next year, bugs in the project will not get fixed, and both sides lose. If you are starting to use the analyzer on such a huge project, we recommend first suppressing all the old messages and using the analyzer right away to fix errors in newly written code. You can always return to the old messages later, gradually filtering out false positives among them and finding real errors.

Therefore, before starting to use the "advanced" modes of PVS-Studio, we ask our users to contact us and describe the intended usage scenario. We are always ready to advise on the optimal way to solve a particular problem with the capabilities available in PVS-Studio. In addition, we can provide a fully functional license without any trial restrictions for a limited period, so that you can test the "deeper" abilities of the analyzer.

## I am experienced enough

Here is what we can say to those who aren't new to static analysis tools. It's very simple: contact us and we will give you a temporary key for a deeper exploration of the analyzer, without the restrictions imposed by the trial version.

# PVS-Studio Release History

## PVS-Studio 7.00 (January 16, 2019)

• PVS-Studio 7.00 now provides a static analyzer for Java. You can read about all the new features of PVS-Studio 7.00 in our blog.
• The PVS-Studio plug-in for SonarQube has been updated to support the latest SonarQube version, 7.4. The minimum SonarQube version supported by the PVS-Studio plug-in has been raised to the LTS version 6.7.
• V2526. MISRA. The function with the 'clock/time/difftime/ctime/asctime/gmtime/localtime/mktime' name should not be used.
• V2527. MISRA. A switch-expression should not have Boolean type. Consider using of 'if-else' construct.
• V2528. MISRA. The comma operator should not be used.
• V6001. There are identical sub-expressions to the left and to the right of the 'foo' operator.
• V6002. The switch statement does not cover all values of the enum.
• V6003. The use of 'if (A) {...} else if (A) {...}' pattern was detected. There is a probability of logical error presence.
• V6004. The 'then' statement is equivalent to the 'else' statement.
• V6005. The 'x' variable is assigned to itself.
• V6006. The object was created but it is not being used. The 'throw' keyword could be missing.
• V6007. Expression is always true/false.
• V6008. Potential null dereference.
• V6009. Function receives an odd argument.
• V6010. The return value of function 'Foo' is required to be utilized.
• V6011. The expression contains a suspicious mix of integer and real types.
• V6012. The '?:' operator, regardless of its conditional expression, always returns one and the same value.
• V6013. Comparison of arrays, strings, collections by reference. Possibly an equality comparison was intended.
• V6014. It's odd that this method always returns one and the same value of NN.
• V6015. Consider inspecting the expression. Probably the '!='/'-='/'+=' should be used here.
• V6016. Suspicious access to element by a constant index inside a loop.
• V6017. The 'X' counter is not used inside a nested loop. Consider inspecting usage of 'Y' counter.
• V6018. Constant expression in switch statement.
• V6019. Unreachable code detected. It is possible that an error is present.
• V6020. Division or mod division by zero.
• V6021. The value is assigned to the 'x' variable but is not used.
• V6022. Parameter is not used inside method's body.
• V6023. Parameter 'A' is always rewritten in method body before being used.
• V6024. The 'continue' operator will terminate 'do { ... } while (false)' loop because the condition is always false.
• V6025. Possibly index is out of bound.
• V6026. This value is already assigned to the 'b' variable.
• V6027. Variables are initialized through the call to the same function. It's probably an error or un-optimized code.
• V6028. Identical expressions to the left and to the right of compound assignment.
• V6029. Possible incorrect order of arguments passed to method.
• V6030. The function located to the right of the '|' and '&' operators will be called regardless of the value of the left operand. Consider using '||' and '&&' instead.
• V6031. The variable 'X' is being used for this loop and for the outer loop.
• V6032. It is odd that the body of 'Foo_1' function is fully equivalent to the body of 'Foo_2' function.
• V6033. An item with the same key has already been added.
• V6034. Shift by N bits is inconsistent with the size of type.
• V6035. Double negation is present in the expression: !!x.
• V6036. The value from the uninitialized optional is used.
• V6037. An unconditional 'break/continue/return/goto' within a loop.
• V6038. Comparison with 'double.NaN' is meaningless. Use 'double.isNaN()' method instead.
• V6039. There are two 'if' statements with identical conditional expressions. The first 'if' statement contains method return. This means that the second 'if' statement is senseless.
• V6040. The code's operational logic does not correspond with its formatting.
• V6041. Suspicious assignment inside the conditional expression of 'if/while/do...while' statement.
• V6042. The expression is checked for compatibility with type 'A', but is cast to type 'B'.
• V6043. Consider inspecting the 'for' operator. Initial and final values of the iterator are the same.
• V6044. Postfix increment/decrement is senseless because this variable is overwritten.
• V6045. Suspicious subexpression in a sequence of similar comparisons.
• V6046. Incorrect format. Consider checking the N format items of the 'Foo' function.
• V6047. It is possible that this 'else' branch must apply to the previous 'if' statement.
• V6048. This expression can be simplified. One of the operands in the operation equals NN. Probably it is a mistake.
• V6049. Classes that define 'equals' method must also define 'hashCode' method.
• V6050. Class initialization cycle is present.
• V6051. Use of jump statements in 'finally' block can lead to the loss of unhandled exceptions.
• V6052. Calling an overridden method in parent-class constructor may lead to use of uninitialized data.
• V6053. Collection is modified while iteration is in progress. ConcurrentModificationException may occur.
• V6054. Classes should not be compared by their name.
• V6055. Expression inside assert statement can change object's state.
• V6056. Implementation of 'compareTo' overloads the method from a base class. Possibly, an override was intended.
• V6057. Consider inspecting this expression. The expression is excessive or contains a misprint.
• V6058. The 'X' function receives objects of incompatible types.
• V6059. Odd use of special character in regular expression. Possibly, it was intended to be escaped.
• V6060. The reference was used before it was verified against null.
• V6061. The used constant value is represented by an octal form.
• V6062. Possible infinite recursion.
• V6063. Odd semicolon ';' after 'if/foreach' operator.
• V6064. Suspicious invocation of Thread.run().
• V6065. A non-serializable class should not be serialized.
• V6066. Passing objects of incompatible types to the method of collection.

## PVS-Studio 6.27 (December 3, 2018)

• The source code of the analyzer log conversion tools (PlogConverter) is now available on our GitHub portal: https://github.com/viva64
• PVS-Studio now supports MISRA C and MISRA C++ software development guidelines. The number of supported MISRA rules will gradually increase in the future analyzer releases.
• V2501. MISRA. Octal constants should not be used.
• V2502. MISRA. The 'goto' statement should not be used.
• V2503. MISRA. Implicitly specified enumeration constants should be unique – consider specifying non-unique constants explicitly.
• V2504. MISRA. Size of an array is not specified.
• V2505. MISRA. The 'goto' statement shouldn't jump to a label declared earlier.
• V2506. MISRA. A function should have a single point of exit at the end.
• V2507. MISRA. The body of a loop\conditional statement should be enclosed in braces.
• V2508. MISRA. The function with the 'atof/atoi/atol/atoll' name should not be used.
• V2509. MISRA. The function with the 'abort/exit/getenv/system' name should not be used.
• V2510. MISRA. The function with the 'qsort/bsearch' name should not be used.
• V2511. MISRA. Memory allocation and deallocation functions should not be used.
• V2512. MISRA. The macro with the 'setjmp' name and the function with the 'longjmp' name should not be used.
• V2513. MISRA. Unbounded functions performing string operations should not be used.
• V2514. MISRA. Unions should not be used.
• V2515. MISRA. Declaration should contain no more than two levels of pointer nesting.
• V2516. MISRA. The 'if' ... 'else if' construct shall be terminated with an 'else' statement.
• V2517. MISRA. Literal suffixes should not contain lowercase characters.
• V2518. MISRA. The 'default' label should be either the first or the last label of a 'switch' statement.
• V2519. MISRA. The 'default' label is missing in 'switch' statement.
• V2520. MISRA. Every switch-clause should be terminated by an unconditional 'break' or 'throw' statement.
• V2521. MISRA. Only the first member of enumerator list should be explicitly initialized, unless all members are explicitly initialized.
• V2522. MISRA. The 'switch' statement should have 'default' as the last label.
• V2523. MISRA. All integer constants of unsigned type should have 'u' or 'U' suffix.
• V2524. MISRA. A switch-label should only appear at the top level of the compound statement forming the body of a 'switch' statement.
• V2525. MISRA. Every 'switch' statement should contain non-empty switch-clauses.
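Several of the MISRA rules above target concrete syntactic patterns. As a hedged illustration (the enum and constant names below are invented, not taken from the PVS-Studio documentation), this sketch shows code that stays clear of V2501 (octal constants) and satisfies the 'switch' rules V2519/V2520/V2522:

```cpp
#include <cstdint>

// Hypothetical status codes, used only for illustration.
enum class Status : std::uint8_t { Ok, Retry, Fail };

// V2501 flags octal literals such as `int mode = 0644;` because a reader
// can mistake them for decimal. Spelling the value in hex (or decimal)
// keeps the intent explicit.
constexpr int kFileMode = 0x1A4; // 420 decimal, i.e. what octal 0644 denotes

// V2519/V2522 expect every 'switch' to end with a 'default' label, and
// V2520 expects each clause to end with an unconditional 'break' or return.
int RetryDelayMs(Status s)
{
  switch (s)
  {
    case Status::Ok:    return 0;
    case Status::Retry: return 100;
    case Status::Fail:  return -1;
    default:            return -1; // terminal 'default' keeps the switch MISRA-clean
  }
}
```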

## PVS-Studio 6.26 (October 18, 2018)

• Support for analyzing GNU Arm Embedded Toolchain (Arm Embedded GCC compiler) projects was added.
• It is now possible to use pvsconfig files with CLMonitor/Standalone under Windows.
• Letter case is now preserved for analyzed source files in the analyzer's log when analyzing Visual C++ projects (cl.exe, Visual Studio/MSBuild vcxproj projects).
• A new incremental analysis mode was added to pvs-studio-analyzer and the CMake module. The PVS-Studio CMake module can now be used for Visual C++ (cl.exe) projects under Windows.
• Incremental analysis support was implemented for .NET Core/.NET Standard Visual Studio projects.
• Now it is possible to analyze projects of WAF build automation tool.
• V1021. The variable is assigned the same value on several loop iterations.
• V1022. An exception was thrown by pointer. Consider throwing it by value instead.
• V1023. A pointer without owner is added to the container by the 'emplace_back' method. A memory leak will occur in case of an exception.
• V1024. The stream is checked for EOF before reading from it, but is not checked after reading. Potential use of invalid data.
• V1025. Rather than creating 'std::unique_lock' to lock on the mutex, a new variable with default value is created.
• V1026. The variable is incremented in the loop. Undefined behavior will occur in case of signed integer overflow.
• V1027. Pointer to an object of the class is cast to unrelated class.
• V1028. Possible overflow. Consider casting operands, not the result.
• V1029. Numeric Truncation Error. Return value of function is written to N-bit variable.
• V1030. The variable is used after it was moved.
• V1031. Function is not declared. The passing of data to or from this function may be affected.
• V1032. Pointer is cast to a more strictly aligned pointer type.
• V1033. Variable is declared as auto in C. Its default type is int.
• V1034. Do not use real-type variables as loop counters.
• V1035. Only values that are returned from fgetpos() can be used as arguments to fsetpos().
• V2014. Don't use terminating functions in library code.
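To make one of these diagnostics concrete: V1028 warns when a narrowing arithmetic result, rather than its operands, is cast to a wider type. The sketch below is illustrative only (the function names are invented); both helpers take the same inputs, but only the second performs the multiplication in 64 bits:

```cpp
#include <cstdint>

// V1028-style bug: the 32-bit multiplication overflows first, and only the
// already-wrong result is widened to 64 bits.
std::int64_t AreaWrong(std::int32_t w, std::int32_t h)
{
  return static_cast<std::int64_t>(w * h); // cast applied too late
}

// Casting an operand makes the whole multiplication 64-bit, so no overflow.
std::int64_t AreaRight(std::int32_t w, std::int32_t h)
{
  return static_cast<std::int64_t>(w) * h;
}
```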

## PVS-Studio 6.25 (August 20, 2018)

• A common suppress file for all projects can now be added to a Visual Studio solution.
• Roslyn and MSBuild libraries used for analyzing Visual Studio projects were updated to support latest C++/C# project types and C# language features.
• Support for multi-target C# projects was improved.
• PVS-Studio CMake module now supports generator expressions and can track implicit dependencies of analyzed files.
• Our website now provides information on using PVS-Studio as a part of security development lifecycle (SDL), as a SAST (Static Application Security Testing) tool. This page contains mappings of analyzer diagnostics rules to the CWE (Common Weakness Enumeration) format and SEI CERT secure coding standard, and the status of our ongoing effort to support MISRA standards.

## PVS-Studio 6.24 (June 14, 2018)

• Support for Texas Instruments Code Composer Studio (ARM compiler) was added under Windows/Linux.
• Compiler monitoring under Windows now supports saving monitoring data to a dump file and starting the analysis from this dump file. This allows re-running the analysis without having to re-build the analyzed project each time.
• A new mode for checking individual files was added to the command line analyzer for Visual Studio projects under Windows.
• V1013. Suspicious subexpression in a sequence of similar comparisons.
• V1014. Structures with members of real type are compared byte-wise.
• V1015. Suspicious simultaneous use of bitwise and logical operators.
• V1016. The value is out of range of enum values. This causes unspecified or undefined behavior.
• V1017. Variable of the 'string_view' type references a temporary object which will be removed after evaluation of an expression.
• V1018. Usage of a suspicious mutex wrapper. It is probably unused, uninitialized, or already locked.
• V1019. Compound assignment expression is used inside condition.
• V1020. Function exited without performing epilogue actions. It is possible that there is an error.
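As an illustration of V1017 from the list above (the function name here is invented, not from the release notes): binding a `std::string_view` to a temporary `std::string`, as in `std::string_view v = std::string("abc") + "def";`, leaves the view dangling the moment the full expression ends. Keeping the owning string in a named variable avoids the problem:

```cpp
#include <string>
#include <string_view>

// Safe pattern: the named std::string outlives the view taken from it.
std::size_t SafeLength(const std::string& prefix)
{
  std::string owned = prefix + "-suffix"; // named owner, not a temporary
  std::string_view view = owned;          // view into a live object
  return view.size();
}
```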

## PVS-Studio 6.23 (March 28, 2018)

• PVS-Studio is now available on macOS! Now you can analyze C and C++ source code with PVS-Studio not only under Windows/Linux, but also under macOS. The analyzer is available as a pkg installer, tgz archive and through Homebrew package manager. The documentation on using PVS-Studio under macOS is available here.
• V011. Presence of #line directives may cause some diagnostic messages to have incorrect file name and line number.
• V1011. Function execution could be deferred. Consider specifying execution policy explicitly.
• V1012. The expression is always false. Overflow check is incorrect.
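The V1012 pattern deserves a concrete sketch (the helper below is an invented illustration): a check such as `if (a + b < a)` on signed integers relies on overflow that is undefined behavior, so the compiler may legally treat the condition as always false. Testing against the limit *before* adding is well-defined:

```cpp
#include <cstdint>
#include <limits>

// Well-defined overflow check for signed 32-bit addition.
// (This sketch handles only positive 'b' for brevity.)
bool AddWouldOverflow(std::int32_t a, std::int32_t b)
{
  return b > 0 && a > std::numeric_limits<std::int32_t>::max() - b;
}
```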

## PVS-Studio 6.22 (February 28, 2018)

• Analyzing projects for Keil MDK ARM Compiler 5 and ARM Compiler 6 is now supported.
• Analyzing projects for IAR C/C++ Compiler for ARM is now supported.
• V1008. Consider inspecting the 'for' operator. No more than one iteration of the loop will be performed.
• V1009. Check the array initialization. Only the first element is initialized explicitly.
• V1010. Unchecked tainted data is used in expression.
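To illustrate V1009 (with an invented helper, not from the documentation): `int mask[3] = {1};` explicitly initializes only `mask[0]`, while the remaining elements are silently zeroed, which often differs from the author's intent. Writing every element, or using `std::array::fill`, makes the intent unambiguous:

```cpp
#include <array>

// All three elements are set explicitly; no silently zeroed tail.
std::array<int, 3> MakeMask()
{
  std::array<int, 3> mask;
  mask.fill(1);
  return mask;
}
```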

## PVS-Studio 6.21 (January 15, 2018)

• Support for CWE (Common Weakness Enumeration) was added to C/C++/C# analyzers.
• HTML log with source code navigation can now be saved from Visual Studio plug-ins and the Standalone tool.
• WDK (Windows Driver Kit) projects for Visual Studio 2017 are now supported.
• PVS-Studio plug-in for SonarQube was updated for the latest LTS version 6.7.
• V1007. The value from the uninitialized optional is used. Probably it is a mistake.

## PVS-Studio 6.20 (December 1, 2017)

• You can save analysis results as HTML with full source code navigation.
• You can make the analysis less "noisy" by disabling generation of Low Certainty (Level 3) messages. Just set the NoNoise option.

## PVS-Studio 6.19 (November 14, 2017)

• It is now possible to suppress messages from XML log file (.plog) with Windows command line analyzer.
• The performance and stability of message suppression and incremental analysis were improved in Visual Studio plug-ins for very large (thousands of projects) solutions.
• V1004. The pointer was used unsafely after it was verified against nullptr.
• V1005. The resource was acquired using 'X' function but was released using incompatible 'Y' function.
• V1006. Several shared_ptr objects are initialized by the same pointer. A double memory deallocation will occur.
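V1006 is worth a sketch (the function below is an invented illustration): constructing two `shared_ptr` instances from the same raw pointer, as in `int* raw = new int(5); std::shared_ptr<int> a(raw), b(raw);`, creates two independent control blocks and therefore a double deallocation. Copying the first `shared_ptr` shares one control block instead:

```cpp
#include <memory>

// Correct pattern: both owners share a single control block.
long SharedUseCount()
{
  auto a = std::make_shared<int>(5);
  std::shared_ptr<int> b = a; // copy shares ownership; no second control block
  return a.use_count();       // both owners are counted
}
```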

## PVS-Studio 6.18 (September 26, 2017)

• Linux version now has a default location for a license file.
• Linux version now provides a new way to enter credentials.
• Linux version now can generate an HTML analysis report.
• Support for analyzing ASP.NET Core projects was added in the Windows version.
• Scaling of UI elements on different DPIs was improved in the Windows version.
• Performance of the PVS-Studio output window in the Windows version was improved when working with large analyzer reports, sorting reports by columns, and working with a large number of simultaneously selected messages.
• "Send to External Tool" feature was removed from Visual Studio extension.
• Trial mode extension dialogs were substantially redesigned in Visual Studio extension.
• V1002. A class, containing pointers, constructor and destructor, is copied by the automatically generated operator= or copy constructor.
• V1003. The macro is dangerous, or the expression is suspicious.

## PVS-Studio 6.17 (August 30, 2017)

• 15.3 Update supported for Visual Studio 2017.
• The analyzer report can now be saved from the Visual Studio plugin and Standalone in txt/csv/html formats without the need to invoke PlogConverter manually.
• The license and setting files are now saved in UTF-8 encoding.
• A list of recently opened logs is added to the menu of Visual Studio plugins.
• Incremental analysis in PVS-Studio_Cmd.exe - the "AppendScan" option was added. Details can be found in the description of PVS-Studio_Cmd utility here.
• A new plugin was added to display the analysis results in the Jenkins continuous integration system (on Windows).
• A new version of the plugin for the SonarQube quality control platform is available for Linux.
• Support for unparsed output from C++ analyzer was added to PlogConverter tool.
• V821. The variable can be constructed in a lower level scope.
• V1001. The variable is assigned but is not used until the end of the function.
• V3135. The initial value of the index in the nested loop equals 'i'. Consider using 'i + 1' instead.
• V3136. Constant expression in switch statement.
• V3137. The variable is assigned but is not used until the end of the function.

## PVS-Studio 6.16 (June 28, 2017)

• Clang-based toolsets support for Visual Studio 2015/2017.
• Solution directory can now be used as Source Tree Root in Visual Studio.
• V788. Review captured variable in lambda expression.
• V789. Iterators for the container, used in the range-based for loop, become invalid upon a function call.
• V790. It is odd that the assignment operator takes an object by a non-constant reference and returns this object.
• V791. The initial value of the index in the nested loop equals 'i'. Consider using 'i + 1' instead.
• V792. The function located to the right of the '|' and '&' operators will be called regardless of the value of the left operand. Consider using '||' and '&&' instead.
• V793. It is odd that the result of the statement is a part of the condition. Perhaps, this statement should have been compared with something else.
• V794. The copy operator should be protected from the case of this == &src.
• V795. Note that the size of the 'time_t' type is not 64 bits. After the year 2038, the program will work incorrectly.
• V796. A 'break' statement is probably missing in a 'switch' statement.
• V797. The function is used as if it returned a bool type. The return value of the function should probably be compared with std::string::npos.
• V798. The size of the dynamic array can be less than the number of elements in the initializer.
• V799. The variable is not used after memory has been allocated for it. Consider checking the use of this variable.
• V818. It is more efficient to use an initialization list rather than an assignment operator.
• V819. Decreased performance. Memory is allocated and released multiple times inside the loop body.
• V820. The variable is not used after copying. Copying can be replaced with move/swap for optimization.
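V794 from the list above concerns self-assignment. As a hedged sketch (the `Buffer` class is invented for illustration): a copy-assignment operator that frees its own storage before copying from `src` corrupts the object when called as `x = x`. The copy-and-swap idiom is self-assignment-safe by construction:

```cpp
#include <cstddef>
#include <cstring>
#include <utility>

class Buffer
{
  char* data_;
  std::size_t size_;
public:
  explicit Buffer(const char* s) : size_(std::strlen(s))
  {
    data_ = new char[size_ + 1];
    std::memcpy(data_, s, size_ + 1);
  }
  Buffer(const Buffer& other) : Buffer(other.data_) {}
  // By-value parameter makes a full copy first, so `b = b` is harmless.
  Buffer& operator=(Buffer other) noexcept
  {
    std::swap(data_, other.data_);
    std::swap(size_, other.size_);
    return *this;
  }
  ~Buffer() { delete[] data_; }
  std::size_t size() const { return size_; }
};
```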

## PVS-Studio 6.15 (April 27, 2017)

• Visual Studio 2017 support improved.
• Fixed issue related to specific .pch files.
• V782. It is pointless to compute the distance between the elements of different arrays.
• V783. Dereferencing of invalid iterator 'X' might take place.
• V784. The size of the bit mask is less than the size of the first operand. This will cause the loss of the higher bits.
• V785. Constant expression in switch statement.
• V786. Assigning the value C to the X variable looks suspicious. The value range of the variable: [A, B].
• V787. A wrong variable is probably used as an index in the for statement.

## PVS-Studio 6.14 (March 17, 2017)

• Visual Studio 2017 support added.
• Support of Roslyn 2.0 / C# 7.0 in C# PVS-Studio Analyzer.
• Line highlighting added when viewing the analyzer messages in Visual Studio plugins and Standalone version.
• Fixed an issue with checking C++ projects that could appear when starting the analysis on a system without Visual Studio 2015 / MSBuild 14 installed.
• V780. The object of non-passive (non-PDS) type cannot be used with the function.
• V781. The value of the variable is checked after it was used. Perhaps there is a mistake in program logic. Check lines: N1, N2.
• V3131. The expression is checked for compatibility with type 'A' but is cast to type 'B'.
• V3132. A terminal null is present inside a string. '\0xNN' character sequence was encountered. Probably meant: '\xNN'.
• V3133. Postfix increment/decrement is meaningless because this variable is overwritten.
• V3134. Shift by N bits is greater than the size of type.

## PVS-Studio 6.13 (January 27, 2017)

• Incremental analysis mode is added to the cmd version of the analyzer (PVS-Studio_Cmd.exe). More details can be found in the documentation.
• V779. Unreachable code detected. It is possible that an error is present.
• V3128. The field (property) is used before it is initialized in constructor.
• V3129. The value of the captured variable will be overwritten on the next iteration of the loop in each instance of anonymous function that captures it.
• V3130. Priority of the '&&' operator is higher than that of the '||' operator. Possible missing parentheses.

## PVS-Studio 6.12 (December 22, 2016)

• V773. The function was exited without releasing the pointer. A memory leak is possible.
• V774. The pointer was used after the memory was released.
• V775. It is odd that the BSTR data type is compared using a relational operator.
• V776. Potentially infinite loop. The variable in the loop exit condition does not change its value between iterations.
• V777. Dangerous widening type conversion from an array of derived-class objects to a base-class pointer.
• V778. Two similar code fragments were found. Perhaps, this is a typo and 'X' variable should be used instead of 'Y'.
• V3123. Perhaps the '??' operator works differently from what was expected. Its priority is lower than that of other operators in its left part.
• V3124. Appending an element and checking for key uniqueness is performed on two different variables.
• V3125. The object was used after it was verified against null. Check lines: N1, N2.
• V3126. Type implementing IEquatable<T> interface does not override 'GetHashCode' method.

## PVS-Studio 6.11 (November 29, 2016)

• V771. The '?:' operator uses constants from different enums.
• V772. Calling the 'delete' operator for a void pointer will cause undefined behavior.
• V817. It is more efficient to search for 'X' character rather than a string.
• V3119. Calling a virtual (overridden) event may lead to unpredictable behavior. Consider implementing event accessors explicitly or use 'sealed' keyword.
• V3120. Potentially infinite loop. The variable in the loop exit condition does not change its value between iterations.
• V3121. An enumeration was declared with 'Flags' attribute, but no initializers were set to override default values.
• V3122. Uppercase (lowercase) string is compared with a different lowercase (uppercase) string.
• Support for analyzing Visual C++ projects (.vcxproj) with Intel C++ toolsets was implemented in Visual Studio plug-in.

## PVS-Studio 6.10 (October 25, 2016)

• We are releasing PVS-Studio for Linux! Now it is possible to check C and C++ source code with PVS-Studio not only under Windows, but under Linux as well. The analyzer is available as packages for the mainstream package management systems and integrates easily with the most common build systems. The detailed documentation on using the PVS-Studio Linux version is available here.
• PVS-Studio for Windows is updated with a new user interface! The update affects the Visual Studio plug-in and the Standalone PVS-Studio tool.
• PVS-Studio now includes the new BlameNotifier tool. It makes it easy to organize e-mail notifications that deliver PVS-Studio analyzer messages to the developers responsible for the source code that triggers them. Supported VCSs are Git, Svn, and Mercurial. A detailed guide on managing the analysis results is available here.
• Support for analyzing MSBuild projects that use the Intel C++ compiler was implemented in the PVS-Studio command line version. Support inside Visual Studio is coming in the near future.
• V769. The pointer in the expression equals nullptr. The resulting value is meaningless and should not be used.
• V770. Possible usage of a left shift operator instead of a comparison operator.

## PVS-Studio 6.09 (October 6, 2016)

• If all the diagnostic groups of the analyzer (C++ or C#) are disabled, the analysis of projects of the corresponding language won't start.
• We added support for proxies with authorization during the update check and the trial extension.
• The ability to completely disable C/C++ or C# analyzer in .pvsconfig files (//-V::C++ and //-V::C#) is now supported.
• The SonarQube plugin now implements functionality for calculating the LOC metric and determining the reliability remediation effort.
• V768. The '!' operator is applied to an enumerator.
• V3113. Consider inspecting the loop expression. It is possible that different variables are used inside initializer and iterator.
• V3114. IDisposable object is not disposed before method returns.
• V3115. It is not recommended to throw exceptions from 'Equals(object obj)' method.
• V3116. Consider inspecting the 'for' operator. It's possible that the loop will be executed incorrectly or won't be executed at all.
• V3117. Constructor parameter is not used.
• V3118. A component of TimeSpan is used, which does not represent full time interval. Possibly 'Total*' value was intended instead.

## PVS-Studio 6.08 (August 22, 2016)

• Visual Studio plug-in no longer supports analysis from command line with '/command' switch. Please use PVS-Studio_Cmd.exe command line tool instead. The detailed description of the tool is available here.
• V3108. It is not recommended to return null or throw exceptions from 'ToString()' method.
• V3109. The same sub-expression is present on both sides of the operator. The expression is incorrect or it can be simplified.
• V3110. Possible infinite recursion.
• V3111. Checking value for null will always return false when generic type is instantiated with a value type.
• V3112. An abnormality within similar comparisons. It is possible that a typo is present inside the expression.

## PVS-Studio 6.07 (August 8, 2016)

• PVS-Studio no longer supports 32-bit operating systems. The PVS-Studio analyzer (both the C++ and C# modules) requires quite a large amount of RAM for its operation, especially when using multiple processor cores during the analysis. The maximum amount of RAM available on a 32-bit system allows the analyzer to run correctly on a single core only (i.e. one process at a time). Moreover, for a very large project, even this amount of RAM could be insufficient. Because of this, and because only a very small fraction of our users still runs a 32-bit OS, we decided to cease support for the 32-bit version of the analyzer. This will allow us to concentrate all of our resources on further development of the 64-bit version.
• Support for SonarQube continuous quality control system was implemented in the analyzer's command line version. In addition, our installer now contains a dedicated SonarQube plugin, which can be used for integration of analysis results with SonarQube server. The detailed description of this plugin and new analyzer modes is available here.
• V763. Parameter is always rewritten in function body before being used.
• V764. Possible incorrect order of arguments passed to function.
• V765. A compound assignment expression 'X += X + N' is suspicious. Consider inspecting it for a possible error.
• V766. An item with the same key has already been added.
• V767. Suspicious access to element by a constant index inside a loop.
• V3106. Possibly index is out of bound.
• V3107. Identical expressions to the left and to the right of compound assignment.
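V765 above flags compound assignments of the form `total += total + n`, which double the accumulator on every step and are almost always a typo for `total += n`. As an invented illustration, the intended plain summation looks like this:

```cpp
#include <numeric>
#include <vector>

// Intended pattern: each element is added once (total += v per element),
// unlike the suspicious `total += total + v` that V765 reports.
int Sum(const std::vector<int>& xs)
{
  return std::accumulate(xs.begin(), xs.end(), 0);
}
```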

## PVS-Studio 6.06 (July 7, 2016)

• V758. Reference invalidated, because of the destruction of the temporary object 'unique_ptr', returned by function.
• V759. Violated order of exception handlers. Exception caught by handler for base class.
• V760. Two identical text blocks detected. The second block starts with NN string.
• V761. NN identical blocks were found.
• V762. Consider inspecting virtual function arguments. See NN argument of function 'Foo' in derived class and base class.
• V3105. The 'a' variable was used after it was assigned through null-conditional operator. NullReferenceException is possible.
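V759 above is about handler ordering. A hedged sketch (the function is invented for illustration): handlers are tried in source order, so a `catch (const std::exception&)` written before `catch (const std::runtime_error&)` makes the derived-class handler dead code. Ordering derived-to-base, as below, lets each handler actually run:

```cpp
#include <stdexcept>
#include <string>

std::string Classify()
{
  try { throw std::runtime_error("boom"); }
  catch (const std::runtime_error&) { return "runtime_error"; } // derived first
  catch (const std::exception&)     { return "exception"; }     // base last
  return "none"; // unreachable; silences compiler warnings
}
```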

## PVS-Studio 6.05 (June 9, 2016)

• A new PVS-Studio command line tool was added; it supports checking vcxproj and csproj projects (C++ and C#). Now there is no need to use devenv.exe for nightly checks. More details about this tool can be found here.
• Support for the MSBuild plugin was discontinued. Instead, we suggest using the new PVS-Studio command line tool.
• V755. Copying from unsafe data source. Buffer overflow is possible.
• V756. The 'X' counter is not used inside a nested loop. Consider inspecting usage of 'Y' counter.
• V757. It is possible that an incorrect variable is compared with null after type conversion using 'dynamic_cast'.
• V3094. Possible exception when deserializing type. The Ctor(SerializationInfo, StreamingContext) constructor is missing.
• V3095. The object was used before it was verified against null. Check lines: N1, N2.
• V3096. Possible exception when serializing type. [Serializable] attribute is missing.
• V3097. Possible exception: type marked by [Serializable] contains non-serializable members not marked by [NonSerialized].
• V3098. The 'continue' operator will terminate 'do { ... } while (false)' loop because the condition is always false.
• V3099. Not all the members of type are serialized inside 'GetObjectData' method.
• V3100. Unhandled NullReferenceException is possible. Unhandled exceptions in destructor lead to termination of runtime.
• V3101. Potential resurrection of 'this' object instance from destructor. Without re-registering for finalization, destructor will not be called a second time on resurrected object.
• V3102. Suspicious access to element by a constant index inside a loop.
• V3103. A private Ctor(SerializationInfo, StreamingContext) constructor in unsealed type will not be accessible when deserializing derived types.
• V3104. 'GetObjectData' implementation in unsealed type is not virtual, incorrect serialization of derived type is possible.
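V757 above describes checking the wrong variable after a `dynamic_cast`: after `auto d = dynamic_cast<Derived*>(b);`, it is `d`, not `b`, that must be null-checked, otherwise a failed cast leads to a null dereference. A hedged sketch with invented types showing the correct check:

```cpp
// 'Base' must be polymorphic for dynamic_cast to work; hence the virtual dtor.
struct Base { virtual ~Base() = default; };
struct Derived : Base { int value() const { return 42; } };

int ValueOrDefault(Base* b)
{
  if (auto* d = dynamic_cast<Derived*>(b)) // check the cast result, not 'b'
    return d->value();
  return -1; // cast failed: 'b' was not a Derived
}
```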

## PVS-Studio 6.04 (May 16, 2016)

• V753. The '&=' operation always sets a value of 'Foo' variable to zero.
• V754. The expression of 'foo(foo(x))' pattern is excessive or contains an error.
• V3082. The 'Thread' object is created but is not started. It is possible that a call to 'Start' method is missing.
• V3083. Unsafe invocation of event, NullReferenceException is possible. Consider assigning event to a local variable before invoking it.
• V3084. Anonymous function is used to unsubscribe from event. No handlers will be unsubscribed, as a separate delegate instance is created for each anonymous function declaration.
• V3085. The name of 'X' field/property in a nested type is ambiguous. The outer type contains static field/property with identical name.
• V3086. Variables are initialized through the call to the same function. It's probably an error or un-optimized code.
• V3087. Type of variable enumerated in 'foreach' is not guaranteed to be castable to the type of collection's elements.
• V3088. The expression was enclosed by parentheses twice: ((expression)). One pair of parentheses is unnecessary or misprint is present.
• V3089. Initializer of a field marked by [ThreadStatic] attribute will be called once on the first accessing thread. The field will have default value on different threads.
• V3090. Unsafe locking on an object.
• V3091. Empirical analysis. It is possible that a typo is present inside the string literal. The 'foo' word is suspicious.
• V3092. Range intersections are possible within conditional expressions.
• V3093. The operator evaluates both operands. Perhaps a short-circuit operator should be used instead.

## PVS-Studio 6.03 (April 5, 2016)

• V751. Parameter is not used inside method's body.
• V752. Creating an object with placement new requires a buffer of large size.
• V3072. The 'A' class containing IDisposable members does not itself implement IDisposable.
• V3073. Not all IDisposable members are properly disposed. Call 'Dispose' when disposing 'A' class.
• V3074. The 'A' class contains 'Dispose' method. Consider making it implement 'IDisposable' interface.
• V3075. The operation is executed 2 or more times in succession.
• V3076. Comparison with 'double.NaN' is meaningless. Use 'double.IsNaN()' method instead.
• V3077. Property setter / event accessor does not utilize its 'value' parameter.
• V3078. Original sorting order will be lost after repetitive call to 'OrderBy' method. Use 'ThenBy' method to preserve the original sorting.
• V3079. 'ThreadStatic' attribute is applied to a non-static 'A' field and will be ignored.
• V3080. Possible null dereference.
• V3081. The 'X' counter is not used inside a nested loop. Consider inspecting usage of 'Y' counter.
• V051. Some of the references in project are missing or incorrect. The analysis results could be incomplete. Consider making the project fully compilable and building it before analysis.

## PVS-Studio 6.02 (March 9, 2016)

• V3057. Function receives an odd argument.
• V3058. An item with the same key has already been added.
• V3059. Consider adding '[Flags]' attribute to the enum.
• V3060. A value of a variable is not modified. Consider inspecting the expression. It is possible that another value should be present instead of '0'.
• V3061. Parameter 'A' is always rewritten in method body before being used.
• V3062. An object is used as an argument to its own method. Consider checking the first actual argument of the 'Foo' method.
• V3063. A part of conditional expression is always true/false.
• V3064. Division or mod division by zero.
• V3065. Parameter is not utilized inside method's body.
• V3066. Possible incorrect order of arguments passed to 'Foo' method.
• V3067. It is possible that 'else' block was forgotten or commented out, thus altering the program's operation logics.
• V3068. Calling overrideable class member from constructor is dangerous.
• V3069. It's possible that the line was commented out improperly, thus altering the program's operation logics.
• V3070. Uninitialized variables are used when initializing the 'A' variable.
• V3071. The object is returned from inside 'using' block. 'Dispose' will be invoked before exiting method.

## PVS-Studio 6.01 (February 3, 2016)

• V736. The behavior is undefined for arithmetic or comparisons with pointers that do not point to members of the same array.
• V737. It is possible that ',' comma is missing at the end of the string.
• V738. Temporary anonymous object is used.
• V739. EOF should not be compared with a value of the 'char' type. Consider using the 'int' type.
• V740. Because NULL is defined as 0, the exception is of the 'int' type. Keyword 'nullptr' could be used for 'pointer' type exception.
• V741. The following pattern is used: throw (a, b);. It is possible that a type name was omitted: throw MyException(a, b);.
• V742. Function receives an address of a 'char' type variable instead of pointer to a buffer.
• V743. The memory areas must not overlap. Use 'memmove' function.
• V744. Temporary object is immediately destroyed after being created. Consider naming the object.
• V745. A 'wchar_t *' type string is incorrectly converted to 'BSTR' type string.
• V746. Object slicing. An exception should be caught by reference rather than by value.
• V747. An odd expression inside parentheses. It is possible that a function name is missing.
• V748. Memory for 'getline' function should be allocated only by 'malloc' or 'realloc' functions. Consider inspecting the first parameter of 'getline' function.
• V749. Destructor of the object will be invoked a second time after leaving the object's scope.
• V750. BSTR string becomes invalid. Notice that BSTR strings store their length before start of the text.
• V816. It is more efficient to catch exception by reference rather than by value.
• V3042. Possible NullReferenceException. The '?.' and '.' operators are used for accessing members of the same object.
• V3043. The code's operational logic does not correspond with its formatting.
• V3044. WPF: writing and reading are performed on a different Dependency Properties.
• V3045. WPF: the names of the property registered for DependencyProperty, and of the property used to access it, do not correspond with each other.
• V3046. WPF: the type registered for DependencyProperty does not correspond with the type of the property used to access it.
• V3047. WPF: A class containing registered property does not correspond with a type that is passed as the ownerType.type.
• V3048. WPF: several Dependency Properties are registered with a same name within the owner type.
• V3049. WPF: readonly field of 'DependencyProperty' type is not initialized.
• V3050. Possibly an incorrect HTML. The </XX> closing tag was encountered, while the </YY> tag was expected.
• V3051. An excessive type cast or check. The object is already of the same type.
• V3052. The original exception object was swallowed. Stack of original exception could be lost.
• V3053. An excessive expression. Examine the substrings "abc" and "abcd".
• V3054. Potentially unsafe double-checked locking. Use volatile variable(s) or synchronization primitives to avoid this.
• V3055. Suspicious assignment inside the condition expression of 'if/while/for' operator.
• V3056. Consider reviewing the correctness of 'X' item's usage.
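The object-slicing problem that V746 above targets can be shown with a minimal sketch (function names are illustrative, not part of the analyzer): catching an exception by value copies only the base subobject, while catching by reference preserves the thrown object's dynamic type and state.

```cpp
#include <stdexcept>
#include <string>

std::string catch_by_value() {
    try {
        throw std::runtime_error("derived message");
    } catch (std::exception e) {          // sliced copy of the base part only
        return e.what();                  // the derived message text is typically lost
    }
}

std::string catch_by_reference() {
    try {
        throw std::runtime_error("derived message");
    } catch (const std::exception& e) {   // no slicing, dynamic type intact
        return e.what();
    }
}
```

Only the by-reference catch is guaranteed to return "derived message"; what the sliced copy reports is implementation-defined.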

## PVS-Studio 6.00 (December 22, 2015)

• Static code analysis for C# added! More than 40 diagnostics in first release.
• We are cancelling support for Visual Studio 2005 and Visual Studio 2008.
• V734. Searching for the longer substring is meaningless after searching for the shorter substring.
• V735. Possibly an incorrect HTML. The "</XX" closing tag was encountered, while the "</YY" tag was expected.
• V3001. There are identical sub-expressions to the left and to the right of the 'foo' operator.
• V3002. The switch statement does not cover all values of the enum.
• V3003. The use of 'if (A) {...} else if (A) {...}' pattern was detected. There is a probability of logical error presence.
• V3004. The 'then' statement is equivalent to the 'else' statement.
• V3005. The 'x' variable is assigned to itself.
• V3006. The object was created but it is not being used. The 'throw' keyword could be missing.
• V3007. Odd semicolon ';' after 'if/for/while' operator.
• V3008. The 'x' variable is assigned values twice successively. Perhaps this is a mistake.
• V3009. It's odd that this method always returns one and the same value of NN.
• V3010. The return value of function 'Foo' is required to be utilized.
• V3011. Two opposite conditions were encountered. The second condition is always false.
• V3012. The '?:' operator, regardless of its conditional expression, always returns one and the same value.
• V3013. It is odd that the body of 'Foo_1' function is fully equivalent to the body of 'Foo_2' function.
• V3014. It is likely that a wrong variable is being incremented inside the 'for' operator. Consider reviewing 'X'.
• V3015. It is likely that a wrong variable is being compared inside the 'for' operator. Consider reviewing 'X'.
• V3016. The variable 'X' is being used for this loop and for the outer loop.
• V3017. A pattern was detected: A || (A && ...). The expression is excessive or contains a logical error.
• V3018. Consider inspecting the application's logic. It's possible that 'else' keyword is missing.
• V3019. It is possible that an incorrect variable is compared with null after type conversion using 'as' keyword.
• V3020. An unconditional 'break/continue/return/goto' within a loop.
• V3021. There are two 'if' statements with identical conditional expressions. The first 'if' statement contains method return. This means that the second 'if' statement is senseless.
• V3022. Expression is always true/false.
• V3023. Consider inspecting this expression. The expression is excessive or contains a misprint.
• V3024. An odd precise comparison. Consider using a comparison with defined precision: Math.Abs(A - B) < Epsilon or Math.Abs(A - B) > Epsilon.
• V3025. Incorrect format. Consider checking the N format items of the 'Foo' function.
• V3026. The constant NN is being utilized. The resulting value could be inaccurate. Consider using the KK constant.
• V3027. The variable was utilized in the logical expression before it was verified against null in the same logical expression.
• V3028. Consider inspecting the 'for' operator. Initial and final values of the iterator are the same.
• V3029. The conditional expressions of the 'if' operators situated alongside each other are identical.
• V3030. Recurring check. This condition was already verified in previous line.
• V3031. An excessive check can be simplified. The '||' operator is surrounded by opposite expressions 'x' and '!x'.
• V3032. Waiting on this expression is unreliable, as compiler may optimize some of the variables. Use volatile variable(s) or synchronization primitives to avoid this.
• V3033. It is possible that this 'else' branch must apply to the previous 'if' statement.
• V3034. Consider inspecting the expression. Probably the '!=' should be used here.
• V3035. Consider inspecting the expression. Probably the '+=' should be used here.
• V3036. Consider inspecting the expression. Probably the '-=' should be used here.
• V3037. An odd sequence of assignments of this kind: A = B; B = A;.
• V3038. The 'first' argument of 'Foo' function is equal to the 'second' argument.
• V3039. Consider inspecting the 'Foo' function call. Defining an absolute path to the file or directory is considered a poor style.
• V3040. The expression contains a suspicious mix of integer and real types.
• V3041. The expression was implicitly cast from integer type to real type. Consider utilizing an explicit type cast to avoid the loss of a fractional part.

# Old PVS-Studio Release History (before 6.00)

## PVS-Studio 5.31 (November 3, 2015)

• False positive quantity is reduced in some diagnostics.

## PVS-Studio 5.30 (October 29, 2015)

• Fixed an access error that occurred when starting the Visual C++ preprocessor to check files that use the #import directive.
• Fixed an error that caused Compiler Monitoring preprocessing to take more than 10 minutes.
• Fixed incorrect installer behavior on systems that have only Visual Studio 2015 installed.
• New diagnostic - V728. An excessive check can be simplified. The '||' operator is surrounded by opposite expressions 'x' and '!x'.
• New diagnostic - V729. Function body contains the 'X' label that is not used by any 'goto' statements.
• New diagnostic - V730. Not all members of a class are initialized inside the constructor.
• New diagnostic - V731. The variable of char type is compared with pointer to string.
• New diagnostic - V732. Unary minus operator does not modify a bool type value.
• New diagnostic - V733. It is possible that macro expansion resulted in incorrect evaluation order.
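The V732 pattern above can be sketched in a few lines (function names are illustrative): unary minus does not negate a bool, because the bool is first promoted to int, so '-flag' yields 0 or -1, and -1 converts back to true.

```cpp
bool negate_with_minus(bool flag) { return -flag; }  // wrong: still true for true
bool negate_with_not(bool flag)   { return !flag; }  // correct logical negation
```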

## PVS-Studio 5.29 (September 22, 2015)

• Visual Studio 2015 supported.
• Windows 10 supported.
• New diagnostic - V727. Return value of 'wcslen' function is not multiplied by 'sizeof(wchar_t)'.
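The correct form that V727 checks for can be sketched as follows (a minimal illustration, assuming the source string fits in the local buffer): memcpy counts bytes, while wcslen/size() counts wide characters, so the length must be multiplied by sizeof(wchar_t).

```cpp
#include <cstring>
#include <cwchar>
#include <string>

std::wstring copy_wide(const std::wstring& src) {
    wchar_t buf[64] = {};  // assumption: src (plus terminator) fits in 64 characters
    // memcpy works in bytes; the character count must be scaled by sizeof(wchar_t)
    std::memcpy(buf, src.c_str(), (src.size() + 1) * sizeof(wchar_t));
    return std::wstring(buf);
}
```

Omitting the multiplication copies only a fraction of the string on platforms where wchar_t is wider than one byte.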

## PVS-Studio 5.28 (August 10, 2015)

• New interface of the settings pages Detectable Errors, Don't Check Files, and Keyword Message Filtering.
• A new utility PlogConverter was added to convert XML plog files into formats txt, html, and CSV. Check the documentation for details.

## PVS-Studio 5.27 (July 28, 2015)

• New diagnostic - V207. A 32-bit variable is utilized as a reference to a pointer. A write outside the bounds of this variable may occur.
• New diagnostic - V726. An attempt to free memory containing the 'int A[10]' array by using the 'free(A)' function.
• New feature - Analyzer Work Statistics (Diagrams). The PVS-Studio analyzer can gather its operational statistics - the number of detected messages (including suppressed ones) across different severity levels and rule sets. Gathered statistics can be filtered and represented as a diagram in a Microsoft Excel file, showing the change dynamics for messages in the project under analysis. See details in documentation.
• Analysis of preprocessed files removed from Standalone.

## PVS-Studio 5.26 (June 30, 2015)

• New diagnostic - V723. Function returns a pointer to the internal string buffer of a local object, which will be destroyed.
• New diagnostic - V724. Converting integers or pointers to BOOL can lead to a loss of high-order bits. Non-zero value can become 'FALSE'.
• New diagnostic - V725. A dangerous cast of 'this' to 'void*' type in the 'Base' class, as it is followed by a subsequent cast to 'Class' type.
• Message Suppression support was implemented for CLMonitoring/Standalone.
• 2nd and 3rd levels of analyzer warnings are accessible in Trial Mode.
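The truncation that V724 above warns about can be sketched as follows (a minimal illustration; BOOL is assumed to be a 32-bit int, as in the Windows headers): casting a 64-bit value to BOOL drops the high-order bits, so a non-zero value can become FALSE.

```cpp
using BOOL = int;  // assumption: Windows-style 32-bit BOOL

BOOL to_bool_by_cast(long long v) { return static_cast<BOOL>(v); } // may yield 0 for non-zero v
BOOL to_bool_by_test(long long v) { return v != 0; }               // safe: compares, then converts
```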

## PVS-Studio 5.25 (May 12, 2015)

• New diagnostic - V722. An abnormality within similar comparisons. It is possible that a typo is present inside the expression.
• Improved the responsiveness of Quick Filters and Analyzer\Levels buttons in Output Window.
• 'False Alarms' output window filter was moved into settings.
• Fix for 'An item with the same key has already been added' error when using message suppression.

## PVS-Studio 5.24 (April 10, 2015)

• New diagnostic - V721. The VARIANT_BOOL type is utilized incorrectly. The true value (VARIANT_TRUE) is defined as -1.
• New trial mode. Please refer here.
• The new message suppression mechanism can now be utilized together with the command line mode for project files (vcproj/vcxproj) to organize a distribution of analysis logs with newly discovered warnings (in plain text and html formats) by email. More details on command line mode and utilizing the analyzer within continuous integration systems.

## PVS-Studio 5.23 (March 17, 2015)

• 64-bit analysis is greatly improved. Now, to fix the major 64-bit issues, it is enough to fix all 64-bit Level 1 messages.
• You can use PVS-Studio-Updater.exe for automatic update of PVS-Studio on build-server. See details here.
• New diagnostic - V719. The switch statement does not cover all values of the enum.
• New diagnostic - V720. It is advised to utilize the 'SuspendThread' function only when developing a debugger (see documentation for details).
• New diagnostic - V221. Suspicious sequence of types castings: pointer -> memsize -> 32-bit integer.
• New diagnostic - V2013. Consider inspecting the correctness of handling the N argument in the 'Foo' function.

## PVS-Studio 5.22 (February 17, 2015)

• New diagnostic - V718. The 'Foo' function should not be called from 'DllMain' function.
• Fix for CLMonitoring operation on C++/CLI projects.
• Memory leak fix for CLMonitoring of long-running processes.
• Include\symbol reference search for Standalone.
• Message Suppression memory usage optimization.
• Message Suppression correctly handles multi-project analyzer messages (as, for example, messages generated on common h files on different IDE projects).
• Several crucial improvements in Message Suppression.

## PVS-Studio 5.21 (December 11, 2014)

• We are cancelling support for OpenMP diagnostics (VivaMP rule set).
• New diagnostic - V711. It is dangerous to create a local variable within a loop with a same name as a variable controlling this loop.
• New diagnostic - V712. Be advised that the compiler may delete this cycle or make it infinite. Use volatile variable(s) or synchronization primitives to avoid this.
• New diagnostic - V713. The pointer was utilized in the logical expression before it was verified against nullptr in the same logical expression.
• New diagnostic - V714. Variable is not passed into foreach loop by a reference, but its value is changed inside of the loop.
• New diagnostic - V715. The 'while' operator has empty body. Suspicious pattern detected.
• New diagnostic - V716. Suspicious type conversion: HRESULT -> BOOL (BOOL -> HRESULT).
• New diagnostic - V717. It is strange to cast object of base class V to derived class U.
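The ordering that V713 above enforces can be sketched in one function (the name is illustrative): the null check must come before the dereference within the same logical expression, so short-circuit evaluation protects the '*p' access.

```cpp
bool points_to(const int* p, int expected) {
    // check first, then dereference: '&&' short-circuits on a null pointer
    return p != nullptr && *p == expected;
}
```

Writing the dereference first (`*p == expected && p != nullptr`) is the defect pattern: the pointer is already dereferenced by the time it is tested.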

## PVS-Studio 5.20 (November 12, 2014)

• New diagnostic - V706. Suspicious division: sizeof(X) / Value. The size of every element in the X array is not equal to the divisor.
• New diagnostic - V707. Giving short names to global variables is considered to be bad practice.
• New diagnostic - V708. Dangerous construction is used: 'm[x] = m.size()', where 'm' is of 'T' class. This may lead to undefined behavior.
• New diagnostic - V709. Suspicious comparison found: 'a == b == c'. Remember that 'a == b == c' is not equal to 'a == b && b == c'.
• New diagnostic - V710. Suspicious declaration found. There is no point to declare constant reference to a number.
• New diagnostic - V2012. Possibility of decreased performance. It is advised to pass arguments to std::unary_function/std::binary_function template as references.
• New feature - Mass Suppression of Analyzer Messages. Sometimes, during deployment of static analysis, especially at large-scale projects, the developer has no desire (or even no means) to correct hundreds or even thousands of analyzer's messages which were generated on the existing source code base. In this situation, the need arises to "suppress" all of the analyzer's messages generated on the current state of the code and, from that point, to see only the messages related to newly written or modified code. As such code has not yet been thoroughly debugged and tested, it can potentially contain a large number of errors.
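The V709 pitfall from the list above can be sketched directly (function names are illustrative): 'a == b == c' is parsed as '(a == b) == c', comparing the bool result (0 or 1) of the first comparison with 'c'.

```cpp
bool chained_compare(int a, int b, int c)  { return a == b == c; }     // parsed as (a == b) == c
bool pairwise_compare(int a, int b, int c) { return a == b && b == c; } // the usual intent
```

Note that `chained_compare(5, 5, 5)` is false (true == 5 fails), while `chained_compare(2, 3, 0)` is true (false == 0 holds), which is almost never what the author meant.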

## PVS-Studio 5.19 (September 18, 2014)

• New diagnostic - V698. strcmp()-like functions can return not only the values -1, 0 and 1, but any values.
• New diagnostic - V699. Consider inspecting the 'foo = bar = baz ? .... : ....' expression. It is possible that 'foo = bar == baz ? .... : ....' should be used here instead.
• New diagnostic - V700. Consider inspecting the 'T foo = foo = x;' expression. It is odd that variable is initialized through itself.
• New diagnostic - V701. realloc() possible leak: when realloc() fails in allocating memory, original pointer is lost. Consider assigning realloc() to a temporary pointer.
• New diagnostic - V702. Classes should always be derived from std::exception (and alike) as 'public'.
• New diagnostic - V703. It is odd that the 'foo' field in derived class overwrites field in base class.
• New diagnostic - V704. 'this == 0' comparison should be avoided - this comparison is always false on newer compilers.
• New diagnostic - V705. It is possible that 'else' block was forgotten or commented out, thus altering the program's operation logics.
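The fix that V701 above recommends can be sketched as follows (a minimal illustration; the function name is not from the analyzer): assign realloc's result to a temporary, so that if it returns nullptr the original block is still reachable through 'p' and is not leaked.

```cpp
#include <cstdlib>

int* grow_buffer(int* p, std::size_t new_count) {
    // never write p = realloc(p, ...): on failure the old pointer would be lost
    int* tmp = static_cast<int*>(std::realloc(p, new_count * sizeof(int)));
    if (tmp == nullptr)
        return p;   // caller still owns the old, untouched block
    return tmp;
}
```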

## PVS-Studio 5.18 (July 30, 2014)

• ClMonitoring - automatic detection of the compiler's platform.
• ClMonitoring - performance increase resulting from reducing the impact of antivirus software during preprocessing of analyzed files.
• ClMonitoring - fixed incorrect handling of 64-bit processes caused by a system update for .NET Framework 4.
• New diagnostic - V695. Range intersections are possible within conditional expressions.
• New diagnostic - V696. The 'continue' operator will terminate 'do { ... } while (FALSE)' loop because the condition is always false.
• New diagnostic - V697. A number of elements in the allocated array is equal to size of a pointer in bytes.
• New diagnostic - V206. Explicit conversion from 'void *' to 'int *'.
• New diagnostic - V2011. Consider inspecting signed and unsigned function arguments. See NN argument of function 'Foo' in derived class and base class.

## PVS-Studio 5.17 (May 20, 2014)

• New diagnostic - V690. The class implements a copy constructor/operator=, but lacks the operator=/copy constructor.
• New diagnostic - V691. Empirical analysis. It is possible that a typo is present inside the string literal. The 'foo' word is suspicious.
• New diagnostic - V692. An inappropriate attempt to append a null character to a string. To determine the length of a string by 'strlen' function correctly, a string ending with a null terminator should be used in the first place.
• New diagnostic - V693. Consider inspecting conditional expression of the loop. It is possible that 'i < X.size()' should be used instead of 'X.size()'.
• New diagnostic - V694. The condition (ptr - const_value) is only false if the value of a pointer equals a magic constant.
• New diagnostic - V815. Decreased performance. Consider replacing the expression 'AA' with 'BB'.
• New diagnostic - V2010. Handling of two different exception types is identical.

## PVS-Studio 5.16 (April 29, 2014)

• Support of C++/CLI projects was greatly improved.
• TFSRipper plugin was removed.
• Fix for crash in Standalone when installing in non-default location on a 64-bit system.
• Fixed issue with hiding of diagnostic messages in some cases.

## PVS-Studio 5.15 (April 14, 2014)

• New diagnostic - V689. The destructor of the 'Foo' class is not declared as a virtual. It is possible that a smart pointer will not destroy an object correctly.
• Several crucial improvements in Compiler Monitoring in PVS-Studio.

## PVS-Studio 5.14 (March 12, 2014)

• New option "Disable 64-bit Analysis" in the Specific Analyzer Settings option page can improve analysis speed and decrease .plog file size.
• New feature: Compiler Monitoring in PVS-Studio.
• Fixed problem with incremental analysis notification with auto hide PVS-Studio Output Window.
• New diagnostic - V687. Size of an array calculated by the sizeof() operator was added to a pointer. It is possible that the number of elements should be calculated by sizeof(A)/sizeof(A[0]).
• New diagnostic - V688. The 'foo' local variable possesses the same name as one of the class members, which can result in confusion.

## PVS-Studio 5.13 (February 5, 2014)

• New diagnostic - V684. A value of a variable is not modified. Consider inspecting the expression. It is possible that '1' should be present instead of '0'.
• New diagnostic - V685. Consider inspecting the return statement. The expression contains a comma.
• New diagnostic - V686. A pattern was detected: A || (A && ...). The expression is excessive or contains a logical error.
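The redundancy behind V686 above follows from the absorption law of Boolean algebra: 'A || (A && B)' always equals plain 'A', so the pattern is either excessive or hides a logical error such as a mistyped condition. A minimal sketch (function names are illustrative):

```cpp
bool excessive(bool a, bool b)      { return a || (a && b); }  // b can never affect the result
bool simplified(bool a, bool /*b*/) { return a; }              // equivalent form
```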

## PVS-Studio 5.12 (December 23, 2013)

• Fix for the issue with SolutionDir property when direct integration of the analyzer into MSBuild system is utilized.
• The analysis can now be launched from within the context menu of Solution Explorer tool window.
• The 'ID' column will now be hidden by default in the PVS-Studio Output toolwindow. It is possible to enable it again by using the Show Columns -> ID context menu command.
• New diagnostic - V682. Suspicious literal is present: '/r'. It is possible that a backslash should be used here instead: '\r'.
• New diagnostic - V683. Consider inspecting the loop expression. It is possible that the 'i' variable should be incremented instead of the 'n' variable.

## PVS-Studio 5.11 (November 6, 2013)

• Support for the release version of Microsoft Visual Studio 2013 was implemented.
• New diagnostic - V680. The 'delete A, B' expression only destroys the 'A' object. Then the ',' operator returns a resulting value from the right side of the expression.
• New diagnostic - V681. The language standard does not define an order in which the 'Foo' functions will be called during evaluation of arguments.
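The fix for the V681 situation above can be sketched as follows (names are illustrative, not from the analyzer): the order in which arguments of 'foo(f(), g())' are evaluated is unspecified, so side-effecting calls are sequenced through named locals, where the order is guaranteed.

```cpp
#include <utility>

int g_counter = 0;
int next_value() { return ++g_counter; }  // side-effecting call whose order matters

std::pair<int, int> ordered_pair() {
    int first  = next_value();  // guaranteed to run first
    int second = next_value();  // guaranteed to run second
    return {first, second};
}
```

Writing `std::make_pair(next_value(), next_value())` directly would leave the order up to the compiler.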

## PVS-Studio 5.10 (October 7, 2013)

• Fixed the issue with the analyzer when Visual Studio is called with the parameter /useenv: devenv.exe /useenv.
• VS2012 now supports Clang, so Clang can be used as the preprocessor. This means that PVS-Studio users will see a significant performance boost in VS2012.
• Several crucial improvements were made to the analyzer's performance when parsing code in VS2012.
• The PVS-Studio distribution package now ships with a new application Standalone.
• You can now export analysis results into a .CSV-file to handle them in Excel.
• Support of precompiled headers in Visual Studio and MSBuild was greatly improved.
• New diagnostic - V676. It is incorrect to compare the variable of BOOL type with TRUE.
• New diagnostic - V677. Custom declaration of a standard type. The declaration from system header files should be used instead.
• New diagnostic - V678. An object is used as an argument to its own method. Consider checking the first actual argument of the 'Foo' function.
• New diagnostic - V679. The 'X' variable was not initialized. This variable is passed by a reference to the 'Foo' function in which its value will be utilized.

## PVS-Studio 5.06 (August 13, 2013)

• Fix for incorrect number of verified files when using 'Check Open File(s)' command in Visual Studio 2010.
• New diagnostic - V673. More than N bits are required to store the value, but the expression evaluates to the T type which can only hold K bits.
• New diagnostic - V674. The expression contains a suspicious mix of integer and real types.
• New diagnostic - V675. Writing into the read-only memory.
• New diagnostic - V814. Decreased performance. The 'strlen' function was called multiple times inside the body of a loop.

## PVS-Studio 5.05 (May 28, 2013)

• Support for proxy server with authorization was implemented for trial extension window.
• An issue with using certain special characters in diagnostic message filters was resolved.
• A portion of 'Common Analyzer Settings' page options and all of the options from 'Customer Specific Settings' page were merged together into the new page: Specific Analyzer Settings.
• A new SaveModifiedLog option was implemented. It allows you to define the behavior of 'Save As' dialog for a new/modified analysis report log (always ask, save automatically, do not save).
• Customer diagnostics (V20xx) were assigned to a separate diagnostics group (CS - Customer Specific).
• A new menu command was added: "Check Open File(s)". It allows starting the analysis on all of the C/C++ source files that are currently open in IDE text editor.

## PVS-Studio 5.04 (May 14, 2013)

• Support has been implemented for C++Builder XE4. Now PVS-Studio supports the following versions of C++Builder: XE4, XE3 Update 1, XE2, XE, 2010, 2009.
• New diagnostic - V669. The argument is a non-constant reference. The analyzer is unable to determine the position at which this argument is being modified. It is possible that the function contains an error.
• New diagnostic - V670. An uninitialized class member is used to initialize another member. Remember that members are initialized in the order of their declarations inside a class.
• New diagnostic - V671. It is possible that the 'swap' function interchanges a variable with itself.
• New diagnostic - V672. There is probably no need in creating a new variable here. One of the function's arguments possesses the same name and this argument is a reference.
• New diagnostic - V128. A variable of the memsize type is read from a stream. Consider verifying the compatibility of 32 and 64 bit versions of the application in the context of a stored data.
• New diagnostic - V813. Decreased performance. The argument should probably be rendered as a constant pointer/reference.
• New diagnostic - V2009. Consider passing the 'Foo' argument as a constant pointer/reference.

## PVS-Studio 5.03 (April 16, 2013)

• Enhanced analysis/interface performance when checking large projects and generating a large number of diagnostic messages (the total number of unfiltered messages).
• Fixed the issue with incorrect integration of the PVS-Studio plugin into the C++Builder 2009/2010/XE environments after installation.
• Fixed the bug with the trial-mode.
• The analyzer can now be set to generate relative paths to source files in its log files.
• The analyzer now supports direct integration into the MSBuild build system.
• Integrated Help Language option added to Customer's Settings page. The setting allows you to select a language to be used for integrated help on the diagnostic messages (a click on the error code of a message in the PVS-Studio output window) and online documentation (the PVS-Studio -> Help -> Open PVS-Studio Documentation (html, online) menu command), which are also available at our site. This setting will not change the language of the IDE plug-in's interface or of the messages produced by the analyzer.
• Fix for Command line analysis mode in Visual Studio 2012 in the case of project background loading.
• New diagnostic - V665. Possibly, the usage of '#pragma warning(default: X)' is incorrect in this context. The '#pragma warning(push/pop)' should be used instead.
• New diagnostic - V666. Consider inspecting NN argument of the function 'Foo'. It is possible that the value does not correspond with the length of a string which was passed with the YY argument.
• New diagnostic - V667. The 'throw' operator does not possess any arguments and is not situated within the 'catch' block.
• New diagnostic - V668. There is no sense in testing the pointer against null, as the memory was allocated using the 'new' operator. The exception will be generated in the case of memory allocation error.
• New diagnostic - V812. Decreased performance. Ineffective use of the 'count' function. It can possibly be replaced by the call to the 'find' function.
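The dead check that V668 above flags can be sketched as follows (a minimal illustration; the function name is not from the analyzer): plain 'new' signals failure by throwing std::bad_alloc, so 'if (p != nullptr)' after it is dead code. If a null check is the intended error path, the nothrow form must be used.

```cpp
#include <cstddef>
#include <new>

int* allocate_checked(std::size_t n) {
    // new (std::nothrow) may legitimately return nullptr instead of throwing,
    // so a subsequent null check is meaningful here
    int* p = new (std::nothrow) int[n];
    return p;
}
```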

## PVS-Studio 5.02 (March 6, 2013)

• Incorrect navigation in C++Builder modules that contain several header/source files was fixed.
• The option for inserting user-specified comments while performing false alarm mark-ups (for example, to provide the automatic documentation generation systems with appropriate descriptions) was implemented.
• An issue of incorrectly starting up a C++ preprocessor for some of the files utilizing precompiled headers was fixed.
• New diagnostic - V663. Infinite loop is possible. The 'cin.eof()' condition is insufficient to break from the loop. Consider adding the 'cin.fail()' function call to the conditional expression.
• New diagnostic - V664. The pointer is being dereferenced on the initialization list before it is verified against null inside the body of the constructor function.
• New diagnostic - V811. Decreased performance. Excessive type casting: string -> char * -> string.

## PVS-Studio 5.01 (February 13, 2013)

• Support has been implemented for several previous versions of C++Builder. Now PVS-Studio supports the following versions of C++Builder: XE3 Update 1, XE2, XE, 2010, 2009.
• A bug in C++Builder version with incremental analysis starting-up incorrectly in several situations was fixed.
• Occasional incorrect placement of false alarm markings for C++Builder version was fixed.
• Incorrect display of localized filenames containing regional-specific characters in C++Builder version was fixed.
• An issue with opening source files during diagnostic message navigation in C++Builder version was resolved.
• An issue with system include paths being resolved incompletely when starting the preprocessor for the analyzer in C++Builder versions was fixed.
• New diagnostic - V661. A suspicious expression 'A[B < C]'. Probably meant 'A[B] < C'.
• New diagnostic - V662. Consider inspecting the loop expression. Different containers are utilized for setting up initial and final values of the iterator.

## PVS-Studio 5.00 (January 31, 2013)

• Support for integration with Embarcadero RAD Studio, or Embarcadero C++Builder to be more precise, was added! As of this moment, PVS-Studio diagnostic capabilities are available to the users of C++Builder. While in the past PVS-Studio could be conveniently utilized only from within the Visual Studio environment, now C++ developers who choose Embarcadero products can fully utilize the PVS-Studio static analyzer as well. Presently, the supported versions are XE2 and XE3, including XE3 Update 1 with the 64-bit C++ compiler.
• Support was implemented for Microsoft Design Language (formerly known as Metro) C++/CX Windows 8 Store (WinRT) projects on x86/ARM platforms, as well as Windows Phone 8 projects.
• A fix for the users of the Clang preprocessor in the Visual Studio version was implemented. Previously it was impossible to use Clang as a preprocessor when analyzing projects that utilize the Boost library because of preprocessing errors. These issues have now been resolved, which significantly decreases the time it takes to analyze Boost projects with the help of the Clang preprocessor.
• The obsolete Viva64 options page was removed.
• V004 message text was modified to provide a more correct description.
• New diagnostic - V810. Decreased performance. The 'A' function was called several times with identical arguments. The result should possibly be saved to a temporary variable, which then could be used while calling the 'B' function.
• New diagnostic - V2008. Cyclomatic complexity: NN. Consider refactoring the 'Foo' function.
• New diagnostic - V657. It's odd that this function always returns one and the same value of NN.
• New diagnostic - V658. A value is being subtracted from the unsigned variable. This can result in an overflow. In such a case, the comparison operation can potentially behave unexpectedly.
• New diagnostic - V659. Declarations of functions with 'Foo' name differ in the 'const' keyword only, but the bodies of these functions have different composition. This is suspicious and can possibly be an error.
• New diagnostic - V660. The program contains an unused label and a function call: 'CC:AA()'. It's possible that the following was intended: 'CC::AA()'.

## PVS-Studio 4.77 (December 11, 2012)

• Acquisition of compilation parameters for VS2012 and VS2010 was improved through expansion of support for MSBuild-based projects.
• New diagnostic - V654. The condition of loop is always true/false.
• New diagnostic - V655. The strings were concatenated but are not utilized. Consider inspecting the expression.
• New diagnostic - V656. Variables are initialized through the call to the same function. It's probably an error or un-optimized code.
• New diagnostic - V809. Verifying that a pointer value is not NULL is not required. The 'if (ptr != NULL)' check can be removed.

## PVS-Studio 4.76 (November 23, 2012)

• Some bugs were fixed.

## PVS-Studio 4.75 (November 12, 2012)

• An issue with checking Qt-based projects which manifested itself under certain conditions was solved (details in blog).
• New diagnostic - V646. Consider inspecting the application's logic. It's possible that 'else' keyword is missing.
• New diagnostic - V647. The value of 'A' type is assigned to the pointer of 'B' type.
• New diagnostic - V648. Priority of the '&&' operation is higher than that of the '||' operation.
• New diagnostic - V649. There are two 'if' statements with identical conditional expressions. The first 'if' statement contains function return. This means that the second 'if' statement is senseless.
• New diagnostic - V650. Type casting operation is utilized 2 times in succession. Next, the '+' operation is executed. Probably meant: (T1)((T2)a + b).
• New diagnostic - V651. An odd operation of the 'sizeof(X)/sizeof(T)' kind is performed, where 'X' is of the 'class' type.
• New diagnostic - V652. The operation is executed 3 or more times in succession.
• New diagnostic - V653. A suspicious string consisting of two parts is used for array initialization. It is possible that a comma is missing.
• New diagnostic - V808. An array/object was declared but was not utilized.
• New diagnostic - V2007. This expression can be simplified. One of the operands in the operation equals NN. Probably it is a mistake.

## PVS-Studio 4.74 (October 16, 2012)

• New option "Incremental Results Display Depth" was added. This setting defines the message display level in the PVS-Studio Output window for the results of incremental analysis. Setting the display level depth here (correspondingly, Level 1 only; Levels 1 and 2; Levels 1, 2 and 3) will enable automatic activation of these display levels on each incremental analysis procedure. The "Preserve_Current_Levels" value, on the other hand, will preserve the existing display setting.
• New option "External Tool Path" was added. This field allows defining an absolute path to any external tool, which could then be executed with the "Send this message to external tool" context menu command of the PVS-Studio Output window. The mentioned menu command is available only for a single simultaneously selected message from the results table, allowing the passing of the command line parameters specified in the ExternalToolCommandLine field to the utility from here. The detailed description of this mode together with usage examples is available here.

## PVS-Studio 4.73 (September 17, 2012)

• Issues with incorrect processing of some Visual Studio 2012 C++11 constructs were fixed.
• A complete support for Visual Studio 2012 themes was implemented.
• The search field for the 'Project' column was added to the PVS-Studio Output Window quick filters.
• The included Clang external preprocessor was updated.
• Support for the TenAsys INtime platform was implemented.

## PVS-Studio 4.72 (August 30, 2012)

• Support for the release version of Microsoft Visual Studio 2012 was implemented.
• A new version of the SourceGrid component is now utilized, solving several issues with PVS-Studio Output Window operation.
• Support for diagnostics of issues inside STL library using STLport was implemented.
• New diagnostic - V637. Two opposite conditions were encountered. The second condition is always false.
• New diagnostic - V638. A terminal null is present inside a string. The '\0xNN' characters were encountered. Probably meant: '\xNN'.
• New diagnostic - V639. Consider inspecting the expression for function call. It is possible that one of the closing ')' brackets was positioned incorrectly.
• New diagnostic - V640. Consider inspecting the application's logic. It is possible that several statements should be braced.
• New diagnostic - V641. The size of the allocated memory buffer is not a multiple of the element size.
• New diagnostic - V642. Saving the function result inside the 'byte' type variable is inappropriate. The significant bits could be lost breaking the program's logic.
• New diagnostic - V643. Unusual pointer arithmetic. The value of the 'char' type is being added to the string pointer.
• New diagnostic - V644. A suspicious function declaration. It is possible that the T type object was meant to be created.
• New diagnostic - V645. The function call could lead to the buffer overflow. The bounds should not contain the size of the buffer, but a number of characters it can hold.

## PVS-Studio 4.71 (July 20, 2012)

• New diagnostic - V629. Consider inspecting the expression. Bit shifting of the 32-bit value with a subsequent expansion to the 64-bit type.
• New diagnostic - V630. The 'malloc' function is used to allocate memory for an array of objects which are classes containing constructors/destructors.
• New diagnostic - V631. Consider inspecting the 'Foo' function call. Defining an absolute path to the file or directory is considered a poor style.
• New diagnostic - V632. Consider inspecting the NN argument of the 'Foo' function. It is odd that the argument is of the 'T' type.
• New diagnostic - V633. Consider inspecting the expression. Probably the '!=' should be used here.
• New diagnostic - V634. The priority of the '+' operation is higher than that of the '<<' operation. It's possible that parentheses should be used in the expression.
• New diagnostic - V635. Consider inspecting the expression. The length should probably be multiplied by the sizeof(wchar_t).

## PVS-Studio 4.70 (July 3, 2012)

• Visual Studio 2012 RC support was implemented. At present the analyzer does not provide a complete support for every new syntax construct introduced with Visual Studio 2012 RC. Also, there is an additional issue concerning the speed of the analysis, as we utilize Clang preprocessor to improve the analyzer's performance. Currently, Clang is unable to preprocess some of the new Visual C++ 2012 header files, and that means that the notably slower cl.exe preprocessor from Visual C++ will have to be utilized most of the time instead. In the default mode the correct preprocessor will be set by PVS-Studio automatically so it will not require any interaction from the user. Despite the aforementioned issues, PVS-Studio can now be fully utilized from Visual Studio 2012 RC IDE.
• New diagnostic - V615. An odd explicit conversion from 'float *' type to 'double *' type.
• New diagnostic - V616. The 'Foo' named constant with the value of 0 is used in the bitwise operation.
• New diagnostic - V617. Consider inspecting the condition. An argument of the '|' bitwise operation always contains a non-zero value.
• New diagnostic - V618. It's dangerous to call the 'Foo' function in such a manner, as the line being passed could contain format specification. The example of the safe code: printf("%s", str);.
• New diagnostic - V619. An array is being utilized as a pointer to single object.
• New diagnostic - V620. It's unusual that the expression of sizeof(T)*N kind is being summed with the pointer to T type.
• New diagnostic - V621. Consider inspecting the 'for' operator. It's possible that the loop will be executed incorrectly or won't be executed at all.
• New diagnostic - V622. Consider inspecting the 'switch' statement. It's possible that the first 'case' operator is missing.
• New diagnostic - V623. Consider inspecting the '?:' operator. A temporary object is being created and subsequently destroyed.
• New diagnostic - V624. The constant NN is being utilized. The resulting value could be inaccurate. Consider using the M_NN constant from <math.h>.
• New diagnostic - V625. Consider inspecting the 'for' operator. Initial and final values of the iterator are the same.
• New diagnostic - V626. Consider checking for misprints. It's possible that ',' should be replaced by ';'.
• New diagnostic - V627. Consider inspecting the expression. The argument of sizeof() is the macro which expands to a number.
• New diagnostic - V628. It's possible that the line was commented out improperly, thus altering the program's operation logics.
• New diagnostic - V2006. Implicit type conversion from enum type to integer type.

## PVS-Studio 4.62 (May 30, 2012)

• Support for the MinGW gcc preprocessor was implemented, enabling the verification of projects which are compiled with MinGW compilers. Integration of the analyzer into the build systems of such projects is similar to utilizing the analyzer with other projects lacking MSVC .sln files, as described in detail in the corresponding documentation. As a reminder, a project which does include a .sln file can also be verified through the command line in the regular way, without integrating the analyzer directly into its build system.

## PVS-Studio 4.61 (May 22, 2012)

• Navigation for messages containing references to multiple lines was improved. Some diagnostic messages (V595 for example) relate to several lines of source code at once. Previously, the 'Line' column of the PVS-Studio Output Window contained only a single line number, while other lines were only mentioned in the text of the message itself. This was inconvenient for navigation. As of this version, the fields of the 'Line' column can contain several line numbers, allowing navigation to each individual line.
• A new build of Clang is included which contains several minor bug fixes. PVS-Studio uses Clang as an alternative preprocessor. Please note that PVS-Studio does not utilize Clang static analysis diagnostics.
• New diagnostic - V612. An unconditional 'break/continue/return/goto' within a loop.
• New diagnostic - V613. Strange pointer arithmetic with 'malloc/new'.
• New diagnostic - V614. Uninitialized variable 'Foo' used.

## PVS-Studio 4.60 (April 18, 2012)

• A new "Optimization" (OP) group allows the diagnostics of potential optimizations. It is a static analysis rule set for identification of C/C++/C++11 source code sections which could be optimized. It should be noted that the analyzer solves the task of optimization for the narrow area of micro-optimizations. A full list of diagnostic cases is available in the documentation (codes V801-V807).
• The total number of false positive messages for the 64-bit analyzer (Viva64) was decreased substantially.
• Messages will not be produced for autogenerated files (MIDL).
• Logics behind prompting save dialog for analysis report were improved.
• Issue with Visual Studio Chinese localized version was fixed (the zh locale).
• New diagnostic V610. Undefined behavior. Check the shift operator.
• New diagnostic V611. The memory allocation and deallocation methods are incompatible.

## PVS-Studio 4.56 (March 14, 2012)

• TraceMode option was added to Common Analyzer Settings. This setting could be used to specify the tracing mode (logging of a program's execution path).
• An issue concerning the verification of Itanium-based projects was fixed.
• An issue concerning the calling of the 64-bit version of clang.exe instead of the 32-bit one from within the 32-bit Windows while checking the project with selected x64 architecture was fixed.
• The number of cores used for incremental analysis was changed. As of now the regular analysis (Check Solution/Project/File) will utilize the exact number of cores specified in the settings. The incremental analysis will use a different value: if the number of cores from the settings is greater than (number of system cores - 1) and there is more than one core in the system, then (number of system cores - 1) cores will be utilized; otherwise the value from the settings will be used. Simply put, the incremental analysis will utilize one core less than the regular analysis, to ease the load on the system.
• New diagnostic V608. Recurring sequence of explicit type casts.
• New diagnostic V609. Divide or mod by zero.

## PVS-Studio 4.55 (February 28, 2012)

• New trial extension window.
• A crash which occurs after reloading current project while code analysis is running was fixed.
• The installer (in case of a first-time installation) now provides the option to enable PVS-Studio incremental analysis. If PVS-Studio was installed on the system before, this option will not be displayed. Incremental analysis can be enabled or disabled through the "Incremental Analysis after Build" PVS-Studio menu command.
• As of now the default number of threads for analysis is equal to the number of processors minus one. This could be modified through the 'ThreadCount' option in PVS-Studio settings.
• New article in documentation: "PVS-Studio's incremental analysis mode".
• Additional functionality for the command line mode — it is now possible to process several files at once, similar to the compiler batch mode (cl.exe file1.cpp file2.cpp). A more detailed description of the command line mode is available in the documentation.
• A support for Microsoft Visual Studio ARMV4 project types was removed.
• New diagnostic V604. It is odd that the number of iterations in the loop equals the size of the pointer.
• New diagnostic V605. Consider verifying the expression. An unsigned value is compared to the number - NN.
• New diagnostic V606. Ownerless token 'Foo'.
• New diagnostic V607. Ownerless expression 'Foo'.

## PVS-Studio 4.54 (February 1, 2012)

• New trial mode was implemented. As of now only a total number of clicks on messages will be limited. More details can be found in our blog or documentation.
• New menu command "Disable Incremental Analysis until IDE restart" was added. Sometimes disabling incremental analysis can be convenient, for instance when editing some core h-files, as that forces a large number of files to be recompiled. But it should not be disabled permanently, only temporarily, as one can easily forget to turn it on again later. This command is also available in the system tray during incremental analysis.
• New diagnostic V602. Consider inspecting this expression. '<' possibly should be replaced with '<<'.
• New diagnostic V603. The object was created but it is not being used. If you wish to call constructor, 'this->Foo::Foo(....)' should be used.
• New diagnostic V807. Decreased performance. Consider creating a pointer/reference to avoid using the same expression repeatedly.
• New article in documentation: "PVS-Studio menu commands".

## PVS-Studio 4.53 (January 19, 2012)

• New command for team work: "Add TODO comment for Task List". PVS-Studio allows you to automatically generate a special TODO comment containing all the information required to analyze the code fragment it marks, and to insert it into the source code. Such a comment will immediately appear inside the Visual Studio Task List window.
• New diagnostic V599. The virtual destructor is not present, although the 'Foo' class contains virtual functions.
• New diagnostic V600. Consider inspecting the condition. The 'Foo' pointer is always not equal to NULL.
• New diagnostic V601. An odd implicit type casting.

## PVS-Studio 4.52 (December 28, 2011)

• Changes were introduced to the .sln-file independent analyzer command line mode. It is now possible to start the analysis in several processes simultaneously; the output file (--output-file) will not be lost. The entire command line of arguments including the filename should be passed into the cl-params argument: --cl-params $(CFLAGS)$**.
• The "Analysis aborted by timeout" error was fixed; it could be encountered while checking a .sln file through the PVS-Studio.exe command line mode.
• New diagnostic V597. The compiler could delete the 'memset' function call, which is used to flush 'Foo' buffer. The RtlSecureZeroMemory() function should be used to erase the private data.
• New diagnostic V598. The 'memset/memcpy' function is used to nullify/copy the fields of 'Foo' class. Virtual method table will be damaged by this.

## PVS-Studio 4.51 (December 22, 2011)

• The issue concerning the #import directive when using Clang preprocessor was fixed. #import is supported by Clang differently from Microsoft Visual C++, therefore it is impossible to use Clang with such files. This directive is now automatically detected, and Visual C++ preprocessor is used for these files.
• The 'Don't Check Files' settings used for file and directory exclusions were significantly revised. As of now the folders to be excluded (either by their full or relative paths, or by a mask) can be specified independently, as well as the files to be excluded (by their name, extension, or a mask as well).
• Some libraries were added to the default exclusion paths. This can be modified on the 'Don't Check Files' page.

## PVS-Studio 4.50 (December 15, 2011)

• An external preprocessor is utilized to preprocess files with PVS-Studio. In the past, only the Microsoft Visual C++ preprocessor had been employed for this task. In version 4.50 of PVS-Studio, support for the Clang preprocessor was added, as its performance is significantly higher and it lacks some of the Microsoft preprocessor's shortcomings (although it also has issues of its own). The utilization of the Clang preprocessor increases operational performance by 1.5-1.7 times in most cases. However, there is an aspect that should be considered. The preprocessor to be used can be specified in the PVS-Studio Options -> Common Analyzer Settings -> Preprocessor field. The available options are: VisualCPP, Clang and VisualCPPAfterClang. The first two are self-evident. The third one indicates that Clang will be used first, and if preprocessing errors are encountered, the same file will be preprocessed by the Visual C++ preprocessor instead. This option (VisualCPPAfterClang) is the default.
• By default the analyzer will not produce diagnostic messages for libpng and zlib libraries (it is still possible to re-enable them).
• New diagnostic V596. The object was created but it is not being used. The 'throw' keyword could be missing.

## PVS-Studio 4.39 (November 25, 2011)

• New diagnostics were implemented (V594, V595).
• By default the analyzer will not produce diagnostic messages for Boost library (it is still possible to re-enable them).
• Progress dialog will not be shown anymore during incremental analysis, an animated tray icon, which itself will allow pausing or aborting the analysis, will be used instead.
• New "Don't Check Files and hide all messages from ..." command was added to the output window context menu. This command allows you to filter the messages and afterwards prevent the verification of files from the specified directories. The list of filtered directories can be reviewed in "Don't Check Files" options page.
• The detection of Intel C++ Compiler integration has been revamped - PVS-Studio will not run on projects using this compiler; it is required to replace the compiler with the Visual C++ one.
• "Quick Filters" functionality was implemented. It allows filtering all the messages which do not meet the specified filtering settings.

## PVS-Studio 4.38 (October 12, 2011)

• Speed increase (up to 25% for quad core computers).
• "Navigate to ID" command added to the context menu of PVS-Studio window.
• New "Find in PVS-Studio Output" tool window allows searching of keywords in analysis results.
• New diagnostic rules added (V2005).
• The Options button on the PVS-Studio Output Window was renamed to Suppression and now contains only three tab pages.

## PVS-Studio 4.37 (September 20, 2011)

• New diagnostic rules added (V008, V2003, V2004).
• Now you can export PVS-Studio analysis report to text file.
• We use an extended build number in some cases.

## PVS-Studio 4.36 (August 31, 2011)

• New diagnostic rules added (V588, V589, V590, V591, V592, V593).

## PVS-Studio 4.35 (August 12, 2011)

• New diagnostic rules added (V583, V584, V806, V585, V586, V587).

## PVS-Studio 4.34 (July 29, 2011)

• 64-bit analysis is now disabled by default.
• Incremental Analysis is now enabled by default.
• Changes of behavior in trial mode.
• PVS_STUDIO predefined macro was added.
• Fixed problem with Incremental Analysis on localized versions of Visual Studio.
• New diagnostic rules added (V582).
• Changed image to display on the left side of the wizard in the Setup program.

## PVS-Studio 4.33 (July 21, 2011)

• Incremental Analysis feature now available for all versions of Microsoft Visual Studio (2005/2008/2010).
• Speed increase (up to 20% for quad core computers).
• New diagnostic rules added (V127, V579, V580, V581).

## PVS-Studio 4.32 (July 15, 2011)

• Changes in PVS-Studio's licensing policy.
• Dynamic balancing of CPU usage.
• The Stop Analysis button works faster.

## PVS-Studio 4.31 (July 6, 2011)

• Fixed problem related to interaction with other extensions (including Visual Assist).
• New diagnostic rules added (V577, V578, V805).

## PVS-Studio 4.30 (June 23, 2011)

• The full-fledged support for analyzer's operation through command line was implemented. It is possible to verify independent files or sets of files launching the analyzer from Makefile. Also the analyzer's messages can be viewed not only on screen (for each file), but they also can be saved into single file, which later can be opened in Visual Studio and the regular processing of the analysis' results can be performed, complete with setting up error codes, message filters, code navigation, sorting etc. Details.
• New important mode of operation: Incremental Analysis. As of this moment PVS-Studio can automatically launch the analysis of modified files which are required to be rebuilt using the 'Build' command in Visual Studio. All developers in a team can now detect issues in newly written code without the inconvenience of manually launching the source code analysis - it happens automatically. Incremental Analysis operates similarly to Visual Studio IntelliSense. The feature is available only in Visual Studio 2010. Details.
• "Check Selected Item(s)" command was added.
• Changes in starting "Check Solution" via command line. Details.
• New diagnostic rules added (V576).

## PVS-Studio 4.21 (May 20, 2011)

• New diagnostic rules added (V220, V573, V574, V575).
• TFS 2005/2008/2010 integration was added.

## PVS-Studio 4.20 (April 29, 2011)

• New diagnostic rules added (V571, V572).
• Experimental support for ARMV4/ARMV4I platforms for Visual Studio 2005/2008 (Windows Mobile 5/6, PocketPC 2003, Smartphone 2003).
• New "Show License Expired Message" option.

## PVS-Studio 4.17 (April 15, 2011)

• New diagnostic rules added (V007, V570, V804)
• Incorrect display of analysis time in some locales has been fixed.
• New "Analysis Timeout" option. This setting allows you to set the time limit, by reaching which the analysis of individual files will be aborted with V006 error, or to completely disable analysis termination by timeout.
• New "Save File After False Alarm Mark" option. It allows you to choose whether a file is saved each time after marking a message in it as a False Alarm.
• New "Use Solution Folder As Initial" option. It defines the folder which is opened while saving the analysis results file.

## PVS-Studio 4.16 (April 1, 2011)

• It is possible now to define a list of files to be analyzed while launching the tool from command line. This can be used, for example, to check only the files which were updated by a revision control system. Details.
• "Check only Files Modified In" option has been added into tool's settings. This option allows you to define the time interval in which the presence of modifications in analyzed files will be controlled using "Date Modified" file attribute. In other words, this approach would allow for verification of "all files modified today". Details.

## PVS-Studio 4.15 (March 17, 2011)

• There are much fewer false alarms in 64-bit analysis.
• Changes in the interface of safe-type definition.
• The error of processing stdafx.h in some special cases is fixed.
• Handling of the report file was improved.
• The progress dialogue was improved: you can see the elapsed time and the remaining time.

## PVS-Studio 4.14 (March 2, 2011)

• There are much fewer false alarms in 64-bit analysis.
• New diagnostic rules were added (V566, V567, V568, V569, V803).
• A new column "Asterisk" was added in the PVS-Studio message window - you may use it to mark interesting diagnoses with the asterisk to discuss them with your colleagues later. The marks are saved in the log file.
• Now you may access PVS-Studio options not only from the menu (in the usual settings dialogue) but in the PVS-Studio window as well. This makes the process of setting the tool quicker and more convenient.
• Now you may save and restore PVS-Studio settings. It enables you to transfer the settings between different computers and workplaces. We also added the "Default settings" command.
• The state of PVS-Studio window's buttons (enabled/disabled) is saved when you launch Microsoft Visual Studio for the next time.

## PVS-Studio 4.13 (February 11, 2011)

• New diagnostic rules are added (V563, V564, and V565).
• The "Hide all VXXX errors" command is added into context menu in PVS-Studio window. If you wish to enable the display of VXXX error messages again you can do it through PVS-Studio->Options->Detectable errors page.
• Suppressing false positives located within macro statements (#define) is added.

## PVS-Studio 4.12 (February 7, 2011)

• New diagnostic rules are added (V006, V204, V205, V559, V560, V561, and V562).
• Changes in V201 and V202 diagnostic rules.

## PVS-Studio 4.11 (January 28, 2011)

• V401 rule changed to V802.
• Fixed bug with copying messages to clipboard.

## PVS-Studio 4.10 (January 17, 2011)

• New diagnostic rules are added (V558).

## PVS-Studio 4.00 (December 24, 2010)

• New diagnostic rules are added (V546-V557).
• The issue of processing property sheets in Visual Studio 2010 is fixed.
• The error of traversing projects' tree is fixed.
• The "Project" field is added into the PVS-Studio window - it shows the project the current diagnostic message refers to.
• The issue of installing PVS-Studio for Visual Studio 2010 is fixed - now PVS-Studio is installed not only for the current user but for all the users.
• The crash is fixed occurring when trying to save an empty report file.
• The issue of absent safe_types.txt file is fixed.
• The error is fixed which occurred when trying to check files included into the project but actually absent from the hard disk (for instance, autogenerated files).
• Indication of processing the project's tree is added.
• The file with PVS-Studio's analysis results (.plog extension) is now loaded by double-click.
• The licensing policy is changed.

## PVS-Studio 4.00 BETA (November 24, 2010)

• A new set of general-purpose static analysis rules (V501-V545, V801).
• New diagnostic rules are added (V124-V126).
• Changes in the licensing policy.
• A new window for diagnostic messages generated by the analyzer.
• Speed increase.

## PVS-Studio 3.64 (27 September 2010)

• Major documentation update; new sections were added.

## PVS-Studio 3.63 (10 September 2010)

• Fixed bug which occurred sometimes during analysis of files located on non-system partitions.
• Fixed bug in calculation of macros' values for certain individual files (and not the whole project).
• "What Is It?" feature was removed.
• Examples of issues for 64-bit code (PortSample) and parallel code (ParallelSample) were merged into the single OmniSample example, which is described in detail in the documentation.
• Fixed crash related to presence of unloaded project in Visual Studio solution.

## PVS-Studio 3.62 (16 August 2010)

• New rule V123: Allocation of memory by the pattern "(X*)malloc(sizeof(Y))"
• The analysis of the code from command line (without Visual Studio project) is improved.
• Diagnostic messages from tli/tlh files are not produced by default.

## PVS-Studio 3.61 (22 July 2010)

• Fixed crash in VS2010 with EnableAllWarnings key enabled in project settings.
• Fixed bug related to the analysis of projects that are excluded from build in Configuration Manager.
• The analysis of the code is considerably improved.

## PVS-Studio 3.60 (10 June 2010)

• New rule V122: Memsize type is used in the struct/class.
• New rule V303: The function is deprecated in the Win64 system. It is safer to use the NewFOO function.
• New rule V2001: Consider using the extended version of the FOO function here.
• New rule V2002: Consider using the 'Ptr' version of the FOO function here.

## PVS-Studio 3.53 (7 May 2010)

• "What Is It?" feature is added. Now you can ask PVS-Studio developers about diagnostic messages produced by our analyzer.
• The analysis of the code related to usage of unnamed structures is considerably improved.
• Fixed bug in structure size evaluation in certain cases.

## PVS-Studio 3.52 (27 April 2010)

• New online help has been added. The previous help system integrated into MSDN, which was not very convenient for several reasons (both for us and for users). Now PVS-Studio opens the help system on our site; we no longer integrate it into MSDN. As before, the pdf-version of the documentation is also available.
• We stopped supporting Windows 2000.
• The settings page "Exclude From Analysis" was deleted - there is now the page "Don't Check Files" instead.
• Work in Visual Studio 2010 was improved.
• We eliminated the issue of integration into VS2010 when reinstalling.
• We fixed work of the function "Mark As False Alarm" with read-only files.

## PVS-Studio 3.51 (16 April 2010)

• PVS-Studio supports Visual Studio 2010 RTM.
• New rule: V003: Unrecognized error found...
• New rule: V121: Implicit conversion of the type of 'new' operator's argument to size_t type.
• You may specify filemasks on the tab "Don't Check Files" to exclude some files from analysis.
• "Exclude From Analysis" option page improved.
• MoreThan2Gb option removed from "Viva64" option page (this option is deprecated).
• If you want to check code from the command line, you must indicate the analyzer type (Viva64 or VivaMP).
• The priority of the analyzer's process is reduced. Now you can work on the computer more comfortably while analysis is running.

## PVS-Studio 3.50 (26 March 2010)

• PVS-Studio supports Visual Studio 2010 RC. Although Visual Studio 2010 has not been officially released yet, we have already added support for this environment to the analyzer. PVS-Studio now integrates into Visual Studio 2010 and can analyze projects in this environment. The help system in Visual Studio 2010 has changed, so the PVS-Studio Help section does not yet integrate into the documentation as it does in Visual Studio 2005/2008, but you can still use the online Help. Support of Visual Studio 2010 RC is not complete.
• A new PDF version of the Help system is available. We now ship a 50-page PDF document in the PVS-Studio distribution kit. It is a full copy of our Help system (which integrates into MSDN in Visual Studio 2005/2008 and is also available online).
• PVS-Studio now has a new mechanism that automatically checks for new versions of the tool on our site. Checking for the updates is managed through the new option CheckForNewVersions in the settings tab called "Common Analyzer Settings". If the option CheckForNewVersions is set to True, a special text file is downloaded from www.viva64.com site when you launch code testing (the commands Check Current File, Check Current Project, Check Solution in PVS-Studio menu). This file contains the number of the latest PVS-Studio version available on the site. If the version on the site is newer than the version installed on the user computer, the user will be asked for a permission to update the tool. If the user agrees, a special separate application PVS-Studio-Updater will be launched that will automatically download and install the new PVS-Studio distribution kit. If the option CheckForNewVersions is set to False, it will not check for the updates.
• We have implemented support for the C++0x standard to the extent it is supported in Visual Studio 2010. Lambda expressions, auto, decltype, static_assert, nullptr, etc. are now supported. In the future, as C++0x support develops in Visual C++, the PVS-Studio analyzer will also support the new C++ language capabilities.
• Now you can check solutions with PVS-Studio from the command line instead of from the Visual Studio environment. Note that the check still relies on Visual Studio project (.vcproj) and solution (.sln) files, but it is launched from the command line instead of the IDE. This way of launching the tool may be useful when you need to check the code regularly with build systems or continuous integration systems.
• New rule V1212: Data race risk. When accessing the array 'foo' in a parallel loop, different indexes are used for writing and reading.
• We added a code signing certificate in the new version of our tool. This lets you be sure that the distribution kit is authentic and reduces the number of warnings from the operating system when installing the application.

## PVS-Studio 3.44 (21 January 2010)

• Partial support for checking code targeting Itanium processors. Code built in Visual Studio Team System for Itanium processors can now also be checked with the analyzer. Analysis can be performed on x86 and x64 systems, but analysis on Itanium itself is not implemented yet.
• We reduced the number of the analyzer's false alarms when analyzing an array access. Now, in some cases, the analyzer "understands" the ranges of values in the for loop and does not generate unnecessary warnings on accessing arrays with these indexes. For example: for (int i = 0; i < 8; i++) arr[i] = foo(); // no warning from the analyzer.
• The number of the analyzer's false alarms is reduced - we introduced a list of data types that do not form large arrays. For example, HWND, CButton. Users may compose their own type lists.
• Corrected an installer error that occurred when installing the program into a folder other than the default one.

## PVS-Studio 3.43 (28 December 2009)

• Option ShowAllErrorsInString removed (now it always has the value true).
• New rule V120: Member operator[] of object 'foo' declared with 32-bit type argument, but called with memsize type argument.
• New rule V302: Member operator[] of 'foo' class has a 32-bit type argument. Use memsize-type here.
• Operator[] analysis enhanced.
• Corrected an error that caused slow uninstallation after a repeated installation over an existing copy of the program.
• Fixed a problem related to analyzing files with the "^" character in the filename.

## PVS-Studio 3.42 (9 December 2009)

• Diagnostics of errors involving magic numbers enhanced. Problem messages now contain more information, which allows filters to be used more flexibly.
• Corrected an error when working with certain kinds of precompiled header files.
• The DoTemplateInstantiate option is now turned on by default.
• Corrected a preprocessor hang that occurred when a large number of preprocessor messages were generated.
• Analysis of operator[] enhanced.

## PVS-Studio 3.41 (30 November 2009)

• Corrected an error in analyzing identically named files on multicore machines.
• Corrected incorrect diagnostics of some types of cast expressions.
• Parsing of overloaded functions in the analyzer improved considerably.
• Added diagnostics of incorrect use of the time_t type.
• Added processing of special parameters in the settings of Visual C++ project files.

## PVS-Studio 3.40 (23 November 2009)

• A new feature, "Mark as False Alarm", has been added. It makes it possible to mark those lines in the source code where the analyzer produces a false alarm; after such marking, the analyzer no longer outputs diagnostic messages for that code. This makes it convenient to use the analyzer continuously during software development to verify new code.
• Added support for Project Property Sheets, a convenient way to configure Visual Studio projects.
• When verifying parallel programs, the analyzer can walk the code twice, which allows it to collect more information and diagnose some errors more precisely.

## PVS-Studio 3.30 (25 September 2009)

• PVS-Studio can now check 32-bit projects to estimate the complexity and cost of migrating code to 64-bit systems.
• A new rule for 64-bit code analysis has been added, V118: malloc() function accepts a dangerous expression in the capacity of an argument.
• A new rule for 64-bit code analysis has been added, V119: More than one sizeof() operators are used in one expression.
• A new rule for parallel code analysis has been added, V1211: The use of 'flush' directive has no sense for private '%1%' variable, and can reduce performance.
• Interoperation with the Intel C++ Compiler has been improved (a crash when attempting code verification with the Intel C++ Compiler installed has been corrected).
• Support for localized versions of Visual Studio has been enhanced.

## PVS-Studio 3.20 (7 September 2009)

• Corrected incorrect output of some messages in localized versions of Visual Studio.
• Critical error handling improved; it is now easy to inform us about possible problems with the tool.
• Installer operation improved.
• Corrected an error in walking project files.

## PVS-Studio 3.10 (10 August 2009)

• Template instantiation support has been added. The search for potential errors is now carried out not simply over the template body (as it was earlier); template parameters are also substituted for more thorough diagnostics.
• The code analyzer can work in a Linux environment simulation mode. We have added support for various data models, so it is now possible to verify cross-platform programs on a Windows system the way it would be done on a Linux system.
• Corrected an error in the analyzer of parallel errors in 32-bit environments.
• The work of the analyzer with templates has been considerably improved.

## PVS-Studio 3.00 (27 July 2009)

• The Viva64 and VivaMP products have been united into a single program suite, PVS-Studio.
• The new version is a significantly upgraded software product.
• The Visual Studio integration module is much more stable.
• Operation speed on multi-processor systems is increased: analysis is performed in several threads, and the number of the analyzer's worker threads can be set with the "Thread Count" option. By default the number of threads corresponds to the number of processor cores, but it can be reduced.
• The ability to run the analyzer from the command line is added. A new option, "Remove Intermediate Files", is added to the program settings; it allows you to keep the command files created during the code analyzer's operation. These command files can be launched separately, without Visual Studio, to perform analysis. Similarly, by creating new command files you can analyze a whole project without using Visual Studio.
• Managing individual diagnostics has become simpler, more convenient, and quicker. You can now enable and disable the display of individual errors in the analysis results. Most importantly, the message list changes automatically without relaunching the analysis. After an analysis, you can scroll through the error list or simply hide those errors that are not relevant to your project.
• Working with error filters has been greatly improved. Filters for hiding messages are now defined simply as a list of strings. As with individual diagnostics, applying filters does not require relaunching the analysis.
• Change of licensing policy. Although PVS-Studio is a single product, we provide licensing both for the separate analysis units, Viva64 and VivaMP, and for all units together. There are also licenses for a single user or for a team of developers. All these changes are reflected in the registration keys.
• Support of localized versions of Visual Studio has been improved greatly.
• The help system for the new version of PVS-Studio, which integrates into MSDN, has been reworked and greatly improved. The descriptions in the new sections help you master the product more quickly.
• The graphic design of the product has been improved: new icons and graphics in the installer improve the analyzer's appearance.

## VivaMP 1.10 (20 April 2009)

• The analysis of the code containing calls of the class static functions has been improved.
• New diagnostic rules for the analysis of errors connected with the exceptions V1301, V1302, V1303 have been implemented.
• The error of the incorrect display of the analysis progress indicator on machines with non-standard DPI has been corrected.
• Some other enhancements have been implemented.

## VivaMP 1.00 (10 March 2009)

• VivaMP 1.00 release.

## VivaMP 1.00 beta (27 November 2008)

• First public beta version release on the Internet.

## Viva64 2.30 (20 April 2009)

• New diagnostic rule V401 has been implemented.
• Constants processing has been improved; in a number of cases this reduces the number of false diagnostic warnings.
• The error of the incorrect display of the analysis progress indicator on machines with non-standard DPI has been corrected.
• A number of errors have been corrected.

## Viva64 2.22 (10 March 2009)

• Collaboration of Viva64 and VivaMP is improved.
• Analyzer performance is improved by up to 10%.

## Viva64 2.21 (27 November 2008)

• Collaboration of Viva64 and VivaMP is added.

## Viva64 2.20 (15 October 2008)

• Diagnostics of potentially unsafe constructions are improved. As a result, the number of the code analyzer's "false alarms" is reduced by approximately 20%. The developer will now spend less time analyzing code diagnosed as potentially unsafe.
• The Help system is amended: it has been extended and new examples have been added. As diagnostics of potentially unsafe constructions are improved in this version, the Help system has also been supplemented with explanations of the constructions which are now considered safe.
• The speed of analyzing a project's structure is raised. The same work is now performed 10 times faster, reducing the total analysis time for a whole project.
• C++ template analysis is improved. It is no secret that not all code analyzers understand templates. We are constantly working to improve the diagnostics of potentially unsafe constructions in templates, and such an improvement is made in this version.
• The format of some of the code analyzer's messages is amended to make it possible to set filters more accurately. For example, the analyzer now not only reports an incorrect index type when accessing an array but also shows the name of the array itself. If the developer is sure that such an array cannot cause problems in 64-bit mode, he can filter all the messages mentioning that array's name.

## Viva64 2.10 (05 September 2008)

• Visual C++ 2008 Service Pack 1 support is added.

## Viva64 2.0 (09 July 2008)

• Visual C++ 2008 Feature Pack (and TR1) support is added.
• Pedantic mode is added, which allows you to find constructions that are potentially dangerous but rarely cause errors.
• Diagnostics of template functions are improved.

## Viva64 1.80 (03 February 2008)

• Visual Studio 2008 is fully supported now.
• Source code analysis speed is increased.
• Installer is improved. Now you can install Viva64 without administrator privileges for personal usage.

## Viva64 1.70 (20 December 2007)

• The support of a new diagnostic message (V117) is added. Memsize type used in union.
• Fixed a critical bug related to detecting more than one error in a source line.
• Fixed a bug in type evaluation for some complex syntax.
• User Interface is improved. Now you can see a common analysis progress indicator.
• Visual Studio 2008 support is added (BETA).

## Viva64 1.60 (28 August 2007)

• The support of a new diagnostic message (V112) is added. Dangerous magic number used.
• The support of a new diagnostic message (V115) is added. Memsize type used for throw.
• The support of a new diagnostic message (V116) is added. Memsize type used for catch.
• The restriction of a trial version is changed. In each analyzed file the location of only some errors is shown.

## Viva64 1.50 (15 May 2007)

• C source analysis is fully supported. Now C source code may be analyzed correctly.

## Viva64 1.40 (1 May 2007)

• Message Suppression feature added. You can adjust filters on the Message Suppression page of the Viva64 settings to ignore some of the warning messages. For example, you can adjust filters to skip messages with particular error codes and messages including names of specific variables and functions.
• Analysis results representation improved. The results are now displayed in the Visual Studio standard Error List window, just like the compiler messages.

## Viva64 1.30 (17 March 2007)

• The presentation of the code analysis process is improved: unnecessary window switching is removed and a general progress bar is added.
• A toolbar with Viva64 commands is added.
• The user can now tell the analyzer whether the program uses more than 2 GB of RAM. When less than 2 GB is used, some warning messages are disabled.
• The support of a new diagnostic message (V113) is added. Implicit type conversion from memsize to double type or vice versa.
• The support of a new diagnostic message (V114) is added. Dangerous explicit type pointer conversion.
• The support of a new diagnostic message (V203) is added. Explicit type conversion from memsize to double type or vice versa.

## Viva64 1.20 (26 January 2007)

• Filtering of repeated error messages is added. It is useful when there are errors in header files. Previously, if an *.h file with an error was included into different *.cpp files, the warning about the error in the *.h file was shown several times. Now only one message about the error in the *.h file is shown.
• Viva64 now reports the number of errors found after the code analysis. You can always see:
  - how much code is left to check;
  - how many errors have already been corrected;
  - which modules contain the largest number of errors.
• Support for some hotkeys is added. You can now interrupt the analyzer's work with Ctrl+Break. To check the current file, press Ctrl+Shift+F7.
• Some errors in the analyzer's operation are corrected.

## Viva64 1.10 (16 January 2007)

• Using the Viva64 analyzer itself, we have prepared a 64-bit version of Viva64! You need not worry about choosing the right version during installation: the installer will determine which version should be installed for your operating system.
• Support of a new rule is added: parameters of functions with a variable number of arguments are now checked (error code V111).
• Removed unnecessary diagnostics for accessing array items using enum values.
• Removed unnecessary diagnostics for constructions of the kind int a = sizeof(int).
• The Help System is improved.

## Viva64 1.00 (31 December 2006)

• First public release on the Internet.

# PVS-Studio and Continuous Integration

This article discusses integration of PVS-Studio into the continuous integration process on Windows. Integration into the CI process on Linux is discussed in the article "How to run PVS-Studio on Linux".

## How to use static analysis most efficiently

Before talking about the subject of this article, it is useful to know that running PVS-Studio solely on the build server works, but is not the most efficient setup. A better solution is a system that performs source code analysis at two levels: locally on the developers' machines and on the build server.

This concept stems from the fact that the earlier a defect is detected, the less expensive and difficult it is to fix. For that reason, you want to find and fix bugs as soon as possible, and running PVS-Studio on the developers' machines makes this easier. We recommend using the incremental analysis mode, which allows you to have analysis automatically initiated only for recently modified code after the build.

However, this solution does not guarantee that defects will never get to the version control system. It is to track such cases that the second security level - regular static analysis on the build server - is needed. Even if a bug does slip in, it will be caught and fixed in time. With the analysis integrated into night builds, you will get a morning report about the errors made the day before and be able to fix the faulty code quickly.

Note. It is not recommended to have the analyzer check every commit on the server, as the analysis process may take quite a long time. If you do need to use it this way and your project is built with the MSBuild build system, use the incremental analysis mode of the command line module 'PVS-Studio_Cmd.exe'. For details about this mode, see the section "Incremental analysis in command line module 'PVS-Studio_Cmd.exe'" of this paper. You can also use the 'CLMonitor.exe' utility (for C and C++ code only) to analyze your source files in this mode, regardless of the build system. To learn more about the use of the 'CLMonitor.exe' utility, see the section "Compiler monitoring system" of this paper.

## Preparing for CI

Preparing for integration of PVS-Studio into the CI process is an important phase that will help you save time in the future and use static analysis more efficiently. This section discusses the specifics of PVS-Studio customization that will make further work easier.

### Unattended deployment of PVS-Studio

You need administrator privileges to install PVS-Studio. Unattended installation is performed by running the following command from the command line (in one line):

PVS-Studio_setup.exe /verysilent /suppressmsgboxes
/norestart /nocloseapplications

Executing this command will initiate installation of all available PVS-Studio components. Please note that PVS-Studio may require a restart to complete installation if, for example, the files being updated are locked. If you run the installer without the 'NORESTART' flag, it may restart the computer without any prior notification or dialogue.

The package includes the utility 'PVS-Studio-Updater.exe', which checks for analyzer updates. If updates are available, it downloads and installs them on the local machine. To run the utility in 'silent' mode, use the same options as for installation:

PVS-Studio-Updater.exe /verysilent /suppressmsgboxes

The settings file is generated automatically when you run Visual Studio with the PVS-Studio plug-in installed, or when you run the C and C++ Compiler Monitoring UI (Standalone.exe); it can then be edited or copied to other machines. License information is also stored in the settings file. The default location of this file is:

%AppData%\PVS-Studio\Settings.xml
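Since the settings, including license information, live in this single XML file, provisioning several build agents can come down to copying it into each agent's profile. Below is a minimal sketch of such a deployment step; the `deploy_settings` helper and the demo paths are hypothetical, while the `Settings.xml` name and the `%AppData%\PVS-Studio` location come from the documentation above:

```python
import os
import shutil
import tempfile

def deploy_settings(settings_path, agent_appdata_dirs):
    """Copy the PVS-Studio settings file into each agent's %AppData% tree."""
    for appdata in agent_appdata_dirs:
        target_dir = os.path.join(appdata, "PVS-Studio")
        os.makedirs(target_dir, exist_ok=True)   # create the folder if missing
        shutil.copy2(settings_path, os.path.join(target_dir, "Settings.xml"))

# Demo with temporary directories standing in for %AppData% on the agents;
# in real use the source would be %AppData%\PVS-Studio\Settings.xml.
src_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "Settings.xml")
with open(src, "w") as f:
    f.write("<ApplicationSettings />")

agents = [tempfile.mkdtemp(), tempfile.mkdtemp()]
deploy_settings(src, agents)
```

The same copy can of course be done with a plain `xcopy` in a provisioning script; the point is only that no per-agent configuration step is needed beyond placing the file.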

### Preliminary configuration of the analyzer

Before running the analyzer, you need to configure it to optimize handling of the warning list and (if possible) speed up the analysis process.

Note. The options discussed below can be changed by manually editing the settings file or through the settings page's interface of the Visual Studio plug-in or Compiler Monitoring UI.

It may often be helpful to exclude certain files or even entire directories from analysis - this will allow you to keep the code of third party libraries unchecked, thus reducing the overall analysis time and ensuring that you will get only warnings relevant to your project. The analyzer is already configured by default to ignore some files and paths such as the boost library. To learn more about excluding files from analysis, see the article "Settings: Don't Check Files".

At the phase of analyzer integration, you also want to turn off those PVS-Studio diagnostics that are irrelevant to the current project. Diagnostics can be disabled both individually and in groups. If you know which diagnostics are irrelevant, turn them off right away to speed up the check. Otherwise, you can turn them off later. To learn more about disabling diagnostic rules, see the article "Settings: Detectable Errors".

### Suppression of the warnings related to the old code

When integrating static analysis into an existing project with a large codebase, the first check may reveal multiple defects in its source code. The developer team may lack the resources required for fixing all such warnings, and then you need to hide all the warnings triggered by the existing code so that only warnings triggered by newly written/modified code are displayed.

To do this, use the mass warning suppression mechanism, described in detail in the article "Mass Suppression of Analyzer Messages".

Note 1. If you need to hide only single warnings, use the false positive suppression mechanism described in the article "Suppression of false alarms".

Note 2. Using SonarQube, you can specify how warnings issued within a certain period are displayed. You can use this feature to have the analyzer display only those warnings that were triggered after the integration (that is, turn off the warnings triggered by old code).

## Integrating PVS-Studio into the CI process

Integrating PVS-Studio into the CI process is relatively easy. In addition, it provides means for convenient handling of analysis results.

Integration of PVS-Studio with the SonarQube platform is possible only if you own an Enterprise license. You can order one by emailing us.

The principles of analyzing projects based on different build systems are described below, together with the utilities for working with analysis results.

## Analyzing the source code of MSBuild / Visual Studio projects

This section discusses the most effective way of analyzing MSBuild / Visual Studio solutions and projects, i.e. Visual Studio solutions (.sln) and Visual C++ (.vcxproj) and Visual C# (.csproj) projects.

### General information

Project types listed above can be analyzed from the command line by running the 'PVS-Studio_Cmd.exe' module, located in PVS-Studio's installation directory. The default location is 'C:\Program Files (x86)\PVS-Studio\'.

You can modify analysis parameters by passing various arguments to 'PVS-Studio_Cmd.exe'. To view the list of all available arguments, enter the following command:

PVS-Studio_Cmd.exe --help

The analyzer has one obligatory argument, '--target', which is used to specify the target object for analysis (a .sln, .vcxproj, or .csproj file). The other arguments are optional; they are discussed in detail in the article "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line".

The following example demonstrates how to start analysis of a .sln file (in one line):

PVS-Studio_Cmd.exe --target "targetsolution.sln" --platform "Any CPU"
--output "results.plog" --configuration "Release"

Executing this command will initiate analysis of .sln file 'targetsolution.sln' for platform 'Any CPU' in 'Release' configuration. The output file ('results.plog') will be created in the directory of the solution under analysis. The check will be performed with the standard analyzer settings since no specific settings have been specified.

The 'PVS-Studio_Cmd.exe' module employs a number of non-zero exit codes, which it uses to report the final analysis status. An exit code is a bit mask representing all states that occurred while the utility was running. In other words, a non-zero exit code does not necessarily indicate an error in the utility's operation. For a detailed description of exit codes, see the above-mentioned article "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line".
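Because the exit code is a bit mask rather than a single status, a CI script should test individual bits instead of comparing the code against fixed values. A sketch of such decoding, with hypothetical flag names chosen for illustration (the actual meaning of each bit is listed in the referenced article):

```python
# Hypothetical names for illustration only; see the PVS-Studio documentation
# for the real meaning of each bit in the PVS-Studio_Cmd.exe exit code.
FLAG_GENERAL_ERROR   = 1 << 0
FLAG_LICENSE_EXPIRED = 1 << 2
FLAG_ANALYZER_DIFF   = 1 << 8

flags = {
    "general_error": FLAG_GENERAL_ERROR,
    "license_expired": FLAG_LICENSE_EXPIRED,
    "analyzer_diff": FLAG_ANALYZER_DIFF,
}

def decode_exit_code(code, flags):
    """Return the names of all flags set in a bit-mask exit code."""
    return [name for name, bit in flags.items() if code & bit]

# Exit code 0 means no reported states; a non-zero code may combine several.
print(decode_exit_code(0, flags))  # []
print(decode_exit_code(FLAG_GENERAL_ERROR | FLAG_ANALYZER_DIFF, flags))
```

A CI step would then fail the build only on the bits it actually cares about, rather than on any non-zero exit code.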

### Displaying analysis results only for newly written / modified code

If you use the analyzer regularly, you may want it to issue warnings triggered only by newly written/modified code. With night builds on the build server, this would allow you to view only those warnings that were triggered by mistakes made on the previous day.

To turn on this mode, run the 'PVS-Studio_Cmd.exe' module with the command line argument '--suppressAll'. When this flag is present, the utility will add all the messages to the database of suppressed warnings (.suppress files of the corresponding projects) after saving the analysis results. This will prevent those messages from appearing at the next check. In case you need to view the old warnings again, the complete analysis log can be found in the same directory where the .plog file with new messages is located.

To learn more about the mass warning suppression mechanism, see the article "Mass Suppression of Analyzer Messages".

Note. When using the SonarQube platform, you can keep track of new messages without applying the suppression mechanisms. To do this, configure it to display changes only for the past day.

### Incremental analysis in command line module 'PVS-Studio_Cmd.exe'

PVS-Studio's incremental analysis mode allows you to check only those files that have been modified/affected since the last build. This mode is available in both the Visual Studio plug-in and the command line module. With incremental analysis, only warnings triggered by modified code will be displayed, thus reducing the analysis time by excluding unaffected parts of the solution from analysis.

This mode is useful when your continuous integration system is configured to run an automatic incremental build every time changes in the version control system are detected; that is, when the project is built and analyzed on the build server many times during the day.

The use of incremental analysis in the 'PVS-Studio_Cmd.exe' module is controlled by the flag '--incremental'. The following modes are available here:

• Scan - analyze dependencies to determine which files must be included into incremental analysis. The analysis process itself is not initiated.
• Analyze - run incremental analysis. This step must be performed after the Scan step and can be performed both before and after building the solution/project. Only those files that have changed since the last build will be analyzed.
• ScanAndAnalyze - analyze dependencies to determine which files must be included into incremental analysis, and immediately start incremental analysis.
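The Scan and Analyze steps above map onto two consecutive invocations of 'PVS-Studio_Cmd.exe'. A sketch that builds the two command lines (the '--target' and '--incremental' arguments come from this documentation; the installation path and solution name are placeholders):

```python
import subprocess

PVS_CMD = r"C:\Program Files (x86)\PVS-Studio\PVS-Studio_Cmd.exe"
SOLUTION = r"targetsolution.sln"  # placeholder solution path

def incremental_args(step):
    """Build the argument list for one incremental-analysis step."""
    return [PVS_CMD, "--target", SOLUTION, "--incremental", step]

# Step 1: scan dependencies before the build; step 2: analyze after the build.
for step in ("Scan", "Analyze"):
    cmd = incremental_args(step)
    print(" ".join(cmd))
    # subprocess.run(cmd)  # uncomment on a machine with PVS-Studio installed
```

The build itself runs between the two invocations, so in a CI pipeline these would typically be three consecutive job steps: scan, build, analyze.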

Note. There are a few details to keep in mind about this mode. Specifically, you could encounter a file locking issue when PVS-Studio uses Visual C++'s preprocessor ('cl.exe'). It has to do with the fact that the 'cl.exe' compiler may lock a file while preprocessing it, causing writing of this file to fail. When the Clang preprocessor is used, this issue is much rarer. Please keep this in mind when configuring the server to run incremental analysis rather than full-fledged analysis at night.

### Analysis of CMake projects

If you need to analyze CMake projects, it is recommended that you convert them into Visual Studio solutions and continue to work with these. This will allow you to use the 'PVS-Studio_Cmd.exe' module's capabilities in full.

## Analyzing projects that use uncommon build systems

If your project uses a build system other than MSBuild, you will not be able to analyze it with the command line module 'PVS-Studio_Cmd.exe'. The package, however, includes utilities to make it possible to analyze such projects too.

### Compiler monitoring system

The PVS-Studio Compiler Monitoring system, or CLMonitoring, is designed to provide 'seamless' integration of PVS-Studio into any build system under Windows that employs one of the preprocessors supported by the command line module 'PVS-Studio.exe' for compilation.

The monitoring server (CLMonitor.exe) monitors the launches of processes corresponding to the target compiler and collects information about these processes' environment. The server monitors only those processes that run under the same user profile where it has been launched.

Supported compilers:

• Microsoft Visual C++ compilers (cl.exe);
• C/C++ compilers of the GNU Compiler Collection (gcc.exe, g++.exe);
• Clang compiler (clang.exe) and Clang-based compilers.

Before integrating the monitoring server into the build process, start the 'CLMonitor.exe' module with the argument 'monitor':

CLMonitor.exe monitor

This command tells the monitoring server to spawn itself in monitoring mode and terminate, so the build system can continue with its tasks. Meanwhile, the second CLMonitor process (launched by the first) will still be running and monitoring the build process.

Once the build is complete, you will need to launch the 'CLMonitor.exe' module in client mode to generate preprocessed files and start static analysis proper:

CLMonitor.exe analyze -l "c:\ptest.plog" -u "c:\ptest.suppress" -s

This command contains the following arguments:

• analyze - run the 'CLMonitor.exe' module for analysis;
• -l - full path to the file the analysis results will be saved to;
• -u - path to suppress file;
• -s - append all new messages of the current check to the suppress file.
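Putting these pieces together, a build wrapper starts the monitor, runs the build, and then triggers the analysis. A sketch of the sequence (the CLMonitor arguments come from the documentation above; the installation path, build command, and output paths are placeholders):

```python
import subprocess

CLMONITOR = r"C:\Program Files (x86)\PVS-Studio\CLMonitor.exe"

def monitored_build(build_cmd, plog, suppress):
    """Run a build under CLMonitor; returns the command lines for inspection."""
    steps = [
        [CLMONITOR, "monitor"],          # start monitoring, returns immediately
        build_cmd,                       # the actual build (any build system)
        [CLMONITOR, "analyze",
         "-l", plog,                     # file to save the analysis results to
         "-u", suppress,                 # path to the suppress file
         "-s"],                          # append new messages to the suppress file
    ]
    for step in steps:
        print(" ".join(step))
        # subprocess.run(step)  # uncomment on a machine with PVS-Studio installed
    return steps

steps = monitored_build(["make", "all"], r"c:\ptest.plog", r"c:\ptest.suppress")
```

Because the server only monitors processes in the same user profile, all three steps must run under the same account on the build agent.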

To learn more about the use of the compiler monitoring system, see the article "Compiler Monitoring System in PVS-Studio".

• Note. The compiler monitoring system has a number of drawbacks stemming from the natural limitations of this approach, namely the impossibility of guaranteeing that 100% of compiler launches are intercepted during the build process (for example, when the system is under heavy load). Another thing to remember is that when several build processes run in parallel, the system may intercept compiler launches belonging to another build.

### Direct integration into build automation systems

Note. In direct integration mode, the analyzer can check only C/C++ code.

Direct integration may be necessary when you cannot use the command line module 'PVS-Studio_Cmd.exe' (since the project is built with a system other than MSBuild) and the compiler monitoring system (see the note in the corresponding section).

In that case, you need to integrate a direct call of the analyzer ('PVS-Studio.exe') into the build process and provide it with all the arguments required for preprocessing. That is, the analyzer must be called for the same files that the compiler is called for.

To learn more about direct integration into build automation systems, see the article "Direct integration of the analyzer into build automation systems (C/C++)".

## Handling analysis results

Once the check has finished, the analyzer outputs a .plog file in the XML format. This file is not intended to be handled manually (read by the programmer). The package, however, includes special utilities whose purpose is to provide a convenient way to handle the .plog file.

### Preliminary filtering of analysis results

The analysis results can be filtered even before the analysis starts by using the No Noise setting. When working on a large code base, the analyzer inevitably generates a large number of warnings, and it is often impossible to fix all of them right away. Therefore, to concentrate on fixing the most important warnings first, the analysis can be made less "noisy" with this option, which completely disables the generation of Low Certainty (level 3) warnings. After restarting the analysis, the messages of this level will disappear from the analyzer's output.

When circumstances allow, and all of the more important messages have been fixed, the 'No Noise' mode can be switched off, and all of the messages that disappeared before will become available again.

To enable this setting, use the Specific Analyzer Settings page.

### PlogConverter

'PlogConverter.exe' is used to convert the analyzer report into one of several formats convenient for the programmer:

• text file with analysis results. It may be convenient when you want the analysis results (for example, new diagnostic messages) output into the log of the build system or CI server;
• HTML report with a short description of the analysis results. It is best suited for e-mail notifications;
• HTML report with sorting of the analysis results according to the different parameters and navigation along the source code;
• CSV table with analysis results;
• Tasks file to be viewed in QtCreator;
• text file with a summary table showing the number of messages across severity levels and groups of diagnostics.

This example demonstrates how to use 'PlogConverter.exe' utility (in one line):

PlogConverter.exe test1.plog -o "C:\Results" -r "C:\Test"
-a GA:1 -t Html

This command converts the 'test1.plog' file into an .html file that will include the first-level diagnostic messages of the GA (general-analysis) group. The resulting report will be written to 'C:\Results', while the original .plog file will stay unchanged.

To see full help on 'PlogConverter' utility's parameters, run the following command:

PlogConverter.exe --help

Note. The 'PlogConverter' utility comes with its source files (in C#), which can be found in the 'PlogConverter_src.zip' archive. You can adapt its algorithm for parsing a .plog file's structure to create your own output format.
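Since the .plog file is plain XML, a custom reader can be sketched with standard tools. The element names below ('PVS-Studio_Analysis_Log', 'ErrorCode', 'Level') are assumptions for illustration only; check the PlogConverter sources for the actual schema of your analyzer version.

```python
# Minimal sketch of reading a .plog-like XML report with the standard library.
# Element names here are illustrative assumptions, not a documented schema.
import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE_PLOG = """<?xml version="1.0"?>
<NewDataSet>
  <PVS-Studio_Analysis_Log>
    <ErrorCode>V501</ErrorCode>
    <Level>1</Level>
    <File>test.cpp</File>
  </PVS-Studio_Analysis_Log>
  <PVS-Studio_Analysis_Log>
    <ErrorCode>V112</ErrorCode>
    <Level>3</Level>
    <File>main.cpp</File>
  </PVS-Studio_Analysis_Log>
</NewDataSet>"""

def count_by_level(plog_text):
    """Count diagnostic messages per certainty level."""
    root = ET.fromstring(plog_text)
    levels = Counter()
    for msg in root.iter("PVS-Studio_Analysis_Log"):
        levels[msg.findtext("Level")] += 1
    return dict(levels)

print(count_by_level(SAMPLE_PLOG))  # {'1': 1, '3': 1}
```

Such a reader could, for instance, feed a custom summary table similar to the one PlogConverter produces.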

### SonarQube

Analysis results can be imported into the SonarQube platform, which performs continuous code quality inspection. To do this, use the 'sonar-pvs-studio-plugin' included into the package. This plugin allows you to add warnings issued by PVS-Studio to the SonarQube server's message database. This, in its turn, enables you to view bug occurrence/fixing statistics, navigate the warnings, view the documentation on diagnostic rules, and so forth.

Once added to SonarQube, all PVS-Studio messages are assigned the type Bug. SonarQube's interface keeps the same distribution of messages across diagnostic groups as the analyzer.

To learn more about integrating analysis results into SonarQube, see the article "Integration of PVS-Studio analysis results into SonarQube".

### PVS-Studio plugin for Jenkins

At the moment, the PVS-Studio plugin for Jenkins is supported only on Windows operating systems.

The distribution package includes the 'pvs-studio.hpi' plugin, designed to publish PVS-Studio analysis results in the Jenkins continuous integration system.

Note. The plugin for Jenkins publishes the results of static analysis using the analyzer's reports (.plog files). This allows you to configure the publication of various kinds of analysis results: full project analysis, incremental analysis (analysis of only the modified files), etc. Accordingly, to publish the results of the various types of analysis, you only need to configure the analyzer itself correctly and pass the paths of one or more related .plog files to the plugin (this is described below). Full analysis is described in the sections "Analysis of the source code of MSBuild / Visual Studio projects" and "Analyzing projects that use uncommon build systems"; the incremental analysis mode is described in the article "PVS-Studio's incremental analysis mode".

To install the plugin, upload it from the 'Advanced' tab of the plugin setup menu (Manage Jenkins - Manage Plugins): select the plugin in the 'Upload Plugin' section and perform the installation by clicking 'Upload'. Jenkins will install the plugin and notify you if a server reboot is necessary.

In the server settings ('Configure System'), in the 'PVS-Studio Plugin Settings' section, you must specify the PVS-Studio installation directory. The default is 'C:\Program Files (x86)\PVS-Studio\'.

To publish the analysis results, in the project settings you must add the post-build step 'Publish PVS-Studio analysis result' ('Post-build Actions' section). The mandatory field 'Path(s) to PVS-Studio analysis report(s)' sets the path to the .plog file (the PVS-Studio result) whose contents will be displayed on the build page. Both absolute and relative (to the project workspace) paths are supported, as are Jenkins environment variables.

If it is necessary to display the integrated results of several .plog files, their paths must be separated by the '|' character.
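Splitting such a '|'-separated path list can be sketched as follows; whether the plugin trims surrounding whitespace exactly like this is an assumption made for illustration.

```python
# Sketch: parsing a '|'-separated list of report paths, as used by the
# Jenkins plugin setting described above. Surrounding whitespace is
# trimmed and empty entries are dropped (an assumed behavior).
def split_report_paths(raw):
    return [p.strip() for p in raw.split("|") if p.strip()]

paths = split_report_paths(
    r"$REPORT_PATH | D:\analysis reports\testLog.plog | .\additionalLog.plog")
print(paths)
```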

An example of setting paths to several reports to publish an integrated result:

$REPORT_PATH | D:\analysis reports\testLog.plog | .\additionalLog.plog

Additional parameters ('Analyzers and levels settings', 'Excluded codes', 'Settings path') allow supplementary filtering of analyzer warnings. A brief description of these parameters is available directly in the form where they are set. Their meaning and possible values are similar to the 'analyzer', 'excludedCodes' and 'settings' parameters of the PlogConverter utility, which are described in the article "Managing the Analysis Results (.plog file)".

After the PVS-Studio analysis results have been published successfully, they will be displayed on the build page. The analysis results are stored for each build, which allows you to view the analyzer report corresponding to a specific build. If the resulting analyzer report is large, only a preview will be displayed on the build page. The full report is available by clicking the 'View full report' link above or under the report preview, or 'Full PVS-Studio analysis report' in the menu on the left.

Figure 1 - Build page displaying the results of PVS-Studio analysis

## Sending analysis results via email with BlameNotifier

Sending analysis report copies to developers is an effective way to inform them about the results. It can be done with the help of special utilities such as SendEmail. SonarQube provides this option as well.

Another way to inform the developers is to use the 'BlameNotifier' utility, which also comes with the PVS-Studio package. This application allows you to form reports in a flexible way. For example, you can configure it so that it sends individual reports to the developers who submitted faulty code, while team leaders, development managers, etc. get a complete log with the data about all the errors found and the developers responsible for them.
For basic information about the utility, run the following command:

BlameNotifier.exe --help

To learn more about 'BlameNotifier', see the article "Managing the Analysis Results (.plog file)", section "Notifying the developer team".

## Conclusion

If you have any questions, please feel free to contact us at support@viva64.com.

# Direct integration of the analyzer into build automation systems (C/C++)

We recommend using the PVS-Studio analyzer through the Microsoft Visual Studio development environment, into which the tool is seamlessly integrated. But sometimes you may face situations when a command-line launch is required, for instance, in the case of a cross-platform build system based on makefiles.

If you possess project (.vcproj/.vcxproj) and solution (.sln) files and need command-line execution for, say, daily code checks, we advise you to examine the article "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line". In addition, regardless of the build system being used, you can use the PVS-Studio compiler monitoring system.

## PVS-Studio analyzer independent mode

So, how does a code analyzer work (be it PVS-Studio or any other tool)? When the user tells the analyzer to check some file (for example, file.cpp), the analyzer first preprocesses this file. As a result, all the macros are expanded and the #include files are resolved. The preprocessed i-file can then be parsed by the code analyzer. Note that the analyzer cannot parse a file which has not been preprocessed, for it won't have information about the types, functions and classes being used. Operation of any code analyzer thus includes at least two steps: preprocessing and the analysis itself.

It is possible that C++ sources have no project files associated with them; this can happen, for example, with multiplatform software or old projects built using command-line batch utilities.
Various make systems, such as Microsoft NMake or GNU Make, are often employed to control the build process in such cases. To analyze such projects, it is necessary to embed a direct call to the analyzer into the build process (by default, the executable is located at '%programfiles%\PVS-Studio\x64\PVS-Studio.exe') and to pass it all the arguments required for preprocessing. In fact, the analyzer should be called for the same files for which the compiler (cl.exe in the case of Visual C++) is called.

The PVS-Studio analyzer should be called in batch mode for each C/C++ file or for a whole group of files (files with c/cpp/cxx etc. extensions; the analyzer shouldn't be called for header h files) with the following arguments:

PVS-Studio.exe --cl-params %ClArgs% --source-file %cppFile% --cfg %cfgPath% --output-file %ExtFilePath%

• %ClArgs% - arguments which are passed to the cl.exe compiler during regular compilation, including the path to the source file (or files).
• %cppFile% - path to the analyzed C/C++ file, or paths to a collection of C/C++ files (the filenames should be separated by spaces). The %ClArgs% and %cppFile% parameters should be passed to the PVS-Studio analyzer in the same way they are passed to the compiler, i.e. the full path to the source file should be passed twice, once in each parameter.
• %cfgPath% - path to the PVS-Studio.cfg configuration file. This file is shared between all C/C++ files and can be created manually (an example is presented below).
• %ExtFilePath% - optional argument, a path to an external file in which the results of the analyzer's work will be stored. If this argument is missing, the analyzer will output its messages to stdout. The results generated here can be viewed in Visual Studio's 'PVS-Studio' tool window using the 'PVS-Studio/Open Analysis Report' menu command (selecting 'Unparsed output' as the file type).
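This calling convention, with the source path passed both inside --cl-params and after --source-file, can be sketched as a small helper. The function name and argument layout here are illustrative, not part of PVS-Studio.

```python
# Sketch: assembling a PVS-Studio.exe argument list for direct integration.
# Note that the source file path appears twice: once inside --cl-params
# (exactly as the compiler would receive it) and once after --source-file.
def build_analyzer_args(cl_args, cpp_file, cfg_path, output_file=None):
    args = ["PVS-Studio.exe",
            "--cl-params", cpp_file, *cl_args,
            "--source-file", cpp_file,
            "--cfg", cfg_path]
    if output_file:
        args += ["--output-file", output_file]
    return args

cmd = build_analyzer_args(['/D"WIN32"', '/I"C:\\Test\\"'],
                          "C:\\Test\\test.cpp",
                          "C:\\Test\\PVS-Studio.cfg",
                          "C:\\Test\\test.log")
print(" ".join(cmd))
```

A build script could generate such an argument list for every compiled translation unit.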
Please note that starting from PVS-Studio version 4.52, the analyzer supports output from multiple PVS-Studio.exe processes into a single file (specified through --output-file) in the command-line independent mode. This allows several analyzer processes to be launched simultaneously during a compilation performed by a makefile-based system. The output file will not be overwritten and lost, as a file-locking mechanism is utilized.

Consider this example of starting the analyzer in independent mode for a single file, using the Visual C++ preprocessor (cl.exe):

PVS-Studio.exe --cl-params "C:\Test\test.cpp" /D"WIN32" /I"C:\Test\" --source-file "C:\Test\test.cpp" --cfg "C:\Test\PVS-Studio.cfg" --output-file "C:\Test\test.log"

The PVS-Studio.cfg configuration file (the --cfg parameter) should include the following lines:

exclude-path = C:\Program Files (x86)\Microsoft Visual Studio 10.0
vcinstalldir = C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\
platform = Win32
preprocessor = visualcpp
language = C++

Let's review these parameters:

• The exclude-path parameter contains the directories excluded from analysis. If the Visual Studio directory is not included here, the analyzer will generate error messages for its header .h files, which you of course cannot modify. Therefore, we recommend always adding this path to the exclusions. It is also possible to set multiple exclude-path parameters.
• The vcinstalldir parameter indicates the directory in which the preprocessor being used is located. The supported preprocessors are: Microsoft Visual C++ (cl.exe), Clang (clang.exe) and MinGW (gcc.exe).
• The platform parameter selects the compiler version: Win32, x64, Itanium or ARMV4. It is usually Win32 or x64.
• The preprocessor parameter indicates which preprocessor should be located at vcinstalldir. Supported values are: visualcpp, clang, gcc.
Generally, one should select the preprocessor according to the compiler being used by the build automation system in question.

• The language parameter determines the version of the C/C++ language that the analyzer expects in the file being verified (--source-file) during parsing. Possible values are: C, C++, C++CX, C++CLI. As each of the supported language variants contains specific keywords, an incorrect value for this parameter could lead to V001 parsing error messages.

You can filter the diagnostic messages generated by the analyzer using the analyzer-errors and analysis-mode parameters (set them in the cfg file or pass them through the command line). These parameters are optional.

• The analyzer-errors parameter allows you to set the codes of the errors you are interested in. For example: analyzer-errors=V112 V111. We do not recommend setting this parameter.
• The analysis-mode parameter allows you to control the analyzers being used. Values: 0 - full analysis (default), 1 - only 64-bit analysis, 4 - only general-purpose analysis, 8 - only optimization analysis. The recommended value is 4.

It is also possible to pass the analyzer a ready-made preprocessed file (i-file), skipping the preprocessing phase and proceeding directly to the analysis. To do this, use the skip-cl-exe parameter set to yes. In this mode there is no need to use the cl-params parameter. Instead, specify the path to the i-file (--i-file) and set the type of the preprocessor used to create it. Specifying the path to the source file (--source-file) is also necessary: although the i-file already contains the information needed for analysis, the analyzer may need to compare the i-file with the source code file, for example, when it has to look at an unexpanded macro.
Thus, the call of the analyzer in independent mode with a specified i-file for the Visual C++ preprocessor (cl.exe) could be:

PVS-Studio.exe --source-file "C:\Test\test.cpp" --cfg "C:\Test\PVS-Studio.cfg" --output-file "C:\Test\test.log"

The PVS-Studio.cfg configuration file (the --cfg parameter) should contain the following lines:

exclude-path = C:\Program Files (x86)\Microsoft Visual Studio 10.0
vcinstalldir = C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\
platform = Win32
preprocessor = visualcpp
language = C++
skip-cl-exe = yes
i-file = C:\Test\test.i

The full list of command line switches will be displayed with this argument:

PVS-Studio.exe --help

It should be noted that when calling PVS-Studio.exe directly, the license information stored in the 'Settings.xml' file is not used. When running PVS-Studio.exe, you should explicitly specify the path to a separate license file. This is a text file in UTF-8 encoding consisting of two lines: the name and the key. The path to the license file can be either specified in the PVS-Studio configuration file or passed as a command-line argument, using the lic-file parameter. For example, to specify the path to the license file in the .cfg file, add the following line:

lic-file = D:\Test\license.lic

## An example of using the analyzer independent mode with a Makefile project

As an example, let's take a Makefile project which is built using the Visual C++ compiler and declared in the project's makefile like this:

$(CC) $(CFLAGS) $<

The $(CC) macro calls cl.exe, the compilation parameters $(CFLAGS) are passed to it, and all C/C++ files on which the current build target depends are inserted using the $< macro. Thereby the cl.exe compiler will be called with the required compilation parameters for all source files.

Let's modify this script in such a way that every file is analyzed with PVS-Studio before the compiler is called:

$(PVS) --source-file $< --cl-params $(CFLAGS) $< --cfg "C:\CPP\PVS-Studio.cfg"
$(CC) $(CFLAGS) $<

$(PVS) is the path to the analyzer's executable (%programfiles%\PVS-Studio\x64\PVS-Studio.exe). Take into account that the Visual C++ compiler is called after the analyzer on the next line with the same arguments as before. This is done so that all targets are built correctly and the build does not stop because of missing .obj files.

## Managing analysis results generated from using the command line analyzer mode

The PVS-Studio tool was developed to work within the Visual Studio environment, and launching it from the command line is a function additional to its main working mode. However, all of the analyzer's diagnostic capabilities are available. The error messages generated in this mode can easily be redirected into an external file with the --output-file command line switch. This file will contain the unprocessed and unfiltered analyzer output.

Such a file can be viewed in the PVS-Studio IDE extension or in the C and C++ Compiler Monitoring UI (Standalone.exe) by using the 'Open Analysis Report' menu command (select 'Unparsed output' as the file type) and afterwards saved in the standard PVS-Studio log file (plog) format. This allows you to avoid the duplication of error messages and also to use all of the standard filtering mechanisms for them. In addition, the 'raw' unparsed output can be converted to one of the supported formats (xml, html, csv and so on) by using the PlogConverter command line tool.

## Incremental analysis in independent command line mode

Users who are familiar with PVS-Studio's incremental analysis mode within the IDE will naturally miss this feature in the command line mode. But fortunately, almost any build system provides incremental analysis "out of the box", because invoking "make" recompiles only the files which were modified. So incremental analysis is automatically provided when using the independent command line version.
## Using Microsoft IntelliSense with the analyzer in independent mode

Although it is possible to open the unfiltered text file containing the analyzer's diagnostic messages from within the IDE in the PVS-Studio Output window (which itself gives you file navigation and filtering mechanisms), you will only be able to use the plain code text editor inside Visual Studio, as the additional IntelliSense functionality (autocompletion, type declarations and function navigation, etc.) will be unavailable. This is quite inconvenient while handling analysis results, even more so with large projects, as it forces you to search for class and method declarations manually, greatly increasing the time needed to handle a single diagnostic message.

To solve this issue, you need to create an empty Visual C++ project (a Makefile-based one, for instance) in the same directory as the C++ files being verified by the analyzer (the vcproj/vcxproj file should be created in the root folder which is above every verified file). After creating the empty project, you should enable its 'Show All Files' mode (the button is in the upper part of the Solution Explorer window), which will display all the underlying files in the Solution Explorer tree view. Then you can use the 'Include in Project' context menu command to add all the necessary c, cpp and h files to your project (you will probably also have to add include directory paths for some files, for instance the ones containing third-party library includes). If you include only a fraction of the verified files, remember that IntelliSense may not recognize some of the types used in them, as those types could be defined in the files you did not include.
Figure 1 — including files into the project

The project file we created cannot be used to build or verify the sources with PVS-Studio, but it will substantially simplify the handling of the analysis results. Such a project can also be saved and then used later with the next iteration of analyzer diagnostics in independent mode.

## Differences in behavior of the PVS-Studio.exe console version while processing one file or several files at once

The cl.exe compiler is able to process source files either one at a time or as a whole group of files at once. In the first case the compiler is called several times, once for each file:

cl.exe ... file1.cpp
cl.exe ... file2.cpp
cl.exe ... file3.cpp

In the second case it is called just once:

cl.exe ... file1.cpp file2.cpp file3.cpp

Both of these modes are supported by the PVS-Studio.exe console version, as demonstrated in the examples above. It could be helpful for a user to understand the analyzer's logic behind these two modes. If launched for individual files, PVS-Studio.exe will first invoke the preprocessor for each file and then analyze the preprocessed file. But when processing several files at once, PVS-Studio.exe will first preprocess all these files, and then separate instances of PVS-Studio.exe will be invoked individually for each of the resulting preprocessed files.

# Mass Suppression of Analyzer Messages (disable generation of analyzer messages for legacy code)

This article discusses the usage of mass suppression of analyzer messages in the Windows environment. The use of the corresponding functionality in the Linux environment is described in the appropriate section of the document "How to run PVS-Studio on Linux".

Sometimes, during deployment of static analysis, especially on large-scale projects, the developer has no desire (or even no means) to correct hundreds or even thousands of analyzer messages generated on the existing source code base.
In this situation, the need arises to "suppress" all of the analyzer's messages generated on the current state of the code and, from that point on, to see only the messages related to newly written or modified code. As such code has not yet been thoroughly debugged and tested, it can potentially contain a large number of errors.

If instead you want to hide only some individual messages (false positives, for example), the false alarm suppression feature should be used. Also remember that each individual type of diagnostic, or an isolated group of analyzer messages, can be hidden using the message filtering feature of the PVS-Studio output window.

## Operation principles

Message suppression is based on special analyzer "message base" files (files with the suppress extension), which are located beside the project files of your IDE (for example, the vcproj and vcxproj files of Microsoft Visual Studio) or are added to project files as noncompilable items. These files contain analyzer messages marked as "suppressed" (marking analyzer messages through the IDE plug-in interface is described in the next section). On every subsequent analysis of such a project by PVS-Studio, its IDE plug-in looks for such suppression files, and if one is found, the messages contained in the "base file" will not appear in the analyzer's output.

It should also be noted that modifying the source file upon which the messages were generated, the displacement of code lines in particular, will not cause these messages to reappear. Only modifying the code line at which a message was originally generated will make the message reappear, as such a message becomes a "new" one. The "suppression base" files are stored in a simple XML format, which allows you to easily read and modify them, or even share these files within your team through revision control software.
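The matching principle described above, where a suppressed message survives line shifts but not edits to the offending line itself, can be modeled with a small sketch. This is a simplified model for illustration; PVS-Studio's actual suppress-file format stores more data.

```python
# Illustrative model of suppress-file matching: a suppressed message is
# identified by file name, diagnostic code and the text of the offending
# line (ignoring indentation), not by its line number. Shifting code up
# or down therefore does not resurrect the message; editing the line does.
def make_key(path, code, line_text):
    return (path, code, line_text.strip())

suppressed = {make_key("test.cpp", "V501", "if (a == a)")}

def is_suppressed(path, code, line_text):
    return make_key(path, code, line_text) in suppressed

# the same line at a new position (different indentation) stays suppressed...
print(is_suppressed("test.cpp", "V501", "    if (a == a)"))  # True
# ...but an edited line produces a "new" message again
print(is_suppressed("test.cpp", "V501", "if (a == b)"))      # False
```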
## Utilizing message suppression

To suppress analyzer messages, you first need to run your projects through the analyzer (PVS-Studio -> Check -> Solution). Next, open the 'Suppress Analyzer Messages' window through the 'PVS-Studio -> Suppress Messages...' menu item.

Figure 1 - message suppression

Click the 'Suppress All' button to mark all the analyzer messages that were generated during the last analysis run or loaded from a plog file. You can also suppress only the messages visible in the PVS-Studio output window by clicking the 'Suppress Filtered' button. For example, you can suppress only Low Certainty (Level 3) messages to concentrate on higher severity messages. After confirmation, all of the messages will be appended to the 'suppress' files for the corresponding projects. The 'Active suppress files' list shows all of the 'suppress' files for the solution that is currently open in Visual Studio. Messages from individual suppression files can be un-suppressed by selecting them and pressing the 'Un-suppress from Selected' button.

Suppressed messages will not appear in the output window during subsequent analyzer runs, which allows you to concentrate on newly written code. However, despite these messages not appearing in the list, they are actually still there. To enable the display of messages marked as 'suppressed', use the 'Display Suppressed Messages in PVS-Studio Output Window' checkbox (figure 1). Suppressed messages will be displayed in the list as strikethrough entries. You can un-mark a message using the 'Un-Suppress Selected Messages' item in the context menu.

Figure 2 - removing messages from "suppressed"

The 'suppression' mark will be removed from the selected messages, and the messages themselves will be removed from the 'suppress' base files, provided the corresponding project is opened inside the IDE.
### Adding suppress files to projects

After the file is generated, you can add it to the corresponding project as a noncompilable\text item with the 'Add|Existing Item...' menu command. If the project includes at least one suppress file, the suppress files lying beside the project file itself are ignored. Adding suppress files to projects allows you to keep suppress files and project files in different directories. Only one suppress file per project is supported; the others will be ignored.

### Adding suppress files to a Visual Studio solution

You can add a suppress file to a solution with the 'Add|New Item...' menu command. As with projects, only one suppress file per solution is supported; the rest will be ignored. A solution-level suppress file allows you to suppress messages from all of the projects in the corresponding solution. If projects possess individual suppress files, the analyzer will take into account both the warnings suppressed in the solution's suppress file and those in a project's suppress file.

When suppressing warnings in a solution that contains a suppress file, the following rules apply:

• if only the solution contains a suppress file, warnings are suppressed only into it; project-level suppress files are not created;
• if both the solution and a project contain suppress files, warnings will be suppressed into both of these files.

### Suppressing analyzer messages from the command line

Message suppression can also be used directly from the command line. The command line tool PVS-Studio_Cmd.exe will automatically pick up existing suppress files during analysis. The tool can also be used to suppress analyzer messages that were saved into a plog file. To suppress messages from an existing plog file, PVS-Studio_Cmd.exe should be started with the '--suppressAll' flag.
For example (in a single line):

"C:\Program Files (x86)\PVS-Studio\PVS-Studio_Cmd.exe" -t "Solution.sln" -o "results.plog" --suppressAll SuppressOnly

Executing this command will generate suppress files for all projects from Solution.sln for which analyzer messages are present in results.plog.

The '--suppressAll' flag supports 2 modes of operation. SuppressOnly runs the suppression for the input plog without starting the analysis. AnalyzeAndSuppress runs the analysis first, saves the output plog, and only after that suppresses all the messages from this log. The latter mode allows you to see only the newly generated messages on each subsequent run of the analyzer (as messages from previous runs will have been suppressed).

# PVS-Studio's incremental analysis mode

One of the main problems of using a static analyzer is the need to spend a lot of time analyzing project files after each project modification. This is especially relevant for large projects under active development. A total analysis can be launched regularly, for instance, once a day during night builds. But the greatest effect from using the analyzer is achieved through early detection and fixing of defects: the earlier you find an error, the less code you will have to fix. That is, the most effective way to use a static analyzer is to analyze new code right after it is written. Having to launch analysis of all the modified files manually and wait for it to finish each time surely complicates this scheme; it is incompatible with intense development and debugging of new code, and it's just inconvenient, after all. But PVS-Studio offers a solution to this issue.

## Using incremental analysis

The main task solved by incremental analysis is automating analyzer runs on the developer's machine immediately after the code is compiled.
In terms of usability, this mode makes PVS-Studio similar to the '/analyze' switch available in certain Visual Studio versions, while providing a more convenient interface and use cases. You can simply work on your code, compile it, and get messages from time to time about possible issues. Since the analyzer works in the background (you can set the number of processor cores used for analysis with the 'ThreadCount' option), PVS-Studio doesn't interfere with the work of other programs. This means you can easily install the tool on the computers of many (or all) developers in your department to detect issues in the code right after they appear.

You can turn on the after-build incremental analysis in the PVS-Studio menu: "PVS-Studio -> Analysis after Build (Modified Files Only)" (Figure 1). This option is on by default.

Figure 1 — Managing PVS-Studio's incremental analysis mode

Once the incremental analysis option is enabled, PVS-Studio will automatically perform background analysis of all the modified files after the project is built. If PVS-Studio detects such modifications, incremental analysis will be launched automatically, and an animated PVS-Studio icon will appear in the notification area (Figure 2). Note that new icons are often hidden in the Windows notification area.

Figure 2 — PVS-Studio performing incremental analysis

The notification area's shortcut menu allows you to pause or abort the current check (the 'Pause' and 'Abort' commands respectively). If the analyzer detects errors in the code while performing incremental analysis, the number of detected errors will be shown in the title of the PVS-Studio window tab in the background. Clicking on the icon in the notification area (or the window itself) opens the PVS-Studio Output window, where you can start handling the errors.
Figure 3 — The result of incremental analysis: 15 suspicious code fragments are found

Keep in mind that after the first full check of your project you should review all the diagnostic messages for the relevant files and fix the errors found in your code. As for the remaining messages, you should either mark them as false positives or disable the diagnostics or analyzer rule sets that are not relevant to your project. This approach gives you a message list free of meaningless and unnecessary messages.

## Limiting the time of incremental analysis in the IDE

When working from the IDE, it is possible to set a time limit for incremental analysis. This setting is available in the PVS-Studio options under 'Specific Analyzer Settings' ('IncrementalAnalysisTimeout'). When the timeout expires, file analysis is aborted and all the warnings found so far are displayed in the PVS-Studio output window. In addition, a warning is issued that the analyzer did not have time to process all the modified files, together with the total number of files and the number actually analyzed.

## Incremental analysis workflow in the IDE

To detect modifications in source code files, PVS-Studio monitors the state of the object files (obj/o files) generated by the C++ compiler for every C/C++ file, or the state of the assemblies for C# projects. Before the actual build in the environment, the PVS-Studio plugin for Visual Studio captures the object files and their modification state for all compilable files of the project. To determine the modified files in C/C++ projects that require incremental analysis, the analyzer uses MSBuild tools to work with the file tracking logs (*.tlog files). This approach, firstly, lets us obtain the modified files to be analyzed incrementally in the same way MSBuild does, and secondly, eliminates the need to match header files to source files.
File tracking logs are not created for C# projects. Therefore, to determine the files that require incremental analysis, the plugin matches the source code against the binary assembly file produced by project compilation, and captures those files that were modified after the assembly was built. After the build, incremental analysis starts automatically in the background.

## Incremental analysis support in the command line module

The incremental analysis mode for Visual Studio solutions is also available in the command line module (PVS-Studio_Cmd.exe). This mode makes it possible to speed up static analysis on a continuous integration server. Consider the following scenario of using the static analyzer. The PVS-Studio analyzer is installed on the developers' machines, where incremental analysis runs after a local build of the solution, and on the continuous integration server, where the whole code base is analyzed during the night build. Suppose also that the continuous integration system is configured to run an automatic incremental build of the solution when changes are detected in the version control system; in other words, the solution is built on the continuous integration server several times a day. In this case, analyzing the whole code base would significantly increase the build time, making the use of static analysis during numerous daily builds almost impossible. We may then have a situation where a developer makes an error in the code and commits it to the version control system without checking the code with the static analyzer; the same day the build goes to the testers, who detect this defect, and the cost of eliminating it goes up. The incremental analysis mode, which implements approaches similar to those MSBuild uses for incremental builds, solves this problem.
The following incremental analysis modes are available:

• Scan — analyze all dependencies to determine which files will be analyzed incrementally. No analysis is performed at this step. It should be done right before the solution or project is built. The scan results are written to temporary '.pvs-studio' directories located in the same directories as the project files. The previous history of edits is discarded; the analyzer will take into account only the modifications made after the last build.

• AppendScan — analyze all dependencies to determine the source files for incremental analysis. As with 'Scan', no analysis is performed at this step, and it should be done right before the solution or project is built. The scan results are written to temporary '.pvs-studio' directories located in the same directories as the project files. The analyzer will take into account the modifications made after the last build as well as all previous modifications.

• Analyze — perform the incremental analysis itself. This step should be done after 'Scan' or 'AppendScan' and can be performed either before or after the solution or project is built. Static analysis is performed only for the files on the list produced by the 'Scan' or 'AppendScan' commands. If the 'Remove Intermediate Files' option is set to 'True' in the PVS-Studio settings, the temporary '.pvs-studio' directories are removed afterwards.

• ScanAndAnalyze — analyze all dependencies to determine which files should be analyzed incrementally, and perform incremental analysis of the modified source files right away. This step should be performed before the solution or project is built. The analyzer will take into account only the modifications made after the last build.
The arguments of the command line module (PVS-Studio_Cmd.exe) used to start incremental analysis are given in the section "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line".

## Incremental analysis when using the Compiler Monitoring system

If you need incremental analysis when using the Compiler Monitoring system, it is enough to "monitor" the incremental build, i.e. the compilation of only those files that have changed since the last build. This usage allows analyzing only the modified or newly written code. Such a scenario is natural for the Compiler Monitoring system, since it is based on "monitoring" compiler invocations during the project build, which provides all the information needed to run the analysis on the source files whose compilation was monitored. Consequently, the analysis scope depends on the build being monitored: full or incremental. The Compiler Monitoring system is described in more detail in the article "Compiler Monitoring System in PVS-Studio".

## Combined use of full and incremental analysis modes

As noted at the beginning of the article, a bug is cheaper to fix when it is found at an early stage of writing the code. The incremental analysis mechanism described above is designed for exactly this task: the earliest possible detection of defects in code. However, one might think that incremental analysis alone is enough to detect potential errors, because they will be caught directly during development. This is not so. There are various reasons why a developer might commit erroneous code to the version control system: incremental analysis was disabled on that machine, the developer did not wait for the analysis results, did not notice the notifications, and so on.
As a result, the error gets into the version control system, and how quickly it will be fixed is an open question (for example, another developer who discovers the same error may be unable to do anything about it, because it is not his code). This leads to an important conclusion: static analysis should be divided into several phases, each catching the errors that somehow slipped through the previous one. The optimal scenario for using the PVS-Studio analyzer is to apply it both locally on developers' machines and on a build server, producing a two-phase analysis system.

The first phase is static analysis directly during development. Incremental analysis is ideal here, since it checks exactly the code the developer is working with. Most of the bugs a developer introduces (ideally, all of them) should be fixed at this stage, before they get into the version control system.

The second phase is static analysis on a build server. Full analysis should be run on the build server regularly, as this helps identify the bugs that for whatever reason ended up in the version control system, and fix them in a timely manner. You can also configure additional actions, such as mailing the analysis results to the persons concerned, publishing analyzer reports in CI systems, and so on.

If necessary, you can also introduce an intermediate phase between the two described above: incremental analysis on the build server. When configured properly, it helps detect errors that got into the version control system before the full static analysis of the night build runs. The process of configuring analysis on a build server is described in more detail in the article "Integrating PVS-Studio into the Continuous Integration Process".

# Suppression of false alarms

This section describes the analyzer's message suppression features.
It covers ways to control both individual analyzer messages for specific source code lines and whole groups of messages related, for example, to the use of C/C++ macros. The described method, based on comments of a special format, allows disabling individual analyzer rules or modifying the text of analyzer messages. The features described in this section are applicable to both the C/C++ and C# PVS-Studio analyzers, unless stated otherwise.

## Suppression of individual false positives (Mark as False Alarm)

Besides helpful messages, any code analyzer produces a number of so-called "false alarms": situations where it is absolutely obvious to the programmer that the code does not contain an error, but this is not obvious to the analyzer. Consider a code sample:

obj.specialFunc(obj);

The analyzer finds it suspicious that a method is called on an object with that same object passed as an argument, so it issues the V678 warning for this code. The programmer may know that using the 'specialFunc' method in this way is legitimate, so in this case the analyzer warning is a false positive. You can notify the analyzer that the V678 warning issued for this code is a false positive, either manually or using a context menu command. To suppress a false positive, add a special comment in the code:

obj.specialFunc(obj); //-V678

Now the analyzer will not generate the V678 warning for this line. After a message is marked as a false alarm, it disappears from the error list. You can enable the display of messages marked as false alarms in the PVS-Studio error list by changing the 'PVS-Studio -> Options... -> Specific Analyzer Settings -> DisplayFalseAlarms' option.
You can add this comment manually, without using the "Mark selected messages as False Alarms" command, but you must follow the comment's format: two slashes, a minus (without a space), and the error code. You can also use the special commands provided by PVS-Studio: two commands are available from the PVS-Studio context menu (figure 1).

Figure 1 — Commands to work with the false alarm suppression mechanism

Let's review the available commands for false alarm suppression:

1. Mark selected messages as False Alarms. You can choose one or more false alarms in the list (figure 2) and use this command to mark the corresponding code as safe.

Figure 2 — Choosing warnings before executing the "Mark selected messages as False Alarms" command

2. Remove False Alarm marks from selected messages. This command removes the comments that mark code as safe. It may be helpful if, for instance, you were in a hurry and marked some code fragment as safe by mistake. As in the previous case, you must choose the required messages in the list.

We do not recommend marking messages as false alarms without first reviewing the corresponding code fragments, since doing so contradicts the ideology of static analysis. Only the programmer can determine whether a particular error message is false.

## Implementation of the false alarm suppression function

Compilers usually employ #pragma directives to suppress individual error messages.
Consider a code sample:

unsigned arraySize = n * sizeof(float);

The compiler generates the following message:

warning C4267: 'initializing' : conversion from 'size_t' to 'unsigned int', possible loss of data x64Sample.cpp 151

This message can be suppressed with the following construct:

#pragma warning (disable:4267)

To be more exact, it is better to arrange the code in the following way, so that only this particular message is suppressed:

#pragma warning(push)
#pragma warning (disable:4267)
unsigned arraySize = n * sizeof(float);
#pragma warning(pop)

The PVS-Studio analyzer uses comments of a special kind instead. Suppressing a PVS-Studio message for the same code line looks like this:

unsigned arraySize = n * sizeof(INT_PTR); //-V103

This approach was chosen to keep the code cleaner. The point is that PVS-Studio can report issues in the middle of multi-line expressions, as in this sample:

size_t n = 100;
for (unsigned i = 0; i < n; // the analyzer will inform of the issue here
     i++)
{
  // ...
}

To suppress this message using a comment, you just need to write:

size_t n = 100;
for (unsigned i = 0; i < n; //-V104
     i++)
{
  // ...
}

Had we inserted a #pragma directive into this expression, the code would look much less clear. Storing the markings in the source code lets you modify it without the risk of losing information about the lines with errors. It is also possible to use a separate database storing records of the approximate form: error code, file name, line number. This pattern is implemented in a different PVS-Studio feature known as "Mass Suppression".

## Suppressing false positives located within C/C++ macro statements (#define) and for other code fragments

It goes without saying that the analyzer can locate potential problems within macro statements (#define) and produce diagnostic messages accordingly.
However, these messages are produced at the positions where the macro is used, i.e. where the macro's body is actually expanded into the code. An example:

#define TEST_MACRO \
  int a = 0; \
  size_t b = 0; \
  b = a;

void func1()
{
  TEST_MACRO // V101 here
}

void func2()
{
  TEST_MACRO // V101 here
}

To suppress these messages you can use the "Mark as False Alarm" command. Then the code containing the suppression comments will look like this:

#define TEST_MACRO \
  int a = 0; \
  size_t b = 0; \
  b = a;

void func1()
{
  TEST_MACRO //-V101
}

void func2()
{
  TEST_MACRO //-V101
}

But if the macro is used frequently, marking every use as a false alarm is quite inconvenient. Instead, you can add a special marking to the code manually, making the analyzer mark the diagnostics inside this macro as false alarms automatically. With this marking the code will look like this:

//-V:TEST_MACRO:101
#define TEST_MACRO \
  int a = 0; \
  size_t b = 0; \
  b = a;

void func1()
{
  TEST_MACRO
}

void func2()
{
  TEST_MACRO
}

During verification of such code, messages concerning issues within the macro are immediately marked as false alarms. It is also possible to specify several diagnostics at once, separated by commas:

//-V:TEST_MACRO:101, 105, 201

Please note that if the macro contains another nested macro, the name of the top-level macro should be specified for automated marking.

#define NO_ERROR 0
#define VB_NODATA ((long)(77))

size_t stat;

#define CHECK_ERROR_STAT \
  if( stat != NO_ERROR && stat != VB_NODATA ) \
    return stat;

size_t testFunc()
{
  {
    CHECK_ERROR_STAT // #1
  }
  {
    CHECK_ERROR_STAT // #2
  }
  return VB_NODATA; // #3
}

In the example above, the V126 diagnostic appears at three positions.
To automatically mark it as a false alarm at positions #1 and #2, add the following comment:

//-V:CHECK_ERROR_STAT:126

To make it work at position #3, you should additionally specify:

//-V:VB_NODATA:126

Unfortunately, it is impossible to simply specify "mark V126 inside the VB_NODATA macro" without specifying anything for the CHECK_ERROR_STAT macro, because of the technical specifics of the preprocessing mechanism.

Everything said in this section about macros also applies to any code fragment. For example, if you want to suppress all V103 warnings for calls of the function 'MyFunction', add the following line:

//-V:MyFunction:103

## Suppression of false positives through diagnostic configuration files (.pvsconfig)

Analyzer messages can be manipulated and filtered through comments of a special format. Such comments can be placed either in special diagnostic configuration files (.pvsconfig), supported by all the analyzers, or directly in the source code (C/C++ analyzer only). The diagnostic configuration files are plain text files added to a Visual Studio project or solution. To add a configuration file, select the project or solution in question in the Solution Explorer window of the Visual Studio IDE and choose the 'Add New Item...' context menu item. In the window that opens, select the 'PVS-Studio Filters File' template (figure 3):

Figure 3 — Adding a diagnostic configuration file to a solution

Because of the specifics of some Visual Studio versions, the 'PVS-Studio Filters File' template may be absent for projects and/or solutions in some versions and editions of Visual Studio. In such a case, you can add the diagnostic configuration file as a simple text file, specifying the 'pvsconfig' extension manually. Make sure that after the file is added, it is set as non-buildable in its compilation properties.
When a configuration file is added to a project, it is valid for all source files in that project. A solution configuration file affects all source files in all projects added to that solution. In addition, a .pvsconfig file can be placed in the user data folder (%AppData%\PVS-Studio\); such a file is picked up by the analyzer automatically, without the need to modify any of your project or solution files.

The '.pvsconfig' files use quite a simple syntax. Any line starting with the '#' character is considered a comment and ignored. The filters themselves are written as one-line C++/C# comments, i.e. every filter should start with the '//' characters. In the case of C/C++ code, the filters can also be specified directly in the source code. Please note that this is not supported for C# projects!

Next, let's review different variants of diagnostic configurations and filters.

#### Filtering analyzer messages by a fragment of source code (for example, macro, variable and function names)

Let us assume that the following structure exists:

struct MYRGBA
{
  unsigned data;
};

Also, there are several functions using it:

void f1(const struct MYRGBA aaa)
{
}

long int f2(int b, const struct MYRGBA aaa)
{
  return int();
}

long int f3(float b, const struct MYRGBA aaa, char c)
{
  return int();
}

The analyzer produces three V801 messages ("Decreased performance. It is better to redefine the N function argument as a reference") concerning these functions. For the source code in question such messages are false positives, as the compiler will optimize the code by itself, negating the issue. Of course, it is possible to mark every single message as a false alarm using the "Mark As False Alarm" option. But there is a better way.
Adding this line to the sources will suffice:

//-V:MYRGBA:801

For C/C++ projects, we advise adding such a line to the .h file near the declaration of the structure; if this is somehow impossible (for example, the structure is located in a system file), you can add it to stdafx.h instead. After re-verification, every one of these V801 messages will be automatically marked as a false alarm.

The described suppression mechanism is not limited to single words, which makes it very useful at times. Let's examine a few examples:

//-V:<<:128

This comment suppresses the V128 warning in all lines containing the << operator:

buf << my_vector.size();

If you want the V128 warning to be suppressed only when writing data into the 'log' object, use the following comment:

//-V:log<<:128

buf << my_vector.size(); // Warning untouched
log << my_vector.size(); // Warning suppressed

Note that the comment text string must not contain spaces.

Correct: //-V:log<<:128
Incorrect: //-V:log <<:128

When searching for the substring, spaces are ignored. But don't worry: a comment like the following one will be treated correctly:

//-V:ABC:501

AB C = x == x;     // Warning untouched
AB y = ABC == ABC; // Warning suppressed

#### Complete warning disabling

The analyzer allows the user to completely disable the output of any warning through a special comment. In this case, you should specify the number of the diagnostic you want to turn off after a double colon. The syntax pattern is as follows:

//-V::(number) - to disable one diagnostic

To disable several diagnostics, list their numbers separated by commas.
The syntax pattern is the following:

//-V::(number1),(number2),...,(numberN) - to disable a number of diagnostics

To turn off all the diagnostics of the C++ or C# analyzer, use the form //-V::C++ or //-V::C# respectively.

For example, if you want to ignore warning V122, insert the following comment at the beginning of a file:

//-V::122

If you want to disable warnings V502, V507, and V525, the comment will look like this:

//-V::502,507,525

Since the analyzer will not output the warnings you have specified, this can significantly reduce the size of the analysis log when some diagnostic generates too many false positives.

## Other means of filtering messages in the PVS-Studio analyzer (Detectable Errors, Don't Check Files, Keyword Message Filtering)

There may be situations in which a certain type of diagnostics is not relevant for the analyzed project, or one of the diagnostics produces warnings for source code which you have no doubt is correct. In this case, you can use group message suppression based on filtering the obtained analysis results. The list of available filtering modes can be accessed through the 'PVS-Studio -> Options' menu item. Suppressing multiple messages through filters does not require restarting the analysis; the filtering results appear in the PVS-Studio output window immediately.

First, you may disable the diagnosis of certain errors by their codes. On the "Settings: Detectable Errors" tab, you can specify the numbers of errors that must not be shown in the analysis report. Sometimes it is reasonable to remove errors with particular codes from the report. For instance, if you are sure that errors related to explicit type conversion (codes V201, V202, V203) are not relevant to your project, you may hide them. The display of errors of a certain type can also be disabled using the "Hide all Vxxx errors" context menu command.
Accordingly, if you need to enable the display again, you can configure it in the "Detectable Errors" section mentioned above.

Second, you may disable analysis of some parts of the project (certain folders or project files) on the "Settings: Don't Check Files" tab. On this tab, you can list libraries whose included files (through the #include directive) must not be analyzed. This may be needed to reduce the number of unnecessary diagnostic messages. Suppose your project uses the Boost library. Although the analyzer generates diagnostic messages for some code from this library, you are sure that it is reasonably safe and well written, so perhaps there is no need to get warnings concerning its code. In this case, you may disable analysis of the library's files by specifying the path to it on the settings page. Besides, you may add file masks to exclude certain files from analysis; the analyzer will not check files matching the masks. For instance, you can use this method to exclude autogenerated files from analysis.

Path masks for files mentioned in the latest PVS-Studio report in the output window can be appended to the 'Don't Check Files' list using the "Don't check files and hide all messages from..." context menu command for the currently selected message (figure 4).

Figure 4 — Appending path masks through the context menu

This command allows appending either a single selected file or the mask of the whole directory containing such a file.

Third, you may suppress individual messages by their text. On the "Settings: Keyword Message Filtering" tab, you can filter errors by their text rather than their code. If necessary, you may hide error messages containing particular words or phrases from the report.
For instance, if the report contains errors referring to the names of the functions printf and scanf, and you think there cannot be any errors related to them, simply add these two words using the suppressed messages editor.

## Mass Suppression of Analyzer Messages (Mass Suppression)

Sometimes, especially at the stage of introducing static analysis into large projects, you may need to 'suppress' all warnings for the existing code base, since the developers may not have the resources to fix the errors found by the analyzer in the old code. In such a case, it can be useful to 'hide' all warnings issued for the existing code and track only newly appearing ones. This can be achieved with the "mass suppression of analyzer messages" mechanism. Its use in a Windows environment is described in the document "Mass Suppression of Analyzer Messages"; for Linux, see the relevant section of the document "How to run PVS-Studio on Linux".

## Possible issues

In rare cases, automatically placed markers may end up in the wrong places, in which case the analyzer will produce the same error warnings again because it fails to find the markers. This issue is caused by the preprocessor and relates to multi-line #pragma directives of a particular type that confuse line numbering. To solve it, mark the problematic messages manually. PVS-Studio always reports such errors with the message "V002. Some diagnostic messages may contain incorrect line number".

As with any other procedure involving mass processing of files, remember about possible access conflicts when marking messages as false alarms. Since some files might be opened and modified in an external editor during marking, the result of joint processing of such files cannot be predicted. That is why we recommend either keeping copies of the source code or using a version control system.
# Handling the diagnostic messages list

When handling a large number of messages (during the first-time verification of large-scale projects, when filters have not yet been set and false positives have not been marked, the number of generated messages can reach tens of thousands), it is reasonable to use the navigation, search, and filtering mechanisms built into the PVS-Studio output window.

## Navigation and sorting

The main purpose of the PVS-Studio output window is to simplify navigation of the analyzed project's source code and the review of its potentially dangerous fragments. Double-clicking any message in the list automatically opens the corresponding file in the code editor, places the cursor on the relevant line, and highlights it. The quick navigation buttons (figure 1) allow an easy review of potentially dangerous fragments in the source code without constantly switching between IDE windows.

Figure 1 — Quick navigation buttons

To present the analysis results, the PVS-Studio output window uses a virtual grid capable of fast rendering and sorting of generated messages even for huge projects (the virtual grid can handle a list of hundreds of thousands of messages without any considerable performance hit). The leftmost grid column can be used to mark messages you find interesting, for instance, ones you wish to review later. This column supports sorting as well, so locating all messages marked this way is not a problem. The "Show columns" context menu item can be used to configure the column display in the grid (figure 2):

Figure 2 — Configuring the output window grid

The grid supports multi-line selection with the standard Ctrl and Shift hotkeys, and the line selection persists even after the grid is re-sorted on any column.
The "Copy selected messages to clipboard" context menu item (or the Ctrl+C hotkey) copies the contents of all selected lines to the system clipboard.

## Message filtering

The PVS-Studio output window's filtering mechanisms make it possible to quickly find and display either a single diagnostic message or whole groups of messages. The window's toolstrip contains several toggle buttons that turn the display of the corresponding message groups on or off (figure 3).

Figure 3 — Message filtration groups

These switches can be subdivided into three sets: filters corresponding to message certainty, filters corresponding to the diagnostic rule set a message belongs to, and filters corresponding to false alarm markings in the source code. Turning these filters off instantly hides all of the corresponding messages in the output list. A detailed description of the certainty levels and diagnostic rule sets is given in the documentation section "Getting Acquainted with the PVS-Studio Static Code Analyzer".

The quick filtering mechanism (quick filters) allows you to filter the analysis report by keywords you specify. The quick filtering panel can be opened with the "Quick Filters" button on the output window's toolstrip (figure 4).

Figure 4 — Quick filtering panel

Quick filtering displays messages according to three keyword filters: the message's code, the message's text, and the file containing the message. For example, it is possible to display all messages containing the word 'odd' from the 'command.cpp' file. Changes to the output list are applied as soon as the keyword edit box loses focus. The 'Reset Filters' button erases all currently applied filtering keywords.
All of the filtering mechanisms described above can be combined: for example, you can filter by the level of displayed messages and by the file that should contain them at the same time, while simultaneously excluding all messages marked as false positives.

## Quick jumps to individual messages

If you need to navigate to an individual message in the grid, you can use the quick jump dialog, accessible through the "Navigate to ID..." context menu item (figure 5):

Figure 5 — Invoking the quick jump dialog

Figure 6 — Navigate to ID dialog

Each message in the PVS-Studio output list possesses a unique identifier: the serial number under which the message was added to the grid, displayed in the ID column. The quick navigation dialog selects and auto-focuses the message with the designated ID, regardless of the grid's current selection and sorting. Note that the IDs of the messages in the grid are not necessarily strictly sequential, as a fraction of them may be hidden by the filtering mechanisms; navigation to such hidden messages is impossible.

## Managing the Visual Studio Task List

Large-scale projects are often developed by a distributed team, so a single person is not able to judge every message the static analyzer generates for false positives, much less correct all the corresponding sections of the source code. In this case it makes sense to delegate such messages to the developer directly responsible for the code fragment in question. PVS-Studio can automatically generate a special TODO comment containing all the information required to analyze the marked code fragment, and insert it into the source code.
Such a comment will immediately appear in the Visual Studio Task List window (in Visual Studio 2010, comment parsing should be enabled in the settings: Tools->Options->Text Editor->C++->Formatting->Enumerate Comment Tasks->true), provided that the 'Tools->Options->Environment->Task List->Tokens' list contains the corresponding TODO token (it is present there by default). The comment can be inserted using the 'Add TODO comments for selected messages' command of the context menu (figure 7):

Figure 7 - Inserting the TODO comment

The TODO comment will be inserted into the line responsible for the analyzer's message and will contain the error code, the analyzer message itself, and a link to the online documentation for this type of error. Thanks to the Visual Studio Task List, such a comment can easily be located by anyone with access to the sources. Using the comment's text, the potential issue can be examined and corrected even by a developer who does not have PVS-Studio installed or does not possess the analyzer's report for the full project (figure 8).

Figure 8 — Visual Studio Task List

The Task List window can be accessed through the View->Other Windows->Task List menu. The TODO comments are displayed in the 'Comments' section of the window.

# Analyzing Visual Studio projects from the command line

In addition to using PVS-Studio directly from Visual Studio, you can also run analysis of MSBuild (i.e. Visual C++ and Visual C#) projects from the command line. This can be useful for setting up regular automatic analyzer runs, for example during "night builds" on the build server.

## Starting analysis on sln and csproj/vcxproj files

To analyze C++/C# projects or solution (sln) files that contain such projects, you can use the command-line version of the PVS-Studio analyzer directly, without having to start the IDE (devenv.exe) process and open in it the projects you intend to check.
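An unattended "night build" run is typically assembled by a small scheduler script. The sketch below only composes the argument list for PVS-Studio_Cmd.exe; the solution name, output path, and the idea of launching it via subprocess are illustrative placeholders (the individual flags are described below):

```python
PVS_CMD = r"C:\Program Files (x86)\PVS-Studio\PVS-Studio_Cmd.exe"

def build_nightly_cmd(solution, plog, platform="Any CPU", configuration="Release"):
    # Compose the argument list for an unattended ("night build") analysis run;
    # a build server would pass this to subprocess.run() and inspect returncode.
    return [
        PVS_CMD,
        "--target", solution,
        "--platform", platform,
        "--configuration", configuration,
        "--output", plog,
        "--progress",
    ]

cmd = build_nightly_cmd("mysolution.sln", "nightly.plog")
print(cmd[1:3])  # ['--target', 'mysolution.sln']
```

Keeping the argument assembly in one function makes it easy to reuse the same invocation for several solutions on the same build agent.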
The 'PVS-Studio_Cmd.exe' tool can be found in the PVS-Studio installation directory (the default path is 'c:\Program Files (x86)\PVS-Studio\'). The '--help' command displays all available arguments of the command-line analyzer:

PVS-Studio_Cmd.exe --help

The main arguments of the analyzer:

• --target (-t): required parameter. Specifies the target for analysis (an sln or csproj/vcxproj file);
• --sourceFiles (-f): a path to an XML file which can be used to specify a list of files and/or filters to be analyzed. Only the files specified by this list will be analyzed in the project(s). The detailed description of this file's syntax is available in the corresponding subsection below;
• --output (-o): path to the plog file where the analysis results will be written. If this parameter is missing, a plog file will be created next to the file indicated in the target;
• --platform (-p) and --configuration (-c): platform and configuration of the target to be analyzed. If these parameters are not specified, the first available "platform|configuration" pair will be chosen (when checking an sln file), or "Debug|AnyCPU" (when analyzing a single csproj project), or "Debug|Win32" (when analyzing a single vcxproj project);
• --settings (-s): path to the PVS-Studio configuration file. If the parameter is missing, the IDE settings of the PVS-Studio plug-in will be used, located in "c:\Users\%UserName%\AppData\Roaming\PVS-Studio\Settings.xml". Note that for the C# analyzer to work properly, the settings passed through this flag should contain your registration information (if you are using the default settings from AppData, your registration information can be entered via the PVS-Studio plug-in in Visual Studio);
• --progress (-r): enables detailed logging of the analysis progress to stdout (disabled by default);
• --suppressAll (-a): append all un-suppressed messages to the 'suppress' files of the corresponding projects (disabled by default).
If this flag is used, all the diagnostic messages will be added into the database of suppressed messages after the analysis log file is saved. The flag supports two modes:

• SuppressOnly adds analyzer messages from the input plog to the suppress files without starting the analysis;
• AnalyzeAndSuppress starts the analysis, saves the resulting plog, and only then adds the messages from this plog to the suppress files. In this mode you will only see messages generated for newly written/changed code at each analyzer run; that is, new messages are written into the new log and immediately suppressed, so you don't see them at the next run. However, if you need to view old messages (without re-running the analysis on the project), you can do so anytime by opening the complete-log file, which is automatically saved in the same folder as the new-messages log file. To learn more about the message suppression mode, see this documentation section;

• --sourceTreeRoot (-e): the root of a source tree. PVS-Studio will use this value to generate relative paths in its messages. Setting this option will override the 'SourceTreeRoot' value in PVS-Studio settings;
• --incremental (-i): enable the incremental analysis mode. Refer to the "PVS-Studio's incremental analysis mode" topic for more details about incremental analysis in PVS-Studio. The following operating modes of incremental analysis are available:

• Scan – scan all the dependencies to determine source files for incremental analysis. The incremental analysis itself is not started at this step. The previous history of edits is discarded; the analyzer will take into account only the modifications made after the last build.
• AppendScan – scan all the dependencies to determine source files for incremental analysis. Note that at this step the incremental analysis will not be started.
The analyzer will take into account the modifications made after the last build as well as all the previous modifications.
• Analyze – run incremental analysis on the target. This step should be executed after the 'Scan' or 'AppendScan' step, and can be executed either before or after building the target. Static analysis will be performed only for the files from the list obtained as a result of executing the 'Scan' or 'AppendScan' command.
• ScanAndAnalyze – scan the dependencies to determine source files for incremental analysis and analyze the files that were modified since the last build.
• --msBuildProperties (-m): set or override the specified project-level properties. Use a vertical bar "|" to separate multiple project-level properties:

--msBuildProperties WarningLevel=2|OutDir=bin\OUT32\

Here is an example of running the analysis for the Visual Studio solution "My Solution":

PVS-Studio_Cmd.exe --target "mysolution.sln" --platform "Any CPU" --configuration "Release" --output "mylog.plog" --settings "pvs.xml" --progress

The command-line version of the PVS-Studio analyzer supports all the message filtering/disabling settings available in the IDE plugin for Visual Studio. You can either set them manually in the XML file that is passed through the '--settings' argument, or use the settings specified through the UI plugin, without passing this argument. Note that the PVS-Studio IDE plug-in uses an individual set of settings for each user in the system.

## Specifying individual files for analysis

PVS-Studio_Cmd allows you to analyze individual files specified in a list, a path to which can be passed to the analyzer with the --sourceFiles (-f) flag. This file list should be in XML format and can contain a list of absolute paths to source files and/or a list of file masks that define such source file paths.
<SourceFilesFilters>
  <SourceFiles>
    <Path>C:\Projects\Project1\source1.cpp</Path>
    <Path>C:\Projects\Project1\source1.h</Path>
    <Path>\Project2\source2.cpp</Path>
    <Path>source_*.cpp</Path>
    <Path>stdAfx.h</Path>
  </SourceFiles>
  <TlogDirs>
    <Dir>D:\Build\Tlogs\</Dir>
  </TlogDirs>
  <SourcesRoot>C:\Projects\</SourcesRoot>
</SourceFilesFilters>

Let us review in detail all available fields and their possible values.

• The SourceFiles element specifies a list of files and/or filters for analysis. Each file/filter is specified in a Path sub-element. These filters are applied to all files from all projects in the solution under analysis. Only the files that are either specified directly or conform to one of the filters will be queued for analysis. You can specify both source files (c/cpp for C++ and cs for C#) and header files (h/hpp for C++). Important note: analysis of header files for C/C++ is only possible when compiler tracing logs are available. The way to specify them is described for the TlogDirs element below. Directory separators in the values specified in the Path elements are normalized. However, special directory traversal directives, such as '.' or '..', are not expanded. The file path masks support the '*' and '?' wildcard characters. Possible values of the Path sub-elements are the following:
• An absolute path to a file will be directly compared with source files within projects.
• A relative path to a file (for example, \Project1\source2.cpp) or any mask that does not start with a root directory (i.e. 'C:\' or '\\' for UNC paths) will be interpreted as a mask starting with the '*' wildcard. For example, 'stdAfx.h' will be interpreted as '*stdAfx.h'.
• The TlogDirs element specifies a list of directories that will be used to search for tlog build artifacts - compiler tracking logs. These files are required for analyzing individual C/C++ header files: as the analyzer works only with compilation units (i.e.
c/cpp source files), it is unable to analyze a header file 'by itself', separate from a source file that includes it. To determine the dependency between source and header files, the analyzer utilizes tlog files. Tlog files are generated by the MSBuild build system while building a project, inside the build output directory, which also contains other build artifacts (for example, compiled binaries). The analyzer needs tlog files with the following name pattern: CL.read.*.tlog, where '*' can be any number. Usually MSBuild generates one such tlog file for each project. You can use the Dir sub-elements either to specify a separate directory for each tlog file, or to specify a single root directory containing all of the tlogs - the analyzer will search for tlogs in all sub-directories of each Dir element. Important note: if the tlog files were not generated by building the analyzed project locally (i.e. the build was performed either in a different directory from the one in which the analysis is run, or on a different machine altogether), then for a correct analysis of individual header files you should additionally specify the current root directory of the project you are analyzing using the SourcesRoot element. The reason is that the tracing logs (tlogs) contain file paths in an absolute format.
• The SourcesRoot element specifies the base root directory containing the sources of the project under analysis. If you are using tlog files (the TlogDirs element) that were not generated locally, under the analyzed project, specify this root directory to allow the analyzer to correctly identify the source files of the analyzed project among the paths present in the tlog files. Only the current root directory of the analyzed project should be specified in the SourcesRoot element. For example, if the sources of the analyzed projects are located in the C:\Projects\Project1, C:\Projects\Project2, etc.
folders, then you should specify C:\Projects\ as the SourcesRoot element.

When running analysis with the --sourceFiles (-f) flag, the analyzer, contrary to the 'normal' run on the whole project/solution, will generate its warning messages only for the files that are explicitly specified through the SourceFiles filters. This means that when analyzing a source cpp file, the analyzer will only generate messages for this file alone - the messages on the header (h/hpp) files included into this source file will be filtered out, unless these header files are also explicitly specified by the SourceFiles filters.

## Command line tool exit codes

The PVS-Studio_Cmd utility defines several non-zero exit codes, which do not necessarily indicate an issue with the operation of the tool itself; i.e. even when the tool returns something other than zero, it does not always mean that the tool has 'crashed'. The exit code is a bit mask that represents all the states that occurred during the PVS-Studio_Cmd utility's operation. For example, the tool will return a non-zero code (256, to be exact) when the analyzer finds potential issues in the code being analyzed. Such behavior allows you to handle this situation individually, for example when the policy of using the analyzer on a build server does not allow analyzer issues to be present in the code committed to a revision control system. Consider another example: during analysis some issues were found in the code, and one of the source files is missing on disk. In this case the exit code of the PVS-Studio_Cmd utility will be 264 (8 - some of the analyzed source files or project files were not found; 256 - some issues were found in the source code), or, in binary representation, 100001000.

Next, let's examine all possible PVS-Studio_Cmd state codes that form the bit mask exit code.
• '0' - analysis was successfully completed, no issues were found in the source code;
• '1' - error (crash) during analysis of some source file(s);
• '2' - general (nonspecific) error in the analyzer's operation, a possible handled exception. Usually this means an error inside the analyzer and is followed by a stack trace. If you encounter this error, please help us improve the analyzer by sending it to us;
• '4' - some of the command line arguments passed to the tool were incorrect;
• '8' - the specified project, solution or analyzer settings file was not found;
• '16' - the specified configuration and (or) platform were not found in the solution file;
• '32' - the solution file or project is not supported or contains errors;
• '64' - incorrect extension of the analyzed project or solution;
• '128' - incorrect or out-of-date analyzer license;
• '256' - some issues were found in the source code;
• '512' - some issues were encountered while performing analyzer message suppression;
• '1024' - indicates that the analyzer license will expire in less than a month.

Here is an example of a Windows batch script that decodes the PVS-Studio_Cmd utility exit code:

@echo off
"C:\Program Files (x86)\PVS-Studio\PVS-Studio_Cmd.exe" -t "YourSolution.sln" -o "YourSolution.plog"
set /A FilesFail = "(%errorlevel% & 1) / 1"
set /A GeneralException = "(%errorlevel% & 2) / 2"
set /A IncorrectArguments = "(%errorlevel% & 4) / 4"
set /A FileNotFound = "(%errorlevel% & 8) / 8"
set /A IncorrectCfg = "(%errorlevel% & 16) / 16"
set /A InvalidSolution = "(%errorlevel% & 32) / 32"
set /A IncorrectExtension = "(%errorlevel% & 64) / 64"
set /A IncorrectLicense = "(%errorlevel% & 128) / 128"
set /A AnalysisDiff = "(%errorlevel% & 256) / 256"
set /A SuppressFail = "(%errorlevel% & 512) / 512"
set /A LicenseRenewal = "(%errorlevel% & 1024) / 1024"
if %FilesFail% == 1 echo FilesFail
if %GeneralException% == 1 echo GeneralException
if %IncorrectArguments% == 1 echo IncorrectArguments
if %FileNotFound% == 1 echo FileNotFound
if %IncorrectCfg% == 1 echo IncorrectCfg
if %InvalidSolution% == 1 echo InvalidSolution
if %IncorrectExtension% == 1 echo IncorrectExtension
if %IncorrectLicense% == 1 echo IncorrectLicense
if %AnalysisDiff% == 1 echo AnalysisDiff
if %SuppressFail% == 1 echo SuppressFail
if %LicenseRenewal% == 1 echo LicenseRenewal

## Running analysis from the command line for C/C++ projects built with build systems other than Visual Studio's

If your C/C++ project doesn't use Visual Studio's standard build system (MSBuild), or uses a custom build system / makefiles through Visual Studio NMake projects, you won't be able to analyze it with PVS-Studio_Cmd. In this case you can use the compiler monitoring system, which allows you to analyze projects regardless of the build system they use by "intercepting" compiler process invocations. The compiler monitoring system can be used both from the command line and through the C and C++ Compiler Monitoring UI client (Standalone.exe). You can also integrate the invocation of the command-line analyzer kernel directly into your build system. Notice that this will require you to define the PVS-Studio.exe analyzer kernel call for each file to be compiled, just as when calling the C++ compiler.

## How PVS-Studio settings affect the command line launch; analysis results (plog file) filtering and conversion

When running project analysis from the command line, by default the same settings are used as when it is run from the IDE (Visual Studio). The number of processor cores to be used for analysis depends on the number specified in the analyzer settings. You can also specify the settings file directly with the '--settings' argument, as described above. As for the filtering systems (Keyword Message Filtering and Detectable Errors), they will not be used when running the analysis from the command line.
It means you'll see all the messages in the log file anyway, regardless of the parameters you have specified. However, when loading the log file in the IDE, the filters will be applied. This is because filters are applied dynamically to the results (including when you analyze your project from the IDE). This is very convenient, since you may want to switch off some of the messages you've got (for example V201). You only need to disable them in the settings to have the corresponding messages disappear from the list WITHOUT having to rescan the project.

The plog file format (XML) is not intended to be directly displayed to or read by a human. However, if you need to filter the analysis results and convert them into a "readable" format, you can use the PlogConverter utility available within the PVS-Studio distribution. You can also download the source code of the utility. PlogConverter allows you to convert one or more plog files into the following formats:

• A lightweight text file with analysis results. It can be useful when you need to output the analysis results (for example, new messages) into the log of the build system or continuous integration server;
• An HTML report with a short description of the analysis results. It is best suited for e-mail notifications;
• An HTML report with sorting of the analysis results by different parameters and navigation along the source code;
• Tasks - a format that can be opened in QtCreator;
• A CSV table with analysis results;
• A text file with a summary table of the number of messages across different levels and diagnostic rule sets.

The format of the output file is defined by the command line parameters. You can also use them to filter the results by rule sets, levels, and individual error codes.
Here's an example of the PlogConverter command line launch (in one line):

PlogConverter.exe test1.plog test2.plog -o "C:\Results" -r "C:\Test" -a GA:1,2,3;64:1 -t Html,Txt,Totals -d V101,V105,V122

PlogConverter will be launched for the 'test1.plog' and 'test2.plog' files in the current folder, and the error messages from all specified plog files will be merged and saved into the Results folder on drive C; only messages from the General Analysis (GA) rule set of levels 1, 2, and 3, and from the 64-bit diagnostic rule set (64) of level 1 will be used. Diagnostics V101, V105, and V122 will be filtered out of the list. The results will be saved as text and HTML files, as well as a summary text file for all the diagnostics (with the mentioned parameters taken into account). The original plog file will remain unchanged. A detailed description of the levels of certainty and the sets of diagnostic rules is given in the documentation section "Getting Acquainted with the PVS-Studio Static Code Analyzer".

Detailed help on all the parameters of the PlogConverter utility can be accessed by executing the following command:

PlogConverter.exe --help

If you need to convert the plog file into some specific format, you can do it by parsing the file as an XML document yourself. Note that you can reuse the algorithm for traversing the structure of a plog file from our utility to create a customized output format for analysis results.

## Regular use of PVS-Studio and integration with "daily builds"

Long-term use of the PVS-Studio code analyzer comprises two stages: integration and regular use. When integrating PVS-Studio into an existing large-scale project, programmers will usually read the analyzer messages and either fix the code or mark it using the "Mark as False Alarm" and "Message Suppression" features.
Having sorted out all the messages, they then re-analyze the code once again to get 0 warnings (given that the messages marked as false positives or suppressed through suppress files stay hidden). This means that the integration stage is over and the stage of regular use begins. From this point on, all the new code added into the project will be analyzed by PVS-Studio. Actually, the ENTIRE code base will be analyzed, but you'll see only new messages. Errors will be found in freshly written/fixed code or in old code that has been left unmarked.

The option of running the analysis from the command line is useful when developers need to launch it regularly (daily, for instance). The procedure looks as follows:

• Running the analysis from the command line.
• Sending the resulting log file to all the developers involved.

This way, regularly running PVS-Studio on your project will help you avoid new bugs in the code.

## Automatic update and installation of the PVS-Studio distribution

To install the PVS-Studio distribution from the command line in "quiet" mode (i.e. without displaying the UI and dialog boxes), pass the following parameters to it (in one line):

PVS-Studio_setup.exe /VERYSILENT /SUPPRESSMSGBOXES /NORESTART /COMPONENTS=Core,Standalone,MSVS,MSVS\2010,MSVS\2012,MSVS\2013,MSVS\2015,MSVS\2017

The optional /COMPONENTS argument specifies the components to be installed: Standalone (C and C++ Compiler Monitoring) and plugins for the different IDEs. PVS-Studio may require a reboot if, for example, files that require an update are locked. To install PVS-Studio without a reboot, use the '/NORESTART' flag. Please also note that if the PVS-Studio installer is started in silent mode, the computer may be rebooted without any warnings or dialogs. Also keep in mind that Visual Studio (the devenv.exe process) must not be running while PVS-Studio is being installed.
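When provisioning several build agents, it can be convenient to generate the quiet-mode installer invocation from a component list. A sketch; the installer name and component identifiers are taken from the command above, while the function itself is purely illustrative:

```python
def silent_install_cmd(installer="PVS-Studio_setup.exe",
                       components=("Core", "Standalone", "MSVS", r"MSVS\2017")):
    # /VERYSILENT and /SUPPRESSMSGBOXES suppress all UI; /NORESTART prevents
    # an unattended reboot; /COMPONENTS is optional and picks what to install.
    args = [installer, "/VERYSILENT", "/SUPPRESSMSGBOXES", "/NORESTART"]
    if components:
        args.append("/COMPONENTS=" + ",".join(components))
    return args

print(silent_install_cmd()[-1])  # /COMPONENTS=Core,Standalone,MSVS,MSVS\2017
```

Passing an empty component tuple simply omits /COMPONENTS, leaving the installer's default selection in effect.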
The PVS-Studio-Updater.exe utility can check for analyzer updates and download and install them on a local machine. To run the update utility in 'quiet' mode, you can use the same parameters as for the distribution installation:

PVS-Studio-Updater.exe /VERYSILENT /SUPPRESSMSGBOXES

If there are no updates available on the server, the utility terminates with a '0' exit code. Since PVS-Studio-Updater.exe performs a local installation, you must avoid running the devenv.exe process while it is working as well.

## Conclusion

Running PVS-Studio daily from the command line can help you significantly increase the quality of your code. With this approach, you will get 0 diagnostic messages every day if your code is correct. And should there be any incorrect edits of the old (already scanned and fixed) code, the next run of PVS-Studio will reveal the fresh defects. Newly written code will be regularly scanned in automatic mode (i.e. independently of the programmer).

# Direct integration of PVS-Studio into MSBuild's build process. MSBuild integration mode in Visual Studio IDE

The direct integration of the analyzer into MSBuild scenarios (projects) is obsolete. Command line analysis of projects with PVS-Studio is described in the corresponding section above.

# Predefined PVS_STUDIO macro

Among the numerous filtration and message suppression methods of the PVS-Studio analyzer is the predefined PVS_STUDIO macro. The first case where it might come in handy is when one wants to prevent some code from getting into the analyzer for a check. For example, the analyzer generates a diagnostic message for the following code:

int rawArray[5];
rawArray[-1] = 0;

However, if you wrap it using this macro, the message will not be generated:

int rawArray[5];
#ifndef PVS_STUDIO
rawArray[-1] = 0;
#endif

The PVS_STUDIO macro is automatically inserted while checking the code from the IDE.
But if you are using PVS-Studio from the command line, the macro is not passed to the analyzer by default, and this should be done manually.

The second case is the overriding of default and custom macros. For example, for the following code a warning will be generated about the dereference of a potentially null pointer:

char *st = (char*)malloc(10);
TEST_MACRO(st != NULL);
st[0] = '\0'; //V522

To tell the analyzer that the execution of the program is interrupted under certain conditions, you can override the macro in the following way:

#ifdef PVS_STUDIO
#undef TEST_MACRO
#define TEST_MACRO(expr) if (!(expr)) throw "PVS-Studio";
#endif

char *st = (char*)malloc(10);
TEST_MACRO(st != NULL);
st[0] = '\0';

This method allows you to remove analyzer warnings on code checked using various libraries, as well as on any other macros used for debugging and testing. See the discussion "Mark variable as not NULL after BOOST_REQUIRE in PVS-Studio" on StackOverflow.

# PVS-Studio: Troubleshooting

## The basic PVS-Studio operation principles you should know

PVS-Studio consists of 2 basic components: the command-line analyzer (PVS-Studio.exe) and an IDE plugin through which the former is integrated into one of the supported development environments (Microsoft Visual Studio). The way the command-line analyzer operates is quite similar to that of a compiler: each file being analyzed is assigned to a separate analyzer instance, which in turn is called with parameters that, in particular, include the original compilation arguments of the source file itself.
Afterwards, the analyzer invokes the required preprocessor (again in accordance with the one used to compile the file being analyzed) and then analyzes the resulting temporary preprocessed file, i.e. the file in which all the include and define directives have been expanded. Thus, the command-line analyzer - just like a compiler (for example, the Visual C++ cl.exe compiler) - is not designed to be used directly by the end user. To continue the analogy, compilers in most cases are employed indirectly, through a dedicated build system. Such a build system prepares launch parameters for each file to be built and usually also optimizes the build process by parallelizing it among all the available logical processors. The PVS-Studio IDE plugin operates in a similar fashion.

However, the IDE plug-in is not the only way to employ the PVS-Studio.exe command-line analyzer. As mentioned above, the command-line analyzer is very similar to a compiler in its usage principles. Therefore, it can be directly integrated, if necessary, into a build system along with the compiler. This way of using the tool may be convenient when dealing with a build scenario that is not supported by PVS-Studio - for example, when utilizing a custom-made build system or an IDE other than Visual Studio. Note that PVS-Studio.exe supports analysis of source files intended to be compiled with the gcc, clang, and cl compilers (including support for their specific keywords and constructs). For instance, if you build your project in the Eclipse IDE with gcc, you can integrate PVS-Studio into your makefile build scripts. The only restriction is that PVS-Studio.exe can only operate under the Windows NT family of operating systems.

Besides IDE plugins, our distribution kit also includes a plugin for the Microsoft MSBuild build system, which is utilized by Visual C++ projects in the Visual Studio IDE starting with version 2010. Don't confuse it with the plugin for the Visual Studio IDE itself!
Thus, you can analyze projects in Visual Studio (version 2010 or higher) in two different ways: either directly through our IDE plugin, or by integrating the analysis process into the build system (through the plugin for MSBuild). Of course, nothing prevents you, if the need arises, from creating your own static analysis plugin, be it for MSBuild or any other build system, or even integrating the PVS-Studio.exe call directly, if possible, into build scripts, as in the case of makefile-based ones.

## I can't check a file/project with the IDE PVS-Studio plugin

If the PVS-Studio plug-in generates the message "C/C++ source code was not found" for your file, make sure that the file you are trying to analyze is included in the project build (PVS-Studio ignores files excluded from the build). If you get this message for the whole project, make sure that the type of your C/C++ project is supported by the analyzer. In Visual Studio, PVS-Studio supports only Visual C++ projects of versions 2005 and higher, as well as their corresponding MSBuild Platform Toolsets. Project extensions using other compilers (for example, projects for the Intel C++ compiler) or build parameters (Windows DDK drivers) are not supported. Although the command-line analyzer PVS-Studio.exe itself supports analysis of source code intended for the gcc/clang compilers, IDE project extensions utilizing these compilers are not supported.

If your case is not covered by the ones described above, please contact our support service. If possible, please send us the temporary configuration files for the files you are having trouble with. You can get them by setting the option 'PVS-Studio -> Options -> Common Analyzer Settings -> Remove Intermediate Files' to 'False'. After that, files with the name pattern %SourceFilename.cpp%.PVS-Studio.cfg will appear in the same directory where your project file (.vcxproj) is located.
If possible, create an empty test project reproducing your issue and send it to us as well.

## Source files are preprocessed incorrectly when running analysis from the IDE plugin. Error V008

If, having checked your file/project, PVS-Studio generates the V008 message and/or a preprocessor error message (from the clang/cl preprocessors) in the results window, make sure that the file(s) you are trying to analyze can be compiled without errors. PVS-Studio requires compilable C/C++ source files to operate properly, while linking errors do not matter.

The V008 error means that the preprocessor returned a non-zero exit code after finishing its work. The V008 message is usually accompanied by a message generated by the preprocessor itself describing the reason for the error (for example, a failure to find an include file). Note that, for the purpose of optimization, our Visual Studio IDE plugin utilizes a special dual-preprocessing mode: it will first try to preprocess the file with the faster clang preprocessor and then, in case of a failure (clang doesn't support certain Visual C++ specific constructs), launch the standard cl.exe preprocessor. If you get clang preprocessing errors, try setting the plugin to use only the cl.exe preprocessor (PVS-Studio -> Options -> Common Analyzer Settings -> Preprocessor).

If you are sure that your files can be correctly built by the IDE/build system, perhaps the reason for the issue is that some compilation parameters are passed incorrectly to the PVS-Studio.exe analyzer. In this case, please contact our support service and send us the temporary configuration files for these files. You can get them by setting the option 'PVS-Studio -> Options -> Common Analyzer Settings -> Remove Intermediate Files' to 'False'. After that, files with the name pattern %SourceFilename.cpp%.PVS-Studio.cfg will appear in the same directory where your project file is located.
If possible, create an empty test project reproducing your issue and send it to us as well.

## IDE plugin crashes and generates the 'PVS-Studio internal error' message

If the plugin crashes and displays a dialog box entitled 'PVS-Studio Internal Error', please contact our support service and send us the analyzer's crash stack (you can obtain it from the crash dialog box). If the issue occurs regularly, please send us the plugin's trace log together with the crash stack. You can obtain the trace log by enabling the tracing mode through the 'PVS-Studio -> Options -> Specific Analyzer Settings -> TraceMode (Verbose mode)' setting. The trace log will be saved into the default user directory Application Data\Roaming\PVS-Studio under the name PVSTracexxxx_yyy.log, where xxxx is the PID of the devenv.exe / bds.exe process, and yyy is the log number for this process.

## Unhandled IDE crash when utilizing PVS-Studio

If you encounter regular crashes of your IDE which are presumably caused by PVS-Studio's operation, please check the Windows system event logs (in the Event Viewer) and contact our support service to provide us with the crash signature and stack (if available) for the devenv.exe / bds.exe application (the 'Error' message level), which can be found in the Windows Logs -> Application list.

## PVS-Studio.exe crash

If you encounter regular unhandled crashes of the PVS-Studio.exe analyzer, please repeat the steps described in the section "IDE crashes when PVS-Studio is running", but for the PVS-Studio.exe process.

## The V001/V003 errors

The V003 error means that PVS-Studio.exe has failed to check the file because of a handled internal exception. If you discover V003 error messages in the analyzer log, please send us an intermediate file (an i-file containing all the expanded include and define directives) generated by the preprocessor for the file that triggers the V003 error (you can find its name in the file field).
You can get this file by setting the 'PVS-Studio -> Options -> Common Analyzer Settings -> Remove Intermediate Files' option to 'False'. Intermediate files with the name pattern SourceFileName.i will appear, after restarting the analysis, in the directory of the project that you are checking (i.e. in the same directory where the vcproj/vcxproj/cbproj files are located).

The analyzer may sometimes fail to perform a complete analysis of a source file. This is not always the analyzer's fault - see the documentation section on the V001 error to learn more about this issue. Whatever the cause of a V001 message, it is usually not critical. Incomplete file parsing is insignificant from the analysis viewpoint: PVS-Studio simply skips the function/class with an error and continues with the analysis, so only a very small portion of the code is left unchecked. If this portion contains fragments you consider relevant, you may send us an i-file for this source file as well.

## The analyzer cannot locate errors in an incorrect code or generates too many false positives

If it seems to you that the analyzer fails to find errors in a code fragment that surely contains them or, on the contrary, generates false positives for a code fragment which you believe to be correct, please send us the preprocessor's temporary file. You can get it by setting the 'PVS-Studio -> Options -> Common Analyzer Settings -> Remove Intermediate Files' option to 'False'. Intermediate files with the name pattern SourceFileName.i will appear, after you restart the analysis, in the directory of the project you are checking (i.e. in the same directory where the vcproj/vcxproj/cbproj files are located). Please attach the code fragment of the source file that you are having issues with as well. We will consider adding a diagnostic rule for your sample or revising the current diagnostics to reduce the number of false positives in your code.
## Issues with handling the PVS-Studio analysis report from within the IDE plugin

If you encounter any issues when handling the analyzer-generated log file within the window of our IDE plugin - namely: navigation on the analyzed source files is performed incorrectly and/or these files are not available for navigation at all; false positive markers or comments are added in wrong places in your code, and the like - please contact our support service and provide us with the plugin's trace log. You can get it by enabling the tracing mode through the 'PVS-Studio -> Options -> Specific Analyzer Settings -> TraceMode' option (Verbose mode). The trace log will be saved into the default user directory Application Data\Roaming\PVS-Studio under the name PVSTracexxxx_yyy.log, where xxxx is the PID of the devenv.exe / bds.exe process, and yyy is the log number for this process. Also, if possible, create an empty test project reproducing your issue and attach it to your message as well.

## Code analysis running from the IDE plugin is slow. Not all the logical processors are being utilized

The PVS-Studio plugin can parallelize code analysis at the level of source files, that is, analysis of any files you need to check (even within one project) can run in parallel. By default, the plugin sets the number of threads into which the analysis process is parallelized according to the number of processors in your system. You may change this number through the option PVS-Studio -> Options -> Common Analyzer Settings -> ThreadCount.

If it seems to you that not all of the available logical processors in your system are being utilized, you can increase the number of threads used for parallel analysis. But keep in mind that static analysis, unlike compilation, requires a large amount of memory: each analyzer instance needs about 1.5 Gbytes.
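As a rough illustration of this memory-based sizing rule, here is a minimal Python sketch. The 1.5 GB-per-thread figure is taken from the text above; the helper function itself is hypothetical and is not part of PVS-Studio:

```python
# Hypothetical helper: pick a thread count for parallel analysis so that
# each analyzer instance gets ~1.5 GB of RAM, as recommended above.
def recommended_thread_count(logical_cores: int, ram_gb: float,
                             gb_per_thread: float = 1.5) -> int:
    """Return the number of parallel analyzer threads to configure."""
    by_memory = int(ram_gb // gb_per_thread)  # threads the RAM can sustain
    return max(1, min(logical_cores, by_memory))

if __name__ == "__main__":
    # An 8-core machine with 8 GB of RAM: memory, not cores, is the limit.
    print(recommended_thread_count(8, 8))    # 5
    # A 4-core machine with 16 GB of RAM: all cores can be used.
    print(recommended_thread_count(4, 16))   # 4
```

The resulting number is what you would enter into the ThreadCount option when memory, rather than core count, is the limiting factor.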
If your system, despite having a multi-core processor, doesn't meet these requirements, you may encounter a sharp performance degradation caused by the analyzer having to rely on a swap file. In this case, we recommend reducing the number of parallel analyzer threads to meet the requirement of 1.5 Gbytes per thread, even if this number is smaller than the number of processor cores in your system. Keep in mind that with many concurrent threads, the HDD which stores the temporary preprocessed *.i files may itself become a bottleneck, as these files can grow in size quite quickly. One way to significantly reduce the analysis time is to use SSD disks or a RAID array.

A performance loss may also be caused by poorly configured antivirus software. Because the PVS-Studio plugin launches quite a large number of analyzer and cmd.exe instances, your antivirus may find this behavior suspicious. To optimize the analysis time, we recommend adding PVS-Studio.exe, as well as all of the related directories, to the exception list of your antivirus, or disabling real-time protection while the analysis is running. If you happen to use the Security Essentials antivirus (which became a part of Windows Defender starting with Windows 8), you may face a sharp performance degradation on certain projects/configurations. Please refer to this article on our blog for details concerning this issue.

## I get the message "Files with C or C++ source code for analysis not found." when checking a group of projects or one C/C++ project

Projects excluded from the general build in the Configuration Manager window of the Visual Studio environment are not analyzed. For the PVS-Studio analyzer to analyze C/C++ projects correctly, they must be compilable in Visual C++ and buildable without errors. That's why, when checking a group of projects or an individual project, PVS-Studio will check only those projects which are included in the general build.
Projects excluded from the build won't be analyzed. If none of the projects is included in the build, or you try to analyze a single project that was not included in the build, the message "Files with C or C++ source code for analysis not found" will be generated, and the analysis won't start. Use the Configuration Manager of the current Visual Studio solution to see which projects are included in and which are excluded from the general build.

## Errors of the "Cannot open include file", "use the /MD switch for _AFXDLL builds" kinds on projects that compile successfully in Visual Studio. Insertion of incorrect precompiled headers during preprocessing

If you encounter errors about missing includes, incorrect compiler switches (for example, the /MD switch) or macros while running static analysis on a project which compiles in the Visual Studio IDE without such errors, this behavior may be a manifestation of incorrect precompiled header files being inserted during preprocessing.

This issue arises because of the divergent behavior of the Visual C++ compiler (cl.exe) in its compiler and preprocessor modes. During a normal build, the compiler operates in the "regular" mode (i.e. the compilation results in object, binary files). However, to perform static analysis, PVS-Studio invokes the compiler in the preprocessor mode, in which the compiler performs the expansion of macros and include directives. But when the compiled file utilizes a precompiled header, the compiler will not expand the header itself when it encounters the #include directive: it will use the previously generated .pch file instead. In the preprocessing mode, however, the compiler will ignore the precompiled .pch entirely and will try expanding such an #include in the "regular" way, i.e. by inserting the contents of the header file in question. It is a common practice to use precompiled headers with the same name in multiple projects (the most common one being stdafx.h).
This, because of the disparities in the compiler behavior described earlier, often leads to the header from an incorrect project being included into the source file. There are several reasons why this can happen. For example, a correct .pch is specified for a file, but the include paths contain several different stdafx.h files, and the incorrect one has a higher priority for being included (that is, its include path occurs earlier on the compiler's command line).

Another possible scenario is one in which several projects include the same C++ source file. This file could be built with different options in different projects, and it could use different .pch files as well. But since this is just a single file in your file system, one of the stdafx.h files from one of the projects it is included into could be located in the same directory as the source file itself. And if stdafx.h is included into this source file by an #include directive using quotes, the preprocessor will always use the header file from the same directory as this file, regardless of the include paths passed through the command line.

Insertion of the incorrect precompiled header file will not always lead to preprocessing errors. However, if one of the projects, for example, utilizes MFC and the other one does not, or the projects possess different sets of include paths, the precompiled headers will be incompatible, and one of the preprocessing errors described in the title of this section will occur. As a result, you will not be able to perform static analysis on such a file. Unfortunately, it is impossible to bypass this issue on the analyzer's side, as it concerns the external preprocessor, that is, cl.exe. If you encounter it on one of your projects, you can solve it by one of the methods described below, depending on the cause.
In case the precompiled header was incorrectly inserted because of the position of its include path on the compiler's command line, you can simply move the path for the correct header file to the first position on the command line. If the incorrect header file was inserted because it is located in the same directory as the source file into which it is included, you can use the #include directive with angle brackets, for example: #include <stdafx.h> With this form, the compiler will ignore the files from the current directory when it performs the insertion.

## 'PVS-Studio is unable to continue due to IDE being busy' message under Windows 8. 'Library not registered' errors

When checking large (more than 1000 source files) projects with PVS-Studio under Windows 8, while using Visual Studio 2010 or newer versions, errors of the 'Library not registered' kind can sometimes appear, or the analyzer can even halt the analysis process altogether with the 'PVS-Studio is unable to continue due to IDE being busy' message. Such errors can be caused by several factors: an incorrect installation of Visual Studio, or compatibility conflicts between different versions of the IDE present within a system. Even if your system currently has a single IDE installation, but a different version was present in the past, it is possible that this previous version was uninstalled incorrectly or incompletely. In particular, a compatibility conflict can arise from simultaneously having installations of one of Visual Studio 2010\2012\2013\2015\2017 and Visual Studio 2005 and\or 2008 on your system. Unfortunately, PVS-Studio is unable to 'work around' these issues by itself, as they are caused by conflicts in the COM interfaces utilized by the Visual Studio API. If you encounter one of these issues, you have several different ways of dealing with it. Using PVS-Studio on a system with a 'clean' Visual Studio installation should resolve the issue.
However, if that is not an option, you can try analyzing your project in several passes, part by part. It is also worth noting that the issue at hand most often arises when PVS-Studio performs analysis simultaneously with some other IDE background operation (for example, when IntelliSense performs #include parsing). If you wait for this background operation to finish, you will possibly be able to analyze your whole project. Another option is to use alternative methods of running the analyzer to check your files. You can check any project by using the compiler monitoring mode from C and C++ Compiler Monitoring UI (Standalone.exe).

## After installing Visual Studio IDE on a machine with a previously installed PVS-Studio analyzer, the newly installed Visual Studio version lacks the 'PVS-Studio' menu item

Unfortunately, the specifics of the Visual Studio extensibility implementation prevent PVS-Studio from automatically 'picking up' a newly installed Visual Studio if the installation happened after the installation of PVS-Studio itself. Here is an example of such a situation. Let's assume that before the installation of PVS-Studio, the machine had only Visual Studio 2013 installed on it. After installing the analyzer, the Visual Studio 2013 menu will contain the 'PVS-Studio' item (if the corresponding option was selected during the installation), which allows you to check your projects in this IDE. Now, if Visual Studio 2015 is installed on this machine next (after PVS-Studio was already installed), the menu of this IDE version will not contain the 'PVS-Studio' item. To add analyzer integration to the newly installed Visual Studio, it is necessary to re-launch the PVS-Studio installer (the PVS-Studio_Setup.exe file). If you no longer have this file, you can download it from our site. The checkbox beside the required IDE version on the installer's Visual Studio selection page will be enabled after the corresponding Visual Studio version is installed.
# Tips on speeding up PVS-Studio

Any static code analyzer works slower than a compiler. This is because the compiler must work very quickly, even to the detriment of analysis depth. Static analyzers have to store the parse tree to be able to gather more information. Storing the parse tree increases memory consumption, while a lot of checks turn the tree traversal into a resource-intensive and slow process. In practice this is not so critical, since analysis is a rarer operation than compilation and users can wait a bit. However, we always want our tools to work faster. This article contains tips on how to significantly increase PVS-Studio's speed.

First, let's enumerate all the recommendations so that users learn right away how they can make the analyzer work faster:

• Use a multi-core computer with a large amount of memory.
• Use an SSD both for the system and the project to be analyzed.
• Configure (or turn off) your antivirus.
• If possible, use Clang as the preprocessor instead of Visual C++ (it can be chosen in the PVS-Studio settings) in Visual Studio 2010 and 2012.
• Exclude libraries you don't need from analysis (can be set in the PVS-Studio settings).

Let's consider all these recommendations in detail, explaining why they allow the tool to work faster.

## Use a multi-core computer with a large amount of memory

PVS-Studio has supported multi-threaded operation for a long time (starting with version 3.00, released in 2009). Parallelization is performed at the file level. If analysis is run on four cores, the tool checks four files at a time. This level of parallelism gives a significant performance boost. Judging by our measurements, there is a marked difference between the four-thread and one-thread analysis modes on test projects.
One-thread analysis takes 3 hours and 11 minutes, while four-thread analysis takes 1 hour and 11 minutes (these data were obtained on a four-core computer with 8 Gbytes of memory) - a 2.7-fold difference. It is recommended that you have at least one Gbyte of memory for each analyzer thread. Otherwise (many threads and little memory), the swap file will be used, which will slow down the analysis process. If necessary, you may restrict the number of the analyzer's threads in the PVS-Studio settings: Options -> Common Analyzer Settings -> Thread Count (documentation). By default, the number of threads launched corresponds to the number of cores available in the system. We recommend a computer with four cores and eight Gbytes of memory or better.

## Use an SSD both for the system and the project to be analyzed

Strange as it may seem, a slow hard disk is a bottleneck for the code analyzer. To understand why, consider the mechanism of its work. To analyze a file, the tool must first preprocess it, i.e. expand all the #define's, include all the #include's and so on. The preprocessed file has an average size of 10 Mbytes and is written to disk into the project folder. Only then does the analyzer read and parse it. The file grows in size because of that very inclusion of the contents of the #include files read from the system folders. We can't give exact measurements of an SSD's influence on the analysis speed, because that would require two otherwise identical computers differing only in their disks, but in practice the speed-up is considerable.

## Configure (or turn off) your antivirus

By the nature of its work, the analyzer is a complex and suspicious program from the viewpoint of an antivirus. Let's make clear right away that we don't mean the analyzer is recognized as a virus - we check this regularly. Besides, we sign our code with a certificate.
Let's go back to the description of the code analyzer's work. For each file being analyzed, a separate analyzer process is launched (the PVS-Studio.exe module). If a project contains 3000 files, the same number of PVS-Studio.exe instances will be launched. PVS-Studio.exe calls the Visual C++ environment-variable setup scripts (the vcvars*.bat files) for its purposes. It also creates a lot of preprocessed files (*.i) - one for each file being compiled - while it works. Auxiliary command (.cmd) files are used as well. Although none of these actions is virus activity, it still makes an antivirus spend many resources on meaninglessly checking the same things. We recommend that you add the following exceptions in the antivirus's settings:

• Do not scan system folders with Visual Studio:
  • C:\Program Files (x86)\Microsoft Visual Studio 11.0
  • C:\Program Files (x86)\Microsoft Visual Studio 12.0
  • C:\Program Files (x86)\Microsoft Visual Studio 14.0
  • etc.
• Do not scan the PVS-Studio folder:
  • C:\Program Files (x86)\PVS-Studio
• Do not scan the project folder:
  • For example, C:\Users\UserName\Documents\MyProject
• Do not scan Visual Studio .exe files:
  • C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe
  • C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\devenv.exe
  • C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\devenv.exe
  • etc.
• Do not scan the cl.exe compiler's .exe files (of different versions):
  • C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\cl.exe
  • C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\x86_amd64\cl.exe
  • C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\amd64\cl.exe
  • C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\cl.exe
  • C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\x86_amd64\cl.exe
  • C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\amd64\cl.exe
  • C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\cl.exe
  • C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\x86_amd64\cl.exe
  • C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\cl.exe
  • etc.
• Do not scan PVS-Studio and Clang .exe files (of different versions):
  • C:\Program Files (x86)\PVS-Studio\x86\PVS-Studio.exe
  • C:\Program Files (x86)\PVS-Studio\x86\clang.exe
  • C:\Program Files (x86)\PVS-Studio\x64\PVS-Studio.exe
  • C:\Program Files (x86)\PVS-Studio\x64\clang.exe

Perhaps this list is excessive, but we give it in this complete form so that, regardless of your particular antivirus, you know which files and processes do not need to be scanned. Sometimes there may be no antivirus at all (for instance, on a computer intended specifically to build code and run a code analyzer); in this case the speed will be the highest. Even if you have specified the above-mentioned exceptions in your antivirus, it will still spend some time on scanning. Our test measurements show that an aggressive antivirus might slow down the code analyzer's work by a factor of two or more.

## In Visual Studio 2010 and 2012, if possible, use Clang as the preprocessor instead of Visual C++ (it can be chosen in the PVS-Studio settings)

An external preprocessor is used to preprocess source files before PVS-Studio analysis. When working from the Visual Studio IDE, the native Microsoft Visual C++ preprocessor, cl.exe, is used by default.
Support for the independent Clang preprocessor was added in PVS-Studio 4.50, as it lacks some of the Microsoft preprocessor's shortcomings (although it has issues of its own). In some of the older versions of Visual Studio (namely, 2010 and 2012), the cl.exe preprocessor is significantly slower than Clang. Using the Clang preprocessor with these IDEs speeds up analysis by a factor of 1.5-1.7 in most cases. However, there is an aspect that should be considered.

The preprocessor to be used can be specified in the 'PVS-Studio|Options|Common Analyzer Settings|Preprocessor' field (documentation). The available options are: VisualCPP, Clang and VisualCPPAfterClang. The first two are self-evident. The third one indicates that Clang will be used first, and if preprocessing errors are encountered, the same file will be preprocessed by the Visual C++ preprocessor instead. If your project is analyzed with Clang without any problems, you may use the default option VisualCPPAfterClang or Clang - it doesn't matter. But if your project can be checked only with Visual C++, you'd better specify that option so that the analyzer doesn't launch Clang in vain trying to preprocess your files.

## Exclude libraries you don't need from analysis (can be set in the PVS-Studio settings)

Any large software project uses a lot of third-party libraries such as zlib, libjpeg, Boost, etc. Sometimes these libraries are built separately, in which case the main project has access only to the header and library (lib) files. And sometimes libraries are integrated very firmly into a project and virtually become part of it; in this case the main project is compiled together with the code files of these libraries. The PVS-Studio analyzer can be configured not to check the code of third-party libraries: even if there are some errors there, you most likely won't fix them.
But if you exclude such folders from analysis, you can significantly increase the overall analysis speed. It is also reasonable to exclude from analysis code that is certain not to change for a long time. To exclude some folders or separate files from analysis, use the PVS-Studio settings -> Don't Check Files (documentation). In the folder list you can specify either one common folder like c:\external-libs, or several individual folders: c:\external-libs\zlib, c:\external-libs\libjpeg, etc. You can specify a full path, a relative path or a mask. For example, you can just specify zlib and libjpeg in the folder list - these will be automatically treated as the folder masks *zlib* and *libjpeg*. To learn more, please see the documentation.

## Conclusion

Let's once again list the methods of speeding up PVS-Studio:

• Use a multi-core computer with a large amount of memory.
• Use an SSD both for the system and the project to be analyzed (Update: for PVS-Studio versions 5.22 and above, deploying the project itself on an SSD does not improve the overall analysis time).
• Configure (or turn off) your antivirus.
• If possible, use Clang as the preprocessor instead of Visual C++ (it can be chosen in the PVS-Studio settings) in Visual Studio 2010 and 2012.
• Exclude libraries you don't need from analysis (can be set in the PVS-Studio settings).

The greatest effect can be achieved by applying as many of these recommendations simultaneously as possible.

# Unattended deployment of PVS-Studio

This article describes working in the Windows environment. Working in the Linux environment is described in the article "How to run PVS-Studio on Linux".

## Unattended deployment

As with most other software, installing PVS-Studio requires administrative privileges.
Unattended setup is performed by specifying command line parameters, for example:

PVS-Studio_Setup.exe /verysilent /suppressmsgboxes /norestart /nocloseapplications

PVS-Studio may require a reboot if, for example, files that need to be updated are locked. To install PVS-Studio without a reboot, use the 'NORESTART' flag. Please also note that if the PVS-Studio installer is started in silent mode without this flag, the computer may be rebooted without any warnings or dialogs.

By default, all available PVS-Studio components will be installed. If this is undesirable, the required components can be selected with the 'COMPONENTS' switch (the following command lists all possible components):

PVS-Studio_setup.exe /verysilent /suppressmsgboxes /nocloseapplications /norestart /components=Core,Standalone,MSVS,MSVS\2010,MSVS\2012,MSVS\2013,MSVS\2015,MSVS\2017

Components with the 'MSVS' prefix in their name correspond to Microsoft Visual Studio plug-in extensions. The 'Core' component is mandatory; it contains the core command-line analyzer engine, which is required for all of the IDE extension plug-ins to operate. The Standalone component installs the compiler monitoring system, which allows you to analyze any kind of project as long as it uses one of the supported compilers.

During installation of PVS-Studio, all instances of Visual Studio should be shut down; however, to prevent loss of user data, PVS-Studio does not shut down Visual Studio itself. The installer will exit with code '100' if it is unable to install the extension (*.vsix) for any of the selected versions of Visual Studio.

PVS-Studio-Updater.exe can check for analyzer updates and, if an update is available, download it and perform an installation on the local system. To start the updater tool "silently", the same arguments can be used:

PVS-Studio-Updater.exe /VERYSILENT /SUPPRESSMSGBOXES

If there are no updates on the server, the updater will exit with the code '0'.
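The two exit codes mentioned above (100 from the installer when a *.vsix extension fails to install, 0 from the updater when there are no updates) are the ones a deployment script would typically branch on. A minimal Python sketch; the helper names and the message strings are our own illustration, not part of the PVS-Studio tooling:

```python
# Interpret exit codes of the PVS-Studio deployment tools as described
# above. Only the codes documented in the text are handled specially;
# everything else is reported generically.
INSTALLER_VSIX_FAILED = 100  # installer: couldn't install a *.vsix extension
UPDATER_NO_UPDATES = 0       # updater: no updates on the server

def describe_installer_exit(code: int) -> str:
    if code == INSTALLER_VSIX_FAILED:
        return "failed to install the *.vsix extension for a selected Visual Studio"
    return f"installer finished with code {code}"

def describe_updater_exit(code: int) -> str:
    if code == UPDATER_NO_UPDATES:
        return "no updates available on the server"
    return f"updater finished with code {code}"
```

In a real deployment script, these helpers would wrap `subprocess.run(["PVS-Studio_Setup.exe", "/verysilent", ...])` calls; that part is omitted here because it is Windows-specific.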
As PVS-Studio-Updater.exe performs a local deployment of PVS-Studio, devenv.exe should not be running at the time of the update either. If you connect to the Internet via a proxy with authentication, PVS-Studio-Updater.exe will prompt you for proxy credentials. If the proxy credentials are correct, PVS-Studio-Updater.exe will save them in the Windows Credential Manager and use them to check for updates in the future.

## Deploying licenses and customizing settings

Deployment of licenses is usually performed right after an unattended installation. You can enter license info via the PVS-Studio Options. Open PVS-Studio Options (PVS-Studio Menu -> Options...) from your running IDE instance, or start C and C++ Compiler Monitoring UI (Standalone.exe) and choose the "Registration" page. License information will be saved to the 'Settings.xml' file.

If you want to deploy PVS-Studio on many computers, you can install the license without entering it manually: place a valid 'Settings.xml' file into the corresponding folder under each user's profile. If many users share one desktop, each of them should have their own license. The default settings location is the following:

%USERPROFILE%\AppData\Roaming\PVS-Studio\Settings.xml

It is a user-editable XML file, but it can also be edited through the PVS-Studio IDE plugin on a target machine. Please note that all settings that should keep their default values can be omitted from the 'Settings.xml' file.

# Relative paths in PVS-Studio log files

When generating diagnostic messages, PVS-Studio by default generates absolute (full) paths to the files where errors have been found. That's why, when the report is saved, these full paths get into the resulting file (the XML plog file). This may cause trouble later - for example, when you need to handle this log file on a different computer, since paths to source files may differ between the two computers.
This will leave you unable to open files and use the integrated code navigation mechanism with such a log file. Although this problem can be solved by editing the paths in the XML report manually, it is much more convenient to get the analyzer to generate messages with relative paths right away, i.e. paths specified in relation to some fixed directory (for example, the root directory of the project source tree). This way of generating paths will allow you to get a log file with correct paths on any other computer - you will only need to change the root in relation to which all the paths in the PVS-Studio log file are expanded. The setting 'SourceTreeRoot', found on the page "PVS-Studio -> Options -> Specific Analyzer Settings", tells PVS-Studio to automatically generate relative paths as described and to replace their root with the new one.

Let's look at an example of how this mechanism is used. The 'SourceTreeRoot' option's field is empty by default, and the analyzer always generates full paths in its diagnostic messages. Assume that the project being checked is located in the "C:\MyProjects\Project1" directory. We can take the path "C:\MyProjects\" as the root of the project source tree, add it to the 'SourceTreeRoot' field, and start the analysis. Once the analysis is over, PVS-Studio will automatically replace the root directory we've defined with a special marker. That is, in a message for the file "C:\MyProjects\Project1\main.cpp", the path to this file will be given as "|?|Project1\main.cpp". Messages for files outside the specified root directory won't be affected; a message for the file "C:\MyCommonLib\lib1.cpp" will contain the absolute path to this file.
In the future, when handling this log file in the PVS-Studio IDE plugin, the marker |?| will be automatically replaced with the value specified in the 'SourceTreeRoot' setting's field - for instance, when using the False Alarm function or message navigation. If you need to handle this log file on another computer, you will just need to define a new path to the root of the source files' tree (for example, "C:\Users\User\Projects\") in the IDE plugin's settings, and the plugin will expand the full paths correctly. This option can also be used in the independent mode of the analyzer, when it is integrated directly into a build system (make, msbuild, and so on). It allows you to separate the full analysis of the source files from the subsequent investigation of the results, which can be especially helpful when working on a large project: for example, you can perform a one-time complete check of the whole project on the build server, while the analysis results are studied by several developers on their local computers. You can also use the setting 'UseSolutionDirAsSourceTreeRoot' described on the same page. This setting enables or disables the mode in which the path to the folder containing the solution file (*.sln) is used as the 'SourceTreeRoot' parameter. When this mode is enabled (True), the field 'SourceTreeRoot' displays the value '<Using solution path>'; the actual value of the 'SourceTreeRoot' parameter saved in the settings file does not change. When the setting 'UseSolutionDirAsSourceTreeRoot' is disabled (False), that value (if it was previously set) is displayed in the field 'SourceTreeRoot' again. Thus, the 'UseSolutionDirAsSourceTreeRoot' setting only changes how the root path is obtained: either the explicitly specified 'SourceTreeRoot' value or the path to the folder containing the solution file is used.
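Outside the IDE, the |?| marker is just text inside the plog file, so it can also be expanded by a plain text substitution if you ever need to post-process a report in a script. A minimal sketch (the new root path is only an example, not anything fixed by PVS-Studio):

```shell
# A plog entry stores the path as "|?|<relative path>" when SourceTreeRoot is set.
line='|?|Project1/main.cpp'
# Expanding the marker is an ordinary text substitution:
printf '%s\n' "$line" | sed 's#|?|#C:/Users/User/Projects/#g'
# -> C:/Users/User/Projects/Project1/main.cpp
```

In practice you would run the same `sed` expression over the whole plog file rather than a single line.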
# Compiler Monitoring System in PVS-Studio

## Introduction

The PVS-Studio Compiler Monitoring system (CLMonitoring) was designed for "seamless" integration of the PVS-Studio static analyzer into any build system under Windows that employs for compilation one of the preprocessors supported by the PVS-Studio.exe command-line analyzer (Visual C++, GCC, Clang, Keil MDK ARM Compiler 5/6, IAR C/C++ Compiler for ARM). To perform correct analysis of the source C/C++ files, the PVS-Studio.exe analyzer needs intermediate .i files, which are the output of the preprocessor and contain all the headers included into the source files as well as expanded macros. This requirement explains why one can't "just take and check" the source files on the disk: besides these files themselves, the analyzer also needs the information necessary for generating those .i files. Note that PVS-Studio doesn't include a preprocessor of its own, so it has to rely on an external preprocessor. As the name suggests, the Compiler Monitoring system is based on "monitoring" compiler launches when building a project, which allows the analyzer to gather all the information essential for the analysis (that is, necessary to generate the preprocessed .i files) of the source files being built. This, in turn, allows the user to check the project by simply rebuilding it, without having to modify the build scripts in any way. The monitoring system consists of a compiler monitoring server (the command-line utility CLMonitor.exe) and a UI client (Standalone.exe), and it is responsible for launching the analysis (CLMonitor.exe can also be used as a client when launched from the command line). In the current version, the system doesn't analyze the hierarchy of the running processes; instead, it simply monitors all the running processes in the system. This means it will also notice if several projects are being built in parallel, and it will monitor all of them.
## Working principles

The CLMonitor.exe server monitors launches of processes corresponding to the target compiler (for example, cl.exe for Visual C++ and g++.exe for GCC) and collects information about the environment of these processes. The monitoring server intercepts compiler invocations only for the same user it was itself launched under. This information is essential for a correct launch of the static analysis to follow and includes the following data:

• the main (working) folder of the process;
• the full process launch string (i.e. the name and all the launch arguments of the exe file);
• the full path to the process exe file;
• the environment variables of the process.

Once the project is built, the CLMonitor.exe server must be signaled to stop monitoring. This can be done either from CLMonitor.exe itself (if it was launched as a client) or from Standalone's interface. When the server stops monitoring, it uses the collected information about the processes to generate the corresponding intermediate files for the compiled sources. Only then is the PVS-Studio.exe analyzer itself launched to analyze those intermediate files and produce a standard PVS-Studio report, which you can work with both from the Standalone version and from any of the PVS-Studio IDE plugins.

## Getting started with CLMonitor.exe

Note: in this section, we will discuss how to use CLMonitor.exe to integrate the analysis into an automated build system. If you only want to check some of your projects manually, consider using the UI version of C and C++ Compiler Monitoring (Standalone.exe) as described below. CLMonitor.exe is a monitoring server directly responsible for monitoring compiler launches. It must be launched prior to the project build process. After the server is launched in monitoring mode, it will trace the invocations of supported compilers.
The supported compilers are:

• Microsoft Visual C++ (cl.exe) compilers
• C/C++ compilers from the GNU Compiler Collection (gcc.exe, g++.exe) and its derivatives
• the Clang (clang.exe) compiler and its derivatives
• Keil MDK ARM Compiler 5/6
• IAR C/C++ Compiler for ARM
• Texas Instruments ARM Compiler
• GNU Arm Embedded Toolchain

If you want the analysis to be integrated directly into your build system (or a continuous integration system and the like), you can't "just" launch the monitoring server, because its process blocks the flow of the build process while active. Instead, launch CLMonitor.exe with the monitor argument:

CLMonitor.exe monitor

In this mode, CLMonitor will launch itself in the monitoring mode and then terminate, so the build system can continue its work; at the same time, the second CLMonitor process (launched from the first one) will stay running and monitoring the build. Since no console is attached to the CLMonitor process in this mode, the monitoring server will - in addition to the standard stdin\stdout streams - output its messages into the Windows event log (Event Logs -> Windows Logs -> Application). Note: for the monitoring server to run correctly, it must be launched with the same privileges as the compiler processes themselves. To ensure correct logging of messages in the system event log, you need to launch the CLMonitor.exe process with elevated (administrative) privileges at least once; if it has never been launched with such privileges, it will not be allowed to write error messages into the system log. Notice that the server records into the system log only messages about its own runtime errors (handled exceptions), not the analyzer-generated diagnostic messages!
Once the build is finished, run CLMonitor.exe in the client mode so that it can generate the preprocessed files and call the static analyzer itself:

CLMonitor.exe analyze -l "c:\ptest.plog"

As the '-l' argument, the full path to the analyzer's log file must be passed. When running as a client, CLMonitor.exe will connect to the already running server and start generating the preprocessed files. The client will receive the information on all the compiler invocations that were detected, and then the server will terminate. The client, in its turn, will launch preprocessing and the PVS-Studio.exe analyzer for all the source files that were monitored. When finished, CLMonitor.exe will save a log file (c:\ptest.plog) which can be viewed in the Visual Studio PVS-Studio IDE plugin or in the Compiler Monitoring UI client (Standalone.exe, PVS-Studio|Open/Save|Open Analysis Report). You can also use the analyzer message suppression mechanism with CLMonitor through the '-u' argument:

CLMonitor.exe analyze -l "c:\ptest.plog" -u "c:\ptest.suppress" -s

The '-u' argument specifies a full path to the suppress file, generated through the 'Message Suppression' dialog in the Compiler Monitoring UI client (Standalone.exe, Tools|Message Suppression...). The optional '-s' argument appends the suppress file specified through '-u' with the newly generated messages from the current analysis run. To set additional display parameters and message filtering, you can pass the path to a diagnostics configuration file (.pvsconfig) using the '-c' argument:

CLMonitor.exe analyze -l "c:\ptest.plog" -c "c:\filter.pvsconfig"

## Saving a compilation monitoring dump and running analysis from this dump

CLMonitor.exe allows you to save the information gathered from monitoring a compilation process in a dump file. This makes it possible to re-run the analysis without having to re-build the project and monitor this build.
To save a dump, you will first need to run monitoring in the regular way with either the trace or monitor command, as described above. After the build is finished, you can stop monitoring and save the dump file. For this, run CLMonitor.exe with the saveDump command:

CLMonitor.exe saveDump -d c:\monitoring.zip

You can also finish monitoring, save the dump file, and run the analysis on the files that the monitoring has caught. For this, specify a path to the dump file for the CLMonitor.exe analyze command:

CLMonitor.exe analyze -l "c:\ptest.plog" -d c:\monitoring.zip

Running the analysis from a pre-generated dump file is possible with the following command:

CLMonitor.exe analyzeFromDump -l "c:\ptest.plog" -d c:\monitoring.zip

The compilation monitoring dump file is a simple zip archive containing, in an XML format, a list of parameters of the compiler processes that CLMonitor had caught (such as process command line arguments, environment variables, current working directory and so on). The analyzeFromDump command supports running the analysis from both the zipped dump file and an un-zipped XML.

## Using compiler monitoring from the UI client (Standalone.exe)

For a "manual" check of individual projects with CLMonitor, you can use the interface of the Compiler Monitoring UI client (Standalone.exe), which can be launched from the Start menu. To start monitoring, open the dialog box: Tools -> Analyze Your Files... (Figure 1): Figure 1 - The compiler monitoring start dialog box Click the "Start Monitoring" button. The CLMonitor.exe process will be launched and the environment's main window will be minimized. Start building your project, and when it's done, click the "Stop Monitoring" button in the bottom right-hand corner of the window (Figure 2): Figure 2 - The monitoring management dialog box If the monitoring server has successfully tracked all the compiler launches, the preprocessed files will be generated first and then they will be analyzed.
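The dump commands above make it possible to split monitoring and analysis into two fully independent phases, e.g. in a CI script. A sketch reusing the same example paths as above:

```shell
rem Phase 1: after the monitored build has finished, persist everything
rem the monitoring caught, without starting the analysis yet.
CLMonitor.exe saveDump -d c:\monitoring.zip

rem Phase 2: at any later time, re-run the analysis from the saved dump,
rem without rebuilding the project or monitoring it again.
CLMonitor.exe analyzeFromDump -l "c:\ptest.plog" -d c:\monitoring.zip
```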
When the analysis is finished, you will see a standard PVS-Studio report (Figure 3): Figure 3 - The resulting output of the monitoring server and the analyzer The report can be saved as an XML file (a .plog file): File -> Save PVS-Studio Log As...

## Compiler monitoring from Visual Studio

Convenient navigation through analyzer messages and source code is available in the Visual Studio IDE through the PVS-Studio extension. If the project to be analyzed can be opened inside this IDE, but the 'regular' analysis by PVS-Studio (i.e. PVS-Studio|Check|Solution) is not available (for example, for makefile Visual Studio projects), it is still possible to have all the benefits of Visual Studio by loading the analysis results (the plog file) into PVS-Studio with the 'PVS-Studio|Open/Save|Open Analysis Report...' command. This action can also be automated through the Visual Studio automation mechanism, by tying it, together with the analysis itself, to the project build event. As an example, let's review the integration of PVS-Studio analysis through compiler monitoring into a makefile project. This type of project is used, for instance, by the build system of Unreal Engine projects under Windows. As the command to run the build of our makefile project, let's specify the run.bat file: Figure 4 – configuring the makefile project The contents of the run.bat file are the following:

set slnPath=%1
set plogPath="%~2test.plog"
"%ProgramFiles(X86)%\PVS-Studio\CLMonitor.exe" monitor
waitfor aaa /t 10 2> NUL
nmake
"%ProgramFiles(X86)%\PVS-Studio\CLMonitor.exe" analyze -l %plogPath%
cscript LoadPlog.vbs %slnPath% %plogPath%

As arguments to run.bat, we pass the paths to the solution and the project. Compiler monitoring is launched first with CLMonitor.exe. The 'waitfor' command is used as a delay between launching the monitoring and building the project – without it, monitoring might not catch the first compiler invocations. The next step is the build command itself – nmake.
After the build is finished, we run the analysis, and after it is complete (the analysis results are saved next to the project file), we load the results into Visual Studio with the 'LoadPlog.vbs' script. Here is this script:

Set objArgs = Wscript.Arguments
Dim objSln
Set objSln = GetObject(objArgs(0))
Call objSln.DTE.ExecuteCommand("PVSStudio.OpenAnalysisReport", objArgs(1))

Here we use the DTE.ExecuteCommand function from the Visual Studio automation to access the running Visual Studio instance (in which our solution is currently open) directly from the command line. Running this command is virtually identical to clicking the 'PVS-Studio|Open/Save|Open Analysis Report...' menu item in the UI. To find a running Visual Studio instance, we use the GetObject method. Please note that this method uses the solution path to identify the running Visual Studio instance. Therefore, when using it, opening the same solution in several instances of Visual Studio is inadvisable – the method could potentially "miss", and the analysis results would be opened inside the wrong IDE instance – not the one that was used to run the build\analysis.

## Specifics of monitoring a build process of IAR Embedded Workbench for ARM

Sometimes, the IAR Embedded Workbench IDE can set the current working directory of the compiler process (iccarm.exe) to 'C:\Windows\System32' during the build process. Such behavior can cause issues with the analysis, considering that the current working directory of the compiler process is where CLMonitoring stores its intermediate files. To avoid writing intermediate files to 'C:\Windows\System32', which in turn can cause insufficient-access-rights errors, the workspace should be opened by double-clicking the workspace file (the 'eww' extension) in Windows Explorer. In this case, the intermediate files will be stored in the workspace file's directory.
## Incremental analysis

If you need to perform incremental analysis when using the Compiler Monitoring system, it is enough to "monitor" the incremental build, i.e. the compilation of the files that have been modified since the last build. This way only the modified/newly written code is analyzed. Such a scenario is natural for the Compiler Monitoring system: the analysis mode (full, or analysis of modified files only) depends solely on which build is monitored, full or incremental.

## Conclusion

Despite the convenience of the "seamless" analysis integration into the automated build process (through CLMonitor.exe) employed in this mode, one should still keep in mind the natural restrictions inherent in it - in particular, that a 100% capture of all the compiler launches during the build process is not guaranteed. Such a failure may be caused both by the influence of the external environment (for example, antivirus software) and by the specifics of the hardware and software environment (for example, the compiler may terminate too quickly when running on an SSD while the CPU's performance is too low to "catch" this launch). That's why we recommend that, whenever possible, you provide a complete integration of the PVS-Studio static analyzer with your build system (in case you use a build system other than MSBuild) or use the corresponding PVS-Studio IDE plugin.

# Viewing Analysis Results with C and C++ Compiler Monitoring UI

## Introduction

PVS-Studio can be used independently of the Visual Studio IDE. The core of the analyzer is a command-line utility that can analyze C/C++ files compilable by Visual C++, GCC, or Clang. For this reason, we developed a standalone application implemented as a shell for the command-line utility, which simplifies the work with the analyzer-generated message log.
PVS-Studio provides a convenient plug-in for the Visual Studio environment, allowing "one-click" analysis of this IDE's vcproj/vcxproj projects. There are, however, a few other build systems out there which we should support as well. Although PVS-Studio's analyzer core doesn't depend on any particular format used by this or that build system (such as, for example, MSBuild, GNU Make, NMake, CMake, ninja, and so on), users would have to carry out a few steps on their own to integrate PVS-Studio's static analysis into a build system other than the VCBuild/MSBuild projects supported by Visual Studio. These steps are as follows:

• First, the user would need to integrate a call to PVS-Studio.exe directly into the build script (if available) of the particular system; otherwise, the user will need to modify the build system itself. To learn more about it, read this documentation section. It should be noted right away that this way of using the analyzer is not always convenient, or even possible, as the user is not always permitted to modify the build script of the project they are currently working with.
• After PVS-Studio's static analysis has been integrated into the build system, the user needs some way to view and study the analyzer's output. This, in its turn, may require creating a special utility to convert the analyzer's log into a format convenient for the user. Note that if you have Visual Studio installed, you can at any time use the PVS-Studio plug-in for this IDE to view the report generated by the analyzer's core.
• Finally, when the analyzer finds genuine bugs in the code, the user needs functionality enabling them to fix those bugs in the source files of the project under analysis.

All these issues can be resolved by using the C and C++ Compiler Monitoring UI (Standalone.exe).
Figure 1 - Compiler Monitoring UI Compiler Monitoring UI enables "seamless" code analysis regardless of the compiler or build system one is using, and then allows you to work with the analysis results through a user interface similar to that implemented in the PVS-Studio plug-in for Visual Studio. The Compiler Monitoring UI also allows the user to work with the analyzer's log obtained through direct integration of the tool into the build system when there is no Visual Studio installed. These features are discussed below.

## Analyzing source files with the help of the compiler process monitoring system

Compiler Monitoring UI provides a user interface for the compilation monitoring system. The monitoring system itself (the console utility CLMonitor.exe) can be used independently of the Compiler Monitoring UI - for example, when you need to integrate static analysis into an automated build system. To learn more about the use of the compiler monitoring system, see this documentation section. To start monitoring compiler invocations, open the corresponding dialog: Tools -> Analyze Your Files... (Figure 2): Figure 2 - Build process monitoring start dialog Click "Start Monitoring". After that, CLMonitor.exe will be launched, and the main window of the tool will be minimized. Run the build, and after it is finished, click the "Stop Monitoring" button in the window in the bottom right corner of the screen (Figure 3): Figure 3 - Compiler monitoring dialog If the monitoring server has successfully tracked the compiler invocations, static analysis will be launched for the source files. When it is finished, you will get a regular PVS-Studio analysis report (Figure 4): Figure 4 - Results of the monitoring server's and static analyzer's work The analysis results can be saved into an XML file (with the plog extension) for further use through the menu command 'File -> Save PVS-Studio Log As...'.
## Incremental analysis when using the Compiler Monitoring System

Incremental analysis is performed in the same way as the analysis of the whole project; the key difference is that you run an incremental build instead of a full one. In that case, only the compiler runs for the modified files will be monitored, which allows checking just those files. The rest of the analysis process is completely identical to that described above in the section "Analyzing source files with the help of the compiler process monitoring system".

## Working with the list of diagnostic messages

Once you have got the analysis report with the analyzer-generated warnings, you can start viewing the messages and fixing the code. You can also load a report obtained earlier into the Compiler Monitoring UI. To do this, use the menu command 'File|Open PVS-Studio Log...'. The various message suppression and filtering mechanisms available in this utility are identical to those employed in the Visual Studio plug-in and are available in the settings window 'Tools|Options...' (Figure 5). Figure 5 - Analysis settings and message filtering mechanisms In the Analyzer Output window, you can navigate through the analyzer's warnings, mark messages as false positives, and add filters for messages. The message handling interface in the Compiler Monitoring UI is identical to that of the output window in the Visual Studio plug-in. For a detailed description of the message output window, see this documentation section.

## Navigation and search in the source code

Although the built-in editor of the Compiler Monitoring UI does not provide a navigation and autocomplete system as powerful and comfortable as Microsoft IntelliSense in the Visual Studio environment or other similar systems, Compiler Monitoring UI still offers several search mechanisms that can simplify your work with the analysis results.
Besides regular text search in the currently opened file (Ctrl + F), Compiler Monitoring UI also offers the Code Search dialog for text search in opened files and folders of the file system. This dialog can be accessed through the menu command 'Edit|Find & Replace|Search in Source Files...' (Figure 6): Figure 6 - Search dialog of Compiler Monitoring UI The dialog supports search in the current file, in all of the currently opened files, or in any folder of the file system. You can stop the search at any moment by clicking the Cancel button in the modal window that shows up after the search starts. Once the first match is found, the results start to be output right away into the child window Code Search Results (Figure 7): Figure 7 - Results of text search in project source files Of course, regular text search may be inconvenient or slow when you need to find some identifier's or macro's declarations and/or uses. In this case, you can use the mechanism of dependency search and navigation through #include directives. Dependency search in files allows you to search for a symbol/macro in those particular files that directly participated in the compilation, or, to be more exact, in the follow-up preprocessing performed during the check by the analyzer. To run the dependency search, click on the symbol whose uses you want to find to open the context menu (Figure 8): Figure 8 - Dependency search for a symbol The search results, just like with the text search, will be output into a separate child window: 'Find Symbol Results'. You can stop the search at any moment by clicking the Cancel button in the status bar of the Compiler Monitoring UI main window, near the progress indicator. Navigation through the #include directives allows you to open, in the Compiler Monitoring UI code editor, the files added into the current file through such a directive.
To open an included file, you also need to use the editor's context menu (Figure 9): Figure 9 - Navigation through include directives Keep in mind that information about dependencies is not available for every source file opened in Compiler Monitoring UI. When the dependencies base is not available to the utility, the above-mentioned context menu items will be inactive as well. The dependencies base is created only when analysis is run directly from the Compiler Monitoring UI itself; when opening an arbitrary C/C++ source file, the utility won't have this information. Note that when the analyzer's output obtained in the Compiler Monitoring UI is saved as a plog file, a special dpn file, associated with the plog file and containing the dependencies of the analyzed files, is created in the same folder. As long as the dpn file is present next to the plog file, it enables the dependency search when viewing the plog file in the Compiler Monitoring UI.

# Analyzer Work Statistics (Diagrams)

## Introduction

The PVS-Studio analyzer provides a work statistics gathering feature to see the number of detected messages (including suppressed ones) across different certainty levels and rule sets. The gathered statistics can be filtered and represented as a diagram in a Microsoft Excel file, showing the change dynamics for messages in the project under analysis.

## Gathering analyzer launch statistics

PVS-Studio can save launch statistics when analyzing source code through the Microsoft Visual Studio plugin (supported in versions starting with Visual Studio 2010). To enable the statistics saving feature, use the 'Save Solution Statistics' option available on the 'Specific Analyzer Settings' page, which can be accessed through the 'PVS-Studio|Options...' menu item of the plugin. The statistics are saved in the folder '%AppData%/PVS-Studio/Statistics'. For each analyzed Visual Studio solution, an associated subfolder with the same name is created.
For each solution analysis launch, once the analysis is over, an individual statistics file containing the analysis results is created (when analyzing Visual Studio projects from the command line, the statistics are also collected). The statistics file contains the information about the number of output messages (both new ones and ones hidden by means of the message suppression mechanism) in each PVS-Studio rule set (General Analysis, Optimization, 64-bit Analysis), for each error code and each certainty level. Messages marked as false positives are not included in the statistics. Each Visual Studio solution analysis launch is saved into an xml.zip file, which is an ordinary zip archive containing a simple-format XML file. Thanks to the open format, you can interpret these files on your own or use the PVS-Studio plugin's UI, which is described in detail further in this article.

## Statistics filtering and representation in Microsoft Excel

PVS-Studio provides an interface to filter the gathered analysis launch statistics and represent them by means of Microsoft Excel. To use this dialog, you need to have Microsoft Excel (2007 or later) installed on your computer, as well as the Visual Studio Tools for Office runtime (installed together with the Visual Studio IDE by default). You can open the statistics filtering dialog by clicking the 'PVS-Studio|Analysis Statistics...' menu item (also available in C and C++ Compiler Monitoring UI): Figure 1 - PVS-Studio analyzer launch statistics filtering dialog The 'Include Suppressed Messages' checkbox allows showing/hiding suppressed analyzer messages. Messages disabled on the Detectable Errors (PVS-Studio|Options...) settings page are also filtered out when the Excel document is generated (but the xml.zip statistics files themselves contain the complete information about all the error codes). The PVS-Studio statistics filtering dialog includes only the "freshest" data per day.
That is, if you ran the analysis several times during the day, only the latest statistics file will be used (the launch time is recorded in the XML statistics file). However, the complete statistics are saved for every launch and can be found in the folder '%AppData%/PVS-Studio/Statistics/%SolutionName%', if necessary. Once you have selected the required solutions in the list, set up the filters, and specified the time span you want to see the statistics for, an Excel document with the corresponding statistics data can be created and opened by clicking the 'Show in Excel' button (Figure 2). Figure 2 - Statistics across message rule sets The 'statistics across message rule sets' diagram shows the change dynamics for the total number of messages in each of the analyzer's rule sets, according to the filters set up previously. Though opened through the PVS-Studio dialog, these diagrams are ordinary Excel documents providing the complete functionality of Excel's interface (filtering, scaling, etc.) and can be saved for further use.

# Speeding up the analysis of C/C++ code through distributed build systems (IncrediBuild)

To speed up the analysis, you can use a distributed build system, for example, IncrediBuild. The analysis of C/C++ code in PVS-Studio can be divided into two stages: preprocessing and the analysis itself. Each of these stages can be executed remotely by the distributed build system. To analyze each compiled C/C++ file, PVS-Studio first launches an external preprocessor, and then the C++ analyzer itself. Each such process can be executed remotely. Depending on the type of the checked project, the PVS-Studio analysis is launched either through the PVS-Studio_Cmd.exe utility (for MSBuild projects) or using the compiler call monitoring utility - CLMonitor.exe \ Standalone.exe (for any build system).
Then one of these utilities will, for each checked file, first run the preprocessor (cl.exe or clang.exe for Visual C++ projects; for the rest, the same process that was used for compilation) and then the C++ analyzer, PVS-Studio.exe. These processes run concurrently, depending on the 'PVS-Studio|Options...|Common Analyzer Settings|ThreadCount' setting. Setting the 'ThreadCount' option to more than '16' (or to more than the number of processor cores, if the processor has more than 16 cores) is available only with a PVS-Studio Enterprise license. Please contact us to order a license. By increasing the number of concurrently checked files with the help of this setting, and distributing the execution of these processes to remote machines, you can significantly (several times) reduce the total analysis time.

## An example of IncrediBuild configuration

Here is an example of speeding up the PVS-Studio analysis by using the IncrediBuild distributed system. For this we'll need the IBConsole management utility. We will use the Automatic Interception Interface, which allows remotely executing any process intercepted by this system. Launching the IBConsole utility for distributed analysis using PVS-Studio looks as follows:

ibconsole /command=analyze.bat /profile=profile.xml

The analyze.bat file must contain a launch command for the analyzer, PVS-Studio_Cmd.exe or CLMonitor.exe, with all the necessary parameters (more detailed information about this can be found in the relevant sections of the analyzer documentation). The profile.xml file contains the configuration for the Automatic Interception Interface.
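As a sketch, for an MSBuild project analyze.bat could consist of a single PVS-Studio_Cmd.exe call. The solution and report paths below are hypothetical placeholders, and the exact set of parameters needed for your setup should be taken from the PVS-Studio_Cmd documentation:

```shell
rem Hypothetical analyze.bat: the analysis run that IBConsole will
rem intercept and distribute across the agents.
"%ProgramFiles(x86)%\PVS-Studio\PVS-Studio_Cmd.exe" -t "C:\Projects\MySolution.sln" -o "C:\Projects\MySolution.plog"
```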
Here is an example of such a configuration for the analysis of an MSBuild project using PVS-Studio_Cmd.exe:

<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<Profile FormatVersion="1">
  <Tools>
    <Tool Filename="PVS-Studio_Cmd" AllowIntercept="true" />
    <Tool Filename="cl" AllowRemote="true" />
    <Tool Filename="clang" AllowRemote="true" />
    <Tool Filename="PVS-Studio" AllowRemote="true" />
  </Tools>
</Profile>

Let's see what each record in this file means. The AllowIntercept attribute with the 'true' value is specified for PVS-Studio_Cmd. This means that a process with such a name will not itself be executed in a distributed manner; instead, the automatic interception system will track the child processes it spawns. For the preprocessor processes cl and clang and the C/C++ analyzer process PVS-Studio, the AllowRemote attribute is specified. This means that processes with such names, once intercepted from the AllowIntercept processes, can potentially be executed on other (remote) IncrediBuild agents. Before running IBConsole, you must set the 'PVS-Studio|Options...|Common Analyzer Settings|ThreadCount' setting according to the total number of cores available on all IncrediBuild agents. If this is not done, there will be no effect from using IncrediBuild! Note: during the analysis of Visual C++ projects, PVS-Studio uses the clang.exe supplied in the PVS-Studio distribution for preprocessing C/C++ files before the analysis, instead of the cl.exe preprocessor. This is done to speed up the preprocessing, as clang does it faster than cl. Some older versions of IncrediBuild perform a distributed launch of the clang.exe preprocessor not quite correctly, resulting in preprocessing errors; therefore, clang should not be specified in the IBConsole configuration file if your version of IncrediBuild handles clang incorrectly.
The type of preprocessor used during the analysis is specified with the 'PVS-Studio|Options...|Common Analyzer Settings|Preprocessor' setting. If you choose the 'VisualCpp' value for this setting, PVS-Studio will use only cl.exe for preprocessing; cl can be executed in a distributed manner, although it is slower than clang, which cannot be distributed. You should choose this setting depending on the type of the project and the number of agents available for analysis: with a large number of agents, VisualCpp is a reasonable choice; with a small number of agents, local preprocessing with clang might prove faster.

For a distributed analysis using CLMonitor / Compiler Monitoring UI (Standalone.exe), you must change the configuration file as follows: replace PVS-Studio_Cmd with CLMonitor or Standalone (depending on whether the check is triggered from the command line or from the UI); cl, if necessary, should be replaced with the type of the preprocessor used during the build (gcc, clang). For example:

<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<Profile FormatVersion="1">
  <Tools>
    <Tool Filename="CLMonitor" AllowIntercept="true" />
    <Tool Filename="gcc" AllowRemote="true" />
    <Tool Filename="PVS-Studio" AllowRemote="true" />
  </Tools>
</Profile>

When specifying the ThreadCount setting, please note that the coordinator machine of the analysis (i.e. the one which runs PVS-Studio_Cmd/CLMonitor/Standalone) will be responsible for processing the results coming from all of the PVS-Studio.exe processes. This job cannot be distributed; therefore, especially when ThreadCount is set to a very high value (more than 50 processes simultaneously), it is worth considering how to "unload" the coordinator machine from the analysis tasks themselves (i.e., from running the analyzer and preprocessor processes).
This can be done by using the '/AvoidLocal' IBConsole flag, or in the settings of the local IncrediBuild agent on the coordinator machine.

# Installing and updating PVS-Studio on Linux

PVS-Studio is distributed as Deb/Rpm packages or an archive. If you install from the repository, you will receive updates when a new version of the program is released. The distribution kit includes the following files:

• pvs-studio - the kernel of the analyzer;
• pvs-studio-analyzer - a utility for checking projects without integration;
• plog-converter - a utility for converting the analyzer report to different formats.

You can install the analyzer using the following methods:

## Install from repositories

### For debian-based systems:

wget -q -O - http://files.viva64.com/etc/pubkey.txt | \
  sudo apt-key add -
sudo wget -O /etc/apt/sources.list.d/viva64.list \
  http://files.viva64.com/etc/viva64.list
sudo apt-get update
sudo apt-get install pvs-studio

### For yum-based systems:

wget -O /etc/yum.repos.d/viva64.repo \
  http://files.viva64.com/etc/viva64.repo
yum update
yum install pvs-studio

### For zypper-based systems:

wget -q -O /tmp/viva64.key http://files.viva64.com/etc/pubkey.txt
sudo rpm --import /tmp/viva64.key
sudo zypper ar -f http://files.viva64.com/rpm viva64
sudo zypper update
sudo zypper install pvs-studio

## Manual installation

You can download PVS-Studio for Linux here.

### Deb package

sudo gdebi pvs-studio-VERSION.deb

or

sudo dpkg -i pvs-studio-VERSION.deb
sudo apt-get -f install

### Rpm package

sudo dnf install pvs-studio-VERSION.rpm

or

sudo zypper install pvs-studio-VERSION.rpm

or

sudo yum install pvs-studio-VERSION.rpm

or

sudo rpm -i pvs-studio-VERSION.rpm

### Archive

tar -xzf pvs-studio-VERSION.tgz
sudo ./install.sh

## Running the analyzer

After successfully installing the analyzer on your computer, follow the instructions on this page to check a project: "How to run PVS-Studio on Linux".

# Installing and updating PVS-Studio on macOS

PVS-Studio is distributed as a graphical installer, archive or via the Homebrew repository. Using installation from a repository, you can get analyzer updates automatically. The distribution kit includes the following files:

• pvs-studio - the kernel of the analyzer;
• pvs-studio-analyzer - a utility for checking projects without integration;
• plog-converter - a utility for converting the analyzer report to different formats;
• plog-converter-source.tgz - the source code of the plog-converter utility.

You can install the analyzer using the following methods:

## Installation from Homebrew

Installation:

brew install viva64/pvs-studio/pvs-studio

Update:

brew upgrade pvs-studio

## Manual Installation

### Installer:

Run the .pkg file and follow the instructions of the installer.

### Archive

Unpack the archive and place the executables in a directory available in PATH.

tar -xzf pvs-studio-VERSION.tgz

## Running the analyzer

After successfully installing the analyzer on your computer, follow the instructions on this page to check a project: "How to run PVS-Studio on Linux and macOS".

# How to run PVS-Studio on Linux and macOS

## Introduction

The PVS-Studio static analyzer for C/C++ code is a console application named pvs-studio, accompanied by several supporting utilities. For the analyzer to work, the environment for building your project must be configured.

A separate analyzer run is performed for every source file. The analysis results of several source files can be combined into one analyzer report or displayed in stdout.

There are three main work modes of the analyzer:

• Integration of pvs-studio call in a build system;
• Analyzer integration using modules for CMake/QMake;
• Analysis of a project without integration using the pvs-studio-analyzer utility.

## Installing and updating PVS-Studio

Examples of commands to install the analyzer from the packages and repositories are given on this page.

You can request a trial license for PVS-Studio via the feedback form.

To save the license information to a file, use the following command:

pvs-studio-analyzer credentials NAME KEY [-o LIC-FILE]

By default, the PVS-Studio.lic file will be created in the ~/.config/PVS-Studio/ directory. In this case you do not need to specify it in the analyzer run parameters; it will be picked up automatically.

The analyzer's license key is a UTF-8 encoded text file. You can view information about the license with the following command:

pvs-studio --license-info /path/to/PVS-Studio.lic

## Quick run

The best way to use the analyzer is to integrate it into your build system, namely near the compiler call. However, if you want to run the analyzer for a quick test on a small project, use the pvs-studio-analyzer utility.

Important. The project should be successfully compiled and built before analysis.

### CMake-project

To check a CMake project, we use the JSON Compilation Database format. To get the compile_commands.json file needed by the analyzer, add one flag to the CMake call:

cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=On <src-tree-root>

CMake supports the generation of a JSON Compilation Database for Unix Makefiles. The analysis starts with the following commands:

pvs-studio-analyzer analyze -l /path/to/PVS-Studio.lic -o /path/to/project.log -e /path/to/exclude-path -j<N>
plog-converter -a GA:1,2 -t tasklist -o /path/to/project.tasks /path/to/project.log

It is important to understand that all files to be analyzed should be compiled. If your project actively uses code generation, it should be built before the analysis; otherwise there may be errors during preprocessing.

### CMake/Ninja-project

To check a Ninja project, we also use the JSON Compilation Database format. To get the compile_commands.json file needed by the analyzer, execute the following commands:

cmake -GNinja <src-tree-root>
ninja -t compdb

The analysis is run with the following commands:

pvs-studio-analyzer analyze -l /path/to/PVS-Studio.lic -o /path/to/project.log -e /path/to/exclude-path -j<N>
plog-converter -a GA:1,2 -t tasklist -o /path/to/project.tasks /path/to/project.log

### Any project (Linux only)

This analysis mode requires the strace utility. The project can be built under trace with the following command:

pvs-studio-analyzer trace -- make

You can use any other build command with all the necessary parameters instead of make, for example:

pvs-studio-analyzer trace -- make debug

After you build your project, you should execute the commands:

pvs-studio-analyzer analyze -l /path/to/PVS-Studio.lic -o /path/to/project.log -e /path/to/exclude-path -j<N>
plog-converter -a GA:1,2 -t tasklist -o /path/to/project.tasks /path/to/project.log

Analyzer warnings will be saved into the specified project.tasks file. Various ways to view and filter the report file are described in the section "Filtering and viewing the analyzer report" within this document.
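For reference, the entries in compile_commands.json follow the JSON Compilation Database format. A minimal sketch of such a file (paths and compiler flags here are illustrative, not generated from a real project) looks like this:

```json
[
  {
    "directory": "/home/user/project/build",
    "command": "/usr/bin/c++ -DNDEBUG -Iinclude -c /home/user/project/main.cpp -o main.o",
    "file": "/home/user/project/main.cpp"
  }
]
```

Each entry records the working directory, the exact compilation command, and the source file, which is what lets the analyzer reproduce the preprocessing step for every translation unit.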
If your project isn't a CMake project, or you have problems with the strace utility, you can try generating the compile_commands.json file with the help of the Bear utility. This file will let the analyzer check a project successfully only when environment variables don't influence the compilation of files.

### If you use cross compilers

In this case, the compilers may have special names and the analyzer will not be able to find them. To analyze such a project, you must explicitly list the names of the compilers without the paths:

pvs-studio-analyzer analyze ... --compiler COMPILER_NAME --compiler gcc --compiler g++ --compiler COMPILER_NAME
plog-converter ...

Also, when you use cross compilers, the directory with the compiler's header files will be different. It is necessary to exclude such directories from the analysis with the -e flag, so that the analyzer doesn't issue warnings for these files.

pvs-studio-analyzer ... -e /path/to/exclude-path ...

There shouldn't be any issues with cross compilers when the analyzer is integrated into the build system.

## Incremental analysis mode

Incremental analysis mode (analysis of changed files only) is available for the pvs-studio-analyzer utility; to use it, run the utility with the --incremental parameter:

pvs-studio-analyzer analyze ... --incremental ...

This mode works independently of the incremental project build. E.g. even if your project is completely compiled, the first run of the incremental analysis will still analyze all files. During the next run, only changed files will be analyzed. To track changed files, the analyzer saves service information in a directory named .PVS-Studio in the launch directory. For this reason, you should always run the analyzer from the same directory when using this mode.
## Integration of PVS-Studio into a build system

### Examples of integration in CMake, QMake, Makefile, and WAF

Test projects are available in the official PVS-Studio repository on GitHub:

### This is how the integration with CLion and QtCreator looks

Figure 1 shows an example of analyzer warnings viewed in CLion:

Figure 1 - PVS-Studio warnings viewed in CLion

Figure 2 demonstrates an example of analyzer warnings viewed in QtCreator:

Figure 2 - PVS-Studio warnings viewed in QtCreator

### Preprocessor parameters

The analyzer checks not the source files themselves, but preprocessed files. This method allows the analyzer to perform a deeper and higher-quality analysis of the source code. In this regard, there are several restrictions on the compilation parameters being passed: namely, parameters that prevent the compiler from running in preprocessor mode, or that damage the preprocessor output. A number of debugging and optimization flags, for example -O2, -O3, -g3, -ggdb3 and others, create changes which affect the preprocessor output. The analyzer will report invalid parameters when it detects them. This does not require any changes in the settings of the project being checked, but some of the parameters should be excluded for the analyzer to run properly.

### Configuration file *.cfg

When integrating the analyzer into the build system, you should pass it a settings file (*.cfg). You may choose any name for the configuration file, but it should be passed with the "--cfg" flag. A settings file named PVS-Studio.cfg, located in the same directory as the executable file of the analyzer, is picked up automatically without being passed through the command-line parameters.

Possible settings in the configuration file:

• exclude-path (optional) specifies a directory whose files need not be checked. Usually these are directories of system files or linked libraries.
There can be several exclude-path parameters.
• platform (required) specifies the platform. Possible variants: linux32 or linux64.
• preprocessor (required) specifies the preprocessor. Possible variants: gcc, clang, keil.
• language (required) specifies the version of the C/C++ language that the analyzer expects to see in the code of the file to be analyzed (--source-file). Possible variants: C, C++. An incorrect setting of this parameter can lead to V001 errors, because every supported language variant has certain specific keywords.
• lic-file (optional) contains the absolute path to the license file.
• analysis-mode (optional) defines the type of warnings. It is recommended to use the value "4" (General Analysis, suitable for most users).
• output-file (optional) specifies the full path to the file where the analyzer report will be stored. If this parameter is missing in the configuration file, all messages about the errors found will be displayed in the console.
• sourcetree-root (optional) - by default, when generating diagnostic messages, PVS-Studio issues absolute, full paths to the files where it detected errors. Using this setting you can specify the root part of the path that the analyzer will automatically replace with a special marker. For example, the absolute path to the file /home/project/main.cpp will be replaced with the relative path |?|/main.cpp, if /home/project was specified as the root.
• source-file (required) contains the absolute path to the source file to be analyzed.
• skip-cl-exe (required) tells the analyzer that the preprocessing stage can be skipped and the analysis can be started.
• i-file (required) contains the absolute path to the preprocessed file.
• no-noise (optional) disables the generation of Low Certainty messages (Level 3). When working with large-scale projects, the analyzer might generate a huge number of warnings.
Use this setting when it is not possible to fix all the warnings at once, so you can concentrate on fixing the most important warnings first.

An important note: you don't need to create a new config file to check each file. Simply reuse the existing settings, for example, lic-file, etc.

## Integration of PVS-Studio with Continuous Integration systems

Any of the described methods of integrating the analysis into a build system can be automated in a Continuous Integration system. This can be done in Jenkins, TeamCity and others by setting up an automatic analysis launch and notification about the found errors. It is also possible to integrate with SonarQube, the continuous code quality inspection platform, using the PVS-Studio plugin. The plugin ships with the analyzer in the downloadable .tgz archive. Setup instructions are available on this page: "Integration of PVS-Studio analysis results into SonarQube".

## Filtering and viewing the analyzer report

### Plog Converter Utility

To convert the analyzer report to different formats (*.xml, *.tasks and so on) you can use the Plog Converter utility, which is open source. Enter the following in the command line of the terminal:

plog-converter [options] <path to the file with PVS-Studio log>

All options can be specified in any order. Available options:

• -t - utility output format.
• -o - the path to the file that will be used for output. If it is missing, the output will be redirected to the standard output device.
• -s - path to the configuration file. The file is similar to the analyzer configuration file PVS-Studio.cfg. Information on the root of the project and the excluded directories (exclude-path) is taken from this file.
• -r - a path to the project directory.
• -a - the set of diagnostic rules to filter by. The full list: GA, 64, OP, CS, MISRA. They can be used together with specified levels, for example: "-a GA:1,2;64:1;CS".
• -d - a list of excluded diagnostics, separated by commas: "-d V595,V730".
• -m - enables the display of CWE ID and MISRA ID for the found warnings: "-m cwe -m misra".
• -e - use stderr instead of stdout.

A detailed description of the levels of certainty and sets of diagnostic rules is given in the documentation section "Getting Acquainted with the PVS-Studio Static Code Analyzer".

At this point, the available formats are:

• xml - a convenient format for further processing of the analysis results, which is supported by the plugin for SonarQube;
• csv - a file storing tabular data (numbers and text) in plain text;
• errorfile - the output format of gcc and clang;
• tasklist - an error format that can be opened in QtCreator;
• html - an html report with a short description of the analysis results. It suits best for e-mail notifications;
• fullhtml - a report with sorting of the analysis results according to different parameters and navigation along the source code.

The result of the utility's execution is a file containing messages of the specified format, filtered by the rules that are set in the configuration file.

### Viewing the analyzer report in QtCreator

The following is an example of a command which would be suitable for most users, for opening the report in QtCreator:

plog-converter -a GA:1,2 -t tasklist -o /path/to/project.tasks /path/to/project.log

Figure 3 demonstrates an example of a .tasks file, viewed in QtCreator:

Figure 3 - A .tasks file viewed in QtCreator

### Html Report View in a Web Browser or an Email Client

The analyzer report converter allows generating an Html report of two types:

1. FullHtml - a full report for viewing the results of the analysis. You can search and sort messages by type, file, level, code and warning text. A feature of this report is the ability to navigate from a warning to the location of the error in the source code file. The source code files which triggered the analyzer warnings are copied to html and become a part of the report.
Examples of the report are shown in figures 4-5.

Figure 4 - Example of the Html report main page

Figure 5 - Warning view in code

Example of a command for receiving such a report:

plog-converter -a GA:1,2 -t fullhtml /path/to/project.log -o /path/to/report_dir

This report is convenient to send in an archive, or to share over the local network using any web server, for example, Lighttpd, etc.

2. Html - a lightweight report, consisting of a single .html file. It contains brief information about the found warnings and is suitable for notification by email. A report example is shown in Figure 6.

Figure 6 - Simple Html page example

Example of a command for receiving such a report:

plog-converter -a GA:1,2 -t html /path/to/project.log -o /path/to/project.html

### Viewing the analyzer report in Vim/gVim

An example of commands to open the report in the gVim editor:

plog-converter -a GA:1,2 -t errorfile -o /path/to/project.err /path/to/project.log
gvim /path/to/project.err
:set makeprg=cat\ %
:silent make
:cw

Figure 7 demonstrates an example of viewing an .err file in gVim:

Figure 7 - viewing the .err file in gVim

### Viewing the analyzer report in GNU Emacs

An example of commands to open the report in the Emacs editor:

plog-converter -a GA:1,2 -t errorfile -o /path/to/project.err /path/to/project.log
emacs
M-x compile
cat /path/to/project.err 2>&1

Figure 8 demonstrates an example of viewing an .err file in Emacs:

Figure 8 - viewing the .err file in Emacs

### Viewing the analyzer report in LibreOffice Calc

An example of a command to convert the report to CSV format:

plog-converter -a GA:1,2 -t csv -o /path/to/project.csv /path/to/project.log

After opening the project.csv file in LibreOffice Calc, you must add the autofilter: Menu Bar --> Data --> AutoFilter. Figure 9 demonstrates an example of viewing a .csv file in LibreOffice Calc:

Figure 9 - viewing a .csv file in LibreOffice Calc

### Configuration file

More settings can be saved into a configuration file with the following options:

• enabled-analyzers - an option similar to the -a option among the command line parameters.
• sourcetree-root - a string that specifies the path to the root of the source code of the analyzed file. If set incorrectly, the result of the utility's work will be difficult to handle.
• errors-off - globally disabled warning numbers, enumerated with spaces.
• exclude-path - a file whose path contains a value from this option will be excluded from processing.
• disabled-keywords - keywords. Messages pointing to strings which contain these keywords will be excluded from processing.

The option name is separated from the values by a '=' symbol. Each option is specified on a separate line. Comments are written on separate lines; insert # before the comment.
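Putting the options above together, a minimal sketch of such a configuration file might look as follows (all values are illustrative; see the option descriptions above for the exact value formats):

```
# plog-converter settings (illustrative values)
enabled-analyzers = GA:1,2
sourcetree-root = /home/user/project
errors-off = V595 V730
exclude-path = /usr/include
disabled-keywords = deprecated
```

The file is then passed to plog-converter with the -s option.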
### Adding custom output formats

To add your own output format, follow these steps:

Create your own output class, deriving it from the IOutput class, and override the virtual method void write(const AnalyzerMessage& msg). Describe the message output in the required format in this method. The fields of the AnalyzerMessage structure are defined in the analyzermessage.h file. The following actions are the same as for the existing output classes (XMLOutput, for example). In OutputFactory::OutputFactory, add your format to m_outputs by analogy with the ones already specified there. Alternatively, add it through the OutputFactory::registerOutput method. After these actions, the format will be available in the -t option of the utility.

## Mass Suppression of Analyzer Messages

Mass warnings suppression allows you to easily adopt the analyzer in any project and immediately start to benefit from it, i.e. to find new bugs. This mechanism allows you to plan fixing of the suppressed warnings in the future, without distracting developers from their current tasks. There are several ways of using this mechanism, depending on how the analyzer is integrated.

### Analysis using the pvs-studio-analyzer Utility

To suppress all analyzer warnings (the first time and on subsequent occasions) you need to execute the command:

pvs-studio-analyzer suppress /path/to/report.log

Analysis of the project can be run as before; suppressed warnings will be filtered out:

pvs-studio-analyzer analyze ... -o /path/to/report.log
plog-converter ...

With this run, suppressed warnings will be saved in the current directory in a file named suppress_base.json, which should be stored with the project. New suppressed warnings will be added to this file. If you need to specify a different name or location for the file, the commands above may be supplemented with the path to the file with suppressed warnings.
### Direct Integration of the Analyzer into the Build System

Direct integration might look as follows:

.cpp.o:
	$(CXX) $(CFLAGS) $(DFLAGS) $(INCLUDES) $< -o $@
	pvs-studio --cfg $(CFG_PATH) --source-file $< --language C++ --cl-params $(CFLAGS) $(DFLAGS) $(INCLUDES) $<

In this mode, the analyzer cannot check source files and filter warnings simultaneously, so filtration and warnings suppression require additional commands.

To suppress all the warnings, you must also run the command:

pvs-studio-analyzer suppress /path/to/report.log

To filter a new log, you must use the following commands:

pvs-studio-analyzer filter-suppressed /path/to/report.log
plog-converter ...

The file with suppressed warnings also has the default name suppress_base.json; you can optionally specify an arbitrary name for it.

## Common problems and their solutions

1. The strace utility issues the following message:

strace: invalid option -- 'y'

You must update the strace program to a newer version. Analyzing a project without integration into a build system is a complex task; this option allows the analyzer to get important information about the compilation of the project.

2. The strace utility issues the following message:

strace: umovestr: short read (512 < 2049) @0x7ffe...: Bad address

Such errors occur in system processes and do not affect the project analysis.

3. The strace utility issues the following message:

No compilation units found

The analyzer could not find files for analysis. Perhaps you are using cross compilers to build the project. See the section "If you use cross compilers" in this documentation.

4. The analyzer report contains strings like this:

r-vUVbw<6y|D3 h22y|D3xJGy|D3pzp(=a'(ah9f(ah9fJ}*wJ}*}x(->'2h_u(ah

The analyzer saves the report in an intermediate format. To view the report, you must convert it to a readable format using the plog-converter utility, which is installed together with the analyzer.

5. The analyzer issues the following error:

Incorrect parameter syntax: The ... parameter does not support multiple instances.

One of the analyzer's parameters is incorrectly specified several times. This can happen if some of the analyzer parameters are specified in the configuration file and some are passed through the command line, and one of them is accidentally specified more than once. If you use pvs-studio-analyzer, almost all the parameters are detected automatically, which is why it can work without a configuration file; duplication of such parameters can also cause this error.

6. The analyzer issues the warning: V001 A code fragment from 'path/to/file' cannot be analyzed.

If the analyzer is unable to parse some code fragment, it skips it and issues the V001 warning. This situation doesn't influence the analysis of other files, but if the fragment is in a header file, the number of such warnings can be very high. Send us a preprocessed file (.i) for the code fragment causing this issue, so that we can add support for it.

## Conclusion

If you have any questions or problems with running the analyzer, feel free to contact us.

# How to Run PVS-Studio Java

## Introduction

The PVS-Studio Java static code analyzer consists of 2 main parts: the analyzer core, which performs the analysis, and plugins for integration into build systems and IDEs. Plugins extract the project structure (a collection of source files and a classpath) and pass this information to the analyzer core. In addition, plugins are responsible for deploying the core: it will be automatically downloaded during the first analysis run. The analyzer has several different ways to integrate into a project.

## System Requirements

• Operating system: Windows, Linux, macOS;
• Minimum required Java version to run the analyzer: Java 8 (64-bit).
Note: the project being analyzed can use any Java version;
• Minimum required IntelliJ IDEA version: 2017.2 (optional).

## Plugin for Maven

For projects using the Maven build system, you can use the pvsstudio-maven-plugin. To do this, you need to add the following to the pom.xml file:

<pluginRepositories>
  <pluginRepository>
    <id>pvsstudio-maven-repo</id>
    <url>http://files.viva64.com/java/pvsstudio-maven-repository/</url>
  </pluginRepository>
</pluginRepositories>
<build>
  <plugins>
    <plugin>
      <groupId>com.pvsstudio</groupId>
      <artifactId>pvsstudio-maven-plugin</artifactId>
      <version>7.00.29596</version>
      <configuration>
        <analyzer>
          <outputType>text</outputType>
          <outputFile>path/to/output.txt</outputFile>
        </analyzer>
      </configuration>
    </plugin>
  </plugins>
</build>

After that, you can run the analysis:

mvn pvsstudio:pvsAnalyze

In addition, the analysis can be included in a project build cycle by adding the execution element:

<plugin>
<groupId>com.pvsstudio</groupId>
<artifactId>pvsstudio-maven-plugin</artifactId>
<version>7.00.29596</version>
<executions>
<execution>
<phase>compile</phase>
<goals>
<goal>pvsAnalyze</goal>
</goals>
</execution>
</executions>
</plugin>

To enter the license information you can use the following command:

mvn pvsstudio:pvsCredentials "-Dpvsstudio.username=USR" "-Dpvsstudio.serial=KEY"

After that, the license information will be saved in %APPDATA%/PVS-Studio-Java/PVS-Studio.lic in Windows OS or in ~/.config/PVS-Studio-Java/PVS-Studio.lic in macOS and Linux.

### Configuration

Analyzer configuration is performed in the <analyzer> section. A list of analyzer options is given below.

• <outputFile>PATH</outputFile> - a path to the file with the analyzer report. Default value: ${basedir}/PVS-Studio. Note: for a report in the 'fullhtml' format, outputFile must specify a directory in which a folder named 'fullhtml' containing the analyzer report will be created. Default value: ${basedir}/fullhtml;
• <outputType>TYPE</outputType> - analyzer report format (text, log, json, xml, tasklist, fullhtml, errorfile). Default value: json;
• <sourceTreeRoot>PATH</sourceTreeRoot> - a common directory root to source files, that will be used to generate an analyzer report with relative paths to the analyzed source files. The value is absent by default;
• <enabledWarnings>V6XXX, ....</enabledWarnings> - list of enabled analyzer rules. When enabled rules are specified here, all other rules are considered to be disabled. The value is absent by default. When this option is absent, all of the analyzer rules are considered to be enabled (unless the additional disabledWarnings option is specified);
• <disabledWarnings>V6XXX, ....</disabledWarnings> - list of disabled diagnostics. When disabled rules are specified here, all other rules are considered to be enabled. The value is absent by default. When this option is absent all of the analyzer rules are considered to be enabled (unless the additional enabledWarnings option is specified);
• <exclude>PATH, ....</exclude> - list of files and/or directories which have to be excluded from the analysis (absolute or relative paths). The value is absent by default - all files will be analyzed unless the analyzeOnly option is provided;
• <analyzeOnly>PATH, ....</analyzeOnly> - list of files and/or directories which have to be analyzed (absolute or relative paths). Default value: absent - all files will be analyzed unless the exclude option is provided;
• <suppressBase>PATH</suppressBase> - path to a suppress file containing suppressed analyzer messages that will not be included in the analyzer's report. You can add analyzer messages to a suppress file from the interface of the PVS-Studio IDE plug-in for IntelliJ IDEA. Default value: ${basedir}/.PVS-Studio/suppress_base.json;
• <failOnWarnings>BOOLEAN</failOnWarnings> - abort the build if the analyzer generates a warning. Default value: false;
• <incremental>BOOLEAN</incremental> - enable incremental analysis (analysis will be performed for the modified files only). Default value: false;
• <forceRebuild>BOOLEAN</forceRebuild> - flag that forces a full rebuild of the cached program metamodel, which contains information about the program's structure and types. Default value: false;
• <disableCache>BOOLEAN</disableCache> - flag that disables caching of the program metamodel. Default value: false;
• <timeout>NUMBER</timeout> - timeout for analyzing a single file (in minutes). Default value: 10;
• <verbose>BOOLEAN</verbose> - save temporary analyzer files (for example, containing the structure of the analyzed project). Default value: false;
• <javaPath>PATH</javaPath> - path to the java interpreter which will run the analyzer core. Default value: java from the PATH environment variable;
• <jvmArguments>FLAG, ....</jvmArguments> - additional JVM flags with which the analyzer core will be executed. Default value: -Xss64m.

## Plugin for Gradle

For projects using the Gradle build system, you can use the pvsstudio-gradle-plugin.
To do this, you need to add the following to the build.gradle file:

buildscript {
    repositories {
        mavenCentral()
        maven {
            url uri('http://files.viva64.com/java/pvsstudio-maven-repository/')
        }
    }
    dependencies {
        classpath group: 'com.pvsstudio', name: 'pvsstudio-gradle-plugin', version: '7.00.29596'
    }
}
apply plugin: com.pvsstudio.PvsStudioGradlePlugin
pvsstudio {
    outputType = 'text'
    outputFile = 'path/to/output.txt'
}

After that, you can run the analysis:

$ ./gradlew pvsAnalyze

To enter the license information you can use the following command:

./gradlew pvsCredentials "-Ppvsstudio.username=USR" "-Ppvsstudio.serial=KEY"

After that, the license information will be saved in %APPDATA%/PVS-Studio-Java/PVS-Studio.lic on Windows, or in ~/.config/PVS-Studio-Java/PVS-Studio.lic on macOS and Linux.

### Configuration

The analyzer is configured in the "pvsstudio" section. The list of available configuration options is given below.

• outputFile = "PATH" - path to the file with the analyzer report. Default value: $projectDir/PVS-Studio. Note: for a report in the 'fullhtml' format, outputFile must specify a directory, in which a folder named 'fullhtml' with the analyzer report will be created. Default value: ${projectDir}/fullhtml;
• outputType = "TYPE" - format of the analyzer report (text, log, json, xml, tasklist, fullhtml, errorfile). Default value: json;
• threadsNum = NUMBER - number of analysis threads. Default value: the number of available processors;
• sourceTreeRoot = "PATH" - the common root directory of the source files, used to generate an analyzer report with relative paths to the analyzed source files. The value is absent by default;
• enabledWarnings = ["V6XXX", ....] - list of enabled analyzer rules. When enabled rules are specified here, all other rules are considered to be disabled. The value is absent by default. When this option is absent, all of the analyzer rules are considered to be enabled (unless the additional disabledWarnings option is specified);
• disabledWarnings = ["V6XXX", ....] - list of disabled diagnostics. When disabled rules are specified here, all other rules are considered to be enabled. The value is absent by default. When this option is absent all of the analyzer rules are considered to be enabled (unless the additional enabledWarnings option is specified);
• exclude = ["PATH", ....] - list of files and/or directories which have to be excluded from the analysis (absolute or relative paths). The value is absent by default - all files will be analyzed unless the analyzeOnly option is provided;
• analyzeOnly = ["PATH", ....] - list of files and/or directories which have to be analyzed (absolute or relative paths). Default value: absent - all files will be analyzed unless the exclude option is provided;
• suppressBase = "PATH" - path to a suppress file containing suppressed analyzer messages that will not be included in the analyzer's report. You can add analyzer messages to a suppress file from the interface of the PVS-Studio plug-in for IntelliJ IDEA. Default value: $projectDir/.PVS-Studio/suppress_base.json;
• failOnWarnings = BOOLEAN - abort the build if the analyzer issues a warning. Default value: false;
• incremental = BOOLEAN - enable incremental analysis (only modified files are analyzed). Default value: false;
• forceRebuild = BOOLEAN - forcibly rebuild the entire cached program metamodel, which contains information about the program's structure and types. Default value: false;
• disableCache = BOOLEAN - disable caching of the program metamodel. Default value: false;
• timeout = NUMBER - timeout for analyzing a single file (in minutes). Default value: 10;
• verbose = BOOLEAN - keep temporary analyzer files (for example, files describing the structure of the analyzed project). Default value: false;
• javaPath = "PATH" - path to the java interpreter that will run the analyzer core. Default value: java from the PATH environment variable;
• jvmArguments = ["FLAG", ....] - additional JVM flags for running the analyzer core. Default value: ["-Xss64m"].

## Plugin for IntelliJ IDEA

The PVS-Studio Java analyzer can also be used as a plugin for IntelliJ IDEA. In this case, the project structure is parsed by the IDE itself, and the plugin provides a convenient graphical interface for working with the analyzer.

The following instructions describe how to install the plugin:

1) File -> Settings -> Plugins -> Browse repositories
2) Manage repositories
3) Add repository (http://files.viva64.com/java/pvsstudio-idea-plugins/updatePlugins.xml)
4) Install

Then you should enter license information.
1) Analyze -> PVS-Studio -> Settings
2) Registration tab

And finally, you can run the analysis of the current project.

## Integration of PVS-Studio with Continuous Integration systems and SonarQube

Any of the above methods of integrating the analysis into a build system can be used for automated analysis in Continuous Integration systems. This can be done in Jenkins, TeamCity, and other CI systems by setting up automatic analysis launch and notification about the detected errors. It is also possible to integrate the PVS-Studio analyzer with the SonarQube continuous quality inspection platform using the corresponding PVS-Studio plug-in. Installation instructions are available on this page: "Integration of PVS-Studio analysis results into SonarQube".

## Using analyzer core directly

If none of the above methods of integration into a project is appropriate, you can use the analyzer core directly. To download the analyzer core, use the following link: http://files.viva64.com/java/pvsstudio-cores/7.00.29596.zip

The analyzer requires a collection of source files (or directories with source files) for analysis, and classpath information. The set of source files for the analysis is specified with the -s flag, and the classpath is specified with the -e flag. In addition, using the --ext-file flag you can specify a file that lists all classpath entries, separated by pathSeparator (':' on Unix systems, ';' on Windows).

All available analyzer core flags can be viewed with the following command:

java -jar pvs-studio.jar --help

Examples of a quick launch:

java -jar pvs-studio.jar -s A.java B.java C.java -e Lib1.jar Lib2.jar -j4 -o report.txt -O text
java -jar pvs-studio.jar -s src/main/java --ext-file classpath.txt -j4 -o report.txt -O text

## Suppression of analyzer messages

There are several ways to suppress analyzer messages.

1. Using special comments:

void f() {
    int x = 01000; //-V6061
}

2.
Using a special suppression file.

The special 'suppress' file can be generated by the PVS-Studio plug-in for IntelliJ IDEA. The path to the suppress file can be specified as a parameter of the Maven or Gradle analyzer plug-ins, or passed as a parameter to a direct call of the analyzer core. When suppressing messages through IDEA, the suppress file is generated in the '.PVS-Studio' directory, which is located in the directory of the project currently opened in the IDE. The name of the suppress file is suppress_base.json.

3. Using @SuppressWarnings(....) annotations.

The analyzer recognizes several annotations and skips warnings for code that has already been marked with them. For example:

@SuppressWarnings("OctalInteger")
void f() {
    int x = 01000;
}

## Common problems and their solutions

### "GC overhead limit exceeded" occurs or the analysis aborts by timeout

The insufficient memory problem can be solved by increasing the available amount of memory and stack.

Plugin for Maven:

<jvmArguments>-Xmx4096m, -Xss256m</jvmArguments>

Plugin for Gradle:

jvmArguments = ["-Xmx4096m", "-Xss256m"]

Plugin for IntelliJ IDEA:

1) Analyze -> PVS-Studio -> Settings
2) Environment tab -> JVM arguments

Typically, the default amount of memory may be insufficient when analyzing generated code with a large number of nested constructs. It may be better to exclude such code from the analysis (using exclude) to speed it up.

### How to change the Java executable used to run the analyzer?

By default, the analyzer runs its core with the java found in the PATH environment variable. If you need to run the analysis with some other java, you can specify it manually.
Plugin for Maven:

<javaPath>C:/Program Files/Java/jdk1.8.0_162/bin/java.exe</javaPath>

Plugin for Gradle:

javaPath = "C:/Program Files/Java/jdk1.8.0_162/bin/java.exe"

Plugin for IntelliJ IDEA:

1) Analyze -> PVS-Studio -> Settings
2) Environment tab -> Java executable

### Unable to start the analysis (V00X errors occur)

If you are unable to run the analysis, please email us (support@viva64.com) and attach the text files from the .PVS-Studio directory (located in the project directory).

# Integrating PVS-Studio Analysis Results into SonarQube

## Introduction

SonarQube is an open-source platform developed by SonarSource for continuous inspection of code quality. It performs automatic reviews with static analysis of code to detect bugs, code smells, and security vulnerabilities in 20+ programming languages. SonarQube offers reports on duplicated code, coding standards, unit tests, code coverage, code complexity, comments, bugs, and security vulnerabilities. SonarQube can record metrics history and provides evolution graphs.

The SonarQube main page (https://sonarqube.com) showcases its capabilities.

To import analysis results into SonarQube, PVS-Studio provides a special plugin, which adds messages produced by PVS-Studio to the message base of the SonarQube server. SonarQube's Web interface allows you to filter the messages, navigate the code to examine bugs, assign tasks to developers and keep track of the progress, analyze bug count dynamics, and measure the code quality of your projects.

## System requirements

• Operating system: Windows, Linux;
• Java 8 or higher;
• SonarQube 6.7 LTS or higher;
• PVS-Studio analyzer;
• PVS-Studio Enterprise license.
## PVS-Studio plugins and how to install them

The following plugins for SonarQube are available to PVS-Studio users:

• sonar-pvs-studio-plugin.jar - a plugin that allows importing PVS-Studio analysis results into a project on the SonarQube server;
• sonar-pvs-studio-lang-plugin.jar - a plugin that allows creating a quality profile for the C/C++/C# languages. This plugin is provided for compatibility of PVS-Studio plugins when moving from older versions of SonarQube to newer ones. It allows you to keep the metrics/statistics obtained earlier and will probably be discarded in future releases. When creating a new project, use a profile with one of the standard languages (C++, C#, Java).

The guide on installing and starting the SonarQube server can be found on the page Installing the Server. Once the SonarQube server is installed, copy the plugin (sonar-pvs-studio-plugin.jar) to this directory:

SONARQUBE_HOME/extensions/plugins

Depending on what language the analysis results refer to, install the corresponding plugins from the list below (some of them may be installed by default, depending on the SonarQube edition in use):

• SonarC++ plugin (GitHub)
• SonarC# plugin (Marketplace)
• SonarJava plugin (Marketplace)

Restart the SonarQube server.

## Creating and setting up a Quality Profile

A Quality Profile is a collection of diagnostic rules to apply during an analysis. You can include PVS-Studio diagnostics in existing profiles or create a new profile. Every profile is bound to a particular programming language, but you can create several profiles with different rule sets. The ability to perform any action on quality profiles is granted to members of the sonar-administrators group.
A new profile is created using the menu command Quality Profiles -> Create.

To include PVS-Studio diagnostics in the active profile, select the desired repository through Rules -> Repository. After that, click the Bulk Change button to add all of the diagnostics to your profile, or select the desired diagnostics manually in the diagnostics activation window. You can also filter diagnostics by tags before selecting them for your profile.

After creating/tweaking your profiles, set one of them as the default profile. The default profile is applied automatically to source files written in the specified language.

You don't necessarily have to group your profiles based on the utilities used. You can create a single profile for your project and add diagnostics from different utilities to it.

When a new PVS-Studio version is released, new diagnostics may appear, so you will have to update the plugin on the SonarQube server and add the new rules to the Quality Profile that uses PVS-Studio diagnostics. One of the sections below describes how to set up automatic updates.

## Code analysis and importing results into SonarQube

Analysis results can be imported into SonarQube using the SonarQube Scanner utility. It requires a configuration file named sonar-project.properties stored in the project's root directory. This file contains analysis parameters for the current project, and you can pass all or some of these settings as launch parameters of the SonarQube Scanner utility. Below we discuss the standard scanner launch scenarios for importing PVS-Studio analysis results into SonarQube on different platforms. SonarQube Scanner automatically picks up the configuration file sonar-project.properties in the current launch directory.

### Windows: C, C++, C#

MSBuild projects are checked with the PVS-Studio_Cmd.exe utility. By launching this utility once, you can get both an analysis report and the configuration file sonar-project.properties:

PVS-Studio_Cmd.exe ...
-o Project.plog --sonarqubedata ...

This is what the scanner launch command looks like:

sonar-scanner.bat ^
  -Dsonar.projectKey=ProjectKey ^
  -Dsonar.projectName=ProjectName ^
  -Dsonar.projectVersion=1.0 ^
  -Dsonar.pvs-studio.reportPath=Project.plog

### Windows, Linux: Java

Add the following lines to the Java project under analysis (depending on the project type):

Maven:

<outputType>xml</outputType>
<outputFile>output.xml</outputFile>
<sonarQubeData>sonar-project.properties</sonarQubeData>

Gradle:

outputType = 'xml'
outputFile = 'output.xml'
sonarQubeData = 'sonar-project.properties'

Just like in the previous case, the configuration file will be created automatically once the Java analyzer has finished the check. The scanner launch command will look like this:

sonar-scanner.bat ^
  -Dsonar.projectKey=ProjectKey ^
  -Dsonar.projectName=ProjectName ^
  -Dsonar.projectVersion=1.0 ^
  -Dsonar.pvs-studio.reportPath=output.xml

### Linux: C, C++

For a Linux project, you will have to create the configuration file manually after the analysis. As an example, it may include the following parameters:

sonar.projectKey=my:project
sonar.projectName=My project
sonar.projectVersion=1.0
sonar.pvs-studio.reportPath=report.xml
sonar.sources=.

This is what the converter and scanner launch commands look like:

plog-converter ... -t xml -o report.xml ...
sonar-scanner.sh \
  -Dsonar.projectKey=ProjectKey \
  -Dsonar.projectName=ProjectName \
  -Dsonar.projectVersion=1.0 \
  -Dsonar.pvs-studio.reportPath=report.xml

### sonar-project.properties

To fine-tune the analysis further, you can compose the configuration file manually from the following settings (or edit the automatically created file when checking MSBuild and Java projects):

• sonar.pvs-studio.reportPath - path to the analyzer report in the .plog (for MSBuild projects) or .xml format;
• sonar.pvs-studio.licensePath - path to the PVS-Studio license file (when checking an MSBuild project, you can pass this parameter using sonar.pvs-studio.settingsPath);
• sonar.pvs-studio.sourceTreeRoot - path to the project directory on the current computer for cases when the report was generated on another computer, in a Docker container, etc. This parameter enables you to pass reports containing relative paths to sonar.pvs-studio.reportPath (when checking an MSBuild project, you can pass this parameter using sonar.pvs-studio.settingsPath);
• sonar.pvs-studio.settingsPath - path to the Settings.xml file for MSBuild projects checked on Windows. This file already contains the licensePath and sourceTreeRoot information, so you don't have to specify them explicitly;
• sonar.pvs-studio.cwe - specifies whether CWE IDs are added to analyzer warnings. This option is off by default. Use the value active to enable it;
• sonar.pvs-studio.misra - specifies whether MISRA IDs are added to analyzer warnings. This option is off by default. Use the value active to enable it;
• sonar.pvs-studio.language - activates the C/C++/C# language plugin. This option is off by default. Use the value active to turn it on. Enable this option if you are using a profile with the C/C++/C# languages added through a separate PVS-Studio plugin. This plugin is provided for compatibility of PVS-Studio plugins when moving from older versions of SonarQube to newer ones.
It allows you to keep the metrics/statistics obtained earlier and will probably be discarded in future releases.

The other standard scanner configuration parameters are described in the general documentation on SonarQube.

## For software security specialists

PVS-Studio's capabilities of detecting potential vulnerabilities are described on the page PVS-Studio SAST (Static Application Security Testing). Security-related information on the analyzed code provided by PVS-Studio is additionally highlighted by SonarQube in the imported analysis results.

### The cwe, cert, and misra tags

PVS-Studio warnings can be grouped by different security standards through Issues -> Tag or Rules -> Tag:

• misra
• cert
• cwe

You can also select a particular CWE ID if available (when a warning falls into several CWE IDs at once, it is marked with a single cwe tag; use prefixes in the warning text to filter by IDs).

### CWE and MISRA prefixes in warnings

The configuration file sonar-project.properties provides the following options:

sonar.pvs-studio.cwe=active
sonar.pvs-studio.misra=active

They enable the inclusion of CWE and MISRA IDs in analyzer warnings. Warnings can be filtered by tags at any time, regardless of the specified options.

### Statistics on detected CWE and MISRA problems

The tab Projects -> Your Project -> Measures shows various code metrics calculated each time a check is launched. All collected information can be visualized as graphs. The Security section allows you to track the number of warnings with CWE and MISRA tags for the current project. The other, general metrics of PVS-Studio warnings can be viewed in a separate section, PVS-Studio.

## Additional features of the PVS-Studio plugin

Most actions available to SonarQube users are standard for this platform. These actions include viewing and sorting analysis results, changing warning status, and so on.
For this reason, this section focuses only on the additional features that come with the PVS-Studio plugin.

### Sorting warnings by groups

PVS-Studio warnings are divided into several groups, some of which may be irrelevant to the current project. That's why we added an option allowing you to filter diagnostics by the following tags when creating a profile or viewing the analysis results:

PVS-Studio diagnostics group - SonarQube tag:
• General analysis - pvs-studio#ga
• Micro-optimizations - pvs-studio#op
• 64-bit errors - pvs-studio#64
• MISRA - pvs-studio#misra
• Customers' specific diagnostics - pvs-studio#cs
• Analyzer fails - pvs-studio#fails

These are the standard tags used in PVS-Studio warnings:

Code quality control standard - SonarQube tag:
• CWE - cwe
• CERT - cert
• MISRA - misra

Unlike the pvs-studio# tag group, the standard SonarQube tags may include, depending on the active quality profile, messages from other tools in addition to those from PVS-Studio.

### Viewing code metrics

The tab Projects -> Your Project -> Measures shows various code metrics calculated each time a check is launched. When the analyzer plugin is installed, a new section, PVS-Studio, is also added, where you can find useful information on your project and have graphs plotted.

## Customizing the analyzer before analysis

When working with a large code base, the analyzer inevitably generates a lot of messages, and it's usually impossible to address them all at once. To focus on the most important warnings and keep the statistics "uncluttered", you can tweak the analyzer settings and filter the log before launching SonarQube Scanner. There are several ways to do this.

1. You can have less "noise" in the analyzer's output by using the No Noise option. It completely turns off messages of the Low Certainty level (the third level). After restarting the analysis, all messages of this level will disappear from the analyzer's output.
To enable this option, use the "Specific Analyzer Settings" window on Windows or refer to the general documentation for Linux and macOS.

2. You can speed up the check by excluding external libraries, test code, etc. from the analysis. To add files and directories to the exceptions list, use the "Don't Check Files" settings window on Windows or refer to the general documentation for Linux and macOS.

3. If you need additional control over the output, for example, message filtering by level or error code, use the message filtering and conversion utility (PlogConverter) for the current platform.

4. If you need to change a warning's level, you can do so in the settings of the analyzer itself rather than in SonarQube. PVS-Studio has the following certainty levels: High, Medium, Low, and Fails. The respective levels in SonarQube are Critical, Major, Minor, and Info. See the page "Additional diagnostics configuration" on how to change warning levels.

## Automatic updates of PVS-Studio plugins

The update procedure can be automated with the SonarQube Web API. Suppose you have set up an automatic PVS-Studio update system on your build server (as described in the article "Unattended deployment of PVS-Studio"). To update the PVS-Studio plugins and add the new diagnostics to the Quality Profile without using the Web interface, perform the following steps (the example below is for Windows; the same algorithm applies to other operating systems):

• Copy the sonar-pvs-studio-plugin.jar file from the PVS-Studio installation directory to $SONARQUBE_HOME\extensions\plugins.
• Restart the SonarQube server.

Suppose your SonarQube server is installed in C:\Sonarqube\ and is running as a service; PVS-Studio is installed in C:\Program Files (x86)\PVS-Studio\. The script which will automatically update the PVS-Studio distribution and sonar-pvs-studio-plugin will then look like this:

set PVS-Studio_Dir="C:\Program Files (x86)\PVS-Studio"
set SQDir="C:\Sonarqube\extensions\plugins\"

rem Update PVS-Studio
cd /d "C:\temp\"
xcopy %PVS-Studio_Dir%\PVS-Studio-Updater.exe . /Y
call PVS-Studio-Updater.exe /VERYSILENT /SUPPRESSMSGBOXES
del PVS-Studio-Updater.exe

rem Stop the SonarQube server
sc stop SonarQube

rem Wait until the server is stopped
ping -n 60 127.0.0.1 >nul

xcopy %PVS-Studio_Dir%\sonar-pvs-studio-plugin.jar %SQDir% /Y

sc start SonarQube

rem Wait until the server is started
ping -n 60 127.0.0.1 >nul
• Specify the key for the Quality Profile where you want the new diagnostics activated. This key can be retrieved through the GET request api/qualityprofiles/search, for example (in one line):
curl http://localhost:9000/api/qualityprofiles/search
-v -u admin:admin

The server's response will be as follows:


{
"profiles": [
{
"key":"c++-sonar-way-90129",
"name":"Sonar way",
"language":"c++",
"languageName":"c++",
"isInherited":false,
"isDefault":true,
"activeRuleCount":674,
"rulesUpdatedAt":"2016-07-28T12:50:55+0000"
},
{
"key":"c-c++-c-pvs-studio-60287",
"name":"PVS-Studio",
"language":"c/c++/c#",
"languageName":"c/c++/c#",
"isInherited":false,
"isDefault":true,
"activeRuleCount":347,
"rulesUpdatedAt":"2016-08-05T09:02:21+0000"
}
]
}  

Suppose you want the new diagnostics to be added to your PVS-Studio profile for the languages 'c/c++/c#'. The key for this profile is the value c-c++-c-pvs-studio-60287.

• Run the POST request api/qualityprofiles/activate_rules with the parameters profile_key (obligatory) and tags. The profile_key parameter specifies the key of the SonarQube profile where the diagnostics will be activated. In our example, this parameter has the value c-c++-c-pvs-studio-60287.
Note that a profile key may contain special characters, so the key needs to be URL-escaped when passed in the POST request. In our example, the profile key c-c++-c-pvs-studio-60287 must be converted into c-c%2B%2B-c-pvs-studio-60287.

The tags parameter is used to pass the tags of the diagnostics you want activated in your profile. To activate all PVS-Studio diagnostics, pass the pvs-studio tag.

The request for adding all diagnostics to a PVS-Studio profile will look like this (in one line):

curl --request POST -v -u admin:admin --data
"profile_key=c-c%2B%2B-c-pvs-studio-60287&tags=pvs-studio"
http://localhost:9000/api/qualityprofiles/activate_rules

## Recommendations and limitations

• The SonarQube server deletes closed issues older than 30 days by default. We recommend disabling this option so that you can keep track of the number of warnings addressed over a long time period (say, a year);
• If you have modules specified in the sonar-project.properties file and you have a separate analyzer report for each of them, you need to merge these reports using the PlogConverter utility and specify the resulting report once in sonar.pvs-studio.reportPath.
• The developers of SonarQube recommend using SonarQube Scanner for MSBuild to analyze MSBuild projects. This scanner is a wrapper around the standard SonarQube scanner. It makes the creation of the sonar-project.properties configuration file easier by automatically adding modules (projects of a solution) to it and specifying the paths to the source files to be analyzed. However, we faced some limitations that led to incorrect configuration files when working with complex projects. Because of that, we recommend using the standard SonarQube scanner to import PVS-Studio analysis results.
• All source files that you want analyzed must be stored on the same disk. This limitation is imposed by the SonarQube platform. Source files stored on disks other than that specified in the sonar.projectBaseDir property will not be indexed, and the messages generated for those files will be ignored.
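For the multi-module case above, the merge step might be sketched as follows. This is an assumption-laden example: it presumes that PlogConverter accepts several input reports at once and that the Plog render type merges them into a single file; the module report names and output paths are placeholders:

```shell
rem Merge per-module PVS-Studio reports into one file before
rem passing it to SonarQube Scanner (module names are placeholders).
PlogConverter.exe Module1.plog Module2.plog ^
    --renderTypes=Plog ^
    --outputNameTemplate=MergedReport ^
    --outputDir=D:\Reports
```

The resulting merged report would then be specified once in sonar.pvs-studio.reportPath.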

# Managing XML Analyzer Report (.plog file)

 In this section we describe working in the Windows environment. Working in the Linux environment is described in the section "How to run PVS-Studio on Linux".

The analysis results that PVS-Studio generates after checking a project (either from the Visual Studio plugin or in command-line batch mode) are typically presented as an XML log file (".plog"). Performing the analysis through direct integration of the C++ analyzer into the build system produces an unparsed 'raw' log file. You can view these files in the PVS-Studio plugin for Visual Studio or in the C and C++ Compiler Monitoring UI (Standalone.exe). These formats, however, are not convenient for viewing directly in a text editor, sending via email, and so on. The PVS-Studio package comes with a number of utilities that allow you to manage such log files in various ways.

## Preliminary filtering of analysis results

The analysis results can be filtered even before the analysis starts by using the No Noise setting. When working on a large code base, the analyzer inevitably generates a large number of warnings, and it is often impossible to fix all of them right away. Therefore, to concentrate on fixing the most important warnings first, the analysis can be made less "noisy" with this option. It completely disables the generation of Low Certainty (level 3) warnings. After restarting the analysis, the messages of this level will disappear from the analyzer's output.

When circumstances allow, and all of the more important messages have been fixed, the 'No Noise' mode can be switched off – all of the messages that disappeared before will become available again.

To enable this setting, use the Specific Analyzer Settings page.

## Converting the analysis results

When opening a log file in a text editor, a user has to deal with XML markup or a 'raw' unreadable analyzer log. To convert the analysis results into a more convenient format, use the PlogConverter utility, which comes with PVS-Studio and can be found in the PVS-Studio installation directory ("C:\Program Files (x86)\PVS-Studio" by default). You can also download the source code of the utility.

Use the "--help" option to display the basic information about the utility:

PlogConverter.exe --help

Let's take a closer look at the utility's parameters:

• --renderTypes (or -t): defines the possible formats into which the log files can be converted. The supported formats are Html, FullHtml, Totals, Txt, Tasks, Csv, and Plog. When not defined explicitly, the log file will be converted into all of these formats.
• Html: converts the log file into an html file (convenient for automatic delivery to addressees on your mailing list).
• FullHtml: converts the log file and the source files into HTML files (you can view the analyzer report in a browser, with sorting of warnings and navigation through the code).
• Txt: converts the log file into a text file.
• Csv: converts the log file into a comma-separated file (convenient for viewing in Microsoft Excel, for example).
• Totals: outputs the statistics on the issued warnings across the types (GA, OP, 64, CS, MISRA) and certainty levels. Detailed description of the levels of certainty and sets of diagnostic rules is given in the documentation section "Getting Acquainted with the PVS-Studio Static Code Analyzer".
• Tasks: converts the log file into a format that can be opened in QtCreator.
• Plog: can be used to merge several plog XML files into one, or to transform a raw unparsed log into a parsed XML one.

You can combine different format options by separating them with "," (no spaces), for example:

PlogConverter.exe D:\Project\results.plog --renderTypes=Html,Csv,Totals

or

PlogConverter.exe D:\Project\results.plog -t Html,Csv,Totals
• --analyzer (or -a): filters the warnings by a specified mask. The mask format:
MessageType:MessageLevels

"MessageType" can be set to one of the following types: GA, OP, 64, CS, MISRA, Fail

"MessageLevels" can be set to values from 1 to 3

You can combine different masks by separating the options with ";" (no spaces), for example (written in one line):

PlogConverter.exe D:\Project\results.plog --renderTypes=Html,Csv,Totals
--analyzer=GA:1,2;64:1

or

PlogConverter.exe D:\Project\results.plog -t Html,Csv,Totals
-a GA:1,2;64:1

The command format reflects the following logic: convert the ".plog" file into the Html, Csv, and Totals formats, keeping only the general-analysis (GA) warnings of the 1st and 2nd levels and the 64-bit (64) warnings of the 1st level.

• --excludedCodes (or -d): specifies a list of warnings (separated with ",") that shouldn't be included in the resulting log file. For example, if you don't want the V101, V102, and V200 warnings to be included (written in one line):
PlogConverter.exe D:\Project\results.plog --renderTypes=Html,Csv,Totals
--excludedCodes=V101,V102,V200

or

PlogConverter.exe D:\Project\results.plog -t Html,Csv,Totals
-d V101,V102,V200
• --settings (or -s): defines the path to the PVS-Studio configuration file. PlogConverter will read from it your custom settings for the warnings you want turned off. In effect, this parameter extends the exclusion list defined by the "--excludedCodes" parameter.
• --srcRoot (or -r): specifies the replacer of the "SourceTreeRoot" marker. If the path to the project's root directory was replaced with the "SourceTreeRoot" marker (|?|), this parameter becomes obligatory (otherwise the utility won't be able to find the project files).
• --outputDir (or -o): defines the directory where the converted log files will be created. If not specified, the files will be created in the same directory where "PlogConverter.exe" is located.
• --outputNameTemplate (or -n): specifies the filename template without extension. All the converted log files will have the same name but different extensions (".txt", ".html", ".csv", ".tasks" or ".plog" depending on the "--renderTypes" parameter).
• --errorCodeMapping (or -m): enables the display of CWE and MISRA IDs for the detected warnings, for example: "-m cwe,misra".

## Notifying the developer team

Once you have the converted log files, you can send them to other people involved in the development (team leaders, development manager, and so on). This process can be automated by including the analysis step into scheduled "night" builds, where the log file will be converted into the required format and sent to the specified addressees.

Here is an example. Once you have a "fresh" analysis report converted into an HTML file, run the SendEmail utility. The following basic parameters are of interest:

• -f : message sender;
• -t : message recipient. You can specify more than one recipient;
• -s : SMTP server;
• -u : message subject;
• -o message-charset=utf-8 : specifies the UTF-8 character set;
• -o message-file="PVS-Studio_report.html" : HTML file to be sent to the addressees.

You can also inform the developers by using the BlameNotifier utility, which comes with the PVS-Studio package. It works as follows: when the analysis finishes, the analyzer generates a ".plog" file, which is then passed to BlameNotifier with some additional parameters. The utility finds the files with potential errors and forms an individual HTML report for each "guilty" developer. Another option is to send a complete log file with all the warnings sorted by the names of the developers responsible for the code that triggered them.

The BlameNotifier utility can be found in the PVS-Studio installation directory ("C:\Program Files (x86)\PVS-Studio" by default). Use the "--help" option to display basic information about the utility:

BlameNotifier.exe --help

Let's take a closer look at the utility's parameters:

• --VCS (or -v), obligatory parameter: the type of the version control system that the utility will be dealing with. Supported systems: Git, Svn, Mercurial.
• --recipientsList (or -r), obligatory parameter: the path to the text file with the mailing list. File description format:
# Recipients of complete log file
...
# Recipients of individually assigned warnings
...
username_N email_N

Comments can be added using the "#" character. For recipients of the complete report, you need to add the "*" character before or after their email addresses. The complete log file will include all the warnings sorted by developer.

• --server (or -x), obligatory parameter: SMTP server for mail sending.
• --sender (or -s), obligatory parameter: sender's email address.
• --port (or -p): mail delivery port (25 by default).
• --maxTasks (or -m): the maximum number of concurrently running blame-processes. By default or when set to a negative number, BlameNotifier will be using 2 * N processes (where N is the number of processor cores).
• --progress (or -g): turn logging on/off. Off by default.

BlameNotifier can also use the parameters of PlogConverter, namely (see the descriptions in the corresponding section above):

• --analyzer (or -a);
• --excludedCodes (or -e);
• --srcRoot (or -t);
• --settings (or -c).

This feature allows you to filter the analysis results before sending them.

For example (written in one line):

BlameNotifier.exe "Drive:\Path\To\Plog" --VCS=Git
--recipientsList="Drive:\Path\To\recipientsList.txt"
--server="smtp.gmail.com"
--srcRoot="..." --maxTasks=40

## Summary

Despite the built-in log-viewing features of PVS-Studio, there are other ways to view the analysis log. You can convert the XML file with the analyzer warnings into one of the formats that can be conveniently opened in other applications (html, txt, csv) by using PlogConverter utility. A converted report can be automatically sent on a daily basis to the persons involved in the development to inform them about the analyzer warnings (SendEmail utility). In addition, BlameNotifier utility can be used to automate the process of finding the developers responsible for writing code that triggered certain warnings. BlameNotifier will send html messages to these developers and also prepare a complete report for "special" persons with the warnings sorted by the "guilty" developers.

## How to tell the analyzer that a function can or cannot return nullptr

There are many system functions, such as malloc, realloc, and calloc, that return a null pointer in certain conditions. They return NULL when they fail to allocate a buffer of the specified size.

Sometimes you may want to change the analyzer's behavior and make it think, for example, that malloc cannot return NULL. This can be useful when working with system libraries where 'out of memory' errors are handled in a special way.

An opposite scenario is also possible. You may want to help the analyzer by telling it that a certain system or user-made function can return a null pointer.

To help you with that, we added a mechanism that allows you to use special comments to tell the analyzer that a certain function can or cannot return NULL.

• V_RET_NULL - the function can return a null pointer
• V_RET_NOT_NULL - the function cannot return a null pointer

Comment format:

//V_RET_[NOT]_NULL, namespace:Space, class:Memory, function:my_malloc
• function option - specifies the name of the function that can(not) return NULL.
• class option - class name; optional.
• namespace option - namespace name; optional.

The controlling comment can be written next to the function declaration.

However, you cannot do this for functions such as malloc, since modifying system header files is a bad idea.

A possible way out is to add the comment to one of the global headers included into each of the translation units. If you work in Visual Studio, the file stdafx.h would be a good choice.

Another solution is to use the diagnostic configuration file pvsconfig. See "Suppression of false alarms" (section "Mass suppression of false positives through diagnostic configuration files (pvsconfig)").

This is illustrated by the two examples below.

The function does not return NULL:

//V_RET_NOT_NULL, function:malloc

Now the analyzer thinks that the malloc function cannot return NULL and, therefore, will not issue the V522 warning for the following code:

int *p = (int *)malloc(sizeof(int) * 100);
p[0] = 12345; // ok

The function returns a pointer that could be null:

//V_RET_NULL, namespace:Memory, function:QuickAlloc

With this comment, the following code will be triggering the warning:

char *p = Memory::QuickAlloc(strlen(src) + 1);
strcpy(p, src); // Warning!
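
With the annotation in place, the way to silence the warning is an explicit null check. Below is a minimal runnable sketch of that pattern; Memory::QuickAlloc is implemented here as a hypothetical malloc wrapper purely for illustration:

```cpp
#include <cstdlib>
#include <cstring>

// Hypothetical allocator from the example above; the V_RET_NULL comment
// tells the analyzer its result may be nullptr.
namespace Memory {
inline char *QuickAlloc(std::size_t n) {
    return static_cast<char *>(std::malloc(n));
}
}

// The explicit check below is what removes the warning on strcpy().
inline char *SafeCopy(const char *src) {
    char *p = Memory::QuickAlloc(std::strlen(src) + 1);
    if (p == nullptr)   // handle allocation failure
        return nullptr;
    std::strcpy(p, src);
    return p;
}
```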

In projects with special quality requirements, you might need to find all functions that return a pointer. To do this, you can use the following comment:

//V_RET_NULL_ALL

We don't recommend using this mode, as it results in a large number of warnings. But if your project really needs it, this special comment makes the analyzer expect a null check for the pointer returned by every such function.
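
In that mode, the pattern the analyzer expects is a check of every returned pointer before the first use. A minimal sketch, with make_buffer and first_or as hypothetical functions:

```cpp
#include <cstdlib>

// Hypothetical pointer-returning function; with //V_RET_NULL_ALL the
// analyzer assumes any such function may return nullptr.
inline int *make_buffer(std::size_t n) {
    return static_cast<int *>(std::calloc(n, sizeof(int)));
}

// The check the analyzer expects before the first dereference.
inline int first_or(const int *p, int fallback) {
    if (p == nullptr)
        return fallback;
    return p[0];   // safe: p was checked
}
```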

## How to Set Your Level for Specific Diagnostics

Analyzer warnings have three levels of certainty: High, Medium, and Low. Based on the constructs used in the code, the analyzer estimates the certainty of each warning and assigns it the appropriate level in the report. Some warnings may be issued on several levels simultaneously.

In some projects, searching for specific types of errors can be very important regardless of the warnings' certainty level. Sometimes the reverse is true: the messages of a certain diagnostic are of little use, but the programmer doesn't want to disable them entirely. In such cases, you can manually set a diagnostic's level to High/Medium/Low. To do this, use special comments that can be added to the code or to the diagnostics configuration file. Examples of comments:

//V_LEVEL_1::501,502
//V_LEVEL_2::522,783,579
//V_LEVEL_3::773

When it finds such comments, the analyzer issues the corresponding warnings at the specified level.

## Changing an output message's text

You can specify that one or more entities should be replaced with some other one(s) in certain messages. This enables the analyzer to generate warnings taking into account the project's specifics. The control comment has the following format:

//+Vnnn:RENAME:{Aaaa:Bbbb},{<foo.h>:<myfoo.h>},{100:200},......

In all the Vnnn messages, the following replacements will be made:

• Aaaa will be replaced with Bbbb.
• <foo.h> will be replaced with <myfoo.h>.
• The number 100 will be replaced with 200.

The working principle of this mechanism is best explained with an example.

When coming across the number 3.1415 in code, the V624 diagnostic suggests replacing it with M_PI from the <math.h> header. But suppose our project uses a special math library, and it is from this library that mathematical constants should be taken. In that case, the programmer may add the following comment to a global file (for example, StdAfx.h):

//+V624:RENAME:{M_PI:OUR_PI},{<math.h>:"math/MMath.h"}

After that, the analyzer will warn that the OUR_PI constant from the header file "math/MMath.h" should be used.
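
For illustration, here is a sketch of what the project side might look like; OUR_PI and its value are assumptions, standing in for the constant the real "math/MMath.h" would provide:

```cpp
// Hypothetical project constant replacing M_PI from <math.h>; in the real
// project it would come from "math/MMath.h".
const double OUR_PI = 3.14159265358979323846;

// With the RENAME comment active, V624 now tells the programmer to write
// this instead of the magic number 3.1415.
inline double circle_area(double r) {
    return OUR_PI * r * r;
}
```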

You can also extend messages generated by PVS-Studio. The control comment has the following format:

//+Vnnn:ADD:{ Message}

The string specified by the programmer will be added to the end of every message with the number Vnnn.

Take diagnostic V2003, for example. The message associated with it is: "V2003 - Explicit conversion from 'float/double' type to signed integer type.". You can reflect some specifics of the project in the message and extend it by adding the following comment:

//+V2003:ADD:{ Consider using boost::numeric_cast instead.}

From now on, the analyzer will be generating a modified message: "V2003 - Explicit conversion from 'float/double' type to signed integer type. Consider using boost::numeric_cast instead.".
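
For reference, this is the kind of conversion the extended message would accompany; to_int is a hypothetical helper added here for illustration:

```cpp
// Explicit conversion from a floating-point type to a signed integer
// truncates toward zero; this is exactly the construct V2003 points at.
inline int to_int(double d) {
    return (int)d;   // V2003, now with the appended recommendation
}
```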

## Configuration of the assert() macro handling

Regardless of the project configuration (Debug, Release, ...), the analyzer checks code containing the assert() macro in the same way; in particular, it doesn't take into account that the execution of the code is interrupted when the condition is false.

To set another analyzer behavior, use the following comment in code:

//V_ASSERT_CONTRACT

Note that in such a mode the analysis results may differ depending on the way the macro is expanded in the checked project configuration.

Let's look at this example to make it clear:

MyClass *p = dynamic_cast<MyClass *>(x);
assert(p);
p->foo();

The dynamic_cast operator can return nullptr. Thus, in the standard mode, the analyzer will warn that a null pointer dereference might occur when the foo() function is called. But with the comment in place, the warning disappears.
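
The configuration dependence comes from how assert() expands: with NDEBUG defined (a typical Release build), it expands to nothing, so execution is not interrupted on a false condition. A minimal sketch, with deref_checked as a hypothetical function:

```cpp
#include <cassert>

// In Debug, assert() interrupts execution when p is null, so in the
// //V_ASSERT_CONTRACT mode the analyzer treats p as non-null afterwards.
// In Release (NDEBUG defined), assert() expands to nothing, which is why
// an explicit guard is kept as well.
inline int deref_checked(const int *p) {
    assert(p != nullptr);          // contract in Debug; no-op in Release
    return p != nullptr ? *p : -1; // guard that holds in any configuration
}
```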

## How to specify an alias for a system function

Some projects use custom implementations of various system functions, such as memcpy, malloc, and so on. In this case, the analyzer doesn't understand that such functions behave in the same way as their standard analogues. Using the V_FUNC_ALIAS annotation, you can specify which custom functions correspond to which system ones.

Comment format:

//V_FUNC_ALIAS, implementation:sysf, function:f, namespace:ns, class:c
• implementation option - name of the system function for which an alias will be specified.
• function option - alias name. The function specified in this option must have exactly the same signature as the one specified in the implementation option.
• class option - class name; optional.
• namespace option - namespace name; optional.

Consider this example:

//V_FUNC_ALIAS, implementation:memcpy, function:MyMemCpy

Now, the analyzer will process calls to the MyMemCpy function in the same way it processes calls to memcpy. For example, this code snippet will trigger the V512 warning:

int buf[] = { 1, 2, 3, 4 };
int out[2];
MyMemCpy(out, buf, 4 * sizeof(int)); // Warning!
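
A runnable sketch of such a wrapper follows; MyMemCpy is implemented here as a hypothetical pass-through with memcpy's exact signature. Note that the error in the snippet above is real: out holds only two ints while four are copied.

```cpp
#include <cstring>

// Hypothetical custom implementation with exactly memcpy's signature;
// the V_FUNC_ALIAS comment makes the analyzer treat its calls as memcpy
// calls, so undersized destination buffers trigger V512.
inline void *MyMemCpy(void *dst, const void *src, std::size_t n) {
    return std::memcpy(dst, src, n);
}
```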

# Analysis of Unreal Engine projects

This section describes the analysis of Unreal Engine projects on the Windows operating system. Instructions for checking projects under Linux/macOS are available by this link.

## Introduction

A specialized build system called Unreal Build System is used for building Unreal Engine projects. It is integrated on top of the build system used by the Visual Studio environment (MSBuild) through autogenerated makefile MSBuild projects. This is a special type of Visual C++ (vcxproj) project in which the build is delegated to a command calling a third-party utility, for example (but not necessarily) Make. Makefile projects make it possible to work with Unreal Engine source code from the Visual Studio environment, taking advantage of such features as code autocompletion, syntax highlighting, and symbol navigation.

Because makefile MSBuild projects do not themselves contain the full information necessary to perform compilation, and therefore preprocessing, of C/C++ source files, PVS-Studio does not support analyzing such projects from within Visual Studio or with the PVS-Studio_Cmd.exe command-line tool. To check such projects with PVS-Studio, there are two options: monitoring compiler invocations (Compiler Monitoring) and direct integration of the PVS-Studio.exe C/C++ analyzer into the Unreal Build Tool utility. Let's consider these options in more detail.

## Analysis using compiler monitoring

Unreal Build System uses the Visual C++ compiler, cl.exe, for building under Windows. This compiler is supported by the PVS-Studio compiler monitoring system on Windows, which can be used either from the C and C++ Compiler Monitoring UI (Standalone.exe) or from the CLMonitor.exe command-line tool.

Compiler monitoring can be launched manually from the Compiler Monitoring UI, or it can be assigned to build start/finish events in Visual Studio. The result of the analysis by the monitoring system is a plog XML report file, which you can open from the PVS-Studio extension for Visual Studio, or convert to one of the standard formats (txt, html, csv) using the PlogConverter tool.

A more detailed description of the compiler monitoring system is available in this section of the documentation. We recommend this way of running the analysis when checking a project for the first time and getting acquainted with the analyzer, as it is the easiest one to set up.

## Analysis using Unreal Build Tool integration

 A general description of how to integrate PVS-Studio C/C++ analyzer into any build system directly is available here.

In the case of Unreal Build System, the developers from Epic Games provide the opportunity to use PVS-Studio through direct integration with the Unreal Build Tool build utility, starting from version 4.17.

Unreal Build Tool allows running the analysis by PVS-Studio after project's compilation, by adding the following flag to the build command line:

-StaticAnalyzer=PVSStudio

For example, a full command line for running Unreal Build Tool might look as follows:

UnrealBuildTool.exe UE4Client Win32 Debug -WaitMutex -FromMsBuild
-StaticAnalyzer=PVSStudio -DEPLOY

To enable the analysis when running a build from Visual Studio, open the project properties by going to Properties -> Configuration Properties -> NMake and add the -StaticAnalyzer=PVSStudio flag to the options for the project's build and rebuild commands (Build Command Line/Rebuild All Command Line).

The path to the file with the results of the analysis will be displayed in the Output (Build) window of Visual Studio (or stdout if you ran the Unreal Build Tool manually from the command line). This analysis results file has an "unparsed" format: you can open it from the IDE with the 'PVS-Studio|Open/Save|Open Analysis Report' command by selecting the 'unparsed output' file type, or convert it using the PlogConverter utility, as described above for the XML log.

Before starting the analysis, you should enter your analyzer license: open Visual Studio and enter your license information in the 'PVS-Studio|Options...|Registration' window (please note that before Unreal Engine version 4.20, UBT was unable to get the license information from the PVS-Studio common settings file; if UBT does not recognize a license entered via the UI, create a separate license file named PVS-Studio.lic and place it in the '%USERPROFILE%\AppData\Roaming\PVS-Studio' directory).

Note 1: the usual Visual Studio trial mode of PVS-Studio does not work with Unreal Build Tool projects. To try the analyzer, you can use Compiler Monitoring from the Compiler Monitoring UI (the analysis results can be viewed in Visual Studio as well, by saving the analysis log after the check in the Compiler Monitoring UI and opening it in VS). Email us at support@viva64.com if you want a trial license to test the integration with Unreal Build Tool.

Note 2: the integration of PVS-Studio with Unreal Build Tool does not currently support all of the analyzer's settings available from Visual Studio (PVS-Studio|Options...). Currently supported are the specification of excluded directories via 'PVS-Studio|Options...|Don't Check Files' and the filtering of loaded analysis results through 'Detectable Errors'.

In addition, in the UBT integration, the analyzer runs with only the General Analysis diagnostics group enabled. If you are interested in running the analyzer with other diagnostic groups available in PVS-Studio, please contact us at support@viva64.com.

# Settings: General

When developing PVS-Studio, we assigned primary importance to simplicity of use, drawing on our experience with traditional lint-like code analyzers. That is why one of the main advantages of PVS-Studio over other code analyzers is that you can start using it immediately. Moreover, PVS-Studio has been designed so that a developer does not have to configure it at all: at the first launch you already have a powerful code analyzer that needs no setup.

But you should understand that the code analyzer is a powerful tool which requires competent use, and it is this competent use (through the settings system) that allows you to achieve significant results. Using a code analyzer implies that there is a tool (a program) that performs the routine work of searching for potentially unsafe constructs in code, and a master (a developer) who makes decisions based on what they know about the project being checked. For example, the developer can inform the analyzer that:

• some error types are not important for analysis and do not need to be shown (with the help of settings of Settings: Detectable Errors);
• the project does not contain incorrect type conversions (by disabling the corresponding diagnostic messages, Settings: Detectable Errors);

Correct setting of these parameters can greatly reduce the number of diagnostic messages produced by the code analyzer. It means that if the developer helps the analyzer and gives it some additional information by using the settings, the analyzer will in its turn reduce the number of places in the code which the developer must pay attention to when examining the analysis results.

PVS-Studio settings can be accessed through the PVS-Studio -> Options command in the IDE main menu. Selecting this command opens the PVS-Studio options dialog.

Each settings page is extensively described in PVS-Studio documentation.

# Settings: Common Analyzer Settings

The tab of the analyzer's general settings displays the settings which do not depend on the particular analysis unit being used.

## Check For New Versions

The analyzer can automatically check for updates on the www.viva64.com site using our update module.

If the CheckForNewVersions option is set to True, a special text file is downloaded from the www.viva64.com site when you launch code checking (the Check Current File, Check Current Project, and Check Solution commands in the PVS-Studio menu). This file contains the number of the latest PVS-Studio version available on the site. If the version on the site is newer than the version installed on the user's computer, the user will be asked for permission to update the program. If the user agrees, a separate application, PVS-Studio-Updater, will automatically download and install the new PVS-Studio distribution. If the CheckForNewVersions option is set to False, no update check is performed.

## Thread Count

Analysis of files is performed faster on multi-core computers. For example, on a 4-core computer the analyzer can use all four cores for its operation. But if for some reason you need to limit the number of cores used, you can do so by selecting the required number. The number of processor cores is used as the default value.

When running analysis on a single system, we do not advise setting this option to a value greater than the number of available processor cores, as that could degrade overall analyzer performance. If you wish to run more analysis tasks concurrently, you can use a distributed build system, for example IncrediBuild. A more detailed description of this mode of using PVS-Studio is available in the relevant section of the documentation.

## Preprocessor (only for Visual Studio)

An external preprocessor is used to preprocess source files before PVS-Studio analysis. When working from within the Visual Studio IDE, the native Microsoft Visual C++ preprocessor, cl.exe, is used by default. Support for the independent Clang preprocessor was added in PVS-Studio 4.50, as it lacks some of the Microsoft preprocessor's shortcomings (although it has issues of its own).

In some of the older versions of Visual Studio (namely, 2010 and 2012), the cl.exe preprocessor is significantly slower than Clang. Using the Clang preprocessor with these IDEs provides a performance increase of 1.5-1.7 times in most cases.

However, one aspect should be considered. The preprocessor to be used can be specified in the 'PVS-Studio|Options|Common Analyzer Settings|Preprocessor' field. The available options are VisualCPP, Clang, and VisualCPPAfterClang. The first two are self-evident. The third one indicates that Clang will be used first, and if preprocessing errors are encountered, the same file will be preprocessed by the Visual C++ preprocessor instead.

## Remove Intermediate Files

The analyzer creates many temporary command files during its operation: to launch the analysis unit itself, to perform preprocessing, and to manage the whole analysis process. Such files are created for each project file being analyzed. Usually they are of no interest to the user and are removed after the analysis. But in some cases it can be useful to look through these files, so you can tell the analyzer not to remove them. In that case you can launch the analyzer outside the IDE from the command line.

# Settings: Detectable Errors

This settings page allows you to manage the displaying of various types of PVS-Studio messages in the analysis results list.

All the diagnostic messages output by the analyzer are split into several groups. The display (show/hide) of each message type can be handled individually, while the following actions are available for a whole message group:

• Disabled – to completely disable an entire message group. Errors from this group will not be displayed in the analysis results list (PVS-Studio output window). Enabling the group again will require re-running the analysis;
• Show All – to show all the messages of a group in the analysis results list;
• Hide All – to hide all the messages of a group in the analysis results list.

It may sometimes be useful to hide errors with certain codes in the list. For instance, if you know for sure that errors with the codes V505 and V506 are irrelevant for your project, you can hide them by unticking the corresponding checkboxes.

Please note that you don't need to relaunch the analysis when using the "Show All" and "Hide All" options! The analyzer always generates all the message types found in the project; whether they are shown or hidden in the list is defined by the settings on this page. When you enable or disable the display of certain errors, they are shown or hidden in the analysis results list right away, without re-analyzing the whole project.

Complete disabling of message groups can be used to improve the analyzer's performance and produce smaller analysis reports (plog files).

# Settings: Don't Check Files

On the "Don't Check Files" tab, you may specify file masks to exclude some files or folders from analysis. The analyzer will not check files that match the masks.

Using this technique, you may, for instance, exclude autogenerated files from the analysis. Besides, you may define the files to be excluded from analysis by the name of the folder they are located in.

A mask is defined with wildcards. The '*' wildcard (any number of any characters) can be used; the '?' symbol is not supported.

Character case is irrelevant. The '*' wildcard can only appear at the beginning or at the end of a mask, so masks of the 'a*b' kind are not supported. Once exclusion masks are specified, messages from files matching them disappear from the PVS-Studio Output window, and the next time the analysis is started those files are excluded from it. Thereby the total analysis time of the entire project can be substantially decreased by excluding files and directories with these masks.

*ex.c — all files with the names ending with "ex" characters and "c" extension will be excluded.

*.cpp — all files possessing the "cpp" extension will be excluded

stdafx.cpp — every file possessing such name will be excluded from analysis regardless of its location within the filesystem

c:\Libs\ — all files located in this directory and its subdirectories will be excluded

\Libs\ or *\Libs\* — all files located in the directories with path containing the Libs subdirectory will be excluded.

Libs or *Libs* — files whose paths contain a subdirectory with 'Libs' in its name will be excluded. Files whose names contain the 'libs' characters will be excluded as well, for example 'c:\project\mylibs.cpp'. To avoid confusion, we advise you to always specify folders with slash separators.

c:\proj\includes.cpp — a single file located in the c:\proj\ folder with the specified name will be excluded from the analysis.

# Settings: Keyword Message Filtering

In the keyword filtering tab you can filter analyzer messages by the text they contain.

When necessary, you may hide from the analyzer's report the diagnosed errors containing particular words or phrases. For example, if the report contains errors mentioning the names of the printf and scanf functions, and you believe there can be no errors related to them, just add these two words using the message suppression editor.

Please note! When changing the list of hidden messages, you don't need to restart the analysis of the project. The analyzer always generates all the diagnostic messages, and the display of the various messages is managed through this settings tab. When you modify the message filters, the changes immediately appear in the report, without running the analysis of the whole project again.

# Settings: Registration

Open the PVS-Studio settings page (PVS-Studio Menu -> Options...).

In the registration tab the licensing information is entered.

After purchasing the analyzer, you receive your registration information: a name and a serial number. Enter these data in this tab. The LicenseType field will indicate the licensing mode.

Information on the licensing conditions can be found on the ordering page of our site.

# Settings: Specific Analyzer Settings

## Analysis Timeout

This setting allows you to set a time limit, upon reaching which the analysis of an individual file is aborted with the "V006. File cannot be processed. Analysis aborted by timeout" error, or to disable analysis termination by timeout entirely. We strongly advise you to consult the description of this error before modifying the setting. The timeout is often caused by a shortage of RAM; in that case it is reasonable not to increase the time but to decrease the number of parallel threads being used. This can lead to a substantial increase in performance when the processor has numerous cores but RAM capacity is insufficient.

## Incremental Analysis Timeout

This setting allows you to set a time limit, after which incremental analysis will be aborted. All the warnings detected by the moment the analysis stops will be output in the PVS-Studio window. Additionally, a warning will be issued that the analyzer didn't have time to process all the modified files, along with the total and analyzed numbers of files.

This option is relevant only for working in Visual Studio IDE.

## No Noise

When working on a large code base, the analyzer inevitably generates a large number of warnings. Besides, it is often impossible to fix all of them right away. Therefore, to concentrate on the most important warnings first, the analysis can be made less "noisy" with this option. It completely disables the generation of Low Certainty (level 3) warnings. After restarting the analysis, the messages of this level will disappear from the analyzer's output.

When circumstances allow, and all of the more important messages have been fixed, the 'No Noise' mode can be switched off, and all the messages that disappeared before will become available again.

## Perform Custom Build Step

Setting this option to 'true' enables the execution of actions specified in the 'Custom Build Step' section of the Visual Studio project file (vcproj/vcxproj). It should be noted that the analyzer requires fully compilable code for its correct operation. So, if, for example, the 'Custom Build Step' section contains actions used to auto-generate some header files, these actions should be executed (by enabling this setting) before starting the project's analysis. However, if this step performs actions concerning, for instance, the linking process, such actions will be irrelevant to code analysis. The 'Custom Build Step' actions are specified at the level of the project and will be executed by PVS-Studio during the initial scanning of the project file tree. If this setting is enabled and the step's execution results in a non-zero exit code, the analysis of the corresponding project file will not be started.

## Save File After False Alarm Mark

Marking a message as a False Alarm requires modifying source code files. By default, the analyzer saves each source code file after every such mark. However, if such frequent saving of files is undesirable (for example, if the files are stored on a different machine in the LAN), it can be disabled using this setting.

Exercise caution when modifying this setting: not saving the files after marking them as false alarms can lead to a loss of work if the IDE is closed.

## Display False Alarms

Allows enabling the display of messages marked as 'False Alarms' in the PVS-Studio output window. This option will take effect immediately, without the need to re-run the analysis. When this option is set to 'true', an 'FA' indicator containing the number of false alarms on the output window panel will become visible.

## False Alarm Comment

Allows specifying a text fragment (a comment, for example) which will be automatically inserted into the source file, on the line preceding the one being marked, when the False Alarm marking mechanism is used.

System environment variables, specified in the %name% format, will be automatically expanded before the insertion. A special %PVSMESSAGE% variable can be utilized to insert the text of the message being marked into the source.
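For illustration, suppose the comment template is set to 'Suppressed by %USERNAME%: %PVSMESSAGE%' (the template, the user name, and the warning text below are all hypothetical examples, not taken from this documentation). Marking a V501 warning could then produce a fragment looking roughly like this; the trailing '//-V501' mark is the regular false-alarm marker:

```cpp
#include <cassert>

bool abc = true;

int Check()
{
    // Suppressed by admin: V501 There are identical sub-expressions to the
    // left and to the right of the '==' operator.
    if (abc == abc) //-V501
        return 1;
    return 0;
}
```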

## Integrated Help Language

The setting allows you to select the language to be used for the integrated help on the diagnostic messages (opened with a click on a message's error code in the PVS-Studio output window) and for the online documentation (the PVS-Studio -> Help -> Open PVS-Studio Documentation (html, online) menu command), which are also available on our site.

This setting will not change the language of IDE plug-in's interface and messages produced by the analyzer.

## Show Tray Icon

This setting allows you to control notifications about PVS-Studio analyzer operations. If the PVS-Studio output window contains error messages after the analysis is finished (these messages can potentially be concealed by various filters: as false alarms, by the names of the files being verified, and so on; such messages will not be shown in the PVS-Studio window), the analyzer will inform you about their presence with a popup message in the Windows notification area (system tray). A single mouse click on this message or on the PVS-Studio tray icon will open the output window containing the messages found by the analyzer.

## Incremental Results Display Depth

This setting defines the message display level in the PVS-Studio Output window for the results of incremental analysis. Setting a display level depth here (Level 1 only; Levels 1 and 2; or Levels 1, 2 and 3, correspondingly) will automatically activate these display levels on each incremental analysis run. The 'Preserve_Current_Levels' value, on the other hand, preserves the existing display settings.

This setting can be handy for the periodic combined use of the incremental and regular analysis modes, as accidentally disabling, for example, level 1 diagnostics during the review of a large analysis log will afterwards also conceal a portion of the incremental analysis log. As the incremental analysis operates in the background, such a situation could potentially lead to missing warnings on existing issues within the project source code.

## Trace Mode

The setting allows you to select the tracing mode (logging of the program's execution path) for the PVS-Studio IDE extension packages (the plug-ins for Visual Studio IDEs). There are several verbosity levels of tracing (the Verbose mode is the most detailed one). When tracing is enabled, PVS-Studio will automatically create a log file with the 'log' extension located in the AppData\PVS-Studio directory (for example, c:\Users\admin\AppData\Roaming\PVS-Studio\PVSTrace2168_000.log). Each of the running IDE processes will use a separate file to store its logging results.

## Automatic Settings Import

This option allows you to enable the automatic import of settings (xml files) from the '%AppData%\PVS-Studio\SettingsImports\' directory. The settings will be imported each time the stored settings are loaded, i.e. when Visual Studio or the PVS-Studio command line is started, when the settings are reset, etc. When importing settings, flag-style options (true\false) and all options containing a single value (a string, for example) will be overwritten by the settings from SettingsImports. The options containing several values (for example, the excluded directories) will be merged.

If the SettingsImports folder contains several xml files, these files will be applied to the current settings in a sequential manner, according to their names.

## Use Solution Folder As Initial

By default, PVS-Studio offers to save the report file (.plog) in the same folder as the current solution file.

Modifying this setting restores the usual behavior of Windows file dialogs, i.e. the dialog will remember the last folder that was opened in it and will use this folder as the initial one.

## Save Modified Log

This setting specifies whether the 'Save log' confirmation prompt should be displayed before starting the analysis or loading another log file when the output window already contains new, unsaved, or modified analysis results. Setting the option to 'Yes' enables automatic saving of analysis results to the current log file (after it has been selected once in the 'Save File' dialog). Setting the option to 'No' forces the IDE plug-in to discard any of the analysis results. The default value, 'Ask_Always', displays the prompt to save the report each time, letting the user make the choice.

## Source Tree Root

By default, PVS-Studio produces diagnostic messages containing absolute paths to the files being verified. This setting can be utilized to specify the 'root' section of the path, which will be replaced with a special marker whenever the path to a file within the analyzer's diagnostic message also starts with this 'root'. For example, the absolute path to the file C:\Projects\Project1\main.cpp will be replaced with the relative path |?|Project1\main.cpp if 'C:\Projects\' was specified as the 'root'.

When handling a PVS-Studio log containing messages with paths in such a relative format, the IDE plug-in will automatically replace |?| with this setting's value. Thereby, utilizing this setting allows you to handle a PVS-Studio report on any local machine with access to the verified sources, regardless of the sources' location in the file system.
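As a rough sketch (the function name below is ours, not part of the analyzer), the substitution described above amounts to a simple prefix replacement:

```cpp
#include <string>

// If the path starts with the configured 'root', replace the root with the
// special '|?|' marker; otherwise leave the path untouched.
std::string ToSourceTreePath(const std::string &path, const std::string &root)
{
    if (path.compare(0, root.size(), root) == 0)
        return "|?|" + path.substr(root.size());
    return path;
}
```

With the root set to 'C:\Projects\', the path 'C:\Projects\Project1\main.cpp' becomes '|?|Project1\main.cpp'; the IDE plug-in performs the reverse substitution when the log is opened.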

A detailed description of the mode is available here.

## Use Solution Dir As Source Tree Root

This setting enables or disables using the path to the folder containing the solution file (*.sln) as the 'SourceTreeRoot' parameter.

## Save Solution Statistics

Controls whether the analysis run statistics will be saved to '%AppData%\PVS-Studio\Statistics' folder. The statistics can be reviewed in the 'PVS-Studio|Analysis Statistics...' dialog.

# V001. A code fragment from 'file' cannot be analyzed.

The analyzer sometimes fails to completely parse a source code file. There may be three reasons for that:

1) An error in code

There is a template class or template function with an error. If this function is not instantiated, the compiler fails to detect some errors in it. In other words, such an error does not hamper compilation. PVS-Studio tries to find potential errors even in classes and functions that are not used anywhere. If the analyzer cannot parse some code, it will generate the V001 warning. Consider a code sample:

template <class T>
class A
{
public:
  void Foo()
  {
    int x   // the ';' is forgotten here
  }
};

Visual C++ will compile this code if the A class is not used anywhere. But it contains an error, which hampers PVS-Studio's work.
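For comparison, once the missing semicolon is restored, the fragment parses without problems even when the class is never instantiated:

```cpp
template <class T>
class A
{
public:
    void Foo()
    {
        int x = 0; // the ';' is now in place
        (void)x;
    }
};
```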

2) An error in the Visual C++ preprocessor

The analyzer uses the Visual C++ preprocessor in its work. From time to time this preprocessor makes errors when generating the preprocessed "*.i" files. As a result, the analyzer receives incorrect data. Here is a sample:

hWnd = CreateWindow (
wndclass.lpszClassName,    // window class name
__T("NcFTPBatch"),         // window caption
WS_OVERLAPPED | WS_CAPTION | WS_SYSMENU | WS_MINIMIZEBOX,
// window style
100,            // initial x position
100,            // initial y position
450,            // initial x size
100,            // initial y size
NULL,           // parent window handle
hInstance,      // program instance handle
NULL);          // creation parameters
if (hWnd == NULL) {
...

Visual C++'s preprocessor turned this code fragment into:

hWnd = // window class name// window caption// window style//
initial x position// initial y position// initial x size//
initial y size// parent window handle// window menu handle//
program instance handleCreateWindowExA(0L,
wndclass.lpszClassName, "NcFTPBatch", 0x00000000L | 0x00C00000L |
0x00080000L | 0x00020000L, 100, 100,450, 100, ((void *)0),
((void *)0), hInstance, ((void *)0)); // creation parameters
if (hWnd == NULL) {
...

It turns out that we have the following code:

hWnd = // a long comment
if (hWnd == NULL) {
...

This code is incorrect, and PVS-Studio will inform you about it. Of course, this is a defect of PVS-Studio, and we will eliminate it in due time.

It is necessary to note that Visual C++ successfully compiles this code because the algorithms it uses for compilation purposes and generation of preprocessed "*.i" files are different.

3) Defects inside PVS-Studio

On rare occasions PVS-Studio fails to parse complex template code.

Whatever the reason for the V001 warning, it is not crucial. Usually an incomplete parse of a file is not very significant from the viewpoint of analysis: PVS-Studio simply skips the function/class with the error and continues analyzing the file, so only a small code fragment is left unanalyzed.

# V002. Some diagnostic messages may contain incorrect line number.

The analyzer can sometimes issue the message "Some diagnostic messages may contain incorrect line number". This occurs when it encounters multiline #pragma directives, on all supported versions of Microsoft Visual Studio.

Any code analyzer works only with preprocessed files, i.e. files in which all macros (#define) are expanded and all included files (#include) are substituted. A preprocessed file also carries information about the substituted files and their positions, which means it preserves the line numbers of the original code.

Preprocessing is carried out in any case; for the user this procedure is quite transparent. Sometimes the preprocessor is a part of the code analyzer, and sometimes (as in the case of PVS-Studio) an external preprocessor is used. PVS-Studio uses the Microsoft Visual C++ or Clang preprocessor: the analyzer starts the command-line compiler (cl.exe/clang.exe) for each C/C++ file being processed and generates a preprocessed file with the ".i" extension.

Here is one situation where the message "Some diagnostic messages may contain incorrect line number" is issued and a failure in positioning diagnostic messages occurs. It happens because of multiline #pragma directives of a special kind. Here is an example of correct code:

#pragma warning(push)
void test()
{
  int a;
  if (a == 1) // PVS-Studio will inform about the error here
    return;
}

If #pragma directive is written in two lines, PVS-Studio will point to an error in an incorrect fragment (there will be shift by one line):

#pragma \
warning(push)
void test()
{
  int a;      // PVS-Studio will show the error here,
  if (a == 1) // actually, however, the error should be here.
    return;
}

However, in another case there will be no error caused by the multiline #pragma directive:

#pragma warning \
(push)
void test()
{
  int a;
  if (a == 1) // PVS-Studio will inform about the error in this line
    return;
}

Our recommendation here is either not to use the multiline #pragma directives at all, or to use them in such a way that they can be correctly processed.

The code analyzer tries to detect a failure in line numbering in the processed file. This mechanism is heuristic and cannot guarantee the correct positioning of diagnostic messages in the program code. However, if it is possible to find out that a particular file contains multiline pragmas and a positioning error exists, the message "Some diagnostic messages may contain incorrect line number" is issued.

This mechanism works in the following way.

The analyzer opens the source C/C++ file and searches for the very last token. It selects only tokens that are at least three characters long in order to ignore closing parentheses, etc. E.g., for the following code the "return" operator will be considered the last token:

01 #include "stdafx.h"
02
03 int foo(int a)
04 {
05   assert(a >= 0 &&
06          a <= 1000);
07   int b = a + 1;
08   return b;
09 }

Having found the last token, the analyzer determines the number of the line that contains it; in this case it is line 8. Further on, the analyzer searches for the last token in the preprocessed file. If the last tokens do not coincide, then most likely a macro at the end of the file was not expanded; the analyzer is unable to tell whether the lines are arranged correctly and ignores the situation. However, such situations occur very rarely, and the last tokens almost always coincide in the source and preprocessed files. If they do, the analyzer determines the line number at which the token is situated in the preprocessed file.

Thus, we have the line numbers in which the last token is located in the source file and in the preprocessed file. If these line numbers do not coincide, then there has been a failure in lines numbering. In this case, the analyzer will notify the user about it with the message "Some diagnostic messages may contain incorrect line number".
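The check described above can be sketched as follows. This is a greatly simplified illustration, not the analyzer's actual code; in particular, the real analyzer maps positions in the preprocessed file back through #line directives, which this sketch ignores:

```cpp
#include <cctype>
#include <string>

struct Token { std::string text; size_t line; };

// Find the last identifier-like token of length >= 3 and the 1-based line
// number it appears on; short tokens (closing parentheses, etc.) are ignored.
static Token LastLongToken(const std::string &text)
{
    Token last{ "", 0 };
    std::string cur;
    size_t line = 1, curLine = 1;
    for (char c : text)
    {
        if (c == '\n')
            ++line;
        if (std::isalnum(static_cast<unsigned char>(c)) || c == '_')
        {
            if (cur.empty())
                curLine = line;
            cur += c;
        }
        else
        {
            if (cur.size() >= 3)
                last = { cur, curLine };
            cur.clear();
        }
    }
    if (cur.size() >= 3)
        last = { cur, curLine };
    return last;
}

// True when a line-numbering failure is suspected: the last tokens coincide,
// but they sit on different lines in the source and preprocessed texts.
bool LineNumberingBroken(const std::string &source,
                         const std::string &preprocessed)
{
    Token a = LastLongToken(source);
    Token b = LastLongToken(preprocessed);
    if (a.text != b.text) // likely an unexpanded macro at the end; ignore
        return false;
    return a.line != b.line;
}
```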

Note that if a multiline #pragma directive is situated in the file after all the dangerous code fragments that were found, then all the line numbers for the found errors will be correct. Even though the analyzer issues the message "Some diagnostic messages may contain incorrect line number for file", this will not prevent you from examining the diagnostic messages it produces.

Please note that this failure may lead to incorrect positioning by the code analyzer, although it is not an error of PVS-Studio itself.

# V003. Unrecognized error found...

Message V003 means that a critical error occurred in the analyzer. It is most likely that in this case you will not see any warning messages concerning the file being checked at all.

Although the V003 message is very rare, we would appreciate your help in fixing the issue that caused it. To do this, please send us the preprocessed i-file that caused the error and its corresponding configuration launch files (*.PVS-Studio.cfg and *.PVS-Studio.cmd) to support@viva64.com.

Note. A preprocessed i-file is generated from a source file (for example, file.cpp) when the preprocessor finishes its work. To get this file you should set the option RemoveIntermediateFiles to False on the tab "Common Analyzer Settings" in PVS-Studio settings and restart the analysis of this one file. After that you can find the corresponding i-file in the project folder (for example, file.i and its corresponding file.PVS-Studio.cfg and file.PVS-Studio.cmd).

# V004. Diagnostics from the 64-bit rule set are not entirely accurate without the appropriate 64-bit compiler. Consider utilizing 64-bit compiler if possible.

When detecting issues in 64-bit code, the analyzer should always test the 64-bit configuration of a project. It is in the 64-bit configuration that data types are correctly expanded, branches like "#ifdef WIN64" are selected, and so on. Trying to detect issues of 64-bit code in a 32-bit configuration is incorrect.

But sometimes it may be helpful to test the 32-bit configuration of a project. You can do this when there is no 64-bit configuration yet but you need to estimate the scope of work on porting the code to a 64-bit platform. In this case you can test the project in 32-bit mode. Testing the 32-bit configuration instead of the 64-bit one will show roughly how many diagnostic warnings the analyzer will generate when testing the 64-bit configuration. Our experiments show that by no means all the diagnostic warnings are generated when testing the 32-bit configuration, but about 95% of them coincide with those produced in the 64-bit mode. This allows you to estimate the necessary scope of work.

Pay attention! Even if you correct all the errors detected when testing the 32-bit configuration of a project, you cannot consider the code fully compatible with 64-bit systems. You need to perform the final testing of the project in its 64-bit configuration.

The V004 message is generated only once for each project checked in the 32-bit configuration. The warning refers to the file that is the first to be analyzed when checking the project. This is done to avoid displaying a lot of similar warnings in the report.

# V005. Cannot determine active configuration for project. Please check projects and solution configurations.

This issue with PVS-Studio is caused by a mismatch between the selected project's platform configurations declared in the solution file (Vault.sln) and the platform configurations declared in the project file itself. For example, the solution file may contain a line of this kind for the concerned project:

{F56ECFEC-45F9-4485-8A1B-6269E0D27E49}.Release|x64.ActiveCfg = Release|x64

However, the project file itself may lack the declaration of the Release|x64 configuration. Therefore, when trying to check this particular project, PVS-Studio is unable to locate the 'Release|x64' configuration. In such a case, the IDE is expected to automatically generate the following line in the solution file:

{F56ECFEC-45F9-4485-8A1B-6269E0D27E49}.Release|x64.ActiveCfg = Release|Win32

In an automatically generated solution file, the solution's active platform configuration (Release|x64.ActiveCfg) is set equal to one of the project's existing configurations (i.e., in this particular case, Release|Win32). Such a situation is expected and is handled by PVS-Studio correctly.

# V006. File cannot be processed. Analysis aborted by timeout.

Message V006 is generated when the analyzer cannot process a file within a particular time period and aborts. Such a situation might happen in two cases.

The first reason is an error inside the analyzer that prevents it from parsing some code fragment. This happens rather seldom, yet it is possible. Although message V006 appears rather seldom, we would appreciate your help in eliminating the issue that causes it. If you are working with C/C++ projects, please send the preprocessed i-file where this issue occurs and its corresponding configuration launch files (*.PVS-Studio.cfg and *.PVS-Studio.cmd) to support@viva64.com.

Note. A preprocessed i-file is generated from a source file (for example, file.cpp) when the preprocessor finishes its work. To get this file you should set the option RemoveIntermediateFiles to False on the tab "Common Analyzer Settings" in PVS-Studio settings and restart the analysis of this one file. After that you can find the corresponding i-file in the project folder (for example, file.i and its corresponding file.PVS-Studio.cfg and file.PVS-Studio.cmd).

The second possible reason is the following: although the analyzer could process the file correctly, it does not have enough time to do that because it gets too few system resources due to high processor load. By default, the number of threads spawned for analysis is equal to the number of processor cores. For example, if we have four cores in our machine, the tool will start analysis of four files at once. Each instance of an analyzer's process requires about 1.5 Gbytes of memory. If your computer does not have enough memory, the tool will start using the swap file and analysis will run slowly and fail to fit into the required time period. Besides, you may encounter this problem when you have other "heavy" applications running on your computer simultaneously with the analyzer.

To solve this issue, you may directly restrict the number of cores to be used for analysis in the PVS-Studio settings (ThreadCount option on the "Common Analyzer Settings" tab).

# V007. Deprecated CLR switch was detected. Incorrect diagnostics are possible.

The V007 message appears when projects utilizing the C++/Common Language Infrastructure Microsoft specification and containing one of the deprecated /clr compiler switches are selected for analysis. Although you may continue analyzing such a project, PVS-Studio does not officially support these compiler flags, and some analyzer diagnostics may be incorrect.

# V008. Unable to start the analysis on this file.

PVS-Studio was unable to start the analysis of the designated file. This message indicates that the external C++ preprocessor, started by the analyzer to create a preprocessed source code file, exited with a non-zero error code. The preprocessor's stderr may also contain a detailed description of this error, which can be viewed in the PVS-Studio Output window for this file.

There could be several reasons for the V008 error:

1) The source code is not compilable

If the C++ source code is not compilable for some reason (a missing header file, for example), the preprocessor will exit with a non-zero error code and a "fatal compilation error" type message will be written to stderr. PVS-Studio is unable to initiate the analysis if the C++ file has not been successfully preprocessed. To resolve this error, you should ensure the compilability of the file being analyzed.

2) The preprocessor's executable file has been damaged or locked

Such a situation is possible when the preprocessor's executable file has been damaged or locked by the system's antivirus software. In this case, the PVS-Studio Output window may also contain error messages of this kind: "The system cannot execute the specified program". To resolve it, you should verify the integrity of the preprocessor's executable and lower the security policy level of the system's antivirus software.

3) One of PVS-Studio's auxiliary command files has been locked.

The PVS-Studio analyzer does not launch the C++ preprocessor directly, but through its own pre-generated command files. Under strict system security policies, antivirus software could potentially block the correct initialization of the C++ preprocessor. This can also be resolved by easing the system security policies toward the analyzer.

# V009. To use free version of PVS-Studio, source code files are required to start with a special comment.

You entered a free license key allowing you to use the analyzer in free mode. To be able to run the tool with this key, you need to add special comments to your source files with the following extensions: .c, .cc, .cpp, .cp, .cxx, .c++, .cs. Header files do not need to be modified.

You can insert the comments manually or by using a special open-source utility available at GitHub: how-to-use-pvs-studio-free.

// This is a personal academic project. Dear PVS-Studio, please check it.

// PVS-Studio Static Code Analyzer for C, C++ and C#: http://www.viva64.com

// This is an open source non-commercial project. Dear PVS-Studio, please check it.

// PVS-Studio Static Code Analyzer for C, C++ and C#: http://www.viva64.com

// This is an independent project of an individual developer. Dear PVS-Studio, please check it.

// PVS-Studio Static Code Analyzer for C, C++ and C#: http://www.viva64.com

Some developers might not want additional commented lines not related to the project in their files. It is their right, and they can simply choose not to use the analyzer. Another option is to purchase a commercial license and use the tool without any limitations. We consider your adding these comments as your way to say thank you to us for the granted license and help us promote our product.

# V010. Analysis of 'Makefile/Utility' type projects is not supported in this tool. Use direct analyzer integration or compiler monitoring instead.

The V010 warning appears upon an attempt to check .vcxproj projects that have the 'Makefile' or 'Utility' configuration type. PVS-Studio does not support analyzing such projects from the plugin or the command-line version of the analyzer, because in makefile/utility projects the build information necessary for the analyzer (the compilation parameters, in particular) is not available.

If the analysis of such projects is needed, please use the compiler monitoring system or direct integration of the analyzer. You can also disable this warning on the PVS-Studio settings page (Detectable Errors (C++), Fails list).

# V011. Presence of #line directives may cause some diagnostic messages to have incorrect file name and line number.

A #line directive is generated by the preprocessor and specifies the filename and line number that a particular line in the preprocessed file refers to. This is demonstrated by the following example.

#line 20 "a.h"
void X(); // Function X is declared at line 20 in file a.h
void Y(); // Function Y is declared at line 21 in file a.h
void Z(); // Function Z is declared at line 22 in file a.h
#line 5 "a.cpp"
int foo; // Variable foo is declared at line 5 in file a.cpp
int X() { // Definition of function X starts at line 6 in file a.cpp
return 0; // Line 7
} // Line 8

#line directives are used by various tools, including the PVS-Studio analyzer, to navigate the file.

Sometimes source files (*.c; *.cpp; *.h, etc.) happen to include #line directives as well. This may happen, for example, when the file is generated automatically by some code-generating software (example).

When preprocessing such a file, those #line directives will be added to the resulting *.i file. Suppose, for example, that we have a file named A.cpp:

int a;
#line 30 "My.y"
int b = 10 / 0;

After the preprocessing, we get the file A.i with the following contents:

#line 1 "A.cpp"
int a;
#line 30 "My.y"
int b = 10 / 0;

This makes correct navigation impossible. On detecting a division by zero, the analyzer will report this error as occurring at line 30 in the My.y file. Technically speaking, the analyzer is correct, as the error is indeed a result of the incorrect code in the My.y file. However, with the navigation broken, you will not be able to view the My.y file since the project may simply have no such file. In addition, you will never know that currently, the division-by-zero error actually occurs at line 3 in the A.cpp file.

To fix this issue, we recommend deleting all #line directives in the source files of your project. These directives typically get there by accident and only hinder the work of various tools, such as code analyzers, rather than help.

The V011 diagnostic was developed to detect such unwanted #line directives in the source code. The analyzer reports the first 10 #line directives in a file. Reporting more makes no sense, since you can easily find and delete the remaining #line directives using the search option of your editor.

This is the fixed code:

int a;
int b = 10 / 0;

After the preprocessing, you get the following *.i file:

#line 1 "A.cpp"
int a;
int b = 10 / 0;

The navigation is fixed, and the analyzer will correctly report that the division by zero occurs at line 2 in the A.cpp file.

# V051. Some of the references in project are missing or incorrect. The analysis results could be incomplete. Consider making the project fully compilable and building it before analysis.

A V051 message indicates that the C# project loaded into the analyzer contains compilation errors. These usually involve unknown data types, namespaces, and assemblies (dll files), and generally occur when you try to analyze a project whose dependent NuGet-package assemblies are absent on the local machine, or whose third-party libraries are absent among the projects of the current solution.

Despite this error, the analyzer will try to scan the part of the code that doesn't contain unknown types, but results of such analysis may be incomplete, as some of the messages may be lost. The reason is that most diagnostics can work properly only when the analyzer has complete information about all the data types contained in the source files to be analyzed, including the types implemented in third-party assemblies.

Even if rebuilding of dependency files is provided for in the build scenario of the project, the analyzer won't automatically rebuild the entire project. That's why we recommend that, before scanning it, you ensure that the project is fully compilable, including making sure that all the dependency assemblies (dll files) are present.

Sometimes the analyzer may mistakenly generate this message for a fully compilable project with all the dependencies present. It may happen, for example, when the project uses a non-standard MSBuild scenario - say, csproj files import some additional props and targets files. In this case, you can ignore the V051 message or turn it off in the analyzer settings.

If you wish to learn which compiler errors are causing the V051 error, start the analysis of your projects with the analyzer's cmd version, and add the '--logCompilerErrors' flag to its arguments (in a single line):

PVS-Studio_Cmd.exe -t MyProject.sln -p "Any CPU" -c "Debug"
--logCompilerErrors

# V052. A critical error had occurred.

The appearance of a V052 message means that a critical error has occurred inside the analyzer. Most likely, several source files will not be analyzed.

You can get additional information about this error from two sources: the analyzer report file (plog) and the standard error stream, stderr (when you use the command-line version).

If you are using the Visual Studio IDE or the Standalone application, the error stack is displayed in the PVS-Studio window. The stack is recorded at the very beginning of the plog file; it is split into substrings, each of which is recorded and displayed as a separate error without a number.

If you are working from the command line, you can check the return code of the command-line version to detect that an exception occurred, and then examine the plog without opening it in the Visual Studio IDE or the Standalone application. For this purpose, the report can be converted, for example, to a text file using the PlogConverter utility. Return codes of the command-line version are described in the section "Analyzing Visual C++ (.vcxproj) and Visual C# (.csproj) projects from the command line"; the PlogConverter utility is described in "Managing the Analysis Results (plog file)".

Although the V052 message is quite rare, we would appreciate your help in fixing the issue that caused it. To do this, please send the exception stack from the PVS-Studio output window (or the message from stderr, if the command-line version was used) to support@viva64.com.

# V061. An error has occurred.

A V061 message indicates that an error related to the analyzer's functioning has occurred. It could be an unexpected exception in the analyzer, failure to build a semantic model of the program, and so on.

In this case, please email us (support@viva64.com) and attach the text files from the .PVS-Studio directory (you can find them in the project directory) so that we could fix the bug as soon as possible.

In addition, you can use the verbose parameter to tell the analyzer to save additional information to the .PVS-Studio directory while running. That information could also be helpful.

Maven plugin:

<verbose>true</verbose>

Gradle plugin:

verbose = true

IntelliJ IDEA plugin:

1) Analyze -> PVS-Studio -> Settings

2) Tab Misc -> uncheck Remove intermediate files

# V062. Failed to run analyzer core. Make sure the correct 64-bit Java 8 or higher executable is used, or specify it manually.

A V062 message means that the plugin has failed to run the analyzer core. This message typically appears when attempting to launch the core with an incorrect Java version. The core can work correctly only with the 64-bit Java version 8 or higher. The analyzer retrieves the path to the Java interpreter from the PATH environment variable by default.

You can also specify the path to the required Java interpreter manually.

Maven plugin:

<javaPath>C:/Program Files/Java/jdk1.8.0_162/bin/java.exe</javaPath>

Gradle plugin:

javaPath = "C:/Program Files/Java/jdk1.8.0_162/bin/java.exe"

IntelliJ IDEA plugin:

1) Analyze -> PVS-Studio -> Settings

2) Tab Environment -> Java executable

If you still cannot launch the analyzer, please email us (support@viva64.com) and attach the text files from the .PVS-Studio directory (you can find it in the project directory). We will try to find a solution as soon as possible.

# V063. Analysis aborted by timeout.

A V063 message means that the analyzer has failed to check a file in the given time frame (10 minutes by default). Such messages are often accompanied by "GC overhead limit exceeded" messages.

In some cases, this problem can be solved by simply increasing the amount of memory and stack available to the analyzer.

Maven plugin:

<jvmArguments>-Xmx4096m, -Xss256m</jvmArguments>

Gradle plugin:

jvmArguments = ["-Xmx4096m", "-Xss256m"]

IntelliJ IDEA plugin:

1) Analyze -> PVS-Studio -> Settings

2) Tab Environment -> JVM arguments

The amount of memory available by default could be insufficient when analyzing generated code with numerous nested constructs.

You may want to exclude such code from analysis (using the exclude option) so that the analyzer does not waste time checking it.

A V063 message can also appear when the analyzer does not get enough system resources because of high CPU load. It could process the file correctly if given enough time, but the default time frame is too small.

If you are still getting this message, it may be a sign of a bug in the analyzer. In this case, please email us (support@viva64.com) and attach the text files from the .PVS-Studio directory (you can find it in the project directory) together with the code that seems to trigger this error so that we could fix the bug as soon as possible.

# V101. Implicit assignment type conversion to memsize type.

The analyzer detected a potential error related to implicit type conversion in an assignment. The error may consist in an incorrect calculation of the value of the expression on the right side of the "=" operator. An example of code causing the warning:

size_t a;
unsigned b;
...
a = b; // V101

Converting a 32-bit type to a memsize type is safe in itself, as there is no data loss. For example, you can always store the value of an unsigned variable in a variable of type size_t. However, the presence of such a conversion may indicate a hidden error made earlier.

The first possible cause of an error on a 64-bit system is a change in the way the expression is evaluated. Let's consider an example:

unsigned a = 10;
int b = -11;
ptrdiff_t c = a + b; //V101
cout << c << endl;

On a 32-bit system this code prints -1, while on a 64-bit system it prints 4294967295. This behavior fully conforms to the type conversion rules of C++, but it is most likely an error in real code.

Let's explain the example. According to C++ rules, the expression a + b has the type unsigned and holds the value 0xFFFFFFFFu. On a 32-bit system, ptrdiff_t is a signed 32-bit type; after 0xFFFFFFFFu is assigned to the signed 32-bit variable, it contains the value -1. On a 64-bit system, ptrdiff_t is a signed 64-bit type, so the value 0xFFFFFFFFu is represented as is, and the variable holds 4294967295 after the assignment.

The error may be corrected by avoiding mixed use of memsize and non-memsize types in one expression. An example of code correction:

size_t a = 10;
ptrdiff_t b = -11;
ptrdiff_t c = a + b;
cout << c << endl;

A better way to correct it is to avoid mixing signed and unsigned data types altogether.

The second possible cause of the error is an overflow occurring in 32-bit data types. In this case the error may be located before the assignment operator, and can only be detected indirectly. Such errors occur in code that allocates large amounts of memory. Let's consider an example:

unsigned Width  = 1800;
unsigned Height = 1800;
unsigned Depth  = 1800;
// Real error is here
unsigned CellCount = Width * Height * Depth;
// Here we get a diagnostic message V101
size_t ArraySize = CellCount * sizeof(char);
cout << ArraySize << endl;
void *Array = malloc(ArraySize);

Suppose we decided to process data arrays of more than 4 GB on a 64-bit system. In this case the given code allocates the wrong amount of memory: the programmer plans to allocate 5832000000 bytes but gets only 1537032704, because an overflow occurs while calculating the Width * Height * Depth expression. Unfortunately, we cannot diagnose the error in the line containing this expression, but we can indirectly indicate its presence by detecting the type conversion in the line:

size_t ArraySize = CellCount * sizeof(char); //V101

To correct the error you should use types capable of storing the necessary range of values. Note that a correction of the following kind is not appropriate:

size_t CellCount = Width * Height * Depth;

We still have the overflow here. Let's consider two examples of proper code correction:

// 1)
unsigned Width  = 1800;
unsigned Height = 1800;
unsigned Depth  = 1800;
size_t CellCount =
static_cast<size_t>(Width) *
static_cast<size_t>(Height) *
static_cast<size_t>(Depth);
// 2)
size_t Width  = 1800;
size_t Height = 1800;
size_t Depth  = 1800;
size_t CellCount = Width * Height * Depth;

Keep in mind that the error may be located not just earlier in the code, but even in another module. Let's give a corresponding example, where the error consists in incorrect index calculation when the array's size exceeds 4 GB.

Suppose that the application uses a large one-dimensional array and CalcIndex function allows you to address this array as a two-dimensional one.

extern unsigned ArrayWidth;
unsigned CalcIndex(unsigned x, unsigned y) {
return x + y * ArrayWidth;
}
...
const size_t index = CalcIndex(x, y); //V101

The analyzer will warn about the line: const size_t index = CalcIndex(x, y). But the error is in the incorrect implementation of the CalcIndex function. Taken separately, CalcIndex is absolutely correct: its input and output values have the type unsigned, the calculations involve only unsigned types, and there are no explicit or implicit type conversions, so the analyzer has no way to detect a logic problem inside CalcIndex. The error is that the types of the function's result, and possibly of its input values, were chosen incorrectly: the function's result must have a memsize type.

Fortunately, the analyzer managed to detect the implicit conversion of the CalcIndex function's result to the size_t type. This allows you to analyze the situation and make the necessary changes to the program. The error may be corrected, for example, as follows:

extern size_t ArrayWidth;
size_t CalcIndex(size_t x, size_t y) {
return x + y * ArrayWidth;
}
...
const size_t index = CalcIndex(x, y);

If you are sure that the code is correct and the array's size will never reach 4 Gb you can suppress the analyzer's warning message by explicit type conversion:

extern unsigned ArrayWidth;
unsigned CalcIndex(unsigned x, unsigned y) {
return x + y * ArrayWidth;
}
...
const size_t index = static_cast<size_t>(CalcIndex(x, y));

In some cases the analyzer can determine on its own that an overflow is impossible, and no message is displayed.

Let's consider the last example, related to an incorrect shift operation:

ptrdiff_t SetBitN(ptrdiff_t value, unsigned bitNum) {
ptrdiff_t mask = 1 << bitNum; //V101
return value | mask;
}

The expression "mask = 1 << bitNum" is unsafe because this code cannot set the high-order bits of the 64-bit mask variable to one. If you try to use the SetBitN function to set, for example, the 33rd bit, an overflow will occur during the shift operation and you will not get the result you expected.

# V102. Usage of non memsize type for pointer arithmetic.

The analyzer found a possible error in pointer arithmetic. The error may be caused by an overflow during the evaluation of the expression.

Let's take up the first example.

short a16, b16, c16;
char *pointer;
...
pointer += a16 * b16 * c16;

The given example works correctly with pointers as long as the value of the expression a16 * b16 * c16 does not exceed INT_MAX (2 GB). This code could always work correctly on the 32-bit platform because the program never allocated large arrays. On the 64-bit platform, a programmer using this code on a large array would be disappointed. Suppose we would like to shift the pointer by 3000000000 bytes, and the variables a16, b16 and c16 have the values 3000, 1000 and 1000 correspondingly. During the evaluation of the expression a16 * b16 * c16, all the variables will, according to the C++ rules, be converted to the type int, and only then will the multiplication take place. An overflow will occur during the multiplication, and the result will be the number -1294967296. This incorrect result will be extended to the type ptrdiff_t and added to the pointer. As a result, we will face abnormal program termination when trying to use the incorrect pointer.

To prevent such errors one should use memsize types. In our case it will be correct to change the types of the variables a16, b16, c16 or to use the explicit type conversion to type ptrdiff_t as follows:

short a16, b16, c16;
char *pointer;
...
pointer += static_cast<ptrdiff_t>(a16) *
static_cast<ptrdiff_t>(b16) *
static_cast<ptrdiff_t>(c16);

It's worth mentioning that using a non-memsize type in pointer arithmetic is not always incorrect. Let's examine the following situation:

char ch;
short a16;
int *pointer;
...
int *decodePtr = pointer + ch * a16;

The analyzer does not warn about this line because it is correct: there are no calculations that may cause an overflow, and the result of the expression will always be correct on the 32-bit platform as well as on the 64-bit platform.

# V103. Implicit type conversion from memsize type to 32-bit type.

The analyzer found a possible error related to an implicit conversion from a memsize type to a 32-bit type. The error consists in the loss of the high bits of the 64-bit value.

The compiler also diagnoses such type conversions and shows warnings. Unfortunately, these warnings are often switched off, especially when the project contains a great deal of legacy code or uses old libraries. To spare the programmer from looking through hundreds and thousands of such compiler warnings, the analyzer reports only those that may cause incorrect behavior of the code on the 64-bit platform.

The first example.

Our application works with video, and we want to calculate the file size needed to store all the frames kept in memory.

size_t Width, Height, FrameCount;
...
unsigned BufferSizeForWrite = Width * Height * FrameCount *
sizeof(RGBStruct);

Previously, the total size of the frames in memory could never exceed 4 GB (in practice, 2-3 GB depending on the version of Windows). On the 64-bit platform we can store many more frames in memory; let's suppose their total size is 10 GB. After the result of the expression Width * Height * FrameCount * sizeof(RGBStruct) is written into the variable BufferSizeForWrite, the high bits are truncated and we are dealing with an incorrect value.

The correct solution is to change the type of the variable BufferSizeForWrite to size_t.

size_t Width, Height, FrameCount;
...
size_t  BufferSizeForWrite = Width * Height * FrameCount *
sizeof(RGBStruct);

The second example.

Storing the result of a pointer subtraction.

char *ptr_1, *ptr_2;
...
int diff = ptr_2 -  ptr_1;

If the pointers differ by more than INT_MAX bytes (2 GB), the value is truncated during the assignment and the variable diff holds an incorrect value. To store this value we should use the type ptrdiff_t or another memsize type.

char *ptr_1, *ptr_2;
...
ptrdiff_t diff = ptr_2 -  ptr_1;

When you are sure the code is correct and the implicit type conversion does not cause errors when moving to the 64-bit platform, you may use an explicit type conversion to suppress the messages shown for this line. For example:

unsigned BitCount = static_cast<unsigned>(sizeof(RGBStruct) * 8);

If you suspect that the code contains incorrect explicit conversions of memsize types to 32-bit types that the analyzer does not report, you can use the V202 diagnostic.

As mentioned above, the analyzer reports only those type conversions that may cause incorrect behavior on a 64-bit platform. The code below is not considered incorrect, even though a memsize type is converted to the int type:

int size = sizeof(float);

# V104. Implicit type conversion to memsize type in an arithmetic expression.

The analyzer found a possible error inside an arithmetic expression, related to an implicit conversion to a memsize type. An overflow may occur because the range of values of the variables in the expression changes.

The first example.

An incorrect comparison expression. Let's examine the code:

size_t n;
unsigned i;
// Infinite loop (n > UINT_MAX).
for (i = 0; i != n; ++i) { ... }

This example shows an error related to the implicit conversion of the type unsigned to the type size_t in the comparison operation.

On the 64-bit platform you may be able to process a larger amount of data, and the value of the variable n may exceed UINT_MAX (4 GB). As a result, the condition i != n would always be true, causing an infinite loop.

An example of the corrected code:

size_t n;
size_t  i;
for (i = 0; i != n; ++i) { ... }

The second example.

char *begin, *end;
int bufLen, bufCount;
...
ptrdiff_t diff = begin - end + bufLen * bufCount;

The implicit conversion of the type int to the type ptrdiff_t often indicates an error. Note that the conversion takes place not at the "=" operator (the expression begin - end + bufLen * bufCount already has the type ptrdiff_t), but inside the expression. According to C++ rules, the subexpression begin - end has the type ptrdiff_t, while bufLen * bufCount has the type int. When moving to the 64-bit platform, the program may begin to process larger amounts of data, which may result in an overflow while evaluating the subexpression bufLen * bufCount.

You should change the type of the variables bufLen and bufCount to a memsize type, or use an explicit type conversion, as follows:

char *begin, *end;
int bufLen, bufCount;
...
ptrdiff_t diff = begin - end +
ptrdiff_t(bufLen) * ptrdiff_t(bufCount);

Note that an implicit conversion to a memsize type inside an expression is not always incorrect. Let's examine the following situation:

size_t value;
char c1, c2;
size_t result = value + c1 * c2;

The analyzer does not show an error message here, even though the conversion of the type int to size_t occurs, because no overflow can happen while evaluating the subexpression c1 * c2.

If you suspect that the program may contain errors related to incorrect explicit type conversions in expressions, you may use the V201 diagnostic. Here is an example where an explicit conversion to the type size_t hides an error:

int i;
size_t st;
...
st = size_t(i * i * i) * st;

# V105. N operand of '?:' operation: implicit type conversion to memsize type.

The analyzer found a possible error inside an arithmetic expression, related to an implicit conversion to a memsize type. An overflow may occur because the range of values of the variables in the expression changes. This warning is almost equivalent to V104, except that the implicit type conversion occurs due to the use of the ?: operator.

Let's give an example of an implicit type conversion while using this operator:

int i32;
float f = b != 1 ? sizeof(int) : i32;

The arithmetic expression uses the ternary operator ?:, which has three operands:

• b != 1 - the first operand;
• sizeof(int) - the second operand;
• i32 - the third operand.

The result of the expression b != 1 ? sizeof(int) : i32 has the type size_t, which is then converted to the type float. Thus, an implicit type conversion is performed for the third operand of the ?: operator.

Let's examine an example of the incorrect code:

bool useDefaultVolume;
size_t defaultVolume;
unsigned width, height, depth;
...
size_t volume = useDefaultVolume ?
defaultVolume :
width * height * depth;

Suppose we are developing a computational modeling application that requires a three-dimensional calculation area. The number of calculation elements is determined according to the value of the variable useDefaultVolume, and is either assigned by default or computed by multiplying the length, height and depth of the calculation area. On the 32-bit platform the amount of memory that can be allocated cannot exceed 2-3 GB (depending on the version of Windows), and consequently the result of the expression width * height * depth is always correct. On the 64-bit platform, using the opportunity to work with a larger amount of memory, the number of calculation elements may exceed UINT_MAX (4 GB). In this case an overflow occurs while evaluating the expression width * height * depth, because the result of this expression has the type unsigned.

The code can be corrected by changing the type of the variables width, height and depth to a memsize type:

...
size_t width, height, depth;
...
size_t volume = useDefaultVolume ?
defaultVolume :
width * height * depth;

Or by using an explicit type conversion:

unsigned width, height, depth;
...
size_t volume = useDefaultVolume ?
defaultVolume :
size_t(width) * size_t(height) * size_t(depth);

In addition, we advise reading the description of the similar warning V104, which covers other effects of implicit conversion to memsize types.

# V106. Implicit type conversion N argument of function 'foo' to memsize type.

The analyzer found a possible error related to an implicit conversion of an actual function argument to a memsize type.

The first example.

The program works with large arrays using the CArray container from the MFC library. On the 64-bit platform the number of array items may exceed INT_MAX (2 GB), which makes the following code unworkable:

CArray<int, int> myArray;
...
int invalidIndex = 0;
INT_PTR validIndex = 0;
while (validIndex != myArray.GetSize()) {
myArray.SetAt(invalidIndex, 123);
++invalidIndex;
++validIndex;
}

The given code fills all the items of the array myArray with the value 123. It looks absolutely correct, and the compiler will not show any warnings, despite the code's inability to work on the 64-bit platform. The error consists in using the type int for the index variable invalidIndex. When the value of invalidIndex exceeds INT_MAX, an overflow occurs and it becomes negative. The analyzer diagnoses this error and warns that an implicit conversion of the first argument of the SetAt function to a memsize type (here, the type INT_PTR) occurs. Seeing such a warning, you can correct the error by replacing the type int with a more appropriate one.

The given example is significant because it would be rather unfair to blame the programmer for the defective code. The reason is that the SetAt function in the CArray class in the previous version of the MFC library was declared as follows:

void SetAt(int nIndex, ARG_TYPE newElement);

And in the new version:

void SetAt(INT_PTR nIndex, ARG_TYPE newElement);

Even the Microsoft developers who created MFC could not take into account all the consequences of using the type int for indexing an array, so we can forgive the ordinary developer who wrote this code.

Here is the correct variant:

...
INT_PTR  invalidIndex = 0;
INT_PTR validIndex = 0;
while (validIndex != myArray.GetSize()) {
myArray.SetAt(invalidIndex, 123);
++invalidIndex;
++validIndex;
}

The second example.

The program determines the necessary data array size and then allocates it using the malloc function as follows:

unsigned GetArraySize();
...
unsigned size = GetArraySize();
void *p = malloc(size);

The analyzer will warn about the line void *p = malloc(size);. Looking at the definition of the malloc function, we see that its formal argument specifying the size of the allocated memory has the type size_t, but the program uses the variable size of the type unsigned as the actual argument. If your program on the 64-bit platform needs an array larger than UINT_MAX bytes (4 GB), this code is certainly incorrect, since the type unsigned cannot hold a value greater than UINT_MAX. The correction consists in changing the types of the variables and functions used in determining the data array size. In the given example we should replace the type unsigned with one of the memsize types, and, if necessary, also modify the code of the GetArraySize function.

...
size_t  GetArraySize();
...
size_t size = GetArraySize();
void *p = malloc(size);

The analyzer shows warnings about implicit type conversions only when they may cause an error when porting the program to the 64-bit platform. Here is code that contains an implicit type conversion but does not cause errors:

void MyFoo(SSIZE_T index);
...
char c = 'z';
MyFoo(0);
MyFoo(c);

If you are sure that the implicit type conversion of the actual function argument is absolutely correct, you may use an explicit type conversion to suppress the analyzer's warnings, as follows:

typedef size_t TYear;
void MyFoo(TYear year);
int year;
...
MyFoo(static_cast<TYear>(year));

Sometimes an explicit type conversion may hide an error. In this case you may use the V201 diagnostic.

# V107. Implicit type conversion N argument of function 'foo' to 32-bit type.

The analyzer found a possible error related to an implicit conversion of an actual function argument from a memsize type to a 32-bit type.

Let's examine an example of code that contains a function searching for the maximum array item:

float FindMaxItem(float *array, int arraySize) {
float max = -FLT_MAX;
for (int i = 0; i != arraySize; ++i) {
float item = *array++;
if (max < item)
max = item;
}
return max;
}
...
float *beginArray;
float *endArray;
float maxValue = FindMaxItem(beginArray, endArray - beginArray);

This code may work successfully on the 32-bit platform, but it will not be able to process arrays containing more than INT_MAX (2 GB) items on the 64-bit architecture. This limitation is caused by the use of the type int for the argument arraySize. Note that the function code looks absolutely correct not only from the compiler's point of view but also from the analyzer's: there is no type conversion inside the function, so the problem cannot be found there.

The analyzer will warn about the implicit conversion of a memsize type to a 32-bit type at the call of the FindMaxItem function. Let's find out why. According to C++ rules, the result of subtracting two pointers has the type ptrdiff_t. When FindMaxItem is invoked, the ptrdiff_t value is implicitly converted to the type int, which causes the loss of the high bits. This may lead to incorrect program behavior when processing a large amount of data.

The correct solution is to replace the type int with the type ptrdiff_t, since it can hold the whole range of values. The corrected code:

float FindMaxItem(float *array, ptrdiff_t  arraySize) {
float max = -FLT_MAX;
for (ptrdiff_t  i = 0; i != arraySize; ++i) {
float item = *array++;
if (max < item)
max = item;
}
return max;
}

The analyzer tries, as far as possible, to recognize safe type conversions and refrain from reporting them. For example, it will not warn about the call of the FindMaxItem function in the following code:

float Arr[1000];
float maxValue =
FindMaxItem(Arr, sizeof(Arr)/sizeof(float));

When you are sure the code is correct and the implicit type conversion of the actual function argument does not cause errors, you may use an explicit type conversion to suppress the warning messages. An example:

extern int nPenStyle;
extern size_t nWidth;
extern COLORREF crColor;
...
// Call constructor CPen::CPen(int, int, COLORREF)
CPen myPen(nPenStyle, static_cast<int>(nWidth), crColor);

If you suspect that the code contains incorrect explicit conversions of memsize types to 32-bit types that the analyzer does not report, you may use the V202 diagnostic.

# V108. Incorrect index type: 'foo[not a memsize-type]'. Use memsize type instead.

The analyzer found a possible error in indexing large arrays. The error may consist in an incorrect calculation of the index.

The first example.

extern char *longString;
extern bool *isAlnum;
...
unsigned i = 0;
while (*longString) {
isAlnum[i] = isalnum(*longString++);
++i;
}

The given code is absolutely correct for the 32-bit platform, where it is practically impossible to process arrays of more than UINT_MAX bytes (4 GB). On the 64-bit platform it is possible, and sometimes very convenient, to process an array larger than 4 GB. The error consists in using a variable of the type unsigned for indexing the array isAlnum. After the first UINT_MAX items are filled, the variable i overflows and becomes zero. As a result, we begin to overwrite the isAlnum items located at the beginning of the array, while some items are left unassigned.

The fix is to change the type of the variable i to a memsize type:

...
size_t  i = 0;
while (*longString)
isAlnum[i++] = isalnum(*longString++);

The second example.

class Region {
float *array;
int Width, Height, Depth;
float Region::GetCell(int x, int y, int z) const;
...
};
float Region::GetCell(int x, int y, int z) const {
return array[x + y * Width + z * Width * Height];
}

For computational modeling programs, main memory is an important resource, and the ability to use more than 4 GB of memory on the 64-bit architecture greatly increases computational capabilities. Such programs often use one-dimensional arrays that are then treated as three-dimensional ones, with functions similar to GetCell providing access to the necessary items of the calculation area. However, the given code can correctly handle only arrays containing no more than INT_MAX (2 GB) items. The reason is the use of 32-bit int types in calculating the item index: if the number of items in the array exceeds INT_MAX (2 GB), an overflow occurs and the index value is calculated incorrectly. Programmers often make a mistake trying to correct the code in the following way:

float Region::GetCell(int x, int y, int z) const {
return array[static_cast<ptrdiff_t>(x) + y * Width +
z * Width * Height];
}

They know that, according to C++ rules, the expression for calculating the index has the type ptrdiff_t, and hope thereby to avoid the overflow. Unfortunately, the overflow may occur inside the subexpression y * Width or z * Width * Height, since these are still evaluated using the type int.

If you want to correct the code without changing the types of the variables in the expression, you should explicitly convert each variable to a memsize type:

float Region::GetCell(int x, int y, int z) const {
return array[ptrdiff_t(x) +
ptrdiff_t(y) * ptrdiff_t(Width) +
ptrdiff_t(z) * ptrdiff_t(Width) *
ptrdiff_t(Height)];
}

Another solution is to change the variable types to a memsize type:

class Region {
float *array;
ptrdiff_t Width, Height, Depth;
float
Region::GetCell(ptrdiff_t x, ptrdiff_t y, ptrdiff_t z) const;
...
};
float Region::GetCell(ptrdiff_t x, ptrdiff_t y, ptrdiff_t z) const
{
return array[x + y * Width + z * Width * Height];
}

If you index with expressions whose type differs from a memsize type but are sure the code is correct, you may use an explicit type conversion to suppress the analyzer's warnings, as follows:

bool *Seconds;
int min, sec;
...
bool flag = Seconds[static_cast<size_t>(min * 60 + sec)];

If you suspect that the program may contain errors related to incorrect explicit type conversions in expressions, you may use the V201 diagnostic.

The analyzer tries, as far as possible, to understand when using a non-memsize type as an array index is safe, and refrains from displaying warnings in such cases. As a result, the analyzer's behavior may sometimes seem strange. In such situations, please take your time and try to analyze the situation. Let's consider the following code:

char Arr[] = { '0', '1', '2', '3', '4' };
char *p = Arr + 2;
cout << p[0u + 1] << endl;
cout << p[0u - 1] << endl; //V108

This code works correctly in 32-bit mode and prints the numbers 3 and 1. When checking this code we get a warning only on the line with the expression "p[0u - 1]", and rightly so: if you compile and launch this example in 64-bit mode, the value 3 is printed and then the program crashes.

The error is that the indexing in "p[0u - 1]" is incorrect on a 64-bit system, which is what the analyzer warns about. According to C++ rules, the expression "0u - 1" has the type unsigned and equals 0xFFFFFFFFu. On a 32-bit architecture, adding this number to an index is the same as subtracting 1. On a 64-bit system, the value 0xFFFFFFFFu is duly added to the index, and memory outside the array is addressed.

Of course, indexing arrays with types such as int and unsigned is often safe, and in those cases the analyzer's warnings may seem inappropriate. But keep in mind that such code may still become unsafe if it is later adapted to process a different data set. Code using int and unsigned types can also turn out to be less efficient than is possible on the 64-bit architecture.

If you are sure the indexing is correct, you can use "Suppression of false alarms" or filters, or apply an explicit type conversion in the code:

for (int i = 0; i != n; ++i)
Array[static_cast<ptrdiff_t>(i)] = 0;

# V109. Implicit type conversion of return value to memsize type.

The analyzer found a possible error related to an implicit conversion of the return value type. The error may consist in an incorrect calculation of the return value.

Let's examine an example.

extern int Width, Height, Depth;
size_t GetIndex(int x, int y, int z) {
return x + y * Width + z * Width * Height;
}
...
array[GetIndex(x, y, z)] = 0.0f;

If the code works with large arrays (more than INT_MAX items), it behaves incorrectly and we address different items of the array than intended. Yet the analyzer does not warn about the line array[GetIndex(x, y, z)] = 0.0f;, for it is absolutely correct. Instead, the analyzer reports a possible error inside the function, and rightly so: the error is located exactly there and is related to an arithmetic overflow. Despite the fact that we return a size_t value, the expression x + y * Width + z * Width * Height is evaluated using the type int.

To correct the error we should explicitly convert all the variables in the expression to memsize types.

extern int Width, Height, Depth;
size_t GetIndex(int x, int y, int z) {
return (size_t)(x) +
(size_t)(y) * (size_t)(Width) +
(size_t)(z) * (size_t)(Width) * (size_t)(Height);
}

Another way to correct it is to use other types for the variables in the expression.

extern size_t Width, Height, Depth;
size_t GetIndex(size_t x, size_t y, size_t z) {
return x + y * Width + z * Width * Height;
}

When you are sure the code is correct and the implicit type conversion does not cause errors when porting to the 64-bit architecture, you may use an explicit type conversion to suppress the warnings for this line. For example:

DWORD_PTR Calc(unsigned a) {
return (DWORD_PTR)(10 * a);
}

In case you suspect that the code contains incorrect explicit conversions to memsize types about which the analyzer does not warn, you may use the V201 diagnostic.

# V110. Implicit type conversion of return value from memsize type to 32-bit type.

The analyzer found a possible error related to the implicit conversion of the return value. The error consists in dropping the high bits of the 64-bit type, which causes the loss of the value.

Let's examine an example.

extern char *begin, *end;
unsigned GetSize() {
return end - begin;
}

The result of the "end - begin" expression has the ptrdiff_t type. But as the function returns the unsigned type, an implicit type conversion occurs which drops the high bits of the result. Thus, if the pointers begin and end refer to the beginning and the end of an array larger than UINT_MAX (4 GB), the function will return an incorrect value.

The correction consists in modifying the program in such a way that array sizes are stored and passed around in memsize types. In this case the correct code of the GetSize function looks as follows:

extern char *begin, *end;
size_t  GetSize() {
return end - begin;
}

In some cases the analyzer won't display a warning on a type conversion that is obviously correct. For example, no warning is displayed on the following code: although the result of the sizeof() operator has the size_t type, it can be safely placed into the unsigned type:

unsigned GetSize() {
return sizeof(double);
}

When you are sure that the code is correct and the implicit type conversion does not cause errors while porting to the 64-bit architecture, you may use an explicit type conversion to suppress the warnings. For example:

unsigned GetBitCount() {
return static_cast<unsigned>(sizeof(TypeRGBA) * 8);
}

If you suspect that the code contains incorrect explicit conversions of return value types about which the analyzer does not warn, you may use the V202 diagnostic.

# V111. Call of function 'foo' with variable number of arguments. N argument has memsize type.

The analyzer found a possible error related to passing an actual argument of a memsize type to a function with a variable number of arguments. The danger is that the requirements imposed on such a call change on the 64-bit system.

Let's examine an example.

const char *invalidFormat = "%u";
size_t value = SIZE_MAX;
printf(invalidFormat, value);

The given code does not take into account that the size_t type does not coincide with the unsigned type on the 64-bit platform. This causes an incorrect result to be printed whenever value > UINT_MAX. The analyzer warns you that a memsize type is used as an actual argument. It means that you should check the 'invalidFormat' string defining the printing format. The correct variant may look as follows:

const char *validFormat = "%Iu";
size_t value = SIZE_MAX;
printf(validFormat, value);

In the code of a real application, this error can occur in the following form, e.g.:

wsprintf(szDebugMessage,
_T("%s location %08x caused an access violation.\r\n"),
Exception->m_pAddr);

The second example.

char buf[9];
sprintf(buf, "%p", pointer);

The author of this inaccurate code did not take into account that the pointer size may exceed 32 bits in the future. As a result, this code will cause a buffer overflow on the 64-bit architecture. After checking the code the V111 warning points at, you may choose one of two ways: increase the buffer size or rewrite the code using safe constructs.

char buf[sizeof(pointer) * 2 + 1];
sprintf(buf, "%p", pointer);
// --- or ---
std::stringstream s;
s << pointer;

The third example.

char buf[9];
sprintf_s(buf, sizeof(buf), "%p", pointer);

While examining the second example you could rightly notice that, in order to prevent the overflow, you should use functions with security enhancements. In this case the buffer overflow won't occur, but unfortunately the correct result won't be printed either.

If the argument types do not change their size on the 64-bit platform, the code is considered correct and no warnings are shown. Example:

printf("%d", 10*5);
CString str;
size_t n = sizeof(float);
str.Format(StrFormat, static_cast<int>(n));

Unfortunately, we often cannot distinguish correct code from incorrect code while diagnosing this type of error. The warning will be shown on many calls of variadic functions even when the call is absolutely correct. This is related to the inherent danger of using such C++ constructs. The most frequent problems concern the use of variants of the following functions: printf, scanf, CString::Format. The generally accepted practice is to give them up in favor of safe programming methods. For example, you may replace printf with cout and sprintf with boost::format or std::stringstream.

Note. Eliminating false positives when working with formatted output functions

The V111 diagnostic is very simple. When the analyzer has no information about a variadic function, it warns you about every case when variable of memsize-type is passed to that function. When it does have the information, the more accurate diagnostic V576 joins in, controlling the output of the V111 diagnostic. If V576 finds nothing, no V111 warning is issued either.

Therefore, you can reduce the number of false positives by providing the analyzer with information about the format functions. The analyzer is already familiar with such typical functions as 'printf', 'sprintf', etc., so it is user-implemented functions that you want to annotate. See the description of the V576 diagnostic for details about annotating functions.

Consider the following example. You may ask, "Why does not the analyzer output a V111 warning in case N1, but does that in case N2?"

void OurLoggerFn(wchar_t const* const _Format, ...)
{
....
}
void Foo(size_t length)
{
wprintf( L"%Iu", length );     // N1
OurLoggerFn( L"%Iu", length ); // N2
}

The reason is that the analyzer knows how standard function 'wprintf' works, while it knows nothing about 'OurLoggerFn', so it prefers to be overcautious and issues a warning about passing a memsize-type variable ('size_t' in this case) as an actual argument to a variadic function.

To eliminate the V111 warning, annotate the 'OurLoggerFn' function as follows:

//+V576, function:OurLoggerFn, format_arg:1, ellipsis_arg:2
void OurLoggerFn(wchar_t const* const _Format, ...)
.....

# V112. Dangerous magic number N used.

The analyzer found the use of a dangerous magic number. The possible error may consist in using a numeric literal as a special value or as the size of a memsize type.

Let's examine the first example.

size_t ArraySize = N * 4;
size_t *Array = (size_t *)malloc(ArraySize);

While writing the program, the programmer relied on the assumption that the size of size_t will always equal 4 bytes and wrote the array size calculation as "N * 4". This code does not take into account that size_t occupies 8 bytes on the 64-bit system, so less memory is allocated than necessary. The correction consists in using the sizeof operator instead of the constant 4.

size_t ArraySize = N * sizeof(size_t);
size_t *Array = (size_t *)malloc(ArraySize);

The second example.

size_t n = static_cast<size_t>(-1);
if (n == 0xffffffffu) { ... }

Sometimes the value "-1", written as "0xffffffff", is used as an error code or another special marker. On the 64-bit platform the written comparison is incorrect, and one should use the value "-1" explicitly.

size_t n = static_cast<size_t>(-1);
if (n == static_cast<size_t>(-1)) { ... }

Magic numbers of this kind may influence the correct operation of an application when it is ported to a 64-bit system, and they are therefore diagnosed by the analyzer.

You should study the code thoroughly to see whether it contains such magic constants, and replace them with safe constants and expressions. For this purpose you may use the sizeof() operator, special values from <limits.h>, <inttypes.h>, etc.

In some cases magic constants are not considered unsafe. For example, there will be no warning on this code:

float Color[4];

# V113. Implicit type conversion from memsize to double type or vice versa.

The analyzer found a possible error related to the implicit conversion of a memsize type to the double type or vice versa. The possible error consists in the impossibility of storing the whole value range of a memsize type in a variable of the double type.

Let's study an example.

SIZE_T size = SIZE_MAX;
double tmp = size;
size = tmp; // x86: size == SIZE_MAX
// x64: size != SIZE_MAX

The double type has a size of 64 bits and complies with the IEEE-754 standard on 32-bit and 64-bit systems. Some programmers use the double type to store and handle integer values.

The given example may be justified on a 32-bit system, for the double type has 52 bits of mantissa and is capable of storing a 32-bit integer value without loss. But when one tries to store a 64-bit integer value in a variable of the double type, the exact value can be lost.

If an approximate value is acceptable for your program's algorithm, no corrections are needed. But we would like to warn you about this change of behavior of such code on 64-bit systems. In any case it is not recommended to mix integer arithmetic with floating-point arithmetic.

# V114. Dangerous explicit type pointer conversion.

The analyzer found a possible error related to a dangerous explicit conversion of a pointer of one type to a pointer of another. The error may consist in incorrect handling of the objects the pointer refers to.

Let's examine an example. It contains an explicit conversion of an int pointer to a size_t pointer.

int array[4] = { 1, 2, 3, 4 };
size_t *sizetPtr = (size_t *)(array);
cout << sizetPtr[1] << endl;

As you can see, the program output differs between the 32-bit and 64-bit variants. On the 32-bit system the access to the array items is correct, for the sizes of the size_t and int types coincide, and we see the output "2". On the 64-bit system we get "17179869187" in the output, for it is this value that resides in the element sizetPtr[1].

The correction of the described situation consists in giving up dangerous type conversions by modernizing the program. Another variant is to create a new array and copy into it the values from the original array.

Of course, not all explicit conversions of pointer types are dangerous. In the following example the result does not depend on the platform, for the enum type and the int type have the same size on both the 32-bit and the 64-bit system. So the analyzer won't show any warnings on this code.

int array[4] = { 1, 2, 3, 4 };
enum ENumbers { ZERO, ONE, TWO, THREE, FOUR };
ENumbers *enumPtr = (ENumbers *)(array);
cout << enumPtr[1] << endl;

# V115. Memsize type is used for throw.

The analyzer found a possible error related to the use of memsize type for throwing an exception. The error may consist in the incorrect exception handling.

Let's examine an example of the code which contains throw and catch operators.

char *ptr1, *ptr2;
...
try {
throw ptr2 - ptr1;
}
catch(int) {
Foo();
}

On the 64-bit system the exception handler will not work and the function Foo() will not be called. This results from the fact that the expression "ptr2 - ptr1" has the ptrdiff_t type, which on the 64-bit system is not equivalent to the int type.

The correction consists in using the correct type to catch the exception. In this case the ptrdiff_t type is necessary, as shown below.

try {
throw ptr2 - ptr1;
}
catch(ptrdiff_t) {
Foo();
}

A better correction would consist in giving up this programming practice altogether. We recommend using special classes for passing information about the error.

• 64-bit Lessons. Lesson 20. Pattern 12. Exceptions.

# V116. Memsize type is used for catch.

The analyzer found a possible error related to the use of memsize type for catching exception. The error may consist in the incorrect exception handling.

Let's examine an example of the code which contains throw and catch operators.

try {
try {
throw UINT64(-1);
}
catch(size_t) {
cout << "x64 portability issues" << endl;
}
}
catch(UINT64) {
cout << "OK" << endl;
}

The result of running this code on the 32-bit system: OK
The result on the 64-bit system: x64 portability issues

This behavior change is caused by the fact that on the 64-bit system the size_t type is equivalent to UINT64.

The correction of the described situation consists in changing the code to achieve the necessary logic.

A better correction would consist in giving up this programming practice altogether. We recommend using special classes for passing information about the error.

• 64-bit Lessons. Lesson 20. Pattern 12. Exceptions.

# V117. Memsize type is used in the union.

The analyzer found a possible error related to the use of a memsize type inside a union. The error may occur while working with such unions without taking into account the changes of the sizes of memsize types on the 64-bit system.

One should be attentive to the unions which contain pointers and other members of memsize type.

The first example.

Sometimes one needs to work with a pointer as with an integer. The code in the example is convenient because no explicit type conversions are needed to work with the numeric form of the pointer.

union PtrNumUnion {
char *m_p;
unsigned m_n;
} u;
...
u.m_p = str;
u.m_n += delta;

This code is correct on 32-bit systems and incorrect on 64-bit ones. Changing the m_n member on the 64-bit system, we modify only a part of the m_p pointer. One should use a type that conforms to the pointer size, as follows.

union PtrNumUnion {
char *m_p;
size_t m_n; //type fixed
} u;

The second example.

Another frequent use of a union is representing one member as a set of smaller ones. For example, we may need to split a size_t value into bytes to implement a table-based algorithm for counting zero bits in a byte.

union SizetToBytesUnion {
size_t value;
struct {
unsigned char b0, b1, b2, b3;
} bytes;
};

SizetToBytesUnion u;
u.value = value;
size_t zeroBitsN = TranslateTable[u.bytes.b0] +
TranslateTable[u.bytes.b1] +
TranslateTable[u.bytes.b2] +
TranslateTable[u.bytes.b3];

A fundamental algorithmic error is made here, based on the supposition that the size_t type consists of 4 bytes. Automatic search for algorithmic errors is not possible at the current stage of development of static analyzers, but Viva64 finds all the unions which contain memsize types. Looking through the list of such potentially dangerous unions, a user can find logical errors. On finding the union given in the example, a user can detect the algorithmic error and rewrite the code in the following way.

union SizetToBytesUnion {
size_t value;
unsigned char bytes[sizeof(value)];
};

SizetToBytesUnion u;
u.value = value;
size_t zeroBitsN = 0;
for (size_t i = 0; i != sizeof(u.bytes); ++i)
zeroBitsN += TranslateTable[u.bytes[i]];

This warning message is similar to the warning V122.

# V118. malloc() function accepts a dangerous expression in the capacity of an argument.

The analyzer detected a potential error related to a dangerous expression serving as an actual argument of the malloc function. The error may lie in incorrect assumptions about types' sizes defined as numerical constants.

The analyzer considers suspicious those expressions which contain constant literals multiple of four but which lack sizeof() operator.

Example 1.

An incorrect code of memory allocation for a matrix 3x3 of items of size_t type may look as follows:

size_t *pMatrix = (size_t *)malloc(36); // V118

Although this code could work very well in a 32-bit system, using the number 36 is incorrect: when compiling a 64-bit version, 72 bytes must be allocated. You may use the sizeof() operator to correct this error:

size_t *pMatrix = (size_t *)malloc(9 * sizeof(size_t));

Example 2.

The following code, based on the assumption that the size of the Item structure is 12 bytes, is also incorrect for a 64-bit system:

struct Item {
int m_a;
int m_b;
Item *m_pParent;
};
Item *items = (Item *)malloc(GetArraySize() * 12); // V118

Correction of this error also consists in using sizeof() operator to correctly calculate the size of the structure:

Item *items = (Item *)malloc(GetArraySize() * sizeof(Item));

These errors are simple and easy to correct. But they are nevertheless dangerous and difficult to find in large applications. That's why diagnosis of such errors is implemented as a separate rule.

The presence of a constant in an expression passed to the malloc() function does not necessarily mean that the V118 warning will be shown on it. If the sizeof() operator participates in the expression, the construct is considered safe. Here is an example of code which the analyzer considers safe:

int *items = (int *)malloc(sizeof(int) * 12);

# V119. More than one sizeof() operator is used in one expression.

The analyzer detected an unsafe arithmetic expression containing several sizeof() operators. Such expressions can potentially contain errors relating to incorrect calculation of structures' sizes without taking field alignment into account.

Example:

struct MyBigStruct {
unsigned m_numberOfPointers;
void *m_Pointers[1];
};
size_t n2 = 1000;
void *p;
p = malloc(sizeof(unsigned) + n2 * sizeof(void *));

To calculate the size of a structure which will contain 1000 pointers, an arithmetic expression is used which is correct at first sight. The sizes of the base types are defined by sizeof() operators. That is good but not sufficient for a correct calculation of the necessary memory size: you should also take field alignment into account.

This example is correct in the 32-bit mode, for the sizes of a pointer and of the unsigned type coincide: both are 4 bytes. Pointers and the unsigned type are also aligned on a four-byte boundary, so the necessary memory size is calculated correctly.

In 64-bit code the size of a pointer is 8 bytes, and pointers are aligned on an 8-byte boundary as well. As a result, 4 additional padding bytes are placed after the m_numberOfPointers variable to align the pointers on an 8-byte boundary.

To calculate the correct size you should use the offsetof macro:

p = malloc(offsetof(MyBigStruct, m_Pointers) +
n2 * sizeof(void *));

In many cases using several sizeof() operators in one expression is correct and the analyzer ignores such constructions. Here is an example of safe expressions with several sizeof operators:

int MyArray[] = { 1, 2, 3 };
size_t MyArraySize =
sizeof(MyArray) / sizeof(MyArray[0]);
assert(sizeof(unsigned) < sizeof(size_t));
size_t strLen = sizeof(String) - sizeof(TCHAR);

# V120. Member operator[] of object 'foo' is declared with 32-bit type argument, but is called with memsize type argument.

The analyzer detected a potential error of working with classes that contain operator[]. Classes with an overloaded operator[] are usually a kind of an array where the index of the item being called is operator[] argument. If operator[] has a 32-bit type formal argument but memsize-type is used as an actual argument, it might indicate an error. Let us consider an example leading to the warning V120:

class MyArray {
int m_arr[10];
public:
int &operator[](unsigned i) { return m_arr[i]; }
} Object;
size_t k = 1;
Object[k] = 44; //V120

This example does not contain an error but might indicate an architecture shortcoming. You should either work with MyArray using 32-bit indexes or modify operator[] so that it takes an argument of the size_t type. The latter is preferable because memsize types not only make a program safer but sometimes allow the compiler to build more efficient code.

The related diagnostic warnings are V108 and V302.

# V121. Implicit conversion of the type of 'new' operator's argument to size_t type.

The analyzer detected a potential error related to a call of the "new" operator. A value of a non-memsize type is passed to the "new" operator as an argument. The "new" operator takes a value of the size_t type, and passing a 32-bit actual argument may signal a potential overflow when calculating the amount of memory being allocated. Here is an example:

unsigned a = 5;
unsigned b = 1024;
unsigned c = 1024;
unsigned d = 1024;
char *ptr = new char[a*b*c*d]; //V121

Here you may see an overflow occurring when calculating the expression "a*b*c*d". As a result, the program allocates less memory than it should. To correct the code, use the type size_t:

size_t a = 5;
size_t b = 1024;
size_t c = 1024;
size_t d = 1024;
char *ptr = new char[a*b*c*d]; //Ok

The error will not be diagnosed if the value of the argument is defined as a safe 32-bit constant value. Here is an example of safe code:

char *ptr = new char[100];
const int size = 3*3;
char *p2 = new char[size];

This warning message is similar to the warning V106.

# V122. Memsize type is used in the struct/class.

Sometimes you might need to find all the fields in the structures that have a memsize-type. You can find such fields using the V122 diagnostic rule.

The necessity to view all the memsize-fields might appear when you port a program that has structure serialization, for example, into a file. Consider an example:

struct Header
{
unsigned m_version;
size_t m_bodyLen;
};
...
...

This code writes a different number of bytes into the file depending on the mode it is compiled in - either Win32 or Win64. This might violate compatibility of files' formats or cause other errors.

The task of automatically detecting such errors is almost impossible to solve. However, if there is some reason to suppose that the code might contain such errors, developers can check all the structures that participate in serialization once. It is for this purpose that you may need a check with the V122 rule. It is disabled by default since it generates false warnings in more than 99% of cases.

In the example above, the V122 message will be produced on the line "size_t m_bodyLen;". To correct this code, you may use types of fixed size:

struct Header
{
My_UInt32 m_version;
My_UInt32 m_bodyLen;
};
...
...

Let's consider other examples where the V122 message will be generated:

class X
{
int i;
DWORD_PTR a; //V122
DWORD_PTR b[3]; //V122
float c[3][4];
float *ptr; //V122
};

V117 is a related diagnostic message.

Note. If you are sure that structures containing pointers will never be serialized, you may use this comment:

//-V122_NOPTR

It will suppress all warnings related to pointers.

This comment should be added into a header file included into all the other files, for example "stdafx.h". If you add the comment into a "*.cpp" file, it will affect only that particular file.

# V123. Allocation of memory by the pattern "(X*)malloc(sizeof(Y))" where the sizes of X and Y types are not equal.

The analyzer found a potential error related to the operation of memory allocation. When calculating the amount of memory to be allocated, the sizeof(X) operator is used. The result returned by the memory allocation function is converted to a different type, "(Y *)", instead of "(X *)". It may indicate allocation of insufficient or excessive amount of memory.

Consider the first example:

int **ArrayOfPointers = (int **)malloc(n * sizeof(int));

Because of the misprint, the 64-bit program here will allocate half as much memory as necessary. In the 32-bit program, the sizes of the "int" type and of a pointer to int coincide, and the program works correctly despite the misprint.

This is the correct version of the code:

int **ArrayOfPointers = (int **)malloc(n * sizeof(int *));

Consider another example where more memory is allocated than needed:

unsigned *p = (unsigned *)malloc(len * sizeof(size_t));

A program with such code will most probably work correctly both in the 32-bit and 64-bit versions. But in the 64-bit version, it will allocate more memory than it needs. This is the correct code:

unsigned *p = (unsigned *)malloc(len * sizeof(unsigned));

In some cases the analyzer does not generate a warning although the types X and Y do not coincide. Here is an example of such correct code:

BYTE *simpleBuf = (BYTE *)malloc(n * sizeof(float));

# V124. Function 'Foo' writes/reads 'N' bytes. The alignment rules and type sizes have been changed. Consider reviewing this value.

The analyzer detected a potential error: the size of the data being written or read is defined by a constant. When the code is compiled in the 64-bit mode, the sizes of some types and their alignment boundaries change.

The analyzer examines code fragments where the size of data being written or read is defined explicitly. The programmer must review these fragments. Here is a code sample:

size_t n = fread(buf, 1, 40, f_in);

The constant 40 may be an incorrect value from the viewpoint of the 64-bit system. Perhaps you should write it like this:

size_t n = fread(buf, 1, 10 * sizeof(size_t), f_in);

# V125. It is not advised to declare type 'T' as 32-bit type.

The analyzer detected a potential error: 64-bit code contains definitions of reserved types, defining them as 32-bit ones. For example:

typedef unsigned size_t;
typedef __int32 INT_PTR;

Such type definitions may cause various errors since these types have different sizes in different parts of the program and libraries. For instance, the size_t type is defined in the stddef.h header file for the C language and in the cstddef file for the C++ language.


# V126. Be advised that the size of the type 'long' varies between LLP64/LP64 data models.

This diagnostic message lets you find all the 'long' types used in a program.

Of course, presence of the 'long' type in a program is not an error in itself. But you may need to review all the fragments of the program text where this type is used when you create portable 64-bit code that must work well in Windows and Linux.

Windows and Linux use different data models for the 64-bit architecture. A data model defines the correlation of the sizes of base data types such as int, long, and pointers. Windows uses the LLP64 data model while Linux uses the LP64 data model. In these models, the sizes of the 'long' type differ.

In Windows (LLP64), the size of the 'long' type is 4 bytes.

In Linux (LP64), the size of the 'long' type is 8 bytes.

The difference of the 'long' type's sizes may make files' formats incompatible or cause errors when developing code executed in Linux and Windows. So if you want, you may use PVS-Studio to review all the code fragments where the 'long' type is used.


# V127. An overflow of the 32-bit variable is possible inside a long cycle which utilizes a memsize-type loop counter.

The analyzer detected a potential error: a 32-bit variable might overflow in a long loop. Of course, the analyzer cannot find all the possible cases of variable overflow in loops, but it will help you find some incorrect constructs. For example:

int count = 0;
for (size_t i = 0; i != N; i++)
{
if ((A[i] & MASK) != 0)
count++;
}

This code works well in a 32-bit program: a variable of the 'int' type is enough to count the number of certain items in the array. But in a 64-bit program the number of these items may exceed INT_MAX, and an overflow of the 'count' variable will occur. This is what the analyzer warns you about with the V127 message. This is the correct code:

size_t count = 0;
for (size_t i = 0; i != N; i++)
{
if ((A[i] & MASK) != 0)
count++;
}

The analyzer also performs several additional checks to reduce the number of false positives. For instance, the V127 warning is not generated for a short loop. Here is a sample of code the analyzer considers safe:

int count = 0;
for (size_t i = 0; i < 100; i++)
{
if ((A[i] & MASK) != 0)
count++;
}

# V128. A variable of the memsize type is read from a stream. Consider verifying the compatibility of 32 and 64 bit versions of the application in the context of a stored data.

The analyzer has detected a potential error related to data incompatibility between the 32-bit and 64-bit versions of an application when memsize variables are written to or read from a stream. The error is this: data written to a binary file by the 32-bit program version will be read incorrectly by the 64-bit one.

For example:

std::vector<int> v;
....
ofstream os("myproject.dat", ios::binary);
....
os << v.size();

The 'size()' function returns a value of the size_t type whose size is different in 32-bit and 64-bit applications. Consequently, different numbers of bytes will be written to the file.

There exist many ways to avoid the data incompatibility issue. The simplest and crudest one is to strictly define the size of types being written and read. For example:

std::vector<int> v;
....
ofstream os("myproject.dat", ios::binary);
....
os << static_cast<__int64>(v.size());

A strictly defined cast to 64-bit types cannot be called a nice solution, of course. The reason is that this method won't let the program read data written by the old 32-bit program version. On the other hand, if data are defined to be read and written as 32-bit values, we face another problem: the 64-bit program version won't be able to write information about arrays consisting of more than 2^32 items. This may be a disappointing limitation, as 64-bit software is usually created to handle huge data arrays.

A way out can be found through introducing a notion of the version of saved data. For example, 32-bit applications can open files created by the 32-bit version of your program, while 64-bit applications can handle data generated both by the 32-bit and 64-bit versions.

One more way to solve the compatibility problem is to store data in the text format or the XML format.

Note that this compatibility issue is irrelevant in many programs. If your application doesn't create projects and other files to be opened on other computers, you may turn off the V128 diagnostic.

You also shouldn't worry if the stream is used to print values on the screen. PVS-Studio tries to detect these situations and avoid generating the message. False positives are, however, still possible. If you get them, use one of the false positive suppression mechanisms described in the documentation.

At users' request, we added a possibility to manually point out the functions which save or load data. When a memsize type is passed to one of these functions anywhere in the code, that code is considered dangerous.

The annotation format is as follows: just above the function prototype (or near its implementation, or in a standard header file) the user should add a special comment. Let us start with a usage example:

//+V128, function:write, non_memsize:2
void write(string name, char);
void write(string name, int32);
void write(string name, int64);
foo()
{
write("zz", array.size()); // warning V128
}

Format:

• The "function" key specifies the name of the function to be checked by the analyzer. This key is mandatory; without it the annotation will not work.
• The "class" key is an optional key that specifies the name of the class the function belongs to (i.e. for a class method). Without it the analyzer checks any function with the given name; with it, only the ones that belong to the particular class.
• The "namespace" key is an optional key that specifies the name of the namespace the function belongs to. Again, without it the analyzer checks any function with the given name; with it, only the ones that belong to the particular namespace. The key works correctly together with the "class" key: the analyzer then checks any class method with the given name that belongs to the particular namespace.
• The "non_memsize" key specifies the number of the argument that must not accept a type whose size changes depending on the architecture. Numbering starts from one, not from zero. There is a technical restriction: this number must not exceed 14. There may be multiple "non_memsize" keys if several function arguments need to be checked.

For user-annotated functions, the warning is always generated at the first severity level.

Finally, here is a full usage example:

// Warns when in method C of class B
// from A namespace memsize-type value
// is put as a second or third argument.
//+V128,namespace:A,class:B,function:C,non_memsize:3,non_memsize:2

# V201. Explicit conversion from 32-bit integer type to memsize type.

This diagnostic informs you about an explicit conversion from a 32-bit integer type to a memsize type, which may hide one of the following errors: V101, V102, V104, V105, V106, V108, V109. You may refer to the descriptions of those warnings to find out the cause of the V201 message.

Previously, the V201 warning also covered conversions of 32-bit integer types to pointers. Such conversions are rather dangerous, so we moved them into a separate diagnostic rule, V204.

Keep in mind that most warnings of this type will likely be shown for correct code. Here are some examples of incorrect and correct code that trigger this warning.

Examples of incorrect code:

int i;
ptrdiff_t n;
...
for (i = 0; (ptrdiff_t)(i) != n; ++i) {   //V201
  ...
}

unsigned width, height, depth;
...
size_t arraySize = size_t(width * height * depth);   //V201

Examples of correct code:

const size_t seconds = static_cast<size_t>(60 * 60);   //V201

unsigned *array;
...
size_t sum = 0;
for (size_t i = 0; i != n; i++) {
  sum += static_cast<size_t>(array[i] / 4);   //V201
}

unsigned width, height, depth;
...
size_t arraySize =
  size_t(width) * size_t(height) * size_t(depth);    //V201

# V202. Explicit conversion from memsize type to 32-bit integer type.

This diagnostic informs you about an explicit conversion from an integer memsize type to a 32-bit type, which may hide one of the following errors: V103, V107, V110. You may refer to the descriptions of those warnings to find out the cause of the V202 message.

Previously, the V202 warning also covered conversions of pointers to 32-bit integer types. Such conversions are rather dangerous, so we moved them into a separate rule, V205.

Keep in mind that most warnings of this type will likely be shown for correct code. Here are some examples of incorrect and correct code that trigger this warning.

Examples of incorrect code:

size_t n;
...
for (unsigned i = 0; i != (unsigned)n; ++i) {   //V202
  ...
}

UINT_PTR width, height, depth;
...
UINT arraySize = UINT(width * height * depth);   //V202

Examples of correct code:

const unsigned bits =
  unsigned(sizeof(object) * 8); //V202

extern size_t nPane;
extern HICON hIcon;
BOOL result =
  SetIcon(static_cast<int>(nPane), hIcon); //V202

# V203. Explicit type conversion from memsize to double type or vice versa.

The analyzer found a possible error related to an explicit conversion of a memsize type into the double type, or vice versa. The possible error is that the double type cannot store the whole range of memsize-type values.

This error is completely similar to error V113. The difference is that an explicit type conversion is used, as in the following example:

SIZE_T size = SIZE_MAX;
double tmp = static_cast<double>(size);
size = static_cast<SIZE_T>(tmp); // x86: size == SIZE_MAX
                                 // x64: size != SIZE_MAX

To learn more about this kind of error, see the description of V113.

# V204. Explicit conversion from 32-bit integer type to pointer type.

This warning informs you about an explicit conversion of a 32-bit integer type to a pointer type. We used the V201 diagnostic rule before to diagnose this situation. But explicit conversion of the 'int' type to pointer is much more dangerous than conversion of 'int' to 'intptr_t'. That is why we created a separate rule to search for explicit type conversions when handling pointers.

Here is a sample of incorrect code.

int n;
float *ptr;
...
ptr = (float *)(n);

The 'int' type's size is 4 bytes in a 64-bit program, so it cannot store a pointer whose size is 8 bytes. Type conversion like in the sample above usually signals an error.

What is very unpleasant about such errors is that they can hide for a long time before you reveal them. A program might store pointers in 32-bit variables and work correctly for some time as long as all the objects created in the program are located in low-order addresses of memory.

If you need to store a pointer in an integer variable for some reason, you'd better use memsize-types. For instance: size_t, ptrdiff_t, intptr_t, uintptr_t.

This is the correct code:

intptr_t n;
float *ptr;
...
ptr = (float *)(n);


However, there is a specific case when you may store a pointer in 32-bit types. I am speaking about handles which are used in Windows to work with various system objects. Here are examples of such types: HANDLE, HWND, HMENU, HPALETTE, HBITMAP, etc. Actually these types are pointers. For instance, HANDLE is defined in header files as "typedef void *HANDLE;".

Although handles are 64-bit pointers, only the least significant 32 bits are used in them for the purpose of better compatibility (for example, to enable 32-bit and 64-bit processes to interact with each other). For details, see "Microsoft Interface Definition Language (MIDL): 64-Bit Porting Guide" (USER and GDI handles are sign extended 32b values).

Such pointers can be stored in 32-bit data types (for instance, int, DWORD). To cast such pointers to 32-bit types and vice versa special functions are used:

void            * Handle64ToHandle( const void * POINTER_64 h )
void * POINTER_64 HandleToHandle64( const void *h )
long              HandleToLong    ( const void *h )
unsigned long     HandleToUlong   ( const void *h )
void            * IntToPtr        ( const int i )
void            * LongToHandle    ( const long h )
void            * LongToPtr       ( const long l )
void            * Ptr64ToPtr      ( const void * POINTER_64 p )
int               PtrToInt        ( const void *p )
long              PtrToLong       ( const void *p )
void * POINTER_64 PtrToPtr64      ( const void *p )
short             PtrToShort      ( const void *p )
unsigned int      PtrToUint       ( const void *p )
unsigned long     PtrToUlong      ( const void *p )
unsigned short    PtrToUshort     ( const void *p )
void            * UIntToPtr       ( const unsigned int ui )
void            * ULongToPtr      ( const unsigned long ul ) 

# V205. Explicit conversion of pointer type to 32-bit integer type.

This warning informs you about an explicit conversion of a pointer type to a 32-bit integer type. We used the V202 diagnostic rule before to diagnose this situation. But explicit conversion of a pointer to the 'int' type is much more dangerous than conversion of 'intptr_t' to 'int'. That is why we created a separate rule to search for explicit type conversions when handling pointers.

Here is a sample of incorrect code.

int n;
float *ptr;
...
n = (int)ptr;

The 'int' type's size is 4 bytes in a 64-bit program, so it cannot store a pointer whose size is 8 bytes. Type conversion like in the sample above usually signals an error.

What is very unpleasant about such errors is that they can hide for a long time before you reveal them. A program might store pointers in 32-bit variables and work correctly for some time as long as all the objects created in the program are located in low-order addresses of memory.

If you need to store a pointer in an integer variable for some reason, you'd better use memsize-types. For instance: size_t, ptrdiff_t, intptr_t, uintptr_t.

This is the correct code:

intptr_t n;
float *ptr;
...
n = (intptr_t)ptr;


However, there is a specific case when you may store a pointer in 32-bit types. I am speaking about handles which are used in Windows to work with various system objects. Here are examples of such types: HANDLE, HWND, HMENU, HPALETTE, HBITMAP, etc. Actually these types are pointers. For instance, HANDLE is defined in header files as "typedef void *HANDLE;".

Although handles are 64-bit pointers, only the least significant 32 bits are used in them for the purpose of better compatibility (for example, to enable 32-bit and 64-bit processes to interact with each other). For details, see "Microsoft Interface Definition Language (MIDL): 64-Bit Porting Guide" (USER and GDI handles are sign extended 32b values).

Such pointers can be stored in 32-bit data types (for instance, int, DWORD). To cast such pointers to 32-bit types and vice versa special functions are used:

void            * Handle64ToHandle( const void * POINTER_64 h )
void * POINTER_64 HandleToHandle64( const void *h )
long              HandleToLong    ( const void *h )
unsigned long     HandleToUlong   ( const void *h )
void            * IntToPtr        ( const int i )
void            * LongToHandle    ( const long h )
void            * LongToPtr       ( const long l )
void            * Ptr64ToPtr      ( const void * POINTER_64 p )
int               PtrToInt        ( const void *p )
long              PtrToLong       ( const void *p )
void * POINTER_64 PtrToPtr64      ( const void *p )
short             PtrToShort      ( const void *p )
unsigned int      PtrToUint       ( const void *p )
unsigned long     PtrToUlong      ( const void *p )
unsigned short    PtrToUshort     ( const void *p )
void            * UIntToPtr       ( const unsigned int ui )
void            * ULongToPtr      ( const unsigned long ul ) 

Let's take a look at the following example:

HANDLE h = Get();
UINT uId = (UINT)h;

The analyzer does not generate the message here, though HANDLE is nothing but a pointer. Values of this pointer always fit into 32 bits. Just make sure you take care when working with them in future. Keep in mind that non-valid handles are declared in the following way:

#define INVALID_HANDLE_VALUE ((HANDLE)(LONG_PTR)-1)

That's why it would be incorrect to write the next line like this:

if (HANDLE(uId) == INVALID_HANDLE_VALUE)

Since the 'uId' variable is unsigned, the pointer's value will equal 0x00000000FFFFFFFF, not 0xFFFFFFFFFFFFFFFF.

The analyzer will generate the V204 warning for a suspicious check when unsigned turns into a pointer.

# V206. Explicit conversion from 'void *' to 'int *'.

This warning informs you about an explicit conversion of a 'void *' or 'byte *' pointer to a function pointer or a pointer to a 32/64-bit integer type, or vice versa.

Of course, a type conversion like that is not in itself an error. Let's figure out why we implemented this diagnostic.

It is a pretty frequent situation when a pointer to some memory buffer is passed into another part of the program through a void * or byte * pointer. There may be different reasons for doing so; it usually indicates a poor code design, but this question is out of the scope of this paper. Function pointers are often stored as void * pointers, too.

So, assume we have an array/function pointer saved as void * in one part of the program and cast back in another part. When porting such code, you may get unpleasant errors: a type may be changed in one place but stay unchanged in another.

For example:

size_t array[20];
void *v = array;
....
unsigned* sizes = (unsigned*)(v);

This code works well in the 32-bit mode as the sizes of the 'unsigned' and 'size_t' types coincide. In the 64-bit mode, however, their sizes are different and the program will behave unexpectedly. See also pattern 6, changing an array type.

The analyzer will point out the line with the explicit type conversion, where you will discover an error if you study it carefully. The fixed code may look like this:

unsigned array[20];
void *v = array;
....
unsigned* sizes = (unsigned*)(v);

or like this:

size_t array[20];
void *v = array;
....
size_t* sizes = (size_t*)(v);

A similar error may occur when working with function pointers.

void Do(void *ptr, unsigned a)
{
  typedef void (*PtrFoo)(DWORD);
  PtrFoo f = (PtrFoo)(ptr);
  f(a);
}

void Foo(DWORD_PTR a) { /*... */ }

void Call()
{
  Do(Foo, 1);
}

The fixed code:

typedef void (*PtrFoo)(DWORD_PTR);

Note. The analyzer knows about plenty of cases when an explicit type conversion is safe. For instance, it doesn't worry about explicit type conversion of a void * pointer returned by the malloc() function:

int *p = (int *)malloc(sizeof(int) * N);

As said in the beginning, explicit type conversion is not in itself an error. That's why, despite the number of exceptions to this rule, the analyzer still generates quite a lot of false V206 warnings. It doesn't know whether there are any other fragments in the program where these pointers are used incorrectly, so it has to warn on every potentially dangerous type conversion.

For instance, I've cited two examples of incorrect code and ways to fix them above. Even after they are fixed, the analyzer will keep generating false positives on the already correct code.

You can use the following approach to handle this warning: carefully study all the V206 messages once and then disable this diagnostic in the settings. If there are few false positives, use one of the false positive suppression methods.

# V207. A 32-bit variable is utilized as a reference to a pointer. A write outside the bounds of this variable may occur.

This warning informs you about an explicit conversion of a 32-bit integer variable to the reference to pointer type.

int A;
(int *&)A = pointer;

Suppose we need for some reason to write a pointer into an integer variable. To do this, we can cast the integer 'A' variable to the 'int *&' type (reference to pointer).

This code can work well in a 32-bit system as the 'int' type and the pointer have the same sizes. But in a 64-bit system, writing outside the 'A' variable's memory bounds will occur, which will in its turn lead to undefined behavior.

To fix the bug, we need to use one of the memsize-types - for example intptr_t:

intptr_t A;
(intptr_t *&)A = pointer;

Now let's discuss a more complicated example, based on code taken from a real-life application:

enum MyEnum { VAL1, VAL2 };
void Get(void*& data) {
  static int value;
  data = &value;
}

void M() {
  MyEnum e;
  Get((void*&)e);
  ....
}

There is a function which returns values of the pointer type. One of the returned values is written into a variable of the 'enum' type. We won't discuss now the reason for doing so; we are rather interested in the fact that this code used to work right in the 32-bit mode while its 64-bit version doesn't - the Get() function changes not only the 'e' variable but the nearby memory as well.

# V220. Suspicious sequence of types castings: memsize -> 32-bit integer -> memsize.

The warning informs you about a strange sequence of type conversions. A memsize-type is explicitly cast to a 32-bit integer type and then is again cast to a memsize-type either explicitly or implicitly. Such a sequence of conversions leads to a loss of high-order bits. Usually it signals a crucial error.

Consider this sample:

char *p1;
char *p2;
ptrdiff_t n;
...
n = int(p1 - p2);

We have an unnecessary conversion to the 'int' type here. It should not be here, and it might even cause a failure if the p1 and p2 pointers are more than INT_MAX items away from each other in a 64-bit program.

This is the correct code:

char *p1;
char *p2;
ptrdiff_t n;
...
n = p1 - p2;

Let's consider another sample:

BOOL SetItemData(int nItem, DWORD_PTR dwData);
...
CItemData *pData = new CItemData;
...
CListCtrl::SetItemData(nItem, (DWORD)pData);

This code will cause an error if the CItemData object is created beyond the first 4 gigabytes of memory. This is the correct code:

BOOL SetItemData(int nItem, DWORD_PTR dwData);
...
CItemData *pData = new CItemData;
...
CListCtrl::SetItemData(nItem, (DWORD_PTR)pData);

One should keep in mind that the analyzer does not generate the warning when conversion is done over such data types as HANDLE, HWND, HCURSOR, and so on. Although these types are in fact pointers (void *), their values always fit into the least significant 32 bits. It is done on purpose so that these handles could be passed between 32-bit and 64-bit processes. For details see How to correctly cast a pointer to int in a 64-bit application?

Have a look at the following example:

typedef void * HANDLE;
HANDLE GetHandle(DWORD nStdHandle);
int _open_osfhandle(intptr_t _OSFileHandle, int _Flags);
....
int fh = _open_osfhandle((int)GetHandle(sh), 0);

We are dealing with a conversion of the following kind:

HANDLE -> int -> intptr_t

That is, the pointer is first cast to the 32-bit 'int' type and then is extended to 'intptr_t'. It doesn’t look nice. The programmer should rather have written it like "(intptr_t)GetHandle(STD_OUTPUT_HANDLE)". But there is still no error here as values of the HANDLE type fit into 'int'. That’s why the analyzer keeps silent.

If it were written like this:

int fh = _open_osfhandle((unsigned)GetHandle(sh), 0);

the analyzer would generate the message. Mixing signed and unsigned types together spoils it all. Suppose GetHandle() returns INVALID_HANDLE_VALUE. This value is defined in the system headers in the following way:

#define INVALID_HANDLE_VALUE ((HANDLE)(LONG_PTR)-1)

Now, what we get after the conversion (intptr_t)(unsigned)((HANDLE)(LONG_PTR)-1) is:

-1 -> 0xffffffffffffffff -> HANDLE -> 0xffffffffu -> 0x00000000ffffffff

The value -1 has turned into 4294967295. The programmer may fail to notice and take this into account, and the program will keep running incorrectly if the GetHandle() function returns INVALID_HANDLE_VALUE. Because of that, the analyzer generates the warning in the second case.

# V221. Suspicious sequence of types castings: pointer -> memsize -> 32-bit integer.

This warning informs the programmer about the presence of a strange sequence of type conversions. A pointer is explicitly cast to a memsize-type and then again, explicitly or implicitly, to the 32-bit integer type. This sequence of conversions causes a loss of the most significant bits. It usually indicates a serious error in the code.

Take a look at the following example:

int *p = Foo();
unsigned a, b;
a = size_t(p);
b = unsigned(size_t(p));

In both cases, the pointer is cast to the 'unsigned' type, causing its most significant part to be truncated. If you then cast the variable 'a' or 'b' to a pointer again, the resulting pointer is likely to be incorrect.

The difference between the variables 'a' and 'b' is only in that the second case is harder to diagnose. In the first case, the compiler will warn you about the loss of the most significant bits, but keep silent in the second case as what is used there is an explicit type conversion.

To fix the error, we should store pointers in memsize-types only, for example in variables of the size_t type:

int *p = Foo();
size_t a, b;
a = size_t(p);
b = size_t(p);

There may be difficulties with understanding why the analyzer generates the warning on the following code pattern:

BOOL Foo(void *ptr)
{
  return (INT_PTR)ptr;
}

You see, the BOOL type is nothing but a 32-bit 'int' type. So we are dealing with a sequence of type conversions:

pointer -> INT_PTR -> int.

You may think there's actually no error here because what matters to us is only whether or not the pointer is equal to zero. But the error is real. It's just that programmers sometimes confuse the ways the types BOOL and bool behave.

Assume we have a 64-bit variable whose value equals 0x000012300000000. Casting it to bool and BOOL will have different results:

int64_t v = 0x000012300000000ll;

bool b = (bool)(v); // true

BOOL B = (BOOL)(v); // FALSE

In the case of 'BOOL', the most significant bits will be simply truncated and the non-zero value will turn to 0 (FALSE).

It's just the same with the pointer. When explicitly cast to BOOL, its most significant bits will get truncated and the non-zero pointer will turn to the integer 0 (FALSE). Although low, there is still some probability of this event. Therefore, code like that is incorrect.

To fix it, we can go two ways. The first one is to use the 'bool' type:

bool Foo(void *ptr)
{
  return (INT_PTR)ptr;
}

But of course it's better and easier to do it like this:

bool Foo(void *ptr)
{
  return ptr != nullptr;
}

The method shown above is not always applicable. For instance, there is no 'bool' type in the C language. So here's the second way to fix the error:

BOOL Foo(void *ptr)
{
  return ptr != NULL;
}

Keep in mind that the analyzer does not generate the warning when conversion is done over such data types as HANDLE, HWND, HCURSOR, and so on. Although these are in fact pointers (void *), their values always fit into the least significant 32 bits. It is done on purpose so that these handles could be passed between 32-bit and 64-bit processes. For details, see: How to correctly cast a pointer to int in a 64-bit application?

# V301. Unexpected function overloading behavior. See N argument of function 'foo' in derived class 'derived' and base class 'base'.

The analyzer found a possible error related to a change in the behavior of an overridden virtual function.

Here is an example of a change in virtual function behavior:

class CWinApp {
  ...
  virtual void WinHelp(DWORD_PTR dwData, UINT nCmd);
  ...
};

class CSampleApp : public CWinApp {
  ...
  virtual void WinHelp(DWORD dwData, UINT nCmd);
  ...
};

This is a common situation a developer may face when porting an application to the 64-bit architecture. Let's follow the life cycle of an application. Suppose it was originally developed in Visual Studio 6.0, when the WinHelp function in the CWinApp class had the following prototype:

virtual void WinHelp(DWORD dwData, UINT nCmd = HELP_CONTEXT);

It would be absolutely correct to override the virtual function in the CSampleApp class as shown in the example. Then the project was moved to Visual Studio 2005, where the prototype of the function in the CWinApp class changed: the DWORD type was replaced with DWORD_PTR. On the 32-bit platform the program will continue to work properly, because there the DWORD and DWORD_PTR types coincide. Troubles will occur when compiling this code for the 64-bit platform: we get two functions with the same name but different parameters, and as a result the user's code is never called.

The analyzer makes it possible to find such errors, which are not difficult to correct. It is enough to change the function prototype in the derived class as follows:

class CSampleApp : public CWinApp {
  ...
  virtual void WinHelp(DWORD_PTR dwData, UINT nCmd);
  ...
};

# V302. Member operator[] of 'foo' class has a 32-bit type argument. Use memsize-type here.

The analyzer detected a potential error related to classes that contain an operator[]. Classes with an overloaded operator[] are usually a kind of array, where the operator[] argument is the index of the item being accessed. If operator[] has a 32-bit argument type, it might indicate an error. Let us consider an example leading to the V302 warning:

class MyArray {
  std::vector<float> m_arr;
  ...
  float &operator[](int i)  //V302
  {
    DoSomething();
    return m_arr[i];
  }
} A;
...
int x = 2000;
int y = 2000;
int z = 2000;
A[x * y * z] = 33;

If the class is designed to work with a large number of items, implementing operator[] like this is incorrect because it does not allow addressing items whose indexes are greater than INT_MAX. To diagnose the error in the example above, you should point out the potentially incorrect operator[]. The expression "x * y * z" does not look suspicious by itself because there is no implicit type conversion. When we correct operator[] in the following way:

float &operator[](ptrdiff_t i);

PVS-Studio analyzer warns about a potential error in the line "A[x * y * z] = 33;" and now we can make the code absolutely correct. Here is an example of the corrected code:

class MyArray {
  std::vector<float> m_arr;
  ...
  float &operator[](ptrdiff_t i)
  {
    DoSomething();
    return m_arr[i];
  }
} A;
...
ptrdiff_t x = 2000;
ptrdiff_t y = 2000;
ptrdiff_t z = 2000;
A[x * y * z] = 33;

The related diagnostic warnings are V108 and V120.

# V303. The function is deprecated in the Win64 system. It is safer to use the 'foo' function.

You should replace some functions with their new versions when porting an application to 64-bit systems. Otherwise, the 64-bit application might work incorrectly. The analyzer warns about the use of deprecated functions in code and offers versions to replace them.

Let's consider several examples of deprecated functions:

## EnumProcessModules

Extract from MSDN:

To control whether a 64-bit application enumerates 32-bit modules, 64-bit modules, or both types of modules, use the EnumProcessModulesEx function.

## SetWindowLong

Extract from MSDN:

This function has been superseded by the SetWindowLongPtr function. To write code that is compatible with both 32-bit and 64-bit versions of Windows, use the SetWindowLongPtr function.

## GetFileSize

Extract from MSDN:

When lpFileSizeHigh is NULL, the results returned for large files are ambiguous, and you will not be able to determine the actual size of the file. It is recommended that you use GetFileSizeEx instead.

## Note

Be careful if you want to replace the 'lstrlen' function with 'strlen'. The 'lstrlen' function cannot correctly evaluate the length of a string that contains more than 'INT_MAX' characters. In practice, however, the chance of encountering such a long string is very low. But as opposed to the 'strlen' function, the 'lstrlen' function correctly handles the situation when it is passed a null pointer:

If lpString is NULL, the function returns 0.

If we simply replace 'lstrlen' with 'strlen', the program can start working incorrectly. That's why it is usually not recommended to replace 'lstrlen' with some other function call.

# V501. There are identical sub-expressions to the left and to the right of the 'foo' operator.

The analyzer found a code fragment that most probably has a logic error. There is an operator (<, >, <=, >=, ==, !=, &&, ||, -, /) in the program text to the left and to the right of which there are identical subexpressions.

Consider an example:

if (a.x != 0 && a.x != 0)

In this case, the '&&' operator is surrounded by identical subexpressions "a.x != 0" and it allows us to detect an error made through inattention. The correct code that will not look suspicious to the analyzer looks in the following way:

if (a.x != 0 && a.y != 0)

Consider another example of an error detected by the analyzer in the code of a real application:

class Foo {
  int iChilds[2];
  ...
  bool hasChilds() const { return(iChilds > 0 || iChilds > 0); }
  ...
};

In this case, the code is senseless though it is compiled successfully and without any warnings. Correct code must look as follows:

bool hasChilds() const { return(iChilds[0] > 0 || iChilds[1] > 0);}

The analyzer does not generate the warning in all the cases when there are identical subexpressions to the left and to the right of the operator.

The first exception refers to constructs where the increment operator ++, the decrement operator --, or the += and -= operators are used. Here is an example taken from a real application:

do {
} while (*++scan == *++match && *++scan == *++match &&
         *++scan == *++match && *++scan == *++match &&
         *++scan == *++match && *++scan == *++match &&
         *++scan == *++match && *++scan == *++match &&
         scan < strend);

The analyzer considers this code safe.

The second exception refers to comparison of two equal numbers. Programmers often employ this method to disable some program branches. Here is an example:

#if defined(_OPENMP)
#include <omp.h>
#else
...
#endif
...
if (0 == omp_get_thread_num()) {

The last exception refers to comparison that uses macros:

#define _WINVER_NT4_    0x0004
#define _WINVER_95_     0x0004
...
UINT    winver = g_App.m_pPrefs->GetWindowsVersion();
if(winver == _WINVER_95_ || winver == _WINVER_NT4_)

You should keep in mind that the analyzer might generate a warning on a correct construct in some cases. For instance, the analyzer does not consider side effects when calling functions:

if (wr.takeChar() == '\0' && wr.takeChar() == '\0')

Another example of a false alarm was noticed during unit-tests of some project - in the part of it where the correctness of the overloaded operator '==' was checked:

CHECK(VDStringA() == VDStringA(), true);
CHECK(VDStringA("abc") == VDStringA("abc"), true);

The diagnostic message isn't generated if two identical expressions of the 'float' or 'double' type are being compared. Such a comparison allows detecting whether the value is NaN. Here is an example of code implementing a check of this kind:

bool isnan(double X) { return X != X; }
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-570, CWE-571.
 You can look at examples of errors detected by the V501 diagnostic.

# V502. Perhaps the '?:' operator works in a different way than it was expected. The '?:' operator has a lower priority than the 'foo' operator.

The analyzer found a code fragment that most probably has a logic error. The program text has an expression that contains the ternary operator '?:' and might be calculated in a different way than the programmer expects.

The '?:' operator has a lower priority than operators ||, &&, |, ^, &, !=, ==, >=, <=, >, <, >>, <<, -, +, %, /, *. One might forget about it and write an incorrect code like the following one:

bool bAdd = ...;
size_t rightLen = ...;
size_t newTypeLen = rightLen + bAdd ? 1 : 0;

Having forgotten that the '+' operator has a higher priority than the '?:' operator, the programmer expects that the code is equivalent to "rightLen + (bAdd ? 1 : 0)". But actually the code is equivalent to the expression "(rightLen + bAdd) ? 1 : 0".

The analyzer diagnoses the probable error by checking:

1) If there is a variable or subexpression of the bool type to the left of the '?:' operator.

2) If this subexpression is compared to / added to / multiplied by... the variable whose type is other than bool.

If these conditions hold, it is highly probable that there is an error in this code and the analyzer will generate the warning message we are discussing.

Here are some other examples of incorrect code:

bool b;
int x, y, z, h;
...
x = y < b ? z : h;
x = y + (z != h) ? 1 : 2;

The programmer most likely wanted to have the following correct code:

bool b;
int x, y, z, h;
...
x = y < (b ? z : h);
x = y + ((z != h) ? 1 : 2);

If there is a type other than bool to the left of the '?:' operator, the analyzer thinks that the code is written in the C style (where there is no bool) or that it is written using class objects and therefore the analyzer cannot find out if this code is dangerous or not.

Here is an example of correct code in the C style that the analyzer considers correct too:

int conditions1;
int conditions2;
int conditions3;
...
char x = conditions1 + conditions2 + conditions3 ? 'a' : 'b';

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-783.
 You can look at examples of errors detected by the V502 diagnostic.

# V503. This is a nonsensical comparison: pointer < 0.

The analyzer found a code fragment that has a nonsensical comparison. It is most probable that this code has a logic error. Here is an example:

class MyClass {
public:
  CObj *Find(const char *name);
  ...
} Storage;

if (Storage.Find("foo") < 0)
  ObjectNotFound();

It seems almost incredible that such code can exist in a program. However, the reason for its appearance might be quite simple. Suppose we have the following code in our program:

class MyClass {
public:
  // Find() returns -1 if the object is not found.
  ptrdiff_t Find(const char *name);
  CObj *Get(ptrdiff_t index);
  ...
} Storage;
...
ptrdiff_t index = Storage.Find("ZZ");
if (index >= 0)
  Foo(Storage.Get(index));
...
if (Storage.Find("foo") < 0)
  ObjectNotFound();

This is correct yet not very smart code. During the refactoring process, the MyClass class may be rewritten in the following way:

class MyClass {
public:
CObj *Find(const char *name);
...
} Storage;

After this modernization of the class, you should fix all the places in the program that use the Find() function. You cannot miss the first code fragment since it will not compile, so it will certainly be fixed:

CObj *obj = Storage.Find("ZZ");
if (obj != nullptr)
Foo(obj);

The second code fragment compiles well, so you might easily miss it and thereby make the error we are discussing:

if (Storage.Find("foo") < 0)
ObjectNotFound();

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-697.
 You can look at examples of errors detected by the V503 diagnostic.

# V504. It is highly probable that the semicolon ';' is missing after 'return' keyword.

The analyzer found a code fragment where the semicolon ';' is probably missing. Here is an example of code that causes generating the V504 diagnostic message:

void Foo();

void Foo2(int *ptr)
{
if (ptr == NULL)
return
Foo();
...
}

The programmer intended to terminate the function if the ptr pointer is NULL, but forgot to write the semicolon ';' after the return operator, which causes the Foo() function to be called. Since neither Foo() nor Foo2() returns anything, the code compiles without errors or warnings.

Most probably, the programmer intended to write:

void Foo();

void Foo2(int *ptr)
{
if (ptr == NULL)
return;
Foo();
...
}

But if the initial code is still correct, it is better to rewrite it in the following way:

void Foo2(int *ptr)
{
if (ptr == NULL)
{
Foo();
return;
}
...
}

The analyzer considers the code safe if the "if" operator is absent or the function call is located in the same line with the "return" operator. You might quite often see such code in programs. Here are examples of safe code:

void CPagerCtrl::RecalcSize()
{
return
(void)::SendMessageW((m_hWnd), (0x1400 + 2), 0, 0);
}

void Trace(unsigned int n, std::string const &s)
{ if (n) return TraceImpl(n, s); Trace0(s); }
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-841.
 You can look at examples of errors detected by the V504 diagnostic.

# V505. The 'alloca' function is used inside the loop. This can quickly overflow stack.

The analyzer detected a use of the alloca function inside a loop. Since the alloca function uses stack memory, its repeated call in the loop body might unexpectedly cause a stack overflow.

Here is an example of dangerous code:

for (size_t i = 0; i < n; ++i)
if (wcscmp(strings[i], A2W(pszSrc[i])) == 0)
{
...
}

The _alloca function is used inside the A2W macro. Whether this code causes an error or not depends upon the length of the processed strings, their number, and the size of the available stack.
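One common remedy is to hoist the allocation out of the loop. Below is a sketch with hypothetical names, not code from the project above:

```cpp
#include <cstdlib>
#include <cstring>

// Sketch: reuse ONE heap buffer instead of calling alloca() on every
// iteration, so the stack no longer grows with the number of strings.
size_t CountMatches(const char *const strings[], size_t n,
                    const char *target, size_t maxLen)
{
    char *buf = static_cast<char *>(malloc(maxLen)); // single allocation
    if (buf == NULL)
        return 0;
    size_t matches = 0;
    for (size_t i = 0; i < n; ++i)
    {
        strncpy(buf, strings[i], maxLen - 1); // the buffer is reused
        buf[maxLen - 1] = '\0';
        if (strcmp(buf, target) == 0)
            ++matches;
    }
    free(buf);
    return matches;
}
```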

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-770.
 You can look at examples of errors detected by the V505 diagnostic.

# V506. Pointer to local variable 'X' is stored outside the scope of this variable. Such a pointer will become invalid.

The analyzer found a potential error related to storing a pointer to a local variable. The warning is generated if the lifetime of an object is shorter than that of the pointer referring to it.

The first example:

class MyClass
{
size_t *m_p;
void Foo() {
size_t localVar;
...
m_p = &localVar;
}
};

In this case, the address of the local variable is saved inside the class into the m_p variable and can be then used by mistake in a different function when the localVar variable is destructed.

The second example:

void Get(float **x)
{
float f;
...
*x = &f;
}

The Get() function will return a pointer to a local variable that will no longer exist by the time the pointer is used.
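One way to fix the Get() example, assuming the caller only needs the value to outlive the call, is to hand out owning heap memory instead (a sketch, not the only possible fix):

```cpp
#include <memory>

// Returns owning heap memory instead of the address of a local
// variable, so the result stays valid after the function returns.
std::unique_ptr<float> Get()
{
    auto x = std::make_unique<float>(0.0f);
    // ... fill *x ...
    return x;
}
```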

This message is similar to the V507 message.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-562.
 You can look at examples of errors detected by the V506 diagnostic.

# V507. Pointer to local array 'X' is stored outside the scope of this array. Such a pointer will become invalid.

The analyzer found a potential error related to storing a pointer to a local array. The warning is generated if the lifetime of an array is shorter than that of the pointer referring to it.

The first example:

class MyClass1
{
int *m_p;
void Foo()
{
int localArray[33];
...
m_p = localArray;
}
};

The localArray array is created on the stack and will no longer exist after the Foo() function terminates. However, the pointer to this array is saved in the m_p variable and can be used by mistake, which will cause an error.

The second example:

struct CVariable {
...
char  name[64];
};

void CRendererContext::RiGeometryV(int n, char *tokens[])
{
for (int i = 0; i < n; i++)
{
CVariable  var;
if (parseVariable(&var, NULL, tokens[i])) {
tokens[i]  =  var.name;
}
}
}

In this example, the pointer to the array stored in a variable of the CVariable type is saved in an external array. As a result, after the RiGeometryV function terminates, the "tokens" array will contain pointers to non-existent objects.

The V507 warning does not always indicate an error. Below is an abridged code fragment that the analyzer considers dangerous although this code is correct:

png_infop info_ptr = png_create_info_struct(png_ptr);
...
BYTE trans[256];
info_ptr->trans = trans;
...
png_destroy_write_struct(&png_ptr, &info_ptr);

In this code, the lifetime of the info_ptr object coincides with the lifetime of trans. The object is created inside png_create_info_struct() and destroyed inside png_destroy_write_struct(). The analyzer cannot figure out this case and supposes that the info_ptr object comes from outside. Here is an example where the analyzer could be right:

void Foo()
{
png_infop info_ptr;
info_ptr = GetExternInfoPng();
BYTE trans[256];
info_ptr->trans = trans;
}

This message is similar to the V506 message.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-562.
 You can look at examples of errors detected by the V507 diagnostic.

# V508. The use of 'new type(n)' pattern was detected. Probably meant: 'new type[n]'.

The analyzer found code that might contain a misprint leading to an error. Only one object of an integer type is dynamically created and initialized. It is highly probable that round brackets were used instead of square brackets by mistake. Here is an example:

int n;
...
int *P1 = new int(n);

Memory is allocated for one object of the int type. It is rather strange. Perhaps the correct code should look like this:

int n;
...
int *P1 = new int[n];

The analyzer generates the warning only if memory is allocated for simple types. The argument in the brackets must be of integer type in this case. As a result, the analyzer will not generate the warning on the following correct code:

float f = 1.0f;
float *f2 = new float(f);

MyClass *p = new MyClass(33);
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-480.

# V509. The 'throw' operator inside the destructor should be placed within the try..catch block. Raising exception inside the destructor is illegal.

If an exception is thrown in a C++ program, stack unwinding begins, causing objects to be destroyed by calling their destructors. If a destructor invoked during stack unwinding throws another exception, and that exception propagates outside the destructor, the C++ runtime immediately terminates the program by calling the terminate() function. Therefore, destructors should never let exceptions propagate: each exception thrown within a destructor should be handled in that destructor.

The analyzer found a destructor containing the throw operator outside the try..catch block. Here is an example:

LocalStorage::~LocalStorage()
{
...
if (!FooFree(m_index))
throw Err("FooFree", GetLastError());
...
}

This code must be rewritten so that the programmer is informed about the error that has occurred in the destructor without using the exception mechanism. If the error is not crucial, it can be ignored:

LocalStorage::~LocalStorage()
{
try {
...
if (!FooFree(m_index))
throw Err("FooFree", GetLastError());
...
}
catch (...)
{
assert(false);
}
}

Exceptions may be thrown when calling the 'new' operator as well. If memory cannot be allocated, the 'bad_alloc' exception will be thrown. For example:

A::~A()
{
...
int *localPointer = new int[MAX_SIZE];
...
}

An exception may be thrown when using dynamic_cast<Type> while handling references. If types cannot be cast, the 'bad_cast' exception will be thrown. For example:

B::~B()
{
...
UserType &type = dynamic_cast<UserType&>(baseType);
...
}

To fix these errors, you should rewrite the code so that 'new' or 'dynamic_cast' is placed inside a 'try {...}' block.
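Note that dynamic_cast applied to a pointer (rather than a reference) reports failure by returning nullptr instead of throwing bad_cast, so it needs no try block in a destructor. A sketch with invented class names:

```cpp
#include <cassert>

struct Base { virtual ~Base() = default; };
struct UserType : Base { int value = 42; };

struct Holder
{
    Base *baseType = nullptr;
    ~Holder()
    {
        // Pointer form: returns nullptr on failure, never throws
        UserType *type = dynamic_cast<UserType *>(baseType);
        if (type != nullptr)
        {
            // ... safe to use type->value here ...
        }
    }
};
```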

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-248.
 You can look at examples of errors detected by the V509 diagnostic.

# V510. The 'Foo' function is not expected to receive class-type variable as 'N' actual argument.

There are functions whose declarations cannot specify the number and types of all acceptable parameters. In this case, the list of formal arguments ends with an ellipsis (...), which means: "and perhaps some more arguments". Here is an example of an ellipsis function: "int printf(const char* ...);". Only POD types can serve as actual arguments corresponding to the ellipsis.

POD is an abbreviation for "Plain Old Data", i.e. "Plain data in C style". The following types and structures refer to POD-types:

• all the built-in arithmetic types (including wchar_t and bool);
• types defined with the enum key word;
• pointers;
• POD-structures (struct or class) and POD-unions that meet the following requirements:
  • do not contain user-defined constructors, a destructor, or a copy assignment operator;
  • do not have base classes;
  • do not contain virtual functions;
  • do not contain protected or private non-static data members;
  • do not contain non-static data members of non-POD types (or arrays of such types) or references.

If a class object is passed to an ellipsis function, this almost always indicates an error in program. The V510 rule helps detect incorrect code of the following kind:

wchar_t buf[100];
std::wstring ws(L"12345");
swprintf(buf, L"%s", ws);

The object's contents are pushed onto the stack instead of the pointer to the string. This code will produce "abracadabra" in the buffer or cause a program crash.

The correct version of the code must look this way:

wchar_t buf[100];
std::wstring ws(L"12345");
swprintf(buf, L"%s", ws.c_str());

Since you can pass anything you like into functions with a variable number of arguments, almost all books on C++ programming recommend against using them. They suggest employing safe mechanisms instead, for instance, boost::format.

The new standard says:

C++11 5.2.2/7: Passing a potentially-evaluated argument of class type having a non-trivial copy constructor, a non-trivial move constructor, or a non-trivial destructor, with no corresponding parameter, is conditionally-supported with implementation-defined semantics.

Thus, it is possible to pass "more kinds" of objects to a function's ellipsis. However, we decided not to change anything in this rule: in 99% of cases, passing a complex class as an argument is a misprint or another kind of error, and such code should be reviewed. If this warning produces a large number of false alarms, you can mark the corresponding functions to suppress it in bulk. An example:

//-V:MySuperPrint:510

You can read about mass warning suppression in detail in the section "Suppression of false alarms".

## Note one specific thing about using the CString class from the MFC library

We would expect to see an error similar to the one mentioned above in the following code:

CString s;
CString arg(L"OK");
s.Format(L"Test CString: %s\n", arg);

The correct version of the code must look in the following way:

s.Format(L"Test CString: %s\n", arg.GetString());

Or, as MSDN suggests [1], you may use the explicit cast operator LPCTSTR implemented in the CString class to get a pointer to the string. Here is a sample of correct code from MSDN:

CString kindOfFruit = "bananas";
int howmany = 25;
printf("You have %d %s\n", howmany, (LPCTSTR)kindOfFruit);

However, the first version "s.Format(L"Test CString: %s\n", arg);" is actually correct as well. This topic is discussed in detail in the article "Big Brother helps you" [2].

The MFC developers implemented the CString class in a special way so that you could pass it into functions of the printf and Format kind. It is done rather intricately; if you want to figure it out, study the implementation of the CStringT class in the source code.

So, the analyzer makes an exception for the CStringT type and considers the following code correct:

CString s;
CString arg(L"OK");
s.Format(L"Test CString: %s\n", arg);

## Related materials

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-686.
 You can look at examples of errors detected by the V510 diagnostic.

# V511. The sizeof() operator returns size of the pointer, and not of the array, in given expression.

There is one specific feature of the language you might easily forget about and make a mistake. Look at the following code fragment:

char A[100];
void Foo(char B[100])
{
}

In this code, the A object is an array and the sizeof(A) expression will return value 100.

The B object is simply a pointer. The value 100 in the square brackets tells the programmer that he is working with an array of 100 items. But it is not an array of a hundred items that is passed into the function - only the pointer. So the sizeof(B) expression will return 4 or 8 (the size of a pointer on a 32-bit/64-bit system).

The V511 warning is generated when the size of a pointer is calculated which is passed as an argument in the format "TypeName ArrayName[N]". Such code is most likely to have an error. Look at the sample:

void Foo(float array[3])
{
size_t n = sizeof(array) / sizeof(array[0]);
for (size_t i = 0; i != n; i++)
array[i] = 1.0f;
}

The function will fill not the whole array with the value 1.0f but only 1 or 2 items, depending on whether it is built as a 32-bit or 64-bit application.

Win32: sizeof(array) / sizeof(array[0]) = 4/4 = 1.

Win64: sizeof(array) / sizeof(array[0]) = 8/4 = 2.

To avoid such errors, we must explicitly pass the array's size. Here is correct code:

void Foo(float *array, size_t arraySize)
{
for (size_t i = 0; i != arraySize; i++)
array[i] = 1.0f;
}

Another way is to use a reference to the array:

void Foo(float (&array)[3])
{
size_t n = sizeof(array) / sizeof(array[0]);
for (size_t i = 0; i != n; i++)
array[i] = 1.0f;
}
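In modern C++, the size can also be deduced with a function template; calling it with a decayed pointer simply will not compile (a sketch):

```cpp
#include <cstddef>

// N is deduced from the real array type at compile time; passing a
// plain pointer to Fill() is a compile-time error.
template <typename T, std::size_t N>
void Fill(T (&array)[N], T value)
{
    for (std::size_t i = 0; i != N; ++i)
        array[i] = value;
}
```

Usage: for "float data[3];", the call "Fill(data, 1.0f);" deduces N as 3 and fills all three items.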
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-467.
 You can look at examples of errors detected by the V511 diagnostic.

# V512. A call of the 'Foo' function will lead to a buffer overflow or underflow.

The analyzer found a potential error related to memory buffer filling, copying or comparison. The error might cause a buffer overflow or, vice versa, buffer underflow.

This is a rather common kind of error that occurs due to misprints or inattention. What is unpleasant about such errors is that a program might work well for a long time: by sheer luck, acceptable values might happen to be in uninitialized memory, and the memory written out of bounds might not be used.

Let's study two samples taken from real applications.

Sample N1.

MD5Context *ctx;
...
memset(ctx, 0, sizeof(ctx));

Here the misprint causes clearing of only a part of the structure and not the whole structure. The error is in calculation of the pointer's size and not the whole structure MD5Context. Here is the correct version of the code:

MD5Context *ctx;
...
memset(ctx, 0, sizeof(*ctx));

Sample N2.

#define CONT_MAP_MAX 50
int _iContMap[CONT_MAP_MAX];
memset(_iContMap, -1, CONT_MAP_MAX);

In this sample, the size of the buffer to be filled is also defined incorrectly. This is the correct version:

#define CONT_MAP_MAX 50
int _iContMap[CONT_MAP_MAX];
memset(_iContMap, -1, CONT_MAP_MAX * sizeof(int));
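An even more robust variant derives the byte count from the array itself, so it stays correct if the element type ever changes:

```cpp
#include <cstring>

#define CONT_MAP_MAX 50
int _iContMap[CONT_MAP_MAX];

void Init()
{
    // sizeof applied to the array yields its full size in bytes
    memset(_iContMap, -1, sizeof(_iContMap));
}
```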

Sample N3.

struct MyTime
{
....
int time;
};
MyTime s;
time((time_t*)&s.time);

In this sample, the type of 's.time' is also specified incorrectly. If time_t is 64-bit, we will get an overflow. This is the correct version:

struct MyTime
{
....
time_t time;
};
MyTime s;
time(&s.time);

Note on the strncpy function.

Some programmers are surprised that the analyzer generates the V512 warning on the following code:

char buf[5];
strncpy(buf, "X", 100);

It may seem at first sight that the function will copy only 2 bytes (the 'X' character and the terminating null). But an array overrun will actually occur here. The author of this code has forgotten one thing about the 'strncpy' function. Here is a quotation from the description of this function on the MSDN website: If count is greater than the length of strSource, the destination string is padded with null characters up to length count.

For some projects, the analyzer may generate a lot of false positives warning about buffer underflows; in other cases, all the warnings about buffer overflows turn out to be false positives. In such cases, you may fine-tune this diagnostic rule.

It can be done by adding the following comments into the code text where you need:

//-V512_UNDERFLOW_OFF

//-V512_OVERFLOW_OFF

The first comment disables warnings about underflows in the current translation unit, while the second disables warnings about overflows. If you add both, it will be identical to completely disabling the V512 diagnostic rule.

These comments should be added into the header file included into all the other files. For instance, such is the "stdafx.h" file. If you add the comments into the "*.cpp" file, they will affect only this particular file.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-119, CWE-125, CWE-193, CWE-467, CWE-682, CWE-787, CWE-806.
 You can look at examples of errors detected by the V512 diagnostic.

# V513. Use _beginthreadex/_endthreadex functions instead of CreateThread/ExitThread functions.

Below is an extract from the sixth chapter of the book "Advanced Windows: Creating Efficient Win32 Applications Considering the Specifics of the 64-bit Windows" by Jeffrey Richter, 4th edition.

CreateThread is a Windows function that creates a thread. But never call it if you write your code in C/C++; use the _beginthreadex function from the Visual C++ library instead.

To make multi-threaded applications that use the C/C++ runtime (CRT) library work correctly, a special data structure must be created and linked to every thread from which the library functions are called. Moreover, the library functions must know to look up this data block for the calling thread so as not to corrupt data belonging to another thread.

So how does the system know that it must create this data block when creating a new thread? The answer is very simple: it doesn't know and never will. You are fully responsible for it. If you use functions that are unsafe in a multi-threaded environment, you should create threads with the library function _beginthreadex, not the Windows function CreateThread.

Note that the _beginthreadex function exists only in the multi-threaded versions of the C/C++ library. When linking a project to a single-threaded library, the linker will generate an "unresolved external symbol" error. Of course, this is done intentionally, since a single-threaded library cannot work correctly in a multi-threaded application. Note also that Visual Studio chooses the single-threaded library by default when creating a new project. This is not the safest option, so for multi-threaded applications you should choose one of the multi-threaded versions of the C/C++ library yourself.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-676.
 You can look at examples of errors detected by the V513 diagnostic.

# V514. Dividing sizeof a pointer by another value. There is a probability of logical error presence.

The analyzer found a potential error related to dividing the pointer's size by some value. Dividing the size of a pointer is a rather strange operation: it makes no practical sense and most likely indicates an error or misprint in the code.

Consider an example:

const size_t StrLen = 16;
LPTSTR dest = new TCHAR[StrLen];
TCHAR src[StrLen] = _T("string for V514");
_tcsncpy(dest, src, sizeof(dest)/sizeof(dest[0]));

In the "sizeof(dest)/sizeof(dest[0])" expression, the pointer's size is divided by the size of the element the pointer refers to. As a result, we might get different numbers of copied bytes depending on the sizes of the pointer and the TCHAR type - but never the number the programmer expected.

Taking into account that the _tcsncpy function is unsafe in itself, correct and safer code may look in the following way:

const size_t StrLen = 16;
LPTSTR dest = new TCHAR[StrLen];
TCHAR src[StrLen] = _T("string for V514");
_tcsncpy_s(dest, StrLen, src, StrLen);
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-131.
 You can look at examples of errors detected by the V514 diagnostic.

# V515. The 'delete' operator is applied to non-pointer.

In code, the delete operator is applied to a class object instead of the pointer. It is most likely to be an error.

Consider a code sample:

CString str;
...
delete str;

The 'delete' operator can be applied to an object of the CString type because the CString class can be implicitly converted to a pointer. Such code might cause an exception or unexpected program behavior.

Correct code might look so:

CString *pstr = new CString;
...
delete pstr;

In some cases, applying the 'delete' operator to class objects is not an error. You may encounter such code, for instance, when working with the QT::QbasicAtomicPointer class. The analyzer ignores calls of the 'delete' operator to objects of this type. If you know other similar classes it is a normal practice to apply the 'delete' operator to, please tell us about them. We will add them into exceptions.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-763.

# V516. Consider inspecting an odd expression. Non-null function pointer is compared to null.

The code contains a construct comparing a non-null pointer to a function with null. Most probably there is a misprint in the code: parentheses are missing.

Consider this example:

int Foo();
void Use()
{
if (Foo == 0)
{
//...
}
}

The condition "Foo == 0" is meaningless. The address of the 'Foo' function never equals zero, so the comparison result will always be 'false'. In the code we consider, the programmer missed parentheses by accident. This is the correct version of the code:

if (Foo() == 0)
{
//...
}

If there is an explicit taking of address, the code is considered correct. For example:

int Foo();
void Use()
{
if (&Foo != NULL)
//...
}
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-570, CWE-571.
 You can look at examples of errors detected by the V516 diagnostic.

# V517. The use of 'if (A) {...} else if (A) {...}' pattern was detected. There is a probability of logical error presence.

The analyzer detected a possible error in a construct consisting of conditional statements. Consider the sample:

if (a == 1)
Foo1();
else if (a == 2)
Foo2();
else if (a == 1)
Foo3();

In this sample, the 'Foo3()' function will never get control. Most likely, we deal with a logical error and the correct code should look as follows:

if (a == 1)
Foo1();
else if (a == 2)
Foo2();
else if (a == 3)
Foo3();

In practice, such an error might look in the following way:

if (radius < THRESH * 5)
*yOut = THRESH * 10 / radius;
else if (radius < THRESH * 5)
*yOut = -3.0f / (THRESH * 5.0f) * (radius - THRESH * 5.0f) + 3.0f;
else
*yOut = 0.0f;

It is difficult to say how a correct comparison condition must look, but the error in this code is evident.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-570.
 You can look at examples of errors detected by the V517 diagnostic.

# V518. The 'malloc' function allocates strange amount of memory calculated by 'strlen(expr)'. Perhaps the correct variant is strlen(expr) + 1.

The analyzer found a potential error related to allocating an insufficient amount of memory. The string's length is calculated in the code and a memory buffer of the corresponding size is allocated, but the terminating '\0' is not accounted for.

Consider this example:

char *p = (char *)malloc(strlen(src));
strcpy(p, src);

In this case, it is just +1 which is missing. The correct version is:

char *p = (char *)malloc(strlen(src) + 1);
strcpy(p, src);

Here is another example of incorrect code detected by the analyzer in one application:

if((t=(char *)realloc(next->name, strlen(name+1))))
{
next->name=t;
strcpy(next->name,name);
}

The programmer was inattentive and misplaced the closing parenthesis ')'. As a result, 2 bytes less memory than necessary will be allocated. This is the correct code:

if((t=(char *)realloc(next->name, strlen(name)+1)))

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-131.
 You can look at examples of errors detected by the V518 diagnostic.

# V519. The 'x' variable is assigned values twice successively. Perhaps this is a mistake.

The analyzer detected a potential error related to assignment of a value two times successively to the same variable while the variable itself is not used between these assignments.

Consider this sample:

A = GetA();
A = GetB();

The fact that the 'A' variable is assigned values twice might signal an error. Most probably, the code should look this way:

A = GetA();
B = GetB();

If the variable is used between assignments, the analyzer considers this code correct:

A = 1;
A = A + 1;
A = Foo(A);

Let's see how such an error may look in practice. The following sample is taken from a real application where a user class CSize is implemented:

class CSize : public SIZE
{
...
CSize(POINT pt) { cx = pt.x;  cx = pt.y; }

The correct version is the following:

CSize(POINT pt) { cx = pt.x;  cy = pt.y; }

Let's study one more example. The second line was written for the purpose of debugging or checking how text of a different color would look. And it seems that the programmer forgot to remove the second line then:

m_clrSample = GetSysColor(COLOR_WINDOWTEXT);
m_clrSample = RGB(60,0,0);

Sometimes the analyzer generates false alarms when writing into variables is used for the purpose of debugging. Here is an example of such code:

status = Foo1();
status = Foo2();

In this case, we may suppress false alarms using the "//-V519" comment, or simply remove the meaningless assignments from the code. And one more thing: perhaps this code is actually incorrect, so make sure to check the value of the 'status' variable.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-563.
 You can look at examples of errors detected by the V519 diagnostic.

# V520. The comma operator ',' in array index expression.

The analyzer found a potential error that may be caused by a misprint. An expression containing the ',' operator is used as an index for an array.

Here is a sample of suspicious code:

float **array_2D;
array_2D[getx() , gety()] = 0;

Most probably, it was meant to be:

array_2D[ getx() ][ gety() ] = 0;

Such errors might appear if the programmer worked earlier with a programming language where array indexes are separated by commas.

Let's look at a sample of an error found by the analyzer in one project:

float **m;
TextOutput &t = ...
...
t.printf("%10.5f, %10.5f, %10.5f,\n%10.5f, %10.5f, %10.5f,\n%10.5f, %10.5f, %10.5f)",
m[0, 0], m[0, 1], m[0, 2],
m[1, 0], m[1, 1], m[1, 2],
m[2, 0], m[2, 1], m[2, 2]);

Since the printf function of the TextOutput class works with a variable number of arguments, it cannot check whether pointers will be passed to it instead of values of the float type. As a result, we will get rubbish displayed instead of matrix items' values. This is the correct code:

t.printf("%10.5f, %10.5f, %10.5f,\n%10.5f, %10.5f, %10.5f,\n%10.5f, %10.5f, %10.5f)",
m[0][0], m[0][1], m[0][2],
m[1][0], m[1][1], m[1][2],
m[2][0], m[2][1], m[2][2]);
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-480.
 You can look at examples of errors detected by the V520 diagnostic.

# V521. Such expressions using the ',' operator are dangerous. Make sure the expression is correct.

The comma operator ',' executes the expressions on both sides of it in left-to-right order and yields the value of the right expression.

The analyzer found an expression in code that uses the ',' operator in a suspicious way. It is highly probable that the program text contains a misprint.

Consider the following sample:

float Foo()
{
double A;
A = 1,23;
float f = 10.0f;
return 3,f;
}

In this code, the A variable will be assigned the value 1 instead of 1.23: according to C/C++ rules, the "A = 1,23" expression is equivalent to "(A = 1),23". Also, the Foo() function will return the value 10.0f instead of 3.0f. In both cases, the error is caused by using the ',' character instead of '.'.

This is the corrected version:

float Foo()
{
double A;
A = 1.23;
float f = 10.0f;
return 3.f;
}

Note. There were cases when the analyzer could not parse the code correctly and generated V521 warnings for absolutely safe constructs. This is usually related to the use of template classes or complex macros. If you notice such a false alarm when working with the analyzer, please tell the developers about it. To suppress false alarms, you may use a comment of the "//-V521" kind.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-480.
 You can look at examples of errors detected by the V521 diagnostic.

# V522. Dereferencing of the null pointer might take place.

The analyzer detected a fragment of code that might cause using a null pointer.

Let's study several examples the analyzer generates the V522 diagnostic message for:

if (pointer != 0 || pointer->m_a) { ... }
if (pointer == 0 && pointer->x()) { ... }
if (array == 0 && array[3]) { ... }
if (!pointer && pointer->x()) { ... }

In all the conditions, there is a logical error that leads to dereferencing of the null pointer. The error may be introduced into the code during code refactoring or through a misprint.

Correct versions:

if (pointer == 0 || pointer->m_a) { ... }
if (pointer != 0 && pointer->x()) { ... }
if (array != 0 && array[3]) { ... }
if (pointer && pointer->x()) { ... }

These are simple cases, of course. In practice, operations of pointer check and pointer use may be located in different places. If the analyzer generates the V522 warning, study the code above and try to understand why the pointer might be a null pointer.

Here is a code sample where the pointer check and pointer use are on different lines:

if (ptag == NULL) {
SysPrintf("SPR1 Tag BUSERR\n");
psHu32(DMAC_STAT)|= 1<<15;
spr1->chcr = ( spr1->chcr & 0xFFFF ) |
( (*ptag) & 0xFFFF0000 );
return;
}

The analyzer will warn you about the danger in the "( (*ptag) & 0xFFFF0000 )" line. Either the condition is written incorrectly, or a different variable should be used instead of 'ptag'.

Sometimes programmers deliberately use null pointer dereferencing for testing purposes. For example, the analyzer will produce the warning for those places that contain this macro:

/// This generate a coredump when we need a
/// method to be compiled but not usabled.
#define elxFIXME { char * p=0; *p=0; }

Extraneous warnings can be turned off by using the "//-V522" comment on the lines that contain the 'elxFIXME' macro. Alternatively, you can write a comment of a special kind beside the macro:

//-V:elxFIXME:522

The comment can be written both before and after the macro - it doesn't matter. To learn more about methods of suppressing false positives, follow here.

This diagnostic relies on information about whether a particular pointer could be null. In some cases, this information is retrieved from the table of annotated functions, which is stored inside the analyzer itself.

malloc is one of such functions. Since it can return NULL, using the pointer returned by it without a prior check may result in null pointer dereferencing.

Sometimes our users wish to change the analyzer's behavior and make it think that malloc cannot return NULL. For example, to do that, they use the system libraries, where 'out of memory' errors are handled in a specific way.

They may also want to tell the analyzer that a certain function can return a null pointer.

In that case, you can use the additional settings, described in the section "How to tell the analyzer that a function can or cannot return nullptr".

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-476, CWE-690.
 You can look at examples of errors detected by the V522 diagnostic.

# V523. The 'then' statement is equivalent to the 'else' statement.

The analyzer found a case when the true and false statements of the 'if' operator coincide completely. This often signals a logical error.

Here is an example:

if (X)
Foo_A();
else
Foo_A();

Whether the X condition is false or true, the Foo_A() function will be called anyway.

This is the correct version of the code:

if (X)
Foo_A();
else
Foo_B();

Here is an example of such an error taken from a real application:

if (!_isVertical)
Flags |= DT_BOTTOM;
else
Flags |= DT_BOTTOM;

Presence of two empty statements is considered correct and safe. You might often see such constructs when using macros. This is a sample of safe code:

if (exp) {
} else {
}

The analyzer also considers it suspicious if the 'if' statement does not have an 'else' block while the code written right after it is identical to the conditional block and ends with a return, break, etc.

Suspicious code snippet:

if (X)
{
doSomething();
Foo_A();
return;
}
doSomething();
Foo_A();
return;

Perhaps the programmer forgot to edit the copied code fragment or wrote excessive code.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-691.
 You can look at examples of errors detected by the V523 diagnostic.

# V524. It is odd that the body of 'Foo_1' function is fully equivalent to the body of 'Foo_2' function.

This warning is generated when the analyzer detects two functions implemented in the same way. Presence of two identical functions is not an error in itself but you should study them.

The purpose of this diagnostic is to detect the following type of error:

class Point
{
...
float GetX() { return m_x; }
float GetY() { return m_x; }
};

A misprint in the code causes two functions that differ in meaning to perform the same actions. This is the correct version:

float GetX() { return m_x; }
float GetY() { return m_y; }

Identity of the bodies of functions GetX() and GetY() in this sample obviously signals an error. However, the percentage of false alarms will be too great if the analyzer generates warnings for all identical functions, so it is guided by a range of exceptions when it must not warn the programmer about identical function bodies. Here are some of them:

• The analyzer does not report about identity of functions' bodies if they do not use variables except for arguments. For example: "bool IsXYZ() { return true; }".
• Functions use static objects and therefore have different inner states. For example: "int Get() { static int x = 1; return x++; }"
• Functions are type cast operators.
• Functions with identical bodies are repeated more than twice.
• And so on.

However, in some cases the analyzer cannot understand that identical function bodies are not an error. Here is code that is diagnosed as dangerous but is actually correct:

PolynomialMod2 Plus(const PolynomialMod2 &b) const
{return Xor(b);}
PolynomialMod2 Minus(const PolynomialMod2 &b) const
{return Xor(b);}

You can suppress false alarms using several methods. If false alarms refer to files of external libraries, you may add the library (i.e. its path) to the exceptions. If false alarms refer to your own code, you may use a comment of the "//-V524" type to suppress false warnings. If there are too many false alarms, you may completely disable this diagnostic in the analyzer's settings. You may also modify the code so that one function calls the other one containing the same code.

The last method is often the best: first, it reduces the amount of code, and second, it makes the code easier to maintain, since you need to edit only one function instead of both. This is a sample of real code where the programmer could benefit from calling one function from another:

static void PreSave(void) {
int x;
for(x=0;x<TotalSides;x++) {
int b;
for(b=0; b<65500; b++)
diskdata[x][b] ^= diskdatao[x][b];
}
}

static void PostSave (void) {
int x;
for(x=0;x<TotalSides;x++) {
int b;
for(b=0; b<65500; b++)
diskdata[x][b] ^= diskdatao[x][b];
}
}

This code should be replaced with the following:

static void PreSave(void) {
int x;
for(x=0;x<TotalSides;x++) {
int b;
for(b=0; b<65500; b++)
diskdata[x][b] ^= diskdatao[x][b];
}
}

static void PostSave (void) {
PreSave();
}

We did not fix the error in this sample, but the V524 warning disappeared after refactoring and the code got simpler.

 You can look at examples of errors detected by the V524 diagnostic.

# V525. The code contains the collection of similar blocks. Check items X, Y, Z, ... in lines N1, N2, N3, ...

The analyzer detected code that might contain a misprint. This code can be split into smaller similar fragments. Although they look similar, they differ in some way. It is highly probable that this code was created with the Copy-Paste method. The V525 message is generated if the analyzer suspects that some element was not fixed in the copied text. The error might be located in one of the lines whose numbers are listed in the V525 message.

This diagnostic has several peculiarities:

1) The diagnostic rule is based on heuristic methods and often produces false alarms.

2) The rule's heuristic algorithm is complicated and occupies more than 1000 lines of C++ code, which makes it difficult to describe in the documentation. So it may be hard for the user to understand why the V525 message was generated.

3) The diagnostic message refers not to one line but to several lines. The analyzer cannot point out a single line, since the error may be in any of them.

On the other hand, this diagnostic has a significant advantage:

1) It can detect errors which are too hard to notice during code review.

Let's study an artificial sample at first:

...
float rgba[4];
rgba[0] = object.GetR();
rgba[1] = object.GetG();
rgba[2] = object.GetB();
rgba[3] = object.GetR();

The 'rgba' array presents color and transparency of some object. When writing the code that fills the array, we wrote the line "rgba[0] = object.GetR();" at first. Then we copied and changed this line several times. But in the last line, we missed some changes, so it is the 'GetR()' function which is called instead of the 'GetA()' function. The analyzer generates the following warning on this code:

V525: The code containing the collection of similar blocks. Check items 'GetR', 'GetG', 'GetB', 'GetR' in lines 12, 13, 14, 15.

If you review lines 12, 13, 14 and 15, you will find the error. This is the correct code:

rgba[3] = object.GetA(); 

Now let's study several samples taken from real applications. The first sample:

tbb[0].iBitmap = 0;
tbb[0].idCommand = IDC_TB_EXIT;
tbb[0].fsState = TBSTATE_ENABLED;
tbb[0].fsStyle = BTNS_BUTTON;
tbb[0].dwData = 0;
tbb[0].iString = -1;
...
tbb[6].iBitmap = 6;
tbb[6].idCommand = IDC_TB_SETTINGS;
tbb[6].fsState = TBSTATE_ENABLED;
tbb[6].fsStyle = BTNS_BUTTON;
tbb[6].dwData = 0;
tbb[6].iString = -1;

tbb[7].iBitmap = 7;
tbb[7].idCommand = IDC_TB_CALC;
tbb[7].fsState = TBSTATE_ENABLED;
tbb[7].fsStyle = BTNS_BUTTON;
tbb[6].dwData = 0;
tbb[7].iString = -1;

The code fragment is far from complete; more than half of it was cut out. The fragment was written by copying and editing the code. No wonder an incorrect index got lost in such a large fragment. The analyzer generates the following diagnostic message: "The code containing the collection of similar blocks. Check items '0', '1', '2', '3', '4', '5', '6', '6' in lines 589, 596, 603, 610, 617, 624, 631, 638". If we review these lines, we will find and correct the index '6' repeated twice. This is the correct code:

tbb[7].iBitmap = 7;
tbb[7].idCommand = IDC_TB_CALC;
tbb[7].fsState = TBSTATE_ENABLED;
tbb[7].fsStyle = BTNS_BUTTON;
tbb[7].dwData = 0;
tbb[7].iString = -1;

The second sample:

pPopup->EnableMenuItem(ID_CONTEXT_EDITTEXT, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_CLOSEALL, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_CLOSE, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_SAVELAYOUT, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_RESIZE, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_REFRESH, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_EDITTEXT, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_SAVE, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_EDITIMAGE, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_CLONE, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);


It is very difficult to find an error in this code while reviewing it. But there is an error here: the state of the same menu item 'ID_CONTEXT_EDITTEXT' is modified twice. Let's mark the two repeated lines:

------------------------------
pPopup->EnableMenuItem(ID_CONTEXT_EDITTEXT, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
------------------------------
pPopup->EnableMenuItem(ID_CONTEXT_CLOSEALL, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_CLOSE, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_SAVELAYOUT, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_RESIZE, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_REFRESH, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
------------------------------
pPopup->EnableMenuItem(ID_CONTEXT_EDITTEXT, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
------------------------------
pPopup->EnableMenuItem(ID_CONTEXT_SAVE, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_EDITIMAGE, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);
pPopup->EnableMenuItem(ID_CONTEXT_CLONE, MF_GRAYED|MF_DISABLED|MF_BYCOMMAND);

Maybe it is a small error and one of the lines is just unnecessary. Or maybe the programmer forgot to change the state of some other menu item.

Unfortunately, the analyzer often makes a mistake while carrying out this diagnosis and generates false alarms. This is an example of code causing a false alarm:

switch (i) {
case 0: f1 = 2; f2 = 3; break;
case 1: f1 = 0; f2 = 3; break;
case 2: f1 = 1; f2 = 3; break;
case 3: f1 = 1; f2 = 2; break;
case 4: f1 = 2; f2 = 0; break;
case 5: f1 = 0; f2 = 1; break;
}

The analyzer does not like the correct column of numbers: 2, 0, 1, 1, 2, 0. In such cases, you may enable the warning suppression mechanism by typing the comment //-V525 at the end of the line:

switch (i) {
case 0: f1 = 2; f2 = 3; break; //-V525
case 1: f1 = 0; f2 = 3; break;
case 2: f1 = 1; f2 = 3; break;
case 3: f1 = 1; f2 = 2; break;
case 4: f1 = 2; f2 = 0; break;
case 5: f1 = 0; f2 = 1; break;
}

If there are too many false alarms, you may disable this diagnostic rule in the analyzer's settings. We would also appreciate it if you wrote to our support service about cases of false alarms so that we can try to improve the diagnostic algorithm. Please attach the corresponding code fragments to your letters.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-682.
 You can look at examples of errors detected by the V525 diagnostic.

# V526. The 'strcmp' function returns 0 if corresponding strings are equal. Consider examining the condition for mistakes.

This message is a kind of recommendation. It rarely points to a logical error, but it helps make code more readable, especially for less experienced developers.

The analyzer detected a construct comparing two strings that can be written in a clearer way. Such functions as strcmp, strncmp and wcsncmp return 0 if the strings are identical, which may cause logical errors in a program. Look at a code sample:

if (strcmp(s1, s2))

This condition will hold if the strings ARE NOT IDENTICAL. Perhaps you remember well what strcmp() returns, but a person who rarely works with string functions might think that the strcmp() function returns the value of type 'bool'. Then he will read this code in this way: "the condition is true if the strings match".

It is better not to economize on characters in the program text and to write the code this way:

if (strcmp(s1, s2) != 0)

This text tells the programmer that the strcmp() function returns some numeric value, not the bool type. This code ensures that the programmer will understand it properly.

If you do not want to get this diagnostic message, you may disable it in the analyzer settings.

 You can look at examples of errors detected by the V526 diagnostic.

# V527. It is odd that the 'zero' value is assigned to pointer. Probably meant: *ptr = zero.

This error occurs in two similar cases.

1) The analyzer found a potential error: a pointer to bool type is assigned false value. It is highly probable that the pointer dereferencing operation is missing. For example:

float Get(bool *retStatus)
{
...
if (retStatus != nullptr)
retStatus = false;
...
}

The '*' operator is missing in this code. The operation of nulling the retStatus pointer will be performed instead of status return. This is the correct code:

if (retStatus != nullptr)
*retStatus = false;

2) The analyzer found a potential error: a pointer referring to the char/wchar_t type is assigned value '\0' or L'\0'. It is highly probable that the pointer dereferencing operation is missing. For example:

char *cp;
...
cp = '\0';

This is the correct code:

char *cp;
...
*cp = '\0';
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-351.
 You can look at examples of errors detected by the V527 diagnostic.

# V528. It is odd that pointer is compared with the 'zero' value. Probably meant: *ptr != zero.

This error occurs in two similar cases.

1) The analyzer found a potential error: a pointer to bool type is compared to false value. It is highly probable that the pointer dereferencing operation is missing. For example:

bool *pState;
...
if (pState != false)
...

The '*' operator is missing in this code. As a result, we compare the pState pointer's value to the null pointer. This is the correct code:

bool *pState;
...
if (*pState != false)
...

2) The analyzer found a potential error: a pointer to the char/wchar_t type is compared to value '\0' or L'\0'. It is highly probable that the pointer dereferencing operation is missing. For example:

char *cp;
...
if (cp != '\0')

This is the correct code:

char *cp;
...
if (*cp != '\0')
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-697.
 You can look at examples of errors detected by the V528 diagnostic.

# V529. Odd semicolon ';' after 'if/for/while' operator.

The analyzer detected a potential error: a semicolon ';' stands after the 'if', 'for' or 'while' operator. For example:

for (i = 0; i < n; i++);
{
Foo(i);
}

This is the correct code:

for (i = 0; i < n; i++)
{
Foo(i);
}

Using a semicolon ';' right after the 'for' or 'while' operator is not an error in itself, and you may see it quite often in code. So the analyzer skips many cases relying on some additional factors. For instance, the following code sample is considered safe:

for (depth = 0, cur = parent; cur; depth++, cur = cur->parent)
;
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-670.
 You can look at examples of errors detected by the V529 diagnostic.

# V530. The return value of function 'Foo' is required to be utilized.

Calls of some functions are senseless if their results are not used. Let's study the first sample:

void VariantValue::Clear()
{
m_vtype = VT_NULL;
m_bvalue = false;
m_ivalue = 0;
m_fvalue = 0;
m_svalue.empty();
m_tvalue = 0;
}

This value-clearing code is taken from a real application. The error here is the following: by accident, the string::empty() function is called instead of the string::clear() function, and the string's content remains unchanged. The analyzer diagnoses this error relying on the knowledge that the result of the string::empty() function must be used. For instance, it must be compared to something or written into a variable.

This is the correct code:

void VariantValue::Clear()
{
m_vtype = VT_NULL;
m_bvalue = false;
m_ivalue = 0;
m_fvalue = 0;
m_svalue.clear();
m_tvalue = 0;
}

The second sample:

void unregisterThread() {
Guard g(_activeThreadsMutex);
std::remove(_activeThreads.begin(), _activeThreads.end(),
ThreadImpl::current());
}

The std::remove function does not remove elements from the container. It only shifts the elements and returns an iterator to the beginning of the leftover trash. Suppose we have a vector<int> container holding the elements 1,2,3,1,2,3,1,2,3. If we execute the code "remove(v.begin(), v.end(), 2)", the container will hold the elements 1,3,1,3,1,3,?,?,?, where ? is some trash. The function will return an iterator to the first trash element, so if we want to remove these trash elements, we must write the code this way: "v.erase(remove(v.begin(), v.end(), 2), v.end())".

As you may see from this explanation, the result of std::remove must be used. This is the correct code:

void unregisterThread() {
Guard g(_activeThreadsMutex);
_activeThreads.erase(
std::remove(_activeThreads.begin(), _activeThreads.end(),
ThreadImpl::current()),
_activeThreads.end());
}

There are a great many functions whose results must be used. Among them are malloc, realloc, fopen, isalpha, atof, strcmp and many others. An unused result signals an error, which is usually caused by a misprint. However, the analyzer warns only about errors related to using the STL library. There are two reasons for that:

1) It is much more difficult to make a mistake by not using the result of the fopen() function than confuse std::clear() and std::empty().

2) This functionality duplicates the capabilities of Code Analysis for C/C++ included into some Visual Studio editions (see warning C6031). But these warnings are not implemented in Visual Studio for STL functions.

If you want to propose extending the list of functions supported by analyzer, contact our support service. We will appreciate if you give interesting samples and advice.

Security

In addition to straightforward bugs and typos, security is another area to be taken into account. There are functions that deal with access control, such as LogonUser and SetThreadToken, but there are many more. One must always check the statuses returned by these functions. Not using these return values is a grave mistake and a potential vulnerability, which is why the analyzer issues the V530 warning for such functions as well.

You can specify the names of user functions for which it should be checked if their return values are used.

To enable this option, you need to insert a special comment near the function prototype (or in the common header file), for example:

//+V530, namespace:MyNamespace, class:MyClass, function:MyFunc
namespace MyNamespace {
class MyClass {
int MyFunc();
};
....
obj.MyFunc(); // warning V530
}

Format:

• function parameter defines the function name.
• class parameter defines the class name if the function is defined in a class.
• namespace parameter defines the namespace name if the function or class method is defined in a particular namespace.

In projects with special quality requirements, you might need to find all functions, the return value of which is not used. To do this, you can use the following comment:

//V_RET_USE_ALL

We don't recommend using this mode because of issuing a large number of V530 warnings. But if it's really needed in your project, you can use this special comment.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-252.
 You can look at examples of errors detected by the V530 diagnostic.

# V531. It is odd that a sizeof() operator is multiplied by sizeof().

Code where a value returned by one sizeof() operator is multiplied by another sizeof() operator almost always signals an error. It is unreasonable to multiply the size of one object by the size of another object. Such errors usually occur when working with strings.

Let's study a real code sample:

TCHAR szTemp[256];
DWORD dwLen =
::LoadString(hInstDll, dwID, szTemp, sizeof(szTemp) * sizeof(TCHAR));

The LoadString function takes the buffer's size in characters as its last argument. In the Unicode version of the application, we will tell the function that the buffer's size is larger than it actually is. This may cause a buffer overflow. Note that if we fix the code in the following way, it will not become correct at all:

TCHAR szTemp[256];
DWORD dwLen =
::LoadString(hInstDll, dwID, szTemp, sizeof(szTemp));

Here is a quotation from MSDN on this topic:

Using this function incorrectly can compromise the security of your application. Incorrect use includes specifying the wrong size in the nBufferMax parameter. For example, if lpBuffer points to a buffer szBuffer which is declared as TCHAR szBuffer[100], then sizeof(szBuffer) gives the size of the buffer in bytes, which could lead to a buffer overflow for the Unicode version of the function. Buffer overflow situations are the cause of many security problems in applications. In this case, using sizeof(szBuffer)/sizeof(TCHAR) or sizeof(szBuffer)/sizeof(szBuffer[0]) would give the proper size of the buffer.

This is the correct code:

TCHAR szTemp[256];
DWORD dwLen =
::LoadString(hInstDll, dwID, szTemp, sizeof(szTemp) / sizeof(TCHAR));

Here is another correct code:

const size_t BUF_LEN = 256;
TCHAR szTemp[BUF_LEN];
DWORD dwLen =
::LoadString(hInstDll, dwID, szTemp, BUF_LEN);
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-131.
 You can look at examples of errors detected by the V531 diagnostic.

# V532. Consider inspecting the statement of '*pointer++' pattern. Probably meant: '(*pointer)++'.

The analyzer detected a potential error: a pointer dereferencing operation is present in code but the value the pointer refers to is not used in any way.

Let's study this sample:

int *p;
...
*p++;

The "*p++" expression performs the following actions. The "p" pointer is incremented by one, but before that a value of the "int" type is fetched from memory. This value is not used in any way, which is strange. It looks as if the dereferencing operation "*" is unnecessary. There are several ways of correcting the code:

1) We may remove the unnecessary dereferencing operation - the "*p++;" expression is equal to "p++;":

int *p;
...
p++;

2) If the developer intended to increment the value instead of the pointer, we should write it so:

int *p;
...
(*p)++;

If the "*p++" expression's result is used, the analyzer considers the code correct. This is a sample of safe code:

while(*src)
*dest++ = *src++;

Let's study a sample taken from a real application:

STDMETHODIMP CCustomAutoComplete::Next(
ULONG celt, LPOLESTR *rgelt, ULONG *pceltFetched)
{
...
if (pceltFetched != NULL)
*pceltFetched++;
...

In this case, parentheses are missing. This is the correct code:

if (pceltFetched != NULL)
(*pceltFetched)++;

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-480.
 You can look at examples of errors detected by the V532 diagnostic.

# V533. It is likely that a wrong variable is being incremented inside the 'for' operator. Consider reviewing 'X'.

The analyzer detected a potential error: a variable referring to an outer loop and located inside the 'for' operator is incremented.

This is the simplest form of this error:

for (size_t i = 0; i != 5; i++)
for (size_t j = 0; j != 5; i++)
A[i][j] = 0;

It is the 'i' variable which is incremented instead of 'j' in the inner loop. Such an error might be not so visible in a real application. This is the correct code:

for (size_t i = 0; i != 5; i++)
for (size_t j = 0; j != 5; j++)
A[i][j] = 0;
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-691.
 You can look at examples of errors detected by the V533 diagnostic.

# V534. It is likely that a wrong variable is being compared inside the 'for' operator. Consider reviewing 'X'.

The analyzer detected a potential error: a variable referring to an outer loop is used in the condition of the 'for' operator.

This is the simplest form of this error:

for (size_t i = 0; i != 5; i++)
for (size_t j = 0; i != 5; j++)
A[i][j] = 0;

It is the comparison 'i != 5' that is performed instead of 'j != 5' in the inner loop. Such an error might be not so visible in a real application. This is the correct code:

for (size_t i = 0; i != 5; i++)
for (size_t j = 0; j != 5; j++)
A[i][j] = 0;
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-691.
 You can look at examples of errors detected by the V534 diagnostic.

# V535. The variable 'X' is being used for this loop and for the outer loop.

The analyzer detected a potential error: a nested loop is arranged by a variable which is also used in an outer loop. In a schematic form, this error looks in the following way:

size_t i, j;
for (i = 0; i != 5; i++)
for (i = 0; i != 5; i++)
A[i][j] = 0;

Of course, this is an artificial sample, so we may easily see the error, but in a real application, the error might be not so apparent. This is the correct code:

size_t i, j;
for (i = 0; i != 5; i++)
for (j = 0; j != 5; j++)
A[i][j] = 0;


Using one variable both for the outer and inner loops is not always a mistake. Consider a sample of correct code the analyzer won't generate the warning for:

for(c = lb; c <= ub; c++)
{
if (!(xlb <= xlat(c) && xlat(c) <= ub))
{
Range * r = new Range(xlb, xlb + 1);
for (c = lb + 1; c <= ub; c++)
r = doUnion(
r, new Range(xlat(c), xlat(c) + 1));
return r;
}
}

In this code, the inner loop "for (c = lb + 1; c <= ub; c++)" is arranged by the "c" variable. The outer loop also uses the "c" variable. But there is no error here. After the inner loop is executed, the "return r;" operator will perform exit from the function.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-691.
 You can look at examples of errors detected by the V535 diagnostic.

# V536. Be advised that the utilized constant value is represented by an octal form.

Using constants in the octal number system is not an error in itself. This system is convenient when handling bits and is used in code that interacts with a network or external devices. However, the average programmer uses this number system rarely and may therefore make a mistake by writing 0 before a number, forgetting that the leading zero makes the value octal.

The analyzer warns about octal constants if there are no other octal constants nearby. Such "single" octal constants are usually errors.

Let's study a sample taken from a real application. It is rather large but it illustrates the sense of the issue very well.

inline
void elxLuminocity(const PixelRGBf& iPixel,
LuminanceCell< PixelRGBf >& oCell)
{
oCell._luminance = 0.2220f*iPixel._red +
0.7067f*iPixel._blue +
0.0713f*iPixel._green;
oCell._pixel = iPixel;
}

inline
void elxLuminocity(const PixelRGBi& iPixel,
LuminanceCell< PixelRGBi >& oCell)
{
oCell._luminance = 2220*iPixel._red +
7067*iPixel._blue +
0713*iPixel._green;
oCell._pixel = iPixel;
}  

It is hard to find the error while reviewing this code, but it is there. The first elxLuminocity function is correct and handles values of the 'float' type. It contains the following constants: 0.2220f, 0.7067f, 0.0713f. The second function is similar to the first but handles integer values. All the integer values are multiplied by 10000. Here they are: 2220, 7067, 0713. The error is that the last constant "0713" is defined in the octal number system, and its value is 459, not 713. This is the correct code:

oCell._luminance = 2220*iPixel._red +
7067*iPixel._blue +
713*iPixel._green; 

As it was said above, the warning of octal constants is generated only if there are no other octal constants nearby. That is why the analyzer considers the following sample safe and does not produce any warnings for it:

static unsigned short bytebit[8] = {
01, 02, 04, 010, 020, 040, 0100, 0200 };
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-682.
 You can look at examples of errors detected by the V536 diagnostic.

# V537. Consider reviewing the correctness of 'X' item's usage.

The analyzer detected a potential misprint in code. This rule tries to diagnose an error of the following type using the heuristic method:

int x = static_cast<int>(GetX()) * n;
int y = static_cast<int>(GetX()) * n;

In the second line, the GetX() function is used instead of GetY(). This is the correct code:

int x = static_cast<int>(GetX()) * n;
int y = static_cast<int>(GetY()) * n;

To detect this suspicious fragment, the analyzer followed this logic: we have a line containing a name that includes the "X" fragment. Beside it, there is a line that has an antipode name with "Y". But this second line has "X" as well. Since this condition and some other conditions hold, the construct must be reviewed by the programmer. This code would not be considered dangerous if, for instance, there were no variables "x" and "y" to the left. This is a code sample the analyzer ignores:

array[0] = GetX() / 2;
array[1] = GetX() / 2;

Unfortunately, this rule often produces false alarms since the analyzer does not know how the program is organized and what the code's purpose is. This is a sample of a false alarm:

halfWidth -= borderWidth + 2;
halfHeight -= borderWidth + 2;

The analyzer supposed that the second line must be presented by a different expression, for instance, "halfHeight -= borderHeight + 2". But actually there is no error here. The border's size is equal in both vertical and horizontal positions. There is just no borderHeight constant. However, such high-level abstractions are not clear to the analyzer. To suppress this warning, you may type the "//-V537" comment into the code.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-682.
 You can look at examples of errors detected by the V537 diagnostic.

# V538. The line contains control character 0x0B (vertical tabulation).

There are ASCII control characters in the program text. The following character is one of them:

0x0B - LINE TABULATION (vertical tabulation) - Moves the typing point to the next vertical tabulation position. In terminals, this character is usually equivalent to line feed.

Such characters are allowed in program text, and such text compiles successfully in Visual C++. However, these characters most likely got into the program text by accident, and you'd better get rid of them. There are two reasons for that:

1) If such a control character stands in the first lines of a file, the Visual Studio environment cannot understand the file's format and opens it with the Notepad application instead of its own embedded editor.

2) Some external tools working with program texts may incorrectly process files containing the above mentioned control characters.

0x0B characters are invisible in the Visual Studio 2010 editor. To find and delete them in a line, you may open the file in the Notepad application or any other editor that can display such control characters.
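Since these characters are invisible, a small utility helps locate them. Below is a minimal sketch (the function name and the in-memory-buffer approach are illustrative, not part of PVS-Studio):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Returns the 1-based numbers of lines that contain the 0x0B
// (vertical tabulation) control character.
std::vector<std::size_t> FindVerticalTabs(const std::string &text)
{
  std::vector<std::size_t> lines;
  std::size_t line = 1;
  for (char ch : text)
  {
    if (ch == '\n')
      ++line;
    else if (ch == '\x0B' && (lines.empty() || lines.back() != line))
      lines.push_back(line); // report each affected line once
  }
  return lines;
}
```

Reading the file into the string and printing the returned line numbers is left to the caller.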

 You can look at examples of errors detected by the V538 diagnostic.

# V539. Consider inspecting iterators which are being passed as arguments to function 'Foo'.

The analyzer detected code handling containers which is likely to have an error. You should examine this code fragment.

Let's study several samples demonstrating cases when this warning is generated:

Sample 1.

void X(std::vector<int> &X, std::vector<int> &Y)
{
  std::for_each (X.begin(), X.end(), SetValue);
  std::for_each (Y.begin(), X.end(), SetValue);
}

Two arrays are filled with some values in the function. Due to the misprint, the "std::for_each" function, being called for the second time, receives iterators from different containers, which causes an error during program execution. This is the correct code:

std::for_each (X.begin(), X.end(), SetValue);
std::for_each (Y.begin(), Y.end(), SetValue);

Sample 2.

std::includes(a.begin(), a.end(), a.begin(), a.end());

This code is strange. The programmer most probably intended to process two different chains instead of one. This is the correct code:

std::includes(a.begin(), a.end(), b.begin(), b.end());
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-688.
 You can look at examples of errors detected by the V539 diagnostic.

# V540. Member 'x' should point to string terminated by two 0 characters.

In Windows API, there are structures where string-pointers must end with a double zero. For example, such is the lpstrFilter member in the OPENFILENAME structure.

Here is the description of lpstrFilter in MSDN:

LPCTSTR

A buffer containing pairs of null-terminated filter strings. The last string in the buffer must be terminated by two NULL characters.

It follows from this description that we must add one more zero at the end of the string. For example: lpstrFilter = "All Files\0*.*\0";

lofn.lpstrFilter = L"Equalizer Preset (*.feq)\0*.feq";

This code will produce garbage in the filter field of the file dialog. This is the correct code:

lofn.lpstrFilter = L"Equalizer Preset (*.feq)\0*.feq\0";

We added 0 at the end of the string manually while the compiler will add one more zero. Some programmers write this way to make it clearer:

lofn.lpstrFilter     = L"Equalizer Preset (*.feq)\0*.feq\0\0";

But here we get three zeroes instead of two. The third zero is unnecessary, yet it makes the string's ending clearly visible to the programmer.

There are also some other structures besides OPENFILENAME where you might make such mistakes. For instance, the strings lpstrGroupNames and lpstrCardNames in structures OPENCARD_SEARCH_CRITERIA, OPENCARDNAME must end with a double zero too.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-628.
 You can look at examples of errors detected by the V540 diagnostic.

# V541. It is dangerous to print a string into itself.

The analyzer detected a potential error: a string gets printed inside itself. This may lead to unexpected results. Look at this sample:

char s[100] = "test";
sprintf(s, "N = %d, S = %s", 123, s);

In this code, the 's' buffer is used simultaneously as a buffer for a new string and as one of the elements making up the text. The programmer intends to get this string:

N = 123, S = test

But actually this code will cause creating the following string:

N = 123, S = N = 123, S =

In other cases, such code can lead not only to the output of incorrect text, but also to the buffer overflow or a program crash. To fix the code, we should use a new buffer to save the result. This is the correct code:

char s1[100] = "test";
char s2[100];
sprintf(s2, "N = %d, S = %s", 123, s1);
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-628.
 You can look at examples of errors detected by the V541 diagnostic.

# V542. Consider inspecting an odd type cast: 'Type1' to 'Type2'.

The analyzer found a very suspicious explicit type conversion. This type conversion may signal an error. You should review the corresponding code fragment.

For example:

typedef unsigned char Byte;

void Process(wchar_t ch);
void Process(wchar_t *str);

void Foo(Byte *buf, size_t nCount)
{
  for (size_t i = 0; i < nCount; ++i)
  {
    Process((wchar_t *)buf[i]);
  }
}

There is the Process function that can handle both separate characters and strings. There is also the 'Foo' function which receives a buffer-pointer at the input. This buffer is handled as an array of characters of the wchar_t type. But the code contains an error, so the analyzer warns you that the 'char' type is explicitly cast to the 'wchar_t *' type. The reason is that the "(wchar_t *)buf[i]" expression is equivalent to "(wchar_t *)(buf[i])". A value of the 'char' type is first fetched out of the array and only then cast to a pointer. This is the correct code:

Process(((wchar_t *)buf)[i]);

However, strange type conversions are not always errors. Consider a sample of safe code taken from a real application:

wchar_t *destStr = new wchar_t[len+1];
...
for (int j = 0 ; j < nbChar ; j++)
{
  if (Case == UPPERCASE)
    destStr[j] =
      (wchar_t)::CharUpperW((LPWSTR)destStr[j]);
  ...

Here you may see an explicit conversion of the 'wchar_t' type to 'LPWSTR' and vice versa. The point is that Windows API and the CharUpperW function can handle an input value both as a pointer and a character. This is the function's prototype:

LPTSTR WINAPI CharUpperW(__inout LPWSTR lpsz);

If the high-order part of the pointer is 0, the input value is considered a character. Otherwise, the function processes the string.

The analyzer knows about the CharUpperW function's behavior and considers this code safe. But it may produce a false alarm in some other similar situation.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-704.
 You can look at examples of errors detected by the V542 diagnostic.

# V543. It is odd that value 'X' is assigned to the variable 'Y' of HRESULT type.

The analyzer detected a potential error related to handling a variable of the HRESULT type.

HRESULT is a 32-bit value divided into three different fields: severity code, device code and error code. Such special constants as S_OK, E_FAIL, E_ABORT, etc. serve to handle HRESULT-values while the SUCCEEDED and FAILED macros are used to check HRESULT-values.

The V543 warning is generated if the analyzer detects an attempt to write value -1, true or false into a variable of the HRESULT type. Consider this sample:

HRESULT h;
...
if (bExceptionCatched)
{
  ShowPluginErrorMessage(pi, errorText);
  h = -1;
}

Writing the value "-1" is incorrect. If you want to report some unspecified error, use the value 0x80004005L (Unspecified failure). This constant and others like it are described in "WinError.h". This is the correct code:

if (bExceptionCatched)
{
  ShowPluginErrorMessage(pi, errorText);
  h = E_FAIL;
}


 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-704.
 You can look at examples of errors detected by the V543 diagnostic.

# V544. It is odd that the value 'X' of HRESULT type is compared with 'Y'.

The analyzer detected a potential error related to handling a variable of the HRESULT type.

HRESULT is a 32-bit value divided into three different fields: severity code, device code and error code. Such special constants as S_OK, E_FAIL, E_ABORT, etc. serve to handle HRESULT-values while the SUCCEEDED and FAILED macros are used to check HRESULT-values.

The V544 warning is generated if the analyzer detects an attempt to compare a variable of the HRESULT type to -1, true or false. Consider this sample:

HRESULT hr;
...
if (hr == -1)
{
}

Comparison of the variable to "-1" is incorrect. Error codes may differ. For instance, these may be 0x80000002L (Ran out of memory), 0x80004005L (Unspecified failure), 0x80070005L (General access denied error) and so on. To check the HRESULT-value in this case, we must use the FAILED macro defined in "WinError.h". This is the correct code:

if (FAILED(hr))
{
}


 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-253.

# V545. Such conditional expression of 'if' statement is incorrect for the HRESULT type value 'Foo'. The SUCCEEDED or FAILED macro should be used instead.

The analyzer detected a potential error related to handling a variable of the HRESULT type.

HRESULT is a 32-bit value divided into three different fields: severity code, device code and error code. Such special constants as S_OK, E_FAIL, E_ABORT, etc. serve to handle HRESULT-values while the SUCCEEDED and FAILED macros are used to check HRESULT-values.

The V545 warning is generated if a variable of the HRESULT type is used in the 'if' operator as a bool-variable. Consider this sample:

HRESULT hr;
...
if (hr)
{
}

'HRESULT' and 'bool' are two types absolutely different in meaning. This sample of comparison is incorrect. The HRESULT type can have many states including 0L (S_OK), 0x80000002L (Ran out of memory), 0x80004005L (unspecified failure) and so on. Note that the code of the state S_OK is 0.

To check the HRESULT-value, we must use macro SUCCEEDED or FAILED defined in "WinError.h". These are correct versions of code:

if (FAILED(hr))
{
}
if (SUCCEEDED(hr))
{
}


 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-253.
 You can look at examples of errors detected by the V545 diagnostic.

# V546. Member of a class is initialized with itself: 'Foo(Foo)'.

The analyzer detected a misprint in the fragment where a class member is being initialized with itself. Consider an example of a constructor:

C95(int field) : Field(Field)
{
...
}

The names of the parameter and the class member here differ only in one letter. Because of that, the programmer misprinted here causing the 'Field' member to remain uninitialized. This is the correct code:

C95(int field) : Field(field)
{
...
}

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-665.
 You can look at examples of errors detected by the V546 diagnostic.

# V547. Expression is always true/false.

The analyzer detected a potential error: a condition is always true or false. Such conditions do not always signal an error but still you must review such code fragments.

Consider a code sample:

LRESULT CALLBACK GridProc(HWND hWnd,
  UINT message, WPARAM wParam, LPARAM lParam)
{
  ...
  if (wParam<0)
  {
    BGHS[SelfIndex].rows = 0;
  }
  else
  {
    BGHS[SelfIndex].rows = MAX_ROWS;
  }
  ...
}

The "BGHS[SelfIndex].rows = 0;" branch here will never be executed because the wParam variable has an unsigned type WPARAM which is defined as "typedef UINT_PTR WPARAM".

Either this code contains a logical error or we may reduce it to one line: "BGHS[SelfIndex].rows = MAX_ROWS;".

Now let's examine a code sample which is correct yet potentially dangerous and contains a meaningless comparison:

unsigned int a = _ttoi(LPCTSTR(str1));
if((0 > a) || (a > 255))
{
  return(FALSE);
}

The programmer wanted to implement the following algorithm.

1) Convert a string into a number.

2) If the number lies outside the range [0..255], return FALSE.

The error here is in using the 'unsigned' type. If the _ttoi function returns a negative value, it will turn into a large positive value. For instance, value "-3" will become 4294967293. The comparison '0 > a' will always evaluate to false. The program works correctly only because the range of values [0..255] is checked by the 'a > 255' condition.

The analyzer will generate the following warning for this code fragment: "V547 Expression '0 > a' is always false. Unsigned type value is never < 0."

We should correct this fragment this way:

int a = _ttoi(LPCTSTR(str1));
if((0 > a) || (a > 255))
{
  return(FALSE);
}

Let's consider one special case. The analyzer generates the warning:

V547 Expression 's == "Abcd"' is always false. To compare strings you should use strcmp() function.

for this code:

const char *s = "Abcd";
void Test()
{
  if (s == "Abcd")
    cout << "TRUE" << endl;
  else
    cout << "FALSE" << endl;
}

But it is not quite true. This code can still print "TRUE" when the 's' variable and the Test() function are defined in the same module: the compiler does not create multiple identical constant strings but reuses one, so the code may appear to work correctly. However, you must understand that this code is very bad, and you should use special comparison functions instead.
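A hedged sketch of the content-based comparison (the SameText helper is a name made up for this example):

```cpp
#include <cstring>

// Compares the characters, not the addresses: this is what the
// 's == "Abcd"' check was probably meant to do.
inline bool SameText(const char *a, const char *b)
{
  return std::strcmp(a, b) == 0;
}
```

Unlike pointer comparison, this yields the same result whether or not the strings happen to share storage.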

Another example:

if (lpszHelpFile != 0)
{
  pwzHelpFile = ((_lpa_ex = lpszHelpFile) == 0) ?
    0 : Foo(lpszHelpFile);
  ...
}

This code works quite correctly but it is too tangled. The "((_lpa_ex = lpszHelpFile) == 0)" condition is always false, as the lpszHelpFile pointer is always not equal to zero. This code is difficult to read and should be rewritten.

This is the simplified code:

if (lpszHelpFile != 0)
{
  _lpa_ex = lpszHelpFile;
  pwzHelpFile = Foo(lpszHelpFile);
  ...
}

Another example:

SOCKET csd;
csd = accept(nsd, (struct sockaddr *) &sa_client, &clen);
if (csd < 0)
....

The accept function in Visual Studio header files returns a value that has the unsigned SOCKET type. That's why the check 'csd < 0' is invalid since its result is always false. The returned values must be explicitly compared to different constants, for instance, SOCKET_ERROR:

if (csd == SOCKET_ERROR)

The analyzer does not warn you about every condition that is always false or true; it diagnoses only those cases where an error is highly probable. Let's consider some samples that the analyzer considers absolutely correct:

// 1) Eternal loop
while (true)
{
  ...
}

// 2) Macro expanded in the Release version
// MY_DEBUG_LOG("X=", x);
0 && ("X=", x);

// 3) assert(false);
if (error) {
  assert(false);
  return -1;
}

Note. Every now and then, we get similar emails where users tell us they don't understand the V547 diagnostic. Let's make things clear. This is the typical scenario described in those emails:

for (int i = 0; i <= 1; i++)
{
  if(i == 0)
    A();
  else if(i == 1)        // V547
    B();
}

The analyzer issues the warning "Expression 'i == 1' is always true", but it's not actually true. The value of the variable can be not only one but also zero. Perhaps you should fix the diagnostic.

Explanation. The warning doesn't say that the value of the 'i' variable is always 1. It says that 'i' equals 1 in a particular line and points this line out.

When executing the check 'if (i == 1)', it is known for sure that the 'i' variable will be equal to 1. There are no other options. This code is of course not necessarily faulty, but it is definitely worth reviewing.

As you can see, the warning for this code is absolutely legal. If you encounter a warning like that, there are two ways to deal with it:

• If it's a bug, fix it.
• If it's not a bug but just an unnecessary check, remove it.

Simplified code:

for (int i = 0; i <= 1; i++)
{
  if(i == 0)
    A();
  else
    B();
}

If it's an unnecessary check, but you still don't want to change the code, use one of the false positive suppression options.

Let's take a look at another example, this time, related to enumeration types.

enum state_t { STATE_A = 0, STATE_B = 1 };

state_t GetState()
{
  if (someFailure)
    return (state_t)-1;
  return STATE_A;
}

state_t state = GetState();
if (state == STATE_A)    // <= V547

The author intended to return -1 if something went wrong while running the 'GetState' function.

The analyzer issues the "V547 CWE-571 Expression 'state == SOME_STATE' is always true" warning here. This may seem a false positive since we cannot predict the function's return value. However, the analyzer actually behaves this way due to undefined behavior in the code.

No named constant with the value of -1 is defined inside 'state_t', and the 'return (state_t)-1' statement can actually return any value due to undefined behavior. By the way, in this example, the analyzer warns about undefined behavior by issuing the "V1016 The value '-1' is out of range of enum values. This causes unspecified or undefined behavior" warning in the 'return (state_t)-1' line.

Therefore, since 'return (state_t)-1;' is in fact undefined behavior, the analyzer does not consider -1 a possible return value of the function. From the analyzer's perspective, the 'GetState' function can return only 'STATE_A'. This is the cause of the V547 warning.

In order to correct the issue, we should add a constant indicating an erroneous result to the enumeration:

enum state_t { STATE_ERROR = -1, STATE_A = 0, STATE_B = 1 };
state_t GetState()
{
  if (someFailure)
    return STATE_ERROR;
  return STATE_A;
}

Both the V547 and V1016 warnings will now be resolved.

• An interesting example where the V547 warning seems strange and incorrect, but on closer inspection the code turns out to be dangerous. Discussion at StackOverflow: Does PVS-Studio know about Unicode chars?
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-570, CWE-571.
 You can look at examples of errors detected by the V547 diagnostic.

# V548. Consider reviewing type casting. TYPE X[][] is not equivalent to TYPE **X.

The analyzer detected a potential error related to an explicit type conversion. An array defined as "type Array[3][4]" is cast to type "type **". This type conversion is most likely to be meaningless.

Types "type[a][b]" and "type **" are different data structures. Type[a][b] is a single memory area that you can handle as a two-dimensional array. Type ** is an array of pointers referring to some memory areas.

Here is an example:

void Foo(char **names, size_t count)
{
  for(size_t i=0; i<count; i++)
    printf("%s\n", names[i]);
}

void Foo2()
{
  char names[32][32];
  ...
  Foo((char **)names, 32); //Crash
}

This is the correct code:

void Foo2()
{
  char names[32][32];
  ...
  char *names_p[32];
  for(size_t i=0; i<32; i++)
    names_p[i] = names[i];
  Foo(names_p, 32); //OK
}
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-704.
 You can look at examples of errors detected by the V548 diagnostic.

# V549. The 'first' argument of 'Foo' function is equal to the 'second' argument.

The analyzer detected a potential error in the program: the same value is passed as two actual arguments of a function. Passing the same value twice is a normal thing for many functions. But if you deal with such functions as memmove, memcpy, strstr and strncmp, you must check the code.

Here is a sample from a real application:

#define str_cmp(s1, s2)  wcscmp(s1, s2)
...
v = abs(str_cmp(a->tdata, a->tdata));

The misprint here causes the wcscmp function to compare the string with itself. This is the correct code:

v = abs(str_cmp(a->tdata, b->tdata));

The analyzer generates the warning if the following functions are being handled: memcpy, memmove, memcmp, _memicmp, strstr, strspn, strtok, strcmp, strncmp, wcscmp, _stricmp, wcsncmp, etc. If you find a similar error that the analyzer fails to diagnose, please tell us the name of the function that must not take the same value as its first and second arguments.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-688.
 You can look at examples of errors detected by the V549 diagnostic.

# V550. An odd precise comparison. It's probably better to use a comparison with defined precision: fabs(A - B) < Epsilon or fabs(A - B) > Epsilon.

The analyzer detected a potential error: the == or != operator is used to compare floating point numbers. Precise comparison might often cause an error.

Consider this sample:

double a = 0.5;
if (a == 0.5) //OK
  x++;

double b = sin(M_PI / 6.0);
if (b == 0.5) //ERROR
  x++;

The first comparison 'a == 0.5' is true. The second comparison 'b == 0.5' may be both true and false. The result of the 'b == 0.5' expression depends upon the processor, compiler's version and settings being used. For instance, the 'b' variable's value was 0.49999999999999994 when we used the Visual C++ 2010 compiler. A more correct version of this code looks this way:

double b = sin(M_PI / 6.0);
if (fabs(b - 0.5) < DBL_EPSILON)
  x++;

In this case, the comparison with the DBL_EPSILON tolerance is true because the result of the sin() function lies within the range [-1, 1]. But if we handle values larger than several units, tolerances like FLT_EPSILON and DBL_EPSILON will be too small. And vice versa, if we handle values like 0.00001, they will be too big. Each time you must choose a tolerance adequate to the range of possible values.

Question: how do I compare two double-variables then?

double a = ...;
double b = ...;
if (a == b) // how?
{
}

There is no single right answer. In most cases, you may compare two variables of the double type by writing the following code:

if (fabs(a-b) <= DBL_EPSILON * fmax(fabs(a), fabs(b)))
{
}

But be careful with this formula: it works only for numbers with the same sign. Besides, in a series of many calculations the error accumulates steadily, which may make the DBL_EPSILON constant too small a tolerance.
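One common way to make the formula more robust near zero is to combine an absolute and a relative tolerance. The helper below is a sketch; the tolerance values are illustrative assumptions and must be tuned to your data range:

```cpp
#include <algorithm>
#include <cfloat>
#include <cmath>

// Equal if within absEps (useful near zero) or within relEps scaled by
// the larger magnitude (useful for big values, where DBL_EPSILON alone
// would be far too small a tolerance).
inline bool AlmostEqual(double a, double b,
                        double absEps = DBL_EPSILON,
                        double relEps = 16 * DBL_EPSILON)
{
  const double diff = std::fabs(a - b);
  if (diff <= absEps)
    return true;
  return diff <= relEps * std::max(std::fabs(a), std::fabs(b));
}
```

The absolute branch handles values whose difference underflows any relative scale; the relative branch scales the tolerance with the operands.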

Well, can I perform precise comparison of floating point values?

Sometimes, yes, but rather rarely. You may perform such a comparison when the values being compared are, by their meaning, one and the same value.

Here is a sample where you may use precise comparison:

// -1 - value is not initialized.
float val = -1.0f;
if (Foo1())
val = 123.0f;
if (val == -1.0f) //OK
{
}

In this case, the comparison with value "-1" is permissible because it is this very value which we used to initialize the variable before.

We cannot cover the topic of comparing float/double types within the scope of documentation, so please refer to additional sources given at the end of this article.

The analyzer can only point out potentially dangerous code fragments where a comparison may produce an unexpected result. But it is only the programmer who can tell whether these code fragments really contain errors. Nor can we give precise recommendations in the documentation, since the tasks where floating point types are used are too diverse.

The diagnostic message is not generated when two identical expressions of the 'float' or 'double' type are compared. Such a comparison makes it possible to identify a NaN value. An example of code implementing this kind of check:

bool isnan(double X) { return X != X; }
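For reference, since C++11 the standard library offers the same check through std::isnan, so you do not have to rely on the self-comparison trick directly:

```cpp
#include <cmath>

// Wraps std::isnan; behaves like the X != X check above.
inline bool IsNanValue(double x)
{
  return std::isnan(x);
}
```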


 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-682.
 You can look at examples of errors detected by the V550 diagnostic.

# V551. The code under this 'case' label is unreachable.

The analyzer detected a potential error: one of the switch() operator's branches never gets control. The reason is that the switch() operator's argument cannot accept the value defined in the case operator. Consider this sample:

char ch = strText[i];
switch (ch)
{
case '<':
  strHTML += "&lt;";
  bLastCharSpace = FALSE;
  nNonbreakChars++;
  break;
case '>':
  strHTML += "&gt;";
  bLastCharSpace = FALSE;
  nNonbreakChars++;
  break;
case 0xB7:
case 0xBB:
  strHTML += ch;
  strHTML += "<wbr>";
  bLastCharSpace = FALSE;
  nNonbreakChars = 0;
  break;
...
}

The branch following "case 0xB7:" and "case 0xBB:" in this code will never get control. The 'ch' variable has the 'char' type and therefore the range of its values is [-128..127]. The comparisons "ch == 0xB7" and "ch==0xBB" will always be false. To make the code correct, we must cast the 'ch' variable to the 'unsigned char' type:

unsigned char ch = strText[i];
switch (ch)
{
...
case 0xB7:
case 0xBB:
  strHTML += ch;
  strHTML += "<wbr>";
  bLastCharSpace = FALSE;
  nNonbreakChars = 0;
  break;
...
}
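The signedness effect can be shown directly with explicitly signed and unsigned characters (a sketch; the conversion of 183 to signed char is well defined as -73 since C++20, and behaves the same way on common compilers before that):

```cpp
// After integral promotion, a signed char holding the bit pattern 0xB7
// becomes the negative int -73, which can never equal the constant 0xB7 (183).
inline bool MatchesSigned(signed char ch)     { return ch == 0xB7; }

// An unsigned char promotes to 183, so the comparison can succeed.
inline bool MatchesUnsigned(unsigned char ch) { return ch == 0xB7; }
```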

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-561.

# V552. A bool type variable is being incremented. Perhaps another variable should be incremented instead.

The analyzer detected a potentially dangerous construct in code where a variable of the bool type is being incremented:

bool bValue = false;
...
bValue++;

First, the C++ language's standard reads:

The use of an operand of type bool with the postfix ++ operator is deprecated.

It means that we should not use such a construct.

Second, it is better to assign the 'true' value explicitly to this variable. This code is clearer:

bValue = true;

Third, it might be that there is a misprint in the code and the programmer actually intended to increment a different variable. For example:

bool bValue = false;
int iValue = 1;
...
if (bValue)
  bValue++;

A wrong variable was used by accident here while it was meant to be this code:

bool bValue = false;
int iValue = 1;
...
if (bValue)
  iValue++;
 You can look at examples of errors detected by the V552 diagnostic.

# V553. The length of function's body or class's declaration is more than 2000 lines long. You should consider refactoring the code.

The analyzer detected a class definition or function body that occupies more than 2000 lines. This class or function does not necessarily contain errors yet the probability is very high. The larger a function is, the more probable it is to make an error and the more difficult it is to debug. The larger a class is, the more difficult it is to examine its interfaces.

This message is a good opportunity to find time for code refactoring at last. Yes, you always have something urgent to do, but the larger your functions and classes are, the more time you will spend on supporting the old code and eliminating errors in it instead of writing new functionality.

References:

• Steve McConnell, "Code Complete, 2nd Edition" Microsoft Press, Paperback, 2nd edition, Published June 2004, 914 pages, ISBN: 0-7356-1967-0. (Part 7.4. How Long Can a Routine Be?).

# V554. Incorrect use of smart pointer.

The analyzer detected an issue where the use of a smart pointer could lead to undefined behavior, in particular to heap corruption, abnormal program termination, or incomplete destruction of objects. The error is that different methods are used to allocate and free the memory.

Consider a sample:

void Foo()
{
  struct A
  {
    A() { cout << "A()" << endl; }
    ~A() { cout << "~A()" << endl; }
  };

  std::unique_ptr<A> p(new A[3]);
}

By default, the unique_ptr class uses the 'delete' operator to release memory. That is why only one object of the 'A' class will be destroyed and the following text will be displayed:

A()
A()
A()
~A()

To fix this error, we must specify that the class must use the 'delete []' operator. Here is the correct code:

std::unique_ptr<A[]> p(new A[3]);

Now the same number of constructors and destructors will be called and we will see this text:

A()
A()
A()
~A()
~A()
~A()
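Since C++14 you can also avoid choosing the deleter by hand: std::make_unique<A[]> deduces the array form, so 'delete []' is paired with 'new []' automatically. A small sketch with a destruction counter (the counter is added only to make the behavior observable):

```cpp
#include <memory>

struct A
{
  static int destroyed;
  ~A() { ++destroyed; }
};
int A::destroyed = 0;

// make_unique<A[]> produces std::unique_ptr<A[]>, whose deleter calls
// 'delete []', so every element is destroyed when 'p' goes out of scope.
void MakeAndDrop()
{
  auto p = std::make_unique<A[]>(3);
}
```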

Consider another sample:

std::unique_ptr<int []> p((int *)malloc(sizeof(int) * 5));

The function 'malloc()' is used to allocate memory while the 'delete []' operator is used to release it. It is incorrect and we must specify that the 'free()' function must be used to release memory. This is the correct code:

int *d = (int *)std::malloc(sizeof(int) * 5);
std::unique_ptr<int, void (*)(void*)> p(d, std::free);

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-762.
 You can look at examples of errors detected by the V554 diagnostic.

# V555. The expression of the 'A - B > 0' kind will work as 'A != B'.

The analyzer detected a potential error in an expression of "A - B > 0" type. It is highly probable that the condition is wrong if the "A - B" subexpression has the unsigned type.

The "A - B > 0" condition holds in all the cases when 'A' is not equal to 'B'. It means that we may write the "A != B" expression instead of "A - B > 0". However, the programmer must have intended to implement quite a different thing.

Consider this sample:

unsigned int *B;
...
if (B[i]-70 > 0)

The programmer wanted to check whether the i-th item of the B array is above 70. He could have written it as "B[i] > 70" but, for some reason, wrote "B[i]-70 > 0" and made a mistake. He forgot that the items of the 'B' array have the 'unsigned' type, which means that the "B[i]-70" expression has the 'unsigned' type too. So the condition is always true except for the case when 'B[i]' equals 70.

Let's clarify this case.

If 'B[i]' is above 70, then "B[i]-70" is above 0.

If 'B[i]' is below 70, then we will get an overflow of the unsigned type and a very large value as a result. Let B[i] == 50. Then "B[i]-70" = 50u - 70u = 0xFFFFFFECu = 4294967276. Surely, 4294967276 > 0.

A demonstration sample:

unsigned A;
A = 10; cout << "A=10 " << (A-70 > 0) << endl;
A = 70; cout << "A=70 " << (A-70 > 0) << endl;
A = 90; cout << "A=90 " << (A-70 > 0) << endl;
// Will be printed
A=10 1
A=70 0
A=90 1

The first way to correct the code:

unsigned int *B;
...
if (B[i] > 70)

The second way to correct the code:

int *B;
...
if (B[i]-70 > 0)

Note that an expression of the "A - B > 0" kind does not always signal an error. Consider a sample where the analyzer generates a false alarm:

// Functions GetLength() and GetPosition() return
// value of size_t type.
while ((inStream.GetLength() - inStream.GetPosition()) > 0)
{ ... }

GetLength() is always above or equal to GetPosition() here, so the code is correct. To suppress the false alarm, we may add the comment //-V555 or rewrite the code in the following way:

while (inStream.GetLength() != inStream.GetPosition())
{ ... }

Here is another case when no error occurs.

__int64 A;
__uint32 B;
...
if (A - B > 0)

The "A - B" subexpression here has the signed type __int64 and no error occurs. The analyzer does not generate warnings in such cases.

 You can look at examples of errors detected by the V555 diagnostic.

# V556. The values of different enum types are compared.

The analyzer detected a potential error: code contains comparison of enum values which have different types.

Consider a sample:

enum ErrorTypeA { E_OK, E_FAIL };
enum ErrorTypeB { E_ERROR, E_SUCCESS };
void Foo(ErrorTypeB status) {
  if (status == E_OK)
  { ... }
}

The programmer used a wrong name in the comparison by accident, so the program's logic is disrupted. This is the correct code:

void Foo(ErrorTypeB status) {
  if (status == E_SUCCESS)
  { ... }
}

Comparison of values of different enum types is not necessarily an error, but you must review such code.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-697.
 You can look at examples of errors detected by the V556 diagnostic.

# V557. Array overrun is possible.

The analyzer detected a potential memory access outside an array. The most common case is an error where the '\0' character is written just past the last item of an array. Let's examine a sample of this error:

struct IT_SAMPLE
{
unsigned char filename[14];
...
};

static int it_riff_dsmf_process_sample(
IT_SAMPLE * sample, const unsigned char * data)
{
memcpy( sample->filename, data, 13 );
sample->filename[ 14 ] = 0;
...
}

The last array's item has index 13, not 14. That is why the correct code is this one:

sample->filename[13] = 0;

Of course, you'd better use an expression involving the sizeof() operator instead of a constant index value in such cases. However, remember that you may make a mistake there too. For example:

typedef wchar_t letter;
letter    name[30];
...
name[sizeof(name) - 1] = L'\0';

At first sight, the "sizeof(name) - 1" expression is right. But the programmer forgot that he handled the 'wchar_t' type and not 'char'. As a result, the '\0' character is written far outside the array's boundaries. This is the correct code:

name[sizeof(name) / sizeof(*name) - 1] = L'\0';

To simplify writing of such constructs, you may use this special macro:

#define str_len(arg) ((sizeof(arg) / sizeof(arg[0])) - 1)
name[str_len(name)] = L'\0';

The analyzer detects some errors when the index is represented by a variable whose value might run out of the array's boundaries. For example:

int buff[25];
for (int i=0; i <= 25; i++)
buff[i] = 10;

This is the correct code:

int buff[25];
for (int i=0; i < 25; i++)
buff[i] = 10;

Note that the analyzer might make mistakes when handling such value ranges and generate false alarms.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-119, CWE-125, CWE-787.
 You can look at examples of errors detected by the V557 diagnostic.

# V558. Function returns the pointer/reference to temporary local object.

The analyzer detected an issue where a function returns a pointer to a local object. This object is destroyed when the function exits, so the pointer to it can no longer be used. In the most common case, this diagnostic message is generated for the following code:

float *F()
{
float f = 1.0;
return &f;
}

Of course, the error would hardly be present in such a form in real code. Let's consider a more realistic example.

int *Foo()
{
int A[10];
// ...
if (err)
return 0;

int *B = new int[10];
memcpy(B, A, sizeof(A));

return A;
}

Here, we work with the local array A. Under a certain condition, we must return the pointer to the new array B. But because of the misprint, the A array is returned instead, which will cause unexpected program behavior or a crash. This is the correct code:

int *Foo()
{
...
int *B = new int[10];
memcpy(B, A, sizeof(A));
return B;
}

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-562.
 You can look at examples of errors detected by the V558 diagnostic.

# V559. Suspicious assignment inside the conditional expression of 'if/while/for' statement.

The analyzer detected an issue that has to do with using the assignment operator '=' in the conditional expression of an 'if' or 'while' statement. Such a construct usually indicates the presence of a mistake. It is very likely that the programmer intended to use the '==' operator instead of '='.

Consider the following example:

const int MAX_X = 100;
int x;
...
if (x = MAX_X)
{ ... }

There is a typo in this code: the value of the 'x' variable will be modified instead of being compared with the constant MAX_X:

if (x == MAX_X)
{ ... }

Using assignments inside conditions is not always an error, of course. Many programmers use this technique to make code shorter. However, it is bad style, because it takes extra time to figure out whether such a construct results from a typo or from the programmer's intention to shorten the code.

Instead of using assignments inside conditional expressions, we recommend implementing them as a separate operation or enclosing them in additional parentheses:

while ((x = Foo()))
{
...
}

Code like this will be interpreted by both the analyzer and most compilers as correct. Besides, it tells other programmers that there is no error here.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-481.
 You can look at examples of errors detected by the V559 diagnostic.

# V560. A part of conditional expression is always true/false.

The analyzer detected a potential error inside a logical condition: a part of the condition is always true or always false, which is suspicious.

Consider this sample:

#define REO_INPLACEACTIVE (0x02000000L)
...
if (reObj.dwFlags && REO_INPLACEACTIVE)
m_pRichEditOle->InPlaceDeactivate();

The programmer wanted to check a particular bit in the dwFlags variable but made a misprint by writing the '&&' operator instead of '&'. This is the correct code:

if (reObj.dwFlags & REO_INPLACEACTIVE)
m_pRichEditOle->InPlaceDeactivate();

Let's examine another sample:

if (a = 10 || a == 20)

The programmer accidentally wrote the assignment operator '=' instead of the comparison operator '=='. From the viewpoint of the C++ language, this expression is identical to "if (a = (10 || a == 20))".

The analyzer considers the "10 || a == 20" expression dangerous because its left part is a constant. This is the correct code:

if (a == 10 || a == 20)

Sometimes the V560 warning indicates just a surplus code, not an error. Consider the following sample:

if (!mainmenu) {
if (freeze || winfreeze ||
(mainmenu && gameon) ||
(!gameon && gamestarted))
drawmode = normalmode;
}

The analyzer will warn you that the 'mainmenu' variable in the "(mainmenu && gameon)" subexpression is always equal to 0; this follows from the "if (!mainmenu)" check above. The code can be quite correct, but it is redundant and should be simplified, which will make the program clearer to other developers.

This is the simplified code:

if (!mainmenu) {
if (freeze || winfreeze ||
(!gameon && gamestarted))
drawmode = normalmode;
}

This is a more interesting case.

int16u Integer = ReadInt16u(Liste);
int32u Exponent=(Integer>>10) & 0xFF;
if (Exponent==0 || Exponent==0xFF)  // V560
return 0;

The user who sent us this example was puzzled by the analyzer issuing a warning saying that the 'Exponent==0xFF' subexpression was always false. Let's figure this out. To do that, we need to count carefully.

The range of values of 16-bit unsigned variable 'Integer' is [0..0b1111111111111111], i.e. [0..0xFFFF].

Shifting by 10 bits to the right reduces the range: [0..0b111111], i.e. [0..0x3F].

After that, the '& 0xFF' operation is executed.

As a result, there's no way you can get the value '0xFF' - only '0x3F' at most.

Some C++ constructs are considered safe even if a part of an expression inside them is a constant. Here are some samples when the analyzer considers the code safe:

• a subexpression contains operators sizeof(): if (a == b && sizeof(T) < sizeof(__int64)) {};
• an expression is situated inside a macro: assert(false);
• two numerical constants are being compared: if (MY_DEFINE_BITS_COUNT == 4) {};
• etc.
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-570, CWE-571.
 You can look at examples of errors detected by the V560 diagnostic.

# V561. It's probably better to assign value to 'foo' variable than to declare it anew.

The analyzer detected a potential error: the code defines and initializes a variable that is not used afterwards, while the outer scope contains a variable of the same name and type. It is highly probable that the programmer intended to use the existing variable instead of defining a new one.

Let's examine this sample:

BOOL ret = TRUE;
if (m_hbitmap)
BOOL ret = picture.SaveToFile(fptr);

The programmer defined a new variable 'ret' by accident, which causes the outer variable to always keep the TRUE value regardless of whether the picture is saved to the file successfully. This is the correct code:

BOOL ret = TRUE;
if (m_hbitmap)
ret = picture.SaveToFile(fptr);

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-563.
 You can look at examples of errors detected by the V561 diagnostic.

# V562. It's odd to compare a bool type value with a value of N.

The analyzer detected an issue when a value of the bool type is compared to a number. Most likely, there is an error.

Consider this sample:

if (0 < A < 5)

A programmer not well familiar with the C++ language wanted to use this code to check whether the value lies within the range between 0 and 5. Actually, the expression is evaluated in the following sequence: ((0 < A) < 5). The result of the "0 < A" expression has the bool type and therefore is always less than 5.

This is the correct code for the check:

if (0 < A && A < 5)

The previous example resembles a mistake usually made by students, but even skilled developers are not immune to such errors.

Let's consider another sample:

if (! (fp = fopen(filename, "wb")) == -1) {
perror("opening image file failed");
exit(1);
}

Here we have two errors of different kinds at once. First, the 'fopen' function returns a pointer, so the returned value should be compared with NULL. The programmer confused 'fopen' with the 'open' function; it is the latter that returns '-1' in case of an error. Second, the negation operator '!' is applied first, and only then the result is compared with '-1'. Comparing a bool value with '-1' makes no sense, which is why the analyzer warned about this code.

This is the correct code:

if ( (fp = fopen(filename, "wb")) == NULL) {
perror("opening image file failed");
exit(1);
}

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-563.
 You can look at examples of errors detected by the V562 diagnostic.

# V563. It is possible that this 'else' branch must apply to the previous 'if' statement.

The analyzer detected a potential error in logical conditions: the code's logic does not match its formatting.

Consider this sample:

if (X)
    if (Y) Foo();
else
    z = 1;

The formatting misleads you: it seems that the "z = 1" assignment takes place if X == false. But the 'else' branch refers to the nearest 'if' statement. In other words, this code is actually equivalent to the following:

if (X)
{
    if (Y)
        Foo();
    else
        z = 1;
}

So, the code does not work the way it seems at first sight.

If you get the V563 warning, it may mean one of the two following things:

1) Your code is badly formatted and there is actually no error. In this case you should reformat the code so that it becomes clearer and the V563 warning is no longer generated. Here is a sample of correct formatting:

if (X)
    if (Y)
        Foo();
    else
        z = 1;

2) A logical error has been found. Then you may correct the code, for instance, this way:

if (X) {
    if (Y)
        Foo();
} else {
    z = 1;
}

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-670.
 You can look at examples of errors detected by the V563 diagnostic.

# V564. The '&' or '|' operator is applied to bool type value. You've probably forgotten to include parentheses or intended to use the '&&' or '||' operator.

The analyzer detected a potential error: operators '&' and '|' handle bool-type values. Such expressions are not necessarily errors but they usually signal misprints or condition errors.

Consider this sample:

int a, b;
#define FLAG 0x40
...
if (a & FLAG == b)
{
}

This example is a classic one. A programmer can easily be mistaken about operator precedence. It seems that the evaluation runs in this sequence: "(a & FLAG) == b". But actually it is "a & (FLAG == b)". Most likely, it is an error.

The analyzer will generate a warning here because it is odd to use the '&' operator for variables of int and bool types.

If it turns out that the code does contain an error, you may fix it the following way:

if ((a & FLAG) == b)

Of course, the code might appear correct and work as intended. But you'd still better rewrite it to make it clearer. Use the '&&' operator or additional parentheses:

if (a && FLAG == b)
if (a & (FLAG == b))

The V564 warning will not be generated after these corrections are done while the code will get easier to read.

Consider another sample:

#define SVF_CASTAI 0x00000010
if ( !ent->r.svFlags & SVF_CASTAI ) {
...
}

Here we have an obvious error. The "!ent->r.svFlags" subexpression is evaluated first, yielding either true or false. But it does not matter: whether we compute "true & 0x00000010" or "false & 0x00000010", the result is the same. The condition in this sample is always false.

This is the correct code:

if ( ! (ent->r.svFlags & SVF_CASTAI) )

Note. The analyzer will not generate the warning if there are bool-type values to the left and to the right of the '&' or '|' operator. Although such code does not look too smart, still it is correct. Here is a code sample the analyzer considers safe:

bool X, Y;
...
if (X | Y)
{ ... }
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-480.
 You can look at examples of errors detected by the V564 diagnostic.

# V565. An empty exception handler. Silent suppression of exceptions can hide the presence of bugs in source code during testing.

An exception handler was found that does not do anything. Consider this code:

try {
...
}
catch (MyExcept &)
{
}

Of course, this code is not necessarily incorrect. But it is very odd to suppress an exception by doing nothing. Such exception handling might conceal defects in the program and complicate the testing process.

You must react to exceptions somehow. For instance, you may add "assert(false)" at least:

try {
...
}
catch (MyExcept &)
{
assert(false);
}

Programmers sometimes use such constructs to return control from a number of nested loops or recursive functions. But it is bad practice because exceptions are very resource-intensive operations. They must be used according to their intended purpose, i.e. for possible contingencies that must be handled on a higher level.

The only place where you may simply suppress exceptions is a destructor. A destructor must not throw exceptions, but it is often not quite clear what to do with exceptions there, so the exception handler might well remain empty. The analyzer does not warn you about empty handlers inside destructors:

CClass::~CClass()
{
try {
DangerousFreeResource();
}
catch (...) {
}
}
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-544.
 You can look at examples of errors detected by the V565 diagnostic.

# V566. The integer constant is converted to pointer. Possibly an error or a bad coding style.

The analyzer detected an explicit conversion of a numerical value to the pointer type. This warning is usually generated for code fragments where numbers are used for flagging objects' states. Such methods are not necessarily errors but usually signal a bad code design. Consider this sample:

const DWORD SHELL_VERSION = 0x4110400;
...
char *ptr = (char*) SHELL_VERSION;
...
if (ptr == (char*) SHELL_VERSION)

The constant value which marks some special state is saved into the pointer. This code might work well for a long time, but if an object ever gets created at the address 0x4110400, we will not be able to tell whether this is the magic flag or just an object. If you want to use a special flag, you'd better write it this way:

const DWORD SHELL_VERSION = 0x4110400;
...
char *ptr = (char*)(&SHELL_VERSION);
...
if (ptr == (char*)(&SHELL_VERSION))

Note. To reduce the number of false alarms, the V566 message is not generated in a range of cases. For instance, it does not appear if the magic number is -1, 0, 0xcccccccc or 0xdeadbeef, or if a number within the range from 0 to 65535 is cast to a string pointer. This lets the analyzer skip correct code fragments like the following one:

CString sMessage( (LPCSTR)IDS_FILE_WAS_CHANGED ) ;

This method of loading a string from resources is rather popular but certainly you'd better use MAKEINTRESOURCE. There are some other exceptions as well.

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-587.

# V567. The modification of a variable is unsequenced relative to another operation on the same variable. This may lead to undefined behavior.

The analyzer detected an expression leading to undefined behavior. A variable is used several times between two sequence points while its value is changing. We cannot predict the result of such an expression. Let's consider the notions "undefined behavior" and "sequence point" in detail.

Undefined behavior is a feature of some programming languages — most famously C/C++. In these languages, to simplify the specification and allow some flexibility in implementation, the specification leaves the results of certain operations specifically undefined.

For example, in C the use of any automatic variable before it has been initialized yields undefined behavior, as do division by zero and indexing an array outside of its defined bounds. This specifically frees the compiler to do whatever is easiest or most efficient, should such a program be submitted. In general, any behavior afterwards is also undefined. In particular, it is never required that the compiler diagnose undefined behavior — therefore, programs invoking undefined behavior may appear to compile and even run without errors at first, only to fail on another system, or even on another date. When an instance of undefined behavior occurs, so far as the language specification is concerned anything could happen, maybe nothing at all.

A sequence point in imperative programming defines any point in a computer program's execution at which it is guaranteed that all side effects of previous evaluations will have been performed, and no side effects from subsequent evaluations have yet been performed. They are often mentioned in reference to C and C++, because the result of some expressions can depend on the order of evaluation of their subexpressions. Adding one or more sequence points is one method of ensuring a consistent result, because this restricts the possible orders of evaluation.

It is worth noting that C++11 replaced sequence points with the terms "sequenced before/after", "sequenced" and "unsequenced". Many expressions that caused undefined behavior in C++03 became well-defined (for instance, i = ++i). These rules were further refined in C++14 and C++17. The analyzer issues the warning regardless of the standard used: the fact that expressions like 'i = ++i' are now well-defined is not an excuse to use them. It is better to rewrite such expressions to make them clearer. Besides, if you ever need to support an earlier standard, you may get a bug that is hard to debug.

i = ++i + 2;       // undefined behavior until C++11
i = i++ + 2;       // undefined behavior until C++17
f(i = -2, i = -2); // undefined behavior until C++17
f(++i, ++i);       // undefined behavior until C++17,
// unspecified after C++17
i = ++i + i++;     // undefined behavior
cout << i << i++;  // undefined behavior until C++17
a[i] = i++;        // undefined behavior until C++17
n = ++i + i;       // undefined behavior

Sequence points come into play when the same variable is modified more than once within a single expression. An often-cited example is the expression i=i++, which both assigns i to itself and increments i. The final value of i is ambiguous, because, depending on the language semantics, the increment may occur before, after or interleaved with the assignment. The definition of a particular language might specify one of the possible behaviors or simply say the behavior is undefined. In C and C++, evaluating such an expression yields undefined behavior.

C and C++ define the following sequence points:

• Between evaluation of the left and right operands of the && (logical AND), || (logical OR), and comma operators. For example, in the expression *p++ != 0 && *q++ != 0, all side effects of the sub-expression *p++ != 0 are completed before any attempt to access q.
• Between the evaluation of the first operand of the ternary "question-mark" operator and the second or third operand. For example, in the expression a = (*p++) ? (*p++) : 0 there is a sequence point after the first *p++, meaning it has already been incremented by the time the second instance is executed.
• At the end of a full expression. This category includes expression statements (such as the assignment a=b;), return statements, the controlling expressions of if, switch, while, or do-while statements, and all three expressions in a for statement.
• Before a function is entered in a function call. The order in which the arguments are evaluated is not specified, but this sequence point means that all of their side effects are complete before the function is entered. In the expression f(i++) + g(j++) + h(k++), f is called with a parameter of the original value of i, but i is incremented before entering the body of f. Similarly, j and k are updated before entering g and h respectively. However, it is not specified in which order f(), g(), h() are executed, nor in which order i, j, k are incremented. The values of j and k in the body of f are therefore undefined.[3] Note that a function call f(a,b,c) is not a use of the comma operator and the order of evaluation for a, b, and c is unspecified.
• At a function return, after the return value is copied into the calling context. (This sequence point is only specified in the C++ standard; it is present only implicitly in C[4].)
• At the end of an initializer; for example, after the evaluation of 5 in the declaration int a = 5;.
• In C++, overloaded operators act as functions, so a call of an overloaded operator is a sequence point.

Now let's consider several samples causing undefined behavior:

int i, j;
...
X[i]=++i;
X[i++] = i;
j = i + X[++i];
i = 6 + i++ + 2000;
j = i++ + ++i;
i = ++i + ++i;

We cannot predict the calculation results in all these cases. Of course, these samples are artificial and we can notice the danger right away. So let's examine a code sample taken from a real application:

while (!(m_pBitArray[m_nCurrentBitIndex >> 5] &
Powers_of_Two_Reversed[m_nCurrentBitIndex++ & 31]))
{}
return (m_nCurrentBitIndex - BitInitial - 1);

The compiler may evaluate either the left or the right operand of the '&' operator first. This means that by the time "m_pBitArray[m_nCurrentBitIndex >> 5]" is calculated, the m_nCurrentBitIndex variable might already be incremented by one, or it might not be incremented yet.

This code may work well for a long time. However, you should keep in mind that it will behave correctly only when it is built in some particular compiler version with a fixed set of compilation options. This is the correct code:

while (!(m_pBitArray[m_nCurrentBitIndex >> 5] &
Powers_of_Two_Reversed[m_nCurrentBitIndex & 31]))
{ ++m_nCurrentBitIndex; }
return (m_nCurrentBitIndex - BitInitial);

This code does not contain ambiguities anymore. We also got rid of the magic constant "-1".

Programmers often think that undefined behavior may occur only when postincrement is used, while preincrement is safe. That is not so. Below is an example from a discussion on this subject.

Question:

I downloaded the trial version of your studio, ran it on my project and got this warning: V567 Undefined behavior. The 'i_acc' variable is modified while being used twice between sequence points.

The code

i_acc = (++i_acc) % N_acc;

It seems to me that there is no undefined behavior because the i_acc variable does not participate in the expression twice.

Answer:

There is undefined behavior here. It's another matter that the probability of an error occurring is rather small in this case. The '=' operator is not a sequence point. This means that the compiler may first put the value of the i_acc variable into a register and then increment the value in the register. After that it evaluates the expression, writes the result into the i_acc variable, and then writes the register with the incremented value into the same variable again. As a result we get code like this:

REG = i_acc;
REG++;
i_acc = (REG) % N_acc;
i_acc = REG;

The compiler has the absolute right to do so. Of course, in practice it will most likely increment the variable's value at once, and everything will be calculated as the programmer expects. But you should not rely on that.

Consider one more situation with function calls.

The order of evaluating function arguments is not defined. If a variable whose value is changing is used among the arguments, the result is unpredictable. This is unspecified behavior. Consider this sample:

int A = 0;
Foo(A = 2, A);

The 'Foo' function may be called both with the arguments (2, 0) and with the arguments (2, 2). The order in which the function arguments will be calculated depends on the compiler and optimization settings.


 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-758.
 You can look at examples of errors detected by the V567 diagnostic.

# V568. It's odd that the argument of sizeof() operator is the expression.

The analyzer detected a potential error: a suspicious expression serves as an argument of the sizeof() operator. Suspicious expressions fall into three groups:

1. An expression attempts to change some variable.

The sizeof() operator determines the type of the expression and returns the size of that type. The expression itself, however, is not evaluated. Here is a sample of suspicious code:

int A;
...
size_t size = sizeof(A++);

This code does not increment the 'A' variable. If you need to increment 'A', you'd better rewrite the code in the following way:

size_t size = sizeof(A);
A++;

2. Operations of addition, multiplication and the like are used in the expression.

Complex expressions signal errors. These errors are usually related to misprints. For example:

SendDlgItemMessage(
hwndDlg, RULE_INPUT_1 + i, WM_GETTEXT,
sizeof(buff - 1), (LPARAM) input_buff);

The programmer wrote "sizeof(buff - 1)" instead of "sizeof(buff) - 1". This is the correct code:

SendDlgItemMessage(
hwndDlg, RULE_INPUT_1 + i, WM_GETTEXT,
sizeof(buff) - 1, (LPARAM) input_buff);

Here is another sample of a misprint in program text:

memset(tcmpt->stepsizes, 0,
sizeof(tcmpt->numstepsizes * sizeof(uint_fast16_t)));

The correct code:

memset(tcmpt->stepsizes, 0,
tcmpt->numstepsizes * sizeof(uint_fast16_t));

3. The argument of the sizeof() operator is a pointer to a class. In most cases this shows that the programmer forgot to dereference the pointer.

Example:

class MyClass
{
public:
int a, b, c;
size_t getSize() const
{
return sizeof(this);
}
};

The getSize() method returns the size of the pointer, not of the object. Here is a correct variant:

size_t getSize() const
{
return sizeof(*this);
}

 You can look at examples of errors detected by the V568 diagnostic.

# V569. Truncation of constant value.

The analyzer detected a potential error: a constant value is truncated when it is assigned into a variable. Consider this sample:

int A[100];
unsigned char N = sizeof(A);

The size of the 'A' array (in Win32/Win64) is 400 bytes. The value range for unsigned char is 0..255. Consequently, the 'N' variable cannot store the size of the 'A' array.

The V569 warning tells you that either you have chosen a wrong type to store this size, or you actually intended to calculate the number of items in the array rather than the array's size.

If you have chosen a wrong type, you may correct the code this way:

size_t N = sizeof(A);

If you intended to calculate the number of items in the array, you should rewrite the code this way:

unsigned char N = sizeof(A) / sizeof(*A);

 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-197.
 You can look at examples of errors detected by the V569 diagnostic.

# V570. The variable is assigned to itself.

The analyzer detected a potential error: a variable is assigned to itself. Consider this sample:

dst.m_a = src.m_a;
dst.m_b = dst.m_b;

The value of the 'dst.m_b' variable will not change because of the misprint. This is the correct code:

dst.m_a = src.m_a;
dst.m_b = src.m_b;

The analyzer issues a warning not only for the copy assignment, but for the move assignment too.

dst.m_a = std::move(dst.m_a);

The analyzer does not produce the warning every time it detects assignment of a variable to itself. For example, if the variables are enclosed in parentheses. This method is often used to suppress compiler-generated warnings. For example:

int Foo(int foo)
{
UNREFERENCED_PARAMETER(foo);
return 1;
}

The UNREFERENCED_PARAMETER macro is defined in the WinNT.h file in the following way:

#define UNREFERENCED_PARAMETER(P)          \
{ \
(P) = (P); \
} 

The analyzer knows about such cases and will not generate the V570 warning on assignment like this:

(foo) = (foo);

Note. If the V570 warning is issued for a macro that cannot be changed, you can use the macro suppression mechanism. A special comment in a file included throughout the project (for instance, the StdAfx.h file) is enough. Example:

//-V:MY_MACROS:V570
 You can look at examples of errors detected by the V570 diagnostic.

# V571. Recurring check. This condition was already verified in previous line.

The analyzer detected a potential error: one and the same condition is checked twice. Consider two samples:

// Example N1:
if (A == B)
{
if (A == B)
...
}

// Example N2:
if (A == B) {
} else {
if (A == B)
...
}

In the first case, the second check "if (A==B)" is always true. In the second case, the second check is always false.

It is highly probable that this code has an error. For instance, a wrong variable name is used because of a misprint. This is the correct code:

// Example N1:
if (A == B)
{
if (A == C)
...
}

// Example N2:
if (A == B) {
} else {
if (A == C)
...
}
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-571.
 You can look at examples of errors detected by the V571 diagnostic.

# V572. It is odd that the object which was created using 'new' operator is immediately cast to another type.

The analyzer detected a potential error: an object created by the 'new' operator is explicitly cast to a different type. For example:

T_A *p = (T_A *)(new T_B());
...
delete p;

There are three possible ways of how this code has appeared and what to do with it.

1) T_B was not inherited from the T_A class.

Most probably, it is an unfortunate misprint or a crude error. How to correct it depends on the purpose of the code.

2) T_B is inherited from the T_A class. The T_A class does not have a virtual destructor.

In this case you must not cast T_B to T_A, because then you will not be able to destroy the created object correctly. This is the correct code:

T_B *p = new T_B();
...
delete p;

3) T_B is inherited from the T_A class. The T_A class has a virtual destructor.

In this case the code is correct but the explicit type conversion is meaningless. We can write it in a simpler way:

T_A *p = new T_B();
...
delete p;

There can be other cases when the V572 warning is generated. Let's consider a code sample taken from a real application:

DWORD CCompRemoteDriver::Open(HDRVR,
char *, LPVIDEO_OPEN_PARMS)
{
return (DWORD)new CCompRemote();
}

The program handles the pointer as a descriptor for its own purposes. To do that, it explicitly converts the pointer to the DWORD type. This code works correctly in a 32-bit system, but in a 64-bit program the pointer may be truncated when converted to the 32-bit DWORD type. The 64-bit error can be avoided by using the more suitable DWORD_PTR type:

DWORD_PTR CCompRemoteDriver::Open(HDRVR,
                                  char *, LPVIDEO_OPEN_PARMS)
{
  return (DWORD_PTR)new CCompRemote();
}

Sometimes the V572 warning is triggered by a leftover from the time when the code was written in C. Let's consider such a sample:

struct Joint {
...
};
joints=(Joint*)new Joint[n]; //malloc(sizeof(Joint)*n);

The comment tells us that the 'malloc' function was used earlier to allocate memory. Now the 'new' operator is used for this purpose, but the programmers forgot to remove the type conversion. The code is correct, but the conversion is needless; it can be written more briefly:

joints = new Joint[n];

 You can look at examples of errors detected by the V572 diagnostic.

# V573. Uninitialized variable 'Foo' was used. The variable was used to initialize itself.

The analyzer detected a potential error: a variable being declared is used to initialize itself. Let's consider a simple synthetic sample:

int X = X + 1;

The X variable will hold an indeterminate value. Of course, this sample is far-fetched, but it is simple and clearly shows the warning's meaning. In practice, such errors occur in more complex expressions. Consider this sample:

void Class::Foo(const std::string &FileName)
{
  if (FileName.empty())
    return;
  std::string FullName = m_Dir + std::string("\\") + FullName;
  ...
}

Because of a misprint, the FullName name is used in the expression instead of FileName. This is the correct code:

std::string FullName = m_Dir + std::string("\\") + FileName;
 According to Common Weakness Enumeration, potential errors found by using this diagnostic are classified as CWE-457.
 You can look at examples of errors detected by the V573 diagnostic.

# V574. The pointer is used simultaneously as an array and as a pointer to single object.

The analyzer detected a potential error: a variable is used simultaneously as a pointer to a single object and as an array. Let's examine an error the analyzer found in its own code:

TypeInfo *factArgumentsTypeInfo =
  new (GC_QuickAlloc) TypeInfo[factArgumentsCount];
for (size_t i = 0; i != factArgumentsCount; ++i)
{
  Typeof(factArguments[i], factArgumentsTypeInfo[i]);
  factArgumentsTypeInfo->Normalize();
}

It is suspicious that we handle the factArgumentsTypeInfo variable as the "factArg