OSR provides a standard, fixed-price code review service that has been very popular over the years. This service focuses on reviewing kernel-mode code (typically kernel-mode drivers). The purpose of the review might be:
- To “tighten up” code prior to an important beta cycle with customers.
- To seek an expert opinion as to the quality, stability, and adherence to standard programming practices of code developed by a third party (such as an outsourcing provider).
- To resolve as-yet-unidentified stability issues in the field.
All of these, and no doubt others, are excellent reasons to bring in a team of experts to evaluate the state of a code base.
What Do You Get?
When a client engages OSR to perform a “code review,” the outcome always includes a written report. However, the details contained in that report depend on several factors, including the client’s requirements and the extent of what’s delivered to us at the beginning of the engagement. So, rather than attempt to describe the results we provide in detail, we’ll describe the standard process our team follows.
OSR code reviews are typically done by multiple staff engineers. It’s been our experience that having multiple people review a code base significantly contributes to finding problems and issues. One person might “have an eye” for multiprocessor serialization issues, while another engineer reviewing the code might readily identify inappropriate API usage. In all cases, code is reviewed by engineers with specific and extensive experience in Windows kernel-mode development.
If a client delivers only code at the beginning of the review, one of the first things we do is attempt to describe the overall function of the code – in essence, to develop a de facto design and architectural reference. This allows us to review that basic design and architecture, since one of our goals is to identify potential issues or flaws within the code, which certainly includes design-level flaws.
If, on the other hand, a client delivers design documents, our first review covers the design itself, to ensure that we are comfortable with its scope. Alternatively, if design documents aren’t available, an engineer who is knowledgeable in the code’s design and architecture meets with the OSR reviewers (either in person or remotely). In either of these cases, OSR’s first pass through the code determines how closely the code adheres to the design as written or described, and extracts a unified design from the whole, again with an eye toward identifying potential areas of concern.
When clients are in a position to offer detailed information about code history and any problems known to exhibit themselves in the code, this helps maximize the benefit of the code review. For example, was the code developed in-house or by a third party? Are crashes or hangs experienced in certain situations? Is the code already shipping, or still in some stage of development? It’s not always possible to collect or provide this information, but this truly is a case of, “the more we get, the better you are served.”
The second pass through the code is typically a more detailed analysis, in which we focus on the specifics of the implementation and attempt to identify issues within it. For example, a failure to handle cases of which we are aware will be highlighted during this phase. The result of this pass is usually a list of specific issues.
If necessary, a third pass through the code is made, using this list of issues to identify other places where the same issues might appear.
Our work at this point normally turns to writing up our results. Based upon the information we have compiled, we first try to extract general comments and issues to highlight – that is, those issues that are endemic or systemic within the code base. For example, we might note a lack of appropriate error checking as a general concern throughout a code base, citing specific examples without citing every single case. To the extent possible, we also describe or demonstrate the appropriate implementation technique as a reference. Finally, we summarize our findings, typically module by module (although there have been times when we have done this analysis operation by operation because it made more sense). Thus, the final report identifies both the general and the specific issues we have found.
Sometimes, when we find specific complex code or design issues, we will not only write a description of those errors but will also annotate the source code module itself to highlight the issues we’ve discovered.
In addition, we should note that a code review does not focus on the stylistic aspects of the code. Code style is noted only when it contributes to other issues. For example, inconsistent variable naming or a gross lack of comments can adversely impact maintainability. So while we might dislike use of the goto statement, our code review would not normally comment on its use unless it was being used inappropriately. Similarly, the scoping and declaration of variables, naming conventions, coding styles, and other issues that are not germane to the function or maintainability of the code are not normally addressed within the scope of our code review.
The length of our final report is typically inversely proportional to the overall quality of the code that’s been reviewed. If the code base is of very high quality, the report will be short and succinct. If the code base has some issues, the report will be longer. And if the code base is truly deficient, the report can be lengthy. This report can be reviewed and discussed via phone conference, internet, email, or in-person visit. On request, a sample of such a report (with client and product names removed) can be provided to prospective clients.