It’s a common requirement for a file system filter driver to scan file data as part of its normal operation. For example, a file system filter may not want to allow a user to open a file until it has a chance to calculate its MD5 hash. The filter can achieve this by registering an IRP_MJ_CREATE handler and allowing or denying the request to access the file based on the file’s hash value.
The first interesting question is whether we want to filter the IRP_MJ_CREATE operation before the file system (PreCreate) or after the file system (PostCreate). If the filter chooses to scan in PreCreate, the file is not yet opened and thus the file data cannot be retrieved. The filter must perform its own open (e.g. by using FltCreateFileEx2) and scan the file data with the resulting file object.
For this reason, it is almost always a better choice to scan the file in the PostCreate callback. Instead of performing yet another open request, we can simply hijack the user’s open request and use their file object to perform the data scan. In order to guarantee that we have the necessary access to the file, we can delay this processing until we see an open requesting data access. We can also cache our scan result within a Stream Context structure that we can quickly retrieve on subsequent attempts to open the file.
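As a sketch, a PostCreate callback along these lines might look like the following. The SCANNER_STREAM_CONTEXT structure and the ScanFileData helper are hypothetical names invented for illustration, not part of any real API, and the context registration plumbing is omitted:

```c
#include <fltKernel.h>

//
// Hypothetical per-stream context caching the scan verdict.
//
typedef struct _SCANNER_STREAM_CONTEXT {
    BOOLEAN Scanned;
    BOOLEAN AccessDenied;
} SCANNER_STREAM_CONTEXT, *PSCANNER_STREAM_CONTEXT;

// Hypothetical helper: scans via the user's File Object, caches result.
BOOLEAN ScanFileData(_In_ PCFLT_RELATED_OBJECTS FltObjects);

FLT_POSTOP_CALLBACK_STATUS
ScannerPostCreate (
    _Inout_ PFLT_CALLBACK_DATA Data,
    _In_ PCFLT_RELATED_OBJECTS FltObjects,
    _In_opt_ PVOID CompletionContext,
    _In_ FLT_POST_OPERATION_FLAGS Flags
    )
{
    NTSTATUS status;
    BOOLEAN deny;
    PSCANNER_STREAM_CONTEXT context = NULL;
    ACCESS_MASK desiredAccess =
        Data->Iopb->Parameters.Create.SecurityContext->DesiredAccess;

    UNREFERENCED_PARAMETER(CompletionContext);
    UNREFERENCED_PARAMETER(Flags);

    //
    // Only scan successful opens that request data access; an open for
    // attributes only cannot be used to read the file data anyway.
    //
    if (!NT_SUCCESS(Data->IoStatus.Status) ||
        !FlagOn(desiredAccess,
                FILE_READ_DATA | FILE_WRITE_DATA | FILE_EXECUTE)) {
        return FLT_POSTOP_FINISHED_PROCESSING;
    }

    //
    // Reuse a cached verdict from a previous scan if one exists.
    //
    status = FltGetStreamContext(FltObjects->Instance,
                                 FltObjects->FileObject,
                                 (PFLT_CONTEXT*)&context);

    if (NT_SUCCESS(status) && context->Scanned) {
        deny = context->AccessDenied;
        FltReleaseContext(context);
    } else {
        if (NT_SUCCESS(status)) {
            FltReleaseContext(context);
        }
        // Hijack the user's File Object to perform the scan.
        deny = ScanFileData(FltObjects);
    }

    if (deny) {
        FltCancelFileOpen(FltObjects->Instance, FltObjects->FileObject);
        Data->IoStatus.Status = STATUS_ACCESS_DENIED;
        Data->IoStatus.Information = 0;
    }

    return FLT_POSTOP_FINISHED_PROCESSING;
}
```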
OK, now that we have a File Object we can read from, we have three choices for how to read the file data:
1) Non-Cached I/O
2) Cached I/O
3) Memory Mapped I/O
Let’s look at each of these in turn.
With a File Object in hand, we can call FltReadFile and generate non-cached I/O to the file. This reads the data directly from disk and must be done in sector-aligned chunks.
Reading the file in this way is fine and works, but it has a terrible downside: the resulting read data is not cached (duh). If the user application then wants to read the file, we need to re-read the entire file. If we’re going to make the user wait to open the file until we’ve read some (or all) of the file, then the least we can do is not make them fetch it from disk again.
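For illustration, a non-cached read might look like the sketch below. The 4096-byte chunk size is an assumption; a real filter should use the volume’s actual sector size (available via FltGetVolumeProperties):

```c
#include <fltKernel.h>

#define SCAN_CHUNK_SIZE 4096    // assumed to be a multiple of the sector size

NTSTATUS
ReadChunkNonCached (
    _In_ PFLT_INSTANCE Instance,
    _In_ PFILE_OBJECT FileObject,
    _In_ LONGLONG Offset,       // must be sector aligned
    _Out_writes_bytes_(SCAN_CHUNK_SIZE) PVOID Buffer,
    _Out_ PULONG BytesRead
    )
{
    LARGE_INTEGER byteOffset;

    byteOffset.QuadPart = Offset;

    //
    // FLTFL_IO_OPERATION_NON_CACHED bypasses the cache, so both the
    // offset and the length must be sector aligned.
    //
    return FltReadFile(Instance,
                       FileObject,
                       &byteOffset,
                       SCAN_CHUNK_SIZE,
                       Buffer,
                       FLTFL_IO_OPERATION_NON_CACHED |
                           FLTFL_IO_OPERATION_DO_NOT_UPDATE_BYTE_OFFSET,
                       BytesRead,
                       NULL,
                       NULL);
}
```

Dropping the FLTFL_IO_OPERATION_NON_CACHED flag from this same call is what produces the cached I/O variant, with no alignment restrictions.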
With a File Object in hand, we can call FltReadFile and choose to generate cached I/O instead. This reads the data from disk into the file system cache, then the data is copied from the cache into our supplied data buffer. The data can be read in arbitrary sizes and alignments.
Reading the file in this way is fine and works. It even has the benefit of caching the data that was read, which means that if the user reads the file using cached I/O, the data will (likely) still be in memory. If the user does non-cached I/O, then, oh well, we lose and need to read the file twice anyway.
We still have a couple of subtle downsides. First of all, what if the user doesn’t read the file data? Then we’ve just put a bunch of stuff in the cache that the user doesn’t care about, potentially evicting things that they do care about. Alternatively, what if the user then proceeds to memory map the file? The memory mapping will be backed by the same pages as the cache, thus we still get the benefit of the cached pages. However, we would also have a bunch of unnecessary Cache Manager structures and state floating around.
Memory Mapped I/O
Hopefully you can tell at this point that we’re leading you to a better solution: memory mapped I/O. With memory mapped I/O, the pages that we fault in while scanning the file data are cached in memory. If the user does non-cached I/O, then we still lose and need to read the file twice. However, if the user performs cached I/O, the pages will be used to satisfy the user’s cached I/O requests. Even better, if the user memory maps the file, the pages can be used to back that as well.
Thus, given that we can’t pre-determine how the user application is going to access the file, using memory mapped I/O is the most flexible approach. Memory mapping is also nice because accessing file content as pointer and length is more natural when trying to do things such as calculate hashes or interpret file content (e.g. parse headers).
In fact, scanning a file using a memory mapping is such a beneficial way to achieve this goal that there is an API designed for just this purpose: FsRtlCreateSectionForDataScan.
For reference, here is the FsRtlCreateSectionForDataScan function prototype:
NTSTATUS
FsRtlCreateSectionForDataScan (
    _Out_ PHANDLE SectionHandle,
    _Out_ PVOID *SectionObject,
    _Out_opt_ PLARGE_INTEGER SectionFileSize,
    _In_ PFILE_OBJECT FileObject,
    _In_ ACCESS_MASK DesiredAccess,
    _In_opt_ POBJECT_ATTRIBUTES ObjectAttributes,
    _In_opt_ PLARGE_INTEGER MaximumSize,
    _In_ ULONG SectionPageProtection,
    _In_ ULONG AllocationAttributes,
    _In_ ULONG Flags
    );
The most important bits are that this API takes a pointer to a File Object and returns a Section (i.e. File Mapping) handle. The handle can be created either as a user mode handle or as a kernel mode handle, depending on whether the OBJ_KERNEL_HANDLE flag is set in the ObjectAttributes argument. The Section handle can then be passed to ZwMapViewOfSection in kernel mode or, even better, the MapViewOfFile API in user mode.
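To make the kernel mode path concrete, here is a hedged sketch of creating a section and mapping a read-only view (MapFileForScan is just an illustrative name, and error handling is kept minimal):

```c
#include <fltKernel.h>

NTSTATUS
MapFileForScan (
    _In_ PFILE_OBJECT FileObject,
    _Out_ PHANDLE SectionHandle,
    _Out_ PVOID *SectionObject,
    _Out_ PVOID *ViewBase,
    _Out_ PSIZE_T ViewSize
    )
{
    NTSTATUS status;
    OBJECT_ATTRIBUTES oa;
    LARGE_INTEGER sectionFileSize;

    //
    // OBJ_KERNEL_HANDLE keeps the section handle out of the current
    // process's handle table; omit it to create a user mode handle.
    //
    InitializeObjectAttributes(&oa, NULL, OBJ_KERNEL_HANDLE, NULL, NULL);

    status = FsRtlCreateSectionForDataScan(SectionHandle,
                                           SectionObject,
                                           &sectionFileSize,
                                           FileObject,
                                           SECTION_MAP_READ,
                                           &oa,
                                           NULL,
                                           PAGE_READONLY,
                                           SEC_COMMIT,
                                           0);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    *ViewBase = NULL;
    *ViewSize = 0;

    //
    // Map a read-only view of the entire file into the current
    // process's address space.
    //
    status = ZwMapViewOfSection(*SectionHandle,
                                ZwCurrentProcess(),
                                ViewBase,
                                0,
                                0,
                                NULL,
                                ViewSize,
                                ViewUnmap,
                                0,
                                PAGE_READONLY);
    if (!NT_SUCCESS(status)) {
        ObDereferenceObject(*SectionObject);
        ZwClose(*SectionHandle);
    }

    return status;
}
```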
The fact that this API allows for the creation of a user mode handle makes the API incredibly flexible. For example, it is often desirable to offload the work of calculating hashes or parsing file structures to a user mode service. The only important note is that FsRtlCreateSectionForDataScan creates user mode handles in the current user process. Thus, if you call this API in PostCreate the user mode handle will be created in the process attempting to open the file, not your user mode service. Extra work must be done to make the call to FsRtlCreateSectionForDataScan in the target process of interest. For example, the filter may create this section in response to a message received via a Filter Manager Completion Port.
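On the service side, consuming such a handle is simple. This user mode fragment assumes the handle value and file size arrived in a message from the filter (the message plumbing, and the names used here, are illustrative):

```c
#include <windows.h>

//
// sectionHandle and fileSize are assumed to have been received from
// the filter; the handle is valid in this process because the section
// was created in (or on behalf of) this process.
//
void ScanSection(HANDLE sectionHandle, ULONGLONG fileSize)
{
    const unsigned char *data =
        (const unsigned char*)MapViewOfFile(sectionHandle,
                                            FILE_MAP_READ,
                                            0, 0, 0);
    if (data != NULL) {
        // ... hash or parse fileSize bytes starting at data ...
        UnmapViewOfFile(data);
    }
    CloseHandle(sectionHandle);
}
```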
There are a couple of other interesting things to know about this API. First of all, this API will not allow the filter to map the file for execute access. This means that for executables, the data read from the file represents the structure of the file on disk, not the executable version in memory.
The second issue is a bit more subtle. In our theoretical example, we have taken a File Object from PostCreate and called FsRtlCreateSectionForDataScan on it. This File Object is from a user open request, which was directed to the top of the file system stack. When our file system filter then attempts to memory map the file, the request to memory map must go to the top of the file system stack. For example, when you call this API you should expect your own filter (and those above you) to be called at IRP_MJ_ACQUIRE_FOR_SECTION_SYNCHRONIZATION. Also, any paging I/O requests triggered by accessing a mapped view of the section will arrive at the top of the file system stack.
This is a bit unusual for Filter Manager mini-filter drivers because we’re generally used to only sending I/O requests down the filter stack to avoid recursion. However, in this case we are hijacking the user’s File Object, thus our operations must look like operations that would have come from the user application.
But wait! There’s a FltCreateSectionForDataScan! I know that using Flt APIs prevents recursion and results in I/O requests going down the filter stack, so that must prevent the recursive paging I/Os, right??
That would be a great guess, but, nope, not the case at all…
Then What Good is FltCreateSectionForDataScan?
To quote Jurassic Park, “hold on to your butts”, because the Flt version of this API is a bit insane…
Once we have an active mapping to a section, there are a bunch of failure cases that can be triggered in the file system. Most importantly, attempts to purge file data from memory can fail due to the active mapping. This can result in cache coherency issues that wouldn’t have otherwise occurred. For example, if a user performs a non-cached write on a cached file, the file system will attempt to flush and purge the file data to reconcile the in-memory data with the on-disk data before allowing the non-cached write. Normally this is a best effort activity, as Windows does not guarantee coherency in this case. However, if a filter is creating data scan sections, these purge failures become much more likely.
This is the problem that FltCreateSectionForDataScan attempts to solve. As part of setting up the section, Filter Manager sends the file system an FSCTL_SET_PURGE_FAILURE_MODE request. This indicates to the file system that there is an active section for data scan on the specified file. If a conflicting operation occurs on the file while the section is valid, the file system reports the error to Filter Manager by failing the I/O request with STATUS_PURGE_FAILED. Filter Manager in turn calls the offending mini-filter(s) at their SectionConflictNotification callbacks to tear down their sections and allow the file system to retry the file operation.
Every filter needs to decide how to deal with this situation. A scan is in progress to determine if Process A should be allowed access to the file. At the same time, Process B (who presumably was already granted access) performs an operation that is incompatible with the section. A reasonable action in this case might be for the filter to deny Process A’s attempt to access the file and simply retry the scan on a subsequent access.
When the filter is done with its processing, it must call FltCloseSectionForDataScan to indicate to Filter Manager that the scan is complete. Filter Manager then notifies the file system that it may return to its default purge failure processing.
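As a sketch, the Filter Manager version looks like this. It additionally requires a section context (allocated with FltAllocateContext for the FLT_SECTION_CONTEXT type, which must appear in the filter’s context registration), and the conflict callback is registered via the SectionNotificationCallback field of FLT_REGISTRATION; the function names here are illustrative:

```c
#include <fltKernel.h>

//
// Create the section through Filter Manager so that purge failures on
// the file are reported back to us via the conflict notification.
//
NTSTATUS
CreateScanSection (
    _In_ PFLT_INSTANCE Instance,
    _In_ PFILE_OBJECT FileObject,
    _In_ PFLT_CONTEXT SectionContext,   // from FltAllocateContext
    _Out_ PHANDLE SectionHandle,
    _Out_ PVOID *SectionObject
    )
{
    LARGE_INTEGER sectionFileSize;

    return FltCreateSectionForDataScan(Instance,
                                       FileObject,
                                       SectionContext,
                                       SECTION_MAP_READ,
                                       NULL,
                                       NULL,
                                       PAGE_READONLY,
                                       SEC_COMMIT,
                                       0,
                                       SectionHandle,
                                       SectionObject,
                                       &sectionFileSize);
}

//
// Conflict notification: tear our section down so the file system can
// retry the conflicting operation.
//
NTSTATUS
ScannerSectionConflictNotification (
    _In_ PFLT_INSTANCE Instance,
    _In_ PFLT_CONTEXT SectionContext,
    _In_ PFLT_CALLBACK_DATA Data
    )
{
    UNREFERENCED_PARAMETER(Instance);
    UNREFERENCED_PARAMETER(Data);

    return FltCloseSectionForDataScan(SectionContext);
}
```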
One final note: FltCreateSectionForDataScan was not introduced until Windows 8. Thus, if your filter needs to support Windows 7 and later, the recommendation would be to fall back to the FsRtl version on Windows 7 and use the Flt version where available (see FltGetRoutineAddress).
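A sketch of that runtime resolution might look like this (the typedef and global names are ours, but the prototype mirrors the documented FltCreateSectionForDataScan signature):

```c
#include <fltKernel.h>

//
// Resolve FltCreateSectionForDataScan at load time so the same binary
// runs on Windows 7 (where the export is absent) and later versions.
//
typedef NTSTATUS
(FLTAPI *PFN_FLT_CREATE_SECTION_FOR_DATA_SCAN) (
    _In_ PFLT_INSTANCE Instance,
    _In_ PFILE_OBJECT FileObject,
    _In_ PFLT_CONTEXT SectionContext,
    _In_ ACCESS_MASK DesiredAccess,
    _In_opt_ POBJECT_ATTRIBUTES ObjectAttributes,
    _In_opt_ PLARGE_INTEGER MaximumSize,
    _In_ ULONG SectionPageProtection,
    _In_ ULONG AllocationAttributes,
    _In_ ULONG Flags,
    _Out_ PHANDLE SectionHandle,
    _Out_ PVOID *SectionObject,
    _Out_opt_ PLARGE_INTEGER SectionFileSize
    );

PFN_FLT_CREATE_SECTION_FOR_DATA_SCAN gFltCreateSectionForDataScan = NULL;

VOID
ResolveDataScanRoutines (
    VOID
    )
{
    gFltCreateSectionForDataScan =
        (PFN_FLT_CREATE_SECTION_FOR_DATA_SCAN)
            FltGetRoutineAddress("FltCreateSectionForDataScan");

    //
    // If the pointer is NULL we are on Windows 7: fall back to
    // FsRtlCreateSectionForDataScan (and forgo the purge failure
    // handshake that the Flt version provides).
    //
}
```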