diff --git a/06_Userspace/06_Resource_Management.md b/06_Userspace/06_Resource_Management.md
new file mode 100644
index 0000000..3fb3f03
--- /dev/null
+++ b/06_Userspace/06_Resource_Management.md
@@ -0,0 +1,118 @@
+# Resource Management
+
+The kernel manages various things on behalf of the userspace process, including files, sockets, IPC, devices, and more, which we call _resources_. It is a good idea to have a unified way to handle such a range of resources to reduce complexity. Rather than exposing a separate set of syscalls for each resource, a generic abstraction can be introduced to simplify everything while also keeping it all centralized in one place.
+
+To do this, we implement an API where every resource can be opened, read from, written to, and closed using the same syscalls. Through this design, the kernel is kept small, whilst also letting new resources be added in the future with minimal change to both kernel and user code.
+
+## Resource Abstraction
+
+When talking about _resources_, we need a way to distinguish between the different types that the kernel may need to provide to userspace. Each resource behaves differently internally, but from the view of the userspace process, everything should be accessible through the same set of syscalls. In order to achieve this, we define an enum of resource types, allowing the kernel to tag each resource with its category. This way, when a system call is made, the kernel knows how to dispatch the request.
+
+```c
+typedef enum {
+    FILE,
+    MESSAGE_ENDPOINT,
+    SHARED_MEM,
+    // can extend later
+} resource_type_t;
+```
+
+In this example, `FILE` represents a file on the disk, `MESSAGE_ENDPOINT` is used for an IPC message queue, and `SHARED_MEM` for a shared memory region between processes. As the kernel grows, this enum can be extended to support more resource types.
+
+Next, we need a generic representation of a resource inside the kernel. This can be defined by the `resource_t` struct:
+
+```c
+typedef struct {
+    resource_type_t type;
+    void* impl;
+} resource_t;
+```
+
+The `type` field tells the kernel what kind of resource it is, and the `impl` pointer allows the kernel to attach the resource-specific implementation of that resource. For example, a file's `impl` could point to a struct holding the file's offset and inode, while for shared memory it could point to the physical address of that region.
+
+## Per Process Resource Table
+
+With an abstract resource now defined, we can extend our previous definition of a process to include a **resource table**:
+
+```c
+typedef struct {
+    size_t pid;
+    status_t process_status;
+
+    // Other fields
+
+    resource_t* resource_table[MAX_RESOURCES];
+} process_t;
+```
+
+Now each process has a resource table: a map from integers, called _handles_, to the kernel's resource objects. A handle is simply an identifier returned by the kernel when opening a resource; the process later passes it to syscalls to tell the kernel which resource an operation should be performed on. This indirection is important because we do not want to expose kernel pointers directly to a userspace process: even if they cannot be dereferenced there, passing them out could still create security or stability risks. It also means that the same handle number in different processes can refer to different resources. For example, in Unix, handles are called "file descriptors", and `0`, `1`, and `2` refer to the standard I/O streams of each process.
+
+With this, we can also define a supporting function allowing the kernel to fetch a resource by handle:
+
+```c
+resource_t* get_resource(process_t* proc, int handle) {
+    // Invalid handle
+    if (handle < 0 || handle >= MAX_RESOURCES)
+        return NULL;
+
+    return proc->resource_table[handle];
+}
+```
+
+You would also want to implement two other functions: `int register_resource(process_t* proc, resource_t* res)`, which finds a free handle and stores the resource in the array, and `int remove_resource(process_t* proc, resource_t* res)`, which marks that handle as usable again and frees the memory of the resource on cleanup.
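+
+A minimal sketch of these two helpers, using the array-based table defined above (how errors are reported, and exactly when the resource's memory is freed, is up to you):
+
+```c
+int register_resource(process_t* proc, resource_t* res) {
+    for (int handle = 0; handle < MAX_RESOURCES; handle++) {
+        // Take the first free slot, and hand out its index as the handle
+        if (proc->resource_table[handle] == NULL) {
+            proc->resource_table[handle] = res;
+            return handle;
+        }
+    }
+    return -1; // table is full
+}
+
+int remove_resource(process_t* proc, resource_t* res) {
+    for (int handle = 0; handle < MAX_RESOURCES; handle++) {
+        if (proc->resource_table[handle] == res) {
+            // Mark the handle as usable again, then free the resource
+            proc->resource_table[handle] = NULL;
+            free(res);
+            return 0;
+        }
+    }
+    return -1; // resource not found in this table
+}
+```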
+
+## Resource Lifecycle
+
+A resource follows a rather straightforward lifecycle, regardless of its type:
+
+1. First, a process acquires a handle by calling the `open_resource` system call.
+2. While the handle is valid, the process can perform operations such as `read_resource` or `write_resource`.
+3. Finally, when the process has finished using the resource, it calls `close_resource`, allowing the kernel to free any associated state.
+
+Typically, a process should `close()` a resource once it is done using it. However, that is not always the case: processes may exit without cleaning up properly, so it is up to the kernel to ensure resources aren't leaked. This could look like a loop through the process's resource table, calling `close_resource(process, handle);` for each open resource and letting the resource-specific `close()` function handle the work, as sketched below.
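+
+A sketch of that kernel-side cleanup on process exit, assuming a `close_resource(process, handle)` dispatcher like the ones described in the next section:
+
+```c
+void cleanup_process_resources(process_t* proc) {
+    for (int handle = 0; handle < MAX_RESOURCES; handle++) {
+        // Close every handle still in use; close_resource() will invoke
+        // the resource-specific close() and release the slot.
+        if (proc->resource_table[handle] != NULL)
+            close_resource(proc, handle);
+    }
+}
+```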
+
+## Generic API
+
+Now that we have a way of representing resources, we need to define how a process can interact with them. Having a different syscall for each resource type leads to a lot of repeated code and makes the kernel interface harder to maintain and extend. Instead, the kernel can expose a minimal and uniform API that every resource supports. The generic interface for a resource consists of four primary functions: `open`, `read`, `write`, and `close`; by restricting all resources to this same interface, we reduce the complexity of both the kernel and userspace. To begin the implementation, our `resource_t` needs extending with a table of function pointers for these operations. Each resource can then provide its own implementation of the four functions, whilst the generic interface remains the same.
+
+```c
+typedef struct resource {
+    resource_type_t type;
+    void* impl;
+    struct resource_functions* funcs;
+} resource_t;
+
+typedef struct resource_functions {
+    ssize_t (*read)(resource_t* res, void* buf, size_t len);
+    ssize_t (*write)(resource_t* res, const void* buf, size_t len);
+    void (*open)(resource_t* res);
+    int (*close)(resource_t* res);
+} resource_functions_t;
+```
+
+Here, `funcs` is the dispatch table that tells the kernel how to perform each operation on a given resource. Each function pointer can be set differently depending on whether the resource is a file, IPC endpoint, or something else. Note that `read` and `write` return the signed `ssize_t`, so that `-1` can signal an error, and `close` returns a status code. Operations are defined to be blocking by default, meaning that if a resource is not ready (for example, there is no data to read), the process is suspended until the operation can complete. Each resource type can override these generic operations to provide behavior specific to that resource.
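+
+For example, every resource of a given type can share a single static dispatch table. The `file_*` functions here are hypothetical, shown only to illustrate the wiring:
+
+```c
+// Hypothetical file-specific implementations (declarations only)
+ssize_t file_read(resource_t* res, void* buf, size_t len);
+ssize_t file_write(resource_t* res, const void* buf, size_t len);
+void file_open(resource_t* res);
+int file_close(resource_t* res);
+
+// One shared dispatch table for every FILE resource
+static resource_functions_t file_funcs = {
+    .read = file_read,
+    .write = file_write,
+    .open = file_open,
+    .close = file_close,
+};
+```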
+
+Deciding how to extend this design for extra resource-specific functionality (e.g. renaming a file) has been left as an exercise to the reader. A simpler design may be to just add more syscalls to handle this; however, this means the ABI grows as your kernel manages more resources.
+
+On the kernel side of things, these syscalls can just act as dispatchers. For example, a `read_resource(...)` syscall would look up the process's resource table using the handle, retrieve the `resource_t`, and then forward the call to the correct, resource-specific, function:
+
+```c
+ssize_t read_resource(process_t* proc, int handle, void* buf, size_t len) {
+    resource_t* res = get_resource(proc, handle);
+
+    // Invalid handle or unsupported operation
+    if (!res || !res->funcs->read)
+        return -1;
+
+    return res->funcs->read(res, buf, len);
+}
+```
+
+The other operations (`write`, `open`, `close`) would follow the same pattern: get the resource from the handle, then call the appropriate function from the `funcs` table if it is supported. With this indirection, the kernel's syscall layer is kept minimal whilst allowing each resource type to have its own specialised behavior.
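+
+For instance, a `write_resource` syscall would be nearly identical (a sketch):
+
+```c
+ssize_t write_resource(process_t* proc, int handle, const void* buf, size_t len) {
+    resource_t* res = get_resource(proc, handle);
+
+    // Invalid handle or unsupported operation
+    if (!res || !res->funcs->write)
+        return -1;
+
+    return res->funcs->write(res, buf, len);
+}
+```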
+
+## Data Copying
+
+Another thing left as an exercise to the reader is choosing how to copy data between userspace and the kernel. One option is to use the userspace-provided buffers directly, which is efficient (a single copy) but requires sanitizing pointers and lengths to ensure safety; one thing to consider here is that other threads in the same address space may modify memory at that address while the kernel is using it. Another option is to copy into a kernel buffer first, which simplifies the sanitization at the cost of extra overhead and some performance, as sketched below.
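+
+A sketch of this second option, bouncing reads through a kernel buffer. `kmalloc`, `kfree`, and `copy_to_user` are assumed helpers here; `copy_to_user` is expected to verify that the destination range is mapped and user-accessible before copying:
+
+```c
+ssize_t read_resource_bounced(process_t* proc, int handle, void* user_buf, size_t len) {
+    resource_t* res = get_resource(proc, handle);
+    if (!res || !res->funcs->read)
+        return -1;
+
+    // A real kernel would want to cap len before allocating
+    void* kbuf = kmalloc(len);
+    if (!kbuf)
+        return -1;
+
+    // The driver only ever touches kernel memory...
+    ssize_t bytes = res->funcs->read(res, kbuf, len);
+
+    // ...and the result is copied out to userspace in one validated step
+    if (bytes > 0 && copy_to_user(proc, user_buf, kbuf, (size_t)bytes) != 0)
+        bytes = -1;
+
+    kfree(kbuf);
+    return bytes;
+}
+```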
+
+Using the user buffers is not necessarily a single copy either: you may be able to operate directly on the buffer, in which case it's zero-copy. However, this can be dangerous, as another user thread can write to, unmap, or remap the buffer while the kernel is operating on it. Holding a lock over that process's address space for the entire duration of the resource operation is impractical, so the kernel must instead rely on fault handling: by faulting when the process tries to access the memory that the kernel is working with, this behaviour can be caught and the kernel can try to abort or retry the operation safely.
diff --git a/08_VirtualFileSystem/02_VirtualFileSystem.md b/08_VirtualFileSystem/02_VirtualFileSystem.md
index d7e67b3..6eceae2 100644
--- a/08_VirtualFileSystem/02_VirtualFileSystem.md
+++ b/08_VirtualFileSystem/02_VirtualFileSystem.md
@@ -8,6 +8,8 @@ To keep our design simple, the features of our VFS driver will be:
* No extra features like permissions, uid and gid (although we are going to add those fields, they will not be used).
* The path length will be limited.

+This VFS is built on top of the resource API defined previously (see Resource Management). The VFS creates a file-specific `resource_t` for each open file and returns a handle; that handle is then used with the resource manager's generic `read`, `write`, and `close` functions, which perform the handle-to-resource lookup and call our VFS functions.
+
## How The VFS Works

The basic concept of a VFS layer is pretty simple: it is a common way to access files/directories across different file systems, a layer that sits between the higher level interface to the FS and the low level implementation of the FS driver, as shown in the picture:

@@ -83,11 +85,14 @@ typedef struct {
    char device[VFS_PATH_LENGTH];
    char mountpoint[VFS_PATH_LENGTH];
+    // Driver provided file system operations (open/read/write/close)
    fs_operations_t *operations;
} mountpoint_t;
```

+Note: When the VFS uses a mountpoint to open a file, it will wrap the result of the driver call in a `resource_t` and return the resulting handle to userspace.
+
The next thing is to have a variable to store those mountpoints; since we are using a linked list, it is going to be just a pointer to its root. This will be the first place where we look whenever we want to access a folder or a file:

```c
@@ -101,13 +106,13 @@ This is all that we need to keep track of the mountpoints.

Now that we have a representation of a mountpoint, it is time to see how to mount a file system. By mounting we mean making a device/image/network storage accessible by the operating system on a target folder (the `mountpoint`), loading the driver for the target device.

-Usually a mount operation requires a set of minimum three parameters:
+Usually, a mount operation requires a minimum set of three parameters:

* A File System type, it is needed to load the correct driver for accessing the file system on the target device.
* A target folder (that is the folder where the file system will be accessible by the OS)
-* The target device (in our simple scenario this parameter is going to be mostly ignored since the os will not support any i/o device)
+* The target device (in our simple scenario, this parameter is going to be mostly ignored since the os will not support any i/o device)

-There can be others of course configuration parameters like access permission, driver configuration attributes, etc. For now we haven't implemented a file system yet (we will do soon), but let's assume that our os has a driver for the `USTAR` fs (the one we will implement later), and that the following functions are already implemented:
+There can of course be other configuration parameters, like access permissions, driver configuration attributes, etc. We haven't implemented a file system yet (we will soon), but let's assume that our os has a driver for the `USTAR` fs (the one we will implement later), and that the following functions are already implemented:

```c
int ustar_open(char *path, int flags);
@@ -116,7 +121,7 @@ void ustar_read(int ustar_fd, void *buffer, size_t count);
int ustar_close(int ustar_fd);
```
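+
+Assuming `fs_operations_t` is simply a struct of `open`/`read`/`write`/`close` function pointers compatible with the driver signatures above, wiring the USTAR driver into a mountpoint could look like this sketch:
+
+```c
+fs_operations_t ustar_operations = {
+    .open = ustar_open,
+    .read = ustar_read,
+    .write = NULL, // no ustar_write is declared above
+    .close = ustar_close,
+};
+```
+
+A `mountpoint_t` for a USTAR file system would then have its `operations` field point at this table.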

-For the mount and umount operation we will need two functions:
+For the mount and umount operations, we will need two functions:

The first one is for mounting; let's call it for example `vfs_mount`. To work it will need at least the parameters explained above:

```c
@@ -266,7 +271,7 @@ The `flags` parameter will tell how the file will be opened, there are many flag

The flags value is a bitmask, and there are other possible values that can be used, but for our purpose we will focus only on the three mentioned above.

-The return value of the function is the file descriptor id. We have already seen how to parse a path and get the mountpoint id if it is available. But what about the file descriptor and its id? What is it? File descriptors represents a file that has been opened by the VFS, and contain information on how to access it (i.e. mountpoint_id), the filename, the various pointers to keep track of current read/write positions, eventual locks, etc. So before proceed let's outline a very simple file descriptor struct:
+The return value of the function is the file descriptor id (a resource handle). We have already seen how to parse a path and get the mountpoint id if it is available. But what about the file descriptor and its id? What is it? File descriptors represent a file resource that has been opened by the VFS, and contain information on how to access it (i.e. the `mountpoint_id`), the filename, the various pointers that keep track of the current read/write positions, eventual locks, etc. So before proceeding, let's outline a very simple file descriptor struct:

```c
typedef struct {
@@ -280,15 +285,9 @@ typedef struct {
} file_descriptor_t;
```

-We need to declare a variable that contains the opened file descriptors, as usual we are using a naive approach, and just use an array for simplicity, this means that we will have a limited number of files that can be opened:
-
-```c
-file_descriptors_t vfs_opened_files[MAX_OPENED_FILES];
-```
-
Where the `mountpoint_id` field is the id of the mounted file system containing the requested file, `fs_file_id` is the fs-specific id of the file opened by the descriptor, `buf_read_pos` and `buf_write_pos` are the current positions of the buffer pointer for the read and write operations, and `file_size` is the size of the opened file.

-So once our open function has found the mountpoint for the requested file, eventually a new file descriptor item will be created and filled, and an id value returned. This id is different from the one in the data structure, since it represent the internal fs descriptor id, while this one represent the vfs descriptor id. In our case the descriptor list is implemented again using an array, so the id returned will be the array position where the descriptor is being filled.
+So once our open function has found the mountpoint for the requested file, eventually a new file descriptor item will be created and filled, and a resource handle returned. This handle is different from the id stored in the data structure: that one is the internal fs descriptor id, while the handle is the position in the process's resource table where the new resource is stored.

Why "eventually"? Having found the mountpoint id for the file doesn't mean that the file exists on that fs: the only thing that exists so far is the mountpoint, and beyond that the VFS can't really know whether the file exists or not. It has to defer this task to the fs driver, hence it will call the fs implementation of the open function, which will do the search and return an error if the file doesn't exist.

@@ -319,38 +318,49 @@ int open(const char *path, int flags){
    /* IMPLEMENTATION LEFT AS EXERCISE */
-    // Get a new vfs descriptor id vfs_id
-    vfs_opened_files[vfs_id] = //fill the file descriptor entry at position
+    // Create and fill a new file descriptor for this file
+    file_descriptor_t *file = // left as an exercise
+
+    resource_t *res = // left as an exercise, see below
+
+    // Register the resource and return its handle to the calling process
+    int handle = register_resource(current_process, res);
+    return handle;
  }
}
-  return vfs_id;
+  return ERROR;
}
```

+NOTE: When creating the `resource_t` for the file, the `impl` field should store the `file_descriptor_t` for this file, and `funcs` could be some global/static table pointing to the functions that delegate each operation to the resource's mountpoint, defined below. This should *NOT* be `fs_operations_t`, as those are for the driver, not the VFS resource. Since `open(...)` is handled by the VFS itself, there is no need to define a resource function for it.
+
The pseudo code above should give us an idea of the workflow of opening a file from a VFS point of view. As we can see, the process is pretty simple in principle: get the mountpoint id from the vfs; if one has been found, strip the mountpoint path from the path name and call the fs driver's open function; if this call is successful, it is time to initialize a new vfs file descriptor and wrap it in a resource.
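+
+A sketch of the global table described in the note above; `read` and `close` here are the VFS functions defined in the remainder of this chapter:
+
+```c
+// Forward declarations of the VFS implementations shown below
+ssize_t read(resource_t* res, void *buf, size_t nbytes);
+int close(resource_t* res);
+
+static resource_functions_t vfs_file_funcs = {
+    .read = read,
+    .write = NULL, // writing works the same way, left as an exercise
+    .open = NULL,  // open() is handled by the VFS itself
+    .close = close,
+};
+```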

-Let's now have a look at the `close` function, as suggested by name this will do the opposite of the open function: given a file descriptor id it will free all the resources related to it and remove the file descriptor from the list of opened files. The function signature is the following:
+Let's now have a look at the `close` function. As suggested by the name, this will do the opposite of the open function: when called on a handle, it will free all the resources related to it. The function signature is the following:

```c
-int close(int fildes);
+int close(resource_t* res);
```

-The fildes argument is the VFS file descriptor id, it will be searched in the opened files list (using an array it will be found at `vfs_opened_files[fildes]`) and if found it should first call the fs driver function to close a file (if present), emptying all data structures associated to that file descriptor (i.e. if there are data on pipes or FIFO they should be discarded) and then doing the same with all the vfs resources, finally it will mark this position as available again. We have only one problem how to mark a file descriptor available using an array? One idea can be to use -1 as `fs_file_id` to identify a position that is marked as available (so we will need to set them to -1 when the vfs is initialized).
+The `res` argument is the file resource to be closed; this function will be called from the kernel resource manager's `close_resource(...)`. It should first call the fs driver's close function (if present), emptying all data structures associated with that file descriptor (i.e. if there is data in pipes or FIFOs it should be discarded), then do the same with all the vfs resources, and finally mark the handle as available again.
+
+If we kept the file descriptors in a fixed-size array instead of allocating them, we would have only one problem: how to mark a file descriptor as available? One idea is to use -1 as the `fs_file_id` to identify a position that is marked as available (so we would need to set them all to -1 when the vfs is initialized).

In our case, where we have no FIFOs or data pipes, we can outline our close function as follows:

```c
-int close(int fildes) {
-    if (vfs_opened_files[fildes].fs_file_id != -1) {
-        mountpoint_id = vfs_opened_files[fildes].mountpoint_id;
-        mountpoint_t *mountpoint = get_mountpoint_by_id(mountpoint_id);
-        fs_file_id = vfs_opened_files[fildes].fs_file_id;
-        fs_close_result = mountpoint->operations->close(fs_file_id);
-        if(fs_close_result == 0) {
-            vfs_opened_files[fildes].fs_file_id = -1;
-            return 0;
-        }
-    }
-    return -1;
+int close(resource_t* res) {
+
+    // Your code should check that these actually exist
+    file_descriptor_t* file = res->impl;
+    mountpoint_t* mountpoint = get_mountpoint_by_id(file->mountpoint_id);
+
+    if (mountpoint->operations->close)
+        mountpoint->operations->close(file->fs_file_id);
+
+    free(file);
+    remove_resource(current_process, res);
+
+    return 0;
}
```

@@ -362,19 +372,20 @@ mountpoint_t *get_mountpoint_by_id(size_t mountpoint_id);
```

This function will be used in the following paragraphs too.

+
#### Reading From A File

So now we have managed to access a file stored somewhere on a file system using our VFS, and we need to read its contents. The function used in the file read example at the beginning of this chapter is the C `read` declared in `unistd.h`, with the following signature:

```c
-ssize_t read(int fildes, void *buf, size_t nbyte);
+ssize_t read(resource_t* res, void *buf, size_t nbytes);
```

-Where the parameters are the opened file descriptor (`fildes) the buffer we want to read into (`buf`), and the number of bytes (`nbytes`) we want to read.
+Where the parameters are the opened file resource (`res`), the buffer we want to read into (`buf`), and the number of bytes (`nbytes`) we want to read.

-The read function will return the number of bytes read, and in case of failure -1. Like all other vfs functions, what the read will do is search for the file descriptor with id `fildes`, and if it exists call the fs driver function to read data from an opened file and fill the `buf` buffer.
+The read function will return the number of bytes read, and in case of failure -1. Like all other vfs functions, what the read will actually do is verify the resource and then call the fs driver function to read data from the opened file and fill the `buf` buffer.

-Internally the file descriptor keeps track of a 'read head' which points to the last byte that was read. The next read() call will start reading from this byte, before updating the pointer itself.
+Internally the file `impl` keeps track of a 'read head' which points just past the last byte that was read. The next `read()` call will start reading from this position, before updating the pointer itself.

For example let's imagine we have opened a text file with the following content:

@@ -386,28 +397,29 @@ And we have the following code:

```c
char buffer[5];
-int sz = read(file_descriptor, buffer, 5);
-sz = read(file_descriptor, buffer, 5);
+int sz = read_resource(proc, file_handle, buffer, 5);
+sz = read_resource(proc, file_handle, buffer, 5);
```

The `buffer` content of the first read will be: `Text `, and of the second one `examp`. This is the purpose of the `buf_read_pos` variable in the file descriptor: it basically needs to be incremented by `nbytes`, of course only while `buf_read_pos + nbytes < file_size`.

The pseudocode for this function is going to be similar to the open/close:

```c
-ssize_t read(int fildes, void *buf, size_t nbytes) {
-    if (vfs_opened_files[fildes].fs_fildes_id != -1) {
-        int mountpoint_id = vfs_opened_files[fildes].mountpoint_id;
-        mountpoint_t *mountpoint = get_mountpoint_by_id(mountpoint_id);
-        int fs_file_id = vfs_opened_files[fildes].fs_file_id;
-        int bytes_read = mountpoints->operations->read(fs_file_id, buf, nbytes);
-        if (opened_files[fildes].buf_read_pos + nbytes < opened_files[fildes].file_size) {
-            opened_files[fildes].buf_read_pos += nbytes;
-        } else {
-            opened_files[fildes].buf_read_pos = opened_files[fildes].file_size;
-        }
-        return bytes_read;
-    }
-    return -1;
+ssize_t read(resource_t* res, void *buf, size_t nbytes) {
+
+    // Your code should check that these actually exist
+    file_descriptor_t* file = res->impl;
+    mountpoint_t* mountpoint = get_mountpoint_by_id(file->mountpoint_id);
+
+    int fs_file_id = file->fs_file_id;
+    int bytes_read = mountpoint->operations->read(fs_file_id, buf, nbytes);
+
+    if (file->buf_read_pos + nbytes < file->file_size) {
+        file->buf_read_pos += nbytes;
+    } else {
+        file->buf_read_pos = file->file_size;
+    }
+    return bytes_read;
}
```