In logging filesystems, a storage location is not bound to a piece of data; instead the entire storage is used for a circular log which is appended with every change made to the filesystem. Writing appends new changes, while reading requires traversing the log to reconstruct a file.

Logging filesystems come with a downside in performance, however. If we look at garbage collection, the process of cleaning up outdated data from the end of the log, I've yet to see a pure logging filesystem that avoids either a costly runtime or a large RAM requirement.

Instead of devoting the entire storage to a single log, we can keep the simplicity of a block based filesystem and add a bounded log where we note every change that occurs to the filesystem.

In a copy-on-write (COW) data structure, updating a block means copying it and updating every pointer that references it; this cascades upward until we reach the root of our filesystem, which is often stored in a very small log.

Journaling filesystems take a related approach, storing data changes directly in a log before committing them to their final location. They even disassociate the storage of the log from the data it describes.

Logs do cost storage, however: in the worst case a small log costs 4x the size of the original data.

If we want to append more entries, we can simply append the entries to the log. Because the log is append-only, entries that are already written are never disturbed, which keeps updates cheap.
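
As a sketch of what this append might look like in C, assuming an illustrative entry format and names (`struct log`, `log_append`) that are not an actual on-disk format:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only: a fixed-size buffer standing in for an
 * erase block. Real entry formats and checksums will differ. */
#define LOG_SIZE 4096

struct log {
    uint8_t data[LOG_SIZE];
    uint32_t tail;  /* offset of first free byte */
};

struct entry {
    uint16_t id;    /* which piece of metadata this updates */
    uint16_t len;   /* length of payload that follows */
};

/* Append an updated entry; earlier entries are never touched,
 * so an interrupted append can simply be ignored on next read. */
int log_append(struct log *log, uint16_t id,
               const void *payload, uint16_t len) {
    uint32_t needed = sizeof(struct entry) + len;
    if (log->tail + needed > LOG_SIZE) {
        return -1;  /* log is full, caller must garbage collect */
    }

    struct entry e = {.id = id, .len = len};
    memcpy(&log->data[log->tail], &e, sizeof(e));
    memcpy(&log->data[log->tail + sizeof(e)], payload, len);
    log->tail += needed;
    return 0;
}
```
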
When a log does fill up, writes must stall until more space is available, but because we have small logs, overflowing a log is not catastrophic: instead of increasing the size of the log and dealing with the scalability issues of larger logs, we can split the log into two logs, each holding half of the entries.

How often garbage collection runs depends on how full the log is, with two extremes:

1. Log is empty, garbage collection occurs once every _n_ updates
2. Log is full, garbage collection occurs **every** update

Looking at the problem generically, consider a log with _n_ bytes for each entry, _d_ dynamic entries (entries that are outdated during garbage collection), and _s_ static entries (entries that must be copied during garbage collection). If we look at the amortized runtime complexity of updating this log we get this formula:

cost = n + n (s / (d+1))

If we let _r_ be the ratio of static space to the size of our log in bytes, we can rewrite the number of static and dynamic entries:

s = r (size/n)
d = (1 - r) (size/n)

Substituting these in gives us the cost of updating an entry given how full the log is:

cost = n + n (r (size/n) / ((1-r) (size/n) + 1))

Assuming 100 byte entries in a 4 KiB log, we can graph this to find the multiplicative cost of an update as the log fills: the cost stays near 1x while the log is mostly dynamic, then climbs steeply as _r_ approaches 1.
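
To make the shape of this curve concrete, here is a small C program that evaluates the formula above for the 100 byte entry, 4 KiB log example at a few fill ratios (the function name `update_cost` is just for illustration):

```c
#include <stdio.h>

/* cost = n + n * (r*(size/n)) / ((1-r)*(size/n) + 1)
 * where n is the entry size in bytes, size is the log size in
 * bytes, and r is the ratio of static space to log size. */
double update_cost(double n, double size, double r) {
    double s = r * (size / n);         /* static entries */
    double d = (1.0 - r) * (size / n); /* dynamic entries */
    return n + n * (s / (d + 1.0));
}

int main(void) {
    double n = 100.0, size = 4096.0;
    for (double r = 0.0; r < 1.0; r += 0.2) {
        printf("r=%.1f cost=%.1f bytes (%.2fx)\n",
               r, update_cost(n, size, r),
               update_cost(n, size, r) / n);
    }
    return 0;
}
```

At r=0 an update costs exactly one entry, while a nearly static log approaches the cost of rewriting the whole log on every update, matching the two extremes in the list above.
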
To speed this up we can borrow from skip-lists: each block contains up to log₂ _n_ pointers that skip to different preceding elements of the list, so a lookup never needs to walk the entire chain.

Every time we follow one of these pointers, we cut the search space for the block in half, giving us a runtime of _O(log n)_. The catch is that we can only traverse the list backwards, which puts the runtime at _O(2 log n)_ = _O(log n)_. These CTZ skip-lists can be appended with a runtime of _O(1)_, and can be read with a worst case runtime of _O(n log n)_. Given the bounded size of files on these devices, this is a perfectly reasonable cost.
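
A sketch of this traversal in C, under the assumption that block _n_ holds ctz(_n_)+1 back-pointers and pointer _i_ points 2^_i_ blocks back (the helper names are illustrative, and `__builtin_ctz` is a GCC/Clang intrinsic):

```c
#include <stdint.h>

/* Number of back-pointers stored in block n under this scheme:
 * pointer i points to block n - 2^i, and block n stores
 * ctz(n)+1 of them (block 0 stores none). */
static unsigned ctz_pointer_count(uint32_t n) {
    return (n == 0) ? 0 : __builtin_ctz(n) + 1;
}

/* Walk backwards from block `from` to block `to` (to <= from),
 * always taking the largest skip that doesn't overshoot.
 * Only O(log n) pointers are followed. */
static unsigned ctz_seek(uint32_t from, uint32_t to) {
    unsigned steps = 0;
    while (from > to) {
        /* largest available skip is 2^ctz(from); halve it
         * until it no longer jumps past the target */
        uint32_t skip = UINT32_C(1) << __builtin_ctz(from);
        while (from - skip < to) {
            skip >>= 1;
        }
        from -= skip;
        steps++;
    }
    return steps;
}
```
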
Naively, finding the block that holds a given offset requires evaluating a summation for each block we visit, driving the runtime of reading a file up to _O(n² log n)_. Fortunately, that summation doesn't need to touch the filesystem, so its practical impact is small.
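
Because the number of pointers in each block follows the ctz pattern, the summation collapses into a popcount, which a sketch in C can compute without touching the disk (the exact indexing convention here is an assumption for illustration; a real implementation may index blocks slightly differently):

```c
#include <stdint.h>

/* Offset of the start of block n in a CTZ skip-list, assuming a
 * block size of block_size bytes and 4-byte pointers. The popcount
 * closed form replaces the O(n) summation with O(1) arithmetic. */
static uint32_t ctz_block_start(uint32_t n, uint32_t block_size) {
    return (block_size - 8) * n
         + 4 * (uint32_t)__builtin_popcount(n);
}
```

Inverting this function to find which block contains a given file offset, which is what a read actually needs, can use the same popcount trick and stays _O(1)_ per lookup.
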
The result is a COW data structure that can be stored in _O(n)_, can be appended in _O(1)_, and can be read in _O(n log n)_. All of this while operating in a bounded amount of RAM.