Search: refs:x (results 1 – 7 of 7, sorted by relevance)

/littlefs-2.7.6/scripts/
readblock.py:22 parser.add_argument('block_size', type=lambda x: int(x, 0),
readblock.py:24 parser.add_argument('block', type=lambda x: int(x, 0),
readtree.py:169 parser.add_argument('block_size', type=lambda x: int(x, 0),
readtree.py:172 type=lambda x: int(x, 0),
readtree.py:175 type=lambda x: int(x, 0),
readmdir.py:355 parser.add_argument('block_size', type=lambda x: int(x, 0),
readmdir.py:357 parser.add_argument('block1', type=lambda x: int(x, 0),
readmdir.py:359 parser.add_argument('block2', nargs='?', type=lambda x: int(x, 0),
test.py:188 key=lambda x: len(x[0]), reverse=True):
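
Note: these scripts all parse block addresses with type=lambda x: int(x, 0). Base 0 tells Python to infer the base from the prefix, so one argument accepts decimal, 0x hex, 0o octal, and 0b binary. C's strtol offers the same convention with base 0 (minus the 0b form). A minimal sketch of the C equivalent; the names are illustrative, not taken from littlefs:

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s block_addr\n", argv[0]);
            return 1;
        }

        // base 0 = infer the base from the prefix:
        // "0x2a" is hex, "052" is octal, "42" is decimal
        char *end;
        long block = strtol(argv[1], &end, 0);
        if (*end != '\0') {
            fprintf(stderr, "invalid integer: %s\n", argv[1]);
            return 1;
        }

        printf("block = %ld\n", block);
        return 0;
    }
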
/littlefs-2.7.6/
lfs_util.h:17 #define LFS_STRINGIZE(x) LFS_STRINGIZE2(x)
lfs_util.h:18 #define LFS_STRINGIZE2(x) #x
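
This pair is the standard two-level stringification idiom: #x alone stringizes its argument literally, without macro expansion, so LFS_STRINGIZE bounces through LFS_STRINGIZE2 to let the argument expand first. A small sketch using an illustrative VALUE macro (not from littlefs):

    #include <stdio.h>

    #define LFS_STRINGIZE(x) LFS_STRINGIZE2(x)
    #define LFS_STRINGIZE2(x) #x

    #define VALUE 42

    int main(void) {
        // direct stringize: the argument is not expanded first
        printf("%s\n", LFS_STRINGIZE2(VALUE)); // prints "VALUE"
        // two-level stringize: VALUE expands to 42 before # applies
        printf("%s\n", LFS_STRINGIZE(VALUE));  // prints "42"
        return 0;
    }
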
DESIGN.md:293 storage, in the worst case a small log costs 4x the size of the original data.
DESIGN.md:531 So at 50% usage, we're seeing an average of 2x cost per update, and at 75%
DESIGN.md:532 usage, we're already at an average of 4x cost per update.
DESIGN.md:537 limit. This limits the overhead of garbage collection to 2x the runtime cost,
DESIGN.md:552 of 4x the original size. I imagine users would not be happy if they found
DESIGN.md:741 1. ctz(x) = the number of trailing bits that are 0 in x
DESIGN.md:742 2. popcount(x) = the number of bits that are 1 in x
DESIGN.md:1339 with 4 KiB blocks, this is 12 KiB of overhead. A ridiculous 3072x increase.
DESIGN.md:1426 have a ~4x storage cost, so if our file is smaller than 1/4 the block size,
DESIGN.md:1471 means that our files never use more than 4x storage overhead, decreasing as
[all …]
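
The ctz and popcount definitions at DESIGN.md:741–742 underpin littlefs's CTZ skip-list math. GCC and Clang expose them as the __builtin_ctz and __builtin_popcount intrinsics; below is a portable sketch of the two operations, for illustration only:

    #include <stdio.h>
    #include <stdint.h>

    // portable fallbacks for illustration; GCC/Clang also provide
    // __builtin_ctz and __builtin_popcount
    static unsigned ctz(uint32_t x) {
        // number of trailing 0 bits (not defined here for x == 0)
        unsigned n = 0;
        while (!(x & 1)) {
            x >>= 1;
            n++;
        }
        return n;
    }

    static unsigned popcount(uint32_t x) {
        // number of 1 bits
        unsigned n = 0;
        while (x) {
            n += x & 1;
            x >>= 1;
        }
        return n;
    }

    int main(void) {
        printf("ctz(12) = %u\n", ctz(12));           // 12 = 0b1100 -> 2
        printf("popcount(12) = %u\n", popcount(12)); // 0b1100 -> 2
        return 0;
    }
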
/littlefs-2.7.6/tests/
test_exhaustion.toml:260 // check we increased the lifetime by 2x with ~10% error
test_exhaustion.toml:349 // check we increased the lifetime by 2x with ~10% error
test_exhaustion.toml:438 printf("%08x: wear %d\n", b, wear);
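
test_exhaustion.toml embeds C test cases; the hits above check that wear leveling roughly doubles device lifetime (within ~10%) and dump per-block wear with the block address in hex. A hedged sketch of that shape of check, with made-up numbers rather than the test's real measurements:

    #include <assert.h>
    #include <stdio.h>

    int main(void) {
        // hypothetical lifetimes measured in erase cycles survived;
        // not littlefs's actual test harness or data
        long run1 = 1000; // baseline lifetime
        long run2 = 2050; // lifetime with wear leveling enabled

        // check we increased the lifetime by 2x with ~10% error
        assert(run2 > run1 * 2 * 0.9);

        // dump per-block wear like the test does: block address in hex
        for (unsigned b = 0; b < 4; b++) {
            int wear = 10 + (int)b; // placeholder wear counts
            printf("%08x: wear %d\n", b, wear);
        }
        return 0;
    }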