fs: zms: multiple fixes from previous PR review

This addresses several comments from the review of this PR:
https://github.com/zephyrproject-rtos/zephyr/pull/77930

It also adds a section to the documentation with
recommendations to increase ZMS performance.

Signed-off-by: Riadh Ghaddab <rghaddab@baylibre.com>
Commit 46e1635773 by Riadh Ghaddab, 2024-10-23 18:09:14 +02:00, committed by Mahesh Mahadevan.
7 changed files with 273 additions and 181 deletions.


@@ -201,9 +201,9 @@ An entry has 16 bytes divided between these variables:
struct zms_ate {
	uint8_t crc8;      /* crc8 check of the entry */
	uint8_t cycle_cnt; /* cycle counter for non-erasable devices */
	uint32_t id;       /* data id */
	uint16_t len;      /* data len within sector */
	union {
		uint8_t data[8]; /* used to store small size data */
		struct {
@@ -218,21 +218,22 @@ An entry has 16 bytes divided between these variables:
	};
} __packed;
.. note:: The CRC of the data is checked only when the whole element is read.
   The CRC of the data is not checked for a partial read, as it is computed for the whole element.

.. note:: Enabling the CRC feature on previously existing ZMS content without CRC enabled
   will make all existing data invalid.

.. _free-space:

Available space for user data (key-value pairs)
***********************************************

For both scenarios ZMS should always have an empty sector to be able to perform the
garbage collection (GC).
So, if we suppose that 4 sectors exist in a partition, ZMS will only use 3 sectors to store
key-value pairs and keep one sector empty to be able to launch GC.
The empty sector will rotate between the 4 sectors in the partition.

.. note:: The maximum single data length that could be written at once in a sector is 64K
   (this could change in future versions of ZMS).
@@ -240,8 +241,8 @@ key-value pairs and keep one sector empty to be able to launch GC.
Small data values
=================

Values of 8 bytes or smaller will be stored within the entry (ATE) itself, without writing data
at the top of the sector.
ZMS has an entry size of 16 bytes, which means that the maximum available space in a partition to
store data is computed in this scenario as:
@@ -265,7 +266,7 @@ Large data values
=================

Large data values (> 8 bytes) are stored separately at the top of the sector.
In this case, it is hard to estimate the available free space, as this depends on the size of
the data. But we can take into account that for N bytes of data (N > 8 bytes) an additional
16 bytes of ATE must be added at the bottom of the sector.
@@ -286,17 +287,17 @@ This storage system is optimized for devices that do not require an erase.
Using storage systems that rely on an erase-value (NVS as an example) will need to emulate the
erase with write operations. This will cause a significant decrease in the life expectancy of
these devices and will cause more delays for write operations and for initialization.
ZMS uses a cycle count mechanism that avoids emulating the erase operation for these devices.
It also guarantees that every memory location is written only once for each cycle of sector write.

As an example, to erase a 4096-byte sector on a non-erasable device using NVS, 256 flash writes
must be performed (supposing that write-block-size = 16 bytes), while using ZMS only 1 write of
16 bytes is needed. This operation is 256 times faster in this case.
The garbage collection operation also adds some writes and thus reduces memory cell life
expectancy, as it moves some blocks from one sector to another.
To make the garbage collector not affect the life expectancy of the device it is recommended
to correctly dimension the partition size. Its size should be double the maximum size of
data (including extra headers) that could be written in the storage.

See :ref:`free-space`.
@@ -307,10 +308,10 @@ Device lifetime calculation
Storage devices, whether they are classical flash or new technologies like RRAM/MRAM, have a
limited life expectancy which is determined by the number of times memory cells can be
erased/written.
Flash devices are erased one page at a time as part of their functional behavior (otherwise
memory cells cannot be overwritten) and for non-erasable storage devices memory cells can be
overwritten directly.

A typical scenario is shown here to calculate the life expectancy of a device:
Let's suppose that we store an 8-byte variable using the same ID but its content changes every
minute. The partition has 4 sectors with 1024 bytes each.
Each write of the variable requires 16 bytes of storage.
@@ -361,9 +362,9 @@ Existing features
=================

Version1
--------

- Supports non-erasable devices (only one write operation to erase a sector)
- Supports large partition sizes and sector sizes (64-bit address space)
- Supports 32-bit IDs to store ID/Value pairs
- Small-sized data (<= 8 bytes) is stored in the ATE itself
- Built-in data CRC32 (included in the ATE)
- Versioning of ZMS (to handle future evolution)
@@ -375,7 +376,7 @@ Future features

- Add multiple-format ATE support to be able to use ZMS with different ATE formats that satisfy
  application requirements
- Add the possibility to skip the garbage collector for some application usages where ID/value pairs
  are written periodically and do not exceed half of the partition size (there is always an old
  entry with the same ID).
- Divide IDs into namespaces and allocate IDs on demand from the application to handle collisions
  between IDs used by different subsystems or samples.
@@ -394,9 +395,9 @@ functionality: :ref:`NVS <nvs_api>` and :ref:`FCB <fcb_api>`.
Which one to use in your application will depend on your needs and the hardware you are using,
and this section provides information to help make a choice.

- If you are using a non-erasable technology device like RRAM or MRAM, :ref:`ZMS <zms_api>` is definitely the
  best fit for your storage subsystem, as it is designed to avoid emulating the erase operation using
  large block writes for these devices and replaces it with a single write call.
- For devices with a large write_block_size and/or that need a sector size different from the
  classical flash page size (equal to erase_block_size), :ref:`ZMS <zms_api>` is also the best fit, as these
  parameters can be customized and support for these devices can be added to ZMS.
@@ -414,6 +415,41 @@ verified to make sure that the application could work with one subsystem or the
both solutions could be implemented, the best choice should be based on the calculations of the
life expectancy of the device described in this section: :ref:`wear-leveling`.
Recommendations to increase performance
***************************************
Sector size and count
=====================
- The total size of the storage partition should be well dimensioned to achieve the best
performance for ZMS.
All the information regarding the effectively available free space in ZMS can be found
in the documentation. See :ref:`free-space`.
We recommend choosing a storage partition that can hold double the size of the key-value pairs
that will be written in the storage.
- The size of a sector needs to be dimensioned to hold the maximum data length that will be stored.
  Increasing the sector size slows down each garbage collection pass, but garbage collection
  occurs less frequently.
  Decreasing it, on the contrary, makes each garbage collection pass faster, but it occurs
  more frequently.
- For some subsystems like :ref:`Settings <settings_api>`, each path-value pair is split into two ZMS entries (ATEs).
  The headers needed by the two entries should be accounted for when computing the needed storage space.
- Storing small data (<= 8 bytes) in the ZMS entries can increase performance, as this data is
  written within the entry header.
  For example, for the :ref:`Settings <settings_api>` subsystem, choosing a path name of 8 bytes
  or less can make reads and writes faster.
Dimensioning cache
==================
- When using ZMS API directly, the recommended cache size should be, at least, equal to
the number of different entries that will be written in the storage.
- Each additional cache entry will add 8 bytes to your RAM usage. Cache size should be carefully
chosen.
- If you use ZMS through :ref:`Settings <settings_api>`, you have to take into account that each Settings entry is
divided into two ZMS entries. The recommended cache size should be, at least, twice the number
of Settings entries.
Sample
******


@@ -1,14 +1,14 @@
/* Copyright (c) 2024 BayLibre SAS
 *
 * SPDX-License-Identifier: Apache-2.0
 *
 * ZMS: Zephyr Memory Storage
 */

#ifndef ZEPHYR_INCLUDE_FS_ZMS_H_
#define ZEPHYR_INCLUDE_FS_ZMS_H_

#include <sys/types.h>
#include <zephyr/drivers/flash.h>
#include <zephyr/kernel.h>
#include <zephyr/device.h>
#include <zephyr/toolchain.h>
@@ -18,7 +18,6 @@ extern "C" {
#endif

/**
 * @defgroup zms Zephyr Memory Storage (ZMS)
 * @ingroup file_system_storage
 * @{
@@ -26,37 +25,34 @@ extern "C" {
 */

/**
 * @defgroup zms_data_structures ZMS data structures
 * @ingroup zms
 * @{
 */

/** Zephyr Memory Storage file system structure */
struct zms_fs {
	/** File system offset in flash */
	off_t offset;
	/** Allocation Table Entry (ATE) write address.
	 * Addresses are stored as `uint64_t`:
	 * - high 4 bytes correspond to the sector
	 * - low 4 bytes are the offset in the sector
	 */
	uint64_t ate_wra;
	/** Data write address */
	uint64_t data_wra;
	/** Storage system is split into sectors. The sector size must be a multiple of
	 * `erase-block-size` if the device has erase capabilities
	 */
	uint32_t sector_size;
	/** Number of sectors in the file system */
	uint32_t sector_count;
	/** Current cycle counter of the active sector (pointed to by `ate_wra`) */
	uint8_t sector_cycle;
	/** Flag indicating if the file system is initialized */
	bool ready;
	/** Mutex used to lock flash writes */
	struct k_mutex zms_lock;
	/** Flash device runtime structure */
	const struct device *flash_device;
@@ -65,7 +61,7 @@ struct zms_fs {
	/** Size of an Allocation Table Entry */
	size_t ate_size;
#if CONFIG_ZMS_LOOKUP_CACHE
	/** Lookup table used to cache ATE addresses of written IDs */
	uint64_t lookup_cache[CONFIG_ZMS_LOOKUP_CACHE_SIZE];
#endif
};
@@ -75,78 +71,77 @@ struct zms_fs {
 */

/**
 * @defgroup zms_high_level_api ZMS API
 * @ingroup zms
 * @{
 */

/**
 * @brief Mount a ZMS file system onto the device specified in `fs`.
 *
 * @param fs Pointer to the file system.
 * @retval 0 Success
 * @retval -ERRNO Negative errno code on error
 */
int zms_mount(struct zms_fs *fs);

/**
 * @brief Clear the ZMS file system from device.
 *
 * @param fs Pointer to the file system.
 * @retval 0 Success
 * @retval -ERRNO Negative errno code on error
 */
int zms_clear(struct zms_fs *fs);

/**
 * @brief Write an entry to the file system.
 *
 * @note When the `len` parameter is equal to `0` the entry is effectively removed (it is
 * equivalent to calling @ref zms_delete()). It is not possible to distinguish between a deleted
 * entry and an entry with data of length 0.
 *
 * @param fs Pointer to the file system.
 * @param id ID of the entry to be written
 * @param data Pointer to the data to be written
 * @param len Number of bytes to be written (maximum 64 KiB)
 *
 * @return Number of bytes written. On success, it will be equal to the number of bytes requested
 * to be written or 0.
 * When a rewrite of the same data already stored is attempted, nothing is written to flash,
 * thus 0 is returned. On error, returns negative value of error codes defined in `errno.h`.
 */
ssize_t zms_write(struct zms_fs *fs, uint32_t id, const void *data, size_t len);

/**
 * @brief Delete an entry from the file system
 *
 * @param fs Pointer to the file system.
 * @param id ID of the entry to be deleted
 * @retval 0 Success
 * @retval -ERRNO Negative errno code on error
 */
int zms_delete(struct zms_fs *fs, uint32_t id);

/**
 * @brief Read an entry from the file system.
 *
 * @param fs Pointer to the file system.
 * @param id ID of the entry to be read
 * @param data Pointer to data buffer
 * @param len Number of bytes to read at most
 *
 * @return Number of bytes read. On success, it will be equal to the number of bytes requested
 * to be read or less than that if the stored data has a smaller size than the requested one.
 * On error, returns negative value of error codes defined in `errno.h`.
 */
ssize_t zms_read(struct zms_fs *fs, uint32_t id, void *data, size_t len);

/**
 * @brief Read a history entry from the file system.
 *
 * @param fs Pointer to the file system.
 * @param id ID of the entry to be read
 * @param data Pointer to data buffer
 * @param len Number of bytes to be read
 * @param cnt History counter: 0: latest entry, 1: one before latest ...
@@ -154,40 +149,41 @@ ssize_t zms_read(struct zms_fs *fs, uint32_t id, void *data, size_t len);
 * @return Number of bytes read. On success, it will be equal to the number of bytes requested
 * to be read. When the return value is larger than the number of bytes requested to read this
 * indicates not all bytes were read, and more data is available. On error, returns negative
 * value of error codes defined in `errno.h`.
 */
ssize_t zms_read_hist(struct zms_fs *fs, uint32_t id, void *data, size_t len, uint32_t cnt);

/**
 * @brief Gets the length of the data that is stored in an entry with a given ID
 *
 * @param fs Pointer to the file system.
 * @param id ID of the entry whose data length to retrieve.
 *
 * @return Data length contained in the ATE. On success, it will be equal to the number of bytes
 * in the ATE. On error, returns negative value of error codes defined in `errno.h`.
 */
ssize_t zms_get_data_length(struct zms_fs *fs, uint32_t id);

/**
 * @brief Calculate the available free space in the file system.
 *
 * @param fs Pointer to the file system.
 *
 * @return Number of free bytes. On success, it will be equal to the number of bytes that can
 * still be written to the file system.
 * Calculating the free space is a time-consuming operation, especially on SPI flash.
 * On error, returns negative value of error codes defined in `errno.h`.
 */
ssize_t zms_calc_free_space(struct zms_fs *fs);

/**
 * @brief Tell how much contiguous free space remains in the currently active ZMS sector.
 *
 * @param fs Pointer to the file system.
 *
 * @return Number of free bytes.
 */
size_t zms_active_sector_free_space(struct zms_fs *fs);

/**
 * @brief Close the currently active sector and switch to the next one.
 *
@@ -195,12 +191,12 @@ size_t zms_active_sector_free_space(struct zms_fs *fs);
 * @note The garbage collector is called on the new sector.
 *
 * @warning This routine is made available for specific use cases.
 * It collides with ZMS's goal of avoiding any unnecessary flash erase operations.
 * Using this routine extensively can result in premature failure of the flash device.
 *
 * @param fs Pointer to the file system.
 *
 * @return 0 on success. On error, returns negative value of error codes defined in `errno.h`.
 */
int zms_sector_use_next(struct zms_fs *fs);


@@ -83,7 +83,8 @@ int main(void)
	int rc = 0;
	char buf[16];
	uint8_t key[8] = {0xDE, 0xAD, 0xBE, 0xEF, 0xDE, 0xAD, 0xBE, 0xEF}, longarray[128];
	uint32_t i_cnt = 0U;
	uint32_t i;
	uint32_t id = 0;
	ssize_t free_space = 0;
	struct flash_pages_info info;
@@ -144,7 +145,7 @@ int main(void)
	rc = zms_read(&fs, KEY_VALUE_ID, &key, sizeof(key));
	if (rc > 0) { /* item was found, show it */
		printk("Id: %x, Key: ", KEY_VALUE_ID);
		for (uint8_t n = 0; n < 8; n++) {
			printk("%x ", key[n]);
		}
		printk("\n");
@@ -181,7 +182,7 @@ int main(void)
	if (rc > 0) {
		/* item was found, show it */
		printk("Id: %d, Longarray: ", LONG_DATA_ID);
		for (uint16_t n = 0; n < sizeof(longarray); n++) {
			printk("%x ", longarray[n]);
		}
		printk("\n");
@@ -204,7 +205,7 @@ int main(void)
	}

	if (i != MAX_ITERATIONS) {
		printk("Error: Something went wrong at iteration %u rc=%d\n", i, rc);
		return 0;
	}
@@ -249,7 +250,7 @@ int main(void)
	 * Let's compute free space in storage. But before doing that let's garbage collect
	 * all sectors where we deleted all entries and then compute the free space
	 */
	for (i = 0; i < fs.sector_count; i++) {
		rc = zms_sector_use_next(&fs);
		if (rc) {
			printk("Error while changing sector rc=%d\n", rc);
@@ -261,6 +262,13 @@ int main(void)
		return 0;
	}
	printk("Free space in storage is %u bytes\n", free_space);
/* Let's clean the storage now */
rc = zms_clear(&fs);
if (rc < 0) {
printk("Error while cleaning the storage, rc=%d\n", rc);
}
	printk("Sample code finished successfully\n");

	return 0;


@@ -1,9 +1,9 @@
# Copyright (c) 2024 BayLibre SAS
# SPDX-License-Identifier: Apache-2.0

# Zephyr Memory Storage ZMS

config ZMS
	bool "Zephyr Memory Storage"
	select CRC
@@ -34,19 +34,19 @@ config ZMS_DATA_CRC
	help
	  Enables DATA CRC

config ZMS_CUSTOMIZE_BLOCK_SIZE
	bool "Customize the size of the buffer used internally for reads and writes"
	help
	  ZMS uses an internal buffer to read/write and compare stored data.
	  Increasing the size of this buffer should be done carefully in order to not
	  overflow the stack.
	  Increasing this buffer also means that ZMS could work with storage devices
	  that have a larger write-block-size, which decreases ZMS performance.

config ZMS_CUSTOM_BLOCK_SIZE
	int "ZMS internal buffer size"
	default 32
	depends on ZMS_CUSTOMIZE_BLOCK_SIZE
	help
	  Changes the internal buffer size of ZMS


@@ -1,8 +1,8 @@
/* Copyright (c) 2024 BayLibre SAS
 *
 * SPDX-License-Identifier: Apache-2.0
 *
 * ZMS: Zephyr Memory Storage
 */

#include <string.h>
@@ -42,8 +42,10 @@ static inline size_t zms_lookup_cache_pos(uint32_t id)
static int zms_lookup_cache_rebuild(struct zms_fs *fs)
{
	int rc;
	int previous_sector_num = ZMS_INVALID_SECTOR_NUM;
	uint64_t addr;
	uint64_t ate_addr;
	uint64_t *cache_entry;
	uint8_t current_cycle;
	struct zms_ate ate;
@@ -110,6 +112,19 @@ static inline off_t zms_addr_to_offset(struct zms_fs *fs, uint64_t addr)
	return fs->offset + (fs->sector_size * SECTOR_NUM(addr)) + SECTOR_OFFSET(addr);
}

/* Helper to round down len to the closest multiple of write_block_size */
static inline size_t zms_round_down_write_block_size(struct zms_fs *fs, size_t len)
{
	return len & ~(fs->flash_parameters->write_block_size - 1U);
}

/* Helper to round up len to a multiple of write_block_size */
static inline size_t zms_round_up_write_block_size(struct zms_fs *fs, size_t len)
{
	return (len + (fs->flash_parameters->write_block_size - 1U)) &
	       ~(fs->flash_parameters->write_block_size - 1U);
}

/* zms_al_size returns size aligned to fs->write_block_size */
static inline size_t zms_al_size(struct zms_fs *fs, size_t len)
{
@ -118,7 +133,8 @@ static inline size_t zms_al_size(struct zms_fs *fs, size_t len)
if (write_block_size <= 1U) { if (write_block_size <= 1U) {
return len; return len;
} }
return (len + (write_block_size - 1U)) & ~(write_block_size - 1U);
return zms_round_up_write_block_size(fs, len);
} }
/* Helper to get empty ATE address */ /* Helper to get empty ATE address */
@ -149,7 +165,7 @@ static int zms_flash_al_wrt(struct zms_fs *fs, uint64_t addr, const void *data,
offset = zms_addr_to_offset(fs, addr); offset = zms_addr_to_offset(fs, addr);
blen = len & ~(fs->flash_parameters->write_block_size - 1U); blen = zms_round_down_write_block_size(fs, len);
if (blen > 0) { if (blen > 0) {
rc = flash_write(fs->flash_device, offset, data8, blen); rc = flash_write(fs->flash_device, offset, data8, blen);
if (rc) { if (rc) {
@ -231,10 +247,11 @@ static int zms_flash_block_cmp(struct zms_fs *fs, uint64_t addr, const void *dat
{ {
const uint8_t *data8 = (const uint8_t *)data; const uint8_t *data8 = (const uint8_t *)data;
int rc; int rc;
size_t bytes_to_cmp, block_size; size_t bytes_to_cmp;
size_t block_size;
uint8_t buf[ZMS_BLOCK_SIZE]; uint8_t buf[ZMS_BLOCK_SIZE];
block_size = ZMS_BLOCK_SIZE & ~(fs->flash_parameters->write_block_size - 1U); block_size = zms_round_down_write_block_size(fs, ZMS_BLOCK_SIZE);
while (len) { while (len) {
bytes_to_cmp = MIN(block_size, len); bytes_to_cmp = MIN(block_size, len);
@ -260,10 +277,11 @@ static int zms_flash_block_cmp(struct zms_fs *fs, uint64_t addr, const void *dat
static int zms_flash_cmp_const(struct zms_fs *fs, uint64_t addr, uint8_t value, size_t len) static int zms_flash_cmp_const(struct zms_fs *fs, uint64_t addr, uint8_t value, size_t len)
{ {
int rc; int rc;
size_t bytes_to_cmp, block_size; size_t bytes_to_cmp;
size_t block_size;
uint8_t cmp[ZMS_BLOCK_SIZE]; uint8_t cmp[ZMS_BLOCK_SIZE];
block_size = ZMS_BLOCK_SIZE & ~(fs->flash_parameters->write_block_size - 1U); block_size = zms_round_down_write_block_size(fs, ZMS_BLOCK_SIZE);
(void)memset(cmp, value, block_size); (void)memset(cmp, value, block_size);
while (len) { while (len) {
@ -284,10 +302,11 @@ static int zms_flash_cmp_const(struct zms_fs *fs, uint64_t addr, uint8_t value,
static int zms_flash_block_move(struct zms_fs *fs, uint64_t addr, size_t len) static int zms_flash_block_move(struct zms_fs *fs, uint64_t addr, size_t len)
{ {
int rc; int rc;
size_t bytes_to_copy, block_size; size_t bytes_to_copy;
size_t block_size;
uint8_t buf[ZMS_BLOCK_SIZE]; uint8_t buf[ZMS_BLOCK_SIZE];
block_size = ZMS_BLOCK_SIZE & ~(fs->flash_parameters->write_block_size - 1U); block_size = zms_round_down_write_block_size(fs, ZMS_BLOCK_SIZE);
while (len) { while (len) {
bytes_to_copy = MIN(block_size, len); bytes_to_copy = MIN(block_size, len);
@ -371,17 +390,17 @@ static int zms_ate_crc8_check(const struct zms_ate *entry)
return 1; return 1;
} }
/* zms_ate_valid validates an ate: /* zms_ate_valid validates an ate in the current sector by checking if the ate crc is valid
* return 1 if crc8 and cycle_cnt valid, * and its cycle cnt matches the cycle cnt of the active sector
* 0 otherwise *
* return 1 if ATE is valid,
* 0 otherwise
*
* see: zms_ate_valid_different_sector
*/ */
static int zms_ate_valid(struct zms_fs *fs, const struct zms_ate *entry) static int zms_ate_valid(struct zms_fs *fs, const struct zms_ate *entry)
{ {
if ((fs->sector_cycle != entry->cycle_cnt) || zms_ate_crc8_check(entry)) { return zms_ate_valid_different_sector(fs, entry, fs->sector_cycle);
return 0;
}
return 1;
} }
/* zms_ate_valid_different_sector validates an ate that is in a different /* zms_ate_valid_different_sector validates an ate that is in a different
@ -422,10 +441,11 @@ static inline int zms_get_cycle_on_sector_change(struct zms_fs *fs, uint64_t add
return 0; return 0;
} }
/* zms_close_ate_valid validates an sector close ate: a valid sector close ate: /* zms_close_ate_valid validates a sector close ate.
* - valid ate * A valid sector close ate should be:
* - len = 0 and id = ZMS_HEAD_ID * - a valid ate
* - offset points to location at ate multiple from sector size * - with len = 0 and id = ZMS_HEAD_ID
* - and offset points to location at ate multiple from sector size
* return true if valid, false otherwise * return true if valid, false otherwise
*/ */
static bool zms_close_ate_valid(struct zms_fs *fs, const struct zms_ate *entry) static bool zms_close_ate_valid(struct zms_fs *fs, const struct zms_ate *entry)
@ -434,9 +454,10 @@ static bool zms_close_ate_valid(struct zms_fs *fs, const struct zms_ate *entry)
(entry->id == ZMS_HEAD_ID) && !((fs->sector_size - entry->offset) % fs->ate_size)); (entry->id == ZMS_HEAD_ID) && !((fs->sector_size - entry->offset) % fs->ate_size));
} }
/* zms_empty_ate_valid validates an sector empty ate: a valid sector empty ate: /* zms_empty_ate_valid validates a sector empty ate.
* - valid ate * A valid sector empty ate should be:
* - len = 0xffff and id = 0xffffffff * - a valid ate
* - with len = 0xffff and id = 0xffffffff
* return true if valid, false otherwise * return true if valid, false otherwise
*/ */
static bool zms_empty_ate_valid(struct zms_fs *fs, const struct zms_ate *entry) static bool zms_empty_ate_valid(struct zms_fs *fs, const struct zms_ate *entry)
@ -531,7 +552,8 @@ static int zms_flash_write_entry(struct zms_fs *fs, uint32_t id, const void *dat
*/ */
static int zms_recover_last_ate(struct zms_fs *fs, uint64_t *addr, uint64_t *data_wra) static int zms_recover_last_ate(struct zms_fs *fs, uint64_t *addr, uint64_t *data_wra)
{ {
uint64_t data_end_addr, ate_end_addr; uint64_t data_end_addr;
uint64_t ate_end_addr;
struct zms_ate end_ate; struct zms_ate end_ate;
int rc; int rc;
@ -569,7 +591,8 @@ static int zms_recover_last_ate(struct zms_fs *fs, uint64_t *addr, uint64_t *dat
static int zms_compute_prev_addr(struct zms_fs *fs, uint64_t *addr) static int zms_compute_prev_addr(struct zms_fs *fs, uint64_t *addr)
{ {
int sec_closed; int sec_closed;
struct zms_ate empty_ate, close_ate; struct zms_ate empty_ate;
struct zms_ate close_ate;
*addr += fs->ate_size; *addr += fs->ate_size;
if ((SECTOR_OFFSET(*addr)) != (fs->sector_size - 2 * fs->ate_size)) { if ((SECTOR_OFFSET(*addr)) != (fs->sector_size - 2 * fs->ate_size)) {
@ -632,7 +655,8 @@ static void zms_sector_advance(struct zms_fs *fs, uint64_t *addr)
static int zms_sector_close(struct zms_fs *fs) static int zms_sector_close(struct zms_fs *fs)
{ {
int rc; int rc;
struct zms_ate close_ate, garbage_ate; struct zms_ate close_ate;
struct zms_ate garbage_ate;
close_ate.id = ZMS_HEAD_ID; close_ate.id = ZMS_HEAD_ID;
close_ate.len = 0U; close_ate.len = 0U;
@ -806,7 +830,8 @@ static int zms_find_ate_with_id(struct zms_fs *fs, uint32_t id, uint64_t start_a
{ {
int rc; int rc;
int previous_sector_num = ZMS_INVALID_SECTOR_NUM; int previous_sector_num = ZMS_INVALID_SECTOR_NUM;
uint64_t wlk_prev_addr, wlk_addr; uint64_t wlk_prev_addr;
uint64_t wlk_addr;
int prev_found = 0; int prev_found = 0;
struct zms_ate wlk_ate; struct zms_ate wlk_ate;
uint8_t current_cycle; uint8_t current_cycle;
@ -848,9 +873,19 @@ static int zms_find_ate_with_id(struct zms_fs *fs, uint32_t id, uint64_t start_a
*/ */
static int zms_gc(struct zms_fs *fs) static int zms_gc(struct zms_fs *fs)
{ {
int rc, sec_closed; int rc;
struct zms_ate close_ate, gc_ate, wlk_ate, empty_ate; int sec_closed;
uint64_t sec_addr, gc_addr, gc_prev_addr, wlk_addr, wlk_prev_addr, data_addr, stop_addr; struct zms_ate close_ate;
struct zms_ate gc_ate;
struct zms_ate wlk_ate;
struct zms_ate empty_ate;
uint64_t sec_addr;
uint64_t gc_addr;
uint64_t gc_prev_addr;
uint64_t wlk_addr;
uint64_t wlk_prev_addr;
uint64_t data_addr;
uint64_t stop_addr;
uint8_t previous_cycle = 0; uint8_t previous_cycle = 0;
rc = zms_get_sector_cycle(fs, fs->ate_wra, &fs->sector_cycle); rc = zms_get_sector_cycle(fs, fs->ate_wra, &fs->sector_cycle);
@ -1027,14 +1062,16 @@ end:
static int zms_init(struct zms_fs *fs) static int zms_init(struct zms_fs *fs)
{ {
int rc, sec_closed; int rc;
struct zms_ate last_ate, first_ate, close_ate, empty_ate; int sec_closed;
/* Initialize addr to 0 for the case fs->sector_count == 0. This struct zms_ate last_ate;
* should never happen as this is verified in zms_mount() but both struct zms_ate first_ate;
* Coverity and GCC believe the contrary. struct zms_ate close_ate;
*/ struct zms_ate empty_ate;
uint64_t addr = 0U, data_wra = 0U; uint64_t addr = 0U;
uint32_t i, closed_sectors = 0; uint64_t data_wra = 0U;
uint32_t i;
uint32_t closed_sectors = 0;
bool zms_magic_exist = false; bool zms_magic_exist = false;
k_mutex_lock(&fs->zms_lock, K_FOREVER); k_mutex_lock(&fs->zms_lock, K_FOREVER);
@ -1285,7 +1322,6 @@ end:
int zms_mount(struct zms_fs *fs) int zms_mount(struct zms_fs *fs)
{ {
int rc; int rc;
struct flash_pages_info info; struct flash_pages_info info;
size_t write_block_size; size_t write_block_size;
@ -1299,7 +1335,7 @@ int zms_mount(struct zms_fs *fs)
} }
fs->ate_size = zms_al_size(fs, sizeof(struct zms_ate)); fs->ate_size = zms_al_size(fs, sizeof(struct zms_ate));
write_block_size = flash_get_write_block_size(fs->flash_device); write_block_size = fs->flash_parameters->write_block_size;
/* check that the write block size is supported */ /* check that the write block size is supported */
if (write_block_size > ZMS_BLOCK_SIZE || write_block_size == 0) { if (write_block_size > ZMS_BLOCK_SIZE || write_block_size == 0) {
@ -1357,8 +1393,10 @@ ssize_t zms_write(struct zms_fs *fs, uint32_t id, const void *data, size_t len)
int rc; int rc;
size_t data_size; size_t data_size;
struct zms_ate wlk_ate; struct zms_ate wlk_ate;
uint64_t wlk_addr, rd_addr; uint64_t wlk_addr;
uint32_t gc_count, required_space = 0U; /* no space, appropriate for delete ate */ uint64_t rd_addr;
uint32_t gc_count;
uint32_t required_space = 0U; /* no space, appropriate for delete ate */
int prev_found = 0; int prev_found = 0;
if (!fs->ready) { if (!fs->ready) {
@ -1498,8 +1536,11 @@ int zms_delete(struct zms_fs *fs, uint32_t id)
ssize_t zms_read_hist(struct zms_fs *fs, uint32_t id, void *data, size_t len, uint32_t cnt) ssize_t zms_read_hist(struct zms_fs *fs, uint32_t id, void *data, size_t len, uint32_t cnt)
{ {
int rc, prev_found = 0; int rc;
uint64_t wlk_addr, rd_addr = 0, wlk_prev_addr = 0; int prev_found = 0;
uint64_t wlk_addr;
uint64_t rd_addr = 0;
uint64_t wlk_prev_addr = 0;
uint32_t cnt_his; uint32_t cnt_his;
struct zms_ate wlk_ate; struct zms_ate wlk_ate;
#ifdef CONFIG_ZMS_DATA_CRC #ifdef CONFIG_ZMS_DATA_CRC
@ -1614,12 +1655,22 @@ ssize_t zms_get_data_length(struct zms_fs *fs, uint32_t id)
ssize_t zms_calc_free_space(struct zms_fs *fs) ssize_t zms_calc_free_space(struct zms_fs *fs)
{ {
int rc;
int rc, previous_sector_num = ZMS_INVALID_SECTOR_NUM, prev_found = 0, sec_closed; int previous_sector_num = ZMS_INVALID_SECTOR_NUM;
struct zms_ate step_ate, wlk_ate, empty_ate, close_ate; int prev_found = 0;
uint64_t step_addr, wlk_addr, step_prev_addr, wlk_prev_addr, data_wra = 0U; int sec_closed;
struct zms_ate step_ate;
struct zms_ate wlk_ate;
struct zms_ate empty_ate;
struct zms_ate close_ate;
uint64_t step_addr;
uint64_t wlk_addr;
uint64_t step_prev_addr;
uint64_t wlk_prev_addr;
uint64_t data_wra = 0U;
uint8_t current_cycle; uint8_t current_cycle;
ssize_t free_space = 0; ssize_t free_space = 0;
const uint32_t second_to_last_offset = (2 * fs->ate_size);
if (!fs->ready) { if (!fs->ready) {
LOG_ERR("zms not initialized"); LOG_ERR("zms not initialized");
@ -1683,9 +1734,8 @@ ssize_t zms_calc_free_space(struct zms_fs *fs)
/* Let's look now for special cases where some sectors have only ATEs with /* Let's look now for special cases where some sectors have only ATEs with
* small data size. * small data size.
*/ */
const uint32_t second_to_last_offset = (2 * fs->ate_size);
for (uint32_t i = 0; i < fs->sector_count; i++) { for (int i = 0; i < fs->sector_count; i++) {
step_addr = zms_close_ate_addr(fs, ((uint64_t)i << ADDR_SECT_SHIFT)); step_addr = zms_close_ate_addr(fs, ((uint64_t)i << ADDR_SECT_SHIFT));
/* verify if the sector is closed */ /* verify if the sector is closed */
@ -1718,7 +1768,7 @@ ssize_t zms_calc_free_space(struct zms_fs *fs)
return free_space; return free_space;
} }
size_t zms_sector_max_data_size(struct zms_fs *fs) size_t zms_active_sector_free_space(struct zms_fs *fs)
{ {
if (!fs->ready) { if (!fs->ready) {
LOG_ERR("ZMS not initialized"); LOG_ERR("ZMS not initialized");

View file

@ -1,9 +1,10 @@
/* ZMS: Zephyr Memory Storage /* Copyright (c) 2024 BayLibre SAS
*
* Copyright (c) 2024 BayLibre SAS
* *
* SPDX-License-Identifier: Apache-2.0 * SPDX-License-Identifier: Apache-2.0
*
* ZMS: Zephyr Memory Storage
*/ */
#ifndef __ZMS_PRIV_H_ #ifndef __ZMS_PRIV_H_
#define __ZMS_PRIV_H_ #define __ZMS_PRIV_H_
@ -23,8 +24,8 @@ extern "C" {
#define SECTOR_NUM(x) FIELD_GET(ADDR_SECT_MASK, x) #define SECTOR_NUM(x) FIELD_GET(ADDR_SECT_MASK, x)
#define SECTOR_OFFSET(x) FIELD_GET(ADDR_OFFS_MASK, x) #define SECTOR_OFFSET(x) FIELD_GET(ADDR_OFFS_MASK, x)
#if defined(CONFIG_ZMS_CUSTOM_BLOCK_SIZE) #if defined(CONFIG_ZMS_CUSTOMIZE_BLOCK_SIZE)
#define ZMS_BLOCK_SIZE CONFIG_ZMS_MAX_BLOCK_SIZE #define ZMS_BLOCK_SIZE CONFIG_ZMS_CUSTOM_BLOCK_SIZE
#else #else
#define ZMS_BLOCK_SIZE 32 #define ZMS_BLOCK_SIZE 32
#endif #endif
@ -46,8 +47,8 @@ extern "C" {
struct zms_ate { struct zms_ate {
uint8_t crc8; /* crc8 check of the entry */ uint8_t crc8; /* crc8 check of the entry */
uint8_t cycle_cnt; /* cycle counter for non erasable devices */ uint8_t cycle_cnt; /* cycle counter for non erasable devices */
uint32_t id; /* data id */
uint16_t len; /* data len within sector */ uint16_t len; /* data len within sector */
uint32_t id; /* data id */
union { union {
uint8_t data[8]; /* used to store small size data */ uint8_t data[8]; /* used to store small size data */
struct { struct {

View file

@ -233,8 +233,7 @@ ZTEST_F(zms, test_zms_gc)
int len; int len;
uint8_t buf[32]; uint8_t buf[32];
uint8_t rd_buf[32]; uint8_t rd_buf[32];
const uint8_t max_id = 10;
const uint16_t max_id = 10;
/* 21st write will trigger GC. */ /* 21st write will trigger GC. */
const uint16_t max_writes = 21; const uint16_t max_writes = 21;
@ -243,7 +242,7 @@ ZTEST_F(zms, test_zms_gc)
err = zms_mount(&fixture->fs); err = zms_mount(&fixture->fs);
zassert_true(err == 0, "zms_mount call failure: %d", err); zassert_true(err == 0, "zms_mount call failure: %d", err);
for (uint32_t i = 0; i < max_writes; i++) { for (int i = 0; i < max_writes; i++) {
uint8_t id = (i % max_id); uint8_t id = (i % max_id);
uint8_t id_data = id + max_id * (i / max_id); uint8_t id_data = id + max_id * (i / max_id);
@ -253,11 +252,11 @@ ZTEST_F(zms, test_zms_gc)
zassert_true(len == sizeof(buf), "zms_write failed: %d", len); zassert_true(len == sizeof(buf), "zms_write failed: %d", len);
} }
for (uint32_t id = 0; id < max_id; id++) { for (int id = 0; id < max_id; id++) {
len = zms_read(&fixture->fs, id, rd_buf, sizeof(buf)); len = zms_read(&fixture->fs, id, rd_buf, sizeof(buf));
zassert_true(len == sizeof(rd_buf), "zms_read unexpected failure: %d", len); zassert_true(len == sizeof(rd_buf), "zms_read unexpected failure: %d", len);
for (uint16_t i = 0; i < sizeof(rd_buf); i++) { for (int i = 0; i < sizeof(rd_buf); i++) {
rd_buf[i] = rd_buf[i] % max_id; rd_buf[i] = rd_buf[i] % max_id;
buf[i] = id; buf[i] = id;
} }
@ -268,11 +267,11 @@ ZTEST_F(zms, test_zms_gc)
err = zms_mount(&fixture->fs); err = zms_mount(&fixture->fs);
zassert_true(err == 0, "zms_mount call failure: %d", err); zassert_true(err == 0, "zms_mount call failure: %d", err);
for (uint32_t id = 0; id < max_id; id++) { for (int id = 0; id < max_id; id++) {
len = zms_read(&fixture->fs, id, rd_buf, sizeof(buf)); len = zms_read(&fixture->fs, id, rd_buf, sizeof(buf));
zassert_true(len == sizeof(rd_buf), "zms_read unexpected failure: %d", len); zassert_true(len == sizeof(rd_buf), "zms_read unexpected failure: %d", len);
for (uint16_t i = 0; i < sizeof(rd_buf); i++) { for (int i = 0; i < sizeof(rd_buf); i++) {
rd_buf[i] = rd_buf[i] % max_id; rd_buf[i] = rd_buf[i] % max_id;
buf[i] = id; buf[i] = id;
} }
@ -286,7 +285,7 @@ static void write_content(uint32_t max_id, uint32_t begin, uint32_t end, struct
uint8_t buf[32]; uint8_t buf[32];
ssize_t len; ssize_t len;
for (uint32_t i = begin; i < end; i++) { for (int i = begin; i < end; i++) {
uint8_t id = (i % max_id); uint8_t id = (i % max_id);
uint8_t id_data = id + max_id * (i / max_id); uint8_t id_data = id + max_id * (i / max_id);
@ -303,11 +302,11 @@ static void check_content(uint32_t max_id, struct zms_fs *fs)
uint8_t buf[32]; uint8_t buf[32];
ssize_t len; ssize_t len;
for (uint32_t id = 0; id < max_id; id++) { for (int id = 0; id < max_id; id++) {
len = zms_read(fs, id, rd_buf, sizeof(buf)); len = zms_read(fs, id, rd_buf, sizeof(buf));
zassert_true(len == sizeof(rd_buf), "zms_read unexpected failure: %d", len); zassert_true(len == sizeof(rd_buf), "zms_read unexpected failure: %d", len);
for (uint16_t i = 0; i < ARRAY_SIZE(rd_buf); i++) { for (int i = 0; i < ARRAY_SIZE(rd_buf); i++) {
rd_buf[i] = rd_buf[i] % max_id; rd_buf[i] = rd_buf[i] % max_id;
buf[i] = id; buf[i] = id;
} }
@ -322,7 +321,6 @@ static void check_content(uint32_t max_id, struct zms_fs *fs)
ZTEST_F(zms, test_zms_gc_3sectors) ZTEST_F(zms, test_zms_gc_3sectors)
{ {
int err; int err;
const uint16_t max_id = 10; const uint16_t max_id = 10;
/* 41st write will trigger 1st GC. */ /* 41st write will trigger 1st GC. */
const uint16_t max_writes = 41; const uint16_t max_writes = 41;
@ -410,7 +408,6 @@ ZTEST_F(zms, test_zms_corrupted_sector_close_operation)
uint32_t *flash_write_stat; uint32_t *flash_write_stat;
uint32_t *flash_max_write_calls; uint32_t *flash_max_write_calls;
uint32_t *flash_max_len; uint32_t *flash_max_len;
const uint16_t max_id = 10; const uint16_t max_id = 10;
/* 21st write will trigger GC. */ /* 21st write will trigger GC. */
const uint16_t max_writes = 21; const uint16_t max_writes = 21;
@ -423,7 +420,7 @@ ZTEST_F(zms, test_zms_corrupted_sector_close_operation)
err = zms_mount(&fixture->fs); err = zms_mount(&fixture->fs);
zassert_true(err == 0, "zms_mount call failure: %d", err); zassert_true(err == 0, "zms_mount call failure: %d", err);
for (uint32_t i = 0; i < max_writes; i++) { for (int i = 0; i < max_writes; i++) {
uint8_t id = (i % max_id); uint8_t id = (i % max_id);
uint8_t id_data = id + max_id * (i / max_id); uint8_t id_data = id + max_id * (i / max_id);
@ -465,7 +462,7 @@ ZTEST_F(zms, test_zms_full_sector)
int err; int err;
ssize_t len; ssize_t len;
uint32_t filling_id = 0; uint32_t filling_id = 0;
uint32_t i, data_read; uint32_t data_read;
fixture->fs.sector_count = 3; fixture->fs.sector_count = 3;
@ -493,7 +490,7 @@ ZTEST_F(zms, test_zms_full_sector)
zassert_true(len == sizeof(filling_id), "zms_write failed: %d", len); zassert_true(len == sizeof(filling_id), "zms_write failed: %d", len);
/* sanitycheck on ZMS content */ /* sanitycheck on ZMS content */
for (i = 0; i <= filling_id; i++) { for (int i = 0; i <= filling_id; i++) {
len = zms_read(&fixture->fs, i, &data_read, sizeof(data_read)); len = zms_read(&fixture->fs, i, &data_read, sizeof(data_read));
if (i == 1) { if (i == 1) {
zassert_true(len == -ENOENT, "zms_read shouldn't find the entry: %d", len); zassert_true(len == -ENOENT, "zms_read shouldn't find the entry: %d", len);
@ -511,8 +508,10 @@ ZTEST_F(zms, test_delete)
{ {
int err; int err;
ssize_t len; ssize_t len;
uint32_t filling_id, data_read; uint32_t filling_id;
uint32_t ate_wra, data_wra; uint32_t data_read;
uint32_t ate_wra;
uint32_t data_wra;
fixture->fs.sector_count = 3; fixture->fs.sector_count = 3;
@ -570,7 +569,9 @@ ZTEST_F(zms, test_delete)
*/ */
ZTEST_F(zms, test_zms_gc_corrupt_close_ate) ZTEST_F(zms, test_zms_gc_corrupt_close_ate)
{ {
struct zms_ate ate, close_ate, empty_ate; struct zms_ate ate;
struct zms_ate close_ate;
struct zms_ate empty_ate;
uint32_t data; uint32_t data;
ssize_t len; ssize_t len;
int err; int err;
@ -642,7 +643,8 @@ ZTEST_F(zms, test_zms_gc_corrupt_close_ate)
*/ */
ZTEST_F(zms, test_zms_gc_corrupt_ate) ZTEST_F(zms, test_zms_gc_corrupt_ate)
{ {
struct zms_ate corrupt_ate, close_ate; struct zms_ate corrupt_ate;
struct zms_ate close_ate;
int err; int err;
close_ate.id = 0xffffffff; close_ate.id = 0xffffffff;
@ -685,10 +687,10 @@ ZTEST_F(zms, test_zms_gc_corrupt_ate)
#ifdef CONFIG_ZMS_LOOKUP_CACHE #ifdef CONFIG_ZMS_LOOKUP_CACHE
static size_t num_matching_cache_entries(uint64_t addr, bool compare_sector_only, struct zms_fs *fs) static size_t num_matching_cache_entries(uint64_t addr, bool compare_sector_only, struct zms_fs *fs)
{ {
size_t i, num = 0; size_t num = 0;
uint64_t mask = compare_sector_only ? ADDR_SECT_MASK : UINT64_MAX; uint64_t mask = compare_sector_only ? ADDR_SECT_MASK : UINT64_MAX;
for (i = 0; i < CONFIG_ZMS_LOOKUP_CACHE_SIZE; i++) { for (int i = 0; i < CONFIG_ZMS_LOOKUP_CACHE_SIZE; i++) {
if ((fs->lookup_cache[i] & mask) == addr) { if ((fs->lookup_cache[i] & mask) == addr) {
num++; num++;
} }
@ -759,20 +761,19 @@ ZTEST_F(zms, test_zms_cache_collission)
{ {
#ifdef CONFIG_ZMS_LOOKUP_CACHE #ifdef CONFIG_ZMS_LOOKUP_CACHE
int err; int err;
uint32_t id;
uint16_t data; uint16_t data;
fixture->fs.sector_count = 4; fixture->fs.sector_count = 4;
err = zms_mount(&fixture->fs); err = zms_mount(&fixture->fs);
zassert_true(err == 0, "zms_mount call failure: %d", err); zassert_true(err == 0, "zms_mount call failure: %d", err);
for (id = 0; id < CONFIG_ZMS_LOOKUP_CACHE_SIZE + 1; id++) { for (int id = 0; id < CONFIG_ZMS_LOOKUP_CACHE_SIZE + 1; id++) {
data = id; data = id;
err = zms_write(&fixture->fs, id, &data, sizeof(data)); err = zms_write(&fixture->fs, id, &data, sizeof(data));
zassert_equal(err, sizeof(data), "zms_write call failure: %d", err); zassert_equal(err, sizeof(data), "zms_write call failure: %d", err);
} }
for (id = 0; id < CONFIG_ZMS_LOOKUP_CACHE_SIZE + 1; id++) { for (int id = 0; id < CONFIG_ZMS_LOOKUP_CACHE_SIZE + 1; id++) {
err = zms_read(&fixture->fs, id, &data, sizeof(data)); err = zms_read(&fixture->fs, id, &data, sizeof(data));
zassert_equal(err, sizeof(data), "zms_read call failure: %d", err); zassert_equal(err, sizeof(data), "zms_read call failure: %d", err);
zassert_equal(data, id, "incorrect data read"); zassert_equal(data, id, "incorrect data read");
@ -846,7 +847,7 @@ ZTEST_F(zms, test_zms_cache_hash_quality)
/* Write ZMS IDs from 0 to CONFIG_ZMS_LOOKUP_CACHE_SIZE - 1 */ /* Write ZMS IDs from 0 to CONFIG_ZMS_LOOKUP_CACHE_SIZE - 1 */
for (uint16_t i = 0; i < CONFIG_ZMS_LOOKUP_CACHE_SIZE; i++) { for (int i = 0; i < CONFIG_ZMS_LOOKUP_CACHE_SIZE; i++) {
id = i; id = i;
data = 0; data = 0;
@ -869,7 +870,7 @@ ZTEST_F(zms, test_zms_cache_hash_quality)
/* Write CONFIG_ZMS_LOOKUP_CACHE_SIZE ZMS IDs that form the following series: 0, 4, 8... */ /* Write CONFIG_ZMS_LOOKUP_CACHE_SIZE ZMS IDs that form the following series: 0, 4, 8... */
for (uint16_t i = 0; i < CONFIG_ZMS_LOOKUP_CACHE_SIZE; i++) { for (int i = 0; i < CONFIG_ZMS_LOOKUP_CACHE_SIZE; i++) {
id = i * 4; id = i * 4;
data = 0; data = 0;