Quick tip on writing status data to CDB_OPERATIONAL datastore.
It is possible to write to this datastore using either the CDB API or the MAAPI API.
MAAPI is likely to be the most suitable API if you need atomicity.
You pay a little extra CPU overhead with MAAPI transactions, so for systems where atomicity can still be met using, for example, the CDB API cdb_set_values() call, you may want to consider that option for best performance.
To visualise a MAAPI transaction towards the CDB_OPERATIONAL datastore using maapi_delete() and
maapi_load_config_file(), here is a trace from the confd_cmd tool (I used the "-d -d" flags):
TRACE MAAPI_START_USER_SESSION --> CONFD_OK <-- Session is started
TRACE MAAPI_START_TRANS --> CONFD_OK <-- The transaction towards the CDB_OPERATIONAL datastore is started
Here you can do multiple writes, reads, and deletes. All of them go to a private diff which is not visible to northbound managers until the transaction is applied/committed.
TRACE MAAPI_DELETE /path/to/old/data --> CONFD_OK <-- Data is deleted - this is done to the private diff, not yet to the CDB_OPERATIONAL datastore
TRACE MAAPI_LOAD_CONFIG_FILE --> CONFD_OK <-- New data is written - this is done to the private diff, not yet to the CDB_OPERATIONAL datastore
TRACE MAAPI_APPLY_TRANS --> CONFD_OK <-- The transaction is applied and the new data is visible to the manager
Note that you do not have to finish the transaction and close the session here. If you want to avoid the time it takes to restart the user session and transaction later you can leave both the transaction and session open.
TRACE MAAPI_STOP_TRANS --> CONFD_OK <-- The transaction is finished
TRACE MAAPI_END_USER_SESSION --> CONFD_OK <-- The session is closed
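The sequence above can be sketched in C. This is a hedged sketch, assuming a socket already connected to ConfD with maapi_connect(), an "admin" user, and hypothetical paths; as the write step it uses maapi_set_elem2() for brevity where the trace used MAAPI_LOAD_CONFIG_FILE, and error handling is abbreviated:

```c
#include <arpa/inet.h>
#include <confd_lib.h>
#include <confd_maapi.h>

/* msock: a socket already connected to ConfD with maapi_connect() */
static int write_oper_data(int msock)
{
    const char *groups[] = { "admin" };
    struct confd_ip ip;
    int th;

    ip.af = AF_INET;
    inet_pton(AF_INET, "127.0.0.1", &ip.ip.v4);

    /* MAAPI_START_USER_SESSION */
    if (maapi_start_user_session(msock, "admin", "system", groups, 1,
                                 &ip, CONFD_PROTO_TCP) != CONFD_OK)
        return CONFD_ERR;

    /* MAAPI_START_TRANS towards the CDB_OPERATIONAL datastore */
    if ((th = maapi_start_trans(msock, CONFD_OPERATIONAL,
                                CONFD_READ_WRITE)) < 0)
        return CONFD_ERR;

    /* MAAPI_DELETE: goes to the private diff only (hypothetical path) */
    maapi_delete(msock, th, "/path/to/old/data");

    /* New data is written; the trace used MAAPI_LOAD_CONFIG_FILE here,
     * a simple leaf write is shown instead (hypothetical path/value) */
    maapi_set_elem2(msock, th, "42", "/path/to/new/leaf");

    /* MAAPI_APPLY_TRANS: the new data becomes visible to managers */
    if (maapi_apply_trans(msock, th, 0) != CONFD_OK)
        return CONFD_ERR;

    /* Optionally skip the two calls below and keep the transaction and
     * session open for reuse, as noted above */
    maapi_finish_trans(msock, th);      /* MAAPI_STOP_TRANS */
    maapi_end_user_session(msock);      /* MAAPI_END_USER_SESSION */
    return CONFD_OK;
}
```

The sketch cannot run standalone since it needs a live ConfD daemon and a loaded data model.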
If you cannot afford the RAM consumption of
maapi_load_config_file() when loading large operational data updates, you can chunk your writes into multiple smaller write calls within the same transaction.
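For example (a hedged sketch, assuming an already open MAAPI transaction towards CDB_OPERATIONAL and a hypothetical /routes/route{key}/prefix list): each chunk is one small write into the same private diff, and a single apply at the end keeps the whole update atomic.

```c
#include <confd_lib.h>
#include <confd_maapi.h>

/* msock/th: a connected MAAPI socket and an open transaction towards
 * CONFD_OPERATIONAL; keys/prefixes hold the data to load, written as
 * one small call per list entry instead of one big file load */
static int load_routes_chunked(int msock, int th,
                               const char **keys, const char **prefixes,
                               int n)
{
    for (int i = 0; i < n; i++) {
        /* hypothetical list path: each call adds a little to the
         * private diff, so peak RAM per write call stays small */
        if (maapi_set_elem2(msock, th, prefixes[i],
                            "/routes/route{%s}/prefix", keys[i]) != CONFD_OK)
            return CONFD_ERR;
    }
    /* one apply makes all the chunks visible atomically */
    return maapi_apply_trans(msock, th, 0);
}
```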
Another option is to use the CDB API to write the data to the CDB_OPERATIONAL datastore. The benefit is lower CPU overhead, since with the CDB API you are not executing in the context of a transaction engine as you do through MAAPI, NETCONF, REST, CLI, etc.
The downside is that the CDB API goes directly to the datastore, and hence there is, for example, no atomicity across calls. One way to get around that, as I showed you, is to do the delete and the write in the same operation, i.e. the same
cdb_set_values() operation. Here is a visualisation:
TRACE CDB_NEW_SESSION --> CONFD_OK <-- Session is started
Unlike MAAPI, which goes through the transaction engine and writes to a private diff, the CDB API writes directly into the CDB datastores.
TRACE CDB_NUM_INSTANCES /routes/route --> CONFD_OK <-- In this example we get the number of instances in a list that we want to delete later in our cdb_set_values() call
TRACE CDB_GET_VALUES /routes --> CONFD_OK <-- Now we can get the keys for all list instances that we want to delete in our cdb_set_values() call
TRACE CDB_SET_VALUES /routes --> CONFD_OK <-- This call, in one single operation, first deletes all the old list entries and then writes the new ones
Note that you do not have to close the session here. If you want to avoid the time it takes to restart the session later, you can leave the session open.
TRACE CDB_END_SESSION --> CONFD_OK
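A hedged C sketch of this sequence, against a hypothetical /routes/route{name} list. The confdc-generated header "routes.h" with its routes__ns and tag constants is assumed, as is using the CONFD_SET_TAG_XMLBEGINDEL marker to express the list-entry deletes inside the cdb_set_values() array; error handling and the cdb_get_values() key fetch are abbreviated:

```c
#include <confd_lib.h>
#include <confd_cdb.h>
#include "routes.h"   /* hypothetical confdc-generated ns/tag constants */

#define MAX_TVS 256

/* csock: a socket already connected to ConfD with cdb_connect() */
static int replace_routes(int csock)
{
    confd_tag_value_t tv[MAX_TVS];
    int i = 0, n_old, j;

    /* CDB_NEW_SESSION: no private diff, we write straight into CDB */
    if (cdb_start_session(csock, CDB_OPERATIONAL) != CONFD_OK)
        return CONFD_ERR;

    /* CDB_NUM_INSTANCES: how many old entries we will delete */
    n_old = cdb_num_instances(csock, "/routes/route");

    /* CDB_GET_VALUES would fetch the old entries' keys here (elided) */

    /* First mark every old list entry for deletion ... */
    for (j = 0; j < n_old; j++) {
        CONFD_SET_TAG_XMLBEGINDEL(&tv[i++], routes_route, routes__ns);
        /* ... a key leaf from the fetched keys identifies the entry ... */
        CONFD_SET_TAG_XMLEND(&tv[i++], routes_route, routes__ns);
    }

    /* ... then append the new entries (one shown, values hypothetical) */
    CONFD_SET_TAG_XMLBEGIN(&tv[i++], routes_route, routes__ns);
    CONFD_SET_TAG_STR(&tv[i++], routes_name, "r1");
    CONFD_SET_TAG_STR(&tv[i++], routes_prefix, "10.0.0.0/24");
    CONFD_SET_TAG_XMLEND(&tv[i++], routes_route, routes__ns);

    /* CDB_SET_VALUES: the deletes and writes land in one single operation */
    if (cdb_set_values(csock, tv, i, "/routes") != CONFD_OK)
        return CONFD_ERR;

    return cdb_end_session(csock);   /* CDB_END_SESSION */
}
```

As with the MAAPI sketch, this needs a live ConfD daemon and the matching YANG model to run.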
So the main potential benefit of the CDB API and
cdb_set_values() is lower CPU overhead. You will not be able to do multiple operations atomically as you can with MAAPI transactions - e.g. several write operations without risking that a manager reads in between - and hence it is not possible to chunk the writes to lower RAM use while keeping atomicity.
MAAPI will likely be the most suitable API if you need atomicity, for example to enable chunked writes that lower RAM use.
You pay a little extra CPU overhead with MAAPI transactions, so for systems where the atomicity and/or RAM requirements can still be met using the CDB API and, for example,
cdb_set_values(), you may want to consider that option.