Hello Folks,
I have some doubts regarding the handling of multiple ‘edit-config’ transactions from different northbound clients, which can happen in parallel. The user guide does not make this clear.
So, from my understanding, there are two options:
Use a lock.
Use ‘/confdConfig/commitRetryTimeout’.
With a lock, the transaction fails when someone else is holding the lock. This means the client should retry, maybe ‘N’ times, before giving up. Is this right?
With the config entry ‘/confdConfig/commitRetryTimeout’ set but WITHOUT locking, I also get the error ‘datastore is locked’, yet all the transactions seem to get committed. So the client does not actually know whether the commit succeeded or not.
Without either of the above, I still get ‘datastore is locked’, and, as expected, quite a few of the changes are also lost.
So, what is the recommended configuration to handle such a scenario, both for a ‘writable-through-candidate’ and for a ‘read-write’ running configuration?
Note that if you have a subscriber application notified of a configuration change, the app will take a CDB read lock when reading the new configuration, e.g. through cdb_start_session(). Northbound writes to CDB will be blocked until your application releases the read lock, for example through cdb_end_session().
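As a minimal sketch of such a subscriber (assuming subsock and datasock were already connected with cdb_connect() and the subscription registered with cdb_subscribe(); error handling omitted):

```c
#include <confd_lib.h>
#include <confd_cdb.h>

/* Handle one subscription notification. Between cdb_start_session() and
 * cdb_end_session() CDB holds a read lock, so northbound writers must
 * wait (or keep retrying until commitRetryTimeout expires). */
static void handle_notification(int subsock, int datasock)
{
    int points[8], n;

    /* Blocks until a subscribed part of the configuration has changed. */
    cdb_read_subscription_socket(subsock, points, &n);

    /* Open a read session against running: this takes the read lock. */
    cdb_start_session(datasock, CDB_RUNNING);

    /* ... read the new configuration here with cdb_get_*() calls ... */

    /* Release the read lock as soon as possible. */
    cdb_end_session(datasock);

    /* Tell ConfD this subscriber is done so the commit can complete. */
    cdb_sync_subscription_socket(subsock, CDB_DONE_PRIORITY);
}
```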
If your commitRetryTimeout is shorter than the time an <edit-config> or a subscriber read lock holds the database, you will get failed transactions due to the lock.
There is no difference in how the commit-to-running scenario is handled with a writable-through-candidate or with a read-write running configuration. Before the commit in a read-write running scenario you are writing to your own private copy of running anyway.
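For reference, the running mode is selected in confd.conf; a sketch, assuming the standard /confdConfig/datastores/running/access setting:

```xml
<confdConfig xmlns="http://tail-f.com/ns/confd_cfg/1.0">
  <datastores>
    <running>
      <!-- "writable-through-candidate" or "read-write" -->
      <access>writable-through-candidate</access>
    </running>
  </datastores>
</confdConfig>
```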
Note that if you have the candidate enabled and do a confirmed commit, you either provide a <confirm-timeout> or the default timeout of 600 seconds applies.
Example from RFC 6241, Section 8.4 (a confirmed <commit> with a 120-second <confirm-timeout>):
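```xml
<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <commit>
    <confirmed/>
    <confirm-timeout>120</confirm-timeout>
  </commit>
</rpc>

<rpc-reply message-id="101"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <ok/>
</rpc-reply>
```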
When you are writing to a candidate, the candidate is not private, so you will likely want to lock the candidate. See ConfD UG Advanced Topics → Datastores → candidate for details on locking the candidate or not.
If your commitRetryTimeout is shorter than the time an <edit-config> or a subscriber read lock holds the database, you will get failed transactions due to the lock.
I had set it to ‘PT4S’, and I am sure the transactions were pretty short (on the order of milliseconds); even then I was getting ‘datastore is locked’ errors.
Is locking necessary when we are depending on this configuration, assuming that it works? As I said before, no commits were lost when running with this config (without locking), but I still got ‘locked…’ errors.
How are ‘locking’ and ‘commitRetryTimeout’ related? Are they supposed to be used together, or can they be used independently?
See ConfD UG Advanced Topics → Datastores → candidate for details on locking the candidate or not.
Yes, I was referring to that section, but it does not go into detail on how one should configure the system for atomicity.
Section 27.1 mentions that the candidate can be used without taking a lock, and that is what I was trying to do with ‘commitRetryTimeout’, so that the transactions would not throw an error most of the time.
If you do a NETCONF <lock> on the candidate or running datastore, that is a global lock. The commitRetryTimeout will not help you there.
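For example, a global lock on running over NETCONF (held until <unlock> or until the session ends):

```xml
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <lock>
    <target><running/></target>
  </lock>
</rpc>
```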
The commitRetryTimeout controls how long the commit operation will keep trying to complete when some other entity is locking the database, e.g. some other commit is in progress or some managed object is locking the database.
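In confd.conf that would look something like this (PT5S is just an illustrative xs:duration value):

```xml
<confdConfig xmlns="http://tail-f.com/ns/confd_cfg/1.0">
  <!-- Keep retrying a commit for up to 5 seconds while the database
       is locked, e.g. by another commit or a subscriber read lock. -->
  <commitRetryTimeout>PT5S</commitRetryTimeout>
</confdConfig>
```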
Taking a lock on the running datastore is usually not necessary; ConfD makes sure that transactions are handled one by one. You would only lock the running datastore if you, for some reason, intend to do several transactions, e.g. multiple <edit-config> operations, and do not want anyone else to write to running during that time.
As ConfD UG Advanced Topics → Datastores → candidate states, a candidate lock is usually good practice.
I have a question. If the previous transaction has the lock, how is it possible that another commit operation is attempting to complete? Because when the first transaction holds the lock, per my understanding, the second transaction cannot even start.
Correct, but note that we are discussing the candidate datastore in this topic. To prevent others from writing to the candidate before you do a commit to the running datastore, best practice is to first take a lock on the candidate datastore and unlock it when you have committed your changes.
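For example, a typical candidate sequence over NETCONF (message-ids illustrative, config payload elided):

```xml
<!-- 1. Lock the candidate so nobody else can write to it. -->
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <lock><target><candidate/></target></lock>
</rpc>

<!-- 2. Edit the candidate. -->
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><candidate/></target>
    <config><!-- configuration changes here --></config>
  </edit-config>
</rpc>

<!-- 3. Commit the candidate to running. -->
<rpc message-id="103" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <commit/>
</rpc>

<!-- 4. Release the lock. -->
<rpc message-id="104" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <unlock><target><candidate/></target></unlock>
</rpc>
```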
Thanks for clarifying. Two more questions: in our product we only use the running datastore, and from our RESTful API we are using MAAPI, which starts a session/transaction. Does that mean it uses the lock mechanism and multiple requests will be handled one by one?
If it’s using a lock, does the lock only apply to writes, or to both reads and writes?
How about the Python client “netconf-console”; does it have a lock mechanism like MAAPI?
I read that you are not using ConfD’s RESTCONF API, but have built a RESTful API on top of MAAPI for some reason.
we are using MAAPI, which starts a session/transaction. Does that mean it uses the lock mechanism and multiple requests will be handled one by one?
Note that there is a difference between a “global lock” and a “transaction lock”.
To try to take a “global lock” using MAAPI you will have to call maapi_lock(). The “transaction lock” is taken when you call maapi_apply_trans() or maapi_validate_trans().
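As a minimal sketch under those assumptions (sock is an authenticated MAAPI socket with a user session attached; error handling omitted, and the path in the comment is hypothetical):

```c
#include <confd_lib.h>
#include <confd_maapi.h>

/* Write to running through MAAPI. The transaction lock is taken inside
 * maapi_apply_trans(), so concurrent writers are serialized by ConfD.
 * A global lock would instead be maapi_lock(sock, CONFD_RUNNING). */
static void write_config(int sock)
{
    /* Start a read-write transaction towards running. */
    int th = maapi_start_trans(sock, CONFD_RUNNING, CONFD_READ_WRITE);

    /* ... stage changes, e.g. maapi_set_elem2(sock, th, "value",
     *     "/some/hypothetical/path") ... */

    maapi_apply_trans(sock, th, 0);  /* validate + commit, takes the lock */
    maapi_finish_trans(sock, th);
}
```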
Regarding locks, see the ConfD UG under Advanced Topics → Locks for more details. See also examples.confd/timeout_and_locks in the ConfD example set.
If it’s using a lock, does the lock only apply to writes, or to both reads and writes?
Northbound interface clients that write do not lock out other northbound clients from reading.
How about the Python client “netconf-console”; does it have a lock mechanism like MAAPI?
An <edit-config> with <target><running/></target> will try to take a “transaction lock” on running when writing the config. For info on how to take a “global lock” over the NETCONF interface, see RFC 6241, Network Configuration Protocol (NETCONF).
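For example (the <config> payload is elided):

```xml
<!-- The transaction lock is taken implicitly for the duration of this write. -->
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><running/></target>
    <config><!-- configuration changes here --></config>
  </edit-config>
</rpc>
```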