Transactions per second supported by basic ConfD

Hi,
I tried checking how many transactions per second basic ConfD supports. In my test cases I increased the Nr value by 1 each time, so I executed 4 test cases with Nr = 1, 2, 3, 4. With Nr = 1 it supported about 3k req/sec, and as I increased Nr the rate decreased. Doesn’t that seem very low? Furthermore, when checking top I saw that in every case only one thread was scheduled, while all the other threads were sleeping.
The ConfD server and the applications ran on separate multi-core machines.

Please, can you describe in more detail what test scenario you used?

And what is the definition of “transactions per second” to you? Even more importantly, what is your definition of a “transaction”?

A transaction means one get or set operation. Currently I only do a get of a leaf of type uint32.

Here is the pseudo code I used:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define maxThread 4                 /* Nr: number of reader threads */
#define NUM_GETS_PER_THREAD 1000

void *myThread(void *arg)
{
    long *timeTaken = malloc(sizeof(long));
    long start = get_start_time();              /* placeholder: current time in ms */

    /* issue the reads directly via the CDB API */
    for (int i = 0; i < NUM_GETS_PER_THREAD; i++) {
        cdb_get_uint32();                       /* placeholder for the real CDB read */
    }

    *timeTaken = get_end_time() - start;        /* elapsed time in milliseconds */
    return timeTaken;                           /* collected via pthread_join() */
}

int main(void)
{
    pthread_t tid[maxThread];
    long totalTimeTaken = 0;
    void *retVal;

    for (int numThread = 0; numThread < maxThread; numThread++) {
        pthread_create(&tid[numThread], NULL, myThread, NULL);
    }
    for (int numThread = 0; numThread < maxThread; numThread++) {
        pthread_join(tid[numThread], &retVal);
        totalTimeTaken += *(long *)retVal;      /* sum the per-thread times */
        free(retVal);
    }

    /* total get operations = maxThread * NUM_GETS_PER_THREAD; time is in ms */
    double transactionPerSec =
        ((double)(maxThread * NUM_GETS_PER_THREAD) / totalTimeTaken) * 1000;
    printf("transactions per second: %.0f\n", transactionPerSec);

    return 0;
}

First, note that your cdb_get_uint32() call does not pass through the ConfD transaction engine.
It goes straight to CDB, so it is not part of a transaction; it is just a CDB API call.
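
For reference, a direct CDB read (using the library call cdb_get_u_int32()) looks roughly like the sketch below. It is a minimal sketch only: the 127.0.0.1 address and the /test/counter path are assumptions, and most error handling is omitted. Note that no transaction is started anywhere; the value is fetched straight from the CDB server over the read socket.

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <confd_lib.h>
#include <confd_cdb.h>

/* Direct CDB API read: connect a read socket, start a CDB session,
   read the leaf. confd_init() is assumed to have been called once. */
int read_value_cdb(u_int32_t *val)
{
    struct sockaddr_in addr;
    int sock = socket(PF_INET, SOCK_STREAM, 0);

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    addr.sin_port = htons(CONFD_PORT);

    if (cdb_connect(sock, CDB_READ_SOCKET, (struct sockaddr *)&addr,
                    sizeof(addr)) != CONFD_OK)
        return -1;
    cdb_start_session(sock, CDB_RUNNING);

    cdb_get_u_int32(sock, val, "/test/counter");  /* placeholder path */

    cdb_end_session(sock);
    cdb_close(sock);
    return 0;
}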

If you want to test the parallelism of ConfD running transactions towards CDB, I suggest you go through a northbound interface such as MAAPI (maapi_get_u_int32_elem()), NETCONF, or RESTCONF GET.
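
For instance, a read that actually runs through the transaction engine could look something like the sketch below, done over MAAPI: start a read transaction, read the leaf, finish the transaction. The user name "admin", context "system", the 127.0.0.1 address, and the /test/counter path are assumptions, and error handling is mostly left out.

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <confd_lib.h>
#include <confd_maapi.h>

/* MAAPI read inside a ConfD transaction.
   confd_init() is assumed to have been called once at startup. */
int read_value_maapi(u_int32_t *val)
{
    struct sockaddr_in addr;
    struct confd_ip ip;
    const char *groups[] = { "admin" };
    int sock = socket(PF_INET, SOCK_STREAM, 0);
    int th;

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    addr.sin_port = htons(CONFD_PORT);

    ip.af = AF_INET;
    inet_pton(AF_INET, "127.0.0.1", &ip.ip.v4);

    if (maapi_connect(sock, (struct sockaddr *)&addr, sizeof(addr)) != CONFD_OK)
        return -1;
    if (maapi_start_user_session(sock, "admin", "system", groups, 1,
                                 &ip, CONFD_PROTO_TCP) != CONFD_OK)
        return -1;

    th = maapi_start_trans(sock, CONFD_RUNNING, CONFD_READ);

    maapi_get_u_int32_elem(sock, th, val, "/test/counter");  /* placeholder path */

    maapi_finish_trans(sock, th);
    maapi_end_user_session(sock);
    maapi_close(sock);
    return 0;
}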

You won’t see much gain from running ConfD in SMP mode when the use case bypasses the ConfD transaction engine and reads data directly from CDB through the CDB API, since some key processes in ConfD, such as the CDB server process, are still single processes. So if the use case is CDB API centric, it won’t scale up much when running ConfD in SMP mode.

However, transactions over for example MAAPI, NETCONF, etc. will take advantage of the fact that ConfD is massively parallel, so increasing the --smp flag value will spread the transaction load. E.g. if many sessions come in, they will be spread evenly across the number of cores specified.
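
For example, assuming a ConfD version with SMP support, that could mean starting ConfD with something like the following (exact syntax may differ between versions):

confd -c confd.conf --smp 4

Incoming northbound sessions and their transactions can then be scheduled across those threads/cores instead of all being handled by a single one.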