ConfD User Community

Stream socket is slow

Hi

I am using the Python API maapi.save_config to read data, but I noticed that reading from the stream is rather slow.
It might be because ConfD needs to process the data before sending it to the socket. However, I found something strange: the data arrives in chunks of around 10000 bytes per read, regardless of how big my read buffer is. I suspect this is controlled on the sender side, i.e. ConfD. I am wondering whether that chunk size can be tuned or configured on the ConfD side.

id = maapi.save_config(maapi_sock, sub_thandle, maapi.CONFIG_JSON, sub_path)
sstream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
_confd.stream_connect(sstream, id, 0, confd_addr, confd_port)

BUFF_SIZE = 4096 * 1024  # 4 MB
data = b''
datasize = 0
# save data to a file
while True:
    part = sstream.recv(BUFF_SIZE)  # <-- I get around 10000 bytes on every read
    if part:
        datasize += len(part)
        log.info("read data %s", datasize)  # <-- the increment is always the same, no matter how big BUFF_SIZE is
        data += part
    else:
        sstream.close()
        break
2022-09-13 13:08:45,732:cdbl        : INFO     read data 10017 
2022-09-13 13:08:45,740:cdbl        : INFO     read data 20065   <------------------ I get around 10000 every time of read
2022-09-13 13:08:45,748:cdbl        : INFO     read data 30117 
2022-09-13 13:08:45,756:cdbl        : INFO     read data 40138 
2022-09-13 13:08:45,763:cdbl        : INFO     read data 50175 
2022-09-13 13:08:45,770:cdbl        : INFO     read data 60218 
2022-09-13 13:08:45,779:cdbl        : INFO     read data 70227 
2022-09-13 13:08:45,787:cdbl        : INFO     read data 80233 
2022-09-13 13:08:45,793:cdbl        : INFO     read data 90270 
2022-09-13 13:08:45,801:cdbl        : INFO     read data 100301
2022-09-13 13:08:45,809:cdbl        : INFO     read data 110364
2022-09-13 13:08:45,817:cdbl        : INFO     read data 120366
2022-09-13 13:08:45,824:cdbl        : INFO     read data 130374                                                                         
2022-09-13 13:08:45,831:cdbl        : INFO     read data 140376                                                                         
2022-09-13 13:08:45,839:cdbl        : INFO     read data 150378                                                                         
2022-09-13 13:08:45,868:cdbl        : INFO     read data 183146                                                                         
2022-09-13 13:08:45,868:cdbl        : INFO     read data 190529                                                                         
2022-09-13 13:08:45,877:cdbl        : INFO     read data 200573                                                                         
2022-09-13 13:08:45,885:cdbl        : INFO     read data 210587                                                                         
2022-09-13 13:08:45,893:cdbl        : INFO     read data 220602                                                                         
2022-09-13 13:08:45,901:cdbl        : INFO     read data 230663   

Thanks

You are right in that ConfD sends data in approx. 10k chunks. But that does not mean much. The fact that you are receiving exactly those chunks only indicates that you are "processing" them faster than ConfD is able to produce them. Insert a very small sleep (such as time.sleep(0.01)) between your reads and you will see that you receive larger chunks while your overall time does not change - the data is stored in the IP stack's buffers and retrieved by your application whenever the application is ready.
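The buffering effect can be demonstrated without ConfD at all. Here is a minimal sketch using a plain socketpair, where a writer thread stands in for ConfD sending small chunks (the CHUNK and N_CHUNKS values are made up for illustration):

```python
import socket
import threading
import time

CHUNK = 1000      # pretend the sender pushes ~1 kB at a time
N_CHUNKS = 50

def writer(sock):
    # Simulates ConfD writing many small chunks, then closing.
    for _ in range(N_CHUNKS):
        sock.sendall(b'x' * CHUNK)
    sock.close()

rd, wr = socket.socketpair()
threading.Thread(target=writer, args=(wr,), daemon=True).start()

received = 0
reads = 0
while True:
    time.sleep(0.01)             # let data pile up in the kernel buffer
    part = rd.recv(4096 * 1024)  # 4 MB buffer, as in the original code
    if not part:
        break
    received += len(part)
    reads += 1
rd.close()

# All the data arrives, but in far fewer recv() calls than chunks sent,
# because the kernel coalesces the small writes in its socket buffer.
print(reads, received)
```

The same applies to the ConfD stream socket: the small send size on ConfD's side does not limit how much a single recv() can return once data has accumulated.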

In other words: changing ConfD's buffer size would not really change anything from the performance point of view (I tested that). There is likely some overhead associated with sending each buffer, but given that the buffer is already quite large, that overhead does not play a significant role.

By the way: you write that "stream socket is slow" - you received over 200 kB in less than 0.2 seconds. That may not be lightning fast, but is it really slow?

hi mvf

thanks

In my test code, all that is done is storing the data in memory, and it took around 6 seconds for roughly 8 MB of data. Forwarding the same data over HTTP to another server took around 1 second, which is why I consider this "slow". In the system we are using, we have around 50 MB to 100 MB of data, so receiving it would take around 60 seconds, which is too long.

It is very likely that ConfD is not the fastest bulk data storage system around; it was not designed to be one. But consider that, in order to provide the JSON data, ConfD has to retrieve the configuration from its database while taking all NACM rules into account, then render it in the desired format and send it over the socket. Given all that, the speed compared to your HTTP server test sounds quite reasonable to me.

By the way, the rendering step may have some impact, and that is one thing you can influence: in my tests, XML and even XML_PRETTY were (a bit surprisingly) somewhat faster than JSON.
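Trying a different format only means changing the format flag passed to save_config. A sketch, assuming the same variables as in the original code and that the CONFIG_XML_PRETTY constant is available in your ConfD Python API version:

```python
# Same call as before, but requesting pretty-printed XML instead of JSON.
# maapi_sock, sub_thandle and sub_path are the variables from the
# original snippet; timing the two variants on your own data is the
# only way to know which renders faster in your setup.
id = maapi.save_config(maapi_sock, sub_thandle, maapi.CONFIG_XML_PRETTY, sub_path)
```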

If the data you are working with is all configuration, I don't see how to speed things up; I'm afraid you have hit the limit. If it really is bulk data, ConfD may not be the ideal place to store it, and you should consider storing it somewhere else.
