Netconf server performance

Hi everyone, I am in the process of testing the performance of netconf-server, and I am a little worried. First I run ConfD on the server and load the YANG model, then I simulate 100 NETCONF clients on the client machine. As a result, only a little over 1,000 connections are established and then the server goes down. I expected about 10,000 connections. What is the reason?
(My server: Xeon with 64 CPUs, 256 GB RAM; I'm using confd-base.)

There should be some indication in the confd.log of why ConfD went down. Based on what you’re doing and the numbers, I would guess that it’s “Out of file descriptors for accept() - process limit reached”, since the default per-process limit on open file descriptors on Linux is 1024. You can raise that number with ulimit, e.g. ulimit -n 8192, in the shell where you start ConfD.
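As a concrete sketch of the fix above: check the current soft limit, raise it, and then start ConfD from that same shell so the daemon inherits the higher limit. The confd invocation at the end is illustrative (your config path will differ).

```shell
# Show the current per-process soft limit on open file descriptors
# (1024 by default on most Linux distributions)
ulimit -n

# Raise the soft limit for this shell and its children
# (only possible up to the hard limit, shown by: ulimit -Hn)
ulimit -n 8192

# Verify the new limit took effect
ulimit -n

# Start ConfD from this same shell so it inherits the limit
# (illustrative; adjust the path to your confd.conf)
# confd -c confd.conf
```

Note that the limit applies per process and is inherited at fork/exec time, so raising it after ConfD is already running has no effect; restart ConfD from the adjusted shell. For a permanent change, the usual places are /etc/security/limits.conf or the systemd unit's LimitNOFILE setting.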

That said, I have to wonder what your use case is. The NETCONF protocol was designed for a management application - the client - to manage (network) devices - where the server runs. It’s not uncommon for a manager to manage hundreds or even thousands of devices, but the idea of 100 managers managing a single device is quite, um… - strange. And even so, why would they need 100 connections each to do it? One should be enough - the SSH transport provides for multiple channels, each corresponding to a NETCONF session, on a single TCP connection.
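One way to see the "multiple sessions over one TCP connection" point in practice is OpenSSH connection multiplexing: with ControlMaster, the first ssh invocation sets up the TCP connection, and later invocations open new channels on it rather than new connections. This is a hedged sketch; the host, port, and user are placeholders, and whether your NETCONF client stack uses multiplexing depends on its SSH implementation.

```shell
# First invocation: establishes the TCP connection and becomes the master.
# "-s netconf" requests the NETCONF SSH subsystem (RFC 6242, port 830
# by convention; ConfD's default NETCONF-over-SSH port is often 2022).
ssh -o ControlMaster=auto -o ControlPath=/tmp/cm-%r@%h:%p \
    -p 2022 admin@device -s netconf

# Subsequent invocations with the same ControlPath reuse the existing
# TCP connection, each one just opening a new SSH channel (= a new
# NETCONF session) on it.
ssh -o ControlPath=/tmp/cm-%r@%h:%p -p 2022 admin@device -s netconf
```

Each channel still consumes server-side resources (a session, file descriptors), but the per-connection TCP and SSH key-exchange overhead is paid only once.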

Thanks Per, it’s working. I plan to put the netconf-server in the management application so that the devices can synchronize configuration data with the management application. Is that okay? Is there any other way?

How would the devices sync the config with the management application? What does “sync” mean here?

Normally ConfD (NETCONF server) sits on the device that is managed over NETCONF by a manager (NETCONF client).