ConfD User Community

Providing keys in bulk

Hello,

Is there a method to provide the keys for a large list in bulk using the data provider API? For example, my data provider application implements find_next(). However, using this API the application can only provide a single key at a time: a list with 3000 entries will issue the find_next() callback 3000 times. Is there an API to provide the list of 3000 keys in a single callback (or broken up into blocks of n keys, similar to data_reply_next_object_arrays())?

Best Regards,
Matt

Hello,

Registering find_next_object() may help depending on the use case, i.e., get the list keys from a certain list entry to the end of the list. For other use cases, find_next() calls may be required. What does your YANG list and GET request look like?
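As a rough illustration in plain Python (no ConfD calls; the batch_keys helper and the block size of 200 are illustrative assumptions), a batched find_next_object() sequence needs far fewer callbacks than a one-key-at-a-time find_next() sequence:

```python
# Plain-Python sketch of the callback-count difference (no ConfD API used).
# batch_keys and the block size of 200 are illustrative assumptions.

def batch_keys(keys, batch_size):
    """Yield successive blocks of at most batch_size keys."""
    for i in range(0, len(keys), batch_size):
        yield keys[i:i + batch_size]

keys = [f"TEST{i:08}" for i in range(1, 3001)]  # the 3000-entry list

# find_next(): one callback per key, plus one reply signalling end-of-list
find_next_calls = len(keys) + 1

# find_next_object(): one callback per 200-key block; one extra empty
# end-of-list reply is needed when the last block is completely full
blocks = list(batch_keys(keys, 200))
find_next_object_calls = len(blocks) + (1 if len(keys) % 200 == 0 else 0)

print(find_next_calls)         # 3001
print(find_next_object_calls)  # 16
```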

Best regards

I just have a simple list. I’ve included an example showing the issue.

The following lists the steps required for this example.

  1. Run ‘make all’ to build the my_test_module example.

  2. Run ‘make start’ to start confd and the data provider Python application.

  3. Run ‘make query-subtree’ to issue a get-config request with a subtree filter requesting the list keys only.

  4. Run ‘make query-xpath’ to issue a get-config request with an XPath filter requesting the list keys only.

I noticed that confd issues different callbacks for requests with XPath filters (cb_find_next) vs. subtree filters (cb_find_next_object). The subtree filter performs far better than the XPath filter, even though both filter on the list keys only. When the request is issued with the subtree filter, my application can return keys in bulk: the callback sequence takes 4-5 seconds with a subtree filter using cb_find_next_object(), but over two minutes with an XPath filter using cb_find_next(). For more details, please see below.

Is there a way to improve the performance of the xpath filter by returning keys in bulk, or other callback sequence?
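Back-of-the-envelope, the difference looks like round-trip overhead rather than my iteration logic (the iteration alone is ~300 ms). A rough sketch, assuming one cb_find_next() invocation per key plus a final end-of-list reply, and cb_find_next_object() blocks of DB_READ_LIMIT = 200 keys (the last full block followed by one empty end-of-list reply):

```python
# Rough per-callback cost estimate from the measured wall-clock times.
# Callback counts are assumptions based on the observed callback sequences.
LIST_SIZE = 100_000
BATCH = 200  # DB_READ_LIMIT in the data provider below

find_next_calls = LIST_SIZE + 1                  # one per key + end-of-list
find_next_object_calls = LIST_SIZE // BATCH + 1  # one per block + end-of-list

xpath_secs = 2 * 60 + 21.2   # ~2m21s measured with the XPath filter
subtree_secs = 4.2           # ~4.2s measured with the subtree filter

print(f"cb_find_next:        {find_next_calls} calls, "
      f"{xpath_secs / find_next_calls * 1e3:.2f} ms/callback")
print(f"cb_find_next_object: {find_next_object_calls} calls, "
      f"{subtree_secs / find_next_object_calls * 1e3:.2f} ms/callback")
```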

The iteration logic alone takes roughly 300 ms to iterate through 100,000 keys.

$ time make perf
...
TEST00099998
TEST00099999
TEST00100000
my_list 100000 keys, 307 ms

real	0m0.441s
user	0m0.231s
sys	0m0.146s

The NETCONF query using a subtree filter to filter on keys via cb_find_next_object() takes just over 4 seconds.

$ time make query-subtree
/opt/confd/bin/netconf-console --host localhost --port 830 --get-config --subtree-filter subtree-filter.xml
...
      <my-entry>
        <name>TEST00100000</name>
      </my-entry>
    </my-list>
  </data>
</rpc-reply>

real	0m4.200s
user	0m0.341s
sys	0m0.217s

The NETCONF query using an XPath filter on the keys via cb_find_next() takes about 2.5 minutes.

$ time make query-xpath
/opt/confd/bin/netconf-console --host localhost --port 830 --get-config -x /my-list/my-entry/name
...

      <my-entry>
        <name>TEST00100000</name>
      </my-entry>
    </my-list>
  </data>
</rpc-reply>

real	2m21.215s
user	0m0.373s
sys	0m0.231s

my-test-module.yang

module my-test-module {
    yang-version 1.1;
    namespace "urn:my:yang:my-test-module";
    prefix "my-test-module";
    organization "My Test YANG Module.";
    contact
        "My Test YANG Module.
         ";
    description
        "My Test YANG Module.";
    revision 2022-08-26 {
        reference "My Test YANG Module.";
    }

    container my-list {
        description "An example of a list.";
        list my-entry {
            key name;
            description "An example of a list entry.";
            leaf name {
                type string;
                description "Name identifying a list entry.";
            }
            leaf str-data {
                type string;
                description "String data for this list entry.";
            }
            leaf int-data {
                type int32;
                description "Integer data for this list entry.";
            }
        }
    }
}

my-test-module-ann.yang

module my-test-module-ann {
    namespace "urn:dummy";
    prefix "dummy";

    import tailf-common {
        prefix tailf;
    }

    tailf:annotate-module "my-test-module" {
        tailf:annotate-statement container[name='my-list'] {
            tailf:callpoint "my_test_list";
            tailf:annotate-statement list[name='my-entry'] {
            }
        }
    }
}

my_test_module.py

from __future__ import print_function
import argparse
import select
import socket
import sys
import textwrap
import traceback
import time

import _confd
import _confd.dp as dp
import _confd.maapi as maapi

from my_test_module_ns import ns as my_test_module_ns

def ERROR(*args, **kwargs):
    print("ERROR:", end = '')
    print(*args, **kwargs)

DB_READ_LIMIT = 200

# Test Data
my_list = {}
my_list_keys = {}
MY_LIST_SIZE = 100000

# {
#     "TEST00000001": {
#         "str-data": "some data (1)",
#         "int-data": 1,
#     },
#     "TEST00000002": {
#         "str-data": "some data (2)",
#         "int-data": 2,
#     },
#     "TEST00000003": {
#         "str-data": "some data (3)",
#         "int-data": 3,
#     }
# }

def init_my_list(size):
    global my_list
    global my_list_keys

    my_list = {}
    my_list_keys = {}

    # Populate my_list
    for idx in range(1, size+1):
        key = f"TEST{idx:08}"
        my_list[key] = {
            "str-data": f"some data ({idx})",
            "int-data": idx,
        }

    # Build a next-key map: each key points to the key that follows it
    prev_key = None
    for next_key in my_list.keys():
        if prev_key:
            my_list_keys[prev_key] = next_key
        prev_key = next_key
    my_list_keys[prev_key] = None

def get_first_key():
    global my_list_keys
    if my_list_keys:
        next_key = next(iter(my_list_keys))
    else:
        next_key = None
    return next_key

def get_next_key(key):
    global my_list_keys
    if my_list_keys:
        next_key = my_list_keys[key]
    else:
        next_key = None
    return next_key

def make_tag_value(tag, init, vtype):
    """
    Wrapper to create a _confd.TagValue
    """
    return _confd.TagValue(
        _confd.XmlTag(my_test_module_ns.hash, tag),
        _confd.Value(init, vtype))

def dp_op_to_str(op):
    op_to_str = {
        1: "C_SET_ELEM",
        2: "C_CREATE",
        3: "C_REMOVE",
        4: "C_SET_CASE",
        5: "C_SET_ATTR",
        6: "C_MOVE_AFTER",
    }
    try:
        op_str = op_to_str[op]
    except KeyError:
        op_str = 'UNKNOWN'

    return op_str

V = _confd.Value

# Call statistics for each of the registered data callbacks, to keep tabs on
# how many times the different cb functions are invoked
K_GET_OBJ = 0
K_FIND_NEXT = 1
K_FIND_NEXT_OBJ = 2
K_NUM_INSTANCES = 3
K_SET_ELEM = 4
K_CREATE = 5
K_REMOVE = 6
calls_keys = [K_GET_OBJ, K_FIND_NEXT, K_FIND_NEXT_OBJ,
              K_NUM_INSTANCES, K_SET_ELEM, K_CREATE, K_REMOVE]
dp_calls = {k: 0 for k in calls_keys}

class TransCbs(object):
    # Transaction callbacks.
    #
    # The installed init() function gets called every time ConfD
    # wants to establish a new transaction. Each NETCONF
    # command will be a transaction.
    #
    # We can choose to create threads here, or allocate this
    # transaction to an already existing thread. We must tell ConfD
    # which file descriptor should be used for all future
    # communication in this transaction; this is done through
    # dp.trans_set_fd().

    def __init__(self, workersocket):
        self._workersocket = workersocket

    def cb_init(self, tctx):
        dp.trans_set_fd(tctx, self._workersocket)
        return _confd.CONFD_OK

    # This callback gets invoked at the end of the transaction,
    # when ConfD has accumulated all write operations.
    # We are guaranteed that:
    # a) no more read ops will occur
    # b) no other transactions will run between here and cb_finish()
    #    for this transaction, i.e., ConfD serializes all transactions.
    # Since we need to be prepared for abort(), we may not write
    # our data to the actual database; we can choose to either
    # copy the entire database here and write to the copy in the
    # following write operations, _or_ let the write operations
    # accumulate create(), set() and delete() operations instead of
    # actually writing.

    # If our db supports transactions (which it doesn't in this
    # silly example), this is the place to do START TRANSACTION.

    def cb_write_start(self, tctx):
        return _confd.CONFD_OK

    def cb_prepare(self, tctx):
        return _confd.CONFD_OK

    def cb_commit(self, tctx):
        return _confd.CONFD_OK

    def cb_abort(self, tctx):
        return _confd.CONFD_OK

    def cb_finish(self, tctx):
        return _confd.CONFD_OK

class MyTestModuleDataCbs(object):
    """ Data provider callbacks for the my-list list in the YANG model. """

    def __init__(self):
        pass

    def cb_get_object(self, tctx, kp):
        dp_calls[K_GET_OBJ] += 1
        print()
        print(f"    cb_get_object '{kp}' {type(kp)}")

        list_filter = dp.data_get_list_filter(tctx)
        print(f"    filter '{list_filter}' {type(list_filter)}")

        key = kp[0][0]

        if key in my_list:
            entry = my_list[key]
            vals = [
                make_tag_value(my_test_module_ns.my_test_module_name, key, _confd.C_STR),
                make_tag_value(my_test_module_ns.my_test_module_str_data, entry['str-data'], _confd.C_STR),
                make_tag_value(my_test_module_ns.my_test_module_int_data, entry['int-data'], _confd.C_INT32),
            ]
            dp.data_reply_tag_value_array(tctx, vals)
        else:
            dp.data_reply_not_found(tctx)
        return _confd.CONFD_OK


    def cb_find_next(self, tctx, kp, type_, keys):
        dp_calls[K_FIND_NEXT] += 1
        print()
        print(f"    cb_find_next '{kp}' {type(kp)}, {type_}, {keys}")

        list_filter = dp.data_get_list_filter(tctx)
        print(f"    filter '{list_filter}' {type(list_filter)}")

        try:
            # Default to "no next entry" so next_key is always bound,
            # even if keys contains a falsy key value
            next_key = None
            if keys:
                key = keys[0].as_pyval()
                if key:
                    next_key = get_next_key(key)
            else:
                next_key = get_first_key()

            print(f"  NEXT {next_key}")
            if next_key:
                next_key_list = [V(next_key, _confd.C_BUF)]  # the list key is the entry name
            else:
                next_key_list = None
            dp.data_reply_next_key(tctx, next_key_list, -1)

        except Exception as e:
            ERROR(e)
            ERROR(traceback.format_exc())
            raise

        return _confd.CONFD_OK

    def cb_find_next_object(self, tctx, kp, type_, keys):
        dp_calls[K_FIND_NEXT_OBJ] += 1
        print()
        print(f"    cb_find_next_object '{kp}' {type(kp)}, {type_}, {keys}")

        list_filter = dp.data_get_list_filter(tctx)
        print(f"    filter '{list_filter}' {type(list_filter)}")

        try:
            # Default to "no next entry" so next_key is always bound,
            # even if keys contains a falsy key value
            next_key = None
            if keys:
                key = keys[0].as_pyval()
                if key:
                    next_key = get_next_key(key)
            else:
                next_key = get_first_key()

            objs = []
            idx = 0
            while next_key and idx < DB_READ_LIMIT:
                print(f"NEXT {next_key}")
                entry = my_list[next_key]
                objs.append(([
                    make_tag_value(my_test_module_ns.my_test_module_name, next_key, _confd.C_STR),
                    # make_tag_value(my_test_module_ns.my_test_module_str_data, entry['str-data'], _confd.C_STR),
                    # make_tag_value(my_test_module_ns.my_test_module_int_data, entry['int-data'], _confd.C_INT32),
                ], idx))
                next_key = get_next_key(next_key)
                idx += 1

            if objs:
                if idx < DB_READ_LIMIT:
                    objs.append((None, -1))
                dp.data_reply_next_object_tag_value_arrays(tctx, objs, 0)
            else:
                dp.data_reply_next_object_tag_value_arrays(tctx, None, -1)

        except Exception as e:
            ERROR(e)
            ERROR(traceback.format_exc())
            raise

        return _confd.CONFD_OK

    def cb_num_instances(self, tctx, kp):
        dp_calls[K_NUM_INSTANCES] += 1
        print()
        print(f"    cb_num_instances '{kp}'")

        list_filter = dp.data_get_list_filter(tctx)
        print(f"    filter '{list_filter}' {type(list_filter)}")

        count = len(my_list)
        v_count = V(count, _confd.C_INT32)
        dp.data_reply_value(tctx, v_count)
        return _confd.CONFD_OK

    def cb_set_elem(self, tctx, kp, newval):
        dp_calls[K_SET_ELEM] += 1
        return _confd.ACCUMULATE

    def cb_create(self, tctx, kp):
        dp_calls[K_CREATE] += 1
        return _confd.ACCUMULATE

    def cb_remove(self, tctx, kp):
        dp_calls[K_REMOVE] += 1
        return _confd.ACCUMULATE



def run(debuglevel):

    # In C we use confd_init(), which sets the debug level, but for Python
    # the call to confd_init() is done when we do 'import _confd'.
    # Therefore we need to set the debug level here:
    _confd.set_debug(debuglevel, sys.stderr)

    # init library
    daemon_ctx = dp.init_daemon('my_daemon')

    confd_addr = '127.0.0.1'
    confd_port = _confd.PORT
    managed_path = '/'

    maapisock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    ctlsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    wrksock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    try:
        dp.connect(daemon_ctx, ctlsock, dp.CONTROL_SOCKET, confd_addr,
                   confd_port, managed_path)
        dp.connect(daemon_ctx, wrksock, dp.WORKER_SOCKET, confd_addr,
                   confd_port, managed_path)

        maapi.connect(maapisock, confd_addr, confd_port, managed_path)
        maapi.load_schemas(maapisock)


        transaction_cb = TransCbs(wrksock)
        dp.register_trans_cb(daemon_ctx, transaction_cb)

        # database_cb = DatabaseCbs(wrksock, db)
        # dp.register_db_cb(daemon_ctx, database_cb)

        data_cb = MyTestModuleDataCbs()
        dp.register_data_cb(
            daemon_ctx,
            my_test_module_ns.callpoint_my_test_list,
            data_cb,
            flags=dp.DATA_WANT_FILTER)

        dp.register_done(daemon_ctx)

        try:
            _r = [ctlsock, wrksock]
            _w = []
            _e = []

            while True:
                print("Waiting for requests...")
                (r, w, e) = select.select(_r, _w, _e, 1)
                for rs in r:
                    if rs.fileno() == ctlsock.fileno():
                        try:
                            dp.fd_ready(daemon_ctx, ctlsock)
                        except _confd.error.Error as e:
                            if e.confd_errno != _confd.ERR_EXTERNAL:
                                raise e
                    elif rs.fileno() == wrksock.fileno():
                        try:
                            dp.fd_ready(daemon_ctx, wrksock)
                        except _confd.error.Error as e:
                            if e.confd_errno != _confd.ERR_EXTERNAL:
                                raise e

        except KeyboardInterrupt:
            print('\nCtrl-C pressed\n')

    finally:
        ctlsock.close()
        wrksock.close()
        dp.release_daemon(daemon_ctx)


if __name__ == '__main__':

    debug_levels = {
        'q': _confd.SILENT,
        'd': _confd.DEBUG,
        't': _confd.TRACE,
        'p': _confd.PROTO_TRACE,
    }

    parser = argparse.ArgumentParser(
        add_help=False,
        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument("--help", action="help", default=argparse.SUPPRESS,
                        help="Show this help message and exit.")
    parser.add_argument("-dl", "--debuglevel", dest="debuglevel", action="store",
                        default='t', required=False, choices=debug_levels.keys(),
                        help=textwrap.dedent(
                            '''\
                            set the debug level:
                                q = quiet (i.e. no) debug
                                d = debug level debug
                                t = trace level debug
                                p = proto level debug
                            '''))
    parser.add_argument("-p", "--perf", action="store_true", default=False, help="Run a performance test on the key iteration logic.")
    args = parser.parse_args()

    init_my_list(MY_LIST_SIZE)

    confd_debug_level = debug_levels.get(args.debuglevel, _confd.TRACE)


    if args.perf:
        doc_start = time.time()
        count = 0
        next_key = get_first_key()
        while next_key:
            print(next_key)
            next_key = get_next_key(next_key)
            count += 1
        doc_end = time.time()
        print(f"my_list {count} keys, {round((doc_end - doc_start) * 1000)} ms")
    else:
        run(confd_debug_level)

subtree-filter.xml

<my-test-module:my-list xmlns:my-test-module="urn:my:yang:my-test-module">
    <my-test-module:my-entry>
        <my-test-module:name/>
    </my-test-module:my-entry>
</my-test-module:my-list>

Makefile

######################################################################
# Introduction example 1-2-3-start-query-model
# (C) 2006 Tail-f Systems
#
# See the README files for more information
######################################################################

usage:
	@echo "See README files for more instructions"
	@echo "make all     Build all example files"
	@echo "make clean   Remove all built and intermediary files"
	@echo "make start   Start CONFD daemon and example agent"
	@echo "make stop    Stop any CONFD daemon and example agent"
	@echo "make query-subtree   Run query with subtree filter against CONFD"
	@echo "make query-xpath     Run query with xpath filter against CONFD"
	@echo "make perf    Run performance test on list iteration"

######################################################################
# Where is ConfD installed? Make sure CONFD_DIR points it out
CONFD_DIR ?= ../../..

# Include standard ConfD build definitions and rules
include $(CONFD_DIR)/src/confd/build/include.mk

# In case CONFD_DIR is not set (correctly), this rule will trigger
$(CONFD_DIR)/src/confd/build/include.mk:
	@echo 'Where is ConfD installed? Set $$CONFD_DIR to point it out!'
	@echo ''

######################################################################
# Example specific definitions and rules

CONFD_FLAGS = --addloadpath $(CONFD_DIR)/etc/confd
START_FLAGS ?=

all: c-all
	@echo "Build complete"

common-all: $(CDB_DIR) ssh-keydir

c-all: common-all my-test-module.fxs my_test_module_ns.py
	@echo "C build complete"

my-test-module.fxs: my-test-module.yang my-test-module-ann.yang
	$(CONFDC) --fail-on-warnings -a my-test-module-ann.yang -c -o my-test-module.fxs my-test-module.yang

my_test_module_ns.py: my-test-module.fxs
	$(CONFDC) --emit-python my_test_module_ns.py my-test-module.fxs


######################################################################
clean:	iclean
	-rm -rf *log *trace cli-history 2> /dev/null || true
	-rm -rf my_test_module_ns.py *.pyc __init__.py __pycache__ 2> /dev/null || true

######################################################################
#start:  stop start_confd start_subscriber
start:  stop
	$(CONFD)  -c confd.conf $(CONFD_FLAGS)
	### * In one terminal window, run: tail -f ./confd.log
	### * In another terminal window, run queries
	###   (try 'make query-subtree' for an example)
	### * In this window, the my_test_module data provider now starts:
	$(PYTHON) my_test_module.py --debuglevel t

######################################################################
stop:
	### Killing any confd daemon or my-test-module confd agents
	$(CONFD) --stop    || true
	$(KILLALL) `pgrep -f "$(PYTHON) my_test_module.py"` || true

######################################################################
query-subtree:
	$(CONFD_DIR)/bin/netconf-console --host localhost --port 830 --get-config --subtree-filter subtree-filter.xml

######################################################################
query-xpath:
	$(CONFD_DIR)/bin/netconf-console --host localhost --port 830 --get-config -x /my-list/my-entry/name

######################################################################
perf:
	$(PYTHON) my_test_module.py --perf

######################################################################

confd.conf

<!-- -*- nxml -*- -->
<!-- This configuration is good for the examples, but is in many ways
     atypical for a production system. It also does not contain all
     possible configuration options.

     Better starting points for a production confd.conf configuration
     file would be confd.conf.example. For even more information, see
     the confd.conf man page.

     E.g. references to current directory are not good practice in a
     production system, but makes it easier to get started with
     this example. There are many references to the current directory
     in this example configuration.
-->

<confdConfig xmlns="http://tail-f.com/ns/confd_cfg/1.0">
  <!-- The loadPath is searched for .fxs files, javascript files, etc.
       NOTE: if you change the loadPath, the daemon must be restarted,
       or the "In-service Data Model Upgrade" procedure described in
       the User Guide can be used - 'confd - -reload' is not enough.
  -->
  <loadPath>
    <dir>.</dir>
  </loadPath>

  <stateDir>.</stateDir>

  <enableAttributes>false</enableAttributes>

  <cdb>
    <enabled>true</enabled>
    <dbDir>./confd-cdb</dbDir>
    <operational>
      <enabled>true</enabled>
    </operational>
  </cdb>

  <rollback>
    <enabled>true</enabled>
    <directory>./confd-cdb</directory>
  </rollback>

  <!-- These keys are used to encrypt values adhering to the types
       tailf:des3-cbc-encrypted-string and tailf:aes-cfb-128-encrypted-string
       as defined in the tailf-common YANG module. These types are
       described in confd_types(3).
  -->
  <encryptedStrings>
    <DES3CBC>
      <key1>0123456789abcdef</key1>
      <key2>0123456789abcdef</key2>
      <key3>0123456789abcdef</key3>
      <initVector>0123456789abcdef</initVector>
    </DES3CBC>

    <AESCFB128>
      <key>0123456789abcdef0123456789abcdef</key>
      <initVector>0123456789abcdef0123456789abcdef</initVector>
    </AESCFB128>
  </encryptedStrings>

  <logs>
    <!-- Shared settings for how to log to syslog.
         Each log can be configured to log to file and/or syslog.  If a
         log is configured to log to syslog, the settings below are used.
    -->
    <syslogConfig>
      <!-- facility can be 'daemon', 'local0' ... 'local7' or an integer -->
      <facility>daemon</facility>
      <!-- if udp is not enabled, messages will be sent to local syslog -->
      <udp>
        <enabled>false</enabled>
        <host>syslogsrv.example.com</host>
        <port>514</port>
      </udp>
    </syslogConfig>

    <!-- 'confdlog' is a normal daemon log.  Check this log for
         startup problems of confd itself.
         By default, it logs directly to a local file, but it can be
         configured to send to a local or remote syslog as well.
    -->
    <confdLog>
      <enabled>true</enabled>
      <file>
        <enabled>true</enabled>
        <name>./confd.log</name>
      </file>
      <syslog>
        <enabled>true</enabled>
      </syslog>
    </confdLog>

    <!-- The developer logs are supposed to be used as debug logs
         for troubleshooting user-written javascript and c code.  Enable
         and check these logs for problems with validation code etc.
    -->
    <developerLog>
      <enabled>true</enabled>
      <file>
        <enabled>true</enabled>
        <name>./devel.log</name>
      </file>
      <syslog>
        <enabled>false</enabled>
      </syslog>
    </developerLog>

    <auditLog>
      <enabled>true</enabled>
      <file>
        <enabled>true</enabled>
        <name>./audit.log</name>
      </file>
      <syslog>
        <enabled>true</enabled>
      </syslog>
    </auditLog>

    <errorLog>
      <enabled>true</enabled>
      <filename>./confderr.log</filename>
    </errorLog>

    <!-- The netconf log can be used to troubleshoot NETCONF operations,
         such as checking why e.g. a filter operation didn't return the
         data requested.
    -->
    <netconfLog>
      <enabled>true</enabled>
      <file>
        <enabled>true</enabled>
        <name>./netconf.log</name>
      </file>
      <syslog>
        <enabled>false</enabled>
      </syslog>
    </netconfLog>

    <webuiBrowserLog>
      <enabled>true</enabled>
      <filename>./browser.log</filename>
    </webuiBrowserLog>

    <webuiAccessLog>
      <enabled>true</enabled>
      <dir>./</dir>
    </webuiAccessLog>

    <netconfTraceLog>
      <enabled>false</enabled>
      <filename>./netconf.trace</filename>
      <format>pretty</format>
    </netconfTraceLog>

  </logs>

  <!-- Defines which datastores confd will handle. -->
  <datastores>
    <!-- 'startup' means that the system keeps separate running and
         startup configuration databases.  When the system reboots for
         whatever reason, the running config database is lost, and the
         startup is read.
         Enable this only if your system uses a separate startup and
         running database.
    -->
    <startup>
      <enabled>false</enabled>
    </startup>

    <!-- The 'candidate' is a shared, named alternative configuration
         database which can be modified without impacting the running
         configuration.  Changes in the candidate can be committed to running,
         or discarded.
         Enable this if you want your users to use this feature from
         NETCONF, CLI or WebGUI, or other agents.
    -->
    <candidate>
      <enabled>false</enabled>
      <!-- By default, confd implements the candidate configuration
           without impacting the application.  But if your system
           already implements the candidate itself, set 'implementation' to
           'external'.
      -->
      <!--implementation>external</implementation-->
      <implementation>confd</implementation>
      <storage>auto</storage>
      <filename>./confd_candidate.db</filename>
    </candidate>

    <!-- By default, the running configuration is writable.  This means
         that the application must be prepared to handle changes to
         the configuration dynamically.  If this is not the case, set
         'access' to 'read-only'.  If running is read-only, 'startup'
         must be enabled, and 'candidate' must be disabled.  This means that
         the application reads the configuration at startup, and then
         the box must reboot in order for the application to re-read its
         configuration.

         NOTE: this is not the same as the NETCONF capability
         :writable-running, which merely controls which NETCONF
         operations are allowed to write to the running configuration.
    -->
    <running>
      <access>read-write</access>
    </running>
  </datastores>

  <aaa>
    <sshServerKeyDir>./ssh-keydir</sshServerKeyDir>
  </aaa>

  <netconf>
    <enabled>true</enabled>
    <transport>
      <ssh>
    <enabled>true</enabled>
    <ip>0.0.0.0</ip>
    <port>830</port>
      </ssh>

      <!-- NETCONF over TCP is not standardized, but it can be useful
       during development in order to use e.g. netcat for scripting.
      -->
      <tcp>
    <enabled>false</enabled>
    <ip>127.0.0.1</ip>
    <port>2023</port>
      </tcp>
    </transport>

    <capabilities>
      <confirmed-commit>
        <enabled>false</enabled>
      </confirmed-commit>

      <rollback-on-error>
        <enabled>true</enabled>
      </rollback-on-error>

      <actions>
        <enabled>true</enabled>
      </actions>

    </capabilities>
  </netconf>
  <webui>
    <enabled>false</enabled>

    <transport>
      <tcp>
        <enabled>true</enabled>
        <ip>127.0.0.1</ip>
        <port>8008</port>
      </tcp>

      <ssl>
        <enabled>true</enabled>
        <ip>127.0.0.1</ip>
        <port>8888</port>
      </ssl>
    </transport>

    <cgi>
      <enabled>true</enabled>
      <php>
        <enabled>true</enabled>
      </php>
    </cgi>
  </webui>
</confdConfig>

XPath filters are handled a bit differently than subtree filters. In general, XPath can express much more complex queries than subtree filters can. So ConfD currently fetches the keys using find_next() for XPath filters, while for subtree filters it uses find_next_object(), which is more optimal for your use case. Perhaps there is room for XPath-to-data-provider optimizations. Could you file a ticket with Tail-f ConfD support?