CLI prompt with clobbered username instead of actual username

While testing on ConfD 8.0 with the PAM libraries for the TACACS+ protocol and lib_nss, we see that the username and password checks pass, but the username is clobbered in confd_cli when we log in over SSH.

In ConfD 7.3.2 it is printed properly:

ems1 connected from 10.x.x.x using ssh on fed
ems1@fed 04:58:26#
ems1@fed 04:58:27#
ems1@fed 04:58:27# show running-config

In ConfD 8.0 I see the issue below:

connected from 10.x.x.x using ssh on fed
4d85c773b8d2e131564b4abf
@fed 07:42:52#

Below are the logs:

 6843 2023-06-23T07:46:59.971879+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[768]: src/nss_tacplus.c : begin lookup: user=`bobOPER', server=`10.0.210.134:49'
 6844 2023-06-23T07:46:59.986139+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[768]: Args cnt 1
 6845 2023-06-23T07:46:59.986236+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[768]: Adding buf/valuepair (group,OPER)
 6846 2023-06-23T07:46:59.986254+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[768]: src/nss_tacplus.c: found match: user=`bobOPER', server=`10.0.210.134:49', status=1, attributes? yes
 6847 2023-06-23T07:46:59.986266+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[768]: src/nss_tacplus.c: mapped group: oper, mapped shell: /usr/cna/cna-cfgmgr/cfgmgr/bin/cli
 6848 2023-06-23T07:46:59.986708+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[768]: debug3: mm_request_send: entering, type 29
 6849 2023-06-23T07:46:59.986778+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[768]: debug3: mm_answer_pty: tty /dev/pts/0 ptyfd 3
 6850 2023-06-23T07:46:59.986818+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug1: session_pty_req: session 0 alloc /dev/pts/0
 6851 2023-06-23T07:46:59.986885+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug3: send packet: type 99
 6852 2023-06-23T07:46:59.986937+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug3: receive packet: type 98
 6853 2023-06-23T07:46:59.986995+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug1: server_input_channel_req: channel 0 request env reply 0
 6854 2023-06-23T07:46:59.987049+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug1: session_by_channel: session 0 channel 0
 6855 2023-06-23T07:46:59.987119+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug1: session_input_channel_req: session 0 req env
 6856 2023-06-23T07:46:59.987153+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug2: Setting env 0: LANG=C.UTF-8
 6857 2023-06-23T07:46:59.987185+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug3: receive packet: type 98
 6858 2023-06-23T07:46:59.987251+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug1: server_input_channel_req: channel 0 request shell reply 1
 6859 2023-06-23T07:46:59.987366+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug1: session_by_channel: session 0 channel 0
 6860 2023-06-23T07:46:59.987397+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug1: session_input_channel_req: session 0 req shell
 6861 2023-06-23T07:46:59.987436+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: Starting session:shell on pts/0 for bobOPER from 10.91.179.232 port 27034 id 0
 6862 2023-06-23T07:46:59.987960+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug2: fd 4 setting TCP_NODELAY
 6863 2023-06-23T07:46:59.988004+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug3: set_sock_tos: set socket 4 IP_TOS 0x48
 6864 2023-06-23T07:46:59.988053+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug2: channel 0: rfd 13 isatty
 6865 2023-06-23T07:46:59.988085+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug2: fd 13 setting O_NONBLOCK
 6866 2023-06-23T07:46:59.988142+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug3: fd 3 is O_NONBLOCK
 6867 2023-06-23T07:46:59.988156+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[775]: debug1: Setting controlling tty using TIOCSCTTY.
 6868 2023-06-23T07:46:59.988287+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug3: mm_forward_audit_messages: entering
 6869 2023-06-23T07:46:59.988495+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[775]: debug3: mm_request_send: entering, type 124
 6870 2023-06-23T07:46:59.988614+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[768]: debug3: mm_request_receive: entering
 6871 2023-06-23T07:46:59.988724+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[768]: debug3: monitor_read: checking request 124
 6872 2023-06-23T07:46:59.988765+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[774]: debug3: send packet: type 99
 6873 2023-06-23T07:46:59.988930+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[775]: debug3: Copy environment: GROUP=OPER
 6874 2023-06-23T07:46:59.988987+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[775]: debug3: Copy environment: MOTD_SHOWN=pam
 6875 2023-06-23T07:47:00.054669+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 confd_cli: src/nss_tacplus.c: started, uid=(1628111069:1628111069), gid=(1100:4001)
 6876 2023-06-23T07:47:00.054716+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 confd_cli: src/nss_tacplus.c: begin lookup: user=`95ba917a148448b92023c3c2#012', server=`10.0.210.134:49'
 6877 2023-06-23T07:47:00.057666+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 confd_cli: Args cnt 0
 6878 2023-06-23T07:47:00.057679+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 confd_cli: src/nss_tacplus.c: found match: user=`95ba917a148448b92023c3c2#012', server=`10.0.210.134:49', status=1, attributes? no
 6879 2023-06-23T07:47:00.057684+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 confd_cli: src/nss_tacplus.c: missing required attribute 'GID'
 6880 2023-06-23T07:47:00.068824+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 confd[313]: audit user: 95ba917a148448b92023c3c2#012/25 assigned to groups: oper
 6881 2023-06-23T07:47:00.068902+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 confdAudit: [239] user=95ba917a148448b92023c3c2#012/25 event=105/GROUP_ASSIGN : assigned to groups: oper
 6882 2023-06-23T07:47:00.069040+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 confdAudit: [239] user=95ba917a148448b92023c3c2#012/25 event=152/SESSION_CREATE : created new session via cli from 10.91.179.232:27034 with       ssh : dest externalIp= 10.91.179.141

This does not seem ConfD or confd_cli related to me. You appear to get the username from OpenSSH sshd and authenticate using the src/nss_tacplus.c module. ConfD is not involved yet.
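
For context, here is a minimal sketch (my own illustration, assuming the tacplus module is listed for the passwd database in /etc/nsswitch.conf) of the kind of NSS lookup that sshd and confd_cli trigger, and which shows up as the `begin lookup: user=...' lines in your log:

/* Resolves a username through NSS (getpwnam_r); with nss_tacplus configured
 * for the passwd database, this is the lookup path that ends up in
 * src/nss_tacplus.c. */
#include <pwd.h>
#include <stdio.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    const char *name = (argc > 1) ? argv[1] : "bobOPER";  /* example user from the logs */
    struct passwd pwd, *result = NULL;
    char buf[4096];

    if (getpwnam_r(name, &pwd, buf, sizeof(buf), &result) != 0 || result == NULL) {
        fprintf(stderr, "no passwd entry for `%s'\n", name);
        return 1;
    }
    printf("user=%s uid=%u gid=%u shell=%s\n",
           result->pw_name, (unsigned)result->pw_uid,
           (unsigned)result->pw_gid, result->pw_shell);
    return 0;
}

Compiling and running this as the TACACS+ user shows what the NSS stack returns for a given name, independently of sshd or ConfD.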

Hi @cohult

Thanks for the response.

  1. The main reason we asked is that this worked fine in ConfD 7.3.2, and the
     issue appeared only after we upgraded to ConfD 8.0 (we have not changed any
     other packages such as nss_tacplus, etc.).
  2. If we look at the logs shared above, in the sshd context the username is
     shown correctly, like below:
     6846 2023-06-23T07:46:59.986254+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[768]: src/nss_tacplus.c: found match: user=`bobOPER', server=`10.0.210.134:49', status=1, attributes? yes

Only in the confd_cli process context does the clobbered username show up:

6874 2023-06-23T07:46:59.988987+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 sshd[775]: debug3: Copy environment: MOTD_SHOWN=pam

6875 2023-06-23T07:47:00.054669+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 confd_cli: src/nss_tacplus.c: started, uid=(1628111069:1628111069), gid=(1100:4001)

6876 2023-06-23T07:47:00.054716+00:00 pod-cfgmgr-fed-smf-aks-worker-52411039-vmss000003 confd_cli: src/nss_tacplus.c: begin lookup: user=`95ba917a148448b92023c3c2#012', server=`10.0.210.134:49'

So we suspect some issue in the confd_cli user mapping that we are not able to pin down.

We are not sure what format the username user=`95ba917a148448b92023c3c2#012' is in here. Is some additional character, tab, or space being added at the end that causes this?

Thanks,
Raja

Those are your application logs, so it is hard to tell why something is incorrectly printed in a log you created. What does the ConfD audit log say?
The confd_cli program takes the user and group you provide and gives it to ConfD for authorization. You have authenticated the user before starting a CLI session using the confd_cli program. For details, see the confd_cli(1) man page and the source code under $CONFD_DIR/src/confd/cli/clistart.c. What parameters do you start the confd_cli program with?
What is the SSH_CONNECTION variable set to?
What is the UID and username in the /etc/passwd file?

Hi @cohult

Answers to your questions:

  1. What parameters do you start the confd_cli program with?
     /confd/bin/confd_cli -C -G 1100 -D 1100 -H . No username has been passed from day one.
  2. The SSH_CONNECTION environment variable is not set.
  3. The UID and username are NOT set in the /etc/passwd file, as this is a remote (TACACS+) user.

Having said that, I found something interesting in /var/log/secure. As Raja pointed out, the only difference is the upgrade from 7.3.2 to 8.0, which is causing this behaviour.

From /var/log/secure in 7.3.2, you can see that the user 'ems1' is printed in both the sshd session and the confd session:
Jun 22 09:52:30 sshd[579]: pam_unix(sshd:session): session opened for user ems1(uid=5269478) by (uid=0)
Jun 22 09:52:30 confd[321]: audit user: ems1/27 assigned to groups: oper
Jun 22 09:52:30 confd[321]: audit user: ems1/27 created new session via cli from 10.91.175.230:36996 with ssh
Jun 22 09:52:33 confd[321]: audit user: ems1/27 CLI 'operational top exit'
Jun 22 09:52:33 confd[321]: audit user: ems1/27 terminated session (reason: normal)
Jun 22 09:52:33 sshd[579]: pam_unix(sshd:session): session closed for user ems1

But the 8.0 logs show that sshd identifies the user correctly, while the confd module does NOT:
Jun 26 10:17:24 sshd[3324]: pam_unix(sshd:session): session opened for user ems1(uid=5269478) by (uid=0)
Jun 26 10:17:24 confd[316]: audit user: 3549493ec1c742cf1549de53#012/43 assigned to groups: oper
Jun 26 10:17:24 confd[316]: audit user: 3549493ec1c742cf1549de53#012/43 created new session via cli from 10.91.175.230:1119 with ssh
Jun 26 10:17:26 confd[316]: audit user: 3549493ec1c742cf1549de53#012/43 CLI 'operational top id'
Jun 26 10:17:26 confd[316]: audit user: 3549493ec1c742cf1549de53#012/43 CLI done
Jun 26 10:17:42 sshd[3347]: PAM pam_parse: expecting return value; […requried]
Jun 26 10:17:42 sshd[3347]: PAM pam_parse: expecting return value; […requried]
Jun 26 10:17:42 sshd[3347]: PAM pam_parse: expecting return value; […requried]
Jun 26 10:17:42 sshd[3347]: PAM pam_parse: expecting return value; […requried]
Jun 26 10:19:22 confd[316]: audit user: 3549493ec1c742cf1549de53#012/43 CLI 'operational top exit'
Jun 26 10:19:22 confd[316]: audit user: 3549493ec1c742cf1549de53#012/43 terminated session (reason: normal)
Jun 26 10:19:22 sshd[3324]: pam_unix(sshd:session): session closed for user ems1

Hi,
Since you leave it up to the confd_cli program to determine the user, see $CONFD_DIR/src/confd/cli/clistart.c (search for "determine user name"): the getlogin function is used to determine the username. On Linux, getlogin uses utmp to determine the user, just as the who command does. See utmp - Wikipedia.
One difference between 7.3.2 and 8.0 is that the confd_cli program uses a buffer to store the username. If, for example, that memory is overwritten by some other program, you can get the kind of garbage username you observe.
You have the confd_cli program source code, so you should be able to debug the issue.
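
For illustration, here is a minimal sketch (my own, not the actual clistart.c code) of the "determine user name" pattern referred to above; if getlogin_r() finds no utmp entry and the caller uses the buffer without checking the return value, its contents are whatever happened to be in memory:

/* Sketch of a username-determination pattern: getlogin_r() resolves the user
 * from the utmp entry of the controlling terminal; if that fails, fall back
 * to the passwd entry for the real UID. */
#include <errno.h>
#include <pwd.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static int determine_user(char *user, size_t len)
{
    int rc = getlogin_r(user, len);          /* utmp-based lookup */
    if (rc == 0)
        return 0;
    fprintf(stderr, "getlogin_r failed: %s\n", strerror(rc));

    struct passwd *pw = getpwuid(getuid());  /* fall back to NSS (passwd) by UID */
    if (pw != NULL) {
        snprintf(user, len, "%s", pw->pw_name);
        return 0;
    }
    return -1;  /* caller must not use the (uninitialized) buffer */
}

int main(void)
{
    char user[256];
    if (determine_user(user, sizeof(user)) == 0)
        printf("user=%s\n", user);
    else
        fprintf(stderr, "could not determine the user name\n");
    return 0;
}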

Hi @cohult - Thanks a lot for the pointers. I put in some debug statements and figured out that the call to getlogin_r (this is the actual function used) returns ENOENT (meaning no username), and that is why garbage is printed. The /var/run/utmp file is not present in my container. Creating an empty file, so that the calling program can actually write into it, fixed the problem.
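
For anyone hitting the same thing, here is a tiny standalone check (my own sketch, not from clistart.c) that can be compiled and run in a fresh SSH session inside the container, before and after creating /var/run/utmp, to confirm the utmp dependency:

/* Prints the getlogin_r() result; without a utmp entry for the session
 * (e.g. when /var/run/utmp is missing), it fails with ENOENT. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char user[256];
    int rc = getlogin_r(user, sizeof(user));
    if (rc == 0)
        printf("getlogin_r: user=%s\n", user);
    else
        printf("getlogin_r failed: %s (%d)\n", strerror(rc), rc);
    return 0;
}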
