CVE-2019-19781 poor man's ktrace(1) driven analysis

Tags: security, shitrix, citrix, cve-2019-19781

As far as I understand the situation, CVE-2019-19781 is, at its core, a path traversal vulnerability. I don't want to reiterate information that is already readily available via Google and the like, especially since the best I could do is replicate it without changing even minor details. If you want to learn more about CVE-2019-19781, please use a search engine of your choice.

Lab setup

As mentioned above, I wasn't too eager to drill into a malicious binary inside a production environment, so I set up a lab for analysing it.

[Figure: lab setup]

The setup itself is pretty basic and runs on an ESXi host. firewall is the piece of infrastructure that gives me easy access to my target, while freebi is the machine I am running the httpd binary on. To stop the malware from phoning home, I set up a rule on firewall which blocks and logs all outgoing traffic coming from freebi. I also deliberately chose 198.51.100.0/24 because it is reserved for documentation (RFC 5737) and should not be routed on the internet. So even if the box phones home, I am safe as long as I don't accidentally NAT its traffic. freebi itself runs a stock FreeBSD 8.4-RELEASE, which gives me a sufficient environment to run httpd without fiddling with libraries and extra software:

root@freebi:~ # uname -a
FreeBSD freebi.my.domain 8.4-RELEASE FreeBSD 8.4-RELEASE #0 r251259: Sun Jun  2 21:26:57 UTC 2013     root@bake.isc.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64
root@freebi:~ #
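For reference, the blocking rule on firewall is plain pf(4). It might look roughly like this; note that freebi's address 198.51.100.10 is a made-up example inside the documentation net:

```
# /etc/pf.conf on firewall: log and drop everything freebi sends out
freebi = "198.51.100.10"  # hypothetical address of freebi
block drop log quick from $freebi to any
```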

Sneak Peek

Initially I ran strings(1) on httpd to see if I could extract something from the binary. Sometimes strings(1) alone yields valuable information, but this time I was out of luck. I will, however, come back to strings(1) later in the process.

The next best thing I came up with was trying to run the process and ktrace(1) the thing, which I did:

root@freebi:~ # ktrace ./httpd 
root@freebi:~ # pgrep httpd
root@freebi:~ #

Strangely enough, httpd quit immediately, but why? Luckily I had traced the binary, so I could investigate the resulting ktrace.out via kdump(1). Just by letting kdump(1) spit out the contents of the trace I was able to see that

  1. something was written
  2. whatever it was, it was written to /var/nstmp/.nscache/, a directory unknown to me
  3. after writing the binary just quits

My guess is that ns is short for NetScaler. Searching the directory reveals that indeed just one file has been created:

root@freebi:~ # find /var/nstmp
/var/nstmp
/var/nstmp/.nscache
/var/nstmp/.nscache/httpd
root@freebi:~ #

Checking with stat(1), I was also able to determine that the file had only just been created:

root@freebi:~ # stat -x /var/nstmp/.nscache/httpd 
  File: "/var/nstmp/.nscache/httpd"
  Size: 2055816      FileType: Regular File
  Mode: (0744/-rwxr--r--)         Uid: (    0/    root)  Gid: (    0/   wheel)
Device: 0,83   Inode: 518146    Links: 1
Access: Sun Feb 16 21:39:03 2020
Modify: Sun Feb 16 21:39:03 2020
Change: Sun Feb 16 21:39:03 2020
root@freebi:~ # date
Sun Feb 16 21:53:07 CET 2020
root@freebi:~ #

Checksumming with sha1(1) shows that both files are identical:

root@freebi:~ # sha1 /var/nstmp/.nscache/httpd httpd | column -t
SHA1  (/var/nstmp/.nscache/httpd)  =  1c298ac9cba039a4d3adebedbd7ad714e1633d92
SHA1  (httpd)                      =  1c298ac9cba039a4d3adebedbd7ad714e1633d92
root@freebi:~ #

Rerunning the binary did not change the behaviour; it just kept copying itself to /var/nstmp/.nscache/ and then quit. So the next best thing was to run the binary from its new location and see what's going on:

root@freebi:~ # ktrace /var/nstmp/.nscache/httpd 

Nice, this time the binary kept running, but literally seconds later I was greeted with bad news:

/: write failed, filesystem is full

So I stopped the binary and went looking for what had just filled my disk. I didn't have to search for long to find the culprit: ktrace.out was eating up my disk. kdump(1)ing the file, I saw an endless loop checking whether /netscaler/portal/scripts exists, as fast as the CPU could spin:

...
   825 httpd    CALL  open(0xc4201c3160,O_CLOEXEC,<unused>0)
   825 httpd    NAMI  "/netscaler/portal/scripts"
   825 httpd    RET   open -1 errno 2 No such file or directory
...

So again I went on, created the necessary directories and started httpd, only to find my disk filling up again. Slower this time, but still well within a minute. This time /netscaler/portal/templates was being looked up to death. Obviously both directories play a vital role in the kill chain. Funnily enough, the malware did not create the necessary directories itself but just kept looking for them.
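Recreating the path the binary keeps polling for is a one-liner, since mkdir -p creates all the intermediate directories as well:

```shell
# create the directory tree the binary keeps looking up
mkdir -p /netscaler/portal/scripts
```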

On I went, created the templates directory and started the binary again. To my delight the process kept running without filling up my disk as quickly as before. Checking the situation with netstat(1), something funny came up: something was listening on *.18634/udp. Checking with kdump(1) again, I could confirm that my malicious httpd had opened the socket:

root@freebi:~ # kdump | grep 18634
   858 httpd    STRU  struct sockaddr { AF_INET6, [::]:18634 }
   858 httpd    STRU  struct sockaddr { AF_INET6, [::]:18634 }
root@freebi:~ #

Browsing through my ktrace.out, I could see that the process keeps looking at the directory entries of both /netscaler/portal/templates and /netscaler/portal/scripts. Dropping an empty file into the templates directory did not do any good. Doing the same in the scripts directory, on the other hand, immediately got my file deleted. Again digging through the trace, I saw that httpd was indeed opening the file and, for some reason, deleting it right away:

root@freebi:/netscaler/portal/scripts # touch asfdg
root@freebi:/netscaler/portal/scripts # ls
root@freebi:/netscaler/portal/scripts # cd -
root@freebi:~ # kdump | grep -A1 -B5 asfdg
   920 httpd    CALL  getdirentries(0x5,0xc4200f1000,0x1000,0xc420044c40)
   920 httpd    RET   getdirentries 0
   920 httpd    CALL  getdirentries(0x6,0xc4200be000,0x1000,0xc420042bd0)
   920 httpd    CALL  lstat(0xc4200123a0,0xc420380928)
   920 httpd    RET   getdirentries 512/0x200
   920 httpd    NAMI  "/netscaler/portal/scripts/asfdg"
   920 httpd    CALL  getdirentries(0x6,0xc4200be000,0x1000,0xc420042bd0)
--
   920 httpd    STRU  struct stat {dev=81, ino=77, mode=-rw-r--r-- , nlink=1, uid=0, gid=0, rdev=0, atime=1581888855, stime=1581888569, ctime=1581888652, birthtime=1581888569, size=0, blksize=16384, blocks=0, flags=0x0 }
   920 httpd    CALL  clock_gettime(0x4,0xc420044ed8)
   920 httpd    RET   lstat 0
   920 httpd    RET   clock_gettime 0
   920 httpd    CALL  open(0xc4200123e0,O_CLOEXEC,<unused>0)
   920 httpd    NAMI  "/netscaler/portal/scripts/asfdg"
   920 httpd    RET   open 5
--
   920 httpd    CALL  read(0x6,0xc42042a200,0x200)
   920 httpd    GIO   fd 6 read 0 bytes
       ""
   920 httpd    CALL  unlink(0xc420012400)
   920 httpd    RET   read 0
   920 httpd    NAMI  "/netscaler/portal/scripts/asfdg"
   920 httpd    CALL  close(0x6)
root@freebi:~ #

But why? Without the source code there was no chance for me to determine the behaviour just by looking at the syscalls httpd makes. So I went ahead and started to google, which led me to a blog post describing the doFile function, which checks each file for the presence of a 32-byte secret. If the secret matches the one hard-coded into the binary, the file may stay; otherwise it gets deleted. Guessing that the key would be composed of only lowercase letters and digits, I went back to strings(1) to see what I could find. As my first pass turned up too many results, I decided to filter out anything consisting of only digits ([0-9]{32}) or only letters ([a-z]{32}):

root@freebi:~ # strings httpd | grep -o '[a-z0-9]\{32\}' | egrep -v '^([0-9]{32}|[a-z]{32})$'
cas1cas2cas3cas4cas5cas6chandead
mdnsnoneopenpop3readsbrkscvgseek
smtpstattcp4tcp6trueudp6uintunix
arrayblockcasp1casp2casp3chdirch
gcinghostshttpdhttpsimap2imap3im
apsint16int32int64lstatmatchmkdi
reeatomicand8casgstatuscomplex12
existsfloat32nan2float64nan2floa
protocols19073486328125953674316
conf0123456789abcdef238418579101
0123456789abcdefx119209289550781
nscache1490116119384765625745058
templates36379788070917129516601
type2842170943040400743484497070
ttyb645b0ab298b55816d686c4c5ff52
value142108547152020037174224853
type3552713678800500929355621337
completely0123456789abcdefghijkl
mnopqrstuvwxyz444089209850062616
nextp111022302462515654042363166
expression1387778780781445675529
method00010203040506070809101112
4orashishupjekcekwadvowjovyavjea
root@freebi:~ #

While I could have created a file per result, I just went ahead with the most obvious hit, 4orashishupjekcekwadvowjovyavjea, and lo and behold, the file remained:

root@freebi:/netscaler/portal/scripts # echo 4orashishupjekcekwadvowjovyavjea > asdfg
root@freebi:/netscaler/portal/scripts # ls
asdfg
root@freebi:/netscaler/portal/scripts #
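Based on the blog post's description, the check doFile presumably performs can be sketched in shell. This is my guess at the logic, not the actual implementation, and the key is the fake one from above:

```shell
#!/bin/sh
# Sketch of the presumed doFile logic: a dropped file survives only if
# its first 32 bytes match the key hard-coded into the binary.
secret="4orashishupjekcekwadvowjovyavjea"   # fake key, see the caveat below
f=/tmp/asdfg                                # stand-in for the scripts directory

printf '%s\n' "$secret" > "$f"              # plant a file carrying the key
key=$(head -c 32 "$f")                      # read the first 32 bytes

if [ "$key" = "$secret" ]; then
    echo "keep $f"
else
    rm -f "$f"                              # no key, no file
    echo "deleted $f"
fi
```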

This is where my analysis has to stop, as there was nothing more to gain from reading syscalls alone, at least for me. Still, I was able to retrieve the secret key used by the binary in this case. I wonder whether this could be used to trace the action back to a specific actor.

CAVEAT: I replaced the secret key with a fake one generated by me. If the secret key is indeed somewhat unique, I don't want my analysis to be publicly linked to a specific actor. Also, please keep in mind that I am neither a security researcher nor do I work in digital forensics. So if the above seemed stupid, it probably was ;-)

Hope you enjoyed the ride, and if you learned something, even better :-)

--EOF