cover image
The Illustrated TLS 1.2 Connection
5 Mar 2026
tls12.xargs.org

Every byte of a TLS connection explained and reproduced

cover image

BinaryAudit benchmarks AI agents using Ghidra to find backdoors in compiled binaries of real open-source servers, proxies, and network infrastructure.

cover image
The dangers of SSL certificates
28 Dec 2025
surfingcomplexity.blog

Yesterday, the Bazel team at Google did not have a very Merry Boxing Day. An SSL certificate expired, as shown in this screenshot from the GitHub issue. This expired certificate apparently b…

A Brief History of NSA Backdoors.
28 Nov 2025
ethanheilman.com
cover image
An Illustrated Guide to OAuth
25 Aug 2025
ducktyped.org

OAuth was first introduced in 2007.

cover image

A story of secrecy, resistance, and the fight for digital freedom.

cover image
I use Zip Bombs to Protect my Server
30 Apr 2025
idiallo.com

The majority of the traffic on the web is from bots. For the most part, these bots are used to discover new content. These are RSS Feed readers, search engines crawling your content, or nowadays AI bo

cover image

MCP, short for Model Context Protocol, is the hot new standard behind how Large Language Models (LLMs) like Claude, GPT, or Cursor integrate with tools and data. It’s been described as the “USB-C for…

cover image

Containers were a revolutionary jump ahead of virtual machines, and they have continued to get faster, lighter, and more secure in the years since.

cover image

Telegram's founder Pavel Durov says his company only employs around 30 engineers. Security experts say that raises serious questions about the company's cybersecurity.

cover image

Researchers also disclosed a separate bug called “Inception” for newer AMD CPUs.

cover image
Downfall Attacks
9 Aug 2023
downfall.page

Downfall attacks target a critical weakness found in billions of modern processors used in personal and cloud computers.

cover image
Tokenized Tokens
29 Jul 2023
fly.io

Documentation and guides from the team at Fly.io.

cover image

Discover the power of browser fingerprinting: personalize user experience, enhance fraud detection, and optimize login security.

cover image

The Mullvad Browser is a privacy-focused web browser developed in collaboration between Mullvad VPN and the Tor Project. It's designed to minimize tracking and fingerprinting. You could say it's a Tor Browser to be used without the Tor Network.

cover image

Ensure your Rails application stays secure by following some best practices and habits.

cover image

The Cyber Swiss Army Knife - a web app for encryption, encoding, compression and data analysis - gchq/CyberChef

Router Security
30 Jul 2022
routersecurity.org

Router Security Home Page

cover image

Acronyms are shortcuts, and we love using them, especially the catchy ones! Let's decipher some...

cover image

While the core philosophy of blockchains is trustlessness, trusted execution environments can be integral to proof-of-stake blockchains.

cover image
Intelligent System Security
12 Feb 2022
intellisec.de

We present a novel approach to infiltrate data to air-gapped systems without any additional hardware on-site.

cover image

Welcome to the Top 10 (new) Web Hacking Techniques of 2021, the latest iteration of our annual community-powered effort to identify the most significant web security research released in the last year

cover image

This article was originally written by Diogo Souza on the Honeybadger Developer Blog. In the third...

cover image
Never use the word “User” in your code
23 Jan 2022
codewithoutrules.com

You’re six months into a project when you realize a tiny, simple assumption you made at the start was completely wrong. And now you need to fix the problem while keeping the existing system running, with far more effort than it would’ve taken if you’d just gotten it right in the first place.

Today I’d like to tell you about one common mistake, a single word that will cause you endless trouble. I am speaking, of course, about “users”. There are two basic problems with this word: “User” is almost never a good description of your requirements, and “User” encourages a fundamental security design flaw. The concept “user” is dangerously vague, and you will almost always be better off using more accurate terminology.

cover image

Protect and discover secrets using Gitleaks 🔑.

Hacksplaining
12 Jan 2022
hacksplaining.com
https://httpsecurityreport.com/best_practice.html
12 Jan 2022
httpsecurityreport.com
cover image
The Basics of Web Application Security
13 Dec 2021
martinfowler.com

Security is both very important and often under-emphasized. While many targeted techniques help, there are some basic clean-code habits which every developer can and should be practicing.

cover image

How to secure an Ubuntu server against attacks.

cover image

From NotPetya to SolarWinds, it’s a problem that’s not going away any time soon.

cover image
Diffie-Hellman for the Layman
1 May 2021
borisreitman.medium.com

Whitfield Diffie and Martin Hellman are the researchers who invented a secure method for two parties to agree on a shared secret over a public channel. Their 1976 paper opens with the…

cover image

Although phishing tests can be helpful to protect users, using questionable tactics has the potential for harming relationships between a company and its employees. The authors suggest that managers avoid this damage by employing phishing tests with three criteria: Test teams, not individuals; don’t embarrass anyone; and gamify and reward.

APT Encounters of the Third Kind - Igor’s Blog
27 Mar 2021
igor-blue.github.io

A few weeks ago an ordinary security assessment turned into an incident response whirlwind. It was definitely a first for me, and I was kindly granted permission to outline the events in this blog post. This investigation started scary but turned out to be quite fun, and I hope reading it will be informative to you too. I'll be back to posting about my hardware research soon.

Contents: How it started · What the hell is this? · The NFS Server · 2nd malicious binary · Further forensics · Eureka Moment · The GOlang thingy · How the kernel got patched? and why not the golang app? · What we have so far · Q&A

How it started

Twice a year I am hired to do security assessments for a specific client. We have been working together for several years, and I had a pretty good understanding of their network and what to look for. This time my POC, Klaus, asked me to focus on privacy issues and GDPR compliance. However, he asked me to first look at their cluster of reverse gateways / load balancers. I had some prior knowledge of these gateways, but decided to start by creating my own test environment first.

The gateways run a custom Linux stack: basically a monolithic compiled kernel (without any modules), and a static GOlang application on top. The 100+ machines have no internal storage, but rather boot from external USB media that holds the kernel and the application. The GOlang app serves in two capacities: as an init replacement and as the reverse gateway software. During initialization it mounts /proc, /sys, devfs and so on, then mounts an NFS share hardcoded in the app. The NFS share contains the app's configuration, TLS certificates, blacklist data and a few more files. It starts listening on 443, filters incoming communication and passes valid requests on to different services in the production segment.

I set up a self-contained test environment, and spent a day in black-box examination. Having found nothing much I suggested we move on to looking at the production network, but Klaus insisted I continue with the gateways.
Specifically, he wanted to know if I could develop a methodology for testing whether an attacker who has gained access to the gateways is trying to access PII (Personally Identifiable Information) from within the decrypted HTTP stream. I couldn't SSH into the hosts (no SSH), so I figured we would have to add some kind of instrumentation to the GO app. Klaus still insisted I start by looking at the traffic before (red) and after (green) the GW, and gave me access to a mirrored port on both sides so I could capture traffic to a standalone laptop he prepared for me, which I could access through an LTE modem but was not allowed to upload data from.

The problem I faced now was how to find out which HTTPS traffic corresponded to requests with embedded PII. One possible avenue was to try to correlate the encrypted traffic with the decrypted HTTP traffic. This proved impossible using timing alone. However, inspecting the decoded traffic I noticed the GW app adds an 'X-Orig-Connection' header with the four-tuple of the TLS connection! Yay! I wrote a small python program to scan the port 80 traffic capture and create a mapping from each four-tuple TLS connection to a boolean: True for connections with PII and False for all others:

10.4.254.254,443,[Redacted],43404,376106847.319,False
10.4.254.254,443,[Redacted],52064,376106856.146,False
10.4.254.254,443,[Redacted],40946,376106856.295,False
10.4.254.254,443,[Redacted],48366,376106856.593,False
10.4.254.254,443,[Redacted],48362,376106856.623,True
10.4.254.254,443,[Redacted],45872,376106856.645,False
10.4.254.254,443,[Redacted],40124,376106856.675,False
...

With this in mind I could now extract the data from the PCAPs and do some correlations. After a few long hours getting scapy to parse timestamps consistently enough for comparisons, I had a list of connection timing information correlated with PII. A few more fun hours with Excel and I had histogram graphs of time vs. packet count.
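The four-tuple-to-PII mapping described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the author's program: it assumes the port-80 capture has already been parsed into request records, and the record keys ("x_orig_connection", "body") and the toy email-based PII detector are my own inventions.

```python
# Hypothetical sketch of the four-tuple -> PII mapping described above.
# Assumes requests were already extracted from the port-80 capture.
import re
from typing import Dict, Iterable, Tuple

FourTuple = Tuple[str, int, str, int]

# Toy PII detector: an email address anywhere in the request body.
PII_RE = re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+")

def parse_four_tuple(header: str) -> FourTuple:
    """Parse an 'X-Orig-Connection' value like '10.4.254.254,443,1.2.3.4,43404'."""
    src, sport, dst, dport = header.split(",")
    return (src, int(sport), dst, int(dport))

def build_pii_map(requests: Iterable[dict]) -> Dict[FourTuple, bool]:
    """Map each TLS four-tuple to True iff any request on it carried PII."""
    mapping: Dict[FourTuple, bool] = {}
    for req in requests:
        key = parse_four_tuple(req["x_orig_connection"])
        has_pii = bool(PII_RE.search(req["body"]))
        mapping[key] = mapping.get(key, False) or has_pii
    return mapping
```

The resulting dictionary is what gets dumped as the CSV-like lines shown above, one four-tuple and boolean per TLS connection.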
Everything looked normal for the HTTP traffic, although I expected more of a normal distribution than the power-law type thingy I got. Port 443 initially looked the same, and I got the normal distribution I expected. But when filtering for PII something was seriously wrong: the distribution was skewed and shifted to longer time frames, and there was nothing similar on the port 80 end. My only explanation was that something was wrong with my testing setup (the blue bars) vs. the real live setup (the orange bars). I wrote on our slack channel 'I think my setup is sh*t, can anyone resend me the config files?', but this was already very late at night, and no one responded.

Having a slight OCD I couldn't let this go. To my rescue came another security(?) feature of the GWs: they restarted daily, staggered one by one, with about 10 minutes between hosts. This means that every ten minutes or so one of them would reboot, and thus reload its configuration files over NFS. And since I could see the NFS traffic through the port mirror I had access to, I reckoned I could get the production configuration files from the NFS capture (bottom dotted blue line in the diagram before). So to cut a long story short, I found the NFS read reply packet, and got the data I needed. But ... why the heck is eof 77685??? Come on people, it's 3:34 AM! What's more, the actual data was 77685 bytes, exactly 8192 bytes more than the 'Read length'. The entropy of that extra data was pretty uniform, suggesting it was encrypted. The file I had was definitely not encrypted. (Histogram of the extra 8192 bytes.) When I mounted the NFS export myself I got a normal EOF value of 1!

What the hell is this?

Comparing the capture from my testing machine with the one from the port mirror I saw something else weird. For other NFS open requests (on all of my test system captures and for other files in the production system) we get the usual owner string. Spot the difference? The 'open id:' string became 'open-id:'. Was I dealing with some corrupt packet?
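The entropy check mentioned above, used to judge whether the mystery 8192 extra bytes looked like ciphertext, can be sketched with a standard Shannon-entropy calculation (this is a generic illustration, not the author's code):

```python
# Shannon entropy of a byte string, in bits per byte (0.0 to 8.0).
# Near-uniform entropy (~8 bits/byte) is typical of ciphertext or
# compressed data; plain config files usually score much lower.
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Return the Shannon entropy of `data` in bits per byte."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

For example, byte_entropy(b"aaaa") is 0.0, while a buffer where every byte value appears equally often scores the maximum 8.0, which is roughly what the suspicious extra 8192 bytes looked like.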
But the exact same problem reappeared the next time blacklist.db was sent over the wire by another GW host. Time to look at the kernel source code: the "open id" string is hardcoded. What's up? After a good night's sleep (and no beer this time) I repeated the experiment, and having convinced myself I was not hallucinating, I decided to compare the source code of the exact kernel version with the kernel binary I got. What I expected to see was this (from nfs4xdr.c):

static inline void encode_openhdr(struct xdr_stream *xdr,
                                  const struct nfs_openargs *arg)
{
        __be32 *p;
        /*
         * opcode 4, seqid 4, share_access 4, share_deny 4, clientid 8,
         * ownerlen 4, owner 4 = 32
         */
        encode_nfs4_seqid(xdr, arg->seqid);
        encode_share_access(xdr, arg->share_access);
        p = reserve_space(xdr, 36);
        p = xdr_encode_hyper(p, arg->clientid);
        *p++ = cpu_to_be32(24);
        p = xdr_encode_opaque_fixed(p, "open id:", 8);
        *p++ = cpu_to_be32(arg->server->s_dev);
        *p++ = cpu_to_be32(arg->id.uniquifier);
        xdr_encode_hyper(p, arg->id.create_time);
}

Running binwalk -e -M bzImage I got the internal ELF image, and opened it in IDA. Of course I didn't have any symbols, but I got nfs4_xdr_enc_open() from /proc/kallsyms, and from there to encode_open(), which led me to encode_openhdr(). With some help from Hex-Rays I got code that looked very similar, but with one key difference:

static inline void encode_openhdr(struct xdr_stream *xdr,
                                  const struct nfs_openargs *arg)
{
        ...
        p = xdr_encode_opaque_fixed(p, unknown_func("open id:", arg), 8);
        ...
}

The function unknown_func was pretty long and complicated, but sometimes decided to replace the space between 'open' and 'id' with a hyphen. Does the NFS server care? Apparently this string is just an opaque client identifier that is ignored by the NFS server, so no one would see the difference. That is, unless they were trying to extract something from an NFS stream, and obviously that was not a likely scenario. OK, back to the weird 'eof' thingy from the NFS server.
The NFS Server

The server was running the 'NFS-ganesha-3.3' package. This is a very modular user-space NFS server implemented as a series of loadable modules called FSALs. For example, support for files on the regular filesystem is implemented through a module called libfsalvfs.so. Having verified all the files on disk had the same SHA1 as the distro package, I decided to dump the process memory. I didn't have any tools on the host, so I used GDB, which helpfully was already there. Unexpectedly, GDB was suddenly killed, the file I specified as output got erased, and the NFS server process restarted. I took the dump again, but there was nothing special there! I was pretty suspicious at this point, and wanted to recover the original dump file from the first attempt. Fortunately for me I was dumping the file to the laptop, again over NFS. The file had been deleted, but I managed to recover it from the disk on that server.

2nd malicious binary

The memory dump was truncated, but had a corrupt version of NFS-ganesha inside. There were two libfsalvfs.so libraries loaded: the original one and an injected SO file with the same name. The injected file was clearly malicious. The main binary was patched in a few places, and the function table into libfsalvfs.so was replaced with one pointing at the alternate libfsalvfs.so. The alternate file was compiled from NFS-ganesha sources, but modified to include new and improved (wink wink) functionality. The most interesting of the new functionality was two separate implementations of covert channels. The first one we encountered already: when an open request comes in with 'open-id' instead of 'open id', the file handle is marked. This change is opaque to the NFS server, so unpatched servers just ignore it and nothing much happens. On an infiltrated NFS server, when a file handle opened this way is read, the server appends to the last block a payload coming from the malware...
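The first covert-channel marker lends itself to a simple detector sketch. A clean Linux client always emits the literal owner prefix "open id:", so any capture containing "open-id:" is a red flag. This is a hypothetical illustration (a raw byte scan, not a proper NFSv4 XDR parser, and the function name is my own):

```python
# Hypothetical detector for the covert-channel marker described above.
# A clean Linux NFSv4 client hardcodes the owner prefix "open id:";
# the backdoored client rewrites it to "open-id:" to mark file handles.
CLEAN_MARKER = b"open id:"
COVERT_MARKER = b"open-id:"

def find_covert_opens(capture: bytes) -> list:
    """Return the byte offset of every covert 'open-id:' marker in a raw capture."""
    offsets = []
    start = 0
    while True:
        i = capture.find(COVERT_MARKER, start)
        if i == -1:
            return offsets
        offsets.append(i)
        start = i + len(COVERT_MARKER)
```

A real detector would decode the RPC/XDR framing and check only the OPEN owner field, but since the server treats the owner as opaque, even this crude scan distinguishes the patched client from an unpatched one.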

Linux Hardening Guide | Madaidan's Insecurities
31 Dec 2020
madaidans-insecurities.github.io
cover image
Top 10 Web Hacking Techniques of 2017 | Blog
12 Oct 2018
portswigger.net

The verdict is in! Following 37 nominations whittled down to a shortlist of 15 by a community vote, our panel of experts has conferred and selected the top 10 web hacking techniques of 2017 (and 2016)

cover image
My Bodyguard, My Self
8 Oct 2018
longform.org

The author spent a day with three men in a high-end security detail to find out how it feels to be safe.

cover image

Let's Encrypt does not currently generate Elliptic Curve certificates. Here's how to obtain one.

cover image

Forget the new iPhones: Apple's best product is now privacy.

cover image
Certificates for localhost
15 Jul 2018
letsencrypt.org

Sometimes people want to get a certificate for the hostname “localhost”, either for use in local development, or for distribution with a native application that needs to communicate with a web application. Let’s Encrypt can’t provide certificates for “localhost” because nobody uniquely owns it, and it’s not rooted in a top level domain like “.com” or “.net”. It’s possible to set up your own domain name that happens to resolve to 127.

netdev day 1: IPsec!
13 Jul 2018
jvns.ca