The problem

Some networks run Deep Packet Inspection (DPI) middleboxes that read the SNI (Server Name Indication) field in TLS ClientHello packets. If the SNI matches a blocklist, the connection gets reset or dropped. You try to open a website, and it just hangs. The TLS handshake never completes.

The other half of the problem is DNS poisoning. Even if you somehow get past the DPI, the DNS resolver returns a fake IP address that points to a block page. So you need to solve both: the SNI inspection and the DNS poisoning.

Most solutions to this involve a VPN or a proxy. I didn’t want either. A VPN routes all your traffic through a remote server, which is overkill for this problem. A proxy requires app configuration, and some apps ignore proxy settings entirely. I wanted something that works at the system level, transparently, with one command: sudo gecit run.

The idea

The core trick is simple. Before the real TLS ClientHello reaches the DPI, send a fake one.

The fake ClientHello carries a different SNI (www.google.com) and a low IP TTL. The TTL is set just high enough to reach the DPI middlebox but low enough to expire before reaching the actual server. The DPI processes the fake, records “www.google.com”, and lets the connection through. The server never sees the fake because it expired in transit.

Then the real ClientHello passes through. The DPI already made its decision based on the fake. It’s desynchronized.

App connects to target:443
    |
gecit intercepts the connection
  Linux:  eBPF sock_ops fires (inside kernel, before app sends data)
  macOS:  TUN device captures packet, gVisor netstack terminates TCP
    |
Fake ClientHello with SNI "www.google.com" sent with low TTL
    |
Fake reaches DPI -> DPI records "www.google.com" -> allows connection
Fake expires before server (low TTL) -> server never sees it
    |
Real ClientHello passes through -> DPI already desynchronized

On top of that, the eBPF program clamps the TCP MSS to a small value (40 bytes). This forces the kernel to fragment the real ClientHello into tiny segments. Some DPI systems only inspect the first TCP segment, so if the SNI spans multiple segments, they can’t read it.

For DNS, gecit runs a local resolver on 127.0.0.1:53 and redirects system DNS to it. Queries are forwarded over encrypted HTTPS (DoH, DNS-over-HTTPS) to Cloudflare, Google, or whichever upstream you choose. Plaintext DNS poisoning doesn’t work anymore.
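
The forwarding step itself is small. As a sketch (illustrative, not gecit’s actual code, standard library only), relaying a raw wire-format DNS query to an RFC 8484 upstream looks roughly like this:

func forwardDoH(client *http.Client, upstream string, query []byte) ([]byte, error) {
    // POST the raw DNS message; the upstream answers with a raw DNS message.
    req, err := http.NewRequest(http.MethodPost, upstream, bytes.NewReader(query))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Content-Type", "application/dns-message")
    resp, err := client.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return io.ReadAll(resp.Body)
}

The listener on 127.0.0.1:53 reads a plaintext query, calls something like this with, say, https://cloudflare-dns.com/dns-query, and writes the response bytes straight back.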

Linux: eBPF sock_ops

This is the interesting part.

I wanted the fake packet to be sent before the application sends any data. Not after, not concurrently. Before. If the app’s real ClientHello reaches the DPI before the fake, the game is over.

eBPF sock_ops gives you exactly this. You attach a BPF program to a cgroup, and the kernel calls it at specific points in the TCP lifecycle. The one I care about is BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB, which fires when an outgoing TCP connection completes the three-way handshake. At that moment, the connection is established but the application hasn’t sent any data yet. Perfect timing.

Here’s the BPF program:

SEC("sockops")
int gecit_sockops(struct bpf_sock_ops *skops)
{
    __u32 key = 0;
    struct gecit_config_t *cfg = bpf_map_lookup_elem(&gecit_config, &key);
    if (!cfg || !cfg->enabled)
        return 1;

    switch (skops->op) {
    case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
        return handle_established(skops, cfg);
    case BPF_SOCK_OPS_HDR_OPT_LEN_CB:
        return handle_hdr_opt_len(skops);
    case BPF_SOCK_OPS_WRITE_HDR_OPT_CB:
        return handle_write_hdr_opt(skops, cfg);
    }

    return 1;
}
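
Attaching this is the userspace loader’s first job. gecit’s loader isn’t shown here, but with the cilium/ebpf library the loading and attachment look roughly like this (a sketch; the object path and cgroup path are illustrative):

spec, err := ebpf.LoadCollectionSpec("gecit_bpf.o")
if err != nil {
    return err
}
coll, err := ebpf.NewCollection(spec)
if err != nil {
    return err
}
defer coll.Close()

// Attach to the cgroup v2 root so every process on the system is covered.
lnk, err := link.AttachCgroup(link.CgroupOptions{
    Path:    "/sys/fs/cgroup",
    Attach:  ebpf.AttachCGroupSockOps,
    Program: coll.Programs["gecit_sockops"],
})
if err != nil {
    return err
}
defer lnk.Close()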

When a new connection to port 443 is established, handle_established does two things:

static __always_inline int handle_established(struct bpf_sock_ops *skops,
                                              struct gecit_config_t *cfg)
{
    __u32 dst_ip = skops->remote_ip4;
    if (bpf_map_lookup_elem(&exclude_ips, &dst_ip))
        return 1;

    // remote_port is in network byte order; local_port (used below) is host order.
    __u16 dst_port = (__u16)bpf_ntohl(skops->remote_port);
    if (!bpf_map_lookup_elem(&target_ports, &dst_port))
        return 1;

    // Set small MSS to force ClientHello fragmentation.
    int mss = cfg->mss;
    bpf_setsockopt(skops, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss));

    // Notify userspace via perf event.
    struct conn_event evt = {};
    evt.src_ip   = skops->local_ip4;
    evt.dst_ip   = skops->remote_ip4;
    evt.src_port = skops->local_port;
    evt.dst_port = dst_port;
    evt.seq      = skops->snd_nxt;
    evt.ack      = skops->rcv_nxt;
    bpf_perf_event_output(skops, &conn_events, BPF_F_CURRENT_CPU,
                          &evt, sizeof(evt));

    // ... MSS restoration tracking omitted for brevity
    return 1;
}

First, it clamps TCP_MAXSEG to a small value (40 bytes by default, read from the config map) using bpf_setsockopt. This happens inside the kernel, per connection. The application doesn’t know. When it sends the ClientHello, the kernel splits it into tiny segments automatically.

Second, it fires a perf event with the connection details: source/destination IPs, ports, and the TCP sequence and acknowledgment numbers. The seq/ack are critical. The fake packet needs the exact same seq/ack as the real connection, otherwise the DPI will ignore it.

On the Go side, a goroutine reads these perf events and sends the fake:

func (m *Manager) readEvents(ctx context.Context) {
    defer m.wg.Done()

    for {
        record, err := m.reader.Read()
        if err != nil {
            // Reader closed or context canceled; either way, stop.
            return
        }

        // conn_event is 20 bytes: two u32 IPs, two u16 ports, u32 seq, u32 ack.
        if len(record.RawSample) < 20 {
            continue
        }

        var evt connEvent
        evt.SrcIP = binary.NativeEndian.Uint32(record.RawSample[0:4])
        evt.DstIP = binary.NativeEndian.Uint32(record.RawSample[4:8])
        evt.SrcPort = binary.NativeEndian.Uint16(record.RawSample[8:10])
        evt.DstPort = binary.NativeEndian.Uint16(record.RawSample[10:12])
        evt.Seq = binary.NativeEndian.Uint32(record.RawSample[12:16])
        evt.Ack = binary.NativeEndian.Uint32(record.RawSample[16:20])

        m.injectFake(evt)
    }
}

The fake packet itself is a minimal TLS ClientHello with SNI set to www.google.com. It gets sent via a raw socket with IP_HDRINCL, using the same 5-tuple (src/dst IP, src/dst port) and the real seq/ack numbers from the eBPF event. The only difference: TTL is set to 8 instead of the default 64.
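
The send itself is a few syscalls. A minimal sketch of the Linux/macOS path, assuming golang.org/x/sys/unix and that pkt already contains the complete IP and TCP headers plus the fake payload:

// IP_HDRINCL tells the kernel the buffer already starts with our own
// IP header (TTL 8 included), so it must not prepend one.
func sendRawIPv4(pkt []byte, dstIP [4]byte) error {
    fd, err := unix.Socket(unix.AF_INET, unix.SOCK_RAW, unix.IPPROTO_RAW)
    if err != nil {
        return err
    }
    defer unix.Close(fd)
    if err := unix.SetsockoptInt(fd, unix.IPPROTO_IP, unix.IP_HDRINCL, 1); err != nil {
        return err
    }
    // Ports and seq/ack live inside pkt; the sockaddr is only for routing.
    return unix.Sendto(fd, pkt, 0, &unix.SockaddrInet4{Addr: dstIP})
}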

What stays in the kernel vs what goes to userspace: the eBPF program handles connection detection and MSS clamping. The fake packet construction and raw socket send happen in userspace (Go). Only the handshake touches userspace. After that, data flows through the kernel at full speed. gecit adds zero overhead to bulk data transfer.

macOS: no eBPF, now what?

macOS doesn’t have eBPF. No kernel hooks. No bpf_setsockopt. Apple deprecated kernel extensions and pushed everything to userspace via Network Extensions, which require a developer account, entitlements, and App Store distribution. Not exactly “download and run.”

My first attempt was an HTTP CONNECT proxy. Set the system HTTPS proxy to 127.0.0.1:8443, intercept CONNECT requests, inject the fake, pipe the data. It worked for browsers. Then someone tried it with Discord, and Discord doesn’t respect system proxy settings. Neither do a lot of other apps.

So I switched to TUN. A TUN device is a virtual network interface: you route traffic through it, and your userspace program reads and writes raw IP packets. All traffic goes through it; no app can bypass it.

The implementation uses sing-tun with gVisor’s userspace TCP/IP stack. When a TCP connection to port 443 arrives at the TUN, gVisor terminates the TCP handshake with the app. gecit then opens a real connection to the server, reads the ClientHello from the app side, injects the fake, and forwards the real one:

func (h *handler) injectAndForward(appConn, serverConn net.Conn, dst string) {
    // Read the app's ClientHello first. The deadline keeps a silent
    // client from holding the handler forever.
    appConn.SetReadDeadline(time.Now().Add(5 * time.Second))
    clientHello := make([]byte, 16384)
    n, err := appConn.Read(clientHello)
    if err != nil {
        return
    }
    clientHello = clientHello[:n]
    appConn.SetReadDeadline(time.Time{})

    // Prefer the SNI from the real ClientHello over the raw destination.
    if sni := fake.ParseSNI(clientHello); sni != "" {
        dst = fmt.Sprintf("%s:%d", sni, serverConn.RemoteAddr().(*net.TCPAddr).Port)
    }

    // Current seq/ack of the server-side connection (extracted via pcap).
    seq, ack := seqtrack.GetSeqAck(serverConn)

    // ... build ConnInfo with seq/ack ...

    // Send several copies in case one is lost before it reaches the DPI.
    for i := 0; i < 3; i++ {
        h.mgr.rawSock.SendFake(connInfo, fake.TLSClientHello, h.mgr.cfg.FakeTTL)
    }

    // Give the fakes a head start, then forward the real ClientHello.
    time.Sleep(2 * time.Millisecond)

    serverConn.Write(clientHello)
    pipe(appConn, serverConn)
}

It works. Every app gets intercepted. But there’s a cost: all traffic goes through userspace. Every packet crosses the kernel-user boundary twice. On Linux with eBPF, only the handshake touches userspace. On macOS, everything does. It’s the same overhead as a VPN, just without the remote server.

The seq/ack extraction is another pain point. On Linux, the eBPF program reads snd_nxt and rcv_nxt directly from the socket. On macOS, there’s no kernel API for this. I use pcap to capture SYN-ACK packets on the physical NIC and extract the sequence numbers from them.
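
Roughly, with gopacket (a sketch; gecit’s seqtrack package presumably adds caching and timeouts): filter for the server’s SYN-ACK and derive the client-side numbers from it.

// From the SYN-ACK: the client's next seq is the SYN-ACK's ack field, and
// the client's ack is the SYN-ACK's seq + 1 (the SYN consumes one number).
func captureSeqAck(iface, serverIP string, serverPort int) (seq, ack uint32, err error) {
    handle, err := pcap.OpenLive(iface, 96, false, pcap.BlockForever)
    if err != nil {
        return 0, 0, err
    }
    defer handle.Close()
    filter := fmt.Sprintf(
        "tcp and src host %s and src port %d and tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)",
        serverIP, serverPort)
    if err := handle.SetBPFFilter(filter); err != nil {
        return 0, 0, err
    }
    for pkt := range gopacket.NewPacketSource(handle, handle.LinkType()).Packets() {
        if l := pkt.Layer(layers.LayerTypeTCP); l != nil {
            t := l.(*layers.TCP)
            return t.Ack, t.Seq + 1, nil
        }
    }
    return 0, 0, fmt.Errorf("capture closed before SYN-ACK")
}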

Windows: same approach, different pain

Windows uses the same TUN + gVisor approach as macOS. Same architecture, different problems.

First problem: raw sockets. Windows has blocked sending TCP over raw sockets since XP SP2. You can’t send a spoofed TCP packet through Winsock. The solution is Npcap, which provides pcap_sendpacket to inject raw Ethernet frames through its kernel driver. This means constructing the full Ethernet frame yourself, including the gateway MAC address (discovered from the ARP table):

func (s *pcapRawSocket) SendFake(conn ConnInfo, payload []byte, ttl int) error {
    ipTcp := BuildPacket(conn, payload, ttl)

    frame := make([]byte, 14+len(ipTcp))
    copy(frame[0:6], s.dstMAC)   // gateway MAC
    copy(frame[6:12], s.srcMAC)  // our MAC
    frame[12] = 0x08             // EtherType: IPv4
    frame[13] = 0x00
    copy(frame[14:], ipTcp)

    return s.handle.WritePacketData(frame)
}

On Linux and macOS, the raw socket operates at layer 3. The kernel fills in the Ethernet header, handles ARP, computes IP checksums. On Windows with pcap, you’re at layer 2. You build everything yourself. The IP header checksum that the Linux/macOS kernel computes for you? You have to compute it manually, or routers drop your packet silently. I learned this the hard way.
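
The checksum itself is the standard RFC 1071 ones’ complement sum over the header, with the checksum field zeroed first. Something like:

// Internet checksum (RFC 1071) over the IP header. Bytes 10-11 (the
// checksum field) must be zero before calling this.
func ipChecksum(hdr []byte) uint16 {
    var sum uint32
    for i := 0; i+1 < len(hdr); i += 2 {
        sum += uint32(hdr[i])<<8 | uint32(hdr[i+1])
    }
    if len(hdr)%2 == 1 {
        sum += uint32(hdr[len(hdr)-1]) << 8 // pad the odd trailing byte
    }
    for sum>>16 != 0 {
        sum = (sum >> 16) + (sum & 0xffff) // fold the carries back in
    }
    return ^uint16(sum)
}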

Second problem: most DPI bypass tools on Windows use WinDivert. WinDivert is a great tool, but its code signing certificate expired in 2023. Windows Defender flags it, and some systems refuse to load the driver. gecit uses WinTUN (from the WireGuard project) instead, which is properly signed and actively maintained.

Third problem: Npcap is not redistributable without an OEM license. Users need to install it separately from npcap.com. Not ideal for a “download and run” experience, but there’s no alternative for raw packet injection on Windows.

What eBPF gives you

After building the same thing on three platforms, the contrast is stark.

Linux (eBPF): The eBPF program hooks into the kernel’s TCP stack synchronously. It fires at the exact right moment (connection established, no data sent yet). MSS clamping happens in-kernel with bpf_setsockopt. Seq/ack numbers are available directly. Only the fake packet send touches userspace. Data transfer has zero overhead.

macOS (TUN): Virtual network interface, userspace TCP/IP stack (gVisor), routing table manipulation, pcap for seq/ack extraction, mDNSResponder management, network service detection for DNS. All traffic goes through userspace.

Windows (TUN + Npcap): Everything from macOS, plus: Ethernet frame construction, ARP table parsing for gateway MAC, IP checksum computation, Npcap as a runtime dependency, Windows Defender false positive handling.

The complexity difference is not incremental. It’s categorical. eBPF lets you intervene at the exact right point in the kernel’s TCP stack without building infrastructure around it. On other platforms, you build a small VPN just to do what a short BPF program accomplishes.

That said, the TUN approach has one advantage over the proxy it replaced: interception happens at the IP layer, so even apps that ignore proxy settings are covered. eBPF gives the same guarantee on Linux: sock_ops attaches to a cgroup, so any process in that cgroup is covered regardless of how it configures its networking.

Rough edges

Different networks have different DPI behavior. Some use passive DPI that injects RST packets. Some actively drop packets. The TTL needs to be high enough to reach the middlebox but low enough to expire before the server. Default is 8, which works for most networks. traceroute helps you find the right value.
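
If traceroute isn’t available, the same measurement can be scripted: attempt TCP connects with an increasing IP_TTL until the handshake succeeds. A rough sketch for Linux/macOS (illustrative, assuming golang.org/x/sys/unix):

// The first TTL that completes connect() is roughly the server's hop
// distance. The fake's TTL must stay below that, but still reach the DPI.
func hopDistance(addr string) (int, error) {
    for ttl := 1; ttl <= 30; ttl++ {
        d := net.Dialer{
            Timeout: 2 * time.Second,
            Control: func(network, address string, c syscall.RawConn) error {
                var soErr error
                if err := c.Control(func(fd uintptr) {
                    soErr = unix.SetsockoptInt(int(fd), unix.IPPROTO_IP, unix.IP_TTL, ttl)
                }); err != nil {
                    return err
                }
                return soErr
            },
        }
        if conn, err := d.Dial("tcp4", addr); err == nil {
            conn.Close()
            return ttl, nil
        }
    }
    return 0, fmt.Errorf("no TTL up to 30 reached %s", addr)
}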

The DPI requires correct TCP sequence and acknowledgment numbers in the fake packet. Placeholder values get rejected. This is why the seq/ack extraction is not optional. If pcap fails to capture the SYN-ACK, the fake is sent with placeholder values and the DPI ignores it.

DNS-over-HTTPS has a chicken-and-egg problem: gecit redirects system DNS to its local server, but the DoH client needs to resolve the upstream hostname. If the upstream is domain-based (like dns.nextdns.io), it needs to be resolved before gecit takes over DNS. gecit resolves all upstream hostnames at startup and pins them to IPs.
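
The pinning can be as simple as a custom dialer: resolve once while the system resolver still works, then always connect to the cached IP. A sketch (illustrative; dns.nextdns.io stands in for any domain-based upstream):

// Resolve the upstream once at startup, before gecit rewrites system DNS.
ips, err := net.LookupIP("dns.nextdns.io")
if err != nil {
    return err
}
pinned := ips[0].String()

// Every later dial skips DNS and goes straight to the pinned IP. TLS still
// verifies the real hostname, because the request URL keeps the domain.
client := &http.Client{Transport: &http.Transport{
    DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
        _, port, err := net.SplitHostPort(addr)
        if err != nil {
            return nil, err
        }
        var d net.Dialer
        return d.DialContext(ctx, network, net.JoinHostPort(pinned, port))
    },
}}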

On macOS, the network service for DNS configuration is per-interface. If you’re connected via USB tethering, gecit needs to change DNS on the tethering service, not on Wi-Fi. It detects the active service from the default route.

Flatpak apps on Linux run in a sandbox with their own DNS resolution. gecit changes /etc/resolv.conf to point to its local DoH server, but Flatpak doesn’t see this change. The DPI bypass still works because the eBPF program runs in the kernel, below any sandbox. But the DNS bypass doesn’t. Users need to configure DNS manually for Flatpak apps. This is a good example of where kernel-level hooks (eBPF) succeed and userspace changes (DNS config) fail.

gecit is on GitHub. GPL-3.0. Supports Linux, macOS, and Windows.

sudo gecit run

It does one thing. It does not hide your IP address, encrypt your traffic, or provide anonymity. It prevents DPI middleboxes from reading the SNI field in TLS handshakes. That’s it.