So this is an interesting one I can’t figure out myself. I have Proxmox on a PowerEdge R730 with 5 NICs (4 + management). The management interface is doing its own thing, so don’t worry about that. Currently I have the other 4 interfaces bonded and bridged to a single IP. This IP is for my internal network (192.168.1.0/24, VLAN 1). This has been working great, and I have no issues with any containers on this network. One of those containers happens to be one of two FreeIPA replicas, the other living in the cloud. I have had no issues using DNS or anything else for FreeIPA from this internal network, nor from my cloud or VPN networks.
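
For context, the bond/bridge part of my /etc/network/interfaces is roughly the sketch below; the NIC names, bond mode, and address are placeholders rather than my exact config:

```
# Sketch of the bond + VLAN-aware bridge; eno1-eno4, the bond mode, and the
# address are placeholders, not my exact values.
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2 eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Guests on VLAN 1 and the new DMZ guests on VLAN 5 just get their tag set on the guest NIC and the VLAN-aware bridge handles the rest.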

Now, I finally have some stuff I want to toss in my DMZ network (192.168.5.0/24, VLAN 5), so I’ll just use my nice R730 to do so, right? Nope! I can get internet, and I can even use the DNS server normally, but the second I go near my FreeIPA domains it all falls apart. For instance, I can get the records for example.local just fine, but the second I request ipa.example.local or ds.ipa.example.local, I get EDE 22: No Reachable Authority. This is despite the server being queried being the authority for that zone. I can query the same internal DNS server from the internal network or from a different network and it works handy dandy, but not from the R730 on another network. I can’t even see the NS glue records on my public DNS root server.
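
For reference, this is roughly how I’ve been comparing the lookups from the DMZ container, as a dnspython sketch; the server address and zone names are placeholders for my real ones:

```python
# Rough sketch of the lookups I'm comparing, using dnspython (pip install dnspython).
# 192.168.1.53 is a placeholder for the internal FreeIPA/BIND server, and
# example.local stands in for my real zone.
import dns.message
import dns.query
import dns.rcode

SERVER = "192.168.1.53"

for name in ("example.local", "ipa.example.local", "ds.ipa.example.local"):
    # use_edns=0 sends an EDNS query so the server can attach an Extended DNS Error
    query = dns.message.make_query(name, "SOA", use_edns=0)
    response = dns.query.udp(query, SERVER, timeout=5)
    # From the internal network all three come back NOERROR with records; from
    # the DMZ container only example.local works and the other two return
    # SERVFAIL with EDE 22 (No Reachable Authority).
    print(name, dns.rcode.to_text(response.rcode()), len(response.answer), "answer RRsets")
```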

I’m honestly not sure why everything except these FreeIPA domains works. Yes, I have the firewall open for it, and I have added a trusted_networks ACL to BIND and allowed queries, recursion, and the query cache (allow-query, allow-recursion, allow-query-cache) for this ACL. The fact that it only breaks on these FreeIPA subdomains makes me think it’s a forwarding issue, but shouldn’t it see the NS records and keep going? It can ping all the addresses that might come up from DNS, and it shows the same SOA when I query the root of the domain; it just refuses to work for my IPA subdomains. Can someone provide any insight on this, please? I’m sick and tired of trying to debug it.
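
For reference, the ACL side of named.conf looks roughly like this sketch (the subnets are the ones from this post; exact option placement is from memory):

```
// Sketch of the ACL setup described above; subnets are the ones from this
// post and the option placement is approximate.
acl trusted_networks {
    192.168.1.0/24;   // internal, VLAN 1
    192.168.5.0/24;   // DMZ, VLAN 5
    // plus the cloud and VPN ranges
};

options {
    // ...
    allow-query       { trusted_networks; };
    allow-recursion   { trusted_networks; };
    allow-query-cache { trusted_networks; };
};
```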

  • The Stoned Hacker@lemmy.world (OP) · 5 months ago
    I’ve spoken with a colleague who’s more experienced with physical networking (my work is mostly cloud-based), and it seems the issue is that I have a dumb switch in between my server and my managed router/switch, so nothing is crossing VLANs properly. We figured this out because I did a packet capture on my network and ran two DNS queries: one from my machine on my VPN network to the DNS server, and one from the Docker container to the DNS server. Both sent the same query, except my machine got a response and the container did not. I’m a bit skeptical that it’s purely a VLAN issue, but this DNS server hasn’t had any other issues with subnets that aren’t dealing with VLANs, so when you’ve eliminated the impossible, all that remains is the improbable.
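
    In case anyone wants to do the same comparison, the capture was essentially just the following on the Proxmox bridge (the interface name and DNS server address are placeholders for mine), with the two query/response pairs then compared in Wireshark:

    ```
    # Capture DNS traffic to and from the internal DNS server on the bridge;
    # vmbr0 and 192.168.1.53 are placeholders, not my actual names/addresses.
    tcpdump -i vmbr0 -nn -w dns-compare.pcap 'port 53 and host 192.168.1.53'
    ```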