At some point, I wondered—what if I sent a packet using a transport protocol that didn’t exist? Not TCP, not UDP, not even ICMP—something completely made up. Would the OS let it through? Would it get stopped before it even left my machine? Would routers ignore it, or would some middlebox kill it on sight? Could it actually move faster by slipping past common firewall rules?
No idea.
So I had to try.
↫ Hawzen
Okay so the end result is that it’s technically possible to send a packet across the internet that isn’t TCP/UDP/ICMP, but you have to take that literally: one packet.
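For anyone who wants to reproduce the basic experiment, here's a minimal sketch (mine, not code from the article) that sends a packet using IANA's experimental protocol number 253 via a raw socket. The destination is a documentation-only address and root is required:

```python
import socket

# 253 is reserved by IANA "for experimentation and testing" (RFC 3692);
# it is none of TCP (6), UDP (17) or ICMP (1).
PROTO_EXPERIMENTAL = 253

def send_custom_proto(dst, payload):
    # SOCK_RAW with a custom protocol number: the kernel builds the IP
    # header and sets its protocol field to 253. Needs root/CAP_NET_RAW.
    with socket.socket(socket.AF_INET, socket.SOCK_RAW, PROTO_EXPERIMENTAL) as s:
        s.sendto(payload, (dst, 0))  # the port is ignored for raw sockets

try:
    send_custom_proto("192.0.2.1", b"hello")  # 192.0.2.1 = documentation address
except OSError as e:
    print("could not send:", e)  # typically a permission error without root
```

Whether that packet actually arrives anywhere is exactly the question the article explores.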
I think you meant to link to https://github.com/Hawzen/hdp ?
eIPX would not have had the problem of address space. But I don't know.
NaGERST,
I couldn’t find any information on “eIPX”, is it some variant on IPX?
In hindsight, the 32-bit address space (minus a lot of reserved addresses) is really regrettable. 64-bit would have solved a lot of growing pains and connectivity problems – had we known just how limiting 32 bits would be! All these years after IPv6 day, so many of us are still dependent on IPv4.
This isn’t helped by the fact that google has dragged their feet endlessly on supporting DHCPv6 in android’s IPv6 stack. Every IPv6 implementation in existence (apple/ms/linux/bsd/etc) supports this except for android, all because one high ranking google engineer wants the role of dictator on other people’s networks. It’s so disappointing because he’s single-handedly holding back IPv6 adoption in favor of IPv4. The situation is utterly stupid because SLAAC isn’t usable everywhere and it disincentivizes IPv6 upgrades that we desperately need.
https://lostintransit.se/2020/05/22/its-2020-and-androids-ipv6-is-still-broken/
Alfman,
That is true. But in the 1970s we did not have the hindsight that every device would one day be independently connected to the Internet.
Back then, they wanted to connect some campuses, and the main model was mainframes with terminals in labs.
Today even my microwave has an IP address. I am carrying at least two devices that have active internet connections, sometimes more. And even though we have much better suited protocols like Z-Wave, IoT still wants to use TCP/IP instead.
We are too lazy to install hubs.
(Though I finally got around to installing Home Assistant + a Z-Wave dongle on a Raspberry Pi. At least I know my lights won't go out if the Internet is down, or if Samsung no longer plays nice.)
sukru,
Yes, I understand that they didn’t know any better. Even if they could have known, they likely never imagined their solution would continue to be relied upon half a century later. It would have been perfectly reasonable (albeit wrong) to assume that their implementations would be deprecated as they became dated, yet here we are still very dependent on their standard despite the inadequacy in modern times.
It would help if we could send a message back in time to warn them about these deficiencies, but alas that’s not how things work.
Personally I don’t mind IOT using TCP/IP, but I lament not being able to access my hardware locally. This is an unfortunate con of where the industry is at right now.
DHCPv6 is not the standard way of automatically configuring IPv6, it's an optional spec so there's no reason to support it. All of the use cases for DHCPv6 listed are either “we want to stick with what we're already doing and not learn something new” or cases for niche devices where such devices already have DHCPv6 support (eg remote booting). The base Android OS is intended for use on cellphones, and cellphones don't have any of these niche requirements. If you're building an embedded device based on android which is intended for such niche uses then you can add DHCPv6 support to your build.
There are a lot of bad ISPs out there which provide bare-minimal IPv6 support with only a single /64 prefix, which prevents you from creating multiple VLANs (eg for a guest network). If you gave such providers the opportunity they would allocate even more limited blocks, or even allocate just a /128 and expect you to use NAT. It’s only because of Android that these terrible ISPs give out a /64 and not something even worse.
The 32-bit address space was intended as an experimental protocol for the ARPANet; it was never meant to be a production network nor publicly available. IPv6 is the production version for global public use. Vint Cerf gave a talk about this a few years back.
bert64,
It is an optional spec, however your other assertions are false. SLAAC can't be used in some environments, like auditing or subnetting as I already mentioned. Not many people can get a /48 network or even a /60. Realistically many users might be forced to pay for additional ISP plans just to make up for this artificially imposed “/64 networks can't be subnetted” policy, which is just dumb. The result is that some admins genuinely CAN'T use SLAAC, and therefore MUST use IPv4, which doesn't have the problem. This outcome sucks for everyone.
Frankly I find it difficult to customize even my own android devices, much less the devices of all the users I’m expected to support. Furthermore dictating your policy for networks you don’t own is overstepping. It’s not and shouldn’t be up to you.
Do I blame them for only giving me 18446744073709551616 IPv6 addresses? No, that's plenty, but it's so stupid that I can't subnet it normally…this limitation only exists on android. If I want to subnet AND use android then I have to go back to borrowing NAT from ipv4 to implement my ipv6 subnets….ugh, hell no! Regardless of your personal opinion of why SLAAC is better, that's fine, you can do what you want with your network. But a lot of corporate network admins are just going to keep using IPv4 with its crappy mitigations rather than an IPv6 network with crappy mitigations for android. This is a bad outcome but that's what's happened in almost all the enterprise environments I see. Unfortunately the inability to simply upgrade from IPv4 to IPv6 without losing any functionality is largely google's fault.
Businesses have legitimate reasons to subnet with DHCP, and even if you disagree with that we need to be clear that you're not the admin for other people's networks. Acting like a dictator is part of the problem here.
I’m not anti IPv6, but having a google engineer dictate policy on networks they don’t have jurisdiction over is wrong and unfortunately it’s holding back IPv6 deployment in a significant way.
// SLAAC can’t be used in some environments, like auditing or subnetting as I already mentioned.
No reason why you can’t audit with SLAAC.
If you are relying on DHCP for auditing you’re doing it wrong, as devices could easily self assign addresses. If you care about security at all you’ll have a NAC which keeps track of devices, ports and 802.1x auth irrespective of what IP they have, in which case DHCP is irrelevant anyway. Here my devices are recognised by the cert they use to authenticate to 802.1x, the MAC and IP(s) are logged and tagged to the cert. You can spoof your MAC or self assign an IP but it will still be tagged to your cert, and if you don’t have a valid cert you can’t connect at all.
// Not many people can get a /48 network or even /60. Realistically many users might be forced to pay for additional ISP plans just to make up for this artificially imposed “/64 network can’t be subnetted” policy,
Did you ever consider WHY some lousy ISPs provide only a /64?
The standards recommended by all the RIRs say a /56 for home users and a /48 minimum for corporate users.
Even the smallest ISP gets a /32 which is enough to allocate 16 million /56 networks. If you’re running an ISP with more than 16 million customers it’s absolutely trivial to get a larger allocation from the RIRs for free.
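The arithmetic, for anyone who wants to check it – each additional prefix bit doubles the number of networks:

```python
# A /32 allocation subdivided into /56 customer prefixes:
# 56 - 32 = 24 free prefix bits, so 2^24 /56 networks fit inside a /32.
prefixes_per_32 = 2 ** (56 - 32)
print(prefixes_per_32)  # 16777216, i.e. roughly 16 million
```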
These ISPs are being unnecessarily stingy, and are providing the absolute bare minimum they can, which is a /64 because android won’t work with less. What do you think would happen if android would let them get away with a much smaller allocation?
You’d just get a MUCH smaller delegation, and still wouldn’t be able to subnet, wouldn’t be able to use SLAAC at all, and would have a limit on how many devices you can have connected at once.
// Frankly I find it difficult to customize even my own android devices, much less the devices of all the users I’m expected to support.
I’m talking about manufacturers who are already customising android for the devices they sell. There are all kinds of devices running android these days including many types of device that android was never intended for, so if you were building a device that needed features provided by dhcpv6 you could easily include that support. Mobile handsets don’t need anything dhcpv6 offers.
// Do I blame them for only giving me 18446744073709551616 IPv6 addresses? No that’s plenty, but it’s so stupid that I can’t subnet it normally…
As mentioned, if not for android these lousy providers wouldn’t give you a /64, you’d get something much smaller. If the ISP wants to screw you by not following standards and providing the bare minimum they will. The fact android doesn’t work with less than a /64 keeps the minimum at /64 and not smaller.
// this limitation only exists on android. If I want to subnet AND use android then I have to go back to borrowing NAT from ipv4 to implement my ipv6 subnets….
No you just need an ISP which provides a /56 as per the recommendations from all of the RIRs…
Android is not alone here, a LOT of embedded devices also only support SLAAC. We’re talking switches, printers, wireless access points, tv sets etc. Android is just the big one that they can’t ignore.
// I’m not anti IPv6, but having a google engineer dictate policy on networks they don’t have jurisdiction over is wrong and unfortunately it’s holding back IPv6 deployment in a significant way.
Google is not dictating anything, the standard is designed for subnets to be /64 and google have implemented the standard.
bert64,
A secure router can enforce IP assignments. But I do think you're missing the point of assigning IPs. It's not for the benefit of the local subnet, but rather for other subnets where MAC addresses can't be logged, and for compliance reasons you can't let clients pick IPs willy-nilly using SLAAC. It makes it that much harder to track down client activity when IPs aren't assigned.
Again, I'm not interested in telling others how to run their networks. You do you! But it is offensive when others don't extend the same courtesy in the other direction. When someone does try to dictate to other networks outside their jurisdiction, it's a vehement overstep and that's a problem.
That’s only great in theory, but unfortunately that’s not how things have worked out for ipv6 for many people in the real world. Having quadrillions of IP addresses and then not having the option to subnet them to manage your network is really dumb.
The point is kind of moot when artificial barriers only encourage businesses to keep using IPv4 subnets with NAT – yuck. Especially now with the benefit of hindsight, it's pretty damn clear we should have done a better job making IPv6 play better and more transparently with IPv4. Had we done better, IPv6 would have been a no-brainer for everyone. Instead we've got these dumb barriers to adoption.
I’m sure many devices based on android also have this problem. IOT devices based on normal linux and BSD distros are more likely to work fine. Regardless it doesn’t change the outcome: most people who want to subnet will end up using IPv4 & NAT. It sucks.
Google are dictating others' ability to use DHCPv6, using android as their method of coercion. It's like saying apple don't dictate that owners use the app store when they absolutely do. I appreciate the fact that you don't care about the inability to subnet /64 blocks, but that doesn't mean that other people don't care. It's one thing to say what's best for oneself, and I can respect that. But when an individual or company tries to insist they know what's best for others, that's insulting, and that's the dictator aspect I'm talking about.
Pretty sure this account was created to generate link spam and uses an AI bot to generate the post. The comment would look fairly authentic if the link spam didn't give it away – it's in the user profile too.
The author doesn’t appear to know about SCTP, but it would be interesting to test it too.
https://www.geeksforgeeks.org/difference-between-sctp-and-tcp/
Custom protocols should traverse the open internet just fine but NAT translators and firewalls near the edges are probably the biggest problems.
The author made one mistake: using targets that reside in the cloud. The problem is that cloud providers have virtualized networks where packets are mangled and encapsulated in very specific custom ways, and they only implement support for the classic stack, so custom transport-level protocol packets can easily be twisted and become undeliverable.
Windlord,
At a minimum he would need a dedicated IP on both sides. All methods of sharing IPs involve some kind of static or dynamic NAT, which needs the router to be able to interpret the protocols and not just the IP headers. NAT can't work otherwise.
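To illustrate the point (a toy sketch, not any real router's implementation): a NAT translator demultiplexes on transport-layer fields, so a protocol without ports gives it nothing to key on beyond the protocol number itself:

```python
import itertools

# Toy NAT table keyed on (protocol, public port). A protocol that has no
# port numbers leaves the translator with only the protocol number to
# match on, so at most one inside host can use it at a time.
_next_port = itertools.count(49152)
_table = {}  # (proto, public_port) -> (private_ip, private_port)

def outbound(proto, private_ip, private_port):
    """Allocate a public port for a new outbound flow."""
    public_port = next(_next_port)
    _table[(proto, public_port)] = (private_ip, private_port)
    return public_port

def inbound(proto, public_port):
    """Look up where an inbound packet should be forwarded; None = drop."""
    return _table.get((proto, public_port))
```

With a made-up protocol there is no port field to rewrite, so this whole mechanism falls apart, which is what I'd expect the author ran into.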
In the early days of IPSEC I vaguely recall that only one user could use IPSEC at a time and the router would simply route the packet to the last known address. I wonder if the author experienced something like this with his own protocol; it might explain his odd observations. Resetting the router and trying again would reveal router interference.
The only “downside” is that most end users don’t have static IP addresses.
This makes us, the public, into perpetual consumers, as we can no longer reliably serve content, but only access cloud services within the limits they allow us.
If they no longer deem a service worthy, they can take it down – even for simple things like a Call of Duty network match.
sukru,
Well, there’s static versus dynamic IPs, but even dynamic IPs aren’t really too big a problem. Dynamic IPs can still easily be used with dynamic DNS. The real connectivity killer is shared IPs using NAT.
This has forced engineers to redesign networking applications around centralized servers even in cases where doing so adds latency, inefficiency, and cost. NAT can break all kinds of applications such as SIP, P2P, VNC, SSH, security cameras, IOT, games, etc. NAT forces many away from owner-controlled options and towards centralized providers. This came at great expense to consumer autonomy.
NAT imposes other tangible costs on top too. I personally found that on my phone I needed my VPN to send 3 pings per minute to reliably keep carrier NAT sessions open. This may not seem that bad, but ~2000MB per month just to keep a notification channel alive can really eat into your mobile quota not to mention the drain on battery life.
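For illustration, the keepalive amounts to something like this (a hypothetical sketch, not my VPN's actual mechanism, which used ICMP pings rather than UDP):

```python
import socket
import time

# Hypothetical NAT keepalive: send a tiny datagram every 20 seconds,
# i.e. 3 probes per minute, so the carrier-grade NAT doesn't expire the
# UDP mapping while the tunnel is otherwise idle.
def nat_keepalive(sock, interval=20.0, probes=3):
    for _ in range(probes):
        sock.send(b"\x00")  # one-byte probe; any traffic refreshes the mapping
        time.sleep(interval)
```

The probe itself is trivial, but multiplied by every minute of every day it adds up in both data and battery.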
I look back to the dialup days when we had unique IPs and ironically enough multiplayer that didn’t rely on connecting to centralized data centers was so much easier to do back then. We were still able to take it for granted because we hadn’t run out of IPs yet. It’s so sad that the internet lost the property of end to end connectivity without hacks.
Many mobile networks support IPv6 so you can avoid the brokenness of NAT.
Legacy IP is simply not designed to scale, so NAT is required with all its drawbacks. IPv6 solves the problem.
bert64,
Everyone should be in agreement that NAT is an ugly hack and that IPv6 fixes that address shortage. However many networks still don’t support IPv6. My ISP doesn’t even offer it at home so if I want to VNC or SSH to a computer at home then IPv4 is the only way, unfortunately.
Also, because of the aforementioned IPv6 problems with android, if my ISP did offer an IPv6 /64 and I wanted to continue subnetting my network as I do under IPv4, then android literally forces me to use NAT with IPv6. Hopefully you can agree with me this is a bad outcome. Yet it's unavoidable: if I don't use NAT then 1) I'm forced to give up android or 2) I'm forced to give up my ability to manage my network with local subnets on IPv6. Obviously this is not an intrinsic limitation of IPv6, it's only an android-specific problem. No other IPv6 platform has this problem.
IPSEC doesn’t work with NAT for this reason, with ESP traffic the payload is encrypted so the gateway has no way to match the ports, and with AH traffic the NAT rewriting the packet would invalidate the signature and cause the packet to be dropped.
All a NAT gateway can do is dump all traffic to a single device, hence why only one can work at a time.
Now you have the IPSEC-NAT-T kludge, which instead of using ESP just tunnels everything over UDP.
bert64,
Yes, this is what made me wonder if the author was witnessing the same mechanism with his own protocol. A NAT router might use the same generic logic for all “undefined” protocols. It’s not something I’d normally work with so this behavior would have to be tested.
I guess calling it a kludge or not may be in the eye of the beholder. After all, many other VPN solutions like openvpn use UDP natively and it's not considered a kludge there. Regardless of the IP situation, TCP/UDP ports are quite useful as differentiators for multiple concurrent sessions. Otherwise, without ports, every session would need its own IP address. Heck, IPs with no ports might be an interesting idea, but engineers went with ports.
No, networks in cloud providers use IP packet mangling with specific optimizations for the common protocols (tcp, udp, etc…) which are not guaranteed to respect unknown (to them) protocols that you might write. That is regardless of whether or not the author has static IPs, as this is not related to sharing IPs but rather to how virtualized – or better, software-defined – networks operate.
At the end of the day, if you send a packet to/from a cloud provider machine, it has to go through these networks. Also, on top of that, some protocols are deprioritized.
Windlord,
It’s not clear to me what you are saying “no” to. Unless they’re doing NAT or firewalls, routing does NOT presume TCP or UDP. It occurs at the network layer of the OSI model…
https://www.imperva.com/learn/application-security/osi-model/
Obviously if you don’t have a host with root access and a dedicated IP it won’t work. And obviously if they use a firewall they can block it too. While I have used providers with a firewall, these are often an up-charge. Without a firewall higher level protocols pass through undisturbed.
To be consistent, I said “dedicated IPs”. If you're using shared hosting (“cloud hosting” if we must use marketing jargon) it won't work because you don't get a dedicated IP or root access. For shared hosting the provider uses a reverse proxy and/or application server (I use nginx to do both) to share the public IP between customers.
https://en.wikipedia.org/wiki/Reverse_proxy
I obviously can't guarantee it, but if you have root access and an unfirewalled dedicated IP, there's a good chance it will work. Not for nothing, the author reported it worked between AWS instances. If all you're saying is that it won't work with shared hosting, we can agree, but that doesn't really contradict what I said originally, since shared hosting customers don't get dedicated IPs, which is a core stipulation.
Yes and no…
For legacy IP the cloud providers have no choice because it simply can’t scale so they’re forced to use various kludges to conserve address space and it’s quite a hideous mess.
For IPv6 there's no reason they couldn't implement it in a classic routed model and forward everything through statelessly if you turn off the firewall.
Exactly.
For example, using IPsec directly over IP instead of wrapping it in UDP (so-called NAT traversal) is very common. Of course this requires a real internet connection and does not work via CGNAT middleboxes, hence why workaround technologies have been built over the decades.
In the Netherlands it was quite common to have other protocols on top of IP in the earlier days. The first IPv6 tunnel from my ISP was IPv6 in IPv4, so the extra protocol under IPv4 was actually IPv6.. A dedicated tunnel server on the other end would forward the packet on. And the first bit of the path was tunneled rather specifically as well. There was an ADSL modem that talked PPPoA (PPP over ATM), but was relatively dumb. It was typically hooked up to a system that used PPTP (essentially PPP over IP). These PPTP packets used a protocol “GRE” which came after the IP header.
None of this ever went beyond the ISP level though. But the total layering was essentially IPv4 -> PPTP/GRE -> IPv4 -> IPv6 -> TCP/UDP/ICMP/etc for any IPv6 traffic in the early days, when viewed over the link to the ADSL modem.
As long as he uses IPv4 (or later IPv6) and a well-known protocol it should be routed. Firewalls, though, may block unknown protocols; that's their job.
So, yes, the guy discovered something a google search would reveal. Interesting for him, but is it “OSNews”?