“The New York Times this morning published a story about the Spamhaus DDoS attack and how CloudFlare helped mitigate it and keep the site online. The Times calls the attack the largest known DDoS attack ever on the Internet. We wrote about the attack last week. At the time, it was a large attack, sending 85Gbps of traffic. Since then, the attack got much worse. Here are some of the technical details of what we’ve seen.”
This is so idiotic, I don’t even know where to begin. What do these f*ckers think they will accomplish? That’s right: NOTHING!
Do these losers think that just because they don’t agree with a site, they are allowed to bring down the whole fucking ‘net?!? To me, this is the pinnacle of selfishness. I hope these scumbags are tracked down and beaten to within an inch of their lives. >:(
The effect on the net as a whole is very localized. The problem has shown that the distributed nature of the internet can shield the rest of the network from the affected areas. Most places on earth haven't seen any disruption at all from this DDoS.
But I agree, it is *rather* pointless and not very bright.
This is closer to a publicity stunt than anything else (just like the one Yahoo pulled with Summly).
Although I refrain from linking to Gizmodo, there have been some good articles coming out of there lately.
http://gizmodo.com/5992652
That's such a dumb article. They've cherry-picked statements to suit their own bias and missed the crux of the official reports that have been made.
I mean, sure, if you just read clickbait headlines like “a DDoS attack made the internet crash” then you're bound to oppose those reports. But read the content a little deeper and you'll get the true picture:
1/ This wasn't just a botnet; it used open DNS resolvers to amplify forged UDP packets to roughly 10 times their original size. Think of how an HTTP request works: the data returned is greater than the data sent. This attack works on a similar principle, except it exploits a weakness of UDP (the ease of spoofing source IP addresses, so the server sends its reply to a different destination than the one that actually sent the packet) combined with a weakness of open DNS resolvers (i.e. accepting UDP queries with no maximum bandwidth limit).
2/ The reason most users didn't see much impact is the way the internet works. Think of it like a power grid: if one power station goes down, the others pick up the slack. But if your local substation goes down, local residents will be without power. The only people affected were those local to the worst-hit internet exchanges.
3/ Following on from the previous point, the claim that this was “just a Dutch problem” is an idiotic statement that demonstrates zero knowledge of how the internet works. Peers buy and sell bandwidth from each other and re-route traffic to avoid black spots. An attack of this magnitude meant that Cloudflare had to distribute their traffic globally. Furthermore, that statement also overlooks the fact that, when it became obvious Cloudflare themselves had the resources to manage the initial attacks, the attackers started flooding Cloudflare's direct peers and exchanges instead (a bit like cutting all the main roads into a city so it starves, instead of invading the city directly).
4/ Of course Amazon's cloud services were green. Amazon wasn't being hit. However, many of Cloudflare's services had been impacted. All week I'd been greeted with sporadic static pages from Cloudflare as their network creaked. Then again, I live close to one of the heaviest-hit internet exchanges and direct peers of Cloudflare.
The fact that internet users felt so little impact is a great testament to how sophisticated the design of the internet is. We have a global infrastructure that can re-flow traffic dynamically and cooperatively when weaknesses are attacked. I can't think of a single other man-made creation that is as flexible and robust on as grand a scale as the internet. And when you consider how old many of the design decisions are, and how it's evolved and been built up over the decades, it really shouldn't work this well!
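For what it's worth, the arithmetic behind point 1 is easy to sketch. The packet sizes and attacker bandwidth below are illustrative assumptions, not measured figures from the attack:

```python
# Back-of-the-envelope arithmetic for a DNS reflection/amplification attack.
# All figures are assumptions for illustration only.

QUERY_SIZE = 64       # bytes: small spoofed DNS request (assumed)
RESPONSE_SIZE = 640   # bytes: large answer from an open resolver (assumed ~10x)

amplification = RESPONSE_SIZE / QUERY_SIZE   # factor each resolver multiplies by

attacker_gbps = 30                            # bandwidth the attacker controls (assumed)
victim_gbps = attacker_gbps * amplification   # traffic arriving at the victim

print(f"amplification factor: {amplification:.0f}x")
print(f"{attacker_gbps} Gbps of spoofed queries -> ~{victim_gbps:.0f} Gbps at the victim")
```

The point being that the attacker only has to pay for a fraction of the bandwidth the victim has to absorb.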
You mean like the one on Cloudflare's blog, the one being linked here? Or maybe you mean the NY Times article?
Any of them.
You have to remember that headlines are just there to get readers – they're not intended to be 100% factually accurate (I agree it's crap how they're misused, but that's a whole other debate).
You can't expect to get the full story from a headline; just like the old adage that you can't judge a book by its cover.
Still, saying that “it almost broke the internet” is a big exaggeration and pretty much sensationalist nonsense. There was really no risk that the internet would “break” globally.
It's pretty damning for the competency of the IT industry that there are still enough open DNS resolvers in 2013 for this to be feasible, though. It's far from rocket science to configure a resolver properly, and if you can't even do that, well, I have bad news for you: you're grossly incompetent.
The only thing this attack could really bring down is the spam protection of organizations that rely on Spamhaus's (excellent) DNS blacklists. And that's precisely what the crooks behind this are after: taking down the spam protection that bites into their spammer revenue streams.
It's a war between spammers and spam protection services (and Cloudflare, who are a DDoS protection firm). A spectacular one, nonetheless, with possible implications for many of us (who doesn't hate spam?), but the internet as we know it is *not* in danger.
Take care!
You’re both correct and wrong.
This attack wasn’t intended to bring down the net; just Cloudflare (and it didn’t even succeed at that either).
However, what this attack does highlight is the seriousness of unprotected open DNS resolvers, which were the servers exploited to generate the ~300Gbps of DDoS traffic. The repercussions could be serious, as it means criminals no longer need large botnets to take smaller organisations offline. All they need now is a modest amount of bandwidth to flood poorly configured DNS servers with forged UDP packets, which are then multiplied up at the name servers by a factor of 10.
While DDoS attacks will always be a threat, open resolvers make it easier than ever to disrupt services and this latest story is basically a massive advert to anyone considering a denial of service attack.
I just hope the seriousness of this is taken on board and action is taken to mitigate the effectiveness of this attack. There are a few different approaches: one is to patch the name servers themselves, but personally I'd rather see ISPs, peers and exchanges add a reverse-path check to their UDP forwarding – only forwarding UDP packets if the attached source IP address can be routed backwards, thus effectively checking whether the sender matches what the UDP packet claims.
Apologies for the crude explanation. I'm not a networking expert (though there is overlap between that and my field in IT), so for that last part I'm having to trust online sources as being accurate.
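For the curious, the check I'm describing (strict reverse-path verification, roughly) can be sketched like this. The routing table, prefixes and interface names are all made up for illustration:

```python
import ipaddress

# Toy routing table: prefix -> outgoing interface (assumed values).
ROUTES = {
    ipaddress.ip_network("203.0.113.0/24"): "eth0",   # customer LAN
    ipaddress.ip_network("198.51.100.0/24"): "eth1",  # upstream peer
    ipaddress.ip_network("0.0.0.0/0"): "eth1",        # default route
}

def route_lookup(addr):
    """Longest-prefix match, as a real router would do."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in ROUTES if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

def strict_urpf_accept(src_addr, in_iface):
    """Strict reverse-path check: accept a packet only if we would
    route traffic *to* its source via the interface it arrived on."""
    return route_lookup(src_addr) == in_iface

# A packet claiming a customer source must arrive on the customer port:
print(strict_urpf_accept("203.0.113.7", "eth0"))  # legitimate -> accepted
print(strict_urpf_accept("203.0.113.7", "eth1"))  # spoofed -> dropped
```

A real router does this lookup in hardware against its live routing table; the sketch only shows the logic.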
That’s actually a solution to what both the NYT article and one of the commenters on the CloudFlare blog identified as the real problem… that the ‘net is full of routers that perform none of the sanity checks which would block such spoofed packets, regardless of what daemon we discover to be exploitable next week.
I’m no expert either, but your solution sounds more complicated (and, hence, more CPU intensive on the routers) than what they were proposing. It sounded like they were just proposing plain old source-interface checking so, when the attacker sends a spoofed packet to a DNS server, one of the border routers along the way drops it for arriving on the wrong interface.
Also, I believe it was the CloudFlare commenter who pointed out that this isn’t the first attack of this kind. Before spoofed UDP flooding via DNS, there was spoofed SYN flooding.
We’re talking about the same check. What I was describing was the process behind “plain old source-interface checking”.
Totally. But AFAIK we've never seen the same degree of amplification before (e.g. traffic being multiplied by as much as 10x), not even with SYN flooding. Which is where attacking open resolvers comes into play.
I might be wrong on this though so welcome any corrections
Laurence,
“We’re talking about the same check. What I was describing was the process behind ‘plain old source-interface checking’.”
It doesn't seem like source interface filtering is a great solution to me, because on the internet there's technically no requirement that packets come in on the same interface they'll go back out of. In multi-homed setups this can even be explicit. Load balancers might do the same thing. But even in less exotic cases, internet routers can switch paths dynamically as they re-run their shortest-path algorithms. I don't know just how frequently this happens, but it's the reason UDP packets can arrive out of order.
So do you agree that source interface filtering could negatively affect legitimate users?
It's a DNS problem, so I feel a DNS fix should be used instead of modifying our routers. It's much easier to update DNS software than a router. My understanding is that many commercial routers achieve their performance in hardware and become underpowered if too many packets get punted to the software stack.
For core switches, you'd be right. But from what I've read, that method could work for routers at the edge of networks. That's just what I've read, though; you might well be right.
My guess is it would either work well or not at all. I’m by no means a networking expert though so I’ll have to take the lead from someone else.
I’d argue it’s more a problem with the UDP datagram than DNS specifically. DNS just exposes that weakness of UDP. So if we just fix DNS then I’m sure someone will find another UDP service that can be exploited in the same way (possibly games servers?)
Laurence,
“I’d argue it’s more a problem with the UDP datagram than DNS specifically. DNS just exposes that weakness of UDP. So if we just fix DNS then I’m sure someone will find another UDP service that can be exploited in the same way (possibly games servers?)”
I guess you could put it that way. It's true that UDP does nothing to confirm the sender's IP, but to fix it at this level would mean converting UDP to a stateful protocol with a bidirectional handshake.
Among all the IP protocols already invented, we probably already have something that would work, but it does little good until these actually see widespread support.
http://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol
http://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol
So, the easiest fix in this case is probably just running DNS over TCP.
I did read someone else suggesting that, and at the time I didn't take their suggestion all that seriously because of the disruption it could cause. But thinking about it again, it's probably a good long-term goal.
And with that, I think you’re probably right that the best solution is at the name server end rather than trying to patch all the edge routers.
It's been an interesting discussion, this. Thanks for your insights.
Source filtering makes sure that only packets with a valid source come in on an interface. A valid source means an IP address that has a route via that interface. This is an incredibly simple yet effective way to reduce spoofing on customer-facing equipment and is, as I've said previously, already done by most ISPs.
While DNS has problems, this is not one of them. This is simply a problem of misconfigured DNS servers, and the only effective way to stop it from happening is to not screw up the configuration.
Thankfully not everyone uses underpowered Cisco gear
Soulbender,
“Source filtering makes sure that only packets with a valid source come in on an interface. A valid source means an IP address that has a route via that interface. This is an incredibly simple yet effective way to reduce spoofing on customer-facing equipment and is, as I've said previously, already done by most ISPs.”
They apply egress filtering to make sure their customers don't send out source IPs that are external to their network.
They may apply ingress filtering such that the internet backbone cannot send them packets that look like they were sourced from within the ISP. But it's very unlikely that they apply ingress filtering that discriminates between IPs from various peer interfaces, since that would break a lot of internet traffic. It's up to the source routers to route the traffic to the destination; the destination has no say in which interface will receive traffic for a given IP.
ISPs cannot do anything to detect spoofing outside of their network, which is part of the problem.
“While DNS has problems this is not one of them. This is simply a problem of misconfigured DNS servers and the only effective way to stop this from happening is by not screwing up the configuration.”
Once you read my other response, it should clarify that it is a DNS problem (or even a “UDP” problem if you want to view it like Laurence did).
And that’s the only thing you need to do in order to prevent spoofing from your clients. Very simple, very effective.
Not really. It’s quite possible to filter without breaking the internet traffic but it does require some work. You will always know what prefixes your peers announce so you can set things up to only ever accept packets with an IP in those prefixes from a peer.
True, but there's no scalable and feasible solution for this. Tracking this would create massive overhead, and for what? Just because some people can't do their jobs?
No, it isn't. It's a problem combining misconfigured DNS servers and ISPs not filtering their customers. DNS itself is just doing what it was supposed to do. The advantage of UDP over TCP (or other connection-oriented protocols) is that it scales much, much better. The downside is that it expects ISPs not to be schmucks.
Soulbender,
“And that’s the only thing you need to do in order to prevent spoofing from your clients. Very simple, very effective.”
At the expense of features like multi-homing.
“Not really. It’s quite possible to filter without breaking the internet traffic but it does require some work. You will always know what prefixes your peers announce so you can set things up to only ever accept packets with an IP in those prefixes from a peer.”
Those announcements are meant to indicate the paths for routing destination IPs, not for filtering source IPs. While there is nothing stopping you from filtering this way, at the very least it will cause network disruptions while new optimal paths are propagating, and it could cause persistent problems in asymmetric scenarios. For example: your organization has two peers, Hurricane Electric and Cogent. A foreign router determines Hurricane Electric's network is the best way to route an IP to you, but your router determines that Cogent is the best way to send the packet back. If you filter out this IP on Hurricane Electric's interface, then you lose legitimate traffic. You have no way to determine whether packets on either interface are spoofed or not.
“No, it isn't. It's a problem combining misconfigured DNS servers and ISPs not filtering their customers. DNS itself is just doing what it was supposed to do.”
Yes, it's doing what it's programmed to do, so it's not a “bug” or misconfiguration. But it causes a problem in this case because the sender isn't verified. That IS a vulnerability, as this incident clearly demonstrates. Any UDP protocol is vulnerable if it responds with lots of traffic without first performing a handshake, so in a way DNS is guilty even though it works as designed.
No, not at all. Ingress and egress filtering does not prevent multi-homing.
Those are the sources that should only ever arrive from a certain peer. They indicate exactly what I need to know: the valid source IP addresses for a peer. For example, if I have been assigned 1.2.3.0/24 and I peer with HE, announcing 1.2.3.0/24, they should only ever accept packets from me with a source in 1.2.3.0/24.
No.
Why would I “filter out this IP”? Ingress and egress filtering works perfectly in this scenario (I know, I've done it many times). The packet coming in via HE, addressed to my network, has a valid ingress source, and the packet going out via Cogent is from my network and has a valid egress source. Nothing is being filtered; spoofing is prevented.
Obviously I will not accept sources from either peer that are inside my network, nor will I accept destinations that are not in my network. Neither provider should accept packets from me that are not sourced from my network.
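Those rules boil down to simple prefix checks. A rough sketch, assuming a 1.2.3.0/24 prefix like the example above (everything else here is made up for illustration):

```python
import ipaddress

MY_NET = ipaddress.ip_network("1.2.3.0/24")  # the prefix I announce to both HE and Cogent

def accept_ingress(src, dst):
    """Accept a packet arriving from either upstream: the destination must be
    mine, and the source must NOT claim to be inside my own network."""
    return (ipaddress.ip_address(dst) in MY_NET
            and ipaddress.ip_address(src) not in MY_NET)

def accept_egress(src):
    """An upstream should only accept packets from me sourced in my prefix."""
    return ipaddress.ip_address(src) in MY_NET

# Asymmetric routing still works: a reply entering via HE and leaving via
# Cogent passes both checks, but a packet spoofing my own network fails.
print(accept_ingress("8.8.8.8", "1.2.3.10"))   # normal inbound traffic -> accepted
print(accept_ingress("1.2.3.99", "1.2.3.10"))  # external packet spoofing my net -> dropped
print(accept_egress("1.2.3.10"))               # my own traffic -> accepted upstream
print(accept_egress("9.9.9.9"))                # me spoofing someone else -> dropped
```

Note that neither check cares which of the two upstreams the packet traversed, which is why asymmetric paths aren't broken.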
“No, not at all. Ingress and egress filtering does not prevent multi-homing.”
Depends on the type of multihoming. One type is to NAT traffic across different IPs, but that might be worse than shared-IP multihoming for certain things, especially VOIP, where you really want to use one IP and let the network determine the shortest path on its own. Admittedly this is probably very uncommon outside of the enterprise, and not very useful without the ISP's cooperation in advertising your own IP routes through them.
My understanding is that many services, including the root nameservers, employ this kind of shared-IP-address multihoming today in order to increase redundancy and decrease latency.
“Why would I ‘filter out this IP’?”
I may have misunderstood you, then; I thought you were advocating blocking externally generated inbound traffic from your peers based on source IP, which is the benefit I'm disputing.
http://www.osnews.com/thread?557011
If I misread that it could explain our disagreement
Yes, that could be done if I'm peering at an internet exchange, for example. In that case I most likely know from my peers' announcements what source IPs they should have, and I could reliably use that information for filtering.
If, on the other hand, I'm an end user and only have upstream peers, filtering on source IP would be mostly futile, with the exception of not accepting external traffic with a source from within your own network.
Soulbender,
“Yes, that could be done if I'm peering at an internet exchange, for example. In that case I most likely know from my peers' announcements what source IPs they should have, and I could reliably use that information for filtering.”
I doubt that’d ever be very useful in practice because it’s going to be very unlikely to find any IP addresses that Cogent can route to and HE cannot. Both networks could legitimately send traffic to and from Spamhaus IPs under various network scenarios. Your router, as a recipient of a packet that appears to be from Spamhaus, is fundamentally unable to determine whether the packet is from Spamhaus or not based on the interface it arrived from. This is the basis of the internet’s redundancy and is usually a good thing.
No, but at an IX there are other players than the big providers, and you might not want to accept any source from those. That said, the proper place to prevent spoofing is at the customer-facing equipment. Once that is done, you've solved the spoofing problem.
Soulbender,
“No, but at an IX there are other players than the big providers, and you might not want to accept any source from those.”
Even so, I don't imagine smaller peers would accept and route traffic for IPs they're not advertising. Filters at the destination are likely redundant. Maybe it's not a bad thing to do, but it wouldn't seem likely to help in a case like this either.
“That said, the proper place to prevent spoofing is at the customer-facing equipment. Once that is done, you've solved the spoofing problem.”
I agree, if you could trust every border router to do source-IP filtering for their own authoritative networks, then spoofed packets would never make it onto the internet. Ultimately, the best we can do is limit the size of the circle of trust by keeping most casual users on the outside of core routers and scrubbing the packets that get in for things like spoofed IPs. This is still a big challenge when we consider that the internet transcends national and administrative boundaries.
A) This should be done on the customer-facing equipment, not on border routers.
B) Most ISPs already do this. Really.
C) You don't need to spoof the source to make use of open DNS resolvers. That is the crux of the problem: this attack is created by “valid” packets.
Soulbender,
Depending on which article you read, cloudflare was talking about two types of DDOS attacks.
You are talking about recursive DNS resolvers, which can be abused without spoofing. But to be fair, this particular attack WAS based on spoofing the source IP to be the victim's, so that the large DNS responses (rather than the small requests) ate up the victim's bandwidth. That's how the bandwidth multiplication was achieved.
http://blog.cloudflare.com/the-ddos-that-knocked-spamhaus-offline-a…
“The basic technique of a DNS reflection attack is to send a request for a large DNS zone file with the source IP address spoofed to be the intended victim to a large number of open DNS resolvers. The resolvers then respond to the request, sending the large DNS zone answer to the intended victim. The attackers’ requests themselves are only a fraction of the size of the responses, meaning the attacker can effectively amplify their attack to many times the size of the bandwidth resources they themselves control.”
A non-spoofing recursive DNS attack is possible too, but it’s not clear that this could have achieved the amount of bandwidth multiplication they got by spoofing the victim’s IP. Let me know if I’m overlooking something.
Ah, I see. That's disturbing. Spoofing should be almost impossible today, but I guess there's no accounting for incompetence.
All the technical means to prevent this already exist, yet it still happens.
Laurence, you are implying here that this is a new attack vector (I understand your statement like this). It definitely isn’t.
CloudFlare:
http://blog.cloudflare.com/deep-inside-a-dns-amplification-ddos-att…
– “a known problem at least 10 years old.”
The number of Open DNS resolvers that can be used in a DNS amplification attack is actually in decline.
This isn't going to happen anytime soon. Adding such checks to the current infrastructure would reduce the capacity of backbones by a few orders of magnitude. “Backbone routers” are optimized to route tons of traffic, but only blindly. Adding checks would cripple their routing capacity.
Such checks (anti-spoofing measures) can only be implemented at the “outskirts” of the internet, not in its core. Admins of small networks are responsible for such security measures, but since these attacks use their infrastructure without damaging it much, there is little incentive to do so.
It is a new attack vector in that it's only really been exploited like this in recent years. Or at least, I wasn't aware of hackers targeting open resolvers for DDoS attacks until recently, so I'm assuming it wasn't a commonly used technique before. If you know otherwise, I'll happily accept the correction.
I wasn't implying that this is a new vulnerability, though; just that this existing vulnerability is getting wider exposure (advertising), so this specific type of exploit is becoming more frequent.
Oh, I'm well aware of that. This is why there's a coordinated effort underway to identify vulnerable name servers and work with the hosts to get them patched (i.e. it's the most realistic solution to this immediate concern).
My comment regarding the router checks was what I'd prefer to see; “ideal world” thinking, etc. But as that post was quickly becoming my second essay in this thread, I decided to cut some detail out for the sake of getting back to work.
[edit]
reworded a lot of this as it really wasn’t clear what I was trying to say.
Well here you might be right.
This is a type of reflected DDoS (http://en.wikipedia.org/wiki/Denial-of-service_attack#Reflected_.2F…), of which there are many. They were “all the rage” in the late ’90s (smurf attacks, DC attacks, anyone?). Whether DNS amplification attacks specifically are something new, especially at this scale, I don't know. But they're just a variation on the same basic concept.
I've known about DNS amplification attacks for ~3 years, and by quickly googling around I found that in 2006–2007 they were considered new (http://www.theinquirer.net/inquirer/news/1015743/dns-amplification-…, http://securitytnt.com/dns-amplification-attack/). I really thought this was older.
So this may be relatively new, but it's yet another form of reflective DDoS.
I'm aware of that. But you're still missing my point: previous reflective attacks didn't amplify requests by nearly the same ratio this one does, and that's the crux of the issue. Previously, reflective attacks were largely used for anonymity (with minor amplification as a bonus). Here the reflection is done specifically for amplification, with anonymity as a fortunate (for them) side effect.
Again, you're arguing points that were never in dispute. I really don't know how many times I have to reiterate that I'm aware the concept is an old one before you move off that moot point. You're like a dog with a bone :p
This is already done by any competent provider, and it is why spoofing is much less common today than it used to be. It's not feasible to do this between “Tier 1” peers, though (due to, among other things, asymmetric routing), so it's important that providers closer to the customer do this properly.
This wouldn't solve the problem with DNS amplification attacks, though, since the source was valid and not spoofed. The only way to effectively stop these attacks is to not have open DNS resolvers.
The source was spoofed. That's how the amplification attack works:
1. Send a spoofed UDP packet to a DNS server.
2. The server replies to the spoofed UDP packet.
3. The target goes down, because the spoofed UDP packet has the target as its source IP.
It's a bit like me pinging Google while pretending to be your IP: Google then sends its reply to you instead of me. Except we're talking several orders of magnitude more bits being exchanged than in a simple ICMP echo request, and the DNS server replies with more data than it received as part of the domain name lookup request.
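To put rough numbers on that asymmetry, here's a minimal DNS query built by hand with Python's struct module. The response size is an assumed figure for a large answer from an open resolver, not a measurement from this attack:

```python
import struct

def build_dns_query(name, qtype=255):  # 255 = ANY, the type favoured by these attacks
    header = struct.pack(">HHHHHH",
                         0x1234,      # transaction ID
                         0x0100,      # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question, no answer/authority/additional
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

query = build_dns_query("example.com")
print(len(query), "byte query")  # the request is only a few dozen bytes

ASSUMED_RESPONSE = 3000  # bytes: large zone/ANY answer from an open resolver (assumed)
print(f"~{ASSUMED_RESPONSE / len(query):.0f}x amplification")
```

Compare the size of that request to a response carrying a large zone answer and the 10x-or-worse multiplication falls out naturally.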
“Broke the internet”? What bullsh*t.
How could you even break the internet, when it was designed and created precisely so that it would not break in such situations? Decentralisation and many routes.
Please, don’t spread the nonsense. It could possibly block some popular sites, but NOT “break” the internet.
And if you claim it had the potential to “break” the internet, then I claim I have all of the internet written onto my floppy disk …
marcp,
“Please, don’t spread the nonsense. It could possibly block some popular sites, but NOT “break” the internet.”
Granted, the headline was exaggerated and overgeneralized. But DDoS attacks, while boring and uninteresting, are often very effective. Much like an arms race, the side with the most bandwidth wins a denial of service attack. The *only* reason this attack failed is that CloudFlare had enough bandwidth to withstand it. Most DoS victims fall very easily. The internet does not do anything to protect victims from DDoS today.
“How could you even brake the internet, when it was designed and created just so it would not break in such situations? de-centralisation and many routes.”
The internet was designed to be resilient in the face of outright outages (deliberate or accidental), but it actually doesn't do very much to protect against IP-based attacks. Maybe core DNS/BGP attacks would be more interesting to you?
I remember the news surrounding this following incident:
http://www.techrepublic.com/blog/networking/black-hole-routes-the-g…
This was an accident and not an attack, but for all intents and purposes a malicious attack against “the internet” could be achieved the same way. The BGP protocol, which tells all backbone routers where to route IP traffic, is inherently vulnerable to peers lying about IP connectivity. The administrators of such peers have the power to blackhole IPs at will (even IPs that aren't traversing their networks).
Presumably anyone guilty of doing this would be found out and eventually kicked out of the BGP peering, but it's a strong example of how the internet backbone is fundamentally built on *trust* in order to operate.
Sure, but there’s no way to be anonymous when you do this.
As soon as other providers figure out who is doing the blackhole routing, your little take-over-the-internet plan is toast, and trust me, it would not take them long to find you.
This threat is also diminished by the fact that any serious peer will limit the prefixes they will accept from you, usually accepting only the prefixes you've been assigned.
Only if by *trust* you mean contracts. You can't just establish a BGP peering with anyone; it requires you to establish a business relationship with those you peer with, and unless you're a “Tier 1” player, your peers will only accept the prefixes you've been assigned.
Soulbender,
“Sure, but there's no way to be anonymous when you do this. As soon as other providers figure out who is doing the blackhole routing, your little take-over-the-internet plan is toast, and trust me, it would not take them long to find you.”
I'd like you to give this deeper thought, more like a hacker would. For example, a malicious country could advertise routes that are cheaper than they truly are, to get foreign routers to route traffic through them. Once they get the packets, they may be able to complete the circuit to the legitimate destination, but now they have not only the ability to snoop on packets, but also to filter them using much more discriminating deep packet inspection, and even to perform targeted injections. It would be very hard for any single organization to prove that BGP routes are being manipulated for nefarious purposes.
“Only if by *trust* you mean contracts. You can't just establish a BGP peering with anyone; it requires you to establish a business relationship with those you peer with, and unless you're a “Tier 1” player, your peers will only accept the prefixes you've been assigned.”
Well, consider real-world scenarios where A and B are friends and B and C are friends, but A and C are enemies. A can abuse the internet's trust relationships to harm C, and vice versa.
Edit: I'm just theorizing here, but if anyone knows of cases where this has actually happened, please jump in! I think subtle BGP manipulations could be achieved without detection, but large changes would give rise to latency and routing bottlenecks such that someone would have to investigate the cause.
This would be incredibly difficult to orchestrate (for a number of technical and practical reasons), but let's say someone did. This is why we have end-to-end encryption.
Other than by the massive amount of suddenly appearing transit traffic to and from the country?
A and B are peers, and B and C are peers, but A and C are not. A could only announce C's prefixes (and vice versa) if B were in on it, and if that's the case, well, you have bigger problems than BGP and it's not something that can be solved technically.