Linode is suffering on-going DDoS attacks (linode.com)
209 points by empressplay on Dec 29, 2015 | 177 comments



I used to run a hosting company similar to Linode back in the day, and DDoS attacks were the most annoying thing ever.

The main reasons DDoS attacks exist are poor security and a lack of cooperation between ISPs. Inadequate security on desktops (usually Windows) makes it possible to build large botnets.

Lack of cooperation between ISPs makes it very hard to track down the source of the DoS. Very often the DDoS isn't as distributed as it may seem - it can be just a couple of machines on a very well-connected network (e.g. a university). But getting hold of someone in the middle to filter that traffic can take a very long time or be outright impossible. First-responder network engineers (typically referred to as "security") are overworked and underqualified, and the people who really know their stuff typically can't be bothered with silly DoS attacks.

We've also observed that (D)DoS attacks happen because of content. Anything political, religious, or otherwise objectionable to someone out there is a potential target for a DoS. Contrary to what you may read in the press, extortion accounts for only a small minority of all DoS attacks. We've actually told customers to go away because their content was too DoS-prone.

And because these things usually happen across countries, even though they are very real crimes that cause serious damage and cost money, they are hardly ever prosecuted. As the target of a DoS, all you want is for it to stop; nobody ever bothers reporting it to the authorities afterwards (because how would you even know who the "authority" is).


> The main reasons DDoS attacks exist are poor security and a lack of cooperation between ISPs. Inadequate security on desktops (usually Windows) makes it possible to build large botnets.

These days it's the hundreds of thousands of misconfigured NTP servers, open recursive DNS resolvers, and various other UDP services being abused for reflection attacks.

Granted, it still requires that the attacker be able to spoof source addresses, but preventing that requires even more time investment and offers very little direct benefit to the ISP doing the work.
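
To get a feel for why reflection is so attractive to attackers, here's a rough back-of-the-envelope sketch in Python; the amplification factors are approximate, commonly cited figures (they vary a lot by server configuration), not measurements of anything specific:

    # Rough, illustrative math for reflection/amplification attacks.
    # Amplification factors below are approximate, commonly cited figures.
    AMPLIFICATION = {
        "ntp_monlist": 500,  # NTP monlist replies can be hundreds of times the query size
        "open_dns": 30,      # open recursive DNS answering large ANY/TXT queries
    }

    def reflected_gbps(spoofed_query_gbps, protocol):
        """Traffic the victim sees when spoofed queries are bounced off reflectors."""
        return spoofed_query_gbps * AMPLIFICATION[protocol]

    # A botnet emitting only 0.1 Gbps of spoofed NTP queries...
    print(reflected_gbps(0.1, "ntp_monlist"))  # ...lands roughly 50 Gbps on the victim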


BCP 38 is like herd immunity through vaccination, and much like anti-vaccine folks, networks that don't follow it are knowingly choosing to infect people for any number of reasons. Despite your claim, it is extremely easy to implement and has been a known best practice, with accompanying educational Web sites devoted to the topic[0], for many years. There are nearly zero reasons for your AS to transmit forged packets, and if your configuration allows it, you are knowingly making the Internet a worse place with your laziness and transferring it to other people like me who carry pagers. This isn't a surprise to any network administrator unless they've spent their entire career not reading RFCs.

I'm of the opinion that networks that allow customers to emit forged source addresses should be depeered until they take the literal hour to fix it. "But we have to update equipment when we get new blocks!" Boo hoo. Automate it or get off the Internet so I can stop spending my life dealing with your customers' amplified traffic.
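
For what it's worth, the per-packet check BCP 38 asks an edge network to make is conceptually tiny. A minimal Python sketch of the decision (the prefixes are illustrative; real deployments do this in router ACLs or with unicast RPF, not in software like this):

    from ipaddress import ip_address, ip_network

    # Illustrative: prefixes legitimately assigned to one customer-facing port.
    CUSTOMER_PREFIXES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/25")]

    def permit_ingress(source_ip):
        """BCP 38 ingress filtering: only forward packets whose source address
        belongs to the space assigned to the interface they arrived on."""
        src = ip_address(source_ip)
        return any(src in prefix for prefix in CUSTOMER_PREFIXES)

    assert permit_ingress("203.0.113.7")       # legitimate customer source
    assert not permit_ingress("8.8.8.8")       # forged source, dropped at the edge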

If you run a non-transit/eyeball AS, you are in the absolute best position to stop these types of attacks from ever happening. The rest of the Internet, particularly your transit peers, can't really clean up after you on this one. Do us all a favor.

[0]: http://bcp38.info


I'd argue that the end networks really aren't the best place to be implementing this. If the big transit providers (Level3, NTT, etc) started enforcing it, it would significantly reduce the effectiveness of spoofed traffic pretty much overnight.


It might seem that way, but they can't scale that, as you might know. It's tougher for DFZ transit to do it because they must know, and programmatically configure, all downstream space to be whitelisted. Now you have a conversation similar to BGP filtering whenever downstream networks change space, and that's huge administrative overhead (which is why announcements are just trusted without filtering at the higher levels, to avoid this overhead for the larger ASes). Your end network is a better place because you know your assigned space better, as well as how you number it; you might be holding half a /18 and not assigning it, whereas your peer would whitelist it all, for example.

If the technical details of the Internet were programmatically available in a sane way (PeeringDB doesn't count here, since it just automates manual work), the Tier 1s could potentially automate against their downstream ASes' space and enable your (mostly correct) point. However, we pretty much fly blind in this respect and rely on emails and ticketing and decentralized systems to manage the control plane of the Internet. Which honestly continues to shock me, even though it makes sense, since the Internet is designed as "decentralized" despite being anything but in usage.
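
To make the "half a /18" point concrete, here's a small Python sketch (addresses are made up): the transit peer can only whitelist your whole allocation, while you know which subnets are actually numbered, so spoofing from your unassigned space slips past the coarser filter but not the edge filter.

    from ipaddress import ip_address, ip_network

    # Illustrative numbers: an AS holding a /18 but only numbering half of it.
    allocation = ip_network("198.18.0.0/18")            # what a transit peer would whitelist
    assigned   = [ip_network("198.18.0.0/20"),          # what the edge network actually uses
                  ip_network("198.18.16.0/20")]

    spoofed_src = ip_address("198.18.40.1")             # lives in the unassigned half

    print(spoofed_src in allocation)                    # True  -> passes a transit-level filter
    print(any(spoofed_src in net for net in assigned))  # False -> dropped by an edge-level filter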

Edit: While in the car, I realized that Paul Vixie's paper on this discusses the CPE source-filtering angle in great detail, which might illustrate my opinion a little better for you than I ever could: https://queue.acm.org/detail.cfm?id=2578510


Given my ignorance of many of these issues, I probably shouldn't be commenting (take my comment with a huge grain of salt). But the idea of depeering networks on the Internet for not implementing a voluntary best practice seems like the beginning of the end of a free Internet (if ever such a thing existed).

If BCP38 is critical to the success of the Internet, I think rather than ranting about those not implementing it, energy would be better spent petitioning to have it made a requirement of running a peer on the Internet to begin with.

Perhaps I'm off base or starting some sort of network guru flamewar. If that's the case, down vote me and I'll go away ;)


Your heart's in the right place, but the Internet is built on the policies of individual networks because there is nobody to enforce anything. Your suggestion back to me is simply mine in different clothing, because it assumes someone can enforce such a global requirement. Enforcing a policy like "filter or get depeered" is the only way to achieve a global requirement like you want, given the way the Internet is structured. As akerl points out, you need consensus too, because such a policy could drive customers to other networks upon enforcement, which is a business disincentive to do it.

It's kind of a surprising moment when you realize what the Internet is and how little structure it has aside from the protocols themselves. We are one (semi-)global Internet outage away from rethinking some of this structure, and I expect one in my lifetime.


Thanks for the kid-gloves reply :)

I hadn't considered that there isn't really a central authority for controlling who runs a peer, aside from ICANN, but they have pretty loose reins.

Funny that everyone waxes poetic about bitcoin being a revolution in anonymous and tacit network management. Meanwhile our little Internet experiment continues to be a HUGE tacit agreement to adhere to a handful of network protocols.


If you talk to ICANN, IANA, and friends, they're very clear (and careful to reiterate, even in minor threads) that all they do is run databases that contain interesting information. All of their policy revolves around admission to and management of said databases. It's comforting to think of authorities that govern the Internet, particularly when one of those organizations even has "Authority" in the name, but the truth is that the Internet is a miracle of decentralization as you say; we as network operators on the Internet give the databases meaning and force, not ICANN or any of its subordinates. ICANN has power because we have chosen their databases as the root of said power.

Even the fact that 'news.ycombinator.com' leads here is a de facto consensus since the Internet could, theoretically, switch roots tomorrow and completely invalidate every domain name. It won't happen, much to the chagrin of alt-root operators from the 90s and 2000s and contemporary attempts like NameCoin[0], but it can, and there is absolutely nothing ICANN could do about it. They charge admission to a well-maintained database that underpins this whole show, and that's pretty much the entirety of what they do.

It's on all of us to tend to the best interests of the Internet, and way too many people with access forget that. However, with responsibility like that also comes opportunity: once you realize that it's basically all of our good faith and de facto consensus holding this thing together, the barrier to entry for you to get involved with the Internet is suddenly far lower.

[0]: Oh yes, it's been tried, a lot: https://en.wikipedia.org/wiki/Alternative_DNS_root


This rethink has already happened - the BGPsec project and related routing security efforts. This was in response to many route hijack cockups and malicious netblock squatting. It hasn't seen much deployment yet.


To be clear: having the networks that make up the internet agree to depeer networks that don't implement a protocol is roughly the same as "making it a requirement", and the petitioning you'd need to do is "convince all the networks to agree to do it".

Because the internet is "free", more or less, there isn't much in the way of enforcement of which RFCs and practices must be followed to be part of it, which is a big part of the reason that BCP38 is so underutilized today.


>> We've actually told customers to go away because their content was too DOS-prone.

Isn't that a good way to earn yourself some nasty one-star reviews? "They shut us down because of our political/religious content." That doesn't sound like a way to attract and keep loyal customers through stellar reviews and word-of-mouth advertising.

I hope there's a better way for ISPs to solve the problem than just to get rid of the customers who are the targets.


All of this +1000000

We've found that the more distributed the attack, the easier it is to defend against. If it's truly distributed in nature, you can spread the load through things like anycast if you have multiple datacenters. Also, the more distributed it is, the more likely it is to be spread out among your different transit links. The less distributed, the more likely it will hit a single transit link and take that out.

You might have 10 different transit providers with 10gig links; 100 Gbps of transit is nothing to scoff at for a small company. Yet an 11 Gbps DDoS can still make you unreachable for large numbers of people if that 11 Gbps lands on a single link (due to AS path length, MED, preference, etc). People at your company are asking you "WTF network dude, I thought we had 100 gigs?" You can then try messing with AS path prepending (padding your route advertisements), but that tends to mess with legit traffic more than attack traffic.

Same thing with aggregated ethernet links (LACP/802.3ad). You might have 10x10gig links with a single provider, giving you a "100gbps pipe", but if the attack comes from a few sources all going after multiple destinations, a single link in that bundle can easily be overwhelmed. Then you start playing with the hashing algorithm, but that never really solves the problem.
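
The hashing problem is easy to see with a toy model. This sketch uses a made-up hash (real ASICs use their own functions), but the consequence is the same: each flow is pinned to exactly one member link, so a handful of elephant flows can pile onto one member of a 10x10G bundle:

    import hashlib
    from collections import Counter

    MEMBERS = 10  # a 10x10G LACP bundle

    def member_for(flow):
        """Toy stand-in for a LAG hash: the 5-tuple picks exactly one member link."""
        key = "|".join(map(str, flow)).encode()
        return int(hashlib.md5(key).hexdigest(), 16) % MEMBERS

    # Three big reflection sources hammering one victim: at most three of the
    # ten member links carry all of that traffic, however big the bundle is.
    flows = [("198.51.100.1", 123, "203.0.113.10", 53312, "udp"),
             ("198.51.100.2", 123, "203.0.113.10", 53312, "udp"),
             ("198.51.100.3", 123, "203.0.113.10", 53312, "udp")]

    print(Counter(member_for(f) for f in flows))  # traffic lands on <= 3 of 10 links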

The only solution right now is to get a bigger boat and get 100gbps links. It's very expensive, not every transit provider offers that to every customer, lol if you run Cisco, and even then it's honestly not big enough. Overrunning 100gbps interfaces with DoS traffic isn't very hard to do if it comes from a few sources.

One must realize that not every country has the network diversity that the US does. In a lot of places, there are only a handful of ASes that actually provide transit out of the country. This means that an entire country's network traffic can flow over a few providers (or oftentimes just one). This turns into a giant laser that blows out single-provider or single-LACP-member links. China is a great example of this. There are really only two ASes that provide egress from China. The third is reserved for high-ranking party officials and the ultra-rich who pay the bribes. Korea (and all the broadband that HN people drool over) has a few more transit networks, but not many. Korea is a potent DDoS cannon.

As far as content goes, it's by far mostly political and religious (> 70%). The remainder is the usual: junk e-commerce, gambling, whistleblower sites, etc.

Communication between transit providers, and even with their own customers, is astoundingly bad. IMHO, there are good transit providers that will help you (NTT, Telia, XO), but most will tell you to go pound sand or do nothing until it starts disrupting their own network (Level3, Cogent, GTT, VZ, AT&T).


I find this ironic because about 2 years ago I had a couple of VMs with them that suffered CHRONIC DDoS attacks, all the time. I had to move my clients to a whole other platform. Linode, on the other hand, simply blamed us for supposedly causing the repeated DDoS attacks - one after another for months on end. They even got rather flippant with me, exclaiming how "dumb" I was not to understand that it was MY problem apparently, not theirs, even though the target of the DDoS was several addresses in their IP block and not tied to a single client domain, server, or anything else. They continued to blame us personally, and even tried to get us to foot the $3,000+ traffic bill.

Well, the tables have turned. Looks like it IS a Linode problem after all. And not a one-time issue, either. This has been happening to them for YEARS.


+1 for IP block DDoSing - last time I ran a Linode VM it was DDoSed to smithereens within half an hour of putting it up (and of course, without any notice that I was liable for overages...)


>(and of course, without any notice that I was liable for overages...)

Inbound traffic does not count against your bandwidth quota, so if you were the target of a DoS attack, this would not result in overages.

https://www.linode.com/pricing

(Unless you were responding to all that traffic, that is.)


They were DDoSing HTTP amongst other services, so, yeah, there was a response via the websites being hosted. That's where the overages came from.


Every point you make I can counter: I've been with them for years, have multiple VMs with them, and find their support team fast, competent, and courteous. DDoS is a problem for every ISP and every ISP customer.


Not so fast. We migrated all of our former Linode clients to AWS and Azure and have not suffered a single DDoS in the roughly 2 years since the move to other providers. At Linode, on the other hand, we suffered more than a dozen in less than a year - even with different VMs, different IPs, different OSes and different configurations. We were also plagued with overwhelming attempts to brute-force SSH and other services from IPs in China while we hosted with Linode. Nowhere else did we see so much malicious activity targeted against our clients' VMs as we did at Linode.

During that time, Linode support relentlessly tried to pass the buck and blame us and our clients, denying they had any major problems. Well, here we are and Linode DDoS attacks are STILL occurring on a regular basis, still making news, and they seem to do little about it in terms of a long-term solution. Whereas I have yet to have a single client suffer a Linode-scale DDoS on AWS or Azure at all after almost half a decade of using them.


To be fair, I've had a single Linode VPS for years (I'd have to check to figure out how many) and this DDoS is the first time I've ever had any issues whatsoever with my Linode.


Luck of the draw. Certain IP blocks in the Linode range are attacked all the time, as evidenced here. We had a couple VMs that were never attacked, but far too many of them were on a regular basis for us to even consider staying with Linode.


I've probably had some 50 nodes with Linode over the last 7ish years. Until about 2 years ago I would enthusiastically recommend them to anyone who would listen, but I've found the service has slipped massively over the last couple of years and now we're trying to migrate people away. We have about 10 nodes left now, blips of lost connectivity and hardware failures are unfortunately common, and the feeling I get from their support team is that this is just to be expected now.


So weird that Linode hasn't been able to mitigate this. I'd love to learn more about what's happening there. Particularly since we host our production documentation site on a Linode VM. I want to move it off their server and into our DC but can't access the server to do it. Bummer. I've been such a big fan of theirs.


I am not sure there is a magic way to mitigate these attacks permanently other than playing cat and mouse until the attacker gets tired. Linode has in fact mitigated the attacks, only to find them pointing to a different part of their infra or DCs. At this point, it is probably a good sign that they haven't gone completely down (for long). Current tooling and botnets make DDoS-ing fairly easy today, despite countermeasures.


This type of action is what the people behind the DDoS attack are looking for. It's not Linode's fault...


Thank you for being so understanding!


Assigning fault isn't productive. But as Linode customers, what are we to do? My trust in Linode's reliability is completely shot at this point. I filed a support ticket trying to get more information about the outage, and the response I got was absolutely worthless. No part of this has made me feel better about Linode at all.

AWS is so massive that they can just sustain most DDOSes, and they write real postmortems after attacks. They're not as fast as Linode, but what good is speed if my site is completely vulnerable to every botnet that decides to look Linode's way?


>> Assigning fault isn't productive. But as Linode customers, what are we to do? My trust in Linode's reliability is completely shot at this point.

Then it will be the same when you are at XYZ ISP and they get DDoSed. This isn't a Linode-specific problem.

>> AWS is so massive that they can just sustain most DDOSes

No.

http://stackoverflow.com/questions/13244713/does-aws-protect...

"It will not, however, employ security algorithms needed to deflect application layer attacks (e.g., HTTP flood) nor will it help against larger network layer threats."

https://www.cloudflare.com/ddos/

"Layer 3 and 4 attacks are difficult—if not impossible—to mitigate with an on-premise solution. "


If the DoS is large enough there is little they can do once their inbound links are 100% saturated. They would either need more capacity or for their upstream providers to filter the attack for them.


That's how it's done. We use colo and have a DC outside Seattle with 5 uplinks to Tier 1 providers. We suffered a 20 Gbps attack (which is relatively small) on a 1 Gbps port that completely saturated our uplink. Our data center worked with their upstreams to route the traffic through a layer 7 DDoS mitigation service. It was amazingly effective. So I'm curious about what they're going through - I want details. How big is the attack? Is it targeting a client or Linode itself? And if they're working with their upstream providers, what are they doing?

Edit: In the meantime, I am (sadly) moving docs.wordfence.com back into our data center. I have access to the Dallas server again and so am starting the transfer now. If I can get some clarity on who did this, why and how it got fixed and why it won't happen in future I'm sure me and a bunch of other frustrated ops folks will consider sticking with them or moving back.


If I can get some clarity on who did this, why and how it got fixed and why it won't happen in future I'm sure me and a bunch of other frustrated ops folks will consider sticking with them or moving back.

I've been a generally happy Linode customer for a long time, but this is my biggest gripe with them. I can almost guarantee you'll never get this. Linode just doesn't do it. Their postmortems on issues (if they ever come) are light on details of the problem, and even lighter on details about their solutions. They seem to do a decent job of addressing most issues, because they rarely have recurring problems, but they never really talk about what they're doing.


Same experience here. I asked Support once about an ongoing outage and was told that "there were some network related issues". Really? I wouldn't have guessed...


Heard of nullroutes?

(Edit: how is it that perfectly valid technical solutions get downvoted?)


I didn't downvote, but there's one thing to keep in mind: you must request RTBH (remotely triggered black hole) functionality with every peer I've ever worked with. It's free, but they don't set it up automatically. You need to request it during provisioning or wait 3-5 days for someone to activate it.

If you don't already have it enabled... well, good luck, mate.
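
Once a provider has RTBH enabled for you, triggering it is typically just announcing the attacked /32 to them tagged with a blackhole community. A rough sketch of generating that announcement in ExaBGP-style syntax; 65535:666 is the RFC 7999 well-known BLACKHOLE community, but many transits use their own value, so confirm it (and the next-hop convention) with each provider:

    import sys

    BLACKHOLE_COMMUNITY = "65535:666"   # assumption: provider honors the well-known community
    DISCARD_NEXT_HOP = "192.0.2.1"      # assumption: a locally significant discard next-hop

    def blackhole(victim_ip):
        # Announce only the single attacked address so the rest of the block stays reachable.
        return ("announce route %s/32 next-hop %s community [%s]"
                % (victim_ip, DISCARD_NEXT_HOP, BLACKHOLE_COMMUNITY))

    if __name__ == "__main__":
        # e.g. python rtbh.py 203.0.113.50 | <pipe to your BGP daemon's API>
        print(blackhole(sys.argv[1]))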


As of a few years ago Linode got transit from the facilities they are in and almost all of them had RTBH set up with a capacity of 5 or 10 routes. It would be incredibly foolish to operate a hosting provider without it.


Wait, you're saying that Linode uses facility transit? Like, they buy bandwidth from Savvis and TelX? Well that would be the problem right there.

From what I can tell, Linode doesn't even have their own AS for customer traffic? It appears that they have an AS for some internal use, but not for customers?


I don't know if I'd outright call it a problem, per se, since in a couple of facilities (say, Fremont) it's a good thing, given how good the provider there (Hurricane Electric) is at providing transit. So in datacenters where the facility's provider is genuinely good at transit, the equation breaks in their favor.

When I left they also didn't really have the staff to run a proper NOC for a full-on AS; one extremely talented network engineer who has since left mostly owned all the "running a grown-up network" stuff, like running down an ASN, working RTBH with the facilities, well-designed uplink strategy at Linode's edge, and lobbying for anycast infrastructure for DNS and so on. They had just hired another purebred networking wonk as I moved on who probably owns it now, and likely has help. So punting a lot of the typical NOC work upstream to a facility with whom you're already working anyway made a bit of resource-related sense at the time, since staffing a NOC for a network of that scale is a significant challenge and they like to operate lean.

That's changing now (I didn't know this and learned it from agwa's comment), which is probably reflective of growth on the resource front.


Real talk: it's not that hard to run your own network. It sounds like Linode was getting by without making the infrastructure investment in both hardware and people. It's finally come back to bite them. No excuses and no pity for them.

Also, I wouldn't run around saying that having HE in house is a great thing.


That's correct. They use facility transit everywhere except their newest datacenters (Singapore and maybe Frankfurt), where they operate their own AS and use blended transit.


Just because they purchase transit doesn't mean they don't have their own AS.

Everyone has to purchase transit at some point. Transit != AS.


Where did I say that purchasing transit relates to an AS? I was speaking of them separately. "House bandwidth" in most colos is pretty garbage, and you don't have much control over it.

I'm a lot less impressed with Linode after this thread than I was before.


I didn't downvote, but I can tell you almost certainly it was because of the snark.


Attackers shift to a new target on the same network as IPs are null-routed, until the entire network has null-routed itself, which achieves the same result as DDoSing the network. So I'm not sure I agree that removing target IPs from your network is a great solution.


I would hope it is obvious based on the extent and duration of the attacks that RTBH is not an option in this case.


I've been with Linode for years and they've fought off more than a few DDoS attacks in the past, so they certainly are competent in that regard. I think this one must be a higher level of severity than previous ones. Generally, they seem OK with this stuff. I don't think there are any DDoS-proof VPSes out there.


If anyone from Linode admin team would like some help analyzing the attack/friendly advice on mitigations, please contact me (or anyone else at CloudFlare); we see a lot of these.


How do you guys trace the real sender of a packet with a spoofed IP address?


We don't, generally. It would be challenging.

If it were a volumetric attack, you could walk back links to find the source. But for anything but a huge attack which lasted for weeks/months, that would be inefficient use of resources.

Paul Vixie is really at the forefront of pushing for providers to solve this problem. Until that happens (and they've tried for a decade), it will remain technically difficult/impossible, so you need to use other solutions to mitigate attacks.


Clients that don't interpret javascript, typically.


Looks like they have a history of suffering these kinds of attacks:

(2012) Upcoming DDOS Attack - FINAL Warning - https://forum.linode.com/viewtopic.php?t=8530

(2013) Linode Mitigates DDoS Attack on Linode Manager - http://www.thewhir.com/web-hosting-news/linode-mitigates-ddo...

(July 2015) Incident Report for Linode - http://status.linode.com/incidents/vnslh3rmm9gq

So what makes them such an attractive target for these types of attacks?


The fact that so many people on their IRC channel will react to the attacks.


LOL at the "proof" attack. "We have a huge 500gbps cannon! But to show we're serious, here's a 2gbps attack. Take that!" That's hilarious. I wouldn't have taken it seriously. I also doubt that the group that threatened them is the same group that is attacking them. I bet it's at least 1 other group.

2gbps. I doubt that even registered on their bandwidth graphs.


And Google and AWS and Cloudflare and DigitalOcean and every other VPS provider.


Whenever I see top-level comments where people are saying "this is what you get for not going with X", I am more and more convinced that a competitor is doing this. I for one am not leaving Linode. I JUST recommended to several new clients that they set up at Linode, and I've already had to explain that server setup is halted because of this attack. Did I recommend moving away from Linode? Nope. Did I hang my head in shame or stutter when I delivered the news, feeling it would reflect poorly on me or Linode? Nope. I told my clients this is the best place for them and I stick by it. Just because there are scumbags specifically attacking your service right now doesn't mean another service is better, or that your service is poor. It just means that, hey, news flash, there are scumbags in the world.


Yep, this started on Friday (Christmas day). I assume Linode itself is the target of the attacks, since they have spanned multiple regions.


I wonder if the attackers chose Christmas day in order to ensure that there were fewer Linode engineers available to stop the attack.


Dallas is their oldest DC, so they have the most customers there and therefore the most attack surface area at that location. It could just as easily be one of their customers.


The DDOS has hit their data centers in Dallas, Fremont, London, Newark, Atlanta, and Singapore since Christmas.


Performing such large attacks for 5 days in a row costs the attackers thousands of dollars. It is not something a kid can do from their bedroom. Only a big company would have the budget to pay thousands to perform a 5-day multi-Gbps attack against another company. Here it is clear that there is a competitor attacking Linode. A competitor who wants to extort them.


I was thinking the same. 5-6 days of non-stop attacks at dozens of Gbps is seriously a big and very costly operation, one which seems to have been planned for a long time to be launched over the Christmas holidays. I wonder which shitty hosting company decided to extort Linode because they cannot offer the same quality of service.

Instead of paying thousands of dollars to design and perform these attacks, they could have just spent that money on better infrastructure to provide better services.


This helped me a bit because most of my clients are on vacation so I haven't received as many emails about sites being down/slow as I normally would've.


Can confirm that the attacks started Friday. We got alarms for some tertiary services we run in their datacenters (multiple geos).


An extortion attempt, I suspect.


I would like to correlate the comments in this thread with past comments on every single article about AWS or GCE of the form "this is so expensive / complicated I run my boxes on Linode for half the price". DDoS protection is one of the things you pay for on the big clouds.


What DDoS protection does AWS provide? The only thing mentioned on their webpage is autoscaling, more nodes, etc. In other words, AWS' DDoS protection strategy is to open up your wallet.

About 6 months ago they did hire Jeff from BlackLotus. Given that timeline, I'd expect them to announce some sort of DDoS protection offering in the next few quarters.

Edit to be more specific: AWS gets hit with a lot of DDoS attacks. While all of AWS isn't unreachable during an attack, parts of it are. It's so large that you might not notice, but parts are unreachable. AWS/GCE scale only makes it less noticeable, and they have no customer-facing DDoS protection offerings. Their only offering is to buy more of their services. These providers don't have magical 1000000gbps links. They're regular 100gbps links (or 100gbps LACP channels) that can get overrun in large enough attacks.


> What DDoS protection does AWS provide?

I've got no internal knowledge of how AWS operates, but I once went too far when load testing a new AWS machine (in terms of reqs/sec), and the traffic started getting dropped somewhere before reaching the box. Not sure if it was an elastic ip or behind an ELB, but I found it interesting.


Actually "Amazon CloudFront also has filtering capabilities to ensure that only valid TCP connections and HTTP requests are made while dropping invalid requests. This takes the burden of handling invalid traffic (commonly used in UDP floods, SYN floods, and slow reads) off your origin."

and

"By using multiple PoPs, Amazon CloudFront has the inherent ability to help mitigate against both infrastructure and some application layer DDoS attacks by dispersing the traffic across multiple locations."

source: https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June20...


Amazon CloudFront is a CDN. A significant portion of a CDN's job is to be highly redundant and to handle DDoS attacks.

If you're comparing Amazon's offering to Linode, you should really compare to the protection Amazon offers on EC2 and similar VPS-like products.


I was merely answering the question "What DDoS protection does AWS provide?".

Normally you wouldn't expose an EC2 instance without putting CloudFront and ELBs in front of it and locking down incoming traffic to CloudFront IPs.


"I'd expect them to announce some sort of DDoS protection offering in the next few quarters."

It will be very interesting to see how cost effective that protection actually is and what they charge for it. Nice profit center off of FUD.


If they price it like cheap insurance, they will make a lot of money and offer a worthwhile service. Pay them a low monthly rate (X) to have the service active and then pay hourly at Y rate during an attack. You can play with X and Y to maximize your monthly revenue while still offering a cheap (but very valuable) service that retains your customers.
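
As a toy illustration of that shape (numbers invented):

    # Invented numbers, just to show the X (flat monthly) + Y (hourly during attack) shape.
    X_MONTHLY = 50    # $/month to keep mitigation armed
    Y_HOURLY = 30     # $/hour while actively scrubbing an attack

    def monthly_bill(attack_hours):
        return X_MONTHLY + Y_HOURLY * attack_hours

    print(monthly_bill(0))    # quiet month: $50
    print(monthly_bill(12))   # one 12-hour attack: $410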


I don't know, but all traffic to GCE is routed through Google's frontend, which provides in-built DDoS protections.


I'd imagine they use VRFs to quickly segment the traffic after ingress. Google.com might have DDoS protection, but I'm skeptical that it extends to GCE. I've read about Google Andromeda, but there's no real meat in any article about DDoS mitigation.


This document specifically claims that "All traffic is routed through custom GFE (Google Front End) servers to detect and stop malicious requests and Distributed Denial of Service (DDoS) attacks."

https://cloud.google.com/security/whitepaper


How though? There's remarkably little information in there for being a whitepaper. If all they do is drop Christmas-tree packets and bogus UDP/ICMP traffic, that's not much in the way of protection. I'd like to see exactly what/how they're doing. How do they know what traffic is malicious? Do you get a control panel to block certain L7 traffic? What L7 inspectors do they support?

Sorry, but I'm not buying it.


It does extend to GCE, to an extent. The reason for this is that if someone is DDOSing your GCE VMs, they're going to be affecting the network performance of unrelated projects. So, Google has to provide some protection so that, at a minimum, other customers are not affected.


So...Google can stop all DDOS's? Everyone should dump AWS, Linode, Cloudflare, etc. and go to Google for their VMs then.


What about DigitalOcean? Its pricing is comparable to Linode's, yet DigitalOcean is now the second largest web host in the world according to Netcraft. Do you still think that AWS, Google Cloud Platform, and Microsoft Azure offer better protection?


Here's the email I received (many times) when someone sent a smallish 1Gbit/s DDoS to my digitalocean server:

> Our system has automatically detected an inbound DDoS against your droplet named xyz with the following IP Address: xx.xx.xx.xx

> As a precautionary measure, we have temporarily disabled network traffic to your droplet to protect our network and other customers. Once the attack subsides, networking will be automatically reestablished to your droplet. The networking restriction is in place for three hours and then removed.

> Please note that we take this measure only as a last resort when other filtering, routing, and network configuration changes have not been effective in routing around the DDoS attack.

> Please let us know if there are any questions, we're happy to help.

This happened mere seconds after the DDoS began! Therefore they lied about having tried to mitigate the attack.

No amount of contacting support got me un-blackholed before the 3 hour mark, and when I popped back into the network, I was blackholed again for another 3 hours...

I moved to a $3.50/mo OpenVZ VPS at OVH, and OVH's VAC system soaked up the DDoS just fine.


I've received the same type of email multiple times from DigitalOcean and it's extremely frustrating.

With even the smallest of traffic spikes, DigitalOcean will detect it as a DDoS and immediately cut off your server for 3 hours.

If even a typical (< 100Mbit) broadband-cable connection hits your server with a spike of traffic for less than 5 minutes, your server will be taken offline for 3 hours minimum.

I've used multiple VPS providers and dedicated-server providers and DO is absolutely the worst when it comes to DDoS policy.


That's why DigitalOcean isn't ready for production (unless hiding your IP address behind, let's say, Cloudflare is an option in your case) and that's why I decided not to use their services anymore.


Just curious, what do you use instead?


If you even have the slightest inkling that your site will be attacked by a DDOS (either targeted or randomly) please reconsider DigitalOcean. Their method of handling a DDOS is basically "your node is being attacked, so we're disconnecting it from the network for a few hours. Try to figure out why someone is attacking you." Then, the machine gets reconnected three hours later and the attack cycle continues. Not to mention that since the machine is offline, there is no way of logging in to check access logs, modify firewalls, etc. I've never dealt with a company like DO before. Switched to AWS last year and, even when I was getting extreme traffic, AWS never just shut down access without notice.


OVH offers very comprehensive DDoS protection with all but the most budget servers:

https://www.ovh.com/us/anti-ddos/


Comments from people seem to indicate their anti DDoS was "we blackhole you if you draw an attack". Has that changed?


Yes. This hasn’t been true for a long time. OVH runs a huge network and they invested in tools to mitigate and neutralize DDoS attacks[1]. Of course an attack can still saturate your servers’ NICs but they won’t drop you anymore.

At their scale they are basically forced to handle big attacks on a regular basis. The fact that they offer this protection in their basic package is what makes them a great host[2].

They don’t get a fraction of the love Cloudflare gets, but they should.

[1] https://www.ovh.com/us/anti-ddos/hoovering-up.xml

[2] https://www.ovh.com/us/news/articles/a1171.protection-anti-d...


Good to see OVH getting some recognition here. I am a happy customer of both OVH and DigitalOcean but my usage has become strongly weighted towards OVH. The OVH web interface could be improved (I've had to open support tickets for basic things like activating auto-renewal); that's the only downside with them, really. Range of products, network, disk speed and responsiveness are all consistently very good. Their new 2016 'Cloud' VPSs have markedly better disk speed than DigitalOcean's SSDs, which are already pretty fast.

For a quick side project I still like DigitalOcean's hourly billing and user interface, but any machine I plan on using for a month or more is with OVH by default.


Their Anti-DDoS setup is amazing. Once they detect an attack it goes through their VAC setup, and legitimate traffic will still reach you, but illegitimate traffic won't.

Got hit with a DDoS to my DNS servers and didn't see any external service disruption, thanks to OVH's setup. I have several of their cheaper SoYouStart servers, and it's awesome.


Yes. We moved an IRC server that was under attack (and being dropped from DO/Linode) there, and it ate it fantastically well for the price. I've never seen anyone else do it that well for that cheap, to be honest.


Why is OVH so cheap compared to Linode and DO? What's the catch?


Yeah, throw money at Netcraft and you can be the Batman of the internet. The numbers being provided to the public are not accurate in any way.


As a Linode customer, I guess this wouldn't have happened with AWS, but the way they handle it is still far better than the "your server, your bandwidth, deal with it" treatment you'd get with e.g. Hetzner dedicated servers. We had several hours of downtime, which sucks, but other than that we didn't have to worry about taking measures against the DDoS ourselves.


On AWS the DDoS hits the wallet instead. What’s the difference?

It’s “denial of service,” not denial of server and network resources.


AWS ELBs don't forward DNS and NTP traffic to the backends, and ingress bandwidth isn't billed to you. So there's no impact on the wallet when the most common DDoS traffic is aimed at you.


That’s a great point. Still leaves you open for a lot of abuse though.


What would be an effective alternative?


I’m a fan of OVH myself[1]. Heard good things about x4b.net as well.

[1] https://news.ycombinator.com/item?id=10807392


If they're using the typical anti-DDoS mitigation equipment (usually Arbor) then they're almost certainly throwing out good traffic along with bad. AWS's approach of having an ingress pipe large enough to absorb DDoS traffic rates and allowing only valid HTTP requests to pass through the ELB is a superior solution, IMO.


Well, yeah, infinite resources are always the best DDoS solution ;)

Valid requests ≠ clean traffic. That just moves the attack a couple of layers up.


After thinking about it more, I now realize that there is one way AWS/GCE can lessen the effects of a DDoS. Given that a lot of attack machines are hosted inside of AWS/GCE, they can handle this with lots of internal network bandwidth. They see less north/south (transit) attack bandwidth than a regular hoster would.

In short, if a large amount of the attack traffic originates inside of AWS/GCE, it's better to be off in AWS/GCE.

Between the two, neither has any kind of automated tooling to detect and shut down rogue attack instances (AFAICT). They still rely on third parties to tell them "Hey, you're sending me 300 gbps of DoS traffic."


Sure, but tell me what your bill is at Amazon when you get hit by a massive attack. Amazon is incredibly resilient against attacks due to their size, but it comes at a price too.


You mean discussions like this (2 days ago), first comment sub-thread: https://news.ycombinator.com/item?id=10794951


Exactly.


That is true.

However, how many of the people impacted by the current DDoS against Linode are only affected BECAUSE they are using Linode?


Guilty.

This has caused all kinds of pain for us this weekend. We use WPEngine to host some sites, who in turn host everything on Linode.

Honestly WPEngine has some real nerve charging people big bucks for a failover plan that apparently doesn't exist. This is just another of a half-dozen or so Linode failures that took us and loads of their other customers down completely. We're lucky that we planned for this ahead of time, but we weren't 100% ready to go live on a competing service either. A lot of folks are working on their vacations right now.

I think the two questions I'll be asking every host now and into the future are:

1) Do you host your services on Linode? 2) If you do, do you fail over to another provider?

A yes to the first question and a no to the second is a non-starter in my experience.


Hey Busterarm,

David here from WPE. Are you using our HA solution (Geographic redundancy)? Did that fail? If so, did you open a ticket with support & inquire about an SLA credit?

Keep in mind we have many levels of redundancy with all of our plans, but not every plan includes Geo redundancy. Very few sites anywhere truly use hot/hot geo redundancy because of the complexity of database syncing and the expense of duplicating server environments in different data centers.

In a DDoS attack (which can happen at any data center, backbone provider, etc.) or any other data center wide outage, the only work around is Geo redundancy. Many hosts have different infrastructure providers and offsite redundancy (e.g. all of our customers enjoy offsite backups), which can allow you to recover your site even if the data center burns down, but don't necessarily provide a true hot/hot level of redundancy.

A true hot/hot configuration requires hardware in multiple locations, live database syncing, and geo load balancing. While some of our customers do purchase geo redundancy, it isn't a default part of every account.

Again, if you're an HA customer and that failed, please open a ticket. If you have questions on Geo redundancy, you can ask about it in that ticket as well. If you do move hosts, remember any host can be the victim of a DDoS attack and unless you have true hot/hot or hot/warm geo redundancy with your account, you could still be susceptible to data center wide outages like large-scale DDoS attacks.

-D


And yet after years on Linode they are still the best of the many providers I've tried over the past 2 decades; weird how our experiences vary, isn't it?


The entire market is quite bad across the board.

I guess what I should be really saying is that if you're hosting your platform on these services and offer some kind of redundancy, make that redundancy through another host.


Been happening daily for about 5 days now. Pretty frustrating but I'm sure it's frustrating for them too.

Thinking about moving to Google Compute Engine instead.


I worry that moving to a larger company will just hasten the consolidation of hosting to a few players.


The best thing for reliability is to use multiple companies. AWS has proved that multiple times.


Disagree. It is hard to maintain a codebase and a consistent infrastructure setup for multiple providers. Amazon's m3.medium != some other vendor's m3.medium. Network setup and configuration are also a nightmare. Speaking from experience dealing with four cloud vendors at once: it sounds great from a textbook perspective, but unless you are ready to spend millions every year fighting fires, please don't do that. If you were to use AWS, please build on multiple regions instead.


I think you're looking through the wrong end of the telescope, here.

We currently have ~50 servers in 8 cities, across Linode, Digital Ocean, and Vultr. It took me two weeks to craft a ~400 line script that abstracted the server creation APIs for each. Once spun up, they're each bootstrapped with a script that builds each server from scratch identically regardless of the provider (with a couple one-offs for Vultr), because they're all running the same distro.

A whole data center can go down, and there's no reason for me to get out of bed.
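
For anyone curious, the shape of that abstraction can be pretty small. A stripped-down sketch (the endpoints and field names are placeholders, not the real Linode/DigitalOcean/Vultr APIs, which all differ in the details; hiding that difference is exactly the wrapper's job):

    from dataclasses import dataclass
    import requests

    @dataclass
    class Server:
        provider: str
        ip: str

    class Provider:
        name = "base"
        def create(self, city, size, label):
            raise NotImplementedError

    class LinodeLike(Provider):
        name = "linode"
        def __init__(self, token):
            self.token = token
        def create(self, city, size, label):
            r = requests.post("https://api.provider-a.invalid/instances",  # placeholder URL
                              headers={"Authorization": "Bearer " + self.token},
                              json={"region": city, "type": size, "label": label})
            return Server(self.name, r.json()["ip"])

    class VultrLike(Provider):
        name = "vultr"
        def __init__(self, token):
            self.token = token
        def create(self, city, size, label):
            r = requests.post("https://api.provider-b.invalid/servers",    # placeholder URL
                              headers={"API-Key": self.token},
                              json={"dc": city, "plan": size, "hostname": label})
            return Server(self.name, r.json()["main_ip"])

    def provision(provider, city, size, n):
        server = provider.create(city, size, "web-%s-%02d" % (city, n))
        # ...then the same bootstrap script runs on every box, whatever the provider.
        return server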


Maybe, maybe not. I still believe in a single-vendor approach, perhaps because in my view I am either going for AWS or GCE.

There is so much more to it than just being able to spin up a VM and then run Ansible/Chef/Puppet on it. Heck, I can write all of that in Fabric. There is no Direct Connect on Digital Ocean. I am not sure how you set up a VPN with Digital Ocean or Linode. We use CloudFormation on AWS, and I am pretty sure there is no such thing on Linode or Digital Ocean. Exceptions and response codes differ across providers. Being able to reproduce an environment from scratch is important to us, and of course, we try to do that in stages. I own a DO box myself, and that box turns out to be really slow in the NY region (where I live); maybe I am just unlucky.

But to be honest, did you really build your entire infrastructure across three vendors to begin with? What are your reasons for building on Linode, Digital Ocean, and Vultr? How do you copy your data across environments? Are you splitting dev/qa/ci/sandbox/stage/prod?


No, not at all! http://areyouwatchingthis.com is almost 10 years old at this point, and the architecture in its current form wasn't solidified until last year. As a one-man shop, redundancy and failover are my best friends.


Have you considered selling that system in some form? Presumably it has to be tailored to the specific software stack, particularly the database(s) and other persistent storage. But maybe you could write an ebook on how to set up that kind of system, with examples from your scripts, and sell that.


No, no thoughts to sell it. I'd open-source it before I'd do that. It's not complicated when you break it up into two steps.

Step 1: An if/else-heavy script that will take in a few parameters (for us it's city, a server type, and a numeral for naming) and build a clean server with all of the needed keys populated.

Step 2: A "yum install"-heavy script passed into the clean server, that builds everything needed from scratch, sending status emails throughout the build process.


I figured the tricky part would be automatic failover to a different data center when one goes offline. At least, I inferred that you had implemented that, from the part about you not having to get out of bed if a data center goes offline.


Ah, I get you. There's an HAProxy at the front of each data center "pod". If all servers within the pod go down, the proxy routes traffic to servers in another pod. If HAProxy itself goes down, a Route 53 health check just takes it out of the A Record rotation.
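
Conceptually, the pod failover is just HAProxy's backup-server semantics; a toy sketch of that selection rule, assuming a simple health map:

    import random

    def pick_backend(local_pod, backup_pod, healthy):
        """Miniature version of HAProxy 'backup' semantics: prefer any healthy
        server in the local pod; only when all of them are down, fail over."""
        candidates = ([s for s in local_pod if s in healthy]
                      or [s for s in backup_pod if s in healthy])
        if not candidates:
            raise RuntimeError("no healthy backends anywhere")
        return random.choice(candidates)

    local, backup = ["dal-1", "dal-2"], ["nyc-1", "nyc-2"]
    print(pick_backend(local, backup, {"dal-2", "nyc-1"}))  # stays in Dallas
    print(pick_backend(local, backup, {"nyc-1", "nyc-2"}))  # whole pod down -> fail over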


We operate on multiple regions at PagerDuty. It's slightly painful, but I've had nowhere near as bad an experience as you claim. We've done it using Chef to do the heavy lifting. It's worked pretty well.

It takes a bit of work and determination, but it comes together in the end.

One problem with building on AWS is their lack of network diversity. If there is a network cut, causing congested links, they will do little to alleviate the congestion, to save on cost. Issues in Oregon have caused network degradation between WEST-1 and WEST-2 that lasts for days with no improvement.


I have a hard time imagining how supporting multiple providers would cost millions of dollars per year. I think it's a worthwhile way to make your software and infrastructure more resilient. And it protects you from vendor lock-in. As long as you keep things simple and don't use features that are only implemented by one provider, VMs are basically interchangeable.


Human capital cost. Also, infrastructure is expensive by nature. We managed to run four at once with a team of 12 people. As I said before, vendor X is only approximately vendor Y, so the deal and the service you get are not the same, and there is a cost associated with that as well.

There is a reason to pick a good, strong vendor and stick with it. Big names end up building their own data centers (they can probably capex it for tax purposes). Small to medium shops usually don't have the time to deal with multiple vendors. Trying to be vendor-agnostic is really great, but again, even in the world of OpenStack, you either manage your whole OpenStack installation yourself (which is expensive), or you end up with a vendor running one version behind or not offering 100% of the features.

I think it is more worthwhile to first complete automation and self-healing with one vendor before thinking about leaping to another. It took a lot of engineers at Netflix to build a resilient AWS environment. VMs go down all the time, more often than AWS pushing some bad commit and causing half of their native services to go down. There is so much more to engineer in other areas, and those are more likely to put you out of service first - unless you want your whole team dealing with fires every day... we were putting band-aids together most of the time.


12 people whose sole, full-time job is to install the same OS on multiple vendors and make the networks look the same?

I'm having trouble imagining what problems have to be constantly dealt with such that multiple millions of dollars have to go into abstracting the multiple vendors.


I can go on and give you the full story, but here is the gist: consolidation is the first step toward real resiliency. I don't know what you do in your day job, but you weren't in my position, and let me tell you, it was a nightmare to pull off in that situation. I am happy we are dealing with AWS now. We can mirror tooling to actually make two VPCs alike. Human cost is not cheap: people had to work overtime to put out fires, people rarely worked just eight hours a day, and we were all consultants.


I know it's extremely hard to make a resilient system, but I don't understand how using multiple vendors makes it more complicated. Just on AWS you already have computers in different datacenters. How much harm can it really cause that datacenter 5 has 20% more ram on the nearly-identical machines, and you have to connect the network differently?

If you're spending millions of dollars on making systems redundant, I completely understand. But that is not the same as spending millions of dollars to allow datacenters 4-7 to be on other vendors.

Maybe if you name some problems specific to multi-vendor support that took several man-months to fix I might comprehend. Just a sentence or two description. Thanks if you do, sorry for being confused if you don't.


I want to say I know the pain of vendor lock-in. We use an open source PaaS solution and it works well for the most part, but it doesn't really scale the way I want it to, and doesn't provide all the utility I consider crucial for a successful PaaS. Whether to move away from it has always been an interesting discussion at work. So my colleagues and I are careful now about what we include in our technology stack portfolio. I can't just have someone randomly decide to add Redis as a dependency when the purpose is just a key-value store for some trivial task.

Anyway, the problem started with bad management, to be honest, which I don't think is too rare a story. When the first project rolled in, we ran the PaaS on a single provider, but as more projects joined, it was time to choose another vendor because of better equipment and better deals. Yet none of them really provided good resiliency.

With the third vendor, probably around 12-18 physical servers were purchased and managed by the vendor. It was OpenStack. That version of OpenStack was not compatible with AWS, which later became our 4th vendor and ran our dev environment. If you want to run additional performance testing, well, we couldn't get the same elasticity as AWS because we only had so many physical servers to provision VMs on, but that's where our real data lived. So we had to do our QA on AWS, and copying the data over could take a whole business day. The script we wrote for Amazon didn't work on OpenStack. Security, network, and I/O metrics are not consistent across vendors. That adds complexity to the code. I know exactly how to abstract things, but just trust me :-) it really makes code hard to maintain, and it's really painful to integrate with multiple vendors. As a matter of fact, I don't like working with the AWS API (with boto) myself because the response formats are inconsistent!

Finally, we got rid of one of them, then two, and now we are in the final stage of consolidating everything on AWS so we can focus on infrastructure automation and lowering the number of chores.

Like many projects out there, things usually start out real nice, but then, once you get too busy fighting fires here and there, you accumulate some debt. If one is not careful, the debt can backfire, and we had our lessons. There is no real resiliency with most of the other vendors because they require you to purchase more servers, and they themselves have a hard time offering true elasticity. AWS, at least, for the most part doesn't run out of instance availability that often (it happened a few times to our EMR processing). I do have some issues with AWS myself, but so far, AWS seems to be the only true cloud provider you can hang on to for several years.

Before building a grand multi-vendor infrastructure, build well on a single vendor, then decide on the next step.


Do you run Postgresql, MySQL, or something else? What led you to your choice? Because one is inherently better than the others for what you want to do? That's why you stick with a single cloud provider. You want the vendor lock-in. You go whole hog into using everything that they provide. Once you start mixing and matching the best of breed across cloud providers, you've lost. Once you start coding to the least common denominator amongst cloud providers, you've lost. To really succeed, you must choose the best cloud offering and then use everything that they offer. Otherwise, why are you running stuff in the cloud anyway?

Also, steer clear of "cloud neutral" services and products that will magically move data and services across cloud providers. Interop is the last thing on any proprietary vendor's mind. An example. You have a pair of border edge routers. Do you buy 2 Junipers, 2 Ciscos, or 1 Juniper and 1 Cisco for fear of a bad vendor bug taking out all the routers? I'll tell you which one I would not choose. The .com TLD nameserver requirements used to mandate dual vendor setups. They sure did learn their lesson.


Based on your other comments on this thread, you seem to have real expertise in operating this kind of infrastructure. Nevertheless, this particular comment makes no sense to me; I'm not even sure if you're serious.

It does make sense to standardize on one open-source database package (MySQL in the case of my employer). But that's software that I can take with me anywhere. And it's open-source. So the risks of vendor lock-in don't apply.

> Otherwise, why are you running stuff in the cloud anyway?

How does it follow that if I don't lock myself into one cloud provider all the way, it's not worthwhile to use the cloud at all? Maybe using cloud providers is worthwhile simply because, at a certain scale, they're less expensive than leased dedicated servers, never mind the up-front cost of buying and colocating hardware. Also, it's easy to provision cloud VMs on demand, then throw them away when you're done with them. Those are good reasons to use cloud providers without locking into just one.

It seems to me that the best approach is to use only the subset of features that are common to DigitalOcean, Vultr, and maybe Linode, and abstract over those multiple providers with software like Ansible that can access multiple provider APIs.

I am indeed suspicious of proprietary solutions for deploying and migrating across cloud providers, such as Cloud66. But that's only because using one of those solutions would itself be an instance of vendor lock-in. If there were an open-source package with similar functionality to Cloud66, I would probably use it.


Comcast as a company purchases from two vendors, Juniper and Cisco, to avoid lock-in and to get the best prices, although last I heard Arista was also starting to be used.

Dual-sourcing makes sense in a lot of cases.


You can definitely dual-source separate parts of your network. Just don't dual-source the same component within a single deployment. You do not want to run a Juniper and a Cisco router as a pair, talking to each other over IS-IS, VRRP, etc.


> They sure did learn their lesson.

Is there any relevant reading material? I would have assumed diversification to be a good idea for such critical infra.


I don't know if there is any reading material. I heard the story from someone who was on the front lines in the outage, directly dealing with the situation.

Vendor diversification is a bad thing for critical infrastructure when interoperation is required. "It's a Cisco problem!" says the Juniper rep. "It's a Juniper problem!" says the Cisco rep. You're stuck in the middle. It's terrible. You can only hold one vendor's feet to the fire and they won't care at all if you're in a heterogeneous environment.

Remember all the middleware products and companies from the late 90's? Neither do I.


I thought the .com servers had been running Verisign's proprietary Atlas software since almost forever - 15 years?


I was referring to routers. They have to buy routers from someone.


How does one set up to use multiple vendors?


You might consider taking a look at http://www.cloud66.com/


http://www.distelli.com is another great company/service for multi-platform+multi-provider support. Dead simple to use. Discovered them a few weeks ago and it's totally upgraded my deployment and orchestration strategies.


You buy servers from multiple different companies, you install software on them.


Anyone can do that. I was actually wondering if there is software to automate this.


You could use something like Terraform[0] to help automate this. It's like CloudFormation, but it also supports GCE, Azure, DigitalOcean, etc.

0 - https://terraform.io/


Ansible is probably the best bet - once you have SSH it'll pretty much work with anything, and it also has support for creating servers on several cloud providers and so on.

We use it for automatic configuration of everything from colocated hardware to $5/mo VMs on DigitalOcean and other low-cost virtual server providers and it works great.
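
On the configuration side, it really is the same command pointed at different inventories; a trivial sketch (the inventory file names and the playbook name are hypothetical):

    import subprocess

    # Run the same playbook against colocated hardware and cheap cloud VMs;
    # Ansible only needs SSH access to the hosts listed in each inventory.
    for inventory in ["inventories/colo.ini", "inventories/digitalocean.ini"]:
        subprocess.run(["ansible-playbook", "-i", inventory, "site.yml"], check=True)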


OpenStack might allow you to do this.

(I haven't used it myself.)


This is going to happen no matter what - the rate of consolidation, in my opinion, is actually accelerating. Despite what anyone says, the vast majority of people choose their hosting provider in order to "not get fired", i.e. they are extremely risk-averse, to the point of irrationality, even when presented with massive savings, better service, better underlying hardware and network, and more transparency.

Most of the time they are spending someone else's money, too. You should see some of the deals that have come across my desk. The 'big names' can literally charge 2-5x more than a competitive quote and get away with it, oftentimes with worse deliverables (e.g. long stretches of downtime that somehow get a pass from their customers).

At the end of the day it's a server sitting in a rack in a datacenter, connected to Ethernet. Beyond a certain level of quality (Tier 3 DC, server-class hardware, an enterprise-quality network) it's really all the same. People should choose their hosting provider on 1. quality of implementation, 2. price, and 3. whether or not the provider actually gives a shit about you and your account, but they usually just go with the name, like in many other markets.


> Most of the time they are spending someone else's money, too. You should see some of the deals that have come across my desk. The 'big names' can literally charge 2-5x more than a competitive quote and get away with it, oftentimes with worse deliverables (e.g. long stretches of downtime that somehow get a pass from their customers).

Even AWS can charge you big time. As I get more and more familiar with AWS every day, the #1 thing on my list going forward is to sit down with your TAM and organize an architecture review. There are services on AWS that lack completeness and can bite you in the end if you go straight in without knowing what you are getting into.

Disclaimer: I like AWS pretty well.


[deleted]


He(/she?) never said he wouldn't do it, just that it worried him. Back seat doesn't mean thrown out of the car.

People need to keep bringing this up, lest we completely forget about it and wake up when it's too late.


:(


This may be a dumb question since I haven't read the bill, but isn't this what CISA is supposed to help guard against in the future? If Linode has an easier way of sharing info with the USG, could that help pinpoint the source and keep this kind of attack from happening in the future?

Edit: how about a response instead of downvotes?


DDoSes are easy because ISPs don't want to filter traffic (BCP 38). If ISPs did, it wouldn't be so easy to spoof source IPs, and it would be much easier to block or track down offenders. I don't see how CISA is going to change this. If the USG wanted to stop this stuff, they'd create a policy forcing ISPs to filter. Same as with VoIP and fake caller ID: if the FCC really cared, they'd start fining.
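
The logic behind that kind of ingress filtering (BCP 38) is almost embarrassingly simple; here is a toy sketch with made-up prefixes, just to show the idea (real networks do it in router ACLs or with uRPF, not in application code):

    from ipaddress import ip_address, ip_network

    # Prefixes actually delegated to this network's customers (made-up examples).
    CUSTOMER_PREFIXES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]

    def should_forward(src_ip):
        """Only forward packets whose source address belongs to a prefix we
        actually handed out; anything else is spoofed and should be dropped."""
        return any(ip_address(src_ip) in net for net in CUSTOMER_PREFIXES)

    print(should_forward("203.0.113.7"))  # True: legitimate customer source
    print(should_forward("8.8.8.8"))      # False: spoofed source, drop it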


Dealing with DDoS has always been way below anti-piracy in the priorities of the relevant US authorities. It's very easy to do and hard to identify those ultimately responsible, although every now and again a big botnet command-and-control network gets shut down.

Nobody's interested in the defensive side of ""cyberwarfare"".


Botnet takedowns seem to be mostly due to industry cooperation without significant help from the cops.


CISA is really intended to address sophisticated attacks that depend on vulnerabilities or vectors, and therefore have distinct signatures. These signatures are what companies will share with the government and each other.

A DDoS is usually just a dumb flood of traffic. There's no signature to share, and blocking IP addresses is usually pointless, since big DDoSes come from large networks of hijacked personal computers.


It's funny that it took an event like this for me to realize it, but now I see that it's really a bunch of companies (like Linode) all leasing space in the same huge datacenters. I use ASO VPS hosting and they too have been experiencing a large network outage. I'm sure you can find at least a handful of other hosting companies that are SOL right now. I guess I understand the whole premise of putting all your eggs in one basket, but then again you really shouldn't.


My four London servers were also unreachable last week. Looks like Linode is having a pretty rough time.


Oh wow. I remember a couple of months ago the ATL datacenter had network issues too. Really annoying, but I guess it's not 100% their fault. I wish they offered more DDoS protection options. I know some VPS companies specialize in that for things like game servers. It'd be nice if some sort of solution could just be included; I don't know if it's more of a technical issue or a legal problem. As far as I know the only options are to null-route the target (an inconvenience for that website, but it keeps the traffic off the router other clients are on) or to add a bunch of servers to "soak up" the extra bandwidth.


OVH does VPSes and they have their own Anti-DDoS network setup that is pretty amazing:

https://www.ovh.com/us/anti-ddos/


Interesting. What is "Multi-point Mitigation"? I know it mentions a few locations. I googled it and it just brings me back to that page.

I wonder if any solution would shut down a VM and then restart it on another host, but that'd be really sucky in some situations: an app might not shut down cleanly, or it might be in the middle of something like charging a credit card.


Multi-point just means that the traffic flows to the nearest datacenter that hosts a VAC, gets filtered and checked, and then traverses on to your system.

The goal is to spread the DDoS out over as many bandwidth-heavy locations as possible.

See the images at the bottom of this page:

https://www.ovh.com/us/anti-ddos/hoovering-up.xml


I have most OVH IPs blocked on my servers because I get too many hack attempts from VMs they host for their customers.


People also block all of Linode, DigitalOcean, AWS and others. To me it just seems silly.


Strangely, this whole thread convinced me to try Linode. Why? Everyone keeps saying it's so cheap, and I do A LOT of web crawling, so I don't need 99.999% uptime.


I use it for basic stuff, mainly for testing Node apps, no high throughput. It's got a nice, simple management console. I don't like the fact that apt-get operations to set up your VM can blow your monthly quota pretty fast if you're not careful, but other than that, it's been fine. Periodically, they double everyone's bandwidth and storage for free; it's something to look forward to in another year or two... I hope... if these DDoS idiots don't ruin it for everyone.


> I don't like the fact that apt-get operations to set up your VM can blow your monthly quota pretty fast if you're not careful

What are you talking about? From https://www.linode.com/pricing :

"Please note that all inbound traffic is free and will not count against your quota."


And the distros are configured to use local Linode mirrors for apt-get anyway.


All the inbound traffic, not just traffic inside the Linode network. "Inbound" means from the Internet to the Linode-hosted server.


Just 2 days ago, a Linode employee was badmouthing AWS here on HN for being too expensive:

https://news.ycombinator.com/item?id=10796094

A DDoS will be much more expensive to customers than choosing AWS over Linode (or an equivalent low-priced service).

EC2 has been around since 2006 and has never had any issues resembling this.


So you're comparing a DDoS against a provider with everyday spending?

I was contracted a few months ago to save a website crumbling under its bills. They had $11k / month in AWS bills. Brought it down to $600 / month by switching them to Linode with a more reasonable stack.

Not everything is a nail. Sometimes the mistake is choosing AWS.


If you reduced their bill from $11k to $600, I'm betting AWS was not the main problem.


Given that the price calculators for Google Cloud, HP Cloud, Rackspace, etc. are online, you can check for yourself. The key number is the bandwidth cost for their services.


Amusingly, we moved our infrastructure from Linode to AWS and cut our bill roughly in half about a year ago.


That can happen as well of course. Like I said, not everything is a nail. Sometimes AWS is the wrong choice, sometimes it's the right choice.

However, a lot of people jump on AWS when they really should not.


Agreed. We've got a moderately complex stack, and on Linode the noisy-neighbor problem forced us onto bigger boxes for certain layers than should really have been necessary.

Honestly OpsWorks on AWS has been a fucking dream from an ops perspective - we've been able to fully automate basically everything.

My only complaint about AWS is RDS: in order to squeeze maximum performance out of a database, you need to pay for IOPS, and boy do you pay. Fortunately, we only need a massive database instance a few weeks out of the year (our industry has some massive seasonal spikes), so it's not so bad.
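
For what it's worth, that seasonal bump doesn't have to be a manual console exercise; a hedged boto3 sketch (the instance identifier and numbers are made up, and provisioned-IOPS changes can take a while to apply):

    import boto3

    rds = boto3.client("rds")

    def set_capacity(instance_id, storage_gb, iops):
        """Resize storage and provisioned IOPS on an existing RDS instance."""
        rds.modify_db_instance(
            DBInstanceIdentifier=instance_id,
            AllocatedStorage=storage_gb,
            Iops=iops,
            ApplyImmediately=True,  # otherwise it waits for the next maintenance window
        )

    # Scale up ahead of the seasonal spike, back down afterwards (values are hypothetical).
    set_capacity("prod-db", storage_gb=1000, iops=10000)
    # set_capacity("prod-db", storage_gb=1000, iops=3000)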


Linode is _still_ being hit by DoS. They currently have a "Major Outage" in the London DC. Seems like there are issues in almost all DCs.

Unbelievable that they can't get the DoS stopped! My server has had 19 outages in the past 7 days, and over 5 hours of downtime!

Still no e-mail from Linode whatsoever...


I know. I didn't mind when the first outage happened in London (the 24th?) - these things happen. But now... this is getting beyond a joke. I'm going to have to consider moving, which, as I host for friends, is not as straightforward as it should be (since they control the DNS).


I wonder if this is correlated with the extortion thread on NANOG I saw the other day.


For a change... :-/


Link says DoS, not DDOS (which isn't even a proper acronym).


DoS == Denial of Service; DDoS == Distributed Denial of Service



