Plenary session
9 May 2017
2 p.m.

CHAIR: Hello. Good afternoon to you all. Welcome to the afternoon, the first of the two afternoon Plenary sessions, brought to you by a selection of wonderful speakers and presenters today. I'm Brian, and along with Alex we'll be co‑chairing this particular session.

A couple of minor things ‑ small, but very important things before we begin. Again, as has been said, if you wish to ask a question or leave a brief comment after one of the presentations, please state your name and some sort of made-up humorous affiliation of your choice afterwards. We are running a competition.

So, in addition to that, please remember to rate the talks. You can win prizes. Shane has again asked me to tell you not to participate in the 25 years of RIPE quiz. Apparently, I don't know, he needs to win the switch for medical reasons or something.

So, rate the talks, and also a reminder about the PC elections: you have until three o'clock, so you have got the next hour to submit your candidacy, or somebody else's candidacy, for the RIPE PC, but please ask their permission beforehand.

I think that's it.

So, the first talk we have is Yossi, with "Are We There Yet? On RPKI's Deployment and Security".

YOSSI GILAD: Hi everyone. So this is a joint work with these people here.

And it is really all about the resources public key infrastructure or the RPKI which is intended to achieve two goals: The first one is to prevent prefix and sub‑prefix hijacks, and the second is to lay the foundation behind more sophisticated, more advanced defences against more sophisticated attacks like BGP SEC. Let me just briefly recap prefix hijack and sub‑prefix hijacks. So let's say we have AS3320 and that AS announces its IP prefixes, 91.0 /10. So its neighbour receives that announcement and now it learns to route each prefix and, moreover, ASY also forwards that announcement to its own neighbour, of course appending its own identifier to it. Now ASX also learns the route. Let's see if the attacker at ‑‑ of 66 also announces the same IP prefix. In this case, BGP does not contain any authentication, so, ASX can choose which ever announcement it wants, and, in particular, it's going to prefer shorter routes over longer ones, so traffic will actually flow to the attacker.

In a sub‑prefix hijack, the attacker performs a similar attack, except now it announces a sub‑prefix of that /10. Now we have moved ASY onto the attacker's route, but actually in sub‑prefix hijacks this doesn't matter, because the more specific prefix wins, and so traffic will actually flow to the attacker at AS666.
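
As an aside for readers of the transcript, the "more specific wins" behaviour comes from longest-prefix-match forwarding, which can be sketched in a few lines of Python (the prefixes and next-hop labels are illustrative, matching the example on the slides):

```python
import ipaddress

# Longest-prefix match: the most specific matching entry wins, regardless
# of AS-path length. This is why a sub-prefix hijack attracts the traffic.
# The /11 entry is an illustrative stand-in for AS666's more specific
# announcement inside the legitimate /10.
table = {
    ipaddress.ip_network("91.0.0.0/10"): "legitimate route via AS3320",
    ipaddress.ip_network("91.0.0.0/11"): "hijacked route via AS666",
}

def lookup(addr):
    """Return the entry for the most specific prefix covering addr."""
    addr = ipaddress.ip_address(addr)
    matches = [net for net in table if addr in net]
    return table[max(matches, key=lambda net: net.prefixlen)]

print(lookup("91.0.0.1"))   # falls in the /11 -> hijacked route via AS666
print(lookup("91.63.0.1"))  # only the /10 covers it -> legitimate route
```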

So, how does the RPKI mitigate these attacks? Well, the RPKI binds an IP prefix to a public key using a resource certificate, or an RC, and it allows owners who have the private key matching that public key to issue Route Origin Authorisations, ROAs, and advertise them. So now anyone who knows the ROA can use that information in order to make its routing decisions.

And in particular, in the example that I showed you earlier, the prefix 91.0/10 is actually owned by Deutsche Telekom, who have done a good job: they got certified by RIPE and issued the ROA, and so now this ROA is advertised and anyone can use it in order to make their routing decisions. In particular, consider the previous scenario, and let's say that ASX does perform filtering according to the RPKI. Well, now ASX knows that, according to the ROA, AS3320 is the only legitimate origin for that IP prefix. And therefore, it can discard the attacker's route and send its traffic down the correct route.
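
The filtering decision just described follows the origin validation procedure of RFC 6811; a minimal sketch in Python, where the ROA list is a stand-in for a validated RPKI cache:

```python
import ipaddress

# Route origin validation, roughly per RFC 6811: an announcement is
# "valid" if some ROA covers the prefix, permits its length and matches
# the origin AS; "invalid" if covered but mismatched; "unknown" if no
# ROA covers it. The single ROA mirrors the Deutsche Telekom example.
roas = [
    {"prefix": ipaddress.ip_network("91.0.0.0/10"), "max_length": 10, "asn": 3320},
]

def rov_state(prefix, origin_asn):
    prefix = ipaddress.ip_network(prefix)
    covered = False
    for roa in roas:
        if prefix.subnet_of(roa["prefix"]):
            covered = True
            if origin_asn == roa["asn"] and prefix.prefixlen <= roa["max_length"]:
                return "valid"
    return "invalid" if covered else "unknown"

print(rov_state("91.0.0.0/10", 3320))   # valid: the legitimate origin
print(rov_state("91.0.0.0/10", 666))    # invalid: prefix hijack
print(rov_state("91.0.0.0/11", 3320))   # invalid: more specific than allowed
print(rov_state("8.8.8.0/24", 15169))   # unknown: no covering ROA
```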

So, in this talk, I'm going to talk a bit about the challenges facing deployment of the RPKI. And then I'm going to talk about the benefits that we might get from it under partial deployment.

So the first thing that I want to talk about is insecure deployment, which is, I guess, like most security systems: you can deploy them insecurely and, at the beginning, when they are being rolled out, these mistakes are more common. In particular, I'm going to talk about the maximum length field that is specified in ROAs; that field specifies the most specific prefix that the owner allows to be announced. In this case we have ASA and its prefix 1.2/16. So one prefix is allowed, 1.2/16. ASA announces that to ASX. The attacker knows about the ROA, so it knows it cannot claim to actually originate the route to 1.2/16, and so what it could do is fake a link to ASA: ASA is still the origin, but now AS666 actually spoofs the origin, originates the advertisement, and sends it to ASX. Now ASX needs to decide: both announcements look valid according to the RPKI, because the origin is correct, but it will choose the shorter route. So that attack actually fails. This attack was actually shown to be much less effective than sub‑prefix and prefix hijacks. But what happens if ASA uses a permissive maximum length? In this example they use a maximum length of /24, but they actually announce only the /16. In that case we call the ROA loose, because it's too permissive. ASX again accepts the announcement, but now the attacker can perform a slight change to the attack and actually announce a /24. So, when the attacker's false announcement reaches ASX, ASX is actually going to route according to the more specific rule and send its traffic to AS666.
This attack is also mentioned in RFC 7115, and we have measured and actually found existing ROAs to be quite loose, quite vulnerable to such attacks. In particular, 30% of the IP prefixes that are covered by ROAs are actually vulnerable to such attacks and, in fact, 89% of the prefixes with a maximum length greater than the prefix length are vulnerable, and we have even found large providers that have issued ROAs vulnerable to these attacks.

This allows the attacker to actually hijack all the traffic to non‑advertised sub‑prefixes which are allowed by the ROAs, and it is just as effective as a sub‑prefix hijack. We expect this vulnerability to be solved only when BGPsec is fully deployed, but there seems to be a long time until then.
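
A simplified version of the "loose ROA" check described above might look as follows. It only compares the max length against the longest prefix the owner actually announces, which is a coarser test than the per-sub-prefix analysis in the paper, and the announced set is a stand-in for an observed BGP table:

```python
import ipaddress

def is_loose(roa_prefix, max_length, announced_prefixes):
    """Loose if the ROA authorises more specific prefixes than anything
    the owner actually announces, leaving that unannounced space open
    to spoofed-origin sub-prefix hijacks."""
    roa = ipaddress.ip_network(roa_prefix)
    announced = [ipaddress.ip_network(p) for p in announced_prefixes
                 if ipaddress.ip_network(p).subnet_of(roa)]
    # If nothing inside the ROA is announced, fall back to the ROA's length.
    longest = max((p.prefixlen for p in announced), default=roa.prefixlen)
    return max_length > longest

print(is_loose("1.2.0.0/16", 24, ["1.2.0.0/16"]))               # True: /17../24 space unused
print(is_loose("1.2.0.0/16", 16, ["1.2.0.0/16"]))               # False: tight ROA
print(is_loose("1.2.0.0/16", 24, ["1.2.0.0/16", "1.2.3.0/24"])) # False by this coarse test
```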

There are also other challenges to the deployment of the RPKI, and other mistakes in ROAs. In particular, while loose ROAs cause valid prefixes to be unprotected, there are also bad ROAs that make legitimate prefixes appear to be invalid, either because the maximum length is wrong or because the AS number is wrong. There have been studies on this, and these errors are actually non‑negligible.

So, we have built the ROAlert system. One part is a web page where you can check the status of your network in the RPKI and see if your network is valid and known and, if not, why not. It will also indicate the error, in case there is one, and tell you how to fix it.

The other side of the system is an online proactive notification system that periodically compares the information in the RPKI database to what's being announced in BGP, and then alerts administrators about loose ROAs and bad ROAs.

So, our initial results from running this proactive system are actually quite promising. We managed to reach 168 operators by e‑mail and, a month later, about 42% of the errors that we reported were actually fixed. One of the challenges we stumbled upon was getting the contact information for the operators. We would really like ROAlert to be adopted by the RIRs, which might make it more reliable and make it easier to contact operators.

So, the next part of my talk is about the benefits that we might get from the RPKI under partial adoption, and specifically what I mean by partial adoption is partial adoption of route origin validation, ROV. Now, we had a talk earlier today that showed that adoption of ROV is very partial at best. So it is very important for any security system to be able to provide benefits early on to the very early adopters.

So, performing ROV ‑ or deploying ROV ‑ is actually quite simple. Most routers today support filtering routes, and so what a network operator should do is basically deploy an RPKI cache, which is a general‑purpose machine. That machine syncs with the RPKI publication points, retrieves RCs and ROAs, verifies the signatures and, if the signatures are valid, creates filtering configurations and pushes them to the routers. The changes to the routers have actually already been rolled out and they are pretty minor; there is not much impact on performance.

But what would be the impact of partial ROV adoption? So, we identify two interesting phenomena. The first one is the collateral benefit phenomenon, where adopters can actually protect ASes behind them, even though those ASes are not doing ROV filtering themselves. In the remainder of this talk I'm going to talk just about filtering, although the information in the RPKI can also be used to deprioritise invalid routes; there has also been work done on that in the past. So I'm going to just talk about filtering, where you might hope to get substantial benefits.

So, in this example, we have the attacker at AS666 and the origin at AS1, and the origin has also issued a ROA, as you can see on the slide. Now, AS2 performs ROV filtering, so it would drop invalid announcements; AS3 doesn't. So what would happen if the attacker performs a prefix hijack or a sub‑prefix hijack? Well, in this case, AS2 will identify the issue, discard the malicious route, and only forward the valid route to AS3. So, although AS3 doesn't actually perform the filtering, since it only learns about the valid route, it will actually send its traffic down the correct route.

So, that was a good effect. But there is actually some collateral damage that might happen with the RPKI. By collateral damage I mean that ASes that do not adopt ROV might do harm to ASes behind them that actually do perform ROV. The first effect that we have noticed is disconnection. So, consider the same topology, except now AS3 adopts and filters invalid routes but AS2 doesn't.

Now also assume that AS2, for its own reasons, prefers routes from AS666 rather than from AS1. AS2 receives two announcements for the same prefix. Let's say that the attacker performs a prefix hijack; AS2 will forward the route that it deems better, and in our case that would be the attacker's route. So AS3 only receives one route, which is invalid. It knows that, because it does ROV. And so it will discard the route and actually be disconnected from the destination.

So that is one effect, the disconnection effect. And you might say, well, I'd rather do ROV; at least when I do ROV, I promise myself that my packets will not reach invalid destinations. So I would rather get disconnected than have my packets flow to the attacker. But actually, that might not be the case. ROV under partial adoption might actually cause a control‑plane/data‑plane mismatch. Consider the same scenario, except now we have the attacker at AS666 performing a sub‑prefix hijack. In that case, AS2 will actually receive announcements for two different prefixes and, therefore, it can forward both of them to AS3. Now, AS3 receives these announcements; it knows that the attacker's announcement is invalid, so it will discard it and would want to send its traffic down the correct route. But actually, we don't have source routing on the Internet. So what ends up happening is that AS3's packets reach AS2, but in AS2's forwarding table, AS2 is actually a victim of the attack, and so it will forward traffic for that sub‑prefix to the attacker. So, even though that wasn't the intention of AS3, that is what happens.
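
For the prefix-hijack cases, the collateral benefit and the disconnection effect can be reproduced in a toy model. The topology, preferences and AS numbers follow the slides; everything else is a simplifying assumption, and the control-plane/data-plane mismatch for sub-prefix hijacks is not modelled here:

```python
# Toy model of the collateral benefit and disconnection examples above.
# Assumptions: AS2 hears both announcements for the victim's prefix and
# forwards only its single best route to AS3; the ROA authorises AS1.
VALID_ORIGIN = 1

announcements = [
    {"origin": 1, "via": "AS1"},      # legitimate announcement
    {"origin": 666, "via": "AS666"},  # prefix hijack, same prefix
]

def rov_ok(route):
    return route["origin"] == VALID_ORIGIN

def as2_export(does_rov, prefers_attacker):
    """The single best route AS2 forwards to AS3 (None if it has none)."""
    routes = [r for r in announcements if not does_rov or rov_ok(r)]
    if not routes:
        return None
    # AS2's local preference decides among the surviving routes.
    routes.sort(key=lambda r: r["origin"] == 666, reverse=prefers_attacker)
    return routes[0]

def as3_route(as2_rov, as2_prefers_attacker, as3_rov):
    route = as2_export(as2_rov, as2_prefers_attacker)
    if route is None or (as3_rov and not rov_ok(route)):
        return None  # disconnected
    return route["via"]

# Collateral benefit: AS2 filters, so AS3 is protected without doing ROV.
print(as3_route(as2_rov=True, as2_prefers_attacker=True, as3_rov=False))   # AS1
# Disconnection: AS2 forwards the preferred hijack; AS3 filters and is cut off.
print(as3_route(as2_rov=False, as2_prefers_attacker=True, as3_rov=True))   # None
```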

So, lastly, we want to quantify the security benefits under partial adoption, and to do that we use a simulation framework. We build an AS‑level network map of the Internet using the information from CAIDA, and in each iteration of our simulation we pick a victim and an attacker. We assume that the victim issued a ROA for its prefix, and then we pick a set of adopter ASes that perform ROV. Finally, we see which ASes would choose to route their traffic to the victim and which to the attacker, and that allows us to compute the attacker's success rate.
So, in this graph, we have on the Y axis the attacker's success rate, and on the X axis we have the expected deployment at the top ISPs. This graph tries to measure what would happen under partial adoption, and the green line shows what happens when every one of these top ISPs actually adopts. So the right‑most data point is for 100 ISPs, meaning that we take the top 100 ISPs, when sorted by the number of customer ASes that they have, and each of them adopts; the rest of the Internet doesn't adopt. And you can see that the prefix hijack attacker's success rate really diminishes. The red line, in contrast, shows what would happen when the adoption rate is only 25%. So, at the right‑most data point, where we have an expected deployment of 100 ISPs, that means that, out of the top 400 ISPs, we have picked the adopting set where each AS adopts with probability one quarter. So on average we expect to get 100 ISPs, but now the adopter set is more spread out and not so concentrated at the top ISPs, and we can see that the attacker is actually doing quite well. So, although the adoption set is similar in size, its effect is very different.
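
The simulation loop just described can be sketched, very loosely, as follows. Unlike the real experiments, this ignores the CAIDA topology, BGP route selection and collateral effects; path lengths are just random draws, so only the overall shape (more adopters, lower attacker success) carries over:

```python
import random

# Much-simplified sketch of the methodology: pick a set of ROV adopters,
# then count how many ASes end up routing to the attacker. Adopters
# discard the origin-invalid announcement; everyone else just prefers
# the shorter of two randomly drawn AS-path lengths.
random.seed(7)  # deterministic toy runs

def attacker_success(n_ases, n_adopters, trials=2000):
    total = 0.0
    for _ in range(trials):
        adopters = set(random.sample(range(n_ases), n_adopters))
        hijacked = 0
        for asn in range(n_ases):
            if asn in adopters:
                continue  # ROV adopter: invalid route discarded
            d_attacker = random.randint(1, 5)  # toy path lengths
            d_victim = random.randint(1, 5)
            if d_attacker < d_victim:  # shorter path wins in BGP
                hijacked += 1
        total += hijacked / n_ases
    return total / trials

# More adopters should push the attacker's success rate down.
print(attacker_success(100, 0) > attacker_success(100, 50))  # True
```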

For the sub‑prefix hijack, we see similar results. Again, the attacker is doing quite well when only 25% of the top ISPs adopt, but when the top 100 ISPs specifically adopt, the attacker's success rate really diminishes.

So, I will just conclude my talk and go to questions. What can we improve? Well, we can improve the accuracy of the information in the RPKI, and I presented ROAlert to do that. And it seems that the RPKI actually has the potential to provide some substantial benefits, and a sufficient condition for that is to get the top 100 ISPs to adopt. So, once the information in there is right, it might be good to incentivise those top ISPs to adopt, and we'll get the benefits.

So, with that, I'd like to thank you. We have a technical report available online with some more results and insights and I'll take questions now.

CHAIR: So. Questions?

AUDIENCE SPEAKER: Aaron Hughes, 6connect. In the modelling and simulation you did here, did you run this with tight ROAs or with loose ROAs?

YOSSI GILAD: Oh, we just assumed that the victim AS had a perfectly good ROA.

AUDIENCE SPEAKER: So, the reason I ask the question is, I think it's probably better not to recommend tight ROAs until a certain percentage of the global Internet supports dropping an invalid prefix, in that it at least gives you the chance to fight back with an equal‑size announcement if there's an attack on people who are not supporting it; you could win back a greater percentage of that traffic on the side that does support the RPKI infrastructure. I don't know what that number is, maybe that's the top 50 or the top 70, but at some point it becomes very useful to become strict. At the moment, though, I think staying loose, so that we can inject the same size against an attacker, is probably a good thing.

AUDIENCE SPEAKER: Job Snijders, NTT Communications. Contrary to the advice that you heard at this microphone previously, I would advise against overuse of the max length attribute, because it actually opens up the possibility for attackers to spoof the origin and inject more specifics. And the fact that there is actually friction about whether we should use max length or not, to either mitigate attacks or open ourselves up to specific ones, is worrying in itself. So, perhaps we need to drink a beer and figure this out.

YOSSI GILAD: I think, in general, I might worry that, actually, operators are not sure about how to use that. So, maybe if the operator is well‑informed and knows about these attacks and knows the effects that might happen if, you know, they choose whatever, to use a permissive maximum length or a very tight one, then that would be okay. But I feel like there is confusion and people might not know.


AUDIENCE SPEAKER: This is Alex Band from the RIPE NCC. Of course, when we made the user interface to create and manage ROAs, we had to make a choice: what would be a sensible default? We decided in the beginning that we wanted to set the max length as strictly as possible, so what we display in the user interface is what the user announces according to BGP, and we suggest the ROA for them with a maximum length that is exactly the same as the announcement. So, we don't do anything less specific by default. A network operator would have to explicitly loosen it up if they want to. But the default is to be as strict as possible.

In an earlier version of the interface, we just let people choose something, and they would go, like, "What is this maximum length thing? I don't know what that is. I'm only used to using route objects, so let me fill something in here, like a /32", which is what mostly happened in the beginning. But the vast majority ‑ I'm sure it's 90‑plus percent ‑ of all the ROAs that have been created within the RIPE NCC dataset have a maximum length that is exactly the same as the announcement that is being done. So there are hardly any loose ones out there.

AUDIENCE SPEAKER: Hi, Marco Gioanola, Arbor Networks. Talking about the max length, I wanted to point out that there is a legitimate case for more specific announcements, and that is DDoS mitigation across the Internet based on BGP diversion. I am a bit concerned that recommending not to use loose ROAs ‑ I mean, it's a good recommendation, but it removes flexibility for people to announce more specifics, for example for DDoS mitigation. Have you considered this type of, let's say, legitimate scenario for short‑term announcements of more specifics, and how they should be done properly with RPKI?


AUDIENCE SPEAKER: Randy Bush. There can be multiple ROAs for the same space. So I have my /16, I have a nice ROA for it. If somebody hits a /31 in that, I can have a ROA for that /31, for the DDoS mitigator who is...

AUDIENCE SPEAKER: My point is just that it's operational overload, you know; you end up creating ROAs for potentially anything, and that's a bit problematic for end customers.

CHAIR: Ruediger, do you have a question? No, okay. So any other questions? In which case, thank you very much.


So, our second speaker this afternoon is Constanze Dietrich and she'll be speaking about caught between security and time pressure, which is an empirical investigation of operators' perspective on security misconfiguration.

CONSTANZE DIETRICH: Hi, and today I will basically talk about the first results of my master's thesis. For starters: just last week there was a conference on national cybersecurity in Germany that brought up some interesting numbers. According to the Federal Criminal Police Office, one out of three German companies is at the receiving end of cyber crimes, and the damage that this causes to the economy is estimated at 50 billion euro per year. And it's only supposed to get worse. And the reason for me starting with this lame attempt to get your attention is, we see misconfigurations as a main reason for security incidents. Therefore, we investigate how they happen, and this on a rather personal level.

So in this talk I will first address a few examples of security misconfigurations and then explain our empirical approach. As you can guess by the rest of the outline, we are still at an early stage of our research. Nevertheless, the first glimpse into the topic turned out to give some quite surprising insights already. So, without further ado.

Security misconfigurations are basically quite simple errors in deploying an Internet service that lead to security issues. To name a few, we started by thinking of, for example, the accidental publication of passwords, which got pushed online with a bunch of other files, or are clearly visible because someone forgot to encrypt the login page. There's disabled or missing authentication that might allow everyone to log in as anyone, which never gets noticed because, you know, "it works for me".

And there is quite a variety but, for now, let's briefly look into some examples.

Running MongoDB on the Internet gives a whole new perspective on this. To revive your memory: in early 2015, 40,000 MongoDB instances were found on the Internet unprotected. As an example, there was a kind of job service platform that gets you the best developers matching your criteria; it crawled GitHub and aggregated names, e‑mail addresses, locations and of course GitHub profiles in a 65 gigabyte database. 8 million data sets in plain text, unprotected. And well, sure, you can get all this information by just looking at each individual profile, but seeing this pile of data outside of GitHub, probably having passed through data‑breach trading circles, just feels wrong.

Second one: having your TR‑069 interface publicly reachable and thousands of customers online. In late 2016, Deutsche Telekom left a remote maintenance interface unattended. Hence, 900,000 routers had to face a remote code execution attack. They were actually quite lucky under the circumstances, because the routers didn't have the operating system the attacker expected. What happened, though, is that there was a DoS vulnerability, causing the 900,000 routers not to get infected, but to crash.

And the last fail I want to mention: a German service which is basically a kind of car‑sharing service for private people. Although they had already withdrawn their services earlier that year, they kept an archive with all the user data, including bank accounts. It was AES‑encrypted, but what took all the riddling fun away was the key, which was stored right next to it in the same Cloud. So, exploring a variety of incidents, a few questions come up, don't they?

So, we assume the operator has had something to do with it, right? But still, who are they? Is it a storage admin misconfiguring a firewall? A database admin misconfiguring a switch? Or is it a programmer with root? Also, what gets misconfigured easily or often? And finally, why does it happen? Are they simply overworked? Are they caught in constant pressure to deliver, or are they missing a KPI for just doing things right? We don't know, and we still don't know it all, but we want to. And if you have been asking yourselves what to do with this kind of information: in the end, it's all about how to prevent misconfigurations. Are there any measures, design patterns, best practices, anything that allows for fewer security issues?

And who would be more suitable to ask than the operators? But there are a few things to keep in mind.

First, there is not a lot to build on. While configuration undoubtedly is a matter of usability, and misconfiguration might be the result of really bad usability, usability research in security mainly focuses on end users: how to make them choose a better password, how to make them care about e‑mail encryption. But the operators ‑ well, they are experts, and expert systems don't have to be usable, right?

So then, building on the little we have: asking people is quite hard in general, especially if you ask them about mistakes they might have made, and asking operations people is even harder. Not just because I'm not an operator myself and my vocabulary might lack a few terms, but because somehow they tend to be busy operating things. Also, a few of them seem to have had some unwanted experiences with unasked‑for solutions to their problems, so the promise of trying to make their work easier isn't necessarily a selling point here.

Finally, as always, you have to know what you want to know and you have to design your questions accordingly and unmistakably.

But despite all these difficulties, we gave it a try. And this slide is actually just in here because, working in science, you have got to show some method. But it didn't take long for us to realise that operators require a slightly adapted approach. So we went to the local sysadmins' regulars' table and talked to them while trying to stay sober for once. They appeared open and talkative over recreational beer and schnitzel, but we would soon see that, for most of them, it takes several e‑mail reminders to recruit them for a focus group. Although, I want to mention that one out of 76 replied right away. Thank you, you're awesome.

The next step was travelling back to, as my advisers said, the good times of the Internet, and searching IRC for operators looking for a break. Thanks to the DENOG channel, that worked out pretty well and gave us a pretty decent base to work with.

We were able to conduct five interviews and, after whining and screaming and rescheduling, like, three times, we could even get five volunteers to participate in an IRC focus group. And since we couldn't provide them with snacks and drinks like we would usually do, they were basically reimbursed with the chance to rant as much as they liked during these sessions.

And these revolved around three questions. First: did you ever encounter security misconfigurations? Which also includes the ones made by other operators.

Then: why did they happen, as far as you can remember?

And did a security misconfiguration incident change anything about this?

And there we go. All sorted out!

So the interviews were conducted as anonymously as IRC goes. I told them who I was, and they were told I wouldn't tell who they were. And you may notice that the answered questions aren't the ones I initially asked. That's on purpose.

So let's talk about what happened.

First, some things just never change. A few of these I mentioned earlier, like authentication that is disabled by default. But the main and ever‑present issue here is login data. Whether it's the manufacturer's default password that can be Googled in, like, three seconds, or "we're just testing, let's go with the usual admin‑admin combo for now" ‑ I guess you can guess what ends up being used in production.

The next category we identified is conventions. To assume that all‑caps names in shell scripts imply environment variables is kind of obvious, but still just a convention and not a law. And if you have only ever seen strictly spatially divided systems, each handling external or internal services exclusively, elsewhere you might open some unsuspected doors by doing just what you are used to doing.

The third class is accidents. This contains all those misconfigurations that happen because the operator, for whatever reason, just wasn't careful enough. That might be as simple as a typo, or working in ten different windows and finally picking the wrong one, or accidentally deploying a debug configuration that lacks essential filters.

And lastly, the lost, forgotten and abandoned. This includes all the issues that emerge because someone just didn't take enough care of a system. Like disregarded security patches, or the forgotten printer that hides behind cleaning supplies somewhere in that room at the other end of the floor. But it also contains the, as we call them, "not my department" issues that emerge wherever there is some kind of responsibility gap between at least two parties.

Now we know what the issues are, kind of. What's their origin?

The first reason is lack of experience. And this is what we have been told by pretty much every participant, exactly like that. Newly‑fledged graduates, still wet behind the ears, might use online resources that make them tear down a firewall without even realising it, and they might also find comfort in keeping default credentials for whenever they need remote support from the manufacturer.

Even new employees who just started and aren't familiar with the existing system yet might make simple errors, especially if there is no decent documentation to be found.

Often mentioned was also this one programmer ‑ or was he a web designer? Anyway, he was seen with a Linux screen, so he must be eligible for the job. So at all levels of experience there is some kind of lack to be found. Interestingly, though, all the participants could confirm that the firsthand experience of a security incident made them more sensitive to security in general. However, there will be a little disclaimer on that later on.

The next reason is processes. And this is where the responsibility starts slowly drifting towards the management. If there are no specifications at all, or only really loose ones, basically everyone does what he feels like doing, which leads to quotes like: "The operator password is written in the Active Directory and has been there for years. We cannot change it. Who knows which software stops working because it got hard‑coded somewhere." And that already indicates another issue, which is insufficient communication.

Without announcement, for example, random features get deployed, and guess what? They have never been tested because there's no process for that. And also, the other way around: if processes are too strict, operators tend to just check off to‑do lists without thinking outside these check boxes which can lead to the "not my department" issues.

The next explanation is betrayed faith in suppliers. If the manufacturer says "won't break, ever", the management especially is tempted to believe them because, well, it's less work than building a backstop for something that's supposed to never break, right? And this also includes certificates. It sounds nice if something is certified. But often no one actually checks what was certified, or for which system, so it lies around, never getting updated.

The next one is rather rare, but still worth mentioning: backfiring legacy support. Yes, philanthropy is nice sometimes, but using outdated encryption just because we don't want to exclude the people with really old and barely capable Android smartphones might open doors for rather bad guests. And if you are missing an explanation by now, do not worry. It is probably part of the last category.

Unwise budgeting.
So the principal wants abstract security, right, but doesn't actually budget any resources for that. "You are the operators, just make it secure ‑ but finish by tomorrow evening, please!" So often there is just no time for tests, reviews or documentation, and the operators are basically compelled to build makeshift arrangements and hope they are gone before it explodes.

And to pick up the issue of accidents here: they have a disposition to happen when people are overworked. Furthermore, there is this: "Well, the software we use only runs on Windows XP. Do you even have an idea how much it would cost to upgrade? It works, doesn't it?"

So, let's say there was a purchase of new technology. We hear: "Well, this is new and shiny, but the usage can't be so different, right? So if there is any issue, just Google it." And there goes the advanced training. And to be blunt here, I don't want to portray operators as only reserved and obedient people. Some of them would rather work on cool new features, or they do voice concerns to the management, but, rather, a lot of them have superiors that just don't get the content of the operators' remarks. And it gets even worse if they are only hiring external consultants. "QA, automation ‑ bullshit, we don't need this." I guess you can see the point here.

Now, on how to prevent this. Yes, a lot of decent solutions probably aren't in your personal scope, but there are a few things you can do yourselves if you are an operator. If you are a senior, grab yourself a freshman and provide for experience. Pelt them with your own best practices, show them disastrous postmortems and let them write the documentation ‑ it's also a convenient way to check whether they got it. If processes are broken, or the holy faith in manufacturers just seems a little dubious to you, talk to the management; don't just hide behind the "not my department". And if you want to give it your all, get yourselves a management position, if you haven't already. Because it has been shown that experienced operators in management make a lot of sense: finally having someone that schedules time for reviews, documentation and testing ‑ like scanning, automation, all that stuff. And also for misconfigurations, because they happen, but if you have already scheduled time for, you know, building in safety nets or incident troubleshooting, it won't hurt as much.

So what's our conclusion so far?

As I said, misconfigurations happen, and sometimes that might even be a good thing, because it makes operators more sensitive to security. And not only them: often all these nice ideas regarding, you know, smarter processes and smarter budgeting are already there but don't get implemented until it actually hurt.

Here is the disclaimer, though, that I promised earlier. In these interviews and the focus group, I only talked to seriously interested and zealous operators that are aware of what they know and what they do ‑‑ probably guys like you. And we don't know about those operators that have been sitting on their jobs for 30 years and have meanwhile stopped trying to stay up to date, or those who settled for being the admin without actually having a thing for the job.

But getting those guys to talk is probably a bigger task, or rather material for another study. But even without them we are not done yet. So if you happen to know someone that is an operator, please feel free to share your experiences with us. And also make sure to fill out the questionnaire, which is supposed to be released in about four to six weeks ‑‑ your input is very much needed.

We could already get some very vivid insights, and we think that with further, quantitative research, we'll at least be able to provide a scientifically grounded plea to all these decision makers out there. And to be honest, it would be nice to have an actual solution here.

So, our question to you here is: How may we help you?


CHAIR: Okay. Do we have a question?

AUDIENCE SPEAKER: Jen Linkova. One comment. I think when you are talking about misconfiguration, you are actually covering a few different things, because one situation is when I do know how my systems should look and, suddenly, because of human error or something else, the real configuration deviates from the expected scenario. And a completely different situation is when nobody has ever done any risk assessment and we do not have any intended state of the system ‑‑ in this case it's probably not even misconfiguration. It is probably more on the process side of things, and the risk assessment thing, because absolutely the same configuration might be, for one system, an acceptable risk which I looked at and decided, okay, the prevention costs me too much so I'm okay with accepting this, and for another system it might be: no, it's definitely a security hole. So it would probably be interesting to look and try to differentiate between the two scenarios: one where people made a mistake unintentionally and deviated from the expected thing, and the other where they don't even know what to expect.

CONSTANZE DIETRICH: That's true, but we still want to tackle them all. Also the dubious risk management that went wrong, but also the accidents that happen because people weren't careful because they were overworked ‑‑ but, you know, we don't know why it happens. So...

AUDIENCE SPEAKER: Benedikt Stockebrand. I'm not going to start a rant ‑‑ I'd love to, though ‑‑ but what I see right now in a project is pretty much everybody there is completely ‑‑ it's a combination of being overtaxed with the technology they are trying to use plus some pretty impressive level of hubris ‑ I'm a professional, I don't make mistakes ‑ and when you combine these two things throughout the entire organisation ‑‑ it's not just some of them, in some areas it's all of them ‑‑ it's a recipe for disaster. The important thing here, to come back to your original question: it gets even worse when you talk about home routers or anything; that's where the real problem is. And it's not so much a problem on the operator's side ‑‑ it is as well ‑‑ but it's also a problem of building products: they have to be complex, with lots of knobs to tweak, otherwise they can't be usable or whatever. Take a look at a run‑of‑the‑mill Cisco router, whatever, and then try to find somebody who can actually explain every single feature you find on these things. It's pretty much hopeless, except for those two or three people who actually work for Cisco, and the same for Juniper. That is a huge problem. We are using technology that's so overly complex that it's really, really troublesome for people to come to grips with it, and that, to a large degree, leads to these misconfigurations.


CHAIR: Anybody else?

AUDIENCE SPEAKER: Hi, Daniel Karrenberg, RIPE NCC. Where did you get the nice cartoons?


CONSTANZE DIETRICH: So, two more things. If you have further questions or feedback, please come find me during the meeting, or ask for a limited‑edition business card. And also, thanks to the RACI committee for giving me the chance to present our research here. I really, really appreciate this. So thank you.

CHAIR: And last comment from ‑‑

AUDIENCE SPEAKER: Randy Bush. IIJ. I really wish somebody else would have said it. But data driven automation. How many times do we have to say it? Data driven automation!


CHAIR: And our third talk for this session is Enno Rey, why IPv6 security is so hard.

ENNO REY: Hi, good afternoon. That's a long, bulky German title. Actually, I am going to discuss ‑‑ I am trying to lay out ‑‑ why I think there are some elements in IPv6 which make the life of a security professional much harder than it might, or should, be.

A very quick note on who I am. I have a background in networking in the nineties as a sysadmin and operator. I moved to the security space in '97 and have been doing security since then in various roles. And I am involved in several activities when it comes to IPv6.

My talk is split into three main pieces. First, I want to lay out what, as a security professional, one would expect or hope for when it comes to network properties and network security. Then we'll do a mapping: here are the technical properties of IPv6 in the light of those objectives I laid out in the first part. And then I'll try to draw some conclusions.

Let me add a disclaimer first. This is not a rant about the IETF. I am aware that, in the following, I will step on the toes of many people. I do think that in the IETF, in the last 20 years when it comes to IPv6, a number of things might have evolved or developed in the wrong direction, and I do think that this is partly attributable to, like, an institutional failure of how working groups in the IETF are organised, who is taking part there and, say, which type of agenda they are trying to play. And I think there is a strong disconnect nowadays between what's happening in the world out there, especially when it comes to IPv6 deployments, and what type of vision of the world some people in the IETF working groups, especially in 6man, have.

Actually, this slide was meant to be an animated one; I lost that when the conversion to PDF was made. The thing is, when you look at this picture down here, you might wonder: why is it there? When you join a 6man meeting at the IETF nowadays, there is such a parallel universe you might ask yourself: wait, does the Hilton put LSD into the drinking water when they meet? This is so disconnected. I'm really wondering every time I go there ‑‑ and I don't any longer. But that's all I want to say in this context. Now, let me get a bit more objective on the technical facts.

From a security perspective, when doing network security, there are some properties, some objectives, which I have sympathy for and which we strive for when we try to build trustworthy and secure networks. They can be broken down to, say, predictability ‑‑ in RFC 2828 there is this old definition of trust that essentially says: a system behaves as expected. That's probably good for security: if you have an understanding of what you can expect from a system, and you can observe that the system behaves the way you expect it to, that creates trust, and that creates the property of trustworthiness.

Second, it might be helpful to identify who is involved in transactions happening over the network. When you look at the Rabobank presentation later on, this topic will come up too. Obviously, for several reasons, they are interested to know who is engaged in which type of network connections. And it might be helpful to be able to filter stuff. Filtering is not the only security control one has at hand in network security, but it's an important one.

These are quite simple ones. On a basic level, given I have done security for quite some time, there are some things I like to see and which I think are helpful for security, which are:

Simplicity. Keeping things simple, in a certain way, might be good for security, and especially when it comes to operational security.

Avoiding complexity. This is not necessarily the same thing, but it goes along the same lines. And minimising the amount of state. These three might be helpful when you are interested in running a secure and stable and resilient network.

As for the keep‑it‑simple thing: usually there is a direct relationship between the lines of code a certain component has and the number of vulnerabilities. So we can somewhat state ‑‑ it's not that easy, but there is a relationship ‑‑ that fewer lines of code equates to fewer vulnerabilities a system or a component or a certain entity might have.

Second, parsing stuff ‑‑ parsing network traffic, parsing packets ‑‑ needs CPU cycles, needs performance, and the higher the amount of parsing, the worse for security, and the more susceptible components become to denial of service conditions. And obviously, the more protocols you have, the higher the exposure to attacks might be.

From that angle, keeping things simple probably has a benefit for security. And there is this ‑‑ this is kind of a tribute to Geoff Huston, as I know he likes to employ it ‑‑ in the European history of ideas there was this thing called Ockham's razor, going back to William of Ockham in the 14th century, who essentially said: when you have several hypotheses at your hands and you have to decide for one explanation, choose the one that is simpler. And in networking, actually, this is in RFC 1925, the twelve networking truths. The last one: in protocol design, you might have reached a state of perfection when you can't remove anything. We will get back to this in a second when we discuss the protocol design of IPv6.

The second major objective: avoid complexity. To understand this one ‑‑ my understanding of complexity; it's a complex term in itself, there are different definitions, but the one that I like to use is the one from the Merriam‑Webster dictionary, which essentially says something is considered to be complex once it's composed of many parts. These parts have, say, relationships, have interactions, and the whole setting has so many relationships and interactions that it gets hard to understand. Many parts, many interactions, hard to understand.

And the hard‑to‑understand part is especially important from an operations perspective, as understanding how something works usually puts you in the position to develop a mental model of the overall system, which allows you to predict the output of the system based on an input. And obviously this is helpful for troubleshooting and configuring networks ‑‑ to have an understanding of what can be expected, what is the expected outcome of something. And it's helpful for security too.

So, the complexity part. Reducing complexity probably has a benefit for security as well. Just look at IPv6: in the IPv6 space, you have so many interactions which are complex by themselves. The interaction between SLAAC and DHCP; the one between ND and MLD ‑‑ do they need each other, do you need to run them both? On Linux you can filter MLD and it doesn't hurt anything; on Windows you can't. There is the relationship between router advertisement flags and the local routing table, and how this influences address selection. There are all these interactions, and if you now consider the complex or heterogenous networks that you have, it becomes incredibly hard to have a mental model of input and expected output for that.

So reducing complexity probably is good for security from that angle. And there is a third one that I mentioned: minimise state. The notion of state that I use here goes back to this great book of Russ White and Geoff. They define the amount of state information you have in a network along ‑‑ it's actually four dimensions in the book, but I only use three here. First, the simple amount of state, like entries in a routing table. Then the frequency, or the speed, of state changes might come into play ‑‑ like route flapping ‑‑ which has an impact on stability, and maybe on security in a certain sense. And there is this very interesting construct of surface. To put an example from the routing protocol space: once you do a redistribution, you have two different routing protocols that interact at a certain point, and it's probably important for the overall system and its stability, and maybe even its core security properties, at how many points you do this interaction and how deep it is ‑‑ is it just one taking routes, or are the metrics adjusted and stuff? Now look at this from an IPv6 perspective: a simple packet like a router advertisement, when it comes to the surface thing, interacts with every system on the local link, and given the complexity of router advertisements themselves, there is a huge amount of interaction. So a simple router advertisement creates a lot of state, which, as we will see, might not be beneficial for security.

So let's keep these three in mind: avoid complexity, minimise state, and simple is good.

I will skip some slides which are hidden in the PPT, but, now, these are the objectives. Now let's look at the technical reality of IPv6.

Well, we could say technical reality, technical properties ‑‑ look at the RFCs. This is supposed to be funny, as there are many, many RFCs which have IPv6 in the title. Let's again get a bit more factual and technical.

Let's look at four main elements of the IPv6 design.

There is the idea of replacing broadcast traffic by multicast. There is the idea of having multiple addresses and address types. Those of you who know me won't be surprised to see extension headers mentioned. And there is the thing of what I call parameter provisioning ‑‑ I'll lay out in a second what this means.

Let's start with the mostly innocuous one, that is replacing broadcast by multicast. Obviously, I think multicast has a higher amount of state, and usually we can assume multicast communication, as opposed to broadcast communication, requires more parsing, which is not beneficial when it comes to security. The thing is, in the early years of IPv6 I frequently gave workshops on IPv6, and when I had to explain solicited node multicast, it was time of my life and of the people listening which I considered wasted. In the end I managed to explain it: well, you take the low 24 bits of an address and add them to a certain prefix ‑‑ don't ask me why you add this specific piece, but you add it ‑‑ and then transform it to a layer 2 multicast address by prepending 33:33 to the low 32 bits and stuff. At the end of the day they managed to understand this, but they looked at me and said: why do we need this? And I could never give a really convincing answer why a simple ARP was replaced by a much more complex, multicast‑based thing, neighbour solicitation and that nice multicast group we have there. And what do we save? On a network level we don't save anything. It's just one interrupt which might get saved on a local system.
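For readers who want to follow the derivation he describes, here is an illustrative, stdlib‑only Python sketch of the solicited‑node construction (RFC 4291: low 24 bits of the unicast address appended to ff02::1:ff00:0/104) and its layer 2 mapping (RFC 2464: 33:33 plus the low 32 bits of the group). This is not code from the talk, just a worked example:

```python
import ipaddress

def solicited_node_multicast(addr: str) -> str:
    """Solicited-node multicast group for a unicast IPv6 address:
    the low 24 bits of the address OR-ed into ff02::1:ff00:0/104."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

def ethernet_multicast(group: str) -> str:
    """Layer 2 mapping: 33:33 followed by the low 32 bits of the group."""
    low32 = int(ipaddress.IPv6Address(group)) & 0xFFFFFFFF
    return "33:33:" + ":".join(f"{(low32 >> s) & 0xFF:02x}" for s in (24, 16, 8, 0))
```

For example, `fe80::2aa:ff:fe28:9c5a` maps to the group `ff02::1:ff28:9c5a` and the MAC `33:33:ff:28:9c:5a` ‑‑ which is exactly the two‑step dance he says his workshop attendees found so hard to motivate.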

This is mostly innocuous; this doesn't hurt. Then there is this thing: the idea of having multiple addresses, and multiple address types. The concept itself is not new, but in IPv6, as opposed to IPv4, it was kind of institutionalised. The thing is, this creates, I would say, two problems from a security perspective.

One is increased state. Increased local routing tables, increased space in the kernel for the addresses and their properties, and there is interaction with router advertisements, with the flags, and there are timers ‑‑ for each entry in the neighbour table you have timers; all this is state. But we might live with this. There is another thing, though: it creates a decision problem. Let's look at, say, a simple data centre environment with two segments. And let's assume this environment, for management purposes, has come up with the idea of using ULAs. I don't think this is a good idea, but I know people who consider this, and pretty much everybody I talked to thinks, except for specific environments, that ULAs are a bad idea ‑‑ which would mean we could get rid of them if the IETF had a process for getting rid of stuff, but I'll get back to this later.

Let's look at this one. One segment, two servers with three addresses each, and the other segment this one. So, once this guy wants to talk to that one, he has to choose between three possible source addresses and three possible destination addresses. Which might be solvable; there are some IETF documents for this specific thing. But say that guy over there wants to talk to a system over here, and that one has three addresses and this one doesn't ‑‑ well, this one doesn't need a global address, it's just an internal management system. Now, at the latest, it becomes interesting and complex. Which address to choose from an operations perspective, which one to put in the DNS? And we might have a problem of state here, just to give you an example. We recently did this in a lab: on this side there were Windows 8 systems and a Mac OS system, and on that side there was a DNS server running on a Linux system, and that Linux flavour created a privacy address ‑‑ it was participating in SLAAC and this wasn't disabled for the lab purpose. And the impact was that when one of the systems over here sent a DNS query, the DNS response was sent from a different address, which the Windows systems discarded ‑‑ like, oh, I asked Frank and now Rob is answering, I don't take this. The Mac OS systems did not discard it. So we had different behaviour in one single network topology. This is the type of problem that arises once stuff comes up which creates decision problems.
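The decision problem he sketches is what RFC 6724 (default address selection) tries to solve. As a toy illustration ‑‑ not the real algorithm, which has a whole list of higher‑priority rules ‑‑ here is the longest‑matching‑prefix rule that sits near the bottom of that rule list, in Python:

```python
import ipaddress

def common_prefix_len(a: str, b: str) -> int:
    """Number of leading bits shared by two IPv6 addresses."""
    x = int(ipaddress.IPv6Address(a)) ^ int(ipaddress.IPv6Address(b))
    return 128 - x.bit_length()

def pick_source(candidates: list[str], destination: str) -> str:
    """Crude stand-in for RFC 6724 rule 8: prefer the candidate source
    address sharing the longest prefix with the destination. The real
    algorithm applies several rules first (scope, deprecated addresses,
    labels, temporary addresses, ...) before it ever gets here."""
    return max(candidates, key=lambda c: common_prefix_len(c, destination))
```

With candidates `2001:db8:1::10` (global) and `fd00:1::10` (ULA) and a global destination, the global source wins ‑‑ but note that every host on the segment has to run this machinery for every connection, which is exactly the state and decision overhead he is complaining about.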

And this brings me to this one, and this is worth another rant. This is, I think, one of the most ridiculous RFCs which have been issued in the last 12 months. And this one is interesting from several angles. Actually, it proposes to use multiple addresses ‑‑ something between, like, 20 addresses up to a /64 for individual systems ‑‑ which might be either a good idea or not; let me stay away from that debate here. The thing is, it was brought up by guys from two smartphone vendors. I know that at Google the profits are mainly made by advertising, and the person from Google who was participating in the mailing list repeatedly said: well, from an Android perspective this is a good idea for this or that reason. So it's groups with a smartphone focus having brought up an RFC. One of the persons never participated in the mailing list ‑‑ I was in 6man both in Buenos Aires and in Berlin, and I didn't spot any contribution from one of the guys, so you can draw your conclusions why maybe the name is there. But the thing is, this one ended up as a best current practice. I mean, if I made a poll: how many of you in the room think it's a great idea, and best current practice ‑‑ the best thing that you can do, a considered operational standard ‑‑ to have between 20 and a full /64 of addresses for individual systems? Can you please raise your hand, everybody who thinks this is a good idea? Thank you, I spotted maybe one person.

So this is what's ridiculous: that this ended up as best current practice. I mean, there are a number of crazy or maybe debatable RFCs anyway, but best current practice ‑‑ that's interesting, to say the least.

But back to technical properties of IPv6.

Protocol design. Like 20 years ago, the decision was made not to design a simple protocol for, say, a specific set of requirements, but to create an extensible protocol which has space for many ideas and versions and so on. That's a decision which was taken, and taken fully seriously. My approach to life is: I think reasonable people take reasonable decisions for reasonable reasons ‑‑ I'm not joking here. So probably the guys ‑‑ the decision was made, let's take it like this.

But the thing is, an extensible protocol ‑‑ one that has TLVs, which has options, which has extension headers ‑‑ has less predictability, higher complexity, requires more parsing, requires more state, which is all non‑beneficial for security. It ends up like this: a datagram can be thought of as a sequence of things ‑‑ headers, actually, or at least the header part ‑‑ and parsing becomes a function of certain parameters: types of extension headers, their number and so on; there is all this freedom in RFC 2460. And once you come up with such a protocol design, it essentially needs, or calls for, the robustness principle. The thing is: the robustness principle, which I like as an approach to humanity ‑‑ be liberal when you interact with others ‑‑ but for the Internet, is the robustness principle a good idea? I'm not so sure in 2017. But for an extensible protocol, it is, strictly speaking, a necessary accompanying property. And there are a number of security problems related to extension headers: increased parsing complexity, and evasion of all types of blacklist security controls. A blacklist security control is one where you have an approach of allowing most of the stuff but denying specific things, and this becomes hard once there are extension headers. We managed in the lab, and we do so constantly in penetration tests, to evade security controls which are based on blacklists, like most first hop security and the like.
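To see why the parsing burden falls on every filter, consider what a device has to do before it even knows the upper‑layer protocol of an IPv6 packet: walk a variable‑length chain of next‑header values. This is a simplified, illustrative Python sketch (real stacks handle more header types and many error cases):

```python
# Protocol numbers for some common IPv6 extension headers.
EXT_HDRS = {0: "hop-by-hop", 43: "routing", 44: "fragment", 60: "dest-opts"}

def walk_chain(first_nh: int, payload: bytes) -> list:
    """Walk an IPv6 extension-header chain starting from the fixed
    header's Next Header value, returning the headers seen and, last,
    the upper-layer protocol number. A filter that wants to find the
    upper layer (say, ICMPv6 = 58) must do this variable-length parse
    on every packet."""
    seen, nh, off = [], first_nh, 0
    while nh in EXT_HDRS and off + 2 <= len(payload):
        seen.append(EXT_HDRS[nh])
        next_nh = payload[off]
        # Fragment header is a fixed 8 bytes; the others carry a length
        # field counted in 8-octet units, excluding the first 8 octets.
        off += 8 if nh == 44 else (payload[off + 1] + 1) * 8
        nh = next_nh
    seen.append(nh)
    return seen
```

A packet carrying destination options, then routing, then ICMPv6 yields `["dest-opts", "routing", 58]` ‑‑ three parse steps before a blacklist rule on ICMPv6 can even match, and an attacker chooses how long that chain is.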

This brings me to the last one: parameter provisioning.

In IPv6, in RFC 2460, there is a notion of IPv6 nodes, and those can be hosts or routers, and a router is supposed to be a system which routes. Unfortunately, in RFC 2461 there is this one sentence, "Routers advertise their presence together with various parameters", which means, at the end of the day, a router is not just a device forwarding packets but one provisioning, configuring the whole local link. Combine this with the trust model of IPv6, which is: I trust all my neighbours on the local link. I don't ask any questions. They can send me whatever they want, and I will accept it, and I will process it, and once it's an ICMPv6 type 134 packet, I will do whatever instructions are in there, based on: well, it's on the local link, so it must be good.

Now, one could come up with the idea: well, we can filter this, right? The problem is, it can still easily be evaded today by extension headers. And I know there are people saying: wait, we have RFC 6980, which essentially prescribes: do not process fragmented neighbour discovery packets. The problem is, getting RFC 6980 right is incredibly hard. This is from the latest IPv6 testing with Windows systems ‑‑ and I have to state here that Microsoft has, overall, a high degree of maturity when it comes to IPv6, to their stacks and capabilities; Microsoft does a good job when it comes to IPv6 from many angles. And we did some testing ‑‑ actually, I did this testing: is it possible to convince, in that case, a Microsoft Server 2016 system to accept router advertisements with fake parameters, even though they apparently have a 6980 implementation to some degree, and with RA Guard enabled on Cisco devices? And it turns out, if you look at this, for example, let's go with this one: once you send the router advertisement split into three fragments, with the fragmentable part containing the extension headers destination options, routing, destination options, routing ‑‑ which is a fully legitimate packet ‑‑ it can get through RA Guard and the Windows system will process it. This is because the complexity is so high. How would you counter this in an easy and manageable and still performant way?
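Why the evasion works can be shown with a small thought experiment in code: a stateless device inspecting only the first fragment can follow the header chain only up to the fragment boundary, so it never reaches the ICMPv6 type 134 that would identify a router advertisement. This is an illustrative sketch of that limitation, not the actual RA Guard logic:

```python
EXT_HDRS = {0, 43, 44, 60}  # hop-by-hop, routing, fragment, dest-opts

def upper_layer_of_fragment(first_nh: int, frag: bytes):
    """What a stateless first-fragment inspector can learn: follow the
    header chain, but only within the bytes of this fragment. Returns
    the upper-layer protocol number, or None if the chain runs past the
    fragment boundary (i.e. continues in a later fragment)."""
    nh, off = first_nh, 0
    while nh in EXT_HDRS:
        if off + 8 > len(frag):
            return None  # rest of the chain is in a later fragment
        nh, off = frag[off], off + (8 if nh == 44 else (frag[off + 1] + 1) * 8)
    return nh

# First fragment of the attack he describes: fragment header, then
# dest-opts, then routing -- but the chain (and the RA itself, ICMPv6
# type 134) continues in fragments two and three.
frag1 = bytes([60, 0, 0, 0, 0, 0, 0, 0,   # fragment header -> dest-opts
               43, 0, 0, 0, 0, 0, 0, 0])  # dest-opts -> routing (truncated)
```

Here `upper_layer_of_fragment(44, frag1)` returns `None`: the inspector cannot prove the packet is a router advertisement, so a filter that only drops what it can positively classify lets it through.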

So, one could ask: well, these are design decisions taken 20 years ago. We had 20 years' time, we had 50 IETF meetings since RFC 2460 was published, to cure this stuff. But you know, in the IETF, once a better idea or understanding is gained, it's not like the old stuff is withdrawn. The old stuff is just deprecated, which means it still stacks up in the stack, in the code, and it means there are different generations of IPv6. And that adds complexity: you have to find out, in the heterogenous systems I have, which of those standards do they actually support?

So conclusions:
IPv6: unfortunately, much more complex, much more state needed, both of which are non‑beneficial, which means IPv6 security is much harder. Usually I like to try and end on a positive note; this is particularly hard here. My advice would be: try to understand ‑‑ I hope my contribution here is to make clear ‑‑ these interactions and these things you have to keep in mind when you are interested in running a stable, resilient, secure network. Minimise complexity wherever you can. Drop extension headers except for AH and ESP wherever you can. Simple addressing schemes. Limit the interactions. Think about disabling MLD if you can. All this stuff. This is the best advice I can provide here: try to understand the interactions and the complexity, and get rid of it wherever you can.
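His filtering advice can be stated as a tiny whitelist rule. As an illustrative sketch ‑‑ protocol numbers from the IANA registry, not a real firewall configuration ‑‑ in Python:

```python
# IPv6 extension-header protocol numbers, including the two he would keep.
EXTENSION_HEADERS = {0: "hop-by-hop", 43: "routing", 44: "fragment",
                     50: "ESP", 51: "AH", 60: "dest-opts"}
ALLOWED = {50, 51}  # keep AH and ESP, drop the rest

def verdict(header_chain: list) -> str:
    """Apply the talk's closing advice as a whitelist: permit a packet
    only if any extension headers in its chain are AH or ESP. Upper-layer
    protocols (TCP = 6, UDP = 17, ICMPv6 = 58) pass untouched."""
    for proto in header_chain:
        if proto in EXTENSION_HEADERS and proto not in ALLOWED:
            return "drop"
    return "permit"
```

So a plain TCP packet (`[6]`) or an AH‑protected one (`[51, 6]`) is permitted, while the dest‑opts/routing chain from the RA Guard evasion (`[60, 43, 58]`) is dropped ‑‑ the whitelist shape is what makes this robust, in contrast to the blacklist controls he showed being evaded. Note that, as Geoff Huston points out in the Q&A below, dropping the fragment header has real costs for large DNS responses.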

Thank you for spending the last 30 minutes with me.


CHAIR: Thank you. That certainly didn't sound in any way like a rant about the IETF at all. So, questions?

AUDIENCE SPEAKER: Jen Linkova. I don't even know where to start. First of all, we have a lot of complex things. I think BGP is complex, RSVP‑TE is complex, IPv6 is complex. And, to be honest, in the complexity cases you mention, I see very good reasons for having every single element of those complexities. They were not added just because people like inventing stuff; they were designed to solve particular problems. And in some cases, obviously, it might not be your problem, so you don't like that part, but for some systems it's required. And as far as I know, there are some implementations of ARP reporting neighbour state... so I would disagree with the statement that ARP is much simpler than neighbour discovery and so on. So probably my take‑away from your presentation is that we need more educational activities in that area, right? So probably we need to explain how neighbour discovery works, and, for example, the solicited node multicast address ‑‑ well, I don't think it's more complex than the whole multicast story, which is also very complicated, right? So I believe the right way to help here is education, and I think we need to fix that problem in addition to complaining about complexity.

AUDIENCE SPEAKER: Benedikt Stockebrand. Yes, IPv6 is way more complex, especially with things like STUN for IPv6, which we need because of NAT for IPv6, and other things. All in all, from my experience, in a lot of ways IPv4 is more complex because of the various work‑arounds we introduced to deal with the address shortage alone. Yes, talking about neighbour discovery, it is more complex ‑‑ maybe for reasons, maybe not, whatever; it's possibly a matter of personal taste ‑‑ but I think just blaming it on IPv6 is kind of unfair. The other thing is, IPv6 is about 25 years old now; work on it started in 1990, when things were slightly different. People were thinking along ways that are kind of antiquated by our standards today. And the biggest problem I see when it comes to neighbour discovery and all this stuff is actually that people were still thinking that multiple access networks made sense, because in a lot of ways the old telephony people were right: point to point saves a lot of trouble, and we wouldn't have a lot of the problems we have these days. Only at that time the charter was, basically, we build everything on Ethernet, and that's something that's really hard to get rid of these days.

AUDIENCE SPEAKER: Yan Filyurin, Bloomberg. So, Jen didn't know where to start. I do. I have actually read Russ's and Geoff's book, where they do go into complexity, and especially into the interaction surfaces. For some of us who actually have to operate all forms of local area networks, data centre networks, and other kinds of things that require switching and redundancy: is the complexity of IPv6 that you have described really worse than the complexity of the interaction surface that all the layer 2 and layer 3 setups create? The truth of the matter is, IPv6 may have those complexities, but it will save a lot of people's sleep if it removes that whole layer 2 / layer 3 interaction.

AUDIENCE SPEAKER: Philip Homburg, not speaking on behalf of the RIPE NCC. So, as somebody who wrote an IPv6 stack, I completely disagree with Jen. I think IPv6 is so incredibly much more complex than IPv4, and if you then also add all the interaction we added between v4 and IPv6, it just goes through the roof. But one thing I notice is that we all have our favourite special case, and that's what Jen says: there is a reason for everything. And we keep piling on those special cases, and there doesn't seem to be a mechanism to say: well, we recognise your special case, but we're not doing it. And so we keep adding more and more special cases, because every operator has his own special case in his network and wants to have support from devices for that special case, and I think that's one of the places where complexity goes through the roof. I mean, we know now, after, say, 25 years of IPv6, that barely anybody uses extension headers, and yet when it's brought up people say: no, no, no, but we really need them, because next year we're going to use them. And then it's very hard to say: well, you should have done that ten years ago, now we're not doing it any more. So I think that's sort of a root cause of why we get more and more complexity.

AUDIENCE SPEAKER: Jen Linkova. One last thing I forgot to mention. From my experience, you more or less have a constant amount of complexity in the system. What you can do is shift it from one part of the system to another ‑‑ for example, SDN versus traditional legacy networks: you are not making the network less complex, you are just shifting complexity from routers to controllers. It's the same here. I have been troubleshooting, as has been mentioned already, connectivity issues in v4 and v6, and in v6 it's much less easy to detect particular issues because of this complexity added to the problem. And the second point: I got the strange feeling that some of the problems you described are caused by those beneficial middle boxes, right? So it's a problem of someone who is running a beneficial middle box and does not see end‑to‑end, and if you start looking into the end‑to‑end communication, suddenly some of the problems just go away.

ENNO REY: May I ask a question here, Jen? To the best of my knowledge, you do filter extension headers once they enter your network.

AUDIENCE SPEAKER: I do not filter all of them. I do permit some extension headers.

ENNO REY: Which ones do you not filter? AH and ESP ‑‑ what else do you allow?

AUDIENCE SPEAKER: I can tell you, how can you make DNS work if you filter extension headers?

ENNO REY: I remember a presentation by Geoff Huston that DNS doesn't work so well with a fragment header in it, but that is another story.

CHAIR: We don't have time at this point to ‑‑

AUDIENCE SPEAKER: Geoff Huston, APNIC. Look, we're getting to a point ‑ and we're going to see it in the key rollover ‑ where we are going to move large packets in the DNS, and the problem, or whatever it is, the feature, of v6 is that you need to move fragmented packets in UDP, and the only way I can do that in v6 is using fragmentation extension headers. What we're finding right now is that the behaviour you are suggesting ‑ throwing away packets with extension headers ‑ is kind of making it impossible to do large packets in the DNS in v6. Now, the mantra that I am hearing all the time is: as long as v4 exists, it just doesn't matter. But if you ever believe that at some point we're going to run an all‑v6 Internet, you are going to have to run it either with extension headers or without the DNS. Your choice.
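Geoff's point rests on RFC 8200: IPv6 routers never fragment in flight, so a sender with a UDP datagram larger than the path MTU must split it itself, putting a Fragment extension header in every resulting packet. A rough back‑of‑the‑envelope sketch (the sizes and function name are illustrative, not a wire‑accurate implementation) shows what a large DNSSEC response looks like at the IPv6 minimum MTU:

```python
# Illustrative sketch: how many fragments, each carrying a Fragment
# extension header, a large UDP DNS response needs over IPv6.
# Sizes are assumptions for illustration, not a full implementation.

IPV6_HEADER = 40   # fixed IPv6 header
FRAG_HEADER = 8    # Fragment extension header, present in every fragment

def fragment_offsets(payload_len: int, path_mtu: int = 1280):
    """Return (offset, chunk_len) pairs for the fragmentable payload."""
    room = path_mtu - IPV6_HEADER - FRAG_HEADER
    chunk = room - (room % 8)      # all but the last fragment: multiple of 8 bytes
    frags, off = [], 0
    while off < payload_len:
        n = min(chunk, payload_len - off)
        frags.append((off, n))
        off += n
    return frags

# e.g. a ~4 KB DNSSEC response (UDP header + DNS message) at the
# IPv6 minimum MTU of 1280 bytes:
frags = fragment_offsets(8 + 4096)
```

Every one of those fragments carries an extension header, so a middlebox that drops extension‑header packets silently breaks the whole response.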

ENNO REY: There was one gentleman...

AUDIENCE SPEAKER: Marco Hogewoning. It's always somebody's problem. I guess it's mine. A large part of my job is trying to convince people to run IPv6. Thank you for ruining my day. You are not making it very much easier.

On a more positive note, as you and several others mentioned, to a large extent this is probably about education. This is about helping each other, and I would very much look forward to continuing the discussion with this community, actually writing those best current practices and helping people overcome the problems you have sketched. I think, yes, realistically, we can't really redesign the protocol from scratch, or we'll set ourselves back 20 years. I fear we have to work from what we have. So, please help us in a positive way in bringing this forward.

Just to balance your point a bit: IPv4 isn't perfect either.

ENNO REY: Actually, the main purpose of this talk was somewhat educational. From a security perspective, this is what we have to face, and we have to understand these intricacies to make well‑informed decisions. I'm not ‑ well, even if I was, which I'm not ‑ complaining about IPv6, but I am a security guy and I want to help run a stable and secure network, which is why I think these interactions must be understood.

AUDIENCE SPEAKER: Very briefly ‑ Daniel Karrenberg. I regret a little bit that this turned into a shoot‑the‑messenger session, and I would really like to thank Enno for doing this. And, as a final parting shot: those of you who remember Fred Brooks' The Mythical Man‑Month, reread it and look at the chapters on the second‑system effect and feature creep.

ENNO REY: Thank you everybody for the engaged discussion and again for your time.


CHAIR: So, that's it for this session. A quick reminder to rate the talks, please. Do the quiz, etc. And we're back in here in 30 minutes after the coffee break for the last Plenary Session of the day. Thanks.

(Coffee break)