DNS Working Group.

11 May 2017

At 2 p.m.:

SHANE KERR: Welcome everyone. My name is Shane and I am co-chairing this session along with Dave Knight and Jaap, who is probably not in the room yet; we had to run here from another meeting. I had my first disagreement about whether we should start the meeting on time or a couple of minutes late; we are doing it on time. Thank you for being here.

Let's start by going through our agenda. Welcome everyone, I appreciate you being here. There is a meeting in a few days and a lot of our usual DNS suspects are not in the room now, which makes me sad, and I hate those guys, but I will be there as well. Thanks to the scribe and chat monitor. I put the agenda up a week ago, which I know is quite late; I apologise for that, I was trying to track down some contributors, but I think we have a pretty good programme here today, so hopefully you will enjoy it. If anyone has anything they want to add to the agenda we can do that as well. I will go over the agenda really quickly. We are first going to go through the administrative stuff and talk about the Working Group details and action items, which will be entertaining I hope. Then we have a presentation by the RIPE NCC about what is going on, and a presentation with a very long title, which I believe came through RACI, on a software-based approach to generating and detecting flooding attacks; I think that's kind of cool. And we are having a report on a hackathon: the RIPE NCC has run a number of hackathons over the last couple of years, and this is the first DNS-focused one. Benno is going to be discussing DNS privacy enhanced services, which I think is a new area that has long been neglected in DNS. Ondrej is going to be talking about DNS violations, an interesting effort that some people in the DNS community are working on to try, from my point of view, to improve the quality of DNS, which is always good.

Then we have a few really short presentations about some tools: Jerry Lundstrom is going to be presenting Drool, and ending our session is a talk about dnsdist, which many of you may have worked with, or it will introduce you to the tool if you haven't used it yet. That is our agenda. Does anyone have anything they want to add, or feel violently opposed to and want to remove? Great.

The final item is the approval of the minutes from RIPE 73. I did get one piece of feedback on those that I tried to follow up on, but unless there are any other comments about the minutes I think we will consider them officially approved now. Thank you everyone. Great.

So, let's move on to item B on the agenda; I have a separate slide for that, I think.

So, on the web pages that the RIPE NCC manages for all the Working Groups, they maintain a set of action items that are placed on people in the community for the DNS Working Group. I looked at it and it has items going back to before we had the Internet, it seems, so I pulled out the ones which are still open, and there are only a couple, which is good. I thought we should, in the interests of spring cleaning, go through them and clear out the list and get that done with. The first one, which is from RIPE 51 (we are at RIPE 74 now, so you can do the maths), is on Peter Koch. Are you in the room somewhere, Peter? No. I talked to him earlier; my own fault. The idea was to update one of our RIPE documents. Since he is not here to speak to it: I did talk to him about it, and he is going to be talking with Carsten; they need to find some time to sit together and work on this item. It has not been dropped or forgotten, so hopefully next time we will see some progress or possibly be able to close it. I expect that, in the usual manner, the proposal for changes will get sent to the list and we can discuss it there.

The other action item which is still open is much more recent, from RIPE 57. This is on the RIPE NCC, but it has Anand's name tagged on it, and I think we can just declare this one overtaken by events, because ISC has either stopped supporting DLV or is very close to stopping supporting DLV.

ANAND BUDDHDEV: Hi, this is Anand Buddhdev from the RIPE NCC. What you are saying, Shane, is true. The RIPE NCC has actually withdrawn all Trust Anchors from the DLV; we did that, I think, about two, maybe even three years ago, I don't remember exactly when. Soon after the root zone was signed, the use of the DLV was considered unnecessary, so our Trust Anchors haven't been in ISC's DLV for a long time anyway. I think this has really been overtaken, and we should close it.

SHANE KERR: Great. We will mark this one as closed. Thank you for your attention on it. And hopefully we won't get too many more actions to add to our list.

So, now, in case you don't know, there is a professional superhero whose name is Action Item; if you are in a meeting and tracking your action items, think like Action Item.

That ends the administrative section of the meeting, so I would like to invite our first presenter, which is Anand again, and he is going to be giving us the RIPE NCC update.

ANAND BUDDHDEV: Good afternoon everyone. Welcome to Budapest, I am Anand Buddhdev of the RIPE NCC. I am going to be presenting an update here this afternoon about the activities of the RIPE NCC's DNS services, what we have been doing, what we plan to do, recent events that have affected our community and things like that.

I'd like to start by talking about K-root. I am sure many of you are familiar with this, but for those who may not be: the RIPE NCC operates one of the 13 root name servers of the DNS hierarchy, K-root, and we operate it from AS 25152. We have been quite busy doing a lot of expansion of the K-root service. We started this expansion last year and have been adding lots and lots of single-server, DNS-in-a-box type K-root instances. We have added a few more since the last RIPE meeting, and the count currently stands at 47 of these single DNS-in-a-box solutions throughout the world. I would like to thank all our hosts who are hosting these K-root servers; some have been deployed with the assistance of LACNIC, so we would like to thank LACNIC for this as well. This is increasing the footprint of K-root in areas where we previously did not have a presence. Besides these 47, we also have our five core sites, three in Europe, one in Japan and one in Miami, and these are still operating stably.

One of the things that we did between the last RIPE meeting and this one is several upgrades. We have updated the operating system of all our servers to CentOS 7. This brings with it a newer kernel and, along with it, all kinds of extra and interesting tools that make it easier for us to operate the servers: filtering, better DDoS protection, better management of the services, and of course good housekeeping, because CentOS 6 will be end of life in about three years from now, and we want to make sure we are not running old systems at the end of life of CentOS 6.

At our core sites we are also about to embark on a project to upgrade the port speeds from 1 gig to 10 gig, and the reasoning behind this is, again, better DDoS protection. As you may be aware, DDoS attacks these days are getting bigger and bigger, and having a faster port helps us absorb more traffic, so the idea is that we want to be prepared in case there is a bigger DDoS attack against K-root.

Other than that, everything is running stably. We have also contributed data to the DITL project, "A Day in the Life of the Internet", where several organisations collect PCAP data of DNS queries arriving at name servers. The RIPE NCC has been contributing data to this project almost since its inception, and in April this year there was another DITL run and we contributed data there. If anyone is interested in looking at this data, you can contact DNS-OARC.

The other big DNS service that the RIPE NCC runs is what we call authoritative DNS. This is a separate anycasted DNS platform on which we run the RIPE NCC's primary zones. We also run our reverse-DNS zones there, and we provide secondary DNS services to ccTLDs and some other organisations. This is completely independent of K-root; it shares almost nothing with K-root except perhaps a couple of internal distribution systems and things like that. We have a separate AS for it, AS 197000. I'd like to highlight one incident on this DNS service. On the 16th of March we suffered an outage in the reverse-DNS space that is operated on top of this platform. There was a bug in a script, and unfortunately it published empty zonelet files. These are not complete zones but snippets of zones containing DNS delegation data that the RIRs exchange with each other, for address space that has been transferred between registries. So the RIPE NCC operates, which is the reverse zone for 193/8, but for almost ten years now there has been address space from this /8 registered in the other registries, like ARIN and APNIC, so we publish delegation information inside little zonelets that are pulled in by those registries and stitched into the zones that they operate. Unfortunately, this bug caused us to publish empty zonelets, and both ARIN and APNIC imported them; because they were properly signed with PGP and the MD5 sums matched those of the empty zone files, the delegations got removed from the zones that they operate.

This caused an outage that lasted for several hours, and it was almost a full day before things were back to normal. This zonelet exchange mechanism was designed several years ago, when the first transfers were done between the registries, and at that time it was decided that this would be a pull model: all the registries publish their zonelets, and the others use FTP or HTTP to pull these zonelets in and then stitch them into their zones. This pull-based mechanism has the disadvantage that it takes longer to correct any errors, because we have to publish them and wait for the other RIRs to pick them up, and then their provisioning takes a while before it has reconstructed the zones and published them, and the DNS TTLs also kick in, so errors are difficult to correct quickly.

So, again, our apologies to all the people we caused inconvenience to. The result of this outage is that we have begun discussions with the other RIRs about how we can improve this process, and we are discussing various ways in which it could be made better and faster. We hope that soon we will be able to make the whole zonelet exchange mechanism more realtime, which also has the side effect that delegations can be published sooner after they are entered in the registry databases.
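As an illustration of the pull model described here (the data model, record format and function names are hypothetical, not the RIRs' actual scripts), a consuming registry would verify the published MD5 sum of a pulled zonelet before stitching its delegations into the parent zone:

```python
import hashlib


def verify_zonelet(zonelet: bytes, published_md5: str) -> bool:
    """Accept a pulled zonelet only if its MD5 matches the published sum."""
    return hashlib.md5(zonelet).hexdigest() == published_md5


def stitch(parent_zone: list, zonelet: bytes, published_md5: str) -> list:
    """Stitch verified delegation records into the parent zone.
    Hypothetical data model: one record per line of text."""
    if not verify_zonelet(zonelet, published_md5):
        # keep the previously stitched zonelet rather than import garbage
        raise ValueError("checksum mismatch")
    return parent_zone + [l for l in zonelet.decode().splitlines() if l.strip()]
```

Note that this check alone would not have prevented the outage Anand describes: an empty zonelet that was genuinely published passes checksum verification, so a plausibility check (for example, refusing a sudden drop to zero delegations) would also be needed.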

The next thing I'd like to talk about is something called Zonemaster: what is Zonemaster, and why. At the RIPE NCC we do pre-delegation checks on any delegations that are submitted into the RIPE database as domain objects. The idea is to make sure that, before we publish NS records in a zone, the name servers are actually answering for that zone and functioning correctly over UDP and TCP and things like that. We have been using software called DNSCheck, written by IIS in Sweden, but DNSCheck has been abandoned; there is no development going on, there are bugs in it, and it doesn't support things like newer DNSSEC algorithms.

In the meantime, both IIS and AFNIC, which is the .fr registry, have gotten together and developed new software called Zonemaster. Zonemaster was designed with a proper test specification; the original idea was to use it for compliance testing of the new gTLDs with IANA, but it's a generic engine that you can use anywhere, and several other organisations are making use of it. In some ways it is similar to DNSCheck, which makes it easier for us to migrate from DNSCheck to Zonemaster. It has a nice modular design, which means that if you want to scale up the checking and the processing, you can just add more virtual machines or more database back ends and scale up that way. The previous software, DNSCheck, had no modularity, so you couldn't scale it easily.

There is also a bigger team of developers, at both AFNIC and IIS, and they have been very helpful: they have been taking a lot of the feedback that I have been providing, fixing bugs, helping refine the design, and taking feedback on scaling and modularity to make the software better.

Our original plan was to go live with Zonemaster earlier this year, but we found some bugs and issues with Zonemaster that were showstoppers from our point of view; we couldn't use it in its current form, so I have been working with the development team to have these bugs fixed. I am pleased to say that release 1.89 of the Whois software now has support for Zonemaster, and this version has been deployed to the release candidate test environment of the RIPE database. This happened last week; those of you who follow the database mailing list will have seen the announcement from Tim. After this meeting, after RIPE 74, our plan is to put this into production. This means that any domain object submitted into the RIPE database will be checked by Zonemaster instead of DNSCheck.

What does this mean for end users? Hopefully, nothing. You should be able to continue submitting your domain objects, and they will be checked and accepted if there are no problems. If there are issues, Zonemaster will emit errors and warnings, and these will be reported back to the user in pretty much the same way, through e-mail or the web interface. However, the error messages that Zonemaster emits are a bit different, because its tests have been improved and refined, so if you are familiar with some of the errors from DNSCheck, these messages will look similar but not quite the same. That is about the only difference for end users. And if folks want to try out the RIPE NCC's instance of Zonemaster, they should already be able to do so at

One of the other services we operate on top of our authoritative platform is secondary DNS service for ccTLDs; we have been doing this for a long time. However, for a long time we had no guidelines or criteria for how to actually offer this service: who qualifies, who doesn't, and so on. So, with the help of a small committee of people from the DNS Working Group community, document RIPE 663 was created and published, and this defines the criteria under which a ccTLD can receive service. The three main criteria are zone size (number of delegations, is what I mean here), name server diversity, and whether the ccTLD is already making use of a commercial DNS service from a third party.

We used the criteria in RIPE 663 to start the evaluation of all the ccTLDs; that began last year, just after the RIPE meeting in May 2016. We found that of the ccTLDs we were supporting, 25 had large zones, meaning more than 10,000 delegations, so they essentially did not qualify under the first criterion. We contacted all these ccTLDs and gave them a one-year grace period to find alternatives and move the service away from us. The current status is that 17 of them have now withdrawn the service, so they are fully gone, and eight are still pending; we expect that by the 1st of July this year they will also have withdrawn the service. Once the grace period is over, they will not be using our service any more.

We then moved on to the remaining ccTLDs on the list, and there were 41 of them. We sent them a questionnaire to evaluate them: whether they had commercial contracts or not, and what kind of name servers they had and where they were, to figure out the diversity of their name servers. From the responses we have received, we determined that 13 of them are ineligible, because they either have commercial contracts or already have a large number of name servers elsewhere. We contacted them and asked them to withdraw the service; some have already done so, some are still in progress, and they should all have stopped using the service by the 1st of July this year.

We also found that 23 of them are still eligible, because they have small zones, they don't have enough diversity, and they have no other commercial service, so we will retain these 23. Five of them have not responded to our questionnaire. We continue to chase them, trying all available contacts and means to reach them and get responses out of them. This means we will need to continue the service for a while longer, but we want to be sure that we have exhausted all possible avenues before we declare them completely unreachable.

And finally, those of you who were present in the RIPE NCC Services Working Group session yesterday will have seen Kaveh's presentation, in which he talked about the RIPE NCC's DNS services, what we have been doing, and what is planned for the future. This has also been published as a RIPE Labs article, the URL of which is on this slide; you can click on it, or just go to RIPE Labs and look for it, and read more about our plans for the future of DNS services at the RIPE NCC. That brings me to the end of my presentation. Thank you for listening, and please ask any questions or give me any comments if you have any. Thank you.

AUDIENCE SPEAKER: Thank you. On behalf of the AFNIC and IIS teams, thank you for adopting this tool, and we invite all the community to bring any comments to improve it. Thank you very much. Second, I was a little bit surprised by one of your slides: better resilience from having a bigger port. That is true, but if you are the victim of a reflection attack it might hurt others also, so in itself it might not be sufficient to be more resilient to DDoS attacks. So just a comment; I know you know it too.

ANAND BUDDHDEV: The idea behind 10 gig is that we have bigger ports to absorb traffic. We want legitimate traffic to be able to get to us, so we can answer it, and then, with the upgrades to our software and hardware, we can filter out damaging traffic and hopefully keep answering the legitimate queries. So just more capacity, essentially. Yes.

AUDIENCE SPEAKER: ‑‑ from Netnod. In a previous life I was in the business of serving ccTLDs from a large name server that was old and had all kinds of interesting things on it. Going through the same exercise of trying to get them to stand on their own legs, I ran into the exact same problem of not having any response from one of them at all; I was more lucky, it was only one of them. I tried a lot of the tricks in the book, but eventually ended up in the situation of: I don't know what to do. So what is your intended way forward? I am just curious here, for my own information: what is your intended way forward if you continue to not receive responses from these five?

ANAND BUDDHDEV: To be honest, I don't know what we want to do, and I think it is a decision that I wouldn't be making anyway, but perhaps Kaveh or Romeo would like to offer a comment on that. In my opinion we should try to keep the service running, because there are users affected, but we will have to make a decision at some point, and I don't know what that is yet. I see Romeo; perhaps he has ‑‑

ROMEO: RIPE NCC. I can try to reword Anand's words. They will not be as eloquently packaged, but the answer will not be much different. We haven't decided what the exit strategy actually is. We might return here for guidance. We hope to not actually come to that point, and we are still trying to reach the parties involved.

AUDIENCE SPEAKER: We ended up continuing to serve the zone, I offer that as one possible solution.

AUDIENCE SPEAKER: That is one possible solution that we have in mind, yes.

SHANE KERR: This is Shane. So the RIPE document doesn't offer guidance in this case, if there is no response?

ANAND BUDDHDEV: It doesn't, no. Yes.

SHANE KERR: Fair enough. I have a personal quick question about Zonemaster. Just to be clear, Zonemaster runs when a delegation is made only?

ANAND BUDDHDEV: Zonemaster runs, or is invoked, on any update of any domain object. So if you create a domain object, or you update it with name servers or DS records; actually, it will be invoked even if you just change the contacts. Any update to a domain object will trigger a Zonemaster check, and if it fails, the update will be rejected.

SHANE KERR: Okay. So there is some small chance that people will be surprised by the new system?

ANAND BUDDHDEV: I don't think they would be surprised because this is already what is happening. DNSCheck is currently invoked at any update, including just contact updates, and if the test fails then the update will fail. So in that way it will not be any different.

SHANE KERR: Great. One final thing: I am very happy you are no longer running my zonelet code.

AUDIENCE SPEAKER: Quick comment on that.

SHANE KERR: Who are you?

NIALL O'REILLY: For this purpose, no affiliation. It seems to me that it may be advantageous to scope the triggering of the zone checking better, because changing a phone number at an awkward moment, when there is by coincidence some problem with the zone, may make debugging the zone problem more difficult.

ANAND BUDDHDEV: That is true. It would mean more code changes on various sides. I would note that no one has, so far, said to us that our DNS checks when updating contacts or phone numbers have inconvenienced them or caused them issues, so in the interests of simplicity I would suggest keeping things as they are; but if there is feedback from people saying no, this has affected us more widely, then we are happy to take that feedback as well.

NIALL O'REILLY: Some people in the room may not understand this expression, but I think that those you are serving are customers, even if you have lost the payment contacts for them, and you should probably heed them more than the hurler on the ditch.

SHANE KERR: It seems like we are done with our questions and comments so thank you, Anand.


SHANE KERR: Our next talk is by Santiago Ruano Rincón: a software-based approach to flood generation, and to the reproduction and detection of flood attacks.

SANTIAGO RUANO RINCÓN: Hi, I would like to present this work, which has been done in collaboration with IRISA and AFNIC, and I would like to start by saying two things. First, thanks to RACI, because it's great to have the chance to present here; thanks to Greghana and all the people involved. Second, I will be very happy to have feedback from you, because this is research work, and a RIPE meeting is a great opportunity to meet operators, and I need feedback.

So, if I were to summarise my work, I would use some key words. We are interested, in a general research context, in studying software approaches and data-streaming algorithms to analyse high-speed network traffic online, especially to counter, or at least identify, flooding attacks, and for future work I keep in mind to also take distributed data sources into account. Why? Because, to make this work useful, which is what I am presenting here, I would like to help improve the resilience of the DNS. You know better than me that the DNS infrastructure suffers a lot from DDoS and flooding attacks; there was the attack against Dyn, but I can give another example, the random-QNAME attack on French servers on 4 September 2014, where random names under a domain were used to flood the French servers. So the idea of this work is to create a software-based test-bed with different elements that will help to study and propose strategies to counter these kinds of attacks. With my prototypes right now I am proposing something that will listen on the same incoming traffic as the DNS servers.

A test-bed needs different components; for the moment I have been focusing on two of them. The first one is reproducing attacks, because to be able to analyse attacks I need to reproduce them: it's very difficult to get access to data sets, and even when I have access I cannot copy them locally and replay them. So the first goal was to develop a traffic generator able to saturate 10 gigabit Ethernet. After that, we have been developing some prototypes that are able to analyse the traffic and to identify, for example, heavy hitters. Why am I focusing on a software approach? Because we want flexible tools, able to evolve over time. This means that we need to rely on commodity hardware and software. Currently in the lab we have different machines, acquired thanks to the support of the CNRS, the French national scientific research centre. In particular we have three machines called Curly, Moe and Larry; we can play with them to create a scenario where an attacker floods a server, and we have a machine where we run the scripts. These are Dell workstations, dual socket, with different amounts of RAM; they run Linux and have various 10 and 40 gigabit interfaces.

Depending on software means relying on the operating system. As you may know, the standard Linux network API is relatively slow; this is a challenge that was identified some time ago, and different frameworks have been proposed to work around the kernel. I list here some of the frameworks we have studied. DPDK is very well known now; others are very interesting and come from very nice people in academia, but DPDK has the advantage of having stable releases and very strong support from industry and the community.

So we decided to continue working on top of DPDK, and the first goal was to generate traffic saturating different network interfaces, of course. To take a look at the challenge we face, we can look at an existing tool for reproducing reflection attacks, called SOP; it relies on the standard Linux network API (NAPI), and on a single core it can produce more or less half a million packets per second. Of course this amount increases when you run multiple processes.

Since we need to work around the Linux kernel, we need to rely on DPDK, and at the same time we found a very interesting tool called MoonGen, by Paul Emmerich from the Technical University of Munich, which provides an interface so we can write Libmoon Lua scripts to control packet generation; this has made the work very easy. To reproduce DNS flooding attacks we need to vary different fields in the packets, and the first idea was at least to reproduce random-QNAME and reflection/amplification attacks. So we developed a tool called gGALOP, because it generates a lot of packets, built on top of MoonGen and DPDK; for a very detailed description you can read this paper.

gGALOP is a 3,000-line Lua script and can saturate a 10 gigabit Ethernet link with a single core. It takes advantage of different characteristics of DPDK, such as batch processing: it allocates in memory a bunch of packets whose content is filled in by callback functions and sent directly to the network interface.
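gGALOP itself is a Lua script driving MoonGen, but the per-packet payload a random-QNAME flood varies can be sketched in a language-neutral way. The snippet below (illustrative only, not part of the tool; the zone name and label length are arbitrary choices) builds one DNS query in wire format with a fresh random label under the target zone:

```python
import random
import string
import struct


def random_qname_query(zone: str = "example.fr") -> bytes:
    """Build one DNS query in wire format with a random label under the
    target zone: the field a random-QNAME flood changes per packet."""
    header = struct.pack("!HHHHHH",
                         random.getrandbits(16),  # transaction ID
                         0x0100,                  # flags: RD bit set
                         1, 0, 0, 0)              # one question, no other RRs
    label = "".join(random.choices(string.ascii_lowercase, k=12))
    qname = b""
    for part in [label] + zone.split("."):
        qname += bytes([len(part)]) + part.encode()
    qname += b"\x00"                              # root label terminator
    question = qname + struct.pack("!HH", 1, 1)   # QTYPE=A, QCLASS=IN
    return header + question
```

In the real generator this construction happens in a callback over a pre-allocated batch of packet buffers, so only the random bytes are rewritten per packet rather than rebuilding the whole datagram.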

To see the CPU requirements of our tool, we compared it with SOP and with a sample script that comes with MoonGen, measuring packets per second in tens of millions at different CPU frequencies. SOP achieved a maximum of more than half a million packets per second at 2.2 gigahertz; the MoonGen sample script, as expected, saturated the link even at the lowest frequency; and our tool produced fully random packets at 1.8 gigahertz.

So, we concluded that this combination of DPDK plus MoonGen plus Lua is able to saturate the interface very easily, and it is very nice that it scales: we have to reserve one core for control, but we can use the other three cores to each saturate one 10 gigabit port at the same time, so it's quite easy to produce 30 gigabits per second.

Then we would like to play a little bit and see how DNS servers behave while they are being flooded. The problem is that we don't have a 10 gigabit Ethernet switch yet, so we made use of the dual-port interface in the target server: on one port the generator, and on the other port a machine producing legitimate requests. We tried PowerDNS and BIND; gGALOP produced 11 million packets per second while SOP produced 600,000 packets per second, and both servers were serving a 3-million-record zone. I was a little bit disappointed by the results, because with both PowerDNS and BIND, SOP had a stronger impact than our generator. Why? Because of the 100 million queries sent with both tools, most were lost between the hardware interface and the kernel, so BIND and PowerDNS were able to continue answering because they were actually not busy; the operating system dropped most of the packets.

So I asked myself the question, and I am not an operator, but I wonder whether a machine serving on different ports would be more resilient to DDoS than one listening on just a single interface. I also wonder whether lower-rate attacks can be more successful than flooding at 10 gigabits per second, as in this case.

We did all this because we wanted to analyse and identify the attacks, so the next step was to capture the traffic and analyse it online. As a first step we still rely on MoonGen, or to be more precise Libmoon, which is the library below it. As you may know, to identify heavy hitters we need to count the frequency of different elements in the traffic. The problem is that the key space is quite huge, for IPv4 and even more for IPv6, and in the case of the DNS, for the frequency of domain names we need to look into the payload, whose content can be random and of varying length.

So we need to make use of different statistical tools to estimate the most frequent elements in the data stream, especially the count-min sketch, and for example Misra & Gries. I would also like to cite here work that has been done by others and presented at DNS-OARC, where they proposed to calculate the entropy of the DNS queries, which is useful for classifying the traffic.
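The entropy signal mentioned here can be sketched simply: compute the Shannon entropy of the distribution of queried names over a window. This is an illustrative sketch, not the OARC authors' code; the intuition is that normal traffic concentrated on a few popular names yields low entropy, while a random-QNAME flood pushes it toward the maximum of log2 of the number of queries.

```python
import math
from collections import Counter


def name_entropy(qnames: list) -> float:
    """Shannon entropy (in bits) of the distribution of queried names.
    Low values: a few hot names dominate. High values: names are
    near-unique, as in a random-QNAME flood."""
    counts = Counter(qnames)
    total = len(qnames)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A detector would compare this value against a baseline per time window; the threshold itself has to be tuned per zone, since legitimate query mixes differ.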

Just to give an idea of what a count-min sketch is: it's a structure that makes it possible to count elements of a data stream while controlling the size of the table. So, with a fixed size, we can estimate the most frequent elements, according to an error bound and an estimation factor that we control.

With our first test we can analyse 11 million requests per second using four cores, losing a small number of packets. This is a simple algorithm; to show you how we process the packets and count the most frequent domains in the queries: what we actually do is read the queries, get the most interesting part, hash it through different hash functions, and that is it.
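The read-hash-count loop described here is exactly the count-min update. As a rough illustration (the width, depth and hashing scheme below are my own choices, not the presenter's implementation): the sketch keeps d rows of w counters, each key is hashed into one cell per row on insert, and the estimate is the minimum over its cells, so collisions can inflate a count but never undercount it.

```python
import hashlib


class CountMinSketch:
    """Minimal count-min sketch: depth hash rows of width counters.
    Estimates never undercount; collisions only inflate them, with the
    error bounded by the table width. Parameters are illustrative."""

    def __init__(self, width: int = 1024, depth: int = 4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, key: str):
        # One independent hash per row, derived by salting blake2b.
        for row in range(self.depth):
            h = hashlib.blake2b(key.encode(), digest_size=8,
                                salt=row.to_bytes(16, "little")).digest()
            yield row, int.from_bytes(h, "big") % self.width

    def add(self, key: str, count: int = 1):
        for row, col in self._cells(key):
            self.table[row][col] += count

    def estimate(self, key: str) -> int:
        return min(self.table[row][col] for row, col in self._cells(key))
```

To report heavy hitters, the sketch is typically paired with a small candidate list of recently seen names whose estimates are queried; memory stays fixed no matter how many distinct QNAMEs the flood generates.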

It would be better to show you a demo of this, but because of the interface switching I prefer to do it later; this is just a copy-paste of the results. We received 120 million requests, and we can count most of the packets with a small amount of loss.

For me it's important to talk about some ethical concerns in this analysis, because we have access to the payload. It's interesting to think about what will happen when DNS traffic is encrypted between the clients and the servers; I hope this will happen one day, and for this kind of analysis I think it will be very challenging and interesting. For the prototypes we are not logging the results, and I am also trying to avoid any linking between the source IPs and the queries; it would be interesting to know what else is important in this sense.

I would like to thank the CNRS, which has funded my research, and DNS‑OARC, because we are now able to take a look at the data sets they have, which are very interesting. Again, thanks to RACI and also to the Libmoon and MoonGen authors, who have made my work much easier. And thank you for your attention; I am happy to hear your feedback.

ONDREJ SURY: CZ.NIC. First question, is your tool available somewhere?

SANTIAGO RUANO RINCÓN: So, there are different tools. First of all, this will be free software; the problem is that for the generator, legal ‑‑ the French regulations make it impossible to share it freely, so I need to find a way to get it to other people interested in running it.

ONDREJ SURY: Okay. Well, you can reach me, because I am interested in it anyway. We spent a lot of time benchmarking DNS servers and we have a 10 gig switch, so if you need access to infrastructure we will be happy to help you. And that is my other question: you said that some of the packets were lost in the kernel. Was it on the generator side or on the receiving side? Because they can differ.

SANTIAGO RUANO RINCÓN: It was ‑‑ I am afraid I am not able to answer the question precisely ‑‑ it was on the receiving side.

ONDREJ SURY: Yes, but sometimes, that is our experience, sometimes the packets are lost even before they reach the....

SANTIAGO RUANO RINCÓN: No, the packets were received by the interface.

ONDREJ SURY: So you saw it in the counters?


ONDREJ SURY: Please, contact me, I will be happy to collaborate on that because we already have infrastructure for benchmarking DNS servers and we would be happy to help you and help us in a way.

PIETER LEXIS: Thank you for the information. One thing, as I was not here for the slide: was the last packet loss, for the PowerDNS part, also still in the kernel, or was it simply not responding?

SANTIAGO RUANO RINCÓN: It was not responding.

PIETER LEXIS: Right, can we take this off‑line and have a look at your data and see what is going on?


SHANE KERR: I also have a question, maybe I am being ignorant but the DPDK, the Intel framework, is that specific for Intel hardware or is it an open ‑‑

SANTIAGO RUANO RINCÓN: I have only tried it with Intel hardware, but you can use hardware from other manufacturers.

SHANE KERR: So other chip manufacturers have a compatible version?


SHANE KERR: And I think your observation about encrypted DNS making analysis more difficult is very insightful. I think it's going to be an increasing problem in the future. Yes. All right. Any other questions or comments? Well, thank you very much, I found it very interesting.

SANTIAGO RUANO RINCÓN: If I have some time I would like to show the demo.

SHANE KERR: We actually do have time, let's see what happens. And it's a live demo so what could possibly go wrong.

SANTIAGO RUANO RINCÓN: We are at a RIPE meeting and I was having some network problems; I hope it will work. I suppose a live demo is more risky, so more interesting.

SHANE KERR: You are not about to DDOS our wi‑fi are you?

SANTIAGO RUANO RINCÓN: No. So, I am connected to both machines here: the one on top, which runs the generator, and moe at the bottom, which will run gGALOP and a proof of concept script that will count the number of ‑‑ will count the number of domains per packet or not ‑‑ my English is terrible, sorry. It will count the number of packets per domain. First I will run the ‑‑ yes. So it will report every 20 seconds the number of packets per domain, and here gGALOP is producing 11 million packets per second. I configured it to produce 120 million packets. And now we shall wait for the answer; I had configured it for 20 seconds. So it's not visible here, but the last line shows the number of packets received by the interface ‑‑ the last line is the statistic from the hardware ‑‑ and we are able to count here, in this case, every single packet, and I received some 30 million packets for the ‑‑ and that is it. Thank you.

SHANE KERR: Great. Thank you. So for our last presentation before the coffee break, we have Vesna talking to us about the RIPE NCC DNS hackathon.

VESNA MANOJLOVIC: Hi everyone. I am from the RIPE NCC, I am a community builder. How many of you are Hungarian? Okay. So, as it is customary to say some greetings in the local language, I have learned some Hungarian for this occasion.

(Speaking Hungarian)

For the rest of you, I have been told that that means my hover craft is full of eels. And now for something completely different.

So, we had a hackathon. This time it was not before the RIPE meeting; we decided to hold it outside the RIPE meetings this year, because otherwise the 10‑day event just becomes too long. So it was in Amsterdam and it was of course powered by Stroopwafels, and this time we had an inflatable one, just to make things even more interesting.

So why do we do this? Well, this was actually already the fifth hackathon, and in the meantime we had a mini hackathon, so the sixth ‑‑ oh yes, it gets complicated. As part of our community building efforts we decided that by bringing together a very diverse group of people, such as network operators, researchers, software developers, designers, students and people from all around the world, and putting them in little groups where they can brainstorm their ideas, they would come up with interesting prototypes and encourage each other towards new crazy ideas. And the RIPE NCC would also be able to show our data sets and our own tools, maybe use the results of this work, and get a lot of feedback on our tools and our data.

So, this time we wanted to focus on DNS operations, but still most of the teams in the hackathon used RIPE Atlas data. Another goal was to have fun, and I hope you will see from the photos that we actually had a lot of fun. There was a lot of brainstorming and many flip charts were used; we all sat in one room around all these tables and really cooperated intensively, with a lot of Stroopwafels, and we also visited a local hacker space, as is tradition.

All the results are published on GitHub so you can download the code, and we published a RIPE Labs article. Here is the summary: we had about 40 people, and this time we also didn't do it at the weekend but during working days, so we are always experimenting with the set‑up to see what works better. We had three sponsors, and I want to thank them very much for their financial support; thanks to that we could rent a very hipster venue and we could pay travel funding for some participants who either come from countries where they cannot really afford to fly to Amsterdam or who are already contributing to open source projects, and in this way we could support their work. So that was DENIC, Afilias and Farsight Security, thank you very much.

And another tradition is to take photos of interesting stickers on the laptops of the participants, so you might be able to recognise either your own laptop, because we had a lot of participants who are actually usual suspects from the DNS Working Group, and/or you can recognise your colleagues' laptops.

And these are the actual results, so this is what people ended up working on: monitoring DNS propagation times; then two teams working on DNS censorship from two different approaches, either testing presumably hijacked resolvers or fingerprinting resolvers to see how the regular ones differ from the ones that show suspicious behaviour. Then we also had one team that did not use RIPE Atlas; that was the team focused on the RIR data about Reverse‑DNS, doing statistical analysis on it. We had a team working on caching resolvers, there was somebody who analysed the passive DNS aspects of the RIPE Atlas data, anomaly detection, and streaming of RIPE Atlas results and including them in existing monitoring tools such as Telegraf.

And we try not to make this hackathon a competitive event, but still we had to somehow give feedback to the teams and judge who did a better job than the others. The jury had a very hard time, so I also want to thank the jury, which was Jim Reid, Desiree and Jaap; they had the very hard job of deciding who gets which packet of Stroopwafels, and the other reward was for the team to present their work at the DNS Working Group. But since there was no really clear winner, and the team that was almost the best winner couldn't be here today, they presented their work at the RACI session instead, because that team was mostly academic participants. So these were the three top receivers of various Stroopwafels. The team called platypus, or anomaliser, took an existing scientific paper and made it really applicable to the RIPE Atlas measurements; they didn't create any new measurements, so we were happy from the RIPE NCC side, and they were a really large team and produced many results through their intensive team work. So they got, let me show you, this large white box which is lying there at the bottom, on the floor. Then the collection of these six boxes went to the two teams that were working on DNS censorship, because they collaborated very closely with each other while taking completely different approaches, and then they compared the results and helped each other, so the jury thought that was worthy of, let's say, an honorary mention and a lot of Stroopwafels. And finally a one‑man team who finished his software on time, using this passive DNS, received this very unusual red and black kind of hipster package of traditional cookies. So, those were the results of our hackathon, and of course we made T‑shirts, so this is what you get if you join one of our hackathons.

So, yeah, come join us next time. We still don't know the date or the location of the next one this year, but it's going to happen sometime in the autumn, so watch this space on RIPE Labs, where we publish everything about the hackathons. And of course we don't only want participants, we also want other kinds of support for these events, so you can be a host or a sponsor. On the other hand, a very important part of these hackathons is the continuation of the work in the future, so if you like one of these projects and you would like to get involved in continuing that work, we can host a small Code Sprint event either at your location or in our new offices in Amsterdam. It would be great not to stop with the work that was happening at the hackathon, but to improve on it, to take it from a prototype to actual production, and to make it even more useful for the community. And likewise, if you want to use the tools that were produced, please get them from GitHub, improve them and give us feedback on how you like them. So that is it, do you have any questions?

SHANE KERR: I have a question or two. So you are inviting people to participate in upcoming hackathons. This was the first DNS‑focused one, right? Do you anticipate doing that again going forward, what do you think?


SHANE KERR: You hate DNS, right?

VESNA MANOJLOVIC: We could have another DNS‑themed one, unless we come up with a better topic.

SHANE KERR: What could be better than DNS?

VESNA MANOJLOVIC: So if you invite us and if there is more interest, of course we can do that. If that doesn't happen, then we are thinking of maybe an IPv6‑themed one, but there can be a combination of v6 and DNS, and we will probably always be using RIPE Atlas data one way or the other. And the other possibility is routing, or maybe DNS tools; this one was about DNS measurements, but if you would like to organise something to actually rewrite certain tools, and you don't have time and want to do it in an intensive fashion, then that could be a good topic. So, we are open to suggestions and to cooperation, so talk to me today and tomorrow; there is a dinner coming up tonight, so a lot of opportunities.

SHANE KERR: I participated in this hackathon, and it's not my first one; they are a lot of fun and you get to meet a different set of people than at a meeting like this. Not to say you are not all wonderful, but hackers are different; the coding culture is slightly different, with more students and things like that. It's really nice. If you are interested at all maybe you can go, and/or maybe send some people on your staff who do not otherwise get out of the dark cave.

VESNA MANOJLOVIC: I see a lot of familiar faces, so can you raise your hand if you have been at this or one of the previous hackathons from the RIPE NCC? That is really nice to see. Thanks. Good to see it.

JIM REID: I was one of the jurors at the hackathon. One thing which I think you glossed over, and everybody needs to be reminded of, is that there was a very, very collaborative and positive atmosphere at the hackathon itself, but also that a large number of the NCC staff who were there were very, very helpful to all of the teams, advising on things like how to use the APIs for the RIPE Atlas probes and getting data out of them, or from the web‑based interfaces, and that was very helpful to all of the participants. I think all the NCC staff also need to be thanked and recognised for their help.

VESNA MANOJLOVIC: Thank you for saying that.

SHANE KERR: Jim did a great job also in talking to everyone involved and making everyone feel like they were getting good advice and input.

VESNA MANOJLOVIC: Another thing Jim keeps reminding me of: although it's not officially announced as part of the hackathon, we at the RIPE NCC realise how useful this is and we are ready to extend support to the people who would like to continue with one of these projects but need some help. We cannot make it more concrete than that, but what we did previously is that one of the winners would get a paid trip to our offices and work with us on the continuation of their project, or we can think of different ways of supporting you; at the minimum, if you need RIPE Atlas credits, of course you can always get those. So talk to us if you think that we can cooperate in one way or the other, and we are ready to support this work further.

SHANE KERR: Great. Thank you, Vesna.

That brings us to the end of our agenda for this first slot. We will be back in here after the coffee break; I look forward to seeing you all then. I am checking my notes to make sure I didn't forget anything. The RIPE PC voting is still open, so if you haven't yet voted, please go to the page, log in with your RIPE Access account and vote. And if there is nothing else, we will see you after the break.

(Coffee break)