11 May 2017
ONDREJ FILIP: Hello. Welcome everybody. We are the co‑chairs of the Open Source Working Group. I'm really glad to see that we have an almost full room. That's great.
Let me start with agenda. So, first of all some small administrative stuff before we introduce you to the agenda. The minutes from the previous meeting were published. We haven't received any comments so far. Are there any comments now or can we take the minutes as approved? Excellent. Thank you very much. Minutes are approved.
I don't think we have any review of action points, so there is no change and no progress there. And the last thing: if you want to speak, please go up to the microphone and don't forget to say your name and affiliation.
And one more thing we tend to forget: we have not just the Chairs of this session, we also have a chat monitor, who is Emile, and the minutes will be taken by Fergal. Thank you for that.
Let me introduce the agenda. We have four featured presentations. Are there any additions to the agenda, or are you happy with the agenda as introduced? I don't see any comments, no hands. So then I pass the mic to Martin, who will introduce the first speaker.
MARTIN WINTER: So, one addition I just wanted to make sure people are aware of. At the next RIPE meeting there will be a selection again for Working Group Chair. If you want to become a Working Group Chair: if you look at our charter and the rules, the process falls about two months before the meeting, so the selection will be starting then. We will send a reminder out on the mailing list, but if you are considering becoming a Chair, to help us out or replace one of us, feel free to start thinking about it, and you can even start contacting us ahead of time.
So, the first talk we have. We have like Tomek coming up on the Kea DHCP server, and he will talk a little bit about the new cool things he does there.
TOMEK MRUGALSKI: So, hello everyone. I work for ISC, I also happen to be an engineer working on Kea. So I'd like to talk a little bit today about Kea itself, about the project and talk about the issues that we are facing funding Open Source.
What is Kea and why could you possibly use it? If you have never heard about Kea, it's a DHCPv6 and DHCPv4 server with several accompanying daemons, like a DNS update daemon and a control daemon. It's a modern solution: it started in 2011 and we had the 1.0 version in late 2015. It's quite performant, so it can handle many thousands of leases per second. It's scalable: if your network is big, a couple of million devices is something that a single instance of Kea could handle.
It's nice in the sense that configuration changes don't require a server restart. It supports databases; I will go through more details on this a bit later.
It also supports hooks, which is a mechanism to extend the functionality with additional libraries.
It recently got a REST management API. It runs on all the popular systems. Of course it's Open Source, and we are past the 1.0 release.
I presume most of you are familiar with the old DHCP implementation, ISC DHCP. So let's compare them briefly.
The old one started in a different era, over 20 years ago, and Kea is much more recent. The dates by themselves don't mean much, but it means that the old code is basically entering maintenance-only mode. It's difficult to add significant new features; there will be some development, but it won't be major. As opposed to Kea, which is getting large new functionality in every release.
Kea is also more public, in the sense that the repositories are on GitHub and the bug database is public.
And it's also much more modern in the way it's being developed. We have tonnes of tests, and the documentation is much better. The logging system is much more readable, in the sense that for every log message the code could print, there's at least a paragraph of description that explains what exactly it means and whether it is a concern for you or whether it's okay.
So, none of these things on its own would convince you to migrate to Kea, but there are other things that, in my opinion, are convincing enough to migrate. Performance is definitely one of them.
Another one is management. The old implementation has an OMAPI interface, which is something we're not very proud of: it's complicated to use, very limited, and hard to extend. There is only one client, so if you want to use OMAPI you have to use that client. As opposed to Kea, which exposes a REST interface over HTTP to which you send JSON commands. So this is something you can do from almost any environment. This is, in my opinion, a big advantage of Kea.
Also, the set of commands available to you is much more robust and allows you to do much more.
Also, Kea is much more extensible. We have hooks. In the old implementation, you could extract some information and run a script, but in Kea you can influence the actual logic of the server. For example, if you want to influence the lease selection or subnet selection, or insert some extra options, or remove options, or do whatever you want, it's possible.
Also, the configuration is stored in JSON format, so it's much easier if you want to generate it or export the information about your network from your own database. It's much easier to work with JSON than with the custom format the old implementation is using. And speaking of lease information, we use standard formats. So depending on how you want to manage your server, you can store the information in a flat file, in CSV, or you can go with databases.
The same is true for host reservation.
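As a small illustration of the generation point above, here is a sketch of producing a minimal Kea DHCPv4 configuration from your own inventory data. The interface name, subnets and pools are made-up values, and only a tiny subset of the configuration is shown; check the Kea documentation for the full set of parameters.

```python
import json

# Hypothetical inventory rows exported from your own database.
subnets = [
    {"net": "192.0.2.0/24", "pool": "192.0.2.10 - 192.0.2.200"},
    {"net": "198.51.100.0/24", "pool": "198.51.100.10 - 198.51.100.200"},
]

# Build a minimal Kea DHCPv4 configuration from the inventory.
config = {
    "Dhcp4": {
        "interfaces-config": {"interfaces": ["eth0"]},
        "lease-database": {"type": "memfile"},  # flat CSV lease file
        "subnet4": [
            {"subnet": s["net"], "pools": [{"pool": s["pool"]}]}
            for s in subnets
        ],
    }
}

print(json.dumps(config, indent=2))
```

Because the configuration is plain JSON, exporting it from whatever system already holds your network data is a loop and a `json.dumps`, rather than a custom-format printer.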
So, to recap the information about the databases: we have support for four database backends. It could be a flat file, MySQL, PostgreSQL, or, if you want to go a little bit exotic, we also have support for Cassandra. It's not just a different way of storing your information; it also allows you to do things that were not possible with the old one. Kea instantly picks up everything that is in the database: while the server is running, you can modify the data on the fly. If you know a device was disconnected, you can remove its entries from the database; you can insert into the database or modify the host reservations, and Kea will pick this up instantly. Also, the deployment models are different, in the sense that, for example, you can run a single database and connect several DHCP server instances to it, and the information will be shared between them.
And of course, if you don't want to fiddle with the database, there are commands that allow you to do this over JSON.
Now a bit about the REST interface. For a long time we had a command channel which allowed sending JSON commands over a UNIX socket. Recently we added a RESTful interface: you basically push JSON commands and get JSON responses. There is a test client, but it's more an example than a very robust tool; its code is maybe 10 or 15 lines of Python, so it's actually easy to implement this in any language that is able to handle HTTP and JSON.
As for the commands, there is a bunch of commands that we already support. The interesting ones are around configuration: you can push the whole configuration, you can retrieve it, you can test it and do different things with it.
Also, there are other commands for modifying the host reservations and retrieving statistics. More commands are going to be released.
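As a rough illustration of how small such a client can be, here is a sketch in Python. The control agent URL and port are assumptions for your own setup, the `build_command` helper is mine, and you should verify the command names and wrapping against the Kea documentation for your version.

```python
import json
from urllib import request

def build_command(command, service=("dhcp4",), arguments=None):
    """Build the JSON body a Kea management command is wrapped in."""
    body = {"command": command, "service": list(service)}
    if arguments is not None:
        body["arguments"] = arguments
    return body

def kea_command(command, service=("dhcp4",),
                url="http://127.0.0.1:8000/"):  # assumed control agent address
    """POST one JSON command over HTTP and return the parsed JSON reply."""
    payload = json.dumps(build_command(command, service)).encode()
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# e.g. retrieve the running configuration from the DHCPv4 server:
#   reply = kea_command("config-get")
```

Anything that can speak HTTP and JSON can do the same, which is the point being made about the interface.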
And another cool thing about Kea is the hooks interface. This is something that enables you to extend how the server operates. Instead of going through the details, I'd just like to mention briefly that Facebook adopted Kea well over a year ago; it's almost two years now. They use the hooks interface to interact with their inventory database, and they have been running Kea without any problems for almost two years now.
So what about the road map? Right now we have released 1.2. We are working on 1.3, and 1.4 is expected to be released sometime early next year.
Okay. So, now let's go into the more interesting or controversial part. So can we fund Open Source?
So, the first question we need to ask is: why do we need money anyway? Kea is commercial-quality software. There are two people working on it full-time, with an additional two contributing occasionally. We have a proper independent validation team that is developing the tests, running the tests, and maintaining them; we are testing on 20 different systems, and we run the tests on a daily basis. So, after every commit we make, the tests are run and we can pick up any issues that appear almost instantly.
We also have proper designs. That means this is not a napkin-designs type of thing; it's not a project by one guy. So if you are interested in how a feature works, you can go to our website and there are designs. Of course there is documentation for users, but there is also documentation for developers that goes through the details of how it works internally. That's also a big benefit of Kea: lots of people are using it for doing experiments. I'm also involved in the IETF and there are different ideas floating around, and Kea is quite popular for that, because you can pick it up, look at the code for a couple of hours, and then start extending it and implementing new concepts.
Okay, so how have we been funding Kea so far? It's been in development since 2011. We had several custom development contracts. We have had two sponsors, so thanks a lot to Comcast and Mozilla. We have a very limited number of support customers. We also sporadically receive donations from individual people, for which we are very grateful. But frankly speaking, this is not sustainable, in the sense that the donations are not frequent enough to pay our bills.
So, we decided that we needed to think about different ways to improve the situation, because every year Kea was generating a loss for the company. And ISC is a very small non-profit organisation, so we don't have spare funding; that's why I was told that we really need to find some ways to improve the financial situation.
Okay, so the remainder of the presentation is just a couple of slides. I have a couple of ideas and I'd love to ask you a question: which of those would resonate well with the community? Which do you think are good ideas, and which should we not pursue? The first one is a paid Docker image. Kea's installation is moderately easy, but if you want to set it up with a database, that requires a separate installation and configuration. Maybe we could provide a Docker image that would automate the whole thing for you. It would be just an experiment; it's not likely to generate a lot of money, it's more to get people used to the concept that with Open Source, yes, the software is free, but sometimes it's okay to donate or pay some money for additional services.
Another concept we have is premium features. As I mentioned, Kea has the hooks interface, so you can provide additional libraries. Right now the Kea code is almost 500 thousand lines of code, and almost half of it is unit tests. But anyway, it provides support for hook libraries, so we can load additional libraries. We thought that maybe we could start providing such libraries ourselves, introducing extra features. Of course this is an Open Source project, so we want to make sure that lots of people who want to use it will still be able to use it; we are not going to put any critical features in the premium part. If you are a small network operator, or maybe you run a campus, that's okay, you will be able to continue using the Open Source version. But if you are a large ISP or your network is large, there are some additional benefits that you could acquire when you become a support customer.
So, this is a list of the features that we have in the form of hooks. So some of them are available in the Open Source part and some are part of the premium offering.
So we are trying to balance this out.
I'd like to briefly explain one of the libraries as an example. In many DHCP servers, including Kea, if you want to specify something for a specific host, typically you use a MAC address or circuit ID or some other identifier. But there are some occasions when you need more flexibility: there are deployments that want to have several options used together, or parts of an option, or some field together with an option. This is something you can do with the flexible identifier. For example, here you can specify that you are interested in sub-option 1 and sub-option 2 inserted by the relay, that those two are to be concatenated, and that the output of this expression is used as the identifier.
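As a sketch of what such a configuration can look like, this snippet prints a fragment in the style of Kea's flexible-identifier hook. The library path, the exact expression syntax and the reservation values are assumptions made to illustrate the idea described above; verify them against the Kea documentation for your version.

```python
import json

# Illustrative only: the library path, expression syntax and values below
# are assumptions, not copied from a verified Kea configuration.
flex_id_config = {
    "Dhcp4": {
        "hooks-libraries": [{
            "library": "/usr/lib/kea/hooks/libdhcp_flex_id.so",
            "parameters": {
                # concatenate relay sub-option 1 and sub-option 2 and use
                # the result as the host identifier
                "identifier-expression":
                    "concat(relay4[1].hex, relay4[2].hex)"
            }
        }],
        "subnet4": [{
            "subnet": "192.0.2.0/24",
            "reservations": [{
                # this host matches when the expression yields this value
                "flex-id": "'port1/circuit42'",
                "ip-address": "192.0.2.100"
            }]
        }]
    }
}
print(json.dumps(flex_id_config, indent=2))
```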
Another idea that we have for funding Kea is to provide a GUI. Now that we have the RESTful interface, we could develop a web-based interface that takes advantage of it. We are working on a simple GUI that would enable several scenarios. In our opinion, the most important parts to cover are the host reservations and the ability to manage subnets.
We are working on the code; we have an internal prototype. It's not yet ready for external consumption, but we are at the phase where we could do a demo of it. We still haven't decided how we are going to make it available: whether it will be freemium, in the sense that you could get the GUI for free but to use all of its features you would probably need the premium version of Kea; or maybe it will be the opposite way, so the tool will be paid software and the features will be free in Kea; or maybe it's going to be part of the benefits for support customers.
So, of course it will provide such basic things as statistics.
Okay. The last idea is that we could provide a service, or a capability, to migrate a configuration from ISC DHCP to Kea. We are working on a tool. We are not sure how exactly it's going to be offered, and it's currently not ready for use yet, but we are looking for people who are interested in participating in, let's say, beta testing. So if you want to share your configuration, just talk to us.
Okay. So that was my last slide. So, any thoughts, comments, suggestions?
AUDIENCE SPEAKER: Hi. I like seeing how far Kea has come along; it's been a long time, and I understand your struggles with trying to get funding, it was never easy. Of all the options, in my personal opinion, I'd go for number 4 and number 3, because one thing you can expect is that you won't get any traction unless you have a big installed base. It's a bit like banking: you only manage to break even because you get a small percentage of a very big installed base that is willing to pay for something, right? You have a little bit of that with ISC DHCP, the legacy one, but Kea basically has to build its own installed base, so migration, too, mightn't work, because people tend to be conservative and stay with what they already know. On number 3, the GUI: rather than cook up another one, why don't you have a look at integrating Kea management into something that already exists, like cPanel or Webmin, one of these; a lot of people already use them. And then be a little evil and make it work better with Kea than with the other DHCP servers out there. But at least get in front of people's eyes, because otherwise you'll have to do it in one-on-one interactions, and those last a short time, they are not dependable, and you are always behind.
TOMEK MRUGALSKI: Okay. Thanks.
AUDIENCE SPEAKER: Christian Peters. We are using DHCP, for example, with a Cobbler install server and stuff like this. Do you have a forecast for integration with tools like install servers, similar stuff?
TOMEK MRUGALSKI: What tools?
AUDIENCE SPEAKER: Cobbler is an install server; you can install virtual machines with it and it works with DHCP automatically. It writes a DHCP configuration, for example, and then rolls out the machine.
TOMEK MRUGALSKI: No, we don't have this supported yet.
AUDIENCE SPEAKER: Hi. Thanks for this work, I think it's great. I have a question about config files. Did you consider YAML, Yet Another Markup Language, for config files instead of JSON?
TOMEK MRUGALSKI: Okay. We decided that very early in the Kea development. We had the concept of configuration backends, that you could have multiple of them, and we quickly realised that managing those separate configuration backends would be a nightmare, and everyone was simply interested in JSON. So we decided, okay, this will be the language. If you want to use a different way of storing your configuration, you can, and probably your environment allows exporting to JSON.
AUDIENCE SPEAKER: Randy Bush, IETF meeting network operations centre, server division. We have on the order of a dozen, not less, VLANs with a few thousand IP addresses scattered about, and we're looking at moving from ISC DHCP to Kea, and at our scale I wouldn't plan on making much money from conversion tools, because it looks too simple.
TOMEK MRUGALSKI: Okay. So ‑‑ if you manage your network properly, yes, then your configuration is simple. But I saw some horror stories that unfortunately I cannot show publicly, where the configuration was over 20,000 lines long. So... the complexity sometimes comes not from the size of the configuration, just ‑‑
RANDY BUSH: Ours is a few hundred lines, but it's fairly structurally simple; it's just enumerating all the crap, the VLANs, the pools, and the different defaults for the pools and all that kind of crap. The change between the two looks very straightforward.
TOMEK MRUGALSKI: Yes, of course. You mentioned structure; this is the problem. There are some configurations that lack structure, just a random collection of configuration options stuffed in there, and those people would have problems migrating.
CHAIR: Next up we have Ondrej. I am sure you are all familiar with the Turris Omnia project, and he will tell you about some of the things that happened.
ONDREJ FILIP: I thought I would be the only one talking about funding, but the previous speaker did as well; as you can see, funding is a big issue in the Open Source world.
I have a very non‑technical presentation, just relax. It will be easy.
ONDREJ FILIP: I will not, you know, go into the hardware and software inside. It will be more about the financing and especially about crowdfunding, because it's a very interesting area, we explored it very deeply, and I have a lot of advice about what you shouldn't do, actually.
Because I'm not a very good presenter and I sometimes get lost in my own sentences, I help myself with someone else's words. So this is the last quote from me; everything else is somehow stolen.
First of all, why we started this. We are a not-for-profit association, so when we do something, the first idea is not profit; we want to create some value. That's why we do a lot of Open Source projects, and that's why we also started this Open Source hardware project. Usually, if you create something, sometimes there are ways to also finance it, but sometimes not. In this case we chose this way.
Why we did it: the original idea is from 2012. We observed the situation on the CPE market and we saw that the situation was just not perfect. Those devices that you have at home, your home routers, are not in the best shape. Especially, they weren't supporting IPv6 or DNSSEC, and you couldn't update the firmware easily. We said, let's do the opposite: let's create a device that will support all this. A device that will also do some security analysis, that will have an adaptive firewall that reacts to the current security situation. And also, if we are doing hardware anyway, let's create extensible hardware that has options to connect sensors and so on. We also planned to have an application market; we hoped some other vendors would join the project and put their own applications on top of it. It's not done yet, but there are some good beginnings of cooperation.
So that's how it started. We created a router; now we call it the blue Turris, and we gave it to 1,000 people for free in the Czech Republic. It was very successful. So the next year we created another thousand of those and again we gave them away, technically we rented them for one crown, again locally in the Czech Republic. Then we presented it at various international forums, including RIPE, and we got a lot of attention. Many people were interested in the project and liked the idea of a powerful hub that you have at home, that can do much more than a router, that can act as a home server, that is extensible and secure, and that has automated updates and things like that. Many of you here in this room actually approached me and told me, can we buy it, you know, handing me your credit cards, and I said, no, guys, this project was just meant locally; it was a not-for-profit project. And they said, do it more commercially, just start to sell it, and unfortunately I thought it was a good idea.
So that's how the Turris Omnia project started. You can see it's quite powerful, extensible, nice, whatever. But locally it raised some issues. If you want to create a bunch of routers, you need a lot of money to finance that, and we were discussing whether a not-for-profit association that's taking care of the national domain should finance such a project. And you know, you always have critics when you start something. So we decided the way to avoid this criticism was to have a test, and we had no better idea than to test it using crowdfunding. There were more options in the crowdfunding world; we chose Indiegogo, which works with companies from the Czech Republic; Kickstarter has some restrictions there, I don't know why. So we chose them. We set a very conservative target: we said we'll produce the routers if we collect $100,000. By the way, when I talk about numbers, if you see that we collected 100,000, that doesn't mean that we got 100,000. There are several fees that you have to pay: PayPal, Indiegogo, VAT if you are selling in Europe, and a lot of minor fees that you pay on the way to production. So if you think that it was a lot of money, just think about the subtractions from that sum.
Anyway, a conservative target. Then the campaign started. You can choose the duration of the campaign; we chose the longest possible period, which is two months. And even that is a very, very short period. You don't think so, but it goes very quickly: people start requesting things, they have questions, they send e-mails, Jabber chats, whatever; you communicate a lot and you have to make a lot of decisions very quickly. We made a lot of stupid ones, and then a lot of even more stupid ones. So...
What happened? We collected the money in less than one day. Excellent, it was perfect. So, what next? If you collect money, that means you need to fulfil; you need to produce it. And of course, because it was so quick, you think: we should maybe collect more. But what do you offer people so that they, you know, back more? So we started to offer stretch targets: we said if we reach a certain sum, we will offer something more. We also started to offer some additional perks, things that would be interesting for the people.
So, the next 59 days of the campaign were kind of fed by this. At the end of the day we collected about $850,000, which is roughly 3,700 routers. Then we switched the campaign to the InDemand phase and we collected a little bit more, reaching roughly 4,400 routers in total. This is how the campaign worked: that's the beginning, it was quite steep; then the middle; then the end of the campaign, again quite steep; and then we changed to the InDemand phase, where the speed was not so big.
These are the daily additions. As you can see, the last day was amazing: more than a hundred thousand dollars in a day. I wish I had more days like that.
If you want to see the countries: not surprisingly, the biggest contribution was from the Czech Republic, then the United States and Germany, and in fourth and fifth place were the UK and Switzerland. By the way, Switzerland ‑‑ you know, we measured the conversion rate, the number of visits that converted into one backing of the campaign, and when a Swiss guy came to our pages, he or she just paid. They were amazing.
In the Czech Republic it was something like 1,000 visits per backing, but in Switzerland it was something like 15 or so. And they got everything. They were great. Thank you, guys from Switzerland.
And here is how it looked: the main perks and then some additions, more memory, stuff like that.
So now, the perks. It was our first campaign, and I hope it was also our last campaign. We made some mistakes, because we hadn't realised that if you promise something, you also need to fulfil it.
You know, just imagine the situation: the campaign is very quick, you communicate with a lot of people, and you think that the best you can do is to collect as much money as possible. The campaign is a kind of e-shop and you think about how to motivate people. You offer some additional stuff, and first of all you think, let's offer something which is cheap and not complicated. For example, a sticker, right? A sticker doesn't cost anything, and they will give you a dollar for it, isn't that great? Or a group photo. Then you think, maybe we should offer more variants: what if some people do not want wifi, maybe some people want more memory, maybe some people want an LTE modem, and other options. And then you have stupid things like posters, T-shirts and so on. It was really not a very good idea. Then we started stretch goals; again, we said if we reach a certain sum, we will offer more colour variants. A colour doesn't cost anything. But in the logistics, you will see.
A mobile application; an extension so that you could use the same add-ons as with a Raspberry Pi; some more monitoring software; a three-year warranty, that's the easiest one, but we will see in three years.
Then we reached a certain level and we said we will use a metal case; it's better, it's shinier, it's better than plastic. Honestly, there were more reasons behind it. First of all there was cooling, and also the delivery time, because making a plastic box is not as easy as we originally thought. And many people argued that actually having a router in a metal box is better than a plastic box, so there was a lot of discussion. I was afraid that this would kill the whole idea, but I think now people understand why. I had to write a long blog post about this just to explain why we chose this one.
Then some more software: easy OpenVPN configuration, active bandwidth monitoring, configuration backup, educational videos, almost done; then we upgraded the internal storage from 4 to 8 gigs. Luckily we ran out of time to promise more, because I'm afraid we would have.
Why this was not the best idea. Imagine the situation: you have 4,400 orders, and you have a router in three colours, with wifi or no wifi, and one or two gigabytes of memory. That's 12 variants of a router. And you know, I always say that when standardisation is done by governments, we have some problems; a good example is the power source. Around the world there are different power plugs, so we chose the four most used options. That means that each router now has 48 variants. And then those tiny things that cost nothing: stickers, yes or no; photo, yes or no; LTE, standardised by governments, so that means four options: no LTE, or LTE 1, 2 or 3. If you multiply it all, that's about 1,500 variants. And then T-shirts. Actually, nature is a little bit worse at standardisation than the governments; surprisingly, we all have different bodies. And of course we said, yeah, we will do two colours, right, colours don't cost anything. So we had six sizes for gentlemen, four for ladies, that's 10, and two colours, that's 20, and of course there is the option without a T-shirt, that's 21 options. So at the end of the day, we had some 30,000 options you could order.
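The variant explosion can be reproduced with a few lines of arithmetic. Treating the posters mentioned earlier as one more yes/no perk makes the product line up with the roughly 1,500 and 30,000 figures quoted here; the exact option list is my assumption.

```python
# Rough sketch of the variant explosion described in the talk.
colours = 3
wifi = 2                 # with / without
memory = 2               # 1 GB / 2 GB
router = colours * wifi * memory           # 12 base variants

power_plugs = 4
hw = router * power_plugs                  # 48 hardware variants

sticker = 2              # yes / no
photo = 2                # yes / no
poster = 2               # assumed yes/no option, see lead-in
lte = 4                  # none, or LTE 1, 2, 3
perks = hw * sticker * photo * poster * lte    # ~1,500

tshirt_sizes = 6 + 4     # six men's sizes, four women's
tshirt_opts = tshirt_sizes * 2 + 1         # two colours, plus "no T-shirt"
total = perks * tshirt_opts                # ~30,000 orderable combinations

print(router, hw, perks, total)            # 12 48 1536 32256
```

Every "free" option roughly doubles (or quadruples) the count, which is why the logistics became the problem rather than the cost of the perks themselves.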
And surprisingly, it sometimes happened that two orders were the same. Sometimes. Not often. And that's ‑‑ we offered more than that, some nice artwork and everything. So, that was our kick-starting campaign.
Benjamin Franklin said a very interesting quote, and I used it. But I'm afraid we improved on it a little bit: we found more ways to get it wrong.
And also, you know, we promised delivery in April and we started the production phase. We had experience from the two Turris routers, the prototype was ready, so we were pretty sure that everything was under control. Of course, we forgot that previously we had produced a little bit less, and when we said April, we meant the production would start in April, but what the backers read is that they would get the router in April, which is not possible when the amount is larger and there are delays in production. Also, logistics. Logistics companies are marvellous; you believe that they have solved the problem of moving packages, that is their core business and mostly they do, but at the end of the day you see that it's more like the Flying Dutchman problem: packages are out travelling around the world, you don't know why, and suddenly they return and have to be sent again. It's amazing.
Anyway, we started the prototype series. The first manufacturer was from the Czech Republic. We ordered 60 pieces and we got 44 of them, and 16 later. That's kind of strange; anyway, it happens.
What was interesting was that just 27 of them were even partially functional. There was a problem in the tooling: we ordered one technology and got something else. So we scrapped it and said, let's do it again. We had some argument, and we reached an agreement about compensation. That's great, because we didn't lose much money, but that's not how to treat backers: they are waiting for the product, and the fact that you were compensated is not very interesting to them.
We chose another manufacturer, again one with a good reputation, this time in the UK. We ordered ten pieces and got six of them, which is normal in this business actually, and late, of course. They said that their machines were broken, and unfortunately none of the pieces were functional. We got compensation again. Yes, that was perfect.
So, we chose a third manufacturer, this one from China; again a company with an excellent reputation. And we thought that was going to be the final one. We had done some other business with them before, so we knew they would be able to deliver and that they can produce in large volumes. So we thought, that's the final test, and then we will order.
We ordered 30 pieces and they were delivered before the deadline. Incredible. Before deadline. That's something.
But you know...
I'm not a hardware expert, honestly, but if you do something like high-frequency communication, like the communication between memory and CPU, the thickness of the wires is quite important. It's something called controlled impedance, which must be set very precisely. And we ordered this, and unfortunately we got this, which actually doesn't work very well.
So, talking about Asia: this ancient Asian philosopher said a very interesting quote, and I like it. I used it very often during the campaign.
So, we promised April; the delivery started in September, because we had to go through many prototype series, and for each prototype you wait. In software, you just type make, it compiles, and you immediately see the problems; in the hardware world, you wait at least six weeks. That's a small difference. So we were a little bit delayed. And again, when we set the date, we meant we would start shipping, but people read that they would get it at home. Right.
So, we started the production and then handed over to a logistics company, and I think I already explained how they work. They were also not able to ship, I don't know, 1,000 pieces a day, because they had to work out which of the 30,000 options each order was, and then finalise it and send it.
So, the last router from the first phase was dispatched in November. But since the production was a little bit quicker, some routers were in storage, and it seemed quite a good idea to start selling them regularly. But if you want to make your community very upset, I have good advice for you: start selling routers before you send them to your backers. I think a friend of mine bought an ironing board for his wife as a Christmas gift — that went over better. Just don't do that.
That's all about the deadlines.
There are many, many regulations if you do hardware and everything. I don't have much time to discuss it, but you will enjoy it if you want to do it.
I promised one slide about the future. So we are now doing all the software ourselves. We are rebuilding it from a device for geeks into something more user friendly, and we have also started cooperation with other vendors, so we will be able to deliver products for smart homes — and I hope that those products will be secure enough. And I promise not to do any crowdfunding in the future.
Thank you very much.
MARTIN WINTER: So we are a little bit behind. So keep it short.
AUDIENCE SPEAKER: Fredy. Thanks for the great product and all the effort you put in. We started to resell it in Switzerland for our Fiber7 gigabit FTTH offering. There is only one problem: to sell it in masses, it should be half the price. Despite that we have a lot of nerdy customers in Switzerland — you guys are our customers basically, those who can spell IP — they still compare the product with the FRITZ!Box 5490, which is half the price. I mean, this is not a criticism at all, but this is the reality we see in the market. Thank you.
AUDIENCE SPEAKER: Niall O'Reilly, Tolerant Networks Limited. I noticed that the bit-encoded options still fit in a 16-bit word and that there is a bit to spare — you are only at the 32,000 level. I have a suggestion for the extra bit: it would be really nice, because of the way our bank treats us, if you could let us pay in euro.
MARTIN WINTER: Thank you ‑‑
So, next up we have Andreas on RPKI tools.
ANDREAS REUTER: I will briefly talk about two Open Source tools that we have developed. They are called MIRO and RTRlib; RTRlib was developed with people from Hamburg. If you have anything to do with the RPKI in any capacity — operator, developer, researcher — maybe these tools are useful for you.
There have already been two talks about the RPKI, so I'm not going to go into detail; I'm just going to give you the necessary context. In the RPKI you have resource certificates — these are just X.509 certificates with some extensions — and you have route origin authorisations (ROAs), which authorise an AS to legitimately announce prefixes. ROAs are issued using resource certificates. These two are the more interesting objects in the RPKI. There are others, but if you are researching the RPKI, or you want to look at it as an operator, these are the ones you look at. They reside in the global RPKI, which is a collection of public repositories; anybody can download the contents. Local caches fetch the contents periodically and validate them, and then BGP routers can download the validated information in the ROAs and use it for route origin validation. So much for the context. The first tool is MIRO, which stands for monitoring and inspection of RPKI objects, so the name is already very telling. It has two components. The first is the validator, the back end, which validates the objects and exports them into JSON. This is different from what a local cache does: we export all the RPKI objects, which have all the information in them, not just the ROAs.
The other component is the browser, which is a web application with a graphical user interface. It essentially takes the information from the validator and allows you to click through it, inspect objects, look at their contents and of course filter for specific objects.
So this is something you can either deploy yourself — it contains the validator — or you can use the version that we're hosting.
This software makes heavy use of the RPKI Commons library from RIPE NCC, which is great if you need to handle RPKI objects and all the crypto stuff. So if you ever need to touch RPKI objects programmatically, I suggest you check it out. It's very helpful.
Just to give you an idea where MIRO is in the RPKI ecosystem: it's in the same layer as a local cache. But again, instead of only exporting valid ROA information, you get the certificates, the ROAs, the manifests and the CRLs.
I realise you probably can't read most of this. You don't have to. Here in MIRO you have a control bar where you can select which certificate tree you are currently looking at — RIPE is selected here. You can download it, and you can filter for a specific object.
Here on the left you have a hierarchical file browser that shows you the certificate tree: on top the trust anchor, then the resource certificates, and at the bottom you see the ROAs. If you are wondering where the manifests and CRLs are: we chose not to include them in this browser because they clutter it up — they are available and you can look at them, just not here.
You can also look, of course, at the contents of the objects themselves. For example, if you are an operator and you'd like to make sure that a ROA you have issued actually contains what you think it contains, you can look at it here. You see all the relevant information, and whether it passed validation or not. If it hasn't, it will show you the errors and warnings so you can fix it. It shows you of course the AS number and the list of prefixes.
So, to summarise: if you'd like to try this yourself, it's online right now at this URL. If you'd like to deploy it yourself, check it out on GitHub. It's under the MIT licence, so you can do whatever you want with it.
So, the other tool I'd like to talk about is the RTRlib. This is a lightweight Open Source C library that implements the client side of the RPKI-RTR protocol. If you want to read up on it, here are the relevant documents. And if you go back to our RPKI ecosystem, the blue arrows that you see here are what the RTRlib implements. One of the purposes of the RTRlib is to run on a BGP router, to fetch the ROAs and perform route origin validation. This is useful if you'd like a cache-server-independent implementation to fetch these ROAs. I already mentioned that this can run on a BGP router, so that's obviously interesting if you have routing software. But you can also use it without a BGP router: you can build your own monitoring tools with it if you want programmatic access to ROA data.
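To make the route origin validation described here a bit more concrete, below is a minimal Python sketch of the outcome logic (valid / invalid / not found). This is not the RTRlib API — the function name and the ROA data layout are made up for illustration; RTRlib maintains a prefix table fetched over RPKI-RTR and answers this same kind of query for a router.

```python
# Sketch of RPKI route origin validation against a set of ROAs.
# A ROA is modelled here as (asn, prefix, max_length); this layout is
# illustrative, not the RTRlib data structure.
import ipaddress

def validate(roas, prefix, origin_as):
    """Return 'valid', 'invalid' or 'not_found' for an announcement."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for asn, roa_prefix, max_len in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        # A ROA covers the announcement if the prefix is inside it.
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True
            # Origin AS must match and the prefix must not be more
            # specific than the ROA's max length.
            if asn == origin_as and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not_found"

roas = [(3333, "193.0.0.0/21", 21)]
print(validate(roas, "193.0.0.0/21", 3333))  # valid
print(validate(roas, "193.0.0.0/22", 3333))  # invalid: exceeds max length
print(validate(roas, "10.0.0.0/8", 64496))   # not_found: no covering ROA
```

Note that a covering ROA with the wrong origin or an exceeded max length makes the route invalid, while a route with no covering ROA at all is merely "not found".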
If it seems a little daunting to use a C library for a monitoring tool, we also have Python bindings, and we are working on integrating this into BGPStream; then you'll be able to get a stream of BGP data annotated with the result of validation.
We are working on integrating with Quagga, and it's already integrated into BIRD. If you have BIRD, you can already perform route origin validation with the RTRlib.
To give you some inspiration for what you could do with this library: there is a browser plug-in for Firefox and Chrome that will show you whether the website you are currently visiting has been secured by the RPKI — or more specifically, whether the IP of the server hosting the website is covered by a ROA.
We have also built a monitoring website, the RPKI real-time dashboard, which gives you real-time information about the RPKI deployment status. There are also a number of other monitoring websites you can check out, like SURFnet's. Just some things you could do with this.
So if you'd like to know more about the RTRlib, here is the website. You'll find all the code and all the tools that I have just shown you, and a few more, on GitHub. This is also under the MIT licence, so you can use it however you wish.
And that was already the talk. I told you it was going to be very brief. So here are the important links again, and thanks to our supporters and the people who contribute to this project.
ONDREJ FILIP: Thank you. Are there any questions? I see one.
AUDIENCE SPEAKER: Hello, just a quick comment. As you may know, we will be doing significant work on the RPKI validator starting this summer. So, two things about that. First of all, it's great that you are doing all these visualisations and things that are really useful; I think that means we shouldn't really spend too much effort there. But we should talk offline about how you could use the next version of the validator, as a library or through an API.
ANDREAS REUTER: I wanted to talk to you. So thank you.
MARTIN WINTER: Any other questions? If not, then thank you again.
The next speaker is Martin Winter, who is going to talk about FRRouting. So, Martin.
MARTIN WINTER: Okay. I hope everyone by now has heard about FRRouting. If not, then this is the talk for you.
So, a quick start, for the ones who are living on a different planet or haven't seen IP addresses before. FRRouting is a routing daemon — what we call a routing stack — so don't confuse it with a complete router; it's just the routing protocols. It is a fork of Quagga. We officially announced it about a month ago. We implement most of the common routing protocols and are working on more; we just got the first version of EIGRP into the code. It works on Linux and on most of the BSD systems. It's sometimes not that well known where it gets used: there is a lot of interest from white box switches which want a routing stack, a lot of virtual networks, VM routers and tonnes of other places. And obviously we wanted to make sure you could use it as a route reflector, where Quagga sometimes didn't have the best name.
So, why a new fork? What's different? The key difference is that we wanted something which is more community driven and led, so we wanted to make sure it's an open community. We also needed, or wanted, something with much faster development. A lot of us were not that happy with the way Quagga was done before, and we figured we could change that with the tools and make it a much better model. And we wanted to make a very open community, so everyone is very welcome to join.
So, who is behind it? Obviously there are a lot of developers, some of whom work as individuals. These are some of the companies who initially supported us and are behind it — you may recognise some names, and maybe not some of the lesser known ones. It includes most of the previous Quagga developers, and there are a lot of other companies with an interest in it who are supporting us, which is great. It is quite a strong community. We are mainly aiming to make it a complete routing suite which you can use in a commercial network — more or less the best complete routing suite out there.
So, a bit more in detail: what's different? We tried to change how submissions are vetted. Everything is centred on GitHub now, so we work with pull requests — if you remember, Quagga before was sending patches to a mailing list, which was a little bit harder to test. We do automated testing on everything: if you open a pull request, the CI system runs the tests and posts a comment back, so you immediately see what's going wrong or right. That should also make it easier to get things in. We have weekly meetings to talk about it, and the idea is that a pull request should be open for about two weeks max, and then it should be merged, at least if no issue is found.
For the common assets — domain names and everything — we wanted some independent, somehow trusted organisation to hold them, so they are held by the Linux Foundation: we are officially one of these Linux Foundation collaborative projects, a kind of lightweight Linux Foundation project. If you are joining, it's not that you have to pay money; it's more or less that if there is ever a dispute in the future, basically they are the arbiter for it.
We have elected maintainers and an elected steering committee. Again, it's very open, and you are welcome if you feel like you want to start helping out. We have quite a low bar of entry for being a maintainer; the biggest thing is staying active.
Then the key thing for you now, if you haven't figured out how to get it. There are mainly two choices. You can get it as a binary package; the package which is easily available right now is the snap package. In case you haven't heard about snap packages: it's something Ubuntu started, a different package format which is like an overlay container, more self-contained, so it cannot write outside of the container, and it's an easy way to do the install. Quite a few network vendors are looking at the snap package on their switches.
And it's also not Ubuntu specific: it runs on Ubuntu, it runs on Fedora, it runs on Arch Linux, it can run on OpenWrt — various distributions with the same binaries. That's one way to get it.
We are still cleaning up and testing the packaging for Debian, Ubuntu and CentOS; parts are very close to being finished in testing. If you have experience, your contributions are welcome, especially on the testing — for the ones who have done Debian or RPM packaging, you probably know how painful it can sometimes be. We are looking at other packaging too, to have them all available.
Obviously the other choice is simply source. There is the link on GitHub; you can go there. You will find mainly three branches. There is the stable 2.0, which was released about a month ago, basically when we publicly announced. We called it version 2.0 mainly because of the changes we have on top of Quagga, which is right now at 1.2, I believe. And we have 3.0 — we made a major jump, I'll get into it and you will see it soon: we added so many features that we figured we needed to call it 3 already. Just to give you an idea: from Quagga 1.1, which was basically the base, up to 2.0 there are about one and a half thousand Git commits, and from 2.0 to 3.0 there are about 2,000 Git commits from all the changes. Then there is the master branch for the latest development. If you want to test it right now, 2.0 is basically the perfect stable one. 3.0 is now basically at the release-candidate stage; it's quite stable, but you may still run into an issue sometimes — we would be very happy to hear about that. And master is the unstable latest.
So, to give you an overview of what came in — these are features on top of what you currently have in Quagga. On the BGP side, a few of the interesting things: add-path support — I know at least one person in this room is very happy about that — and BGP hostname support, which is an interesting thing. There is also next-hop tracking, which helps make things a bit faster. The big new thing over here is LDP: in zebra we now have basic MPLS added, and we have an LDP daemon with all the LDP stuff implemented. On OSPF, for the OpenBSD folks who want to run it, that's working again; it was broken for a long time. We added a lot of new testing features — we started doing some topology testing too, and the unit tests were changed over to basically pytest from the old ones. And there is a lot of JSON support if you want to automate things: on a lot of the commands in the VTY you can add 'json' and you get JSON output.
So that's version 2.0. And then from version 2.0 to version 3.0 we have a very long list of even more stuff that came in. There are a lot more LDP enhancements moving in. On PIM we added sparse mode. On BGP we had a few things: the BGP shutdown message — I'm not sure if our big fan of shutdown messages is in here — graceful restart, and something in the VPN part; a lot of EVPN work is coming in that direction. On IS-IS, the SPF back-off, and other parts are included too — there is quite a bit of IS-IS work going on, so we are trying to get it much better. A lot of it came thanks to the IETF community, who are doing a lot of active new work on top of IS-IS, so we tried to make sure we have a good solid base to start on.
So, we also have a label manager now. In version 2.0 we had MPLS basically just for LDP, where we needed labels; now more stuff is coming in, like segment routing, and we needed more labels, so we have a centralised label manager that basically maintains this stuff.
Let me go a little bit more into LDP. This is just a long list of all the RFCs which we support. It should be quite complete, at least with version 3.0; it should have most of the things in there.
So I'm not going through all the details here. You can read them up on your own.
I just want to show a simple example, which is also documented — I'll give you the link later. A simple test topology which we had: one router here running on Linux, one running on OpenBSD, and then connected with other devices.
Just a quick overview. It looks quite Cisco-like; that's how the LDP configuration looks, at least partially — it's still missing the network part. So it's very similar to what you may be used to from a Cisco configuration.
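The slide itself isn't reproduced in the transcript, but as a rough sketch, an FRR ldpd configuration looks something like the following (router ID, addresses and interface names are placeholders, and this is only a partial fragment, as the speaker notes):

```
mpls ldp
 router-id 10.0.0.1
 !
 address-family ipv4
  discovery transport-address 10.0.0.1
  !
  interface eth0
  !
  interface eth1
  !
 exit-address-family
 !
```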
One of the key things: if you want to play with MPLS, it's only supported on Linux if you have a kernel 4.5 or newer, or on OpenBSD. That's one of the biggest issues. It's enabled and compiled in by default, and FRRouting detects the kernel extension on its own. The other thing you shouldn't forget is mainly this: you modprobe the mpls_router and mpls_iptunnel modules, and then you have to turn on the label processing on all the interfaces.
So those are some of the key things, but if you make sure you have a current kernel and you turn it on, you basically get MPLS running automatically. And there is a command in FRRouting — 'show mpls status', I believe it is — that will tell you whether it detected the current kernel and whether it detected the MPLS modules from the kernel.
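The Linux-side enablement described here boils down to a few commands, sketched below (the interface name and label-table size are placeholders; these need root, and the exact sysctl values depend on your setup):

```shell
# Load the MPLS forwarding and tunnel kernel modules (kernel 4.5+)
modprobe mpls_router
modprobe mpls_iptunnel
# Size the MPLS label table
sysctl -w net.mpls.platform_labels=100000
# Turn on MPLS label processing on each interface that should handle labels
sysctl -w net.mpls.conf.eth0.input=1
```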
And for the full example with more details, there is a URL here at the bottom. It's basically a file: if you download the FRRouting source code, you will find in the documentation the details of exactly how to configure this with Linux or with OpenBSD.
Okay. I'm handing it over.
SECOND SPEAKER: Good morning. So, a bit about a few features that were added in 3.0, and one additional feature that will come post-3.0 as well.
So, PIM sparse mode. For us — I'm from Cumulus — this was the feature for data centres, and specifically for high-frequency trading, where they use multicast for the messages.
So, a few things that were added were ECMP support for example, but also source specific ranges for broadcasting and such.
And one interesting thing that was already available for BGP is unnumbered interfaces. It's not really unnumbered — if you look at the details, you are using the IPv6 link-local address to set up the session to your neighbour. But as you can see in the example configuration, it saves you a lot of addressing in your network, specifically on your backbone.
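The example configuration from the slide isn't in the transcript; as a sketch, BGP unnumbered in FRR looks roughly like this (the AS number and the switch-port interface names are placeholders):

```
router bgp 64512
 neighbor swp1 interface remote-as external
 neighbor swp2 interface remote-as external
```

Each neighbour is identified only by its interface; the session is set up over the IPv6 link-local address learned on that link, so no per-link addressing plan is needed.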
Now, another feature we added specifically was BGP signalling for MPLS labels. That was also more focused on data centres, specifically for customers who want to do tenant separation — either for a specific project, or for specific customers of MSPs — so they can separate the VRFs. Linux VRF is now available, and that is something you would need if you ran this on a host.
And well then you wouldn't need a layer 2 overlay if you want to do this.
And now another thing that is already developed, but I heard won't be in 3.0 because we're already approaching the release candidate: EVPN VXLAN. It has a lot of capabilities — I heard someone call it the 'zip' of networking — and basically there will be support for three different route types: 2, 3 and 5. The idea with types 2 and 3 is that you can build your layer 2 overlay. That's specific to data centres, but you can also use it for data centre interconnects.
Here you use VXLAN encapsulation. It also has the possibility to use an MPLS data plane, but that's not planned at the moment.
Now, another thing, with type 5 support, is that you can do layer 3 separation. Basically you can do the same as with MPLS segment routing, but you use VXLAN as the encapsulation. For white box switching that has some advantages, because some ASICs have a limited label space.
Now, this is an example of using it inside your network. But since it's included in FRR, the thing is that you can run it on a host itself. If you look at data centres and overlay networking these days, there are multiple vendors building overlay networks, and what is sometimes lacking is a solution to connect your overlay to your underlay network. As soon as it's integrated in FRR, you can use it for that as well. So that would give more opportunities there.
MARTIN WINTER: And that basically concludes it. Just the links for you again. As I mentioned, the development is all GitHub focused, so we have the link for the code and the issue tracker. And seriously, if you find problems, we are very interested: please open an issue — we actively monitor that and try to get things fixed. If there are things you don't see and want us to add, please open an issue too, so we can see the interest in it and can discuss or see how we can get it added.
And there is also a feature list — well, it may sometimes be lagging behind, because on GitHub people constantly develop and add other features, and we try to keep up with 'oh yes, we have that added too'. As I mentioned, in the latest master we have EIGRP in there too, which we haven't talked about.
ONDREJ FILIP: Are there any questions? I don't see any. There is one.
AUDIENCE SPEAKER: Yan Filyurin, Bloomberg. I see you guys a lot in the data centre space, I see you guys in New York a lot. Does the fact that you are here mean you are thinking of trying to do Internet routers?
MARTIN WINTER: There are actually a lot of people who use it in Internet routers, in different environments. For one thing — especially in eastern Europe — there were quite a few ISPs who, even today, use Quagga as a full router, their own PC as a router. If you talk about the Internet space, a route reflector or route server: yes, we definitely want to. We know about a lot of issues and problems there; we just somehow failed to get them fixed in Quagga, and we are working on it. There are a lot of performance fixes already done, and more coming up. So if you are running routers and you think it might be good to have another choice, I urge you: please test it, and if it doesn't work, please open an issue and let us know what part is still broken so we can fix it.
AUDIENCE SPEAKER: Hi. My name is Ben Gordon, from Resilans, Sweden. I just wanted to express my gratitude to the whole FRRouting team for this. I have been using Open Source routing since 1994, before Zebra was even conceived, and we have been building Open Source routers for our own purposes. I think this new fork of Quagga is really a leap forward. So thank you.
AUDIENCE SPEAKER: Blake with Zayo. Thanks for putting this together. I know it was a tonne of work over a long period of time. Let's talk about sponsorship.
AUDIENCE SPEAKER: One quick remark about using Quagga as an Internet router. As you know, white box switches cannot hold the full routing table, but if you reduce the number of routes installed in the hardware and focus only on the top prefixes which carry most of the traffic, and put different code on top, you can use white box switches as BGP edge routers. That's a use we see more and more with our customers, and thank you guys for all the contributions you are making, because I think most of the data centres will move to Open Source routing on the edge.
MARTIN WINTER: Yes, thank you for mentioning that. As I said, there are two things. One is that when people say it's not usable for an Internet router, there were some concerns, especially with Quagga, which were well known. The other thing is that if you are running on white box switches, a lot of them have very limited forwarding space for hardware forwarding, but that's outside of Quagga or FRRouting. We hope — and I know — there are a lot of vendors working on getting boxes with larger tables in them.
ONDREJ FILIP: I don't see any other questions, so thank you guys.
Gert has something urgent that he needs to share with us, so he asked for two minutes on the stage.
GERT DÖRING: Thanks for giving me a few minutes. I'm not speaking as a Working Group Chair or anything today, but as an OpenVPN developer. As some of you might be aware, there has been a very, very deep audit of the code, and unfortunately they found stuff. There will be an OpenVPN release at 4 p.m. today which fixes a remote denial of service problem. So, if you are running a server, please upgrade. To make this a bit less panicky: it's not remote code execution, and your data is not vulnerable. The problem is that a malformed packet can be sent which the internal checks catch, and the server will then exit. That is secure error handling, but a somewhat stupid thing to do on external data. So the patch basically turns this into an error message and drops the packet. There is no remote code execution. There is no loss of data. But your server can be made to stop. So you want to upgrade. That's all of it. Thank you.
JERRY LUNDSTROM: Hi. I work for DNS-OARC, and I attended the RIPE DNS measurement hackathon a few weeks ago, where I got to play with the Atlas API in Go. After the hackathon, I took that work and made it into a Go package. It currently has three back ends: one for reading JSON files, one for using the RESTful API and one for the streaming API, and all the measurement results are returned as native Go objects. This is a quick example. There is an interface, called Atlaser, which all the back ends should support. This example makes a new stream and gets all the DNS measurement results; what's returned is a Go channel, so you can process the results however you want. There are also helper functions to decode stuff like the DNS abuf, and you can view the full example at the link.
The code is on GitHub, of course. So please use it, implement things, and hopefully I'll get some pull requests.
Thank you. Any questions?
CHAIR: No questions? Okay. Early lunch.
MARTIN WINTER: So, that concludes our Working Group, five minutes early. Great — you have a head start for lunch, so you can be the first at the lunch buffet. See you again next time in Dubai, and again, remember: if you are considering becoming a Working Group Chair — we will mention it again on the mailing list — start to think about it.
LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC