	This may be of interest to the community.

	By the way, does anyone know whether OpenFlow supports IPv6 yet? The last time I checked, it didn't fully.

Regards,
.as

Begin forwarded message:

From: Eugen Leitl <eugen@leitl.org>
Date: 17 April 2012 13:37:25 GMT-03:00
To: NANOG list <nanog@nanog.org>
Subject: OpenFlow @ GOOG

http://www.wired.com/wiredenterprise/2012/04/going-with-the-flow-google/all/1

Going With The Flow: Google's Secret Switch To The Next Wave Of Networking

By Steven Levy | April 17, 2012 | 11:45 am
Categories: Data Centers, Networking

In early 1999, an associate computer science professor at UC Santa Barbara climbed the steps to the second-floor headquarters of a small startup in Palo Alto, and wound up surprising himself by accepting a job offer. Even so, Urs Hölzle hedged his bet by not resigning from his university post, but by taking a year-long leave.

He would never return. Hölzle became a fixture in the company — called Google. As its czar of infrastructure, Hölzle oversaw the growth of its network operations from a few cages in a San Jose co-location center to a massive internet power; a 2010 study by Arbor Networks concluded that if Google were an ISP, it would be the second largest in the world (the largest is Level 3, which services over 2,700 major corporations in 450 markets over 100,000 fiber miles).

Google treats its infrastructure like a state secret, so Hölzle rarely speaks about it in public.
Today is one of those rare days: at the Open Networking Summit in Santa Clara, California, Hölzle is announcing that Google has essentially remade a major part of its massive internal network, providing the company a bonanza in savings and efficiency. Google has done this by brashly adopting a new and radical open-source technology called OpenFlow.

Hölzle says that the idea behind this advance is the most significant change in networking in the entire lifetime of Google.

In the course of his presentation Hölzle will also confirm for the first time that Google — already famous for making its own servers — has been designing and manufacturing much of its own networking equipment as well.

"It's not hard to build networking hardware," says Hölzle, in an advance briefing provided exclusively to Wired. "What's hard is to build the software itself as well."

In this case, Google has used its software expertise to overturn the current networking paradigm.

If any company has the potential to change the networking game, it is Google. The company has essentially two huge networks: the one that connects users to Google services (Search, Gmail, YouTube, etc.) and another that connects Google data centers to each other. It makes sense to bifurcate the information that way because the data flow in each case has different characteristics and demands. The user network has a smooth flow, generally following a diurnal pattern as users in a geographic region work and sleep. The user network is also held to higher performance standards, as users will get impatient (or leave!) if services are slow. In the user-facing network you also need every packet to arrive intact — customers would be pretty unhappy if a key sentence in a document or e-mail were dropped.

The internal backbone, in contrast, has wild swings in demand — it is "bursty" rather than steady. Google is in control of scheduling internal traffic, but it still faces difficulties in traffic engineering. Often Google has to move many petabytes of data (indexes of the entire web, millions of backup copies of user Gmail) from one place to another. When Google updates or creates a new service, it wants it available worldwide in a timely fashion — and it wants to be able to predict accurately how long the process will take.

"There's a lot of data-center-to-data-center traffic that has different business priorities," says Stephen Stuart, a Google distinguished engineer who specializes in infrastructure. "Figuring out the right thing to move out of the way so that more important traffic could go through was a challenge."

But Google found an answer in OpenFlow, an open-source system jointly devised by scientists at Stanford and the University of California at Berkeley. Adopting an approach known as Software-Defined Networking (SDN), OpenFlow gives network operators a dramatically increased level of control by separating the two functions of networking equipment: packet switching and management. OpenFlow moves the control functions to servers, allowing for more complexity, efficiency and flexibility.

"We were already going down that path, working on an inferior way of doing software-defined networking," says Hölzle. "But once we looked at OpenFlow, it was clear that this was the way to go. Why invent your own if you don't have to?"
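[Editor's note: to make that separation concrete, here is a minimal OpenFlow controller sketch written against the open-source Ryu framework. Ryu is an illustrative choice, not something the article names, and the ports and priorities below are invented. The switch keeps only a flow table; the policy lives in this program on a server.]

    # Minimal OpenFlow 1.3 controller sketch using the open-source Ryu
    # framework. Illustrative only: match fields and port numbers are
    # hypothetical, not Google's configuration.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class MinimalController(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser

            # Policy decided here, on the server: anything arriving on
            # port 1 is forwarded out of port 2, at high priority.
            match = parser.OFPMatch(in_port=1)
            actions = [parser.OFPActionOutput(2)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                          match=match, instructions=inst))

            # Table-miss entry: packets matching no rule are sent to the
            # controller, which can compute and install a new rule.
            miss_actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                                   ofp.OFPCML_NO_BUFFER)]
            miss_inst = [parser.OFPInstructionActions(
                ofp.OFPIT_APPLY_ACTIONS, miss_actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=parser.OFPMatch(),
                                          instructions=miss_inst))

[The switch never decides anything on its own in this model: it executes whatever flow table the server-side program installs, and punts unmatched packets back up for a decision.]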
Google became one of several organizations to sign on to the Open Networking Foundation, which is devoted to promoting OpenFlow. (Other members include Yahoo, Microsoft, Facebook, Verizon, Deutsche Telekom, and an innovative startup called Nicira.) But none of the partners so far has announced an implementation as extensive as Google's.

Why is OpenFlow so advantageous to a company like Google? In the traditional model, you can think of routers as akin to taxicabs getting passengers from one place to another. If a street is blocked, the taxi driver takes another route — but the detour may be time-consuming. If the weather is lousy, the taxi driver has to go slower. In short, the taxi driver will get you there, but you don't want to bet the house on your exact arrival time.

With the software-defined network Google has implemented, the taxi situation no longer resembles the decentralized model of drivers making their own decisions. Instead you have a system like the one envisioned when all cars are autonomous and can report their whereabouts and plans to a central repository that also knows the weather conditions and aggregate traffic information. Such a system doesn't need independent taxi drivers, because the system knows where the quickest routes are and which streets are blocked, and can set an ideal route from the outset. The system knows all the conditions and can institute a more sophisticated set of rules that determines how the taxis proceed, and can even figure out whether some taxis should stay in their garages while fire trucks pass.

Operators can therefore schedule trips with confidence that everyone will get to their destination in the shortest time, and precisely on schedule.

Making Google's entire internal network work with SDN thus provides all sorts of advantages. In planning big data moves, Google can simulate everything offline with pinpoint accuracy, without having to access a single networking switch. Products can be rolled out more quickly. And since the control plane is the element in routers that most often needs updating, networking equipment becomes simpler and longer-lived, requiring less labor to service.

Most important, the move makes network management much easier.

"You have all those multiple devices on a network but you're not really interested in the devices — you're interested in the fabric, and the functions the network performs for you," says Hölzle. "Now we don't have to worry about those devices — we manage the network as an overall thing. The network just sort of understands."
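[Editor's note: the taxi analogy maps directly onto code. Give a controller the whole link map, with costs that reflect current load, and it can pick every route up front and simulate big moves offline. A toy sketch of that global computation follows; the topology, site names and costs are invented, nothing like Google's actual traffic-engineering system.]

    # Toy illustration of centralized path selection: a controller with a
    # global view picks routes using link costs that reflect current load.
    import heapq

    def shortest_path(links, src, dst):
        """links: {node: {neighbor: cost}}; returns (cost, path)."""
        queue = [(0.0, src, [src])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, c in links.get(node, {}).items():
                if nxt not in seen:
                    heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
        return float("inf"), []

    # Global view: costs rise with utilization, so the controller routes a
    # bulk transfer around the congested nc/atl link before it ever starts.
    links = {
        "sc-dc":  {"nc-dc": 1.0, "atl-dc": 0.5},
        "nc-dc":  {"sc-dc": 1.0, "atl-dc": 4.0},  # direct link congested
        "atl-dc": {"sc-dc": 0.5, "nc-dc": 4.0},
    }
    print(shortest_path(links, "nc-dc", "atl-dc"))  # detours via sc-dc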
The routers Google built to accommodate OpenFlow on what it is calling "the G-Scale Network" probably did not mark the company's first effort in making such devices. (One former Google employee has told Wired's Cade Metz that the company was designing its own equipment as early as 2005. Google hasn't confirmed this, but its job postings in the field over the past few years have provided plenty of evidence of such activities.) With SDN, though, Google absolutely had to go its own way in that regard.

"In 2010, when we were seriously starting the project, you could not buy any piece of equipment that was even remotely suitable for this task," says Hölzle. "It was not an option."

The process was conducted, naturally, with stealth — even the academics who were Google's closest collaborators in hammering out the OpenFlow standards weren't briefed on the extent of the implementation. In early 2010, Google established its first SDN links among its triangle of data centers in North Carolina, South Carolina and Georgia. Then it began replacing the old internal network with G-Scale machines and software — a tricky process, since everything had to be done without disrupting normal business operations.

As Hölzle explains in his speech, the method was to pre-deploy the equipment at a site, take down half the site's networking machines, and hook them up to the new system. After testing that the upgrade worked, Google's engineers would repeat the process for the remaining 50 percent of the site's networking (a schematic sketch of this loop appears below). The process went briskly in Google's data centers around the world, and by early this year all of Google's internal network was running on OpenFlow.

Though Google says it's too soon to get a measurement of the benefits, Hölzle does confirm that they are considerable. "Soon we will be able to get very close to 100 percent utilization of our network," he says. In other words, all the lanes in Google's humongous internal data highway can be occupied, with information moving at top speed. The industry considers thirty or forty percent utilization a reasonable payload — so this implementation is like boosting network capacity two or three times. (This doesn't apply to the user-facing network, of course.)

Though Google has made a considerable investment in the transformation — hundreds of engineers were involved, and the equipment itself (when design and engineering expenses are considered) may cost more than buying vendor equipment — Hölzle clearly thinks it's worth it.

Hölzle doesn't want people to make too big a deal of the confirmation that Google is making its own networking switches — and he emphatically says that it would be wrong to conclude from this announcement that Google intends to compete with Cisco and Juniper. "Our general philosophy is that we'll only build something ourselves if there's an advantage to do it — which means that we're getting something we can't get elsewhere."

To Hölzle, this news is all about the new paradigm. He does acknowledge that challenges still remain in the shift to SDN, but he thinks they are all surmountable. If SDN is widely adopted across the industry, that's great for Google, because virtually anything that happens to make the internet run more efficiently is a boon for the company.

As for Cisco and Juniper, he hopes that as more big operations seek to adopt OpenFlow, those networking manufacturers will design equipment that supports it. If so, Hölzle says, Google will probably be a customer.

"That's actually part of the reason for giving the talk and being open," he says. "To encourage the industry — hardware, software and ISPs — to look down this path and say, 'I can benefit from this.'"
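[Editor's note: as flagged above, the half-at-a-time migration Hölzle describes reduces to a guarded loop over each site. In this schematic sketch every function and site name is a hypothetical stand-in, not Google tooling.]

    # Schematic sketch of the half-at-a-time site upgrade described above.
    # Every helper here is a hypothetical stand-in for real operations.

    def pre_deploy(site):
        print(f"{site}: G-Scale gear staged alongside the old network")

    def migrate_half(site, half):
        print(f"{site}/{half}: drained, taken down, rewired to new system")

    def tests_pass(site, half):
        return True  # stand-in for real verification of the upgraded half

    def upgrade_site(site):
        pre_deploy(site)
        for half in ("first half", "second half"):
            migrate_half(site, half)  # the other half keeps serving traffic
            if not tests_pass(site, half):
                raise RuntimeError(f"{site}/{half}: roll back, go no further")

    for site in ["nc-dc", "sc-dc", "atl-dc"]:  # invented site names
        upgrade_site(site)

[The guard is the point Hölzle emphasizes: verify each half before touching the next, so normal business operations are never disrupted.]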
For proof, big players in networking can now look to Google. The search giant claims that it's already reaping benefits from its bet on the new revolution in networking. Big time.

Steven Levy's deep dive into Google, "In the Plex: How Google Thinks, Works and Shapes Our Lives," was published in April 2011. He also blogs at StevenLevy.com.