[lacnog] [afripv6-discuss] IPv6 rollout.

Arturo Servin arturo.servin at gmail.com
Fri Aug 3 12:53:09 BRT 2012


	Excellent, Alvaro.

	This makes very clear the hugely important role universities have in driving the transition to IPv6. The number of access users they can put onto v6 is enormous, to say the least.

Regards,
as

	

On 3 Aug 2012, at 00:58, Alvaro Vives wrote:

> Hi,
> 
> Following up on Carlos's thread, I'm forwarding another email with more details.
> 
> A
> 
> From: afripv6-discuss-bounces at afrinic.net
> [mailto:afripv6-discuss-bounces at afrinic.net] On behalf of Andrew Alston
> Sent: Tuesday, 31 July 2012 21:55
> To: IPv6 in Africa
> Subject: Re: [afripv6-discuss] IPv6 rollout…
> 
> Hi Guys,
> 
> OK, a bit more info now that I have time to sit down and write; sorry,
> things have been rather hectic.
> 
> Here is how this came about, and a bit more of the full story.
> 
> The university in question was running a network without a real topology;
> in essence it was a flat network, v4 only, and a massive one at that.  This
> was causing REAL issues, and it was the result of years and years of
> legacy.  The decision was taken that IPv6 would be required, but, simply
> put, we had to fix the network first.  So, first question: how do you
> migrate from a flat network running a single flat /16 to a segmented
> network, and do it on a live campus environment that has 38,000 users on
> the network every day?  The answer: very, very carefully, with a lot of
> planning, and some very careful segment and IP planning.
> 
> So, the following decisions were made:
> 
> A.) We get rid of NAT entirely - if we were going to do this, we would do
> it properly, and the dual stack would be on live IPs, with identical v4
> and v6 topologies network wide.
> B.) We divide the network into three tiers: core, distribution and
> edge.
> C.) Core/distribution would be routed, would have to be scalable, and we'd
> use SP-style protocols to do this.
> D.) The edge would remain layer 2, but we chose not to span any L2 across
> distributions.  Since the core/distribution will be MPLS-enabled in
> future, if we need L2 between two points, we can EoMPLS it.
> 
> So we did our planning, and discovered (to our horror) that with the
> change of topology, the elimination of NAT, and the wireless rollout we
> had planned, we'd need a LOT more IPv4 space.  So we applied for, and were
> granted, an additional /15 of PI space from AfriNIC.  We then started
> implementation.
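> 
> (For scale, here is the arithmetic with Python's ipaddress module, using
> illustrative RFC 1918 prefixes since only the prefix lengths matter: a
> /16 is 65,536 addresses and a /15 is double that, so the new block
> tripled the total v4 space we had to plan with.)
> 
>     import ipaddress
> 
>     # Illustrative prefixes; only the sizes matter here.
>     print(ipaddress.ip_network("172.16.0.0/16").num_addresses)  # 65536
>     print(ipaddress.ip_network("172.18.0.0/15").num_addresses)  # 131072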
> 
> First step: pick a network segment.  We chose the student residences (not
> because we hate the students, but because we wouldn't break any critical
> research if it all went horribly wrong).  Then we moved all the student
> residences behind a single distribution, our residence distribution.  At
> this point VLAN 1 was still spanned to the residence distribution, so
> NOTHING had broken in doing this; all was working.  Then we created a
> point-to-point link between that distribution and the core.  On it we put
> a /30 for v4 and a /126 for v6.  (Sadly, the gear we are using supports
> neither /31 nor /127.)
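> 
> (To see what those prefix sizes mean in practice, here is a quick sketch
> with Python's ipaddress module; the prefixes below are documentation
> examples, not our real link addresses.)
> 
>     import ipaddress
> 
>     # Documentation prefixes, NOT the real point-to-point addressing.
>     p2p_v4 = ipaddress.ip_network("192.0.2.0/30")
>     p2p_v6 = ipaddress.ip_network("2001:db8::/126")
> 
>     print(list(p2p_v4.hosts()))
>     # [IPv4Address('192.0.2.1'), IPv4Address('192.0.2.2')] - one per end
>     print(list(p2p_v6.hosts()))
>     # three usable addresses (the all-zeros Subnet-Router anycast is
>     # skipped), so both link ends get numbered with one to spare.  A /31
>     # (RFC 3021) or /127 (RFC 6164) would waste nothing, but the gear
>     # would not accept either, hence /30 and /126.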
> 
> Then we enabled OSPF for v4 and OSPFv3 for v6 across the point-to-point.
> There is a slight difference between the topologies at this point: the
> OSPF for v4 was configured to carry ONLY the point-to-points and the
> loopbacks of the distributions, with the rest covered by iBGP, whereas on
> the v6 side the hardware didn't support BGP for IPv6, so we had to carry
> the full v6 routing in OSPFv3.
> 
> Note that at this point we still had zero downtime.
> 
> Then, for each residence behind the distribution, we created a VLAN and
> assigned it an IPv4 segment and an IPv6 segment.  To avoid a mass of
> routes in our routing tables, these segments were all taken out of
> supernets we had dedicated to each distribution, and the supernets were
> what we pushed into the routing table at both the v4 and the v6 level.
> So the VLANs were now created on the distribution, and the routing was
> working.  We fixed up the DHCP for v4 as well, so that was in place and
> ready to go with the correct scopes.  (Note: we are using RA for v6 at
> this point; we haven't gotten around to DHCPv6 yet, so most people are
> still hitting the DNS servers on v4 addresses, since we can't push DNS
> via RA.)
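> 
> (The aggregation idea is easy to sketch with Python's ipaddress module,
> using made-up prefixes: each distribution advertises one v4 and one v6
> aggregate, and the per-residence segments are carved out of those.)
> 
>     import ipaddress
> 
>     # Hypothetical per-distribution aggregates; the real prefixes are
>     # deliberately not shown here.
>     dist_v4 = ipaddress.ip_network("198.18.0.0/19")
>     dist_v6 = ipaddress.ip_network("2001:db8:1000::/56")
> 
>     # One segment per residence VLAN, carved from the aggregates.
>     vlans_v4 = list(dist_v4.subnets(new_prefix=24))  # 32 x /24
>     vlans_v6 = list(dist_v6.subnets(new_prefix=64))  # 256 x /64
> 
>     # The campus routing table only needs the two aggregates: all the
>     # per-VLAN segments collapse back into their distribution supernet.
>     assert list(ipaddress.collapse_addresses(vlans_v4)) == [dist_v4]
>     assert list(ipaddress.collapse_addresses(vlans_v6)) == [dist_v6]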
> 
> Note: Still ZERO downtime to anyone
> 
> Then we took the newly created VLANs on the distribution, trunked them
> down to the edge switches, waited till after hours, and moved the edge
> ports into the correct VLANs (different VLANs for student PCs, wireless
> APs, etc.).  The actual move into the correct VLANs took a single command
> on each switch.  Then we simply forced a port flap on every port as we
> went; the port flap was to force a DHCP reallocation on v4.
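> 
> (The move-and-flap is easy to script.  Below is a purely hypothetical
> sketch using Python's netmiko library against a Cisco IOS style edge
> switch; the hostname, credentials, VLAN number and port names are all
> made up, and this is not necessarily how we did it.)
> 
>     from netmiko import ConnectHandler  # pip install netmiko
> 
>     # Illustrative connection details only.
>     switch = ConnectHandler(device_type="cisco_ios",
>                             host="edge-sw-01.example",
>                             username="admin", password="secret")
> 
>     for port in ("GigabitEthernet1/0/1", "GigabitEthernet1/0/2"):
>         switch.send_config_set([
>             "interface " + port,
>             "switchport access vlan 1081",  # move into the new VLAN
>             "shutdown",                     # flap the port so the client
>             "no shutdown",                  # re-DHCPs on the new segment
>         ])
>     switch.disconnect()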
> 
> Bang - the residences came up on the new topology with v4 and v6.  Total
> downtime to the clients: less than 30 seconds.
> 
> Then we did a rinse-and-repeat job through the various distributions
> (we're still busy with some of them; 6 out of 11 done so far, and probably
> around 300 or 400 edge switches tagged correctly).
> 
> Once we were sure the topology was working, and the IPv6 was working, the
> next step was to enable IPv6 on the proxy servers, so that they could
> fetch via IPv6.  We did this, and instantly saw around 30% of the traffic
> coming in via v6, primarily Google, YouTube, Facebook and Akamai.  Note,
> however, that at this point the clients were still reaching the proxies
> via v4, though the proxies were fetching via v6.  So, next step: put
> quad-A (AAAA) records in place for the proxy servers and for the PAC file
> round robin.  Suddenly, everyone who had a v6 address was talking to the
> proxy servers via v6, irrespective of whether the proxies were fetching
> over v4 or v6.
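> 
> (The client-side effect of the quad-A records is easy to check:
> getaddrinfo is what applications call, and clients following the standard
> address selection rules (RFC 3484) sort the v6 answers first.  The proxy
> hostname below is illustrative.)
> 
>     import socket
> 
>     # Illustrative name; the real proxy names are not shown here.
>     for family, _, _, _, sockaddr in socket.getaddrinfo(
>             "proxy.example.ac.za", 8080, proto=socket.IPPROTO_TCP):
>         label = "IPv6" if family == socket.AF_INET6 else "IPv4"
>         print(label, sockaddr[0])
>     # With the AAAA in place, dual-stack clients get the v6 answers
>     # first and start talking to the proxies over v6 straight away.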
> 
> Suddenly we had a situation where 50% of the incoming traffic was via
> IPv6, and we in fact peaked at well over 100 Mbit/s of IPv6 traffic coming
> in off the Internet today.
> 
> Our next steps, of course, are to migrate the rest of the distributions
> and edge to the new network; in fact, in the next 10 minutes we'll be
> moving another thousand edge ports into this.  Once this is done, we'll
> start looking closely at the server infrastructure and at how we go about
> putting the rest of the production servers both into the new topology and
> onto IPv6.  We expect this to be the most problematic part, since we know
> there are certain services which have issues with IPv6, but we'll work
> around those when we get there.
> 
> In summary: it is entirely possible to take a network with around 15
> thousand wired network points, a few hundred wireless access points and a
> few thousand VoIP phones, and completely redeploy it at both the v4 and
> the v6 level with almost no downtime, if the planning is correct.  The
> traffic levels also prove there is IPv6 content out there, lots of it,
> and we're happy to use it!  It just takes some planning, some forethought,
> and some people willing to work really hard at strange hours to get it
> done.
> 
> For interest's sake, graphs can be seen here:
> 
> http://graphs.tenet.ac.za/iris/browser/browse?username=UFS&selectedmnemonicgroup=TSN81
> 
> The graph marked vl1081 is the IPv4 interface to TENET (the South African
> NREN); the graph marked vl3081 is the IPv6 interface.  We specifically
> asked them to provide v4 and v6 on separate interfaces, as it allows us
> to see the two kinds of traffic individually as well, which was useful.
> 
> Hope this answers some of the questions I have been sent off-list and
> provides hope for those who believe that IPv6 migration is impossible.
> Never forget: we did it on both v4 and v6 *at the same time*, on a live
> network, with no downtime, so if anyone doubts it can be done, we're
> proof that it can.
> 
> Thanks
> 
> Andrew Alston
> Network Consultant
> 
> NOTE: I write the above as a private individual and private consultant,
> and have gained specific permission from my client (the University of the
> Free State) to relay this story.  I would like to say a special thank-you
> to them for allowing me to share this with you as well.
> 
> 
> On 31 Jul 2012, at 3:58 PM, Maye Diop <mayediop at gmail.com> wrote:
> 
> 
> Dear Andrew,
> Congratulations.
> I look forward to more details to share with my universities.
> Best Regards.
> On 31 Jul 2012 at 07:11, "Andrew Alston" <alston.networks at gmail.com>
> wrote:
>> 
>> Hi Guys,
>> 
>> So, while I'll be sending out a lot more data soon, with a lot more
>> information on exactly what we did and how we did it, I thought I would
>> share some news that I for one found rather exciting.
>> 
>> Yesterday evening, starting at around 7pm, one of the South African
>> universities turned up IPv6, in a fairly consistent manner.  Now, I'm
>> not talking about turning up IPv6 on a few servers; I'm talking about
>> integrating it into every part of their network.  By 2:30am this morning
>> it was running on all their proxy servers, all their residence networks,
>> the wireless networks, all the lab PCs and a good portion of the staff
>> network.  The topology used was identical to that of the IPv4, and as
>> the rest of the network is migrated to the new IPv4 topology, v6 will be
>> implemented on everything in dual stack alongside it as well.
>> 
>> Now, here is where things get interesting.  Another network dual stacked
>> is no real news, so let's talk about traffic levels.
>> 
>> The university in question is now running anywhere between 30 and 50
>> percent of its Internet traffic on IPv6, and it's working flawlessly so
>> far.  So flawlessly, in fact, that even with Apple's default
>> implementation of Happy Eyeballs, which tests RTT and defaults to v4 if
>> the v6 latency is higher, the Apple machines we tested running Lion and
>> Mountain Lion were still choosing IPv6 most of the time.
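>> 
>> (For anyone curious, here is a deliberately simplified sketch of that
>> kind of race in Python; real Happy Eyeballs implementations give v6 a
>> small head start rather than starting both at once, and error handling
>> is omitted.  The hostname is just an example.)
>> 
>>     import socket
>>     from concurrent.futures import ThreadPoolExecutor, as_completed
>> 
>>     def try_connect(family, host="www.google.com", port=80):
>>         # Attempt a TCP connect over one address family.
>>         info = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0]
>>         with socket.socket(*info[:3]) as s:
>>             s.settimeout(2.0)
>>             s.connect(info[4])
>>         return family
>> 
>>     # Race v6 against v4; the first successful connect wins.
>>     with ThreadPoolExecutor(max_workers=2) as pool:
>>         futures = [pool.submit(try_connect, fam)
>>                    for fam in (socket.AF_INET6, socket.AF_INET)]
>>         winner = next(as_completed(futures)).result()
>>         print("chose", "IPv6" if winner == socket.AF_INET6 else "IPv4")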
>> 
>> I am not going to say this little rollout has been easy, though; we had
>> to re-architect the entire network (that had to happen anyway, for
>> various reasons), and we added the v6 as part of that project.  It would
>> not have been possible to do that without first getting our hands on
>> another /15 worth of IPv4 space, though, to allow that re-architecting
>> to happen properly.
>> 
>> As I said, though, in the coming days we'll write up what we did in a
>> lot more detail and send through some graphs and other information; I
>> just had to share the fact that at points we're seeing half the traffic
>> of a standard university coming in from the Internet over IPv6!
>> 
>> Thanks
>> 
>> Andrew Alston
>> Network Consultant
>> _______________________________________________
>> afripv6-discuss mailing list
>> afripv6-discuss at afrinic.net
>> https://lists.afrinic.net/mailman/listinfo.cgi/afripv6-discuss
> _______________________________________________
> afripv6-discuss mailing list
> afripv6-discuss at afrinic.net
> https://lists.afrinic.net/mailman/listinfo.cgi/afripv6-discuss
> 
> _______________________________________________
> LACNOG mailing list
> LACNOG at lacnic.net
> https://mail.lacnic.net/mailman/listinfo/lacnog
> Unsubscribe: lacnog-unsubscribe at lacnic.net




More information about the LACNOG mailing list