dlploh04
New Member
Posts: 15
Joined: Wed Apr 18, 2012 7:25 pm

Re: Cisco Catalyst 6509E vs Nexus 7009

Mon Apr 30, 2012 5:13 pm

just2cool wrote:
dlploh04 wrote: How many 10GE ports can be used with sensible oversubscription, based on what you have in the Aggregation / Distribution / Access layers, with two 6509s? Six line-card slots remain per chassis if two supervisors and a services module are installed.

I'm thinking about four Nexus 5596UPs at the server aggregation layer doing local L2 switching, with four 10G uplinks to the core per 5596, and 4-6 Catalyst 4507s each having one 10G uplink to each core.
Don't always pay attention to the "core-->agg-->access" model bs. Agg layers are only required if you absolutely need more bandwidth and the core can't support all that traffic. If you buy a 7K, you can run a collapsed core/agg without an issue, it will be a hell of a lot less complex, will perform a lot better, be easier to manage, and will be more reliable as a result of all this.

The sup720/sup2T is an advanced services router that can also downgrade itself to a datacenter core role or even an access-layer switch. The problem with it playing a core role is that it is not designed to strongly support 10gig/40gig/100gig, even with the sup2T, no matter what Cisco tells you. Why bother buying this for the future? What features do you need in the DC core that a modern DC core can't provide? If you have any dream of supporting 10gig server farms, do not buy a 6K as the core -- you will inevitably regret your choice. This is an unbiased opinion -- I have roughly 20 sup720s doing other roles in my network today that I would never let my 7K do.

dlploh04 wrote: It seems if we go with the 7009 we will have five line-card slots empty, or four if we add another 48-port 10G card -- which would equal adding two fully loaded 6509Es' worth of line-card capacity?

What am I missing? Maybe I am not looking at this with a long enough timeframe?
Read those two questions again -- you want to buy a core maxed out on day 1 vs. one that has room for expansion? Don't feel bad, though; no one is looking ahead. Everyone thinks that 10gig uplinks on a 10gig access switch are going to work. Newsflash: they won't -- if two servers pull data at the same time and hash to the same member in the port channel (happens all the time), you will have tons of output drops. With a 10gig uplink on a 1gig switch, you can have ten concurrent 1gig conversations at the same time with zero drops -- the quality of this uplink is better for what it is supporting.
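The port-channel hashing point is easy to sanity-check with a quick simulation. This sketch (mine, not from the thread) assumes the hash spreads concurrent flows uniformly and independently across bundle members -- real port-channel hashes key on MAC/IP/port tuples, so a given flow stays pinned to one member for its lifetime, which is exactly why two colliding elephant flows keep fighting over the same link:

```python
import random

def collision_prob(flows, members, trials=100_000):
    """Estimate the probability that at least two concurrent flows
    hash onto the same port-channel member (and so must share that
    single member's bandwidth, causing output drops)."""
    hits = 0
    for _ in range(trials):
        # Each flow is pinned to one randomly chosen member.
        picks = [random.randrange(members) for _ in range(flows)]
        if len(set(picks)) < flows:  # any two flows share a member?
            hits += 1
    return hits / trials

# Two servers pulling hard over a 2 x 10G bundle land on the same
# member about half the time -- each then gets ~5G and the member
# queue overflows.
print(collision_prob(flows=2, members=2))  # ~0.5
```

Adding members helps, but slowly: even a 4-member bundle still collides 25% of the time for two flows, which is why per-flow uplink speed matters more than aggregate uplink speed.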

Two 40gig uplinks on a 48-port 10gig switch aren't going to work either. I'd say four 40gig uplinks is the absolute minimum, but that's still a rather crappy 3:1 oversubscription, so I plan to limit the number of servers in each cabinet to make it 2:1. 4x100gig is what I want, but that's not happening any time soon.
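The oversubscription arithmetic above is just total downlink capacity over total uplink capacity; a small helper (illustrative, not from the thread) makes the ratios explicit:

```python
from fractions import Fraction

def oversubscription(access_ports, access_gbps, uplinks, uplink_gbps):
    """Downlink capacity divided by uplink capacity, reduced to N:M form."""
    ratio = Fraction(access_ports * access_gbps, uplinks * uplink_gbps)
    return f"{ratio.numerator}:{ratio.denominator}"

print(oversubscription(48, 10, 2, 40))  # 6:1 -- two 40G uplinks, 48x10G down
print(oversubscription(48, 10, 4, 40))  # 3:1 -- four 40G uplinks
print(oversubscription(32, 10, 4, 40))  # 2:1 -- cap the cabinet at 32 servers
```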



The 5596s would have had 8x10G uplinks and 8x10G vPC links -- essentially a two-switch vPC with 32 10G access ports per switch, plus the ability to add another 48 ports per switch later via the add-on modules. That's 4:1 oversubscription with 32 ports in use, and 10:1 with the add-on card later.

OK, if I understand you correctly, you are suggesting removing the Nexus 5500 switches from the design and running our 10G servers and SANs dual-homed into the two 7009s.

Two 7009s:
each 7009 will have redundant supervisors, five Fab2 modules, and two power supplies, running in quad-sup (ISSU) vPC
each 7009 will have a single F2 48-port 10GbE card for downlinks to the Catalyst 4507R+E user access switches, as well as server/SAN/router/switch/firewall/appliance connectivity
each 7009 will have the LAN Enterprise license (L3 protocols)
each 7009 will use GLC-T for dual-homed connectivity from 1G routers/switches/firewalls/appliances/servers
each 7009 will use SFP-H10GB-ACU10M for dual-homed connectivity from 10G SANs/servers, or SFP-10G optics if distance requires

In this scenario we have 6 slots free for 10/40/100GE per chassis for future connectivity not counting any reduction if they introduce services modules for the 7009 or if we find the need to add a VDC later for DMZ/test.
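For reference, the quad-sup vPC pair described above would be configured along these lines on each 7009. This is a minimal sketch only -- the domain ID, port-channel numbers, and keepalive addresses are placeholders, not values from this design:

```
! Minimal NX-OS vPC sketch for one 7009 (IDs and addresses are examples)
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

! Peer link between the two 7009s
interface port-channel1
  switchport mode trunk
  vpc peer-link

! A dual-homed downstream device (e.g. a 4507 or a 10G server)
interface port-channel20
  switchport mode trunk
  vpc 20
```

The mirror-image config goes on the second 7009, with source and destination keepalive addresses swapped.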

If we ever fill up this chassis, we then introduce Nexus 5596s and move our servers/SAN to them (likely a newer generation by then), with eventual 40/100GE uplinks to the 7009?

That's pretty easy to do, huh? If dual-homed, move the ports that are on 7009-1 to 5596-1, then repeat with 7009-2 to 5596-2, with little to no downtime?

Regards,
Matt

Vito_Corleone
Moderator
Posts: 9850
Joined: Mon Apr 07, 2008 10:38 am
Certs: CCNP RS, CCNP DC, CCDP, CCIP

Re: Cisco Catalyst 6509E vs Nexus 7009

Mon Apr 30, 2012 6:20 pm

I don't see where he recommends connecting hosts directly to the 7K. That's quite expensive and really a waste, IMO.
http://blog.alwaysthenetwork.com

dlploh04

Re: Cisco Catalyst 6509E vs Nexus 7009

Mon Apr 30, 2012 6:48 pm

Vito_Corleone wrote:I don't see where he recommends connecting hosts directly to the 7K. That's quite expensive and really a waste, IMO.


just2cool wrote: Don't always pay attention to the "core-->agg-->access" model bs. Agg layers are only required if you absolutely need more bandwidth and the core can't support all that traffic. If you buy a 7K, you can run a collapsed core/agg without an issue, it will be a hell of a lot less complex, will perform a lot better, be easier to manage, and will be more reliable as a result of all this.


Unless I'm misunderstanding, he said to combine core and aggregation (collapsed core/agg), so the 7009s would be core and server/SAN aggregation together -- with just two 7009s?

Vito_Corleone

Re: Cisco Catalyst 6509E vs Nexus 7009

Mon Apr 30, 2012 6:55 pm

He means network aggregation, as in a distribution layer. He's saying you don't need a distro layer in between your core (7K) and access (5K). At least I'm pretty sure that's what he's saying. Either way, don't connect your hosts directly to your 7K. At this point you can't connect FC/FCoE directly to the 7K (FCoE will be supported when the Sup2 comes out, plus a license; FC isn't roadmapped on the 7K at all, AFAIK).

We seem to be going in circles a bit, but, again, I would go 7 > 5 > 2 with your storage on the 5Ks.
http://blog.alwaysthenetwork.com

dlploh04

Re: Cisco Catalyst 6509E vs Nexus 7009

Mon Apr 30, 2012 9:42 pm

Vito_Corleone wrote:He means network aggregation, as in a distribution layer. He's saying you don't need a distro layer in between your core (7K) and access (5K). At least I'm pretty sure that's what he's saying. Either way, don't connect your hosts directly to your 7K. At this point you can't connect FC/FCoE directly to the 7K (FCoE is supported when the sup2 comes out + license. FC isn't roadmapped on the 7K at all, AFAIK).

We seem to be going in circles a bit, but, again, I would go 7 > 5 > 2 with your storage on the 5Ks.


Oh, I agree -- we definitely do not need a distribution layer (3-tier architecture)! Two layers is great.

My apologies for getting back to square one.
We currently do not have any FC in our environment, and at the end of March we purchased $240k in 10GbE iSCSI kit from EqualLogic. The whole unified fabric story with LAN/FC SAN and FC/FCoE doesn't seem very valuable or applicable to us :/. Maybe that is a positive?

Vito_Corleone

Re: Cisco Catalyst 6509E vs Nexus 7009

Mon Apr 30, 2012 9:47 pm

If you're doing iSCSI, you can attach your storage wherever the hell you want. I'd still do 7 > 5 > 2 and storage on the 5Ks most likely.
http://blog.alwaysthenetwork.com

dlploh04

Re: Cisco Catalyst 6509E vs Nexus 7009

Mon Apr 30, 2012 10:23 pm

Vito_Corleone wrote:If you're doing iSCSI, you can attach your storage wherever the hell you want. I'd still do 7 > 5 > 2 and storage on the 5Ks most likely.


Vito,

I understand and it does make sense to have an access layer off the core just like we do for workstations.

Is this what you are suggesting, just to confirm?

7K core -> 5K 10G servers / 10G iSCSI SAN -> 2K 100/1000 servers/iDRAC/iLO

7K core -> routers/switches/firewalls/appliances

7K core -> 4507R+E workstation switches with Sup7L-E (all UPOE)

All of the above in a single VDC? We do not have any VRFs in use right now.

We have a few IP Base 4948Es with 10G uplinks that we use for 1G servers and iDRAC/iLO now. The 2K FEXes would replace these to reduce extra management?

Regards,
Matt
