just2cool wrote:
Don't always pay attention to the "core-->agg-->access" model BS. Agg layers are only required if you absolutely need more bandwidth and the core can't support all that traffic. If you buy a 7K, you can run a collapsed core/agg without an issue; it will be a hell of a lot less complex, will perform a lot better, be easier to manage, and will be more reliable as a result.

dlploh04 wrote:
How many 10GE ports can be used with proper subscription, based on what you have at the Aggregation/Distribution/Access layer, with two 6509s? Six line cards remain per chassis if 2 supervisors and a services module are installed.
I'm thinking about four Nexus 5596UPs at the server aggregation layer doing local L2 switching, with 4x10gig uplinks to the core per 5596, and 4-6 Catalyst 4507s each having one 10gig uplink to each core.
The sup720/sup2T is an advanced services router that can also downgrade itself to a datacenter core role or even an access-layer switch. The problem with it playing a core role is that it is not designed to strongly support 10gig/40gig/100gig, even with the sup2T, no matter what Cisco tells you. Why buy this for the future? What features do you need in the DC core that a modern DC core can't provide? If you have any dream of supporting 10gig server farms, do not buy a 6K as the core; you will inevitably regret your choice. This is an unbiased opinion -- I have roughly 20 sup720s doing other roles in my network today, roles I would never give to my 7K.

Read those two questions again -- do you want to buy a core that is maxed out on day 1, or one that has room for expansion? Don't feel bad though; no one is looking ahead. Everyone thinks that 10gig uplinks on a 10gig access switch are going to work. Newsflash: they won't -- if 2 servers pull data at the same time and hash to the same member in the port channel (happens all the time), you will see tons of output drops. With a 10gig uplink on a 1gig switch, you can have 10 concurrent 1gig conversations at the same time with zero drops -- the quality of that uplink is better for what it is supporting.

dlploh04 wrote:
Seems if we go with the 7009 we will have 5 line-card slots empty, or 4 if we add another 48-port 10gig card -- which would equal the line-card capacity of two fully loaded 6509Es?
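The port-channel hashing argument above can be illustrated with a toy simulation: a port channel assigns each flow to one member link by hashing it, so two bulk 10gig flows can land on the same 10gig member even while other members sit idle. This is a rough sketch with a made-up uniform hash, not the actual NX-OS load-balancing algorithm:

```python
import random

def simulate(num_flows, num_members, trials=100_000):
    """Fraction of trials in which at least two flows hash to the same
    port-channel member (i.e., one member link carries multiple flows)."""
    collisions = 0
    for _ in range(trials):
        members = [random.randrange(num_members) for _ in range(num_flows)]
        if len(set(members)) < num_flows:
            collisions += 1
    return collisions / trials

# Two bulk 10gig flows on a 2x10gig port channel: roughly half the time
# they share a member, and each flow gets ~5gig while the other link idles.
print(simulate(2, 2))
# By contrast, ten concurrent 1gig conversations on a single 10gig uplink
# can't collide this way -- one link has capacity for all ten at once.
```

The point being made: bundling N smaller links is not equivalent to one fat link, because per-flow hashing can concentrate traffic on a single member.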
What am I missing? Maybe I'm not looking at this over a long enough timeframe?
Two 40gig uplinks on a 48-port 10gig switch aren't going to work either. I'd say four 40gig uplinks is the absolute minimum, but that's still a rather crappy 3:1 oversubscription, so I plan to limit the number of servers in each cabinet to make it 2:1. 4x100gig is what I want, but that's not happening any time soon.
The 5596s would have had 8x10gig uplinks and 8x10gig vPC peer links -- essentially a two-switch vPC with 32 10gig access ports, with the ability to add another 48 ports per switch later via the add-on modules. That's 4:1 oversubscription with 32 ports in use and 10:1 with the add-on card.
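The oversubscription ratios quoted in the last two posts are just total access-port bandwidth divided by total uplink bandwidth; a quick back-of-the-envelope check using the port counts and speeds from the posts above:

```python
def oversubscription(access_ports, access_gbps, uplinks, uplink_gbps):
    """Ratio of total access-port bandwidth to total uplink bandwidth."""
    return (access_ports * access_gbps) / (uplinks * uplink_gbps)

# 48-port 10gig switch with 2x40gig uplinks -> 6:1
print(oversubscription(48, 10, 2, 40))   # 6.0
# ...with 4x40gig uplinks -> 3:1
print(oversubscription(48, 10, 4, 40))   # 3.0
# 5596 with 32 10gig access ports and 8x10gig uplinks -> 4:1
print(oversubscription(32, 10, 8, 10))   # 4.0
# ...after the 48-port add-on module (80 access ports total) -> 10:1
print(oversubscription(80, 10, 8, 10))   # 10.0
```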
OK, if I understand you correctly, you are suggesting we remove the Nexus 5500 switches from the design and run our 10gig servers & SANs dual-homed into the two 7009s.
each 7009 will have redundant supervisors, five Fab2 modules, and two power supplies, running vPC with quad-sup ISSU
each 7009 will have a single F2 48-port 10GbE card for downlinks to the Catalyst 4507R+E user access switches, as well as server/SAN/router/switch/firewall/appliance connectivity
each 7009 will have the LAN Enterprise license (L3 protocols)
each 7009 will use GLC-T for dual-homed connectivity from 1gig routers/switches/firewalls/appliances/servers
each 7009 will use SFP-H10GB-ACU10M for dual-homed connectivity from 10gig SANs/servers, or SFP-10G optics if distance requires.
In this scenario we have 6 slots free per chassis for future 10/40/100GE connectivity, not counting any reduction if services modules are introduced for the 7009, or if we find the need to add a VDC later for DMZ/test.
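The free-slot count above works out as simple arithmetic (the 7009 is a 9-slot chassis with two supervisor slots); a back-of-the-envelope sketch, with the filled-chassis figure assuming the hypothetical future cards are also 48-port 10gig:

```python
CHASSIS_SLOTS = 9        # Nexus 7009 is a 9-slot chassis
SUP_SLOTS = 2            # redundant supervisors occupy 2 slots
INSTALLED_LINECARDS = 1  # one F2 48-port 10GbE card per chassis

free_slots = CHASSIS_SLOTS - SUP_SLOTS - INSTALLED_LINECARDS
print(free_slots)          # 6 slots free per chassis

# If those slots were later filled with the same 48-port 10gig cards
# (an assumption -- future 40/100GE cards would change the math):
future_10g_ports = free_slots * 48
print(future_10g_ports)    # 288 additional 10gig ports per chassis
```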
If we ever fill up the chassis, we then introduce Nexus 5596s (likely a newer generation by then), move our servers/SANs to them, and eventually run 40/100GE uplinks to the 7009s?
That's pretty easy to do, huh? If everything is dual-homed, move the ports that are on 7009-1 to 5596-1, then repeat from 7009-2 to 5596-2, with little to no downtime?