First Look–Dell PowerEdge M I/O Aggregator

[updated 10.11.2012] In many data centers, rack servers offer organizations the ability to keep server and networking responsibilities separated. However, when blade servers are introduced into an environment, the server and network admins' roles start to blur. Should the server admin have to learn networking, or should the networking admin have to learn blade servers? Some blade server environments use pass-thru modules instead of network I/O modules. Pass-thru modules are easy to use and pass the networking upstream to the switch via a 1-to-1 port connection. This approach allows the networking admin to maintain ownership, but there are no cable savings within the blade infrastructure, since each server port requires a connection to the external top-of-rack switch. Network I/O modules used within the blade chassis reduce external cabling and provide local switching, enabling blade servers to communicate easily within the chassis. With these modules, however, the admins are forced to learn each other's roles, which is a great practice but adds complexity to blade server implementations. What if we could take the ease of a pass-thru module and combine it with the ability to communicate locally? Now you can, with the introduction of the Dell PowerEdge M I/O Aggregator.

Dell PowerEdge M I/O Aggregator Attributes

The PowerEdge M I/O Aggregator comes out of the box with 40 x 10GbE ports: 32 internal ports and 8 external ports. It also has the option of extending its external capabilities through 2 optional FlexIO modules. The FlexIO module options include (a quick port-count sketch follows this list):

* 2-port QSFP+ module in 4x10GbE mode
* 4-port SFP+ 10GbE module
* 4-port 10GBASE-T 10GbE copper module (1/10Gb; only 1 module per IOA is supported)
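
To make the port math concrete, here is a minimal Python sketch of how the external 10GbE count adds up for a given FlexIO configuration. The helper names are my own (this is simple arithmetic, not a Dell tool), and the 16-port addressable cap it assumes comes from the discussion in the comments below.

```python
# Hypothetical helper to tally external 10GbE ports on the M I/O Aggregator.
# Port counts per module type are taken from the list above; the 16-port
# addressable cap reflects the uplink-LAG limit discussed in the comments below.

BASE_QSFP_PLUS = 2          # integrated QSFP+ ports, each breaking out to 4 x 10GbE
PORTS_PER_FLEXIO = {
    "2-port QSFP+ (4x10GbE breakout)": 2 * 4,
    "4-port SFP+": 4,
    "4-port 10GBASE-T": 4,
}
ADDRESSABLE_LIMIT = 16      # max external 10GbE ports usable at one time

def external_10gbe_ports(flexio_modules):
    """Return (physical, usable) external 10GbE port counts for a given config."""
    physical = BASE_QSFP_PLUS * 4 + sum(PORTS_PER_FLEXIO[m] for m in flexio_modules)
    return physical, min(physical, ADDRESSABLE_LIMIT)

if __name__ == "__main__":
    config = ["2-port QSFP+ (4x10GbE breakout)"] * 2   # both FlexIO bays populated
    physical, usable = external_10gbe_ports(config)
    print(f"physical: {physical}, usable: {usable}")    # physical: 24, usable: 16
```

With both FlexIO bays populated with QSFP+ modules you end up with 24 physical 10GbE ports but only 16 usable ones, which is exactly the point debated in the comments.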

Performance Details

The PowerEdge M I/O Aggregator is fully IEEE DCB compliant for converged I/O, supporting iSCSI, NAS, converged Ethernet, and Fibre Channel-based storage applications. Here are the performance details for this module (a quick sanity check on these numbers follows the list):

* MAC addresses: 128K
* Switch fabric capacity: 1.28 Tbps (full-duplex)
* Forwarding capacity: 960 Mpps
* Link aggregation: up to 16 members per group, 128 LAG groups
* Queues per port: 4
* VLANs: 4094
* Line-rate Layer 2 switching: all protocols, including IPv4
* Packet buffer memory: 9MB
* CPU memory: 2GB
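
As a quick sanity check on those numbers (my own back-of-the-envelope arithmetic, assuming the usual counting conventions), the 1.28 Tbps full-duplex fabric figure and the 960 Mpps forwarding figure line up once you account for standard Ethernet framing overhead at the minimum 64-byte frame size:

```python
# Rough sanity check (my own arithmetic, assuming the usual conventions):
# "1.28 Tbps full-duplex" counts both directions, so ~640 Gbps each way,
# and the minimum 64-byte frame occupies 84 bytes on the wire once the
# preamble/SFD (8 bytes) and inter-frame gap (12 bytes) are included.

fabric_bps_full_duplex = 1.28e12
per_direction_bps = fabric_bps_full_duplex / 2           # 640 Gbps

min_frame_bits = (64 + 8 + 12) * 8                        # 672 bits on the wire

max_pps = per_direction_bps / min_frame_bits
print(f"{max_pps / 1e6:.0f} Mpps")                        # ~952 Mpps, in line with the quoted 960 Mpps
```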

[updated info 10.11.2012] One of the best features of the Dell PowerEdge M I/O Aggregator is that, out of the box, it comes enabled with instant “plug and play” connectivity to Dell and multi-vendor networks. The PowerEdge M I/O Aggregator ships with all VLANs enabled on all ports, with the option to set specific VLANs. It is also designed to be “no touch,” with iSCSI DCB and FCoE settings downloaded from the top-of-rack switch through the DCBx protocol. The Dell PowerEdge M I/O Aggregator supports DCB (the PFC, ETS, and DCBx protocols), converged iSCSI with EqualLogic and Compellent (it supports the iSCSI TLV), and FCoE transit to the top-of-rack switch via FIP Snooping Bridge.
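
To illustrate what “no touch” means in practice, here is a small conceptual Python sketch (the class and field names are mine, and this is not Dell’s implementation): a DCBx peer operating in “willing” mode simply adopts the PFC, ETS, and application-priority settings advertised by the upstream top-of-rack switch, which is essentially what the IOA does out of the box.

```python
from dataclasses import dataclass, field

@dataclass
class DcbSettings:
    """Illustrative subset of what DCBx TLVs carry; field names are mine."""
    pfc_lossless_priorities: tuple = ()                          # priorities with PFC enabled
    ets_bandwidth_percent: dict = field(default_factory=dict)    # traffic class -> % bandwidth (ETS)
    app_priorities: dict = field(default_factory=dict)           # application -> priority (e.g. iSCSI TLV)

@dataclass
class Switch:
    name: str
    willing: bool                      # a "willing" DCBx peer accepts the peer's recommendation
    settings: DcbSettings = field(default_factory=DcbSettings)

    def receive_dcbx(self, advertised: DcbSettings):
        if self.willing:
            self.settings = advertised   # no local configuration required
        # a non-willing peer would keep its locally configured settings

# The top-of-rack switch advertises its converged-I/O policy...
tor_policy = DcbSettings(
    pfc_lossless_priorities=(3, 4),                      # e.g. FCoE on priority 3, iSCSI on 4
    ets_bandwidth_percent={"lan": 50, "san": 50},
    app_priorities={"fcoe": 3, "iscsi": 4},
)

# ...and the IOA, being "willing" out of the box, picks it up with no touch.
ioa = Switch(name="M I/O Aggregator", willing=True)
ioa.receive_dcbx(tor_policy)
print(ioa.settings.app_priorities)                       # {'fcoe': 3, 'iscsi': 4}
```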

Since this is a “first look” this is all I can reveal at this time.  Stay tuned – more updates to come shortly.

Kevin Houston is the founder and Editor-in-Chief of BladesMadeSimple.com. He has over 15 years of experience in the x86 server marketplace. Since 1997 Kevin has worked at several resellers in the Atlanta area, and has a vast array of competitive x86 server knowledge and certifications as well as an in-depth understanding of VMware and Citrix virtualization. Kevin works for Dell as a Server Sales Engineer covering the Global 500 market.

32 thoughts on “First Look–Dell PowerEdge M I/O Aggregator”

  1. Pingback: Kevin Houston

  2. Pingback: Daniel Bowers

  3. Pingback: Andreas Erson

  4. Pingback: Kevin Houston

  5. Pingback: Kevin Houston

  6. Pingback: Dell Cares PRO

  7. Pingback: Dell Networking

  8. Pingback: SETAKA Takao

  9. Andreas Erson

    Just heard about a “limitation” that it supports up to 16x10GbE external ports using the breakout cables. Physically up to 24x10GbE external ports are possible with the two integrated QSFP+ and two dual-port QSFP+ modules using QSFP+-to-4xSFP+ breakout cables.

  10. Pingback: Kevin Houston

  11. Kevin Houston

The #Dell PowerEdge M I/O Aggregator has 2 integrated QSFP+ ports that break out into 4 x 10Gb ports (each). The additional 2 x FlexIO bays can take 2 x QSFP+ modules, which each break out into 4 x 10Gb ports. Overall that provides 16 x 10Gb ports, not 24. As always, thanks for the comment!

  12. Andreas Erson

    No, 2xQSFP+ integrated plus 2xQSFP+ in each FlexIO bay is a total of six QSFP+. 6×4 is still 24 in my book. But that configuration is not supported.

    This is what I got from the PG team regarding the “Up to 16 external 10GbE ports (4 QSFP+ ports with breakout cables)”-wording in the I/O:

    “Max 16 external 10GbE ports is correct on the IOA. … . It’s counterintuitive but while you can physically add 2 QSFP modules for a total of 6 QSFP ports and what we seem to be 24x10GbE ports using breakout cables… only max 16 10GbE ports are addressable regardless of which FlexIOs you choose.”

  13. Kevin Houston

I apologize – your math was correct. For some reason, I was calculating 1 port per FlexIO module, not 2. Therefore, if you had 2 x FlexIO modules with QSFP ports, you would have 6 total per PowerEdge M I/O Aggregator. I’m not sure if there is an external limit of 16 ports or not, but realistically, more than 16 wouldn’t be relevant, because then it turns into a pass-through module. Anyway – sorry for the miscalculation. I’ll look into this and let you know.

  14. Pingback: Angelo Luciani

  15. Pingback: Kevin Houston

  16. Pingback: David Rees

  17. Pingback: Kevin Houston

  18. Pingback: Kevin Houston

  19. Pingback: Shawn Cannon

  20. Pingback: Ed Swindelles

  21. Andreas Erson

I know there is a limit of 16x10GbE ports since that was the answer the PG team gave me when I wondered whether the wording in the I/O guide about “Up to 16 external 10GbE ports..” was a miscalculation. Their answer, which I quoted above, explained that it’s a limitation regarding addressability of more than 16x10GbE external ports. Feel free to verify this.

    Since the I/O aggregator has 32 internal ports it can never be a simple pass-through. It should also be a cheaper option than the Force10 MXL when 4x10GbE mezzanines are released.

  22. Pingback: Pradeep Mascarenhas

  23. Pingback: Arseny Chernov

  24. Pingback: Pradeep Mascarenhas

  25. Pingback: Paul Arts

  26. Pingback: Kevin Houston

  27. Pingback: Matthew Klaus

  28. Pingback: The Data Center Journal Storage Blade Array: HP vs Dell

  29. Pingback: Technology Short Take #26 - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers

  30. Pingback: Technology Short Take #26 | Strategic HRStrategic HR

  31. Kevin Houston

Andreas – realized I never got back to you on why the IO Aggregator supports 16 x 10GbE. The reason is that the I/O Aggregator supports a single uplink LAG, and that LAG only supports 16 x 10GbE uplinks, therefore the marketing material shows 16 x 10GbE uplinks as the “max”. Hope this makes sense.

  32. Pingback: 5 Reasons You May NOT Want Blade Servers – Making blade servers simple
