Friday, October 29, 2010
On Thursday, October 28, after the U.S. markets closed, the Wall Street Journal found it curious that CommScope's closing share price was $31.64, 14 cents higher than the price private-equity firm The Carlyle Group has agreed to pay to take the company private.
That the closing price exceeded the agreed-upon acquisition price made WSJ reporter Shira Ovide wonder whether there is another bidder for CommScope. Ovide pointed out that the agreement between Carlyle and CommScope permits the cabling-systems market-share leader to hear offers from other bidders until December 5. If CommScope accepts such a bid, it will owe Carlyle a $43.3 million fee, Ovide reports.
Monday, October 25, 2010
CommScope in talks for $3B buyout to go private
Several media sources, including RTT News, are reporting that CommScope has confirmed speculation that it is in discussions with private-equity firm The Carlyle Group concerning a deal to take the company private for approximately $3 billion. The deal being discussed is for $31.50 per share.
CommScope had sales of $3.02 billion in fiscal 2009 and is set to announce third-quarter 2010 results on November 1.
While confirming that the discussions are taking place, CommScope said there is no guarantee that any deal will be reached, and that it plans to offer no further comment until doing so is appropriate.
Thursday, October 21, 2010
Guest Blog: Converging, colliding and collapsing IO standards and interconnect
by Ed Cady
Strategic marketing director
Siemon Interconnect Solutions
As with organic life forms, the nodes and links of the World Wide Web seem to have a varying rhythmic process of differentiation and then integration. At certain inflection points in the process, one can see an intended integration of effort cause some differential effects, which in turn meld together after another natural cycle. More than any other IO interface, Ethernet has expanded well beyond the original LAN section of the web, which it has dominated for many years since overcoming the rival Token Ring and 100VG-AnyLAN interfaces.
Responding to Ethernet's expansion and absorption of rivals, champions and evangelists of other IO interfaces like Fibre Channel have created newer standard interface versions using a convergent tunneling method that preserves the native protocol but uses the Ethernet physical transport system. Think of protocols tunneling through any other faster physical transport layer as a packet spaceship traveling through wormholes in space, from one data center galaxy to another.
Recently the Ethernet community has evolved its technology to converge LAN with SAN into one physical network. This was partially accomplished with the implementation of the recent Ethernet standard 10GBaseCR. This two-pair, serial, single-lane link was expedited without a detailed IEEE connector specification clause, but achieved compliance and interoperability through the Ethernet Alliance plugfest process.
This has caused the Fibre Channel community to create a Fibre Channel over Ethernet (FCoE) specification that helps preserve the native protocol and its installed base. The InfiniBand community has similarly created its RoCE, or RDMA over Converged Ethernet, standard specification. RDMA is Remote Direct Memory Access, a low-latency, low-power technology used with the InfiniBand architecture. So now these four interfaces, 10GBaseCR, 10GFCoE, 10GFC and 10GRoCE, are implemented using the same SFP+ single-lane passive copper cabling. 10G SFP+ usage has grown dramatically because active copper and active optical SFP+ have enabled additional market segments and longer-length applications like digital signage and AV systems.
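To make the tunneling idea concrete, here is a minimal Python sketch of the pattern behind FCoE- and RoCE-style convergence: the native-protocol frame rides inside an ordinary Ethernet frame tagged with that protocol's registered EtherType. The frame layout is deliberately simplified (real FCoE adds its own header, padding and trailer around the FC frame), and the MAC addresses and payload shown are made up for illustration.

```python
import struct

# Registered EtherType values for two tunneled protocols.
ETHERTYPE_FCOE = 0x8906   # Fibre Channel over Ethernet
ETHERTYPE_ROCE = 0x8915   # RDMA over Converged Ethernet (RoCE)

def encapsulate(dst_mac: bytes, src_mac: bytes, ethertype: int,
                native_frame: bytes) -> bytes:
    """Wrap a native-protocol frame in a simplified Ethernet frame.

    Only the 14-byte Ethernet header is modeled here; the point is that the
    tunneled protocol is identified purely by its EtherType.
    """
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + native_frame

# Hypothetical example: tunnel an opaque Fibre Channel frame over Ethernet.
fc_frame = b"\x00" * 64                                  # placeholder FC payload
wire_frame = encapsulate(b"\x01\x10\x18\x00\x00\x01",    # example destination MAC
                         b"\x00\x1b\x21\xaa\xbb\xcc",    # example source MAC
                         ETHERTYPE_FCOE, fc_frame)
print(len(wire_frame), "bytes on the wire")
```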
Besides Fibre Channel, other storage interfaces like NAS, iSCSI, iSATA and ATAoE are tunneled over Ethernet 10GBaseCR. These storage interfaces are also tunneled over Ethernet 10GBaseT using Category 6a and Category 7a cabling. There are open and closed consortia de facto standards that run these multiple protocols on so-called collapsed architectural fabrics like the Unified Computing System, which also uses SFP+ cabling.
Besides UCS, there are several other de facto standard unified-style networks that also use the SFP+, but with different encryption in the memory mapping of the embedded plug EEPROMs. One wonders if all of these IO interfaces will expand to use the newly developing 25/26/28-Gbit/sec QSFP++ module and cabling system, which is being standardized through the SFF-8661/2/3 specifications. See www.sffcommittee.org, www.t11.org and www.fibrechannel.org to learn more, or contact me.
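Those embedded plug memories are what a switch or NIC reads to decide how to treat a given module or cable. As a rough illustration, here is a hedged Python sketch that decodes a few identity fields from an SFP+ EEPROM page as I read the SFF-8472 layout; the byte offsets reflect my understanding of that spec, and the sample page is entirely fabricated.

```python
def decode_sfp_eeprom(page_a0: bytes) -> dict:
    """Decode a few identity fields from an SFP+ EEPROM page (SFF-8472, address A0h).

    Real modules and direct-attach plugs vary, so treat this as an
    illustrative sketch rather than a complete parser.
    """
    return {
        "identifier": page_a0[0],                  # byte 0: 0x03 indicates SFP/SFP+
        "nominal_rate_mbps": page_a0[12] * 100,    # byte 12: rate in units of 100 Mbit/sec
        "copper_length_m": page_a0[18],            # byte 18: passive-copper reach in meters
        "vendor_name": page_a0[20:36].decode("ascii", "replace").strip(),  # bytes 20-35
    }

# Fabricated 256-byte page for illustration only.
fake_page = bytearray(256)
fake_page[0] = 0x03                       # SFP/SFP+
fake_page[12] = 103                       # ~10.3 Gbit/sec
fake_page[18] = 3                         # 3-meter passive copper assembly
fake_page[20:36] = b"EXAMPLE VENDOR  "    # 16-byte, space-padded vendor name
print(decode_sfp_eeprom(bytes(fake_page)))
```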
Ethernet 40GBaseCR4, 40GFCoE and InfiniBand 40G QDR standards use the same four-lane QSFP+ SFF-8436 connector, module and cabling. The SAS storage interface uses QSFP+ AOCs (active optical cables) for longer-reach applications, as does the CameraLink-2 video networking standard. Will these various interface communities stay converged on the new SFF-8661 QSFP++ connector system for next-generation 100GBaseCR4, 100GFCoE, 100GFC SAN and InfiniBand 100G EDR?
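For a sense of where these lane-rate numbers come from, here is a small back-of-the-envelope Python check (my own arithmetic, not drawn from the cited specs) showing how four-lane links with 64b/66b line coding map signaling rates to payload rates.

```python
def payload_gbps(lane_baud_gbd: float, lanes: int = 4, coding: float = 64 / 66) -> float:
    """Aggregate payload rate for a multi-lane link with the given line-coding efficiency."""
    return lane_baud_gbd * lanes * coding

# Four lanes at 10.3125 GBd with 64b/66b coding -> a 40G link
print(payload_gbps(10.3125))   # 40.0
# Four lanes at 25.78125 GBd -> a 100G link, the class the 25/26/28G modules target
print(payload_gbps(25.78125))  # 100.0
```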
There are many other convergent IO interfaces like FCoIB (Fibre Channel over InfiniBand), UAS (USB Attached SCSI), UoSATA (USB over SATA) and of course SoU (SATA over USB), which is a 3G SATA over 4.8G USB implementation. HDMI, the other very-high-volume IO standard, has been watching Ethernet and recently released its revision-1.4 spec, which runs 1G Ethernet through the new microHDMI cabling system. Meanwhile, HDMI and DVI video IO signaling is run through Ethernet category cabling systems, as are HDBaseT signaling and the HomePlug Alliance's cabling adapters. So one could say that shielded Cat 6a/7a, SFP+ and QSFP+ are the three primary multi-protocol interconnects for now and for several years to come.
Lo, looming ahead is a potential round of interface collisions, convergence and collapsed interconnect. It starts at the desktop level, with DisplayPort, USB, SATA, HDMI and PCI-E converged and transformed into the new multi-protocol LightPeak optical-only single-fiber interface. It is rumored that LightPeak would replace short-reach SAS as well. It seems there are 10G and 28G versions of LightPeak.
At the 25/26/28/40G-per-lane data rates, electrical signaling has very limited copper-cable reach, on the order of 1 to 3 meters. At those lengths, active optical cabling seems to have an equal portion of the forecasted TAM volume versus copper. So it is no wonder that another generation beyond is looming: a new optical interface supported by developing chips that currently work in labs at 50G per lane and support distances up to 2 km. Its next generation, at 100G per lane, is being co-developed. This optical technology interface goes beyond the LightPeak interface and could supplant even Ethernet, InfiniBand, Fibre Channel and other IO interfaces within new data centers within five years.
Coinciding with this new optical interface's emergence is a very new generation of internal active optical cables that connect printed circuit boards or nascent fiber circuit boards to other boards/modules and to optical backplanes. These internal AOCs are also being driven by the continual evolution toward port densification, as they connect to the bulkhead with MPO-type connectors and achieve double the port density of either SFP+ or QSFP+ AOC connector/cabling ports. But a large part of the market and installed systems will stay longer with the various small-form-factor pluggable media types, leading to many different hybrid cables like QSFP+ SFF-8436 to QSFP++ SFF-8661, and hydra cables like three SFP++ SFF-xxxx (number to be assigned) cable legs going into one QSFP++ SFF-8661. See www.sffcommittee.org.
These internal AOCs and other new CMOS photonic chips may evolve beyond using QuickPath, HyperTransport and other chip-to-chip IO interfaces. As the highest-performance, largest-scale data center end-users look at using many thousands of mobile-phone processor chips like Intel's Atom, the ARM chip or SmoothStone's new chip to save on power consumption and cooling, they are considering a further collapsed optical interface and interconnect that absorbs the LightPeak interface.
You can have fun trying to overlay all these IO roadmaps into one chart. In a parallel universe, voice communication interfaces are melding into Ethernet. Consider that telephony IO interfaces like SS7, TDMS, Utopia, Frame Relay, ATM, PBT and MPLS are merging into a VoIP-and-Ethernet network. Even IB-WAN, EoS (Ethernet over SONET), SONET and SDH are being replaced by enhanced Carrier Ethernet. The same is true for the old six to eight industrial IO interfaces converging into Industrial Ethernet cabling. Within commercial infrastructures, various IO interfaces are also quickly melding into a ConvergeIT interconnect network.
Just think: if these dozens of interfaces converge into one optical interface in the fuzzy future, we will have many fewer acronyms to keep track of! But will this nascent Camelot interface be called something cryptic like the existing IPoDWDM (Internet Protocol over Dense Wavelength Division Multiplexing) interface?
In the past ten years, the SFF-8470, a primarily dedicated twinaxial copper cabling system, was selected and/or implemented in many industry and de facto standards like InfiniBand, Ethernet, SAS, RapidIO and Myrinet, and in very many separate NICs and homogeneous switch boxes. Then heterogeneous switches and NICs appeared, with the common SFF-8470 cabling handling the different interfaces in one box or rack. Then there were high-port-count multi-protocol chips. Now the protocols run through one slimmer QSFP+ or SFP+ cable assembly using one transport layer. In some SSD (solid-state drive) devices the FC and SAS, or SATA and USB, interfaces are integrated into one chip. I have heard that many wireless-interface people are working on their Camelot next-generation convergent interface as well.
How fast will new data center power and cooling requirements, along with disruptive CMOS photonic technologies, drive further convergence and wide market acceptance? And what is your view or vision of interface and interconnect convergence over the coming decade?
Ed Cady is senior marketing director with Siemon Interconnect Solutions. You can reach him at Ed_Cady@siemon.com or 503-359-4556.
Tuesday, October 12, 2010
Tiger Woods and Intelligent Decisions
When was the last time someone could use the phrases "Tiger Woods" and "Intelligent Decisions" in the same sentence, and keep a straight face doing it?
Perhaps for the first time since last Thanksgiving, it's possible. The website thenewinternet is reporting that the IT solutions firm Intelligent Decisions has donated equipment, including network cabling, to the Tiger Woods Foundation. According to thenewinternet, the equipment will be used to open two learning centers for underprivileged youth in Washington, D.C.
Headquartered in Ashburn, VA, Intelligent Decisions provides systems, products and solutions to government and civilian organizations.
So now "Tiger Woods" and "Intelligent Decisions" can be used together. Who knows what's next? Maybe the company will donate to the Brett Favre Fourward Foundation.