
You may want to know how to clean a fiber optic connector; the right cleaning method can extend the connector's service life. Here are the questions you need to know about fiber optic cleaning.

1. Why Is Fiber Cleaning Necessary?
Dust is invisible to the naked eye and very easily attaches to the fiber connector. During routine maintenance, the fiber optic connector can also be contaminated with oil, powder, and other contaminants. These contaminants may cause problems such as dirty fiber end faces, aging connectors, degraded cable quality, and blocked network links. Therefore, it is necessary to clean the fiber connector regularly and take dust-proof measures.
2. Parts That Need to Be Cleaned
The parts of a fiber connector that need to be cleaned are the fiber end face, the fiber tip, and the optical module port that connects the fibers. These parts collect dust and contaminants most easily and are also the most prone to problems caused by them. Therefore, keeping the fiber connector clean means thoroughly cleaning these three parts.

3. Ways to Clean the Optical Fiber Connector
I will cover optical fiber connector cleaning methods in three parts: ways to clean the fiber end face, ways to clean the fiber tip, and ways to clean the optical module port that connects the fiber.

A. Ways to clean the fiber end face
Fiber end faces are generally exposed to the air, so dust and oil easily stick to them. Because they are exposed, they are also convenient and quick to clean. There are three ways to clean the fiber end face.

1) Clean with dust-free cotton swab and alcohol

Apply alcohol to a dust-free cotton swab and wipe the fiber end face repeatedly until it is clean. Finally, wipe the end face with a dry, dust-free cotton swab to keep it dry.
2) Clean with fiber optic cleaner cassette



The fiber cleaner cassette is a tool used to clean the fiber end face. To use it, place the fiber end face on the cleaning tape and wipe gently. The tape can be advanced to expose fresh sections during the cleaning process. A fiber cleaning cassette can be used approximately 500 times and is easy to operate.

3) Clean with fiber optic cleaner


Fiber cleaners are primarily used to clean fiber end faces. Connect the fiber connector to the fiber cleaner and clean the fiber end face with a single push. A fiber cleaner can typically be used about 750 times. It is simple to operate and inexpensive.
B. Ways to clean fiber tip
Contaminants on the fiber tip are generally inside the joint and are not easy to reach, so cleaning the fiber tip requires some professional cleaning tools. There are two main ways to clean the fiber tip.

1) Clean with fiber optic cleaning pen



The fiber cleaning pen can be used to clean the fiber tip. Insert the pen into the fiber tip and clean the ferrule end face by pressing the operation button. This method is simple and low-cost.

2) Clean with automatic electronic fiber optic cleaner



The automatic cleaner is an electric fiber tip cleaner. Connect the fiber tip to the automatic cleaner and press the button to complete the cleaning.

C. Ways to clean the optical module ports
The ports of an optical module are usually in contact with optical connectors such as LC, SC, and MTP, so contamination generally comes from the ferrules of these connectors. Optical module ports are much less contaminated than fiber optic connectors. Moreover, frequent insertion and removal of an optical module shortens its service life. In practice, the port only needs to be cleaned when the module's transmission performance degrades.

1) Clean with dust-free cotton swab and alcohol

First dip the dust-free cotton swab in alcohol, then insert it into the optical module port and turn it clockwise for one full rotation.
2) Clean with fiber optic cleaning pen
The fiber cleaning pen can be used not only to clean the fiber tip, but also to clean the optical module port. The usage is similar to that described above for cleaning the fiber tip.

Every cleaning of the fiber optic connector causes some wear. Therefore, in daily network operation, we should take good dust-proofing measures so as to prevent contamination in the first place!

Fibre optic cables are genuinely fascinating.


You can shine light through a piece of glass, but one of the interesting things is that if you shine a light into the edge of a sheet of glass, it will travel through that piece of glass and shine out of the other side quite brightly. Now, the beam of light might spread out, and you would expect it to come out at the faces as well as the edges of the glass, but the differences between the optical properties of air and glass, as well as the angle of the light, mean the light gets reflected within the glass.


Then, through a process of reflection and refraction, you can exploit this along any route of glass.

We can actually build filaments of very thin glass fibres that behave like pieces of glass stacked together. They are clad in a material of a different, lower refractive index to cause the internal reflections. Interestingly, the layers of cladding mean that even if the glass breaks, it still behaves like a stack of glass sheets: the light passes between the blocks and has nowhere else to go. You can bend these glass fibres more than you would expect for glass, but bend them too far and the glass stacks will no longer pass the light effectively.


Now that we can pass light down a fibre, how do we make that useful? Essentially, we can pass information by turning the light off and on insanely fast. Another method is to modulate how much light is output, more than just on and off, so that numbers can be represented by these levels. Or we can make changes to a frequency or phase carried by the light, where different states represent different numbers. Combined, various schemes can be used to transmit insane quantities of data. And it isn't white light that we use; often it is just one frequency of light, and we can then stack various other frequencies together to send multiple signals down one fibre, reaching truly ridiculous speeds.
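As a toy illustration of the multi-level modulation idea, here is a Python sketch that maps pairs of bits onto four light intensity levels (PAM-4 style); the level values and encoding are illustrative, not any real transceiver's scheme:

```python
# Sketch: encoding bits as light intensity levels (PAM-4 style).
# Two bits per symbol are mapped to one of four intensity levels,
# halving the symbol rate needed for a given bit rate.

LEVELS = {0b00: 0.0, 0b01: 0.33, 0b10: 0.66, 0b11: 1.0}  # illustrative values
INVERSE = {v: k for k, v in LEVELS.items()}

def encode(bits):
    """Group a bit string into pairs and map each pair to an intensity level."""
    assert len(bits) % 2 == 0
    return [LEVELS[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2)]

def decode(levels):
    """Map received intensity levels back to bit pairs."""
    return "".join(format(INVERSE[v], "02b") for v in levels)

signal = encode("01100011")
print(signal)          # [0.33, 0.66, 0.0, 1.0]
print(decode(signal))  # 01100011
```

Stacking several such signals, each on its own wavelength, is then exactly the wavelength-division multiplexing mentioned above.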


If you have questions about interconnect solutions, feel free to contact us.




When it comes to optical modules, the brand Cisco, and especially their Cisco SFP+ modules, will be cited by nearly everybody in the optical fiber field. As one of the international leaders in IT and networking, Cisco has been committed to networking system design and manufacture for several decades. Among all their product lines, SFP+ modules have enjoyed enormous popularity. In that context, this article makes an overall exploration of Cisco's star product: the Cisco SFP+ module.

An Overview of the Cisco SFP+ Module

Like general SFP+ transceivers, Cisco SFP+ modules are optical devices designed for 8 Gbit/s Fibre Channel, 10 Gigabit Ethernet, and the optical transport network standard OTU2, supporting data rates up to 16 Gbit/s.



Main Features

The main features of Cisco SFP+ modules are as follows:


Industry’s smallest 10G form factor for the highest density per chassis

Hot-swappable input/output devices that plug into an Ethernet SFP+ port of a Cisco switch (no need to power down when installing or replacing)

Supports a “pay-as-you-populate” model for investment protection and ease of technology migration

Optical interoperability with 10GBASE XENPAK, 10GBASE X2, and 10GBASE XFP interfaces on the same link

The Cisco quality identification (ID) feature enables a Cisco platform to identify whether the module is certified and tested by Cisco

Cisco SFP+ Module Types

With various Cisco SFP+ module types, a wide variety of 10 Gigabit Ethernet connectivity options can be provided for different networking environments, such as data centers, enterprise wiring closets, and service provider transport applications. Altogether, Cisco SFP+ modules include the SFP-10G-SR, SFP-10G-LR (LRM), SFP-10G-ER, and SFP-10G-ZR, among others. You can refer to the table below for detailed specifications of Cisco SFP+ modules.

How to Choose Cisco SFP+ Modules?

If you want to buy Cisco SFP+ modules, it’s sensible to take their optimal transmission distance and their compatibility with other Cisco devices into consideration.



As for the transmission distance, ranges of 100 m to 400 m and 10 km to 80 km are most commonly seen. For distances from 100 m to 400 m, we typically use a Cisco 10G multimode SFP+ transceiver. For example, if you want to buy a Cisco SFP+ module for transmission within 300 m, then the Cisco SFP-10G-SR module would be the best choice. For more information about the optimal transmission distance of Cisco SFP+ modules, you can consult the table above.
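To make the distance-based selection concrete, here is a small Python sketch. The reach figures are the commonly quoted maxima for these modules (SR over OM3 multimode; LR/ER/ZR over single-mode), but you should always confirm them against Cisco's official data sheets:

```python
# Sketch: picking a Cisco SFP+ module by required reach.
# Reach values are commonly quoted maxima; verify against the data sheet.

MODULES = [
    ("SFP-10G-SR", 300),      # multimode, up to ~300 m over OM3
    ("SFP-10G-LR", 10_000),   # single-mode, up to 10 km
    ("SFP-10G-ER", 40_000),   # single-mode, up to 40 km
    ("SFP-10G-ZR", 80_000),   # single-mode, up to 80 km
]

def pick_module(distance_m):
    """Return the shortest-reach module that still covers the required distance."""
    for name, reach in MODULES:
        if distance_m <= reach:
            return name
    raise ValueError(f"No SFP+ module covers {distance_m} m")

print(pick_module(300))     # SFP-10G-SR
print(pick_module(25_000))  # SFP-10G-ER
```

Choosing the shortest-reach module that covers the link keeps cost down, since longer-reach optics are generally more expensive.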


Apart from distance, another vital issue you should be clear about is the SFP+ module’s compatibility with other Cisco devices. You may wonder whether a Cisco SFP+ module can connect to other devices, such as SFP modules. The answer is no. For instance, if you connect the SFP-10G-SR to a Cisco GLC-SX-MMD SFP transceiver (1 Gbps only), they will not be able to work together. Since the SFP-10G-SR only runs at a 10 Gbps link rate, pairing it with a 1 Gbps device would force it to a speed it does not support; you can never interconnect them. For more information about the compatibility of Cisco modules, you can check the cost-effective options on sfpcables.com.

By the way, the Cisco SFP+ module price is also sometimes troublesome for many customers. When you search for Cisco SFP+ modules, you may find that the price of these modules from original-brand stores is not competitive. Therefore, in recent years, the use of non-original-brand optical transceivers in fiber optic networks has become a trend. More and more users prefer third-party modules, like 10Gtek.com pluggable optics, as they are guaranteed to be fully compatible with the original-brand hardware while having an affordable price.



Playing a leading role in fiber optic networks, the Cisco SFP+ module has witnessed a glorious period. But with the unremitting efforts of other manufacturers, non-original-brand module makers will surely be on the rise. FS.COM is certainly one such case.


SFP+ modules/transceivers you may need:

SFP+ 10GBase-LRM 1310nm, 220M

10Gb/s SFP+ SR Transceiver 10GBase-SR 850nm

SFP+ LR Lite Transceiver 10GBase-LR 1310nm


With the extraordinary network growth of data centers, high-performance computing networks, enterprise core and distribution layers, carrier service applications, and so on, a cost-effective, high-density, and low-power 100G Ethernet connectivity solution is urgently needed. The 100G QSFP28 (Quad Small Form-Factor Pluggable) transceiver is precisely what such a network solution calls for. The main features below are the biggest advantages of using a 100G QSFP28 transceiver:




Supported by all network equipment manufacturers

Hot-pluggable into a 100G Ethernet QSFP28 port

Compliant with 100G Ethernet IEEE 802.3bm

Compliant with SFF-8665 (QSFP28 Solution) Revision 1.8

Supports 100G data-rate links of up to 30 km

Low power consumption of max 4.5 W

Smallest size

100G QSFP28 SR4 Transceiver

This full-duplex module offers four independent transmit and receive channels, reaching up to 70 meters over OM3 MMF (multimode fiber) and 100 meters over OM4 MMF. The four channels of signal pass through the parallel module via the MPO/MTP connector to complete the transmission. As the entry-level 100G Ethernet solution, the QSFP28 SR4 transceiver is the first choice for short-reach links such as data centers or service centers.

100G QSFP28 LR4 Transceiver

The 100G QSFP28 LR4 transceiver is a module designed for transmission spans of up to 10 km, operated over SMF (single-mode fiber) through an LC connector. When the module runs in a data link, on the transmit side it converts each individual channel of electrical signal to a LAN WDM optical signal and then multiplexes all four channels of 25G signal into a single 100G output channel. On the receive side, the reverse process takes place: the 100G optical input signal is demultiplexed into LAN WDM signals and then converted back to four channels of 25G electrical signal.



LC and MPO/MTP Connector



The LC connector is very common for optical module applications, especially for QSFP28 transceivers. It is an SFF (small form factor) connector developed by Lucent, and its 1.25 mm ferrule is designed precisely for high-density cabling. According to its various attributes, it is sorted into SMF (single-mode fiber) LC and MMF LC connectors, and into duplex and simplex LC connectors.


The MPO/MTP connector contains 12 to 24 combined fibers within a single rectangular ferrule and is ubiquitously used for 100G parallel optical modules over MMF. This connector is much more complicated than others and is categorized by key-up and key-down, and by male and female MPO/MTP connectors.

100G QSFP28 IR4 PSM Transceiver

Looking at the definition of the 100G PSM4 MSA (multi-source agreement), the 100G QSFP28 PSM4 transceiver operates over four parallel lanes (4 transmit and 4 receive) in each direction. In this respect it is the same as the 100G QSFP28 SR4 transceiver. Unlike the SR4, however, it requires eight single-mode fibers for its deployment in transmission links. Its reach of up to 2 km sits at a medium level between the SR and LR modules, which makes it a complement to the 100G QSFP28 transceiver family, adding diversity of choice and better economics.

100G QSFP28 CWDM4 Transceiver

By applying CWDM technology, the 100G QSFP28 CWDM4 integrates and multiplexes four different wavelengths (1270 nm, 1290 nm, 1310 nm, and 1330 nm) onto one SMF for signal transmission, similar to the process of the 100G QSFP28 LR4. On the receive side, the incoming signal is demultiplexed into four separate channels over another SMF, so the total number of SMFs used is two rather than eight compared to the 100G QSFP28 IR4 PSM transceiver. The reach of the 100G QSFP28 CWDM4 transceiver is up to 2 km.
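A little Python bookkeeping sketch of the CWDM4 idea described above; the lane-to-wavelength assignment uses the four wavelengths listed, while the payload strings and function names are placeholders:

```python
# Sketch: CWDM4 mux/demux bookkeeping. Four 25G lanes are each assigned a
# CWDM wavelength and carried together on one fiber per direction, versus
# one fiber per lane per direction for PSM4.

CWDM4_WAVELENGTHS_NM = [1270, 1290, 1310, 1330]  # one per 25G lane

def mux(lanes):
    """Combine four 25G lane payloads onto one fiber, keyed by wavelength."""
    assert len(lanes) == len(CWDM4_WAVELENGTHS_NM)
    return dict(zip(CWDM4_WAVELENGTHS_NM, lanes))

def demux(fiber):
    """Recover the four lane payloads, in wavelength order."""
    return [fiber[w] for w in CWDM4_WAVELENGTHS_NM]

lanes = ["lane0", "lane1", "lane2", "lane3"]
assert demux(mux(lanes)) == lanes

# Fibers needed for a full duplex link:
print("CWDM4:", 2)     # one fiber per direction
print("PSM4:", 4 + 4)  # four transmit + four receive fibers
```

The fiber-count difference at the end is the practical reason CWDM4 is attractive for structured cabling, despite the extra mux/demux optics in the module.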

CWDM(Coarse Wavelength Division Multiplex) Vs DWDM(Dense Wavelength Division Multiplex)

Both CWDM and DWDM technologies aim to broaden bandwidth, maximize fiber usage, and ultimately optimize the network. They can send multiple data flows simultaneously over a single-mode fiber. CWDM is a flexible deployment for fiber networks, especially for point-to-point topologies in enterprise networks, while DWDM is typically used to connect metropolitan networks, interconnect data centers, and serve financial services networks. The table below summarizes the main differences between CWDM and DWDM.

100G QSFP28 ER4 Lite Transceiver

What if customers demand an ultra-long-link 100G network beyond the 10 km of the 100G QSFP28 LR4 transceiver? There is a solution! The 100G QSFP28 ER4 Lite transceiver was born precisely to meet this special demand. It adopts an EML laser on the transmit side and multiplexes/demultiplexes four lanes of signal across four wavelengths (1295.56 nm, 1300.05 nm, 1304.58 nm, and 1309.14 nm), operating over a single-mode duplex fiber pair. Notably, a TEC (thermo-electric cooler) is built in to keep the internal temperature constant and prevent the wavelengths from drifting. The extended reach of the 100G QSFP28 ER4 Lite is 30 km or more.

If you want to buy a 100G QSFP28 transceiver, you are welcome to check the sfpcables blog to get more details before buying.

A pathologist’s report after reviewing a patient’s biological tissue samples is often the gold standard in the diagnosis of many diseases. For cancer in particular, a pathologist’s diagnosis has a profound impact on a patient’s therapy. The reviewing of pathology slides is a very complex task, requiring years of training to gain the expertise and experience to do well.

Even with this extensive training, there can be substantial variability in the diagnoses given by different pathologists for the same patient, which can lead to misdiagnoses. For example, agreement in diagnosis for some forms of breast cancer can be as low as 48 per cent, and similarly low for prostate cancer. The lack of agreement is not surprising given the massive amount of information that must be reviewed in order to make an accurate diagnosis. Pathologists are responsible for reviewing all the biological tissues visible on a slide. However, there can be many slides per patient, each of which is 10+ gigapixels when digitized at 40X magnification. Imagine having to go through a thousand 10 megapixel (MP) photos, and having to be responsible for every pixel. Needless to say, this is a lot of data to cover, and often time is limited.

To address these issues of limited time and diagnostic variability, we are investigating how deep learning can be applied to digital pathology, by creating an automated detection algorithm that can naturally complement pathologists’ workflow. We used images (graciously provided by the Radboud University Medical Center) which have also been used for the 2016 ISBI Camelyon Challenge1 to train algorithms that were optimized for localization of breast cancer that has spread (metastasized) to lymph nodes adjacent to the breast.

The results? Standard “off-the-shelf” deep learning approaches like Inception (aka GoogLeNet) worked reasonably well for both tasks, although the tumor probability prediction heatmaps produced were a bit noisy. After additional customization, including training networks to examine the image at different magnifications (much like what a pathologist does), we showed that it was possible to train a model that either matched or exceeded the performance of a pathologist who had unlimited time to examine the slides.


Left: Images from two lymph node biopsies. Middle: earlier results of our deep learning tumor detection. Right: our current results. Notice the visibly reduced noise (potential false positives) between the two versions.

In fact, the prediction heatmaps produced by the algorithm had improved so much that the localization score (FROC) for the algorithm reached 89%, which significantly exceeded the score of 73% for a pathologist with no time constraint2. We were not the only ones to see promising results, as other groups were getting scores as high as 81% with the same dataset. Even more exciting for us was that our model generalized very well, even to images that were acquired from a different hospital using different scanners. For full details, see our paper “Detecting Cancer Metastases on Gigapixel Pathology Images”.


A closeup of a lymph node biopsy. The tissue contains a breast cancer metastasis as well as macrophages, which look similar to tumor but are benign normal tissue. Our algorithm successfully identifies the tumor region (bright green) and is not confused by the macrophages.

While these results are promising, there are a few important caveats to consider.

  • Like most metrics, the FROC localization score is not perfect. Here, the FROC score is defined as the sensitivity (percentage of tumors detected) at a few pre-defined average false positives per slide. It is pretty rare for a pathologist to make a false positive call (mistaking normal cells as tumor). For example, the score of 73% mentioned above corresponds to a 73% sensitivity and zero false positives. By contrast, our algorithm’s sensitivity rises when more false positives are allowed. At 8 false positives per slide, our algorithms had a sensitivity of 92%.
  • These algorithms perform well for the tasks for which they are trained, but lack the breadth of knowledge and experience of human pathologists — for example, being able to detect other abnormalities that the model has not been explicitly trained to classify (e.g. inflammatory process, autoimmune disease, or other types of cancer).
  • To ensure the best clinical outcome for patients, these algorithms need to be incorporated in a way that complements the pathologist’s workflow. We envision that algorithms such as ours could improve the efficiency and consistency of pathologists. For example, pathologists could reduce their false negative rates (percentage of undetected tumors) by reviewing the top ranked predicted tumor regions, including up to 8 false positive regions per slide. As another example, these algorithms could enable pathologists to easily and accurately measure tumor size, a factor that is associated with prognosis.
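A rough sketch of how a FROC-style localization score can be computed, following the definition in the first caveat above (sensitivity averaged over predefined false-positives-per-slide budgets). The candidate format and operating points are illustrative; this is not the official Camelyon evaluation code:

```python
# Sketch: FROC-style score = mean sensitivity at predefined average
# false-positive-per-slide operating points, per the definition above.

def sensitivity_at_fp_rate(candidates, n_tumors, n_slides, max_avg_fp):
    """candidates: list of (score, is_true_positive) detections.
    Walk down the score-ranked list until the false-positive budget
    is spent, then report the fraction of tumors found."""
    fp_budget = max_avg_fp * n_slides
    tp = fp = 0
    for score, is_tp in sorted(candidates, key=lambda c: -c[0]):
        if is_tp:
            tp += 1
        else:
            fp += 1
            if fp > fp_budget:
                break
    return tp / n_tumors

def froc(candidates, n_tumors, n_slides, fp_rates=(0.25, 0.5, 1, 2, 4, 8)):
    sens = [sensitivity_at_fp_rate(candidates, n_tumors, n_slides, r)
            for r in fp_rates]
    return sum(sens) / len(sens)

# Toy example: 2 tumors on 1 slide, 5 ranked detections.
cands = [(0.9, True), (0.8, False), (0.7, True), (0.6, False), (0.5, False)]
print(sensitivity_at_fp_rate(cands, 2, 1, 0))  # 0.5  (zero FPs allowed)
print(sensitivity_at_fp_rate(cands, 2, 1, 1))  # 1.0  (one FP allowed)
```

This also illustrates the point made above: allowing more false positives per slide lets the sensitivity rise.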

Training models is just the first of many steps in translating interesting research to a real product. From clinical validation to regulatory approval, much of the journey from “bench to bedside” still lies ahead, but we are off to a very promising start, and we hope that by sharing our work, we will be able to accelerate progress in this space.


After installing the latest Mellanox OFED (MLNX_OFED_LINUX-2.0-3.0.0) and configuring it for SR-IOV, per ophirmaor's community post (see: Mellanox OFED Driver Installation and Configuration for SR-IOV), I moved on to installing KVM (Kernel-based Virtual Machine) so that I can create and run virtual machines. I happen to have a server in my lab with CentOS 6.4, so I found the following post on HowtoForge extremely useful: Virtualization With KVM On A CentOS 6.4 Server | HowtoForge - Linux Howtos and Tutorials. More to come...

Written by: Paul Garrison on July 16, 2013, Nimbix Blog


Nimbix supports a variety of interconnect options, ranging from our standard 1 Gb/s Ethernet to 56 Gb/s FDR InfiniBand. As with most things, one size does not fit all. Not all applications need or can benefit from InfiniBand, but for many of them there is a noticeable performance benefit, especially HPC applications that leverage parallel processing, such as climate research, molecular modeling, physical simulations, crypto analysis, geophysical research, automotive and aerospace design, financial modeling, data mining and more.


Our deployment of InfiniBand offers low-latency, high-bandwidth, high message rate, transport offload to facilitate extremely low CPU overhead, Remote Direct Memory Access (RDMA), and advanced communications offloads. It takes advantage of the world’s fastest interconnect, supporting up to 56Gb/s and extremely low application latency (as low as 1 microsecond).


So what kind of difference can InfiniBand make? Latency and bandwidth are the two most common performance parameters used to compare interconnects. Mellanox Technologies (a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions) has facilitated the testing of its InfiniBand solutions via the Compute Cluster Center operated by the HPC Advisory Council.


Here is a benchmark from Mellanox’s website that helps highlight the differences:



                   Mellanox 56Gb/s FDR IB     Intel 40Gb/s QDR IB     Intel 10GbE NetEffect NE020

Bandwidth          6.8 GB/s                   3.2 GB/s                1.1 GB/s

Message Rate       137 Million msg/sec        30 Million msg/sec      1.1 Million msg/sec


Source: Mellanox Technologies testing; Ohio State University; Intel websites


InfiniBand availability in the cloud opens up new application opportunities for customers. A great real-world example is our recent participation in the Ubercloud HPC Experiment. Nimbix worked with Simpson Strong-Tie, Simulia Dassault Systems, DataSwing Corporation, NICE Software and Beyond CAE to leverage our InfiniBand-enabled cloud infrastructure on NACC to accelerate heavy-duty ABAQUS structural analysis:


The job: Cast-in-Place Mechanical Anchor Concrete Anchorage Pullout Capacity Analysis
Materials: Steel & Concrete
Procedure: 3D Nonlinear Contact, Fracture & Damage Analysis
Number of Elements: 1,626,338
Number of DOF: 1,937,301
Run time:
Single 12 core system = 29 hours 03 minutes 41 seconds
InfiniBand enabled 72 core parallel computing cluster = 05 hours 30 minutes 00 seconds
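Working those run times through, the arithmetic looks like this (a quick Python check of the speedup and parallel efficiency implied by the numbers above):

```python
# Sketch: speedup and parallel efficiency from the ABAQUS run times above.

def to_seconds(h, m, s):
    return h * 3600 + m * 60 + s

single = to_seconds(29, 3, 41)   # single 12-core system
cluster = to_seconds(5, 30, 0)   # 72-core InfiniBand cluster

speedup = single / cluster
efficiency = speedup / (72 / 12)  # relative to the 6x increase in cores

print(f"speedup: {speedup:.2f}x")       # ~5.28x
print(f"efficiency: {efficiency:.0%}")  # ~88%
```

Roughly 88% parallel efficiency at 6x the cores is the kind of scaling that, as the next paragraph notes, depends on the low-latency interconnect.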


Certainly the performance increase is driven by the ability to bring more processing cores to the problem, but the compute nodes must have the low-latency switch interconnect provided by InfiniBand. Without it, this kind of application cannot be successfully scaled to take advantage of more compute horsepower. So the availability of InfiniBand in a high-performance compute cloud opens up new application options for end users who wish to leverage available cloud resources.


In summary, not every application needs InfiniBand, but as with most things in this world options are important. That is why Nimbix offers a range of solutions around interconnection of both our NACC cluster and the custom turn-key clusters we deploy for our clients.


Credit: Nimbix Blog

Guest blog by: Alon Harel


If your job is related to networking, be it as a network admin, an R&D engineer, an architect, or any other job involving networks, it is very likely you have heard people around you (or GASP! maybe even heard yourself) express doubts about the proliferation of Software Defined Networking (SDN) and OpenFlow. How many times have you encountered skepticism about this new revolutionary concept of decoupling control and data planes and “re-inventing the wheel”? Many people used to think “this is hype; it will go away like other new technologies did, and it will never replace the traditional network protocols…” Well, if you perceive SDN/OpenFlow only as a replacement for the current distributed network protocols, these doubts may turn out to be valid. The notion that “OpenFlow is here to replace the old strict protocols” is pretty much the message one gets from reading the old white papers on OpenFlow. These papers used to describe the primary motivation for moving to OpenFlow as the determination to introduce innovation in the control plane (that is, the ability to test and apply new forwarding schemes in the network).


This long preface is the background for the use case we present below. This use case is not about a new forwarding scheme, nor is it about re-implementing protocols; rather, it is a complementary solution for existing traditional networks. It is about adding network services in an agile way, allowing cost-efficient scalability. It is innovative and fresh and, most importantly, it could have not been done prior to the SDN era. Its simplicity and the fact that it relies on some very basic notions of OpenFlow can only spark the imagination about what can be done further using the SDN toolbox.


RADWARE’s security appliance, powered by Mellanox’s OpenFlow-enabled ConnectX®-3 adapter, brings a new value proposition to the network appliance market, demonstrating the power of SDN by enabling the addition of network services in a most efficient and scalable way.


Security and attack mitigation service is applied for pre-defined protected objects (servers) identified by their IP address. Prior to SDN, the security appliance had to be a ‘bump in the wire’ because all traffic destined for the protected objects must traverse through it. This, of course, dictates network physical topology, limited by the appliance’s port bandwidth and imposing high complexity when scale comes into play.


RADWARE’s DefenseFlow software is capable of identifying abnormal network behavior by monitoring the number of bytes and packets in specific flows destined for the protected objects. The monitoring is performed by installing specific flows in the forwarding hardware solely for the sake of counting the data traversing it. Flow configuration and counter information are retrieved via standard OpenFlow primitives. The naïve approach would be to use the OpenFlow switches to accommodate the flows (counters); however, the limited resource capacity of commodity switches (mainly TCAM, which is the prime resource for OpenFlow) rules out this option. (Note that a switch may be the data path for hundreds or thousands of VMs, each with several monitored flows.) Thus, the viability of the solution must come from somewhere else. Enter Mellanox’s OpenFlow-enabled ConnectX-3 SR-IOV adapter.


ConnectX-3 incorporates an embedded switch (or eSwitch) enabling VM communication to enjoy bare metal performance. The HCA driver includes OpenFlow agent software, based on the Indigo-2 open source project, which enables the eSwitch to be controlled using standard OpenFlow protocol.


Installing the flows (counters) on the edge switch (eSwitch) makes a lot of sense. First, each eSwitch is responsible for only a relatively small number of protected objects (only those servers running on a specific host), so the scale obstacle becomes a non-issue. Moreover, more clever or sophisticated monitoring (for example, event generation when a threshold is crossed) can easily be added, offloading the monitoring application (DefenseFlow in this case).


You might think, “What’s new about that? We already have Open vSwitch (OVS) on the server, which is OpenFlow capable.” Well, when performance is the name of the game, OVS is out and SR-IOV technology is in. In SR-IOV mode, VM communication interfaces the hardware directly, bypassing any virtual switch processing software; therefore, in this mode OVS’s OpenFlow capabilities cannot be used (as OVS is not part of the data path).


Let’s take a look at this practically by describing the setup and operation of the joint solution. The setup is based on standard servers equipped with Mellanox’s ConnectX-3 adapter and OpenFlow-enabled switch and with RADWARE’s DefensePro appliance and DefenseFlow software, which interacts with the Floodlight OpenFlow controller.



Figure 1 – Setup


Here’s a description of the joint solution operation, as depicted in Figure 2:

  • DefenseFlow installs the relevant flows on each ConnectX-3 adapter.
  • The security appliance does not participate in the normal data path.
  • ConnectX-3 counts traffic matching the installed flows.
  • Flow counters are retrieved from ConnectX-3.
  • Once an attack is identified, only relevant traffic is diverted to the security appliance (where it is cleared of malicious flows and inserted back toward its destination).
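The counter-based monitoring loop above can be sketched in Python with a stubbed counter source. In the real system the counters come from the ConnectX-3 eSwitch via standard OpenFlow flow-stats requests; the IP addresses, thresholds, and function names here are purely illustrative:

```python
# Sketch: threshold-based anomaly detection over per-flow byte counters,
# in the spirit of the DefenseFlow monitoring described above.

def detect_attacks(read_counters, baselines, factor=10.0):
    """Flag protected objects whose byte counters exceed `factor` times
    their learned baseline. `read_counters()` returns {dst_ip: bytes};
    in practice it would issue OpenFlow flow-stats requests."""
    counters = read_counters()
    return [ip for ip, total in counters.items()
            if total > factor * baselines.get(ip, float("inf"))]

# Illustrative use with a stubbed counter snapshot:
baselines = {"10.0.0.5": 1_000, "10.0.0.6": 2_000}
snapshot = {"10.0.0.5": 50_000, "10.0.0.6": 1_500}

suspects = detect_attacks(lambda: snapshot, baselines)
print(suspects)  # ['10.0.0.5']  -> divert this traffic to the appliance
```

Only the flagged destinations would then have their traffic diverted to the security appliance, which is what keeps the appliance out of the normal data path.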




Figure 2 -Joint Solution


I would argue that any skeptic seeing this example use case, and the added value it brings to existing network environments using these very basic OpenFlow knobs, would have to reconsider their SDN doubts…

If you are an OpenStack user or have customers with OpenStack deployments, please take 10 minutes to respond to our first User Survey or pass it along to your network. Our community has grown at an amazing rate in 2.5 years, and it’s time to better define our user base and requirements, so we can respond and advocate accordingly.


Below you’ll find a link and instructions to complete the User Survey by April 1, 2013. It takes 10 minutes. Doing so will help us better serve the OpenStack user community, facilitate communication and engagement among our users as well as uncover new OpenStack users that might be willing to tell their stories publicly.



All of the information you provide is confidential to the Foundation and will be aggregated anonymously unless you clearly indicate we can publish your organization’s logo and profile on the OpenStack User Stories page.


Make sure to tune in to the User Committee when they present the aggregate findings of this important survey at the OpenStack Summit, April 15-18, in Portland, OR. For those unable to attend, we’ll share the presentation and have a video of the session to view after the event.


Please help us promote the survey, and thank you again for your support!



Atlantic.Net is a global cloud hosting provider. With Mellanox interconnects, Atlantic.Net can now offer customers more robust cloud hosting services through a reliable, adaptable infrastructure, all at a lower cost than traditional interconnect solutions.

Why Atlantic.Net Chose Mellanox

  • Price and Cost Advantage

Mellanox’s interconnect technologies help Atlantic.Net avoid expensive hardware, overhead costs while scaling, and administrative costs, reducing cost per application by 32%.

  • Lower Latency and Faster Storage Access

By utilizing the iSCSI Extensions for RDMA (iSER) on its KVM servers over a single converged InfiniBand adapter, Atlantic.Net gains lower latency and reduced complexity, resulting in lower costs to the user.
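As an illustration of how little changes on the initiator side, an open-iscsi host can be switched from plain TCP to the iSER transport with a few `iscsiadm` commands. This is a minimal sketch; the portal address and target IQN below are placeholder values, and an RDMA-capable adapter with its drivers loaded is assumed.

```shell
# Create an iSCSI interface bound to the iSER (RDMA) transport
iscsiadm -m iface -I iser -o new
iscsiadm -m iface -I iser -o update -n iface.transport_name -v iser

# Discover and log in to the target over iSER
# (192.168.1.10 and the IQN are placeholder values)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10 -I iser
iscsiadm -m node -T iqn.2013-01.net.example:storage.lun1 \
         -p 192.168.1.10 -I iser --login
```

Once logged in, the target's LUNs appear as ordinary block devices, so the rest of the stack (KVM, LVM, filesystems) is unchanged while data movement happens over RDMA.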

  • Consolidate I/O Transparently

LAN and SAN connectivity for VMs on KVM is tightly integrated with Atlantic.Net’s management environment, allowing Atlantic.Net to transparently consolidate LAN, SAN, live-migration, and other traffic.

The Bottom Line


By deploying Mellanox’s InfiniBand solution, Atlantic.Net can support high-volume and high-performance requirements, on demand, and offer a service that scales as customers’ needs change and grow. Having built a high-performance, reliable, and redundant storage infrastructure using off-the-shelf commodity hardware, Atlantic.Net was able to avoid purchasing expensive Fibre Channel storage arrays, saving significant capital expense per storage system.



Written By: Eli Karpilovski, Manager, Cloud Market Development


With OpenStack, the new open source cloud orchestration platform, the promise of flexible network virtualization and network overlays is looking closer than ever. The vision of this platform is to enable the on-demand creation of many distinct networks on top of one underlying physical infrastructure in the cloud environment. The platform supports automated provisioning and management of large groups of virtual machines or compute resources, including extensive monitoring in the cloud.


There is still a lot of work to be done, as there are many concerns around the efficiency and simplicity of the management solution for compute and storage resources. A mature solution will need to incorporate different approaches to intra-server provisioning, QoS, and vNIC management: for example, relying on local network adapters capable of handling requests via the OpenFlow protocol, or using a more standard approach managed by the switch. Using only one method might create performance and efficiency penalties.


Learn how Mellanox’s OpenStack solution offloads the orchestration platform from the management of individual networking elements, with the end goal of simplifying operations of large-scale, complex infrastructures.

Written By: Eli Karpilovski, Manager, Cloud Market Development


With expansive growth expected in the cloud-computing market (some researchers expect it to grow from $70.1 billion in 2012 to $158.8 billion in 2014), cloud service providers must find ways to provide increasingly sustainable performance. At the same time, they must accommodate an increasing number of internet users, whose expectations of improved and consistent response times keep growing.


However, service providers cannot increase performance if the corresponding cost also rises. What these providers need is a way to deliver low latency, fast response, and increasing performance while minimizing the cost of the network.


RDMA is one good example of how to accomplish that. Traditionally, centralized storage was either slow or created bottlenecks, which deemphasized the need for fast storage networks. With the advent of fast solid-state devices, we are seeing the need for a very fast, converged network to leverage the capabilities these devices offer. In particular, we are starting to see cloud architectures using RDMA-based storage appliances to accelerate storage access, reduce latency, and achieve the best CPU utilization on the endpoint.


To learn more about how RDMA helps cloud infrastructures meet performance, availability, and agility needs, now and in the future, check the following link.


Mellanox- InfiniBand makes headway in the cloud - YouTube