Top & Trending
wonzhq
Hi all, I have a cluster running RoCE on Mellanox NICs.
# lspci | grep Mellanox
03:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
03:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
There is a problem when I run a large block size workload on it: the bandwidth is fairly poor. I …
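Not part of the original post, but a common first step when chasing poor RoCE bandwidth on ConnectX-5 is to take the storage workload out of the picture and measure raw RDMA throughput with perftest; the device name, message size and server IP below are placeholders, not values from the post.

On the server node:
# ib_write_bw -d mlx5_0 -s 1048576 --report_gbits

On the client node, pointing at the server's RoCE interface IP:
# ib_write_bw -d mlx5_0 -s 1048576 --report_gbits 192.168.1.10

If the raw numbers are close to line rate, the problem is more likely in the upper layers (PFC/ECN settings, MTU, or the application) than in the NIC itself.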

maxg
This post provides an example of how to bring up a persistent NVMe over Fabrics association between the initiator (host) and the target, using the RDMA transport layer. The solution uses systemd services to ensure the NVMe-oF initiator connection remains persistent in case fatal errors occur in the low-level device. >>Learn for free about Mellanox solutions …
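The excerpt above does not include the unit file itself; purely as a sketch of the idea (the unit name, target address, port and subsystem NQN below are made-up placeholders, and the actual post may structure its services differently), a persistent initiator connection could be wrapped in a systemd service roughly like this:

/etc/systemd/system/nvmf-connect-example.service:

[Unit]
Description=Illustrative NVMe-oF/RDMA initiator connection
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/nvme connect -t rdma -a 192.168.1.10 -s 4420 -n nqn.2016-06.io.example:subsys1
ExecStop=/usr/sbin/nvme disconnect -n nqn.2016-06.io.example:subsys1

[Install]
WantedBy=multi-user.target

Enable it with:
# systemctl daemon-reload
# systemctl enable --now nvmf-connect-example.service
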
szefoka
Hi, I have a problem with mlnx_qos on ConnectX-4 NICs. It works perfectly when 8 SR-IOV VFs are enabled, but after changing NUM_OF_VFS from 8 to 32 with the mlxconfig tool and rebooting, mlnx_qos can no longer assign priority values to TCs and gives the error message "Netlink error: Bad value. see dmesg", but nothing is displayed in …
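For readers unfamiliar with the tools mentioned above, the sequence being described is roughly the following; the MST device path, interface name and identity priority-to-TC mapping are assumptions for illustration, not values from the post.

# mst start
# mlxconfig -d /dev/mst/mt4115_pciconf0 set NUM_OF_VFS=32

A reboot (or firmware reset) is needed for the new VF count to take effect, after which the QoS mapping is applied per interface, for example:
# mlnx_qos -i ens1f0 --prio_tc=0,1,2,3,4,5,6,7
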
ophirmaor
This post is a quick guide to bringing up an NVMe over Fabrics host-to-target association using the RDMA transport layer. NVMe-oF can run over any RDMA-capable adapter (e.g. ConnectX-3/ConnectX-4) using the IB/RoCE link layer. >>Learn for free about Mellanox solutions and technologies in the Mellanox Online Academy. Note: This post focuses on …
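The full guide is not reproduced here, but the general shape of an NVMe-oF/RDMA bring-up with the in-kernel target looks like the sketch below; the subsystem NQN, backing device and IP address are placeholders, and the real post may use nvmetcli or different names.

On the target:
# modprobe nvmet
# modprobe nvmet-rdma
# mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.example:subsys1
# echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.example:subsys1/attr_allow_any_host
# mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.example:subsys1/namespaces/1
# echo -n /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.example:subsys1/namespaces/1/device_path
# echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.example:subsys1/namespaces/1/enable
# mkdir /sys/kernel/config/nvmet/ports/1
# echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
# echo rdma > /sys/kernel/config/nvmet/ports/1/addr_trtype
# echo 192.168.1.10 > /sys/kernel/config/nvmet/ports/1/addr_traddr
# echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
# ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.example:subsys1 /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.example:subsys1

On the host:
# modprobe nvme-rdma
# nvme connect -t rdma -a 192.168.1.10 -s 4420 -n nqn.2016-06.io.example:subsys1
# nvme list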