2 Replies Latest reply on Jun 22, 2015 7:26 AM by reyemge

    Configuration of ConnectX-3 to work with VirtualBox

      I have a dual-port ConnectX-3 installed in a Dell R620 server running the following:

      • CentOS 6.6
      • Mellanox (mlnx) driver 2.4.1.0.0.1
      • VirtualBox 4.3.28
        • Guest OS is also CentOS 6.6
        • Bridged network adapter pointing to p1p1
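
      For reference, here is how I confirm the driver, firmware, and current offload settings on the host. This assumes the 10G port is p1p1, as in my setup; `ofed_info` is the version query tool shipped with MLNX_OFED:

      ```shell
      # Driver and firmware versions reported by the mlx4_en driver for the 10G port
      ethtool -i p1p1

      # Installed MLNX_OFED release
      ofed_info -s

      # Offload features currently active on the port
      # (LRO/GRO/TSO are the interesting ones here)
      ethtool -k p1p1
      ```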


      Everything looks good when the VM boots up: I can ping remote machines over the 10G interface from both the virtual machine and the host OS.

      The problem occurs when I try to use the interface for anything more than ping, for example an ssh connection to a machine on the 10G network.

      That triggers the following kernel warning on the host OS; no error is seen on the guest OS.


      Jun 19 13:25:29 localhost kernel: WARNING: at net/core/dev.c:1921 skb_warn_bad_offload+0x99/0xb0() (Tainted: P        W  ---------------   )
      Jun 19 13:25:29 localhost kernel: Hardware name: PowerEdge R620
      Jun 19 13:25:29 localhost kernel: mlx4_core: caps=(0x3011cbb3, 0x0) len=73 data_len=0 ip_summed=1
      Jun 19 13:25:29 localhost kernel: Modules linked in: bridge fuse nfsd exportfs autofs4 coretemp nfs lockd fscache auth_rpcgss nfs_acl sunrpc tun bnx2fc cnic uio fcoe libfcoe libfc scsi_transport_fc scsi_tgt vboxpci(U) vboxnetadp(U) vboxnetflt(U) vboxdrv(U) 8021q garp stp llc nf_conntrack_netbios_ns nf_conntrack_broadcast nf_conntrack_tftp nf_conntrack_ftp ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 uinput microcode iTCO_wdt iTCO_vendor_support dcdbas nvidia(P)(U) i2c_core power_meter acpi_ipmi ipmi_si ipmi_msghandler sb_edac edac_core ses enclosure sg lpc_ich mfd_core shpchp mlx4_ib(U) mlx4_en(U) mlx4_core(U) compat(U) tg3 ptp pps_core sr_mod cdrom ext4 jbd2 mbcache usb_storage sd_mod crc_t10dif ahci wmi megaraid_sas dm_mirror dm_region_hash dm_log dm_mod [last unloaded: speedstep_lib]
      Jun 19 13:25:29 localhost kernel: Pid: 43322, comm: lspci Tainted: P        W  ---------------    2.6.32-504.23.4.el6.x86_64 #1
      Jun 19 13:25:29 localhost kernel: Call Trace:
      Jun 19 13:25:29 localhost kernel: <IRQ>  [<ffffffff81074e47>] ? warn_slowpath_common+0x87/0xc0
      Jun 19 13:25:29 localhost kernel: [<ffffffff81074f36>] ? warn_slowpath_fmt+0x46/0x50
      Jun 19 13:25:29 localhost kernel: [<ffffffff81292985>] ? __ratelimit+0xd5/0x120
      Jun 19 13:25:29 localhost kernel: [<ffffffff8145d569>] ? skb_warn_bad_offload+0x99/0xb0
      Jun 19 13:25:29 localhost kernel: [<ffffffff8144f8d7>] ? copy_skb_header+0x17/0xa0
      Jun 19 13:25:29 localhost kernel: [<ffffffff81461bd1>] ? __skb_gso_segment+0x71/0xc0
      Jun 19 13:25:29 localhost kernel: [<ffffffff81461c33>] ? skb_gso_segment+0x13/0x20
      Jun 19 13:25:29 localhost kernel: [<ffffffffa0db88c2>] ? vboxNetFltLinuxPacketHandler+0x352/0x620 [vboxnetflt]
      Jun 19 13:25:29 localhost kernel: [<ffffffff8128c539>] ? cpumask_next_and+0x29/0x50
      Jun 19 13:25:29 localhost kernel: [<ffffffff81014a29>] ? read_tsc+0x9/0x20
      Jun 19 13:25:29 localhost kernel: [<ffffffff810a9c47>] ? getnstimeofday+0x57/0xe0
      Jun 19 13:25:29 localhost kernel: [<ffffffff8145d069>] ? __netif_receive_skb+0x4b9/0x570
      Jun 19 13:25:29 localhost kernel: [<ffffffff81460b78>] ? netif_receive_skb+0x58/0x60
      Jun 19 13:25:29 localhost kernel: [<ffffffff814ded68>] ? lro_flush+0x1a8/0x1b0
      Jun 19 13:25:29 localhost kernel: [<ffffffff814dee04>] ? lro_flush_all+0x54/0x70
      Jun 19 13:25:29 localhost kernel: [<ffffffffa01e525a>] ? mlx4_en_process_rx_cq+0xc8a/0xea0 [mlx4_en]
      Jun 19 13:25:29 localhost kernel: [<ffffffffa01e54c4>] ? mlx4_en_rx_irq+0x54/0x60 [mlx4_en]
      Jun 19 13:25:29 localhost kernel: [<ffffffffa01e5571>] ? mlx4_en_poll_rx_cq+0xa1/0x180 [mlx4_en]
      Jun 19 13:25:29 localhost kernel: [<ffffffff81462653>] ? net_rx_action+0x103/0x2f0
      Jun 19 13:25:29 localhost kernel: [<ffffffff8107d901>] ? __do_softirq+0xc1/0x1e0
      Jun 19 13:25:29 localhost kernel: [<ffffffff810eac70>] ? handle_IRQ_event+0x60/0x170
      Jun 19 13:25:29 localhost kernel: [<ffffffff8107d95f>] ? __do_softirq+0x11f/0x1e0
      Jun 19 13:25:29 localhost kernel: [<ffffffff8100c38c>] ? call_softirq+0x1c/0x30
      Jun 19 13:25:29 localhost kernel: [<ffffffff8100fbd5>] ? do_softirq+0x65/0xa0
      Jun 19 13:25:29 localhost kernel: [<ffffffff8107d7b5>] ? irq_exit+0x85/0x90
      Jun 19 13:25:29 localhost kernel: [<ffffffff81533ba5>] ? do_IRQ+0x75/0xf0
      Jun 19 13:25:29 localhost kernel: [<ffffffff8100ba53>] ? ret_from_intr+0x0/0x11
      Jun 19 13:25:29 localhost kernel: <EOI>  [<ffffffff812a7b53>] ? pci_user_read_config_word+0x93/0xc0
      Jun 19 13:25:29 localhost kernel: [<ffffffff812a7b3d>] ? pci_user_read_config_word+0x7d/0xc0
      Jun 19 13:25:29 localhost kernel: [<ffffffff812a7c92>] ? pci_vpd_pci22_wait+0x52/0x110
      Jun 19 13:25:29 localhost kernel: [<ffffffff812a7fa3>] ? pci_vpd_pci22_read+0xe3/0x180
      Jun 19 13:25:29 localhost kernel: [<ffffffff812a6ceb>] ? pci_read_vpd+0x2b/0x30
      Jun 19 13:25:29 localhost kernel: [<ffffffff812b2c20>] ? read_vpd_attr+0x30/0x40
      Jun 19 13:25:29 localhost kernel: [<ffffffff8120d7d7>] ? read+0x127/0x210
      Jun 19 13:25:29 localhost kernel: [<ffffffff8118e9c5>] ? vfs_read+0xb5/0x1a0
      Jun 19 13:25:29 localhost kernel: [<ffffffff8118ecf2>] ? sys_pread64+0x82/0xa0
      Jun 19 13:25:29 localhost kernel: [<ffffffff8100b0f2>] ? system_call_fastpath+0x16/0x1b
      Jun 19 13:25:29 localhost kernel: ---[ end trace bb7fb3c39b1f3d2c ]---


      Is there a configuration option that needs to be set to make this work?
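
      In case it helps narrow things down: the trace shows skb_gso_segment() failing inside vboxNetFltLinuxPacketHandler() on a packet coming out of the mlx4_en LRO path, so my guess (and it is only a guess) is that vboxnetflt cannot re-segment LRO-aggregated frames. One thing I plan to try is disabling LRO on the bridged port; again assuming the port is p1p1:

      ```shell
      # Turn off large-receive-offload on the port the VM is bridged to,
      # so vboxnetflt never sees LRO-aggregated frames it has to re-segment
      ethtool -K p1p1 lro off

      # If the warning persists, GRO can be disabled the same way:
      # ethtool -K p1p1 gro off

      # Verify the change took effect
      ethtool -k p1p1 | grep -i 'receive-offload'
      ```

      Note this would need to be reapplied after reboot (e.g. from ifup scripts), so if there is a proper driver or VirtualBox setting instead, I would rather use that.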