1) If you are not actually using IB inside the guest, then I doubt you need GUIDs at all.
2) What kind of RDMA problems did you see? Seeing the error messages would help to understand the problem better.
3) HPC benchmarks usually aren't run under virtualization because of the performance overhead; a virtual machine will never run faster than the real host.
4) It is possible to use the inbox driver that comes with the OS.
Thanks for your response.
I believe I am using IB inside the VMs. I think newer versions of the SM automatically assign GUIDs. I've verified that all the tests I run within the VM use the verbs device; furthermore, MPI reports that it is using IBV for transport.
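In case it's useful to anyone else checking the same thing, here is a minimal sketch (mine, not from any official source) that lists the verbs devices inside the guest and queries GID index 0 on port 1; with SR-IOV, the interface_id half of that GID should be the non-zero GUID the SM assigned to the VF. Device index 0, port 1, and the file name check_gid.c are assumptions; adjust for your setup.

/* Build: gcc check_gid.c -o check_gid -libverbs */
#include <endian.h>
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "no verbs devices visible in the guest\n");
        return 1;
    }

    /* Assumption: the first device is the SR-IOV VF passed to this guest. */
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    if (!ctx) {
        fprintf(stderr, "failed to open %s\n", ibv_get_device_name(dev_list[0]));
        return 1;
    }

    union ibv_gid gid;
    /* GID index 0 on port 1; its interface_id part is the port GUID,
       which should be non-zero if the SM assigned one to the VF. */
    if (ibv_query_gid(ctx, 1, 0, &gid)) {
        fprintf(stderr, "ibv_query_gid failed\n");
        return 1;
    }
    printf("device: %s\n", ibv_get_device_name(dev_list[0]));
    printf("port 1 GID[0]: prefix %016llx guid %016llx\n",
           (unsigned long long)be64toh(gid.global.subnet_prefix),
           (unsigned long long)be64toh(gid.global.interface_id));

    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}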
For some reason, I get very poor numbers for osu_alltoall latency compared to the few papers out there on the net regarding IB on VMs using SR-IOV. I get around 9 us, and sometimes this jumps up to 40 us. On the same machines, bare-metal latency tests give around 2 us.
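For anyone wanting to reproduce the comparison, this is roughly the kind of loop osu_alltoall times. This is a rough sketch of my understanding, not the actual OSU source; the warmup/iteration counts, the 1-byte message size, and the file name alltoall_lat.c are arbitrary choices of mine.

/* Build: mpicc alltoall_lat.c -o alltoall_lat
   Run:   mpirun -np <ranks> ./alltoall_lat */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define WARMUP   100   /* iterations discarded so connection setup isn't timed */
#define ITERS    1000
#define MSG_SIZE 1     /* bytes per rank, i.e. the smallest-message case */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *sendbuf = malloc((size_t)size * MSG_SIZE);
    char *recvbuf = malloc((size_t)size * MSG_SIZE);

    /* Warm up so connection setup doesn't pollute the timing. */
    for (int i = 0; i < WARMUP; i++)
        MPI_Alltoall(sendbuf, MSG_SIZE, MPI_CHAR,
                     recvbuf, MSG_SIZE, MPI_CHAR, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();
    for (int i = 0; i < ITERS; i++)
        MPI_Alltoall(sendbuf, MSG_SIZE, MPI_CHAR,
                     recvbuf, MSG_SIZE, MPI_CHAR, MPI_COMM_WORLD);
    double avg_us = (MPI_Wtime() - start) / ITERS * 1e6;

    /* Report the slowest rank, since stragglers dominate alltoall latency. */
    double max_us;
    MPI_Reduce(&avg_us, &max_us, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("alltoall, %d ranks, %d byte(s): %.2f us\n", size, MSG_SIZE, max_us);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Running this several times and watching the spread of the reported number across runs is a cheap way to see the jitter I'm describing.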
I wasn't able to use the CentOS 6.5 inbox drivers; the driver did not recognize the VFs. I had to install the Mellanox OFED package by passing the --guest option to the installer.
I've been trying to figure out how the videos and research papers on the net were able to achieve anywhere between 3-5 us within VMs. They were certainly using different OS versions, CPUs, etc., but my results vary so much that it points to something fundamentally wrong in what I'm doing.
Did you go through the Mellanox tuning guide? If the system is not tuned, that could explain these variations in performance.