CPUID features “hidden” or “masked” from the native hardware in KVM - virtualization

Which CPUID features are hidden or masked from the native hardware in the KVM implementation, and why does the KVM hypervisor hide or mask
such features?

Related

Can I run CUDA C code without an NVIDIA GPU? [duplicate]

What do I have to do to be able to do CUDA programming on a MacBook Air with Intel HD 4000 graphics?
Set up a virtual machine? Buy an external NVIDIA card? Is it possible at all?
If you have a new(-ish) MacBook Air you could perhaps use an external NVIDIA graphics device, such as an
external Thunderbolt PCIe case.
Otherwise it will not be possible to run CUDA programs on non-NVIDIA hardware, since CUDA is a proprietary framework.
You may also be able to run CUDA code by converting it to OpenCL first (for example with the Swan framework).

How to monitor the C-states of an Intel (Core 2 Duo) processor? [closed]

I am studying the effects of user usage on power consumption. How do I measure C-state occupancy on an Intel Core 2 Duo processor (Windows 7)? Is there software that can do this on Windows?
Intel provides a number of tools and guidelines for power measurement; its Power Checker tool in particular lists Core-based processors as supported and C-state occupancy as one of its features.

Tesla K20m interoperability with Direct3D 11

I would like to know whether I can work with an NVIDIA Tesla K20 and Direct3D 11.
I'd like to render an image using Direct3D, then process the rendered image with CUDA (I know how to handle the CUDA interoperability).
The Tesla K20 doesn't have a display output (it is a physically remote adapter).
I managed to do this with a Tesla C2075; however, with the K20 I can't obtain the device adapter (via the EnumAdapters call).
Is it possible to work with the Tesla K20 and Direct3D?
Frankly speaking, this code was written in Notepad:
Thanks
// Needs dxgi.h and cuda_d3d11_interop.h; link against dxgi.lib.
IDXGIFactory* factory = nullptr;
IDXGIAdapter* adapter = nullptr;
int dev = 0;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
for (unsigned int i = 0; !adapter; ++i)
{
    if (FAILED(factory->EnumAdapters(i, &adapter)))
        break;              // no more adapters to enumerate
    if (cudaD3D11GetDevice(&dev, adapter) == cudaSuccess)
        break;              // found a CUDA-capable adapter
    adapter->Release();     // the original was missing this semicolon
    adapter = nullptr;      // otherwise the loop exits holding a released adapter
}
factory->Release();
No, this won't be possible.
The K20m can be used (with some effort) with OpenGL graphics on Linux, but at least up through Windows 8.x, you won't be able to use the K20m as a D3D device in Windows.
The K20m does not publish a VGA classcode in PCI configuration space, which means that neither Windows nor the NVIDIA driver will build a proper Windows display driver stack on this device. Without that, you cannot use it as a D3D device. Additional evidence of this is visible through the nvidia-smi utility, which will show the K20 device as being in TCC mode. Any attempt to switch it to WDDM mode will fail (in some fashion; the failure may not be evident until a reboot).
If you find another GPU (such as Tesla C2075) for which it's possible, it invariably means, among other things, that the GPU is publishing a VGA classcode in PCI config space.
This general document covers classcode location in the PCI header on slide 62. This ECN excerpts the classcode definition. A VGA classcode is 0x0300, whereas a 3D controller classcode (I believe that is what K20m publishes) is 0x0302.
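The 0x0300-versus-0x0302 distinction can be illustrated with a small decoder for the (base class << 8) | subclass portion of the 24-bit PCI class code. The helper name `decode_pci_class` is mine, not from any library; the values come from the PCI class code definitions referenced above.

```cpp
// Sketch: decode the (base class << 8) | subclass part of the PCI
// class code. Base class 0x03 is "display controller"; subclass 0x00
// is VGA-compatible, subclass 0x02 is 3D controller (what a headless
// Tesla such as the K20m publishes).
#include <cstdint>
#include <string>

std::string decode_pci_class(std::uint16_t cc) {
    switch (cc) {
        case 0x0300: return "VGA compatible controller";
        case 0x0302: return "3D controller";
        default:     return "other/unknown";
    }
}
```

On Linux, `lspci -nn` shows these same numeric values in brackets next to the device class, which is a quick way to confirm which classcode a given GPU publishes.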
There are a few limited exceptions to the above. For example, a Tesla M2070Q in one configuration does not publish a VGA classcode (there is another configuration where it does publish a VGA classcode), but instead publishes a 3D controller classcode. In this configuration, it is usable by Microsoft RemoteFX as a shared graphics device for multiple Hyper-V VMs. In this situation, some D3D capability (up through DX9) is possible in the VMs.
In Linux, the difference between a "3D controller" and a "VGA controller" is evident using the lspci command.
In Windows, you can use a config-space reader to look at the difference, or you can look in Device Manager: the Tesla C2075 should show up under "Display adapters", whereas the K20m will show up somewhere else.

PowerPC 970 Based Macs, Why Is Hypervisor Mode Unavailable?

I recently acquired an Apple G5 computer (PPC 970) and am interested in learning more about the PowerPC architecture (most of my systems-programming knowledge comes from x86 and my own hobby kernel). After using the computer a while and getting used to PowerPC assembly (RISC), I noticed that low-level CPU virtualization is not possible on PowerPC 970 based Macs. The CPU documentation (PowerPC 64) seems to indicate the processor supports hypervisor mode, but it has been noted that it is not usable, supposedly because of Open Firmware. Do all operating systems loaded from Open Firmware on PowerPC 970 series Macs run in hypervisor mode, making "nested" virtualization impossible? If so, why does Open Firmware load all operating systems in hypervisor mode? Is this to provide a secure layer for communication between the operating system and Open Firmware (by analogy with x86, where using firmware for everything except ACPI and memory discovery during boot requires an unsafe transition into real mode)? Also, if the operating system were using hypercalls to facilitate a secure transition to firmware-based routines, wouldn't this impose a large penalty, just as syscalls do?
I'm not privy to Apple's hardware designs, but I've heard that HV mode (i.e., HV=1 in the Machine State Register) was disabled, in hardware, on the CPUs used in the G5 machines.
If this is the case, then it's not up to the system firmware to enable/disable HV mode - it's simply not available.
At the time that these machines were available, other Power hardware designs had a small amount of firmware running in HV=1 mode, and only exposed HV=0 to the kernel. However, the G5 wasn't one of these.

Use NVIDIA card for CUDA, motherboard for video

I want to use the motherboard as the primary display adapter and my NVIDIA graphics card as a dedicated CUDA processor. My first thought was to simply plug the monitor's VGA cable into the motherboard's VGA port and hope the BIOS was smart enough to use the on-board video as the display adapter when it booted. That didn't work; the BIOS must have detected the NVIDIA card and continued to use it as the display adapter. The next thing I looked for was a BIOS setting to tell it "don't use the NVIDIA 560 as the display adapter, use the on-board video instead". I searched through the BIOS and the Web, but either this cannot be done or I cannot figure out how to do it. The mobo is a BIOSTAR TH67+ LGA 1155, running Windows 7.
RESULTS SUMMARY (from answers provided below)
Enabling the Integrated Graphics Device (IGD) in the BIOS will allow the system to be driven from the on-board graphics even with the graphics card connected to the system bus. However, the graphics card still cannot be used for CUDA processing: Windows will not enable graphics devices unless a monitor is attached to them, so the normal driver stack cannot see them. Solutions: use Linux, or attach a display to the graphics card but do not use it. The Tesla (GPGPU-only) cards are not recognized by Windows as graphics devices, so they do not suffer from this.
Also, a newer BIOSTAR motherboard, the TZ68A+, supports the Virtu drivers, which permit sophisticated simultaneous use of the graphics card and on-board video.
Looking at the BIOS manual (.zip), the setting you probably want is Chipset -> North Bridge -> Initiate Graphics Adapter. Try setting it to IGD (Integrated Graphics Device).
I believe this will happen automatically, as the on-board video won't support CUDA. After installing the SDK, if you run deviceQuery, do you see more than one result?
I believe the H67 allows coexistence of both the integrated and the dedicated GPU. Check out Lucid Virtu here: http://www.lucidlogix.com/driverdownloads-virtu.html ; it allows switching GPUs on the fly, but I don't know whether it affects the CUDA device query.
I never tried it on my rig, because it's an X58; I just heard about it from Tom's Hardware. Try it out and let us know. Lucid Virtu is definitely worth a try: it's free, and it can cut your electric bill.
