I get huge differences when passing a float from C# to C++. I'm passing a dynamic float that changes over time. In the debugger I see the following:
c++ lonVel -0.036019072 float
c# lonVel -0.029392920 float
I set the MSVC++ 2010 floating-point model to /fp:fast, which should match the .NET default if I'm not mistaken, but that didn't help.
I can't share the full code, but I can show part of it. The C# side looks like this:
namespace Example
{
    public class Wheel
    {
        public bool logging = true;

        #region Members
        public IntPtr nativeWheelObject;
        #endregion Members

        public Wheel()
        {
            this.nativeWheelObject = Sim.Dll_Wheel_Add();
            return;
        }

        #region Wrapper methods
        public void SetVelocity(float lonRoadVelocity, float latRoadVelocity)
        {
            Sim.Dll_Wheel_SetVelocity(this.nativeWheelObject, lonRoadVelocity, latRoadVelocity);
        }
        #endregion Wrapper methods
    }

    internal class Sim
    {
        #region PInvokes
        [DllImport(pluginName, CallingConvention = CallingConvention.Cdecl)]
        public static extern void Dll_Wheel_SetVelocity(IntPtr wheel,
            float lonRoadVelocity, float latRoadVelocity);
        #endregion PInvokes
    }
}
And on the C++ side, in exportFunctions.cpp:
EXPORT_API void Dll_Wheel_SetVelocity(CarWheel* wheel, float lonRoadVelocity,
                                      float latRoadVelocity) {
    wheel->SetVelocity(lonRoadVelocity, latRoadVelocity);
}
So, any suggestions on what I should do to get 1:1 results, or at least within 99% of the correct values?