How to use polymorphism in CUDA

I am moving some physics modeling code from C++ to CUDA.

The fundamental algorithm can be understood as applying an operator to each element of a vector. In pseudocode, the simulation may include a kernel call like:

apply(Operator o, Vector v){ ... } 

For instance:

 apply(add_three_operator, some_vector) 

will add three to each element of the vector.

In my C++ code, I have an abstract base class Operator with many different concrete implementations. The important part of the interface looks like:

 class Operator {
     virtual double operator()(double x) = 0;
     Operator compose(Operator lo, Operator ro);
     ...
 };

An implementation for AddOperator might look like this:

 class AddOperator : public Operator {
 private:
     double to_add;
 public:
     AddOperator(double to_add) : to_add(to_add) {}
     double operator()(double x) { return x + to_add; }
 };

The Operator class has methods for composing and scaling concrete implementations of Operator. This abstraction lets me build leaf operators up into more general transformations.

For instance:

 apply(compose(add_three_operator, square_operator), some_vector); 

will add three to, and then square, each element of the vector.

The problem is that CUDA does not support virtual method calls inside a kernel. My current thought is to use templates instead. Then the kernel calls will look something like this:

 apply<Composition<AddOperator, SquareOperator>>(compose(add_three_operator, square_operator), some_vector);

Any suggestions?

1 answer

Something like this is possible ...

 template <class Op1, class Op2>
 class Composition { ... };

 template <class Op1, class Op2>
 Composition<Op1, Op2> compose(Op1& op1, Op2& op2) { ... }

 template <class C>
 void apply(C& c, VecType& vec) { ... }

Source: https://habr.com/ru/post/950123/
