My plan is to compute the distance matrix using Pearson's correlation, then take the q nearest neighbors of each node (q = ln(n)) from the distance matrix and put them in the result vector. I did this in C++ with an STL priority_queue inside the correlation function loop.
But is there a way to do this on the GPU?
- Can someone show me how to do the same thing on the GPU (maybe Thrust would be easiest for me)?
- How do I implement a priority queue on the GPU?
Here is my CPU code (C++ STL):
For instance,

distance matrix
-----------------------
0 3 2 4
3 0 4 5
2 4 0 6
.....

output in an object vector (sorted by edge weight)
==================
source target weight
--------------------------------
0 2 2
0 1 3
1 0 3
1 2 4
2 0 2
.....
#include <cmath>   // sqrtf

float calculatePearsonCorrelation(float vector1[], float vector2[], int m) {
    float a = 0, b = 0, c = 0, sumX = 0, sumY = 0;  // a = sum xy, b = sum x^2, c = sum y^2
    for (int i = 0; i < m; i++) {
        sumX += vector1[i];
        sumY += vector2[i];
        a += vector1[i] * vector2[i];
        b += vector1[i] * vector1[i];
        c += vector2[i] * vector2[i];
    }
    return (m * a - sumX * sumY)
         / sqrtf((m * b - sumX * sumX) * (m * c - sumY * sumY));
}

// keep only the q nearest neighbours: drop the worst element on overflow
if (pqx.size() > q) {
    MIN = pqx.top().get_corr();
    pqx.pop();
}

// write the edge list: source, target, weight
for (vector<corr>::iterator it = qNNVector.begin(); it != qNNVector.end(); ++it)
    fout << it->source << " " << it->target << " " << it->weight << "\n";