I have a DataFrame in Spark SQL that looks like this:
scala> result.show
+-----------+--------------+
|probability|predictedLabel|
+-----------+--------------+
|  [0.0,1.0]|           0.0|
|  [0.0,1.0]|           0.0|
|  [0.0,1.0]|           0.0|
|  [0.0,1.0]|           0.0|
|  [0.0,1.0]|           0.0|
|  [0.1,0.9]|           0.0|
|  [0.0,1.0]|           0.0|
|  [0.0,1.0]|           0.0|
|  [0.0,1.0]|           0.0|
|  [0.0,1.0]|           0.0|
|  [0.0,1.0]|           0.0|
|  [0.0,1.0]|           0.0|
|  [0.1,0.9]|           0.0|
|  [0.6,0.4]|           1.0|
|  [0.6,0.4]|           1.0|
|  [1.0,0.0]|           1.0|
|  [0.9,0.1]|           1.0|
|  [0.9,0.1]|           1.0|
|  [1.0,0.0]|           1.0|
|  [1.0,0.0]|           1.0|
+-----------+--------------+
only showing top 20 rows
I want to create a new DataFrame with an additional column called prob, which holds the first value of the probability vector from the original DataFrame, like this:
+-----------+--------------+----------+
|probability|predictedLabel|      prob|
+-----------+--------------+----------+
|  [0.0,1.0]|           0.0|       0.0|
|  [0.0,1.0]|           0.0|       0.0|
|  [0.0,1.0]|           0.0|       0.0|
|  [0.0,1.0]|           0.0|       0.0|
|  [0.0,1.0]|           0.0|       0.0|
|  [0.1,0.9]|           0.0|       0.1|
|  [0.0,1.0]|           0.0|       0.0|
|  [0.0,1.0]|           0.0|       0.0|
|  [0.0,1.0]|           0.0|       0.0|
|  [0.0,1.0]|           0.0|       0.0|
|  [0.0,1.0]|           0.0|       0.0|
|  [0.0,1.0]|           0.0|       0.0|
|  [0.1,0.9]|           0.0|       0.1|
|  [0.6,0.4]|           1.0|       0.6|
|  [0.6,0.4]|           1.0|       0.6|
|  [1.0,0.0]|           1.0|       1.0|
|  [0.9,0.1]|           1.0|       0.9|
|  [0.9,0.1]|           1.0|       0.9|
|  [1.0,0.0]|           1.0|       1.0|
|  [1.0,0.0]|           1.0|       1.0|
+-----------+--------------+----------+
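Assuming the probability column is an org.apache.spark.ml.linalg.Vector (as produced by the ML classifiers), here is a rough sketch of what I have in mind, though I am not sure it is the right approach:

// Sketch: pull the first element out of the probability vector with a UDF,
// assuming `probability` is an org.apache.spark.ml.linalg.Vector column
// and `result` is the DataFrame shown above.
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}

// UDF that returns the first element of a Vector
val firstElem = udf((v: Vector) => v(0))

// New DataFrame with the extra `prob` column
val withProb = result.withColumn("prob", firstElem(col("probability")))
withProb.show()

I have also seen org.apache.spark.ml.functions.vector_to_array (Spark 3.0+) mentioned for this, but I don't know which approach is preferred.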
How can I do this properly? Thanks!