"Failed to perform key operation" by creating indexes NULL_FILTERED

I cannot create indexes on Cloud Spanner tables; index creation fails with the error message "Failed to perform Spanner operation."

Even after increasing the instance size from 6 to 16 nodes, I cannot create two of the indexes on a table of 12 million rows.

Failed to perform Spanner operation

What I've done:

  • Created an objects table in Spanner on a 3-node instance
  • The table has 10-12 columns of types STRING, INT64, and one ARRAY<STRING>
  • The primary key is two columns: a shard value (a hash of object_id) and object_id
  • ~12 million rows loaded
  • No secondary indexes existed while loading (only the primary key)
  • Loading pegged the 3-node instance, so it was upgraded to 6 nodes
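Based on the description above, the schema would look roughly like this (a minimal sketch; the table name `objects` and all column names other than `object_id` are assumptions, not taken from the original post):

```sql
-- Hypothetical reconstruction of the table described above.
-- shard_id is a hash of object_id, used to spread writes across splits.
CREATE TABLE objects (
  shard_id   INT64       NOT NULL,
  object_id  STRING(64)  NOT NULL,
  name       STRING(MAX),
  size       INT64,
  tags       ARRAY<STRING(MAX)>,
) PRIMARY KEY (shard_id, object_id);
```

Prefixing the key with a hash shard like this is a common Spanner pattern to avoid hotspotting on monotonically increasing keys.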

What I tried:

  • Tried to build three indexes (via DDL in the console) and got "Failed to perform Spanner operation"
  • Increased the Spanner node count from 6 to 12
  • One of the three indexes (a UNIQUE index on one STRING column) built successfully
  • Tried to build the other two indexes (UNIQUE NULL_FILTERED on single STRING columns) and got "Failed to perform Spanner operation"
  • Increased the node count from 12 to 16 (the account maximum)
  • Tried the two remaining indexes again and got the same "Failed to perform Spanner operation" error
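The failing DDL presumably looked something like the following (a sketch; the index and column names are assumptions):

```sql
-- Hypothetical DDL matching the description above.
-- This UNIQUE index on one STRING column built successfully:
CREATE UNIQUE INDEX objects_by_name ON objects (name);

-- These two UNIQUE NULL_FILTERED indexes failed with
-- "Failed to perform Spanner operation":
CREATE UNIQUE NULL_FILTERED INDEX objects_by_etag ON objects (etag);
CREATE UNIQUE NULL_FILTERED INDEX objects_by_checksum ON objects (checksum);
```

NULL_FILTERED excludes rows where the indexed column is NULL, so a UNIQUE NULL_FILTERED index only enforces uniqueness among non-NULL values.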

What else I tried (updated):

  • Removed NULL_FILTERED and tried to create the two remaining indexes; creation still failed
1 answer

Reply from Google Cloud Support:

Our development team was able to find the reason the indexes could not be created. Your data contains two records that are not unique, which violates the UNIQUE constraint [1] and prevents the index from being created. The uniqueness check fails during the index backfill, which is why index creation aborts with this generic error rather than a clear uniqueness-violation message.

You can use a query like the following to find the duplicate entries:

 SELECT column, COUNT(column) FROM table GROUP BY column HAVING COUNT(column) > 1 

You can modify this query to check all of the candidate key columns at once, or run it once per column. Once the duplicates are found, you can clean up those records and restart index creation.
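For example, to check both candidate key columns in one pass (a sketch; `objects`, `etag`, and `checksum` are placeholder names, not from the original post):

```sql
-- Count duplicate values for each candidate index column.
-- NULLs are excluded because NULL_FILTERED indexes ignore them.
SELECT 'etag' AS column_name, etag AS value, COUNT(*) AS cnt
FROM objects
WHERE etag IS NOT NULL
GROUP BY etag
HAVING COUNT(*) > 1

UNION ALL

SELECT 'checksum', checksum, COUNT(*)
FROM objects
WHERE checksum IS NOT NULL
GROUP BY checksum
HAVING COUNT(*) > 1;
```

Each row of the result names the offending column and value, so the duplicates can be located and resolved before retrying the DDL.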


I hope the Spanner team fixes this and returns a clearer error message in a future version.


Source: https://habr.com/ru/post/1276098/
