How can I perform arithmetic operations in SPARQL using Python?

I am writing a public domain calculator whose code is available at: https://github.com/okfn/pdcalc/blob/master/pd/map.rdf

Currently, the code cannot correctly determine whether a work is in the public domain, due to a limitation of SPARQL 1.0: it is not possible to perform arithmetic operations on dates, which means the calculator cannot determine, for example, whether 70 years have passed since the death of the author. Unfortunately, none of the standard RDF Python libraries have implemented SPARQL 1.1 support yet. So I was wondering if anyone has suggestions on how to overcome this limitation, or perhaps knows a Python library with better SPARQL support?

Looking forward to your suggestions!

+4
2 answers

Even SPARQL 1.1 does not support arithmetic operations on dates by default. See the SPARQL Operator Mapping section of the specification: arithmetic operators are defined only for numeric datatypes.

There may be SPARQL 1.1 implementations that offer an extension for this purpose, but I am not aware of any off the top of my head, and certainly not in Python.

Your best bet is to contact the developers of the SPARQL engine of your choice and lobby them to implement such an extension, or, alternatively, implement it yourself.

As a rule, most SPARQL engines (even 1.0 ones) support date comparison operations, so you can do things like sorting and comparing, but you will have to do some custom post-processing of the query result.
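For example, that post-processing could look like the following sketch in Python (the bindings, variable names, and the 70-year term are all illustrative, not from the actual calculator):

```python
from datetime import datetime

# Simulated SPARQL SELECT results: each binding maps query variables
# to lexical values, as a SPARQL 1.0 engine would return them.
bindings = [
    {"work": "ex:work1", "death_date": "1930-06-15T00:00:00Z"},
    {"work": "ex:work2", "death_date": "1990-01-01T00:00:00Z"},
]

def is_public_domain(binding, term_years=70, now=None):
    """Do the date arithmetic the query itself cannot: check whether
    more than `term_years` have passed since the author's death."""
    now = now or datetime.now()
    death = datetime.strptime(binding["death_date"], "%Y-%m-%dT%H:%M:%SZ")
    return death.year + term_years < now.year

# Filter the raw query results client-side (fixed date for reproducibility).
results = [b["work"] for b in bindings
           if is_public_domain(b, now=datetime(2023, 1, 1))]
# ex:work1 qualifies (1930 + 70 = 2000 < 2023); ex:work2 does not.
```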

Update: I just realized I forgot something quite important: SPARQL 1.1 does, of course, support functions such as year() , month() , etc., which return the year and month components of a datetime as integers, and which you could use to do date arithmetic in a roundabout way.
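For instance, a SPARQL 1.1 query along these lines could do the arithmetic on the year components directly (the predicate URI is purely illustrative, not from the actual ontology):

```sparql
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?work WHERE {
  # Hypothetical predicate, for illustration only
  ?work <http://example.org/authorDeathDate> ?death .
  # year() yields an xsd:integer, so ordinary numeric
  # arithmetic and comparison are allowed on it
  FILTER( year(?death) + 70 < year(now()) )
}
```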

+3

While you cannot perform arithmetic operations on dates in SPARQL 1.0, if the implementation conforms to the specification, you should at least be able to compare dates:

 PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
 SELECT * WHERE {
   # Your triple patterns here
   FILTER( ?date > "2011-11-20T00:00:00Z"^^xsd:dateTime )
 }

That comparison alone still doesn't solve your problem, though: you need to take the author's date of death and add 70 years to it. You will probably have to compute that part in client code and inject the result into your SPARQL queries. This means you may need to issue two queries: one to fetch the information, and one to determine whether the work is in the public domain. Tbh, you could probably do the second part entirely in client code and save yourself the extra query.
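As a sketch of the "compute in client code, inject into the query" idea (the predicate URI is a made-up placeholder, and the 70-year term is hard-coded for illustration):

```python
from datetime import datetime

def build_pd_query(death_date_iso, term_years=70):
    """Compute death + term_years in Python, since SPARQL 1.0 cannot,
    then inject the resulting cutoff into a plain date comparison,
    which SPARQL 1.0 engines generally do support."""
    death = datetime.strptime(death_date_iso, "%Y-%m-%dT%H:%M:%SZ")
    cutoff = death.replace(year=death.year + term_years)
    return """PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
ASK {
  # Hypothetical predicate, for illustration only
  ?work <http://example.org/publicationDate> ?pub .
  FILTER( ?pub > "%s"^^xsd:dateTime )
}""" % cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")

query = build_pd_query("1930-06-15T00:00:00Z")
# The cutoff embedded in the query is 1930 + 70 = 2000-06-15T00:00:00Z.
```

The resulting string can then be sent to any SPARQL 1.0 endpoint as the second of the two queries.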

It's not perfect, but until there is a Python library with good SPARQL 1.1 support, you're stuck with this approach.

+1

Source: https://habr.com/ru/post/1382132/

