Testing second-hand knowledge in an ontology

How would you assign objective certainties to statements asserted by different users of the ontology?

For example, suppose user A claims that “Bob's hat is blue”, while user B claims that “Bob's hat is red”. How would you determine whether:

  • User A and user B refer to different people named Bob, and each may or may not be correct.
  • Both users refer to the same person, but user A is right and user B is wrong (or vice versa).
  • Both users refer to the same person, but user A is telling the truth and user B is lying (or vice versa).
  • Both users refer to the same person, and both users are either mistaken or lying.

The main difficulty I see is that the ontology has no way to obtain first-hand data (for example, it cannot ask Bob what color his hat is).

I understand that there is probably no absolutely objective way to solve this problem. Are there any heuristics that could be used? Does this problem have an official name?

+3
3 answers

I am not a specialist in this field, but I have worked a little with uncertainty in ontologies and the Semantic Web. There are certainly approaches to this problem that have nothing to do with the Semantic Web, but that is where my knowledge ends.

First, part of your problem is known in the Semantic Web community as the Identity Crisis of URIs: the question of what a URI actually identifies, and whether two URIs denote the same thing. For the rest of this answer I'll assume the knowledge is represented in RDF (Resource Description Framework).

" /" , :

User A's claims:

  • X isA Person
  • X hasName "Bob"
  • X hasHat H1
  • H1 isA Hat
  • H1 hasColor Blue

User B's claims:

  • Y isA Person
  • Y hasName "Bob"
  • Y hasHat H2
  • H2 isA Hat
  • H2 hasColor Red

Note that X, Y, H1 and H2 are four distinct resources, so the two sets of statements do not contradict each other. RDF does not assume that X and Y denote the same individual just because both are named "Bob"; to identify them you would have to say so explicitly, e.g. with owl:sameAs. (This is a consequence of the absence of a unique name assumption: distinct URIs may or may not refer to the same thing.)
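A minimal sketch of this modeling in Python with rdflib; the `http://example.org/` namespace and the predicates `hasName`, `hasHat`, `hasColor` are illustrative assumptions, not an established vocabulary:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()

# User A's claims
g.add((EX.X, RDF.type, EX.Person))
g.add((EX.X, EX.hasName, Literal("Bob")))
g.add((EX.X, EX.hasHat, EX.H1))
g.add((EX.H1, RDF.type, EX.Hat))
g.add((EX.H1, EX.hasColor, EX.Blue))

# User B's claims
g.add((EX.Y, RDF.type, EX.Person))
g.add((EX.Y, EX.hasName, Literal("Bob")))
g.add((EX.Y, EX.hasHat, EX.H2))
g.add((EX.H2, RDF.type, EX.Hat))
g.add((EX.H2, EX.hasColor, EX.Red))

# So far the graph is consistent: nothing above says X and Y are the
# same individual.  Identifying them is a separate, explicit claim:
g.add((EX.X, OWL.sameAs, EX.Y))
```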

To record that these are claims made by users A and B rather than ground truth, you can use RDF Reification, which lets you make statements about statements. That way you can express things like "UserA statesThat (...)" and attach a confidence value to each such reified statement.
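A sketch of that idea using rdflib's reification vocabulary; `statedBy` and `confidence` are made-up predicates standing in for whatever provenance scheme you choose:

```python
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")
g = Graph()

# Reify the statement (X hasHat H1) so we can make statements about it.
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.X))
g.add((stmt, RDF.predicate, EX.hasHat))
g.add((stmt, RDF.object, EX.H1))

# Attach provenance and certainty to the statement itself,
# not to Bob or his hat.
g.add((stmt, EX.statedBy, EX.UserA))
g.add((stmt, EX.confidence, Literal(0.8, datatype=XSD.double)))
```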

Actually reasoning over such annotated statements is the hard part. There are description-logic reasoners such as RACER that can check an ontology for consistency and classify it, but I am not aware of one that handles reified statements and confidence values out of the box.

If you don't want to commit to RDF, you could also roll your own representation and inference, for example in LISP.

I hope this helps.

+4

, " ", . , , .

One heuristic: judge users by their track record, as sketched below. If user A has made other claims that contradict what you already know, while user B's other claims check out, then, all else being equal, B's statement about Bob's hat deserves more weight.
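A toy sketch of that track-record heuristic; the scoring scheme and the example facts are my own assumptions:

```python
def reliability(claims, verified_facts):
    """Fraction of a user's checkable claims that agree with known facts."""
    checkable = [s for s in claims if s in verified_facts]
    if not checkable:
        return 0.5  # no evidence either way: neutral score
    hits = sum(claims[s] == verified_facts[s] for s in checkable)
    return hits / len(checkable)

facts  = {("grass", "hasColor"): "green", ("sky", "hasColor"): "blue"}
user_a = {("grass", "hasColor"): "purple", ("bob_hat", "hasColor"): "blue"}
user_b = {("grass", "hasColor"): "green", ("bob_hat", "hasColor"): "red"}

# B's track record is better, so weight B's claim about the hat higher.
print(reliability(user_a, facts))  # 0.0
print(reliability(user_b, facts))  # 1.0
```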

+1

Option 1: a probabilistic (Bayesian) approach. Attach a probability to each statement and update it as evidence arrives, e.g.: "the probability that Bob's hat is blue is 90%, given that user A says it is".
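That update is just Bayes' rule; a sketch where the 0.5 prior and the 90% accuracy for user A are assumed numbers:

```python
def posterior_blue(prior, accuracy):
    """P(hat is blue | A says "blue"), assuming A reports the true
    color with probability `accuracy` and the other color otherwise."""
    p_says_blue = accuracy * prior + (1 - accuracy) * (1 - prior)
    return accuracy * prior / p_says_blue

print(posterior_blue(prior=0.5, accuracy=0.9))  # 0.9
```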

Option 2: fuzzy logic. Instead of a probability (i.e., instead of estimating how likely it is that Bob's hat is blue given that A claims it), give each statement a degree of truth and combine the degrees when claims conflict.
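A sketch of combining such degrees with the standard fuzzy connectives; the degree values themselves are assumed:

```python
# Assumed degrees of truth in [0, 1] -- not probabilities.
blue_per_a = 0.8  # how strongly we accept A's "blue" claim
red_per_b = 0.6   # how strongly we accept B's "red" claim

# Standard fuzzy connectives: AND = min, OR = max, NOT = 1 - x.
# Degree to which "the hat is blue and not red" holds:
print(min(blue_per_a, 1 - red_per_b))  # 0.4
```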

0