Lab: SPARQL Programming
Topics
SPARQL programming in Python:
- with rdflib: to manage an rdflib Graph internally in a program
- with SPARQLWrapper and GraphDB: to manage an RDF graph stored externally in GraphDB (on your own local machine, but in principle it could be anywhere on the internet)
Motivation: Last week we entered SPARQL queries and updates manually from the web interface. But in the majority of cases we want to program the management of triples in our graphs, for example to handle automatic or scheduled updates.
Important: There were quite a lot of SPARQL tasks in the last exercise. There are a lot of tasks in this exercise too, but the important thing is that you get to try the different types of SPARQL programming. How many SPARQL queries and updates you do is somewhat up to you, but you must try at least one query and one update with both rdflib and SPARQLWrapper. It is also best if you try several different types of SPARQL queries: a SELECT, a CONSTRUCT or DESCRIBE, and an ASK.
Useful materials
Tasks
SPARQL programming in Python with rdflib
Getting ready: No additional installation is needed. You are already running Python and rdflib.
Parse the file russia_investigation_kg.ttl into an rdflib Graph. (The original file is available here: File:Russia investigation kg.txt. Rename it from .txt to .ttl).
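A minimal sketch of the parsing step (assuming the renamed file sits in your working directory):
from rdflib import Graph

g = Graph()
g.parse("russia_investigation_kg.ttl", format='ttl')
print(len(g), "triples loaded")   # quick sanity check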
Task: Write the following queries and updates with Python and rdflib. See boilerplate examples below.
- Print out a list of all the predicates used in your graph.
- Print out a sorted list of all the presidents represented in your graph.
- Create a dictionary (Python dict) with all the represented presidents as keys. For each key, the value is a list of names of people indicted under that president.
- Use an ASK query to investigate whether Donald Trump has pardoned more than 5 people.
- Use a DESCRIBE query to create a new graph with information about Donald Trump. Print out the graph in Turtle format.
Note that different types of queries return objects with different contents. You can use code completion in your IDE or Python's dir() function to explore this further (for example dir(results)). A small sketch follows the list below.
- SELECT: returns an object you can iterate over (among other things) to get the table rows (the result object also contains table headers)
- ASK: returns an object that contains a single logical value (True or False)
- DESCRIBE and CONSTRUCT: return an rdflib Graph
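For example, a sketch of how each result type can be consumed with rdflib (assuming g is the Graph you parsed above; the query strings and the described IRI are only placeholders):
# SELECT: iterate over the rows; variables are accessible by name
for row in g.query("SELECT DISTINCT ?p WHERE { ?s ?p ?o }"):
    print(row.p)

# ASK: the result object holds a single boolean
print(g.query("ASK { ?s ?p ?o }").askAnswer)

# DESCRIBE / CONSTRUCT: the result wraps an rdflib Graph
described = g.query("DESCRIBE <http://example.org/Donald_Trump>").graph
print(described.serialize(format='ttl'))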
Contents of the file 'spouses.ttl':
@prefix ex: <http://example.org/> .
@prefix schema: <https://schema.org/> .
ex:Donald_Trump schema:spouse ( ex:IvanaTrump ex:MarlaMaples ex:MelaniaTrump ) .
Boilerplate code for rdflib query:
from rdflib import Graph
g = Graph()
g.parse("spouses.ttl", format='ttl')
result = g.query("""
PREFIX ex: <http://example.org/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX schema: <https://schema.org/>
SELECT ?spouse WHERE {
ex:Donald_Trump schema:spouse / rdf:rest* / rdf:first ?spouse .
}""")
for row in result:
    print("Donald has spouse %s" % row)
Boilerplate code for rdflib update (using the KG4News graph again):
from rdflib import Graph
update_str = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dct: <http://purl.org/dc/terms/>
PREFIX kg: <http://i2s.uib.no/kg4news/>
PREFIX ss: <http://semanticscholar.org/>
INSERT DATA {
kg:paper_123 rdf:type ss:Paper ;
ss:title "Semantic Knowledge Graphs for the News: A Review"@en ;
kg:year 2022 ;
dct:contributor kg:auth_456, kg:auth_789 .
}"""
g = Graph()
g.update(update_str)
print(g.serialize(format='ttl')) # format='turtle' also works
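The boilerplate only shows INSERT DATA. A sketch of an update with a WHERE pattern, reusing the graph g and the kg: prefix from above (the year change is just an example):
update_str = """
PREFIX kg: <http://i2s.uib.no/kg4news/>
DELETE { ?paper kg:year 2022 . }
INSERT { ?paper kg:year 2023 . }
WHERE  { ?paper kg:year 2022 . }"""
g.update(update_str)
print(g.serialize(format='ttl'))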
SPARQL programming in Python with SPARQLWrapper and GraphDB
Getting ready: You need a running and activated GraphDB repository as in Exercise 3: SPARQL. You will run GraphDB locally to keep things simple.
Install SPARQLWrapper (in your virtual environment):
pip install SPARQLWrapper
Some older versions also require you to install the requests library. The SPARQLWrapper page on GitHub contains more information.
Continue with the russia_investigation_kg.ttl example.
Task: Program the following queries and updates with SPARQLWrapper and GraphDB.
- Ask whether there was an ongoing investigation on the date 1990-01-01.
- List ongoing investigations on that date 1990-01-01.
- Describe investigation number 100 (muellerkg:investigation_100).
- Print out a list of all the types used in your graph (see the sketch after this list).
- Update the graph so that every resource that is an object in a muellerkg:investigation triple has the rdf:type muellerkg:Investigation.
- Update the graph so that every resource that is an object in a muellerkg:person triple has the rdf:type muellerkg:IndictedPerson.
- Update the graph so that all the investigation nodes (such as muellerkg:watergate) become the subject of a dc:title triple with the corresponding string (watergate) as the literal.
- Print out a sorted list of all the indicted persons represented in your graph.
- Print out the minimum, average and maximum indictment days for all the indictments in the graph.
- Print out the minimum, average and maximum indictment days for all the indictments in the graph per investigation.
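As a starting point, the "all types" task only needs a generic SELECT. A sketch against a local GraphDB endpoint (adjust SERVER and REPOSITORY to your own setup):
from SPARQLWrapper import SPARQLWrapper

SERVER = 'http://localhost:7200'   # adjust if needed
REPOSITORY = 'lab04'               # adjust to your repository name
client = SPARQLWrapper(f'{SERVER}/repositories/{REPOSITORY}')
client.setReturnFormat('json')
client.setQuery("SELECT DISTINCT ?type WHERE { ?s a ?type }")
for binding in client.queryAndConvert()["results"]["bindings"]:
    print(binding["type"]["value"])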
Note that different types of queries return different data formats with different structures (a small sketch of handling both follows this list):
- SELECT and ASK: return a SPARQL Results Document in either XML, JSON, or CSV/TSV format.
- DESCRIBE and CONSTRUCT: return an RDF graph serialised in TURTLE or RDF/XML syntax, for example.
- Use a DESCRIBE query to create an rdflib Graph about Oliver Stone. Print the graph out in Turtle format.
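A sketch of how the two result shapes can be handled with SPARQLWrapper (the endpoint and the described resource are placeholders; adjust them to your repository):
from rdflib import Graph
from SPARQLWrapper import SPARQLWrapper

client = SPARQLWrapper('http://localhost:7200/repositories/lab04')   # adjust if needed

# ASK: with the JSON result format, the answer is a single boolean
client.setReturnFormat('json')
client.setQuery("ASK { ?s ?p ?o }")
print(client.queryAndConvert()["boolean"])

# DESCRIBE / CONSTRUCT: ask for Turtle and parse the result into an rdflib Graph
client.setReturnFormat('turtle')
client.setQuery("DESCRIBE <http://example.org/Donald_Trump>")   # placeholder resource
g = Graph()
g.parse(data=client.queryAndConvert(), format='ttl')
print(g.serialize(format='ttl'))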
Boilerplate code for SPARQLWrapper query:
from SPARQLWrapper import SPARQLWrapper
SERVER = 'http://localhost:7200' # you may want to change this
REPOSITORY = 'lab04' # you most likely want to change this
endpoint = f'{SERVER}/repositories/{REPOSITORY}' # standard path for GraphDB queries
query = """
PREFIX ex: <http://example.org/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX schema: <https://schema.org/>
SELECT ?spouse WHERE {
ex:Donald_Trump schema:spouse / rdf:rest* / rdf:first ?spouse .
}"""
client = SPARQLWrapper(endpoint)
client.setReturnFormat('json')
client.setQuery(query)
print('Spouses:')
results = client.queryAndConvert()
for result in results["results"]["bindings"]:
    print(result["spouse"]["value"])
Boilerplate code for SPARQLWrapper update:
from SPARQLWrapper import SPARQLWrapper
SERVER = 'http://localhost:7200' # you may want to change this
REPOSITORY = 'lab04' # you most likely want to change this
endpoint = f'{SERVER}/repositories/{REPOSITORY}/statements' # standard path for GraphDB updates
update_str = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dct: <http://purl.org/dc/terms/>
PREFIX kg: <http://i2s.uib.no/kg4news/>
PREFIX ss: <http://semanticscholar.org/>
INSERT DATA {
kg:paper_123 rdf:type ss:Paper ;
ss:title "Semantic Knowledge Graphs for the News: A Review"@en ;
kg:year 2023 ;
dct:contributor kg:auth_654, kg:auth_789 .
}"""
client = SPARQLWrapper(endpoint)
client.setMethod('POST')
client.setQuery(update_str)
res = client.queryAndConvert()
If you have more time
Continue with the russia_investigation_kg.ttl example. Use either rdflib or SPARQLWrapper as you prefer - or both :-)
Task: Write a query that lists all the resources in your graph that have Wikidata prefixes (i.e., that start with http://www.wikidata.org/entity/). Use the result to generate a list of Wikidata entity identifiers (i.e., Q-codes such as ['Q13', 'Q42', 'Q80']).
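A possible sketch with rdflib (assuming g holds the russia_investigation graph, as in the rdflib section; the FILTER simply checks the namespace):
query = """
SELECT DISTINCT ?res WHERE {
    { ?res ?p ?o } UNION { ?s ?p ?res }
    FILTER(isIRI(?res) && STRSTARTS(STR(?res), "http://www.wikidata.org/entity/"))
}"""
qcodes = [str(row.res).rsplit('/', 1)[-1] for row in g.query(query)]
print(qcodes)   # Q-codes such as ['Q13', 'Q42', 'Q80']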
Task: Install the wikidata API:
pip install wikidata
Check out the following code:
from wikidata.client import Client
client = Client()
q80 = client.get('Q80')
Use the API to extend your local graph, for example with descriptions of some of your resources.
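For example, a sketch that adds an English description to the local graph (assuming g is the rdflib Graph you built earlier; schema:description is just one reasonable property choice):
from rdflib import Literal, Namespace
from wikidata.client import Client

WD = Namespace('http://www.wikidata.org/entity/')
SCHEMA = Namespace('https://schema.org/')

wd_client = Client()
entity = wd_client.get('Q80', load=True)   # Q80: Tim Berners-Lee
# lang='en' assumes the default English text returned by the API
g.add((WD.Q80, SCHEMA.description, Literal(str(entity.description), lang='en')))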
Task: The wikidata API is good for simple tasks, but SPARQL is much more powerful. To explore available Wikidata properties, you can go to the web GUI at https://query.wikidata.org and try
DESCRIBE wd:Q80 # or Q7358961...
You want to use prefixes like these (predefined in the Wikidata query interface):
PREFIX wd: <http://www.wikidata.org/entity/>       # for resources
PREFIX wdt: <http://www.wikidata.org/prop/direct/> # for properties
Stay away from the p: and wds: prefixes for now.
Task: Write an embedded query that extends your local graph further, for example with more resource types. Property P31 in Wikidata corresponds to rdf:type in your local graph. Use LIMIT, and make sure the query runs in the web GUI before you embed it.
Task: For resources that are humans (entity Q5), you can add further information, for example about party affiliation and about significant events the person has been involved in (see the sketch after the boilerplate below).
Boilerplate for embedded Wikidata queries:
PREFIX wd: <http://www.wikidata.org/entity/> # for Wikidata resources
PREFIX wdt: <http://www.wikidata.org/prop/direct/> # for Wikidata properties
SELECT * WHERE {
    # your local query here, which binds the Wikidata identifier ?wdresource
    # ?wdresource must be a URI that starts with http://www.wikidata.org/entity/
    # test binding:
    BIND(wd:Q80 AS ?wdresource)

    SERVICE <https://query.wikidata.org/bigdata/namespace/wdq/sparql> {
        # return the Wikidata types of ?wdresource
        SELECT * WHERE {
            ?wdresource wdt:P31 ?wdtype .
        }
        LIMIT 5 # always use a limit in remote queries
    }

    # possible to continue the local query here
}
LIMIT 10
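For the tasks about humans (Q5), the SERVICE block above can be extended with additional patterns. A sketch, where P102 (member of political party) and P793 (significant event) are assumptions you should verify in the web GUI:
SERVICE <https://query.wikidata.org/bigdata/namespace/wdq/sparql> {
    SELECT * WHERE {
        ?wdresource wdt:P31 wd:Q5 .                  # only humans
        OPTIONAL { ?wdresource wdt:P102 ?party . }   # P102: member of political party (check in the web GUI)
        OPTIONAL { ?wdresource wdt:P793 ?event . }   # P793: significant event (check in the web GUI)
    }
    LIMIT 5
}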
Task: You can also try to connect to the INFO216 Sandbox (read/write) and KG4News Server (read-only).
Both SPARQL endpoints run Blazegraph, which uses "namespaces" instead of "repositories", so the URLs are a little different. In the web UI, the "NAMESPACES" tab lets you select an existing namespace or create a new one to use.
SERVER = 'http://info216.i2s.uib.no/bigdata/' # you may want to change this
NAMESPACE = 'lab04' # you most likely want to change this
endpoint = f'{SERVER}namespace/{NAMESPACE}/sparql' # standard path for Blazegraph queries (and updates)