Père-Lachaise Cemetery, Paris - year of death
People buried in the Père-Lachaise Cemetery (excluding the columbarium), by year of death

Reuse these data in your code

Query, endpoint and code for reusing the same data (a raw HTTP sketch in Python follows the query below)
https://query.wikidata.org/sparql
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX wd: <http://www.wikidata.org/entity/> 
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?year (COUNT (DISTINCT ?a) AS ?count) WHERE {
   ?a wdt:P119 wd:Q311 .    # P119 = place of burial: Père-Lachaise Cemetery
   ?a wdt:P570 ?date .      # P570 = date of death
   BIND(year(?date) AS ?year)
   FILTER(?year > 1)
} GROUP BY ?year ORDER BY ?year
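Every sample below performs the same HTTP exchange: it sends this query to the endpoint and asks for SPARQL 1.1 JSON results. As a minimal sketch of that exchange (not part of the original examples; it assumes the third-party Python requests library is installed):

# Minimal sketch: send the query above to the endpoint over plain HTTP
# and read the SPARQL 1.1 JSON results. Assumes "pip install requests".
import requests

endpoint = "https://query.wikidata.org/sparql"
query = """
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

SELECT ?year (COUNT(DISTINCT ?a) AS ?count) WHERE {
   ?a wdt:P119 wd:Q311 .    # buried at Père-Lachaise Cemetery
   ?a wdt:P570 ?date .      # date of death
   BIND(YEAR(?date) AS ?year)
   FILTER(?year > 1)
} GROUP BY ?year ORDER BY ?year
"""

r = requests.get(
    endpoint,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
r.raise_for_status()
for binding in r.json()["results"]["bindings"]:
    print(binding["year"]["value"], binding["count"]["value"])

The language-specific clients shown below essentially wrap this same request.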
How to write a SPARQL query? (in French)
{{#sparql:PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX wd: <http://www.wikidata.org/entity/> 
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?year (COUNT (DISTINCT ?a) AS ?count) WHERE {
   ?a wdt:P119 wd:Q311 .    # buried at Père-lachaise cemetery 
   ?a wdt:P570 ?date . 
   BIND(year(?date) AS ?year)
   FILTER(?year > 1)
} GROUP BY ?year ORDER BY ?year
| endpoint = https://query.wikidata.org/sparql }}
How to insert this graph in my wiki?
Test this script in a new tab.
<html>
    <head>
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
    </head>
    <body onload="testQuery();">
        <script>
function testQuery(){
    var endpoint = "https://query.wikidata.org/sparql";
    var query = "PREFIX wikibase: <http://wikiba.se/ontology#>\n\
PREFIX wd: <http://www.wikidata.org/entity/> \n\
PREFIX wdt: <http://www.wikidata.org/prop/direct/>\n\
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n\
\n\
SELECT ?year (COUNT (DISTINCT ?a) AS ?count) WHERE {\n\
   ?a wdt:P119 wd:Q311 .    # buried at Père-lachaise cemetery \n\
   ?a wdt:P570 ?date . \n\
   BIND(year(?date) AS ?year)\n\
   FILTER(?year > 1)\n\
} GROUP BY ?year ORDER BY ?year";

    $.ajax({
                url: endpoint,
                dataType: 'json',
                data: {
                    queryLn: 'SPARQL',
                    query: query,
                    // extra options for other SPARQL servers; the Wikidata endpoint ignores them
                    limit: 'none',
                    infer: 'true'
                },
                headers: {
                    // ask explicitly for SPARQL 1.1 JSON results
                    Accept: 'application/sparql-results+json'
                },
                success: displayResult,
                error: displayError
        });
}

function displayError(xhr, textStatus, errorThrown) {
    console.log(textStatus);
    console.log(errorThrown);
}

function displayResult(data) {
    $.each(data.results.bindings, function(index, bs) {
        console.log(bs);
        $("body").append(JSON.stringify(bs) + "<hr/>");
    });
}

        </script>
    </body>
</html>
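Each element of data.results.bindings handled by displayResult follows the SPARQL 1.1 Query Results JSON format: one object per selected variable, with type, value and, for typed literals, datatype fields. For this query a single binding looks roughly like the following Python sketch (values invented, only the structure matters):

# Illustrative shape of one entry of data.results.bindings for this query.
# The year/count values are invented for illustration only.
binding = {
    "year":  {"type": "literal",
              "datatype": "http://www.w3.org/2001/XMLSchema#integer",
              "value": "1871"},
    "count": {"type": "literal",
              "datatype": "http://www.w3.org/2001/XMLSchema#integer",
              "value": "3"},
}
print(binding["year"]["value"], binding["count"]["value"])  # prints: 1871 3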
Test this script in a new tab (careful: several chart types need an API key).
How to insert this graph in my HTML page?
<html>
    <head>
        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
        <script type="text/javascript" src="https://www.google.com/jsapi"></script>
        <script type="text/javascript" src="https://bordercloud.github.io/sgvizler2/sgvizler2/sgvizler2.js"></script>
    </head>
<body style="margin:0;">
<div id="sgvzl_example_query"
   data-sgvizler-endpoint="https://query.wikidata.org/sparql"
   data-sgvizler-query="PREFIX wikibase: &lt;http://wikiba.se/ontology#&gt;
PREFIX wd: &lt;http://www.wikidata.org/entity/&gt; 
PREFIX wdt: &lt;http://www.wikidata.org/prop/direct/&gt;
PREFIX rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt;

SELECT ?year (COUNT (DISTINCT ?a) AS ?count) WHERE {
   ?a wdt:P119 wd:Q311 .    # buried at P&egrave;re-lachaise cemetery 
   ?a wdt:P570 ?date . 
   BIND(year(?date) AS ?year)
   FILTER(?year &gt; 1)
} GROUP BY ?year ORDER BY ?year"
    data-sgvizler-chart='google.visualization.LineChart'
    data-sgvizler-chart-options=''
    data-sgvizler-endpoint_output_format='json'
    data-sgvizler-log='2'
    style='width:100%; height:200px;'></div>

<script>
$(document).ready(function() {
   sgvizler2.containerDrawAll({
       // Google Api key
       googleApiKey : "GOOGLE_MAP_API_KEY",
       // OpenStreetMap Access Token
       //  https://www.mapbox.com/api-documentation/#access-tokens
       osmAccessToken : "OSM_MAP_API_KEY"
     });

});
</script>

</body>
</html>
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery("""
    PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX wd: <http://www.wikidata.org/entity/> 
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?year (COUNT (DISTINCT ?a) AS ?count) WHERE {
   ?a wdt:P119 wd:Q311 .    # buried at Père-lachaise cemetery 
   ?a wdt:P570 ?date . 
   BIND(year(?date) AS ?year)
   FILTER(?year > 1)
} GROUP BY ?year ORDER BY ?year""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for result in results["results"]["bindings"]:
    print(result)
    # print(result["year"]["value"], result["count"]["value"])
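Continuing the SPARQLWrapper script above, the bindings can be flattened into a table; a sketch assuming the pandas library is installed and reusing the results variable from that script:

# Optional follow-up to the script above (assumes "pip install pandas" and
# reuses the `results` dictionary returned by sparql.query().convert()).
import pandas as pd

rows = [
    {"year": int(b["year"]["value"]), "count": int(b["count"]["value"])}
    for b in results["results"]["bindings"]
]
df = pd.DataFrame(rows)
print(df.head())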
How to use SPARQL with Python?
library(SPARQL)  # SPARQL querying package
library(ggplot2) # plotting package, for charting the resulting data frame (see the tutorial link below)

# Step 1 - Set up preliminaries and define query
# Define the Wikidata endpoint
endpoint <- "https://query.wikidata.org/sparql"
# create query statement
query <- "PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX wd: <http://www.wikidata.org/entity/> 
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?year (COUNT (DISTINCT ?a) AS ?count) WHERE {
   ?a wdt:P119 wd:Q311 .    # buried at Père-lachaise cemetery 
   ?a wdt:P570 ?date . 
   BIND(year(?date) AS ?year)
   FILTER(?year > 1)
} GROUP BY ?year ORDER BY ?year"
# Step 2 - Use SPARQL package to submit query and save results to a data frame
qd <- SPARQL(endpoint,query)
df <- qd$results
SPARQL with R in less than 5 minutes
#!/usr/bin/env ruby
#
# Install sparql for Ruby
#   gem update --system
#   gem install sparql
#
require 'sparql/client'

endpoint = "https://query.wikidata.org/sparql"
sparql = <<-EOT
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX wd: <http://www.wikidata.org/entity/> 
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?year (COUNT (DISTINCT ?a) AS ?count) WHERE {
   ?a wdt:P119 wd:Q311 .    # buried at Père-lachaise cemetery 
   ?a wdt:P570 ?date . 
   BIND(year(?date) AS ?year)
   FILTER(?year > 1)
} GROUP BY ?year ORDER BY ?year
EOT

# For Wikidata, the GET method is required.
# For other SPARQL endpoints, the POST method is preferred
# (a plain-HTTP POST sketch follows this example).
client = SPARQL::Client.new(endpoint, :method => :get)
rows = client.query(sparql)

puts "Number of rows: #{rows.size}"
for row in rows
  for key,val in row do
    # print "#{key.to_s.ljust(10)}: #{val}\t"
    print "#{key}: #{val}\t"
  end
  print "\n"
end
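As the comments in the Ruby example note, Wikidata is queried with GET, while many other endpoints prefer POST for long queries. An illustrative sketch of the POST form of the SPARQL protocol, again using Python's requests library and a placeholder endpoint URL:

# Illustrative sketch of POSTing a SPARQL query (form-encoded "query" field).
# "https://example.org/sparql" is a placeholder endpoint, not from this page.
import requests

r = requests.post(
    "https://example.org/sparql",
    data={"query": "SELECT * WHERE { ?s ?p ?o } LIMIT 5"},
    headers={"Accept": "application/sparql-results+json"},
)
r.raise_for_status()
print(r.json()["results"]["bindings"])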
Ruby documentation for SPARQL 1.1
endpoint = 'https://query.wikidata.org/sparql';

query = ['PREFIX wikibase: <http://wikiba.se/ontology#> '...
'PREFIX wd: <http://www.wikidata.org/entity/>  '...
'PREFIX wdt: <http://www.wikidata.org/prop/direct/> '...
'PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> '...
' '...
'SELECT ?year (COUNT (DISTINCT ?a) AS ?count) WHERE { '...
'   ?a wdt:P119 wd:Q311 . '...   % buried at Père-Lachaise Cemetery (kept as a MATLAB comment: a SPARQL "#" comment inside this single-line query string would comment out the rest of the query)
'   ?a wdt:P570 ?date .  '...
'   BIND(year(?date) AS ?year) '...
'   FILTER(?year > 1) '...
'} GROUP BY ?year ORDER BY ?year '];

url_head = strcat(endpoint,'?query=');
url_query = urlencode(query);
format = 'text/tab-separated-values';
url_tail = strcat('&format=', format);

url = strcat(url_head, url_query, url_tail);

% get the data from the endpoint
query_results = urlread(url);

% write the data to a file so that tdfread can parse it
fid = fopen('query_results.txt','w');
if fid>=0
    fprintf(fid, '%s\n', query_results)
    fclose(fid)
end

% this reads the tsv file into a struct
sparql_data = tdfread('query_results.txt')
MatlabSPARQL project on GitHub
<?php
require __DIR__ . '/../vendor/autoload.php';
use BorderCloud\SPARQL\SparqlClient;

$endpoint ="https://query.wikidata.org/sparql";
$sp_readonly = new SparqlClient();
$sp_readonly->setEndpointRead($endpoint);
$q = <<<EOD
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX wd: <http://www.wikidata.org/entity/> 
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?year (COUNT (DISTINCT ?a) AS ?count) WHERE {
   ?a wdt:P119 wd:Q311 .    # buried at Père-lachaise cemetery 
   ?a wdt:P570 ?date . 
   BIND(year(?date) AS ?year)
   FILTER(?year > 1)
} GROUP BY ?year ORDER BY ?year
EOD;
$rows = $sp_readonly->query($q, 'rows');
$err = $sp_readonly->getErrors();
if ($err) {
      print_r($err);
      throw new Exception(print_r($err,true));
}

foreach ($rows["result"]["variables"] as $variable) {
    printf("%-20.20s", $variable);
    echo '|';
}
echo "\n";

foreach ($rows["result"]["rows"] as $row) {
    foreach ($rows["result"]["variables"] as $variable) {
        printf("%-20.20s", $row[$variable]);
        echo '|';
    }
    echo "\n";
}
?>
BorderCloud/SPARQL project on GitHub
import com.bordercloud.sparql.Endpoint;
import com.bordercloud.sparql.EndpointException;
import java.util.ArrayList;
import java.util.HashMap;

public class Main {

    public static void main(String[] args) {
        try {
            Endpoint sp = new Endpoint("https://query.wikidata.org/sparql", false);

            String querySelect = "PREFIX wikibase: <http://wikiba.se/ontology#> \n"
                    + "PREFIX wd: <http://www.wikidata.org/entity/> \n"
                    + "PREFIX wdt: <http://www.wikidata.org/prop/direct/> \n"
                    + "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> \n"
                    + " \n"
                    + "SELECT ?year (COUNT (DISTINCT ?a) AS ?count) WHERE { \n"
                    + "   ?a wdt:P119 wd:Q311 .    # buried at Père-Lachaise Cemetery \n"
                    + "   ?a wdt:P570 ?date . \n"
                    + "   BIND(year(?date) AS ?year) \n"
                    + "   FILTER(?year > 1) \n"
                    + "} GROUP BY ?year ORDER BY ?year \n";

            HashMap<String, HashMap> rs = sp.query(querySelect);
            printResult(rs, 30);

        } catch (EndpointException eex) {
            System.out.println(eex);
            eex.printStackTrace();
        }
    }

    public static void printResult(HashMap<String, HashMap> rs, int size) {
        // print the column headers
        for (String variable : (ArrayList<String>) rs.get("result").get("variables")) {
            System.out.print(String.format("%-" + size + "." + size + "s", variable) + " | ");
        }
        System.out.print("\n");
        // print one line per result row
        for (HashMap<String, Object> row : (ArrayList<HashMap<String, Object>>) rs.get("result").get("rows")) {
            for (String variable : (ArrayList<String>) rs.get("result").get("variables")) {
                System.out.print(String.format("%-" + size + "." + size + "s", row.get(variable)) + " | ");
            }
            System.out.print("\n");
        }
    }
}
BorderCloud/SPARQL-JAVA project on GitHub