<?xml version="1.0" encoding="utf-8"?>

<rdf:RDF
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
  xmlns="http://purl.org/rss/1.0/"
  xmlns:admin="http://webns.net/mvcb/"
  xmlns:annotate="http://purl.org/rss/1.0/modules/annotate/"
  xmlns:dcterms="http://purl.org/dc/terms/"
  xmlns:cc="http://web.resource.org/cc/"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:foaf="http://xmlns.com/foaf/0.1/"
  xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/">

  <channel rdf:about="http://www.thefigtrees.net/lee/blog/">
    <title>TechnicaLee Speaking</title>
    <link>http://www.thefigtrees.net/lee/blog/</link>
    <description>Software designs, implementations, solutions, and musings by Lee Feigenbaum</description>
    <dc:language>en-us</dc:language>
    <dc:creator></dc:creator>
    <dc:date>2012-03-05T00:55:26-05:00</dc:date>
    <admin:generatorAgent 
       rdf:resource="http://www.movabletype.org/?v=4.23-en" />
    

    <items>
       <rdf:Seq>
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2012/03/enterprise_semantics_blog.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2011/12/linked_enterprise_data_pattern.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2011/09/saving_months_not_milliseconds.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2011/09/why_semantic_web_technologies_1.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2011/08/the_magic_crank.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2011/08/why_semantic_web_technologies.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2011/06/anzo_connect_semantic_web_etl.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2011/05/evolution_towards_web_30_the_s.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2011/03/describing_the_structure_of_rd.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2011/01/cambridge_semantics_is_hiring.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2010/11/sparql_rdf_datasets_from_from.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2010/07/could_semtech_run_on_excel_sem.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2010/07/early_sparql_reviews.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2009/09/does_anyone_use_sparql_over_so.html" />
       
          <rdf:li rdf:resource="http://www.thefigtrees.net/lee/blog/2009/07/constructing_quads.html" />
       </rdf:Seq>
    </items>

  </channel>

  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2012/03/enterprise_semantics_blog.html">
      <title>Enterprise Semantics Blog</title>
      <link>http://www.thefigtrees.net/lee/blog/2012/03/enterprise_semantics_blog.html</link>
      <description>We (Cambridge Semantics) have recently launched a new blog, Enterprise Semantics. The blog covers a mix of technical and business topics related to the use of semantic technologies inside large enterprises. I&apos;m writing some posts on that blog, and I&apos;ll be continuing to put posts here as well. You can sign up to follow the blog in an RSS reader via its feed, or you can receive emails when there are new posts by subscribing on the blog itself, or you can just follow us @CamSemantics. (The feed is not currently syndicated by Planet RDF, so if you read my...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2012-03-05T00:55:26-05:00</dc:date>
    <content:encoded><![CDATA[<p>We (<a href="http://www.cambridgesemantics.com">Cambridge Semantics</a>) have recently launched a new blog, <a href="http://www.cambridgesemantics.com/blog">Enterprise Semantics</a>. The blog covers a mix of technical and business topics related to the use of semantic technologies inside large enterprises. I'm writing some posts on that blog, and I'll be continuing to put posts here as well. You can sign up to follow the blog in an RSS reader via <a href="http://feeds.feedburner.com/EnterpriseSemantics">its feed</a>, or you can receive emails when there are new posts by subscribing on the blog itself, or you can just follow us <a href="https://twitter.com/#!/CamSemantics">@CamSemantics</a>. (The feed is not currently syndicated by Planet RDF, so if you read my blog via Planet RDF and are interested in enterprise semantics, you should probably still sign up separately.)</p>  <p>Here's just a taste of some of the content we've published in the first two months of the blog:</p>  <h2><font color="#0066cc"><a href="http://www.cambridgesemantics.com/blog/-/blogs/what-happened-to-nosql-for-the-enterprise-">What Happened to NoSQL for the Enterprise?</a></font></h2>  <blockquote>   <p>So what it comes down to is that for decades we’ve had one standard way to store and query important data, and today there are new choices.&#160; As with any choice, there are tradeoffs, and for some applications NoSQL databases, including Semantic Web databases, can enable organizations to get more done in less time and with less hardware than relational databases.&#160; The trick is to know when and how to deploy these new tools.</p> </blockquote>  <h2><font color="#0066cc"><a href="http://www.cambridgesemantics.com/blog/-/blogs/big-data-or-right-data-">Big Data... or Right Data?</a></font></h2>  <blockquote>   <p>What matters most, Big Data or Right Data? 
One look at all the IT headlines these days would suggest that Big Data is the most important data issue today. After all, with lots of computing power and better database storage techniques it is now practical to analyze petabytes of data. However, is that really the most compelling need that end users have? I don’t think so. Instead, I would claim that the issue most end users have is getting together the right data to help them do their jobs better, not analyzing billions of individual transactions.</p> </blockquote>  <h4><a href="http://www.cambridgesemantics.com/blog/-/blogs/what-the-semantic-web-and-digital-cameras-have-in-common">What the Semantic Web and Digital Cameras have in Common</a></h4>  <blockquote>   <p>Analog photography went through lots of phases of dramatic improvement, becoming a mass-market technology. But no matter how far it went, it was limited in its flexibility. Every picture was pretty much as you took it. Any modification required real experts, with specialist equipment and working in a dark room. With the advent of digital photography we have achieved extreme flexibility. The picture you take is simply the starting point to create the picture you want, and the end users themselves can make the changes with easy-to-use tools.</p>    <p>Semantic Web technology represents the same dramatic shift from the traditional technologies.</p> </blockquote>  <h2><font color="#0066cc"><a href="http://www.cambridgesemantics.com/blog/-/blogs/why-semantic-web-software-must-be-easy-er-to-use">Why Semantic Web Software Must Be Easy(er) to Use</a></font></h2>  <blockquote>   <p>In short, if Semantic Web software is hard to use, then many of the benefits of using these technologies in the first place are immediately lost. Conversely, if Semantic Web software is easy to use, then the benefits of Semantic Web technologies' flexibility are brought directly to the end user, the business user. 
The business manager can bring together new data sets for analysis today, rather than a week from now. An analyst can set up triggers and alerts to monitor for key business indicators today, rather than waiting 3 months. A senior scientist can begin looking for correlations within ad-hoc sets of data today, rather than next year.</p> </blockquote>  <h4><a href="http://www.cambridgesemantics.com/blog/-/blogs/it-s-all-about-the-data-model">It's All About the Data Model</a></h4>  <blockquote>   <p>There is a new data model called RDF—the data model of the Semantic Web—which combines the best of both worlds: the flexibility of a spreadsheet and the manageability and data integrity of a relational database. Based on standards set by the World Wide Web Consortium (W3C) to enable data combination on the Web, RDF defines each data cell by the entity it applies to (row) and the attribute it represents (column). Each cell is self-describing and not locked into a grid; in other words, the data doesn't have to be &quot;regular&quot;. Further, it has formal operations that can be performed on it, much like relational algebra, but clearly at a more atomic level.</p></blockquote>
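<p>(A toy sketch of the "self-describing cell" idea quoted above, using plain Python tuples rather than any particular RDF library; the ex: names are invented for illustration.)</p>

```python
# Each RDF "cell" is a self-describing (entity, attribute, value) triple,
# so different "rows" don't have to share the same "columns".
triples = set()

def add(entity, attribute, value):
    triples.add((entity, attribute, value))

add("ex:alice", "ex:name", "Alice")
add("ex:alice", "ex:email", "alice@example.org")
add("ex:bob", "ex:name", "Bob")
add("ex:bob", "ex:worksFor", "ex:acme")  # Alice has no worksFor cell at all

def values(entity, attribute):
    """All values in the cell at (row=entity, column=attribute)."""
    return {v for (e, a, v) in triples if e == entity and a == attribute}

print(values("ex:bob", "ex:worksFor"))    # {'ex:acme'}
print(values("ex:alice", "ex:worksFor"))  # set() -- no cell, no NULL padding
```

<p>The point is just that the data needn't be "regular": adding a new attribute to one entity touches no schema and no other entity.</p>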
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2011/12/linked_enterprise_data_pattern.html">
      <title>Linked Enterprise Data Patterns Workshop</title>
      <link>http://www.thefigtrees.net/lee/blog/2011/12/linked_enterprise_data_pattern.html</link>
      <description>I spent Tuesday and Wednesday this week at the W3C Linked Enterprise Data Patterns workshop at MIT (#LEDP). After all, we do linked data and we work with large enterprise customers, so it seemed like a natural fit. The workshop was an interesting two days hearing folks share their experiences using linked data (and sometimes not using linked data) in enterprise situations (and sometimes not in enterprise situations). The main consensus that emerged from the workshop was a desire for a set of profiles of conformance criteria for what constitutes interoperable linked data implementations. I&apos;m personally pretty certain though that...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2011-12-09T00:13:51-05:00</dc:date>
    <content:encoded><![CDATA[<p>I spent Tuesday and Wednesday this week at the <a href="http://www.w3.org/2011/09/LinkedData/">W3C Linked Enterprise Data Patterns workshop</a> at MIT (<a href="http://twitter.com/#!/search/%23ledp">#LEDP</a>). After all, <a href="http://www.cambridgesemantics.com/products/semantic-web-platform">we do linked data</a> and <a href="http://www.cambridgesemantics.com/company/customers">we work with large enterprise customers</a>, so it seemed like a natural fit. The workshop was an interesting two days hearing folks share their experiences using linked data (and sometimes not using linked data) in enterprise situations (and sometimes not in enterprise situations). The main consensus that emerged from the workshop was a desire for a set of profiles of conformance criteria for what constitutes interoperable linked data implementations. I'm personally pretty certain though that the consensus ends there: people continue to have very different views of what pieces of the Semantic Web technology stack (or related technologies like REST and Atom) are most important for a linked data deployment. Eric Prud'hommeaux tried to classify the linked data camps into those doing data integration and storage and query and those doing HTTPy resource linking, but I'm guessing the distinctions are even more nuanced than that.</p>  <p>Anyways, on Wednesday I gave a talk on the patterns we use to segment data within Anzo, as well as some of our other usages of Semantic Web technologies and where we see gaps in the standards world (frankly, more in adoption than in specification). I recorded a screencast of the talk—it's not the most polished, but if you weren't able to attend the workshop you might be interested in the talk. I've also posted <a href="http://www.slideshare.net/LeeFeigenbaum/data-segmenting-in-anzo">the slides</a> themselves online. 
Here's the video:</p> <iframe height="315" src="http://www.youtube.com/embed/pCDZOB4zrgU" frameborder="0" width="420" allowfullscreen="allowfullscreen"></iframe>  <p align="left">There were a couple of discussions in the middle of the talk that I had to cut out because they involved too much cross-talk taking place far away from the mic and were hard to understand. One was about the way that we (by default) break data into graphs and how it privileges RDF subjects over objects, and whether that affects access control decisions (our experience: no). Another, around the 9-minute mark, covered the use of the same URI to identify a graph and the subject of data within that graph. A third concerned ongoing efforts to extend VoID to do additional descriptions of linked data endpoints.</p>
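<p>(A rough, hypothetical sketch of the subject-based segmentation pattern from the talk, not Anzo's actual implementation: file each triple into a named graph whose URI is the triple's subject URI.)</p>

```python
from collections import defaultdict

def segment_by_subject(triples):
    """Group triples into named graphs keyed by subject URI."""
    graphs = defaultdict(set)
    for s, p, o in triples:
        graphs[s].add((s, p, o))  # graph URI == subject URI
    return dict(graphs)

data = [
    ("ex:alice", "ex:name", "Alice"),
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:bob", "ex:name", "Bob"),
]
graphs = segment_by_subject(data)
print(sorted(graphs))  # ['ex:alice', 'ex:bob']
# The subject/object asymmetry from the first discussion is visible here:
# ex:bob appears as an object inside ex:alice's graph, so access control
# applied to ex:bob's own graph doesn't hide that reference.
```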
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2011/09/saving_months_not_milliseconds.html">
      <title>Saving Months, Not Milliseconds: Do More Faster with the Semantic Web</title>
      <link>http://www.thefigtrees.net/lee/blog/2011/09/saving_months_not_milliseconds.html</link>
      <description>When I suggested that we&apos;re often asking the wrong question about why we should use Semantic Web technologies, I promised that I&apos;d write more about what it is about these technologies that lowers the barrier to entry enough to let us do (lots of) things that we otherwise wouldn&apos;t. In the meantime, some other people have done a great job of anticipating and echoing my own thoughts on the topic, so I&apos;m going to summarize them here. The bottom line is this: The Semantic Web lets you do things fast. And because you can do things fast, you can do...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2011-09-27T00:11:38-05:00</dc:date>
    <content:encoded><![CDATA[<p>When I suggested that <a href="http://www.thefigtrees.net/lee/blog/2011/08/why_semantic_web_technologies.html">we're often asking the wrong question about why we should use Semantic Web technologies</a>, I promised that I'd write more about what it is about these technologies that lowers the barrier to entry enough to let us do (lots of) things that we otherwise wouldn't. In the meantime, some other people have done a great job of anticipating and echoing my own thoughts on the topic, so I'm going to summarize them here.</p>  <p>The bottom line is this: The Semantic Web lets you do things fast. And because you can do things fast, you can do lots more things than you could before. You can afford to do things that fail (fail fast); you can afford to do things that are unproven and speculative (exploratory analysis); you can afford to do things that are only relevant this week or today (on-demand or situational applications); and you can afford to do things that change rapidly. Of course, you can also do things that you would have done with other technology stacks, only you can have them up and running (&amp; ready to be improved, refined, extended, and leveraged) in a fraction of the time that you otherwise would have spent.</p>  <p>The word &quot;fast&quot; can be a bit deceptive when talking about technology. We can all be a bit obsessed with what I call <em>stopwatch time</em>. Stopwatch time is speed measured in seconds (or less). It's raw performance: How much quicker does my laptop boot up with an SSD? How long does it take to load 100 million records into a database? How many queries per second does your SPARQL implementation do on the Berlin benchmark with and without a recent round of optimizations?</p>  <p>We always talk about stopwatch time. Stopwatch time is impressive. Stopwatch time is <em>sexy</em>. 
But stopwatch time is often far less important than <em>calendar time</em>.</p>  <p>Calendar time is measured in hours and days or in weeks and months and years. Calendar time is the <em>actual time it takes to get an answer to a question</em>. Not just the time it takes to push the &quot;Go&quot; button and let some software application do a calculation, but all of the time necessary to get to an answer: to install, configure, design, deploy, test, and use an application.</p>  <p>Calendar time is what matters. If my relational database application renders a sales forecast report in 500 milliseconds while my Semantic Web application takes 5 seconds, you might hear people say that the relational approach is 10 times faster than the Semantic Web approach. But if it took six months to design and build the relational solution versus two weeks for the Semantic Web solution, Semantic Sam will be adjusting his supply chain and improving his efficiencies long before Relational Randy has even seen his first report. The Semantic Web lets you do things fast, in calendar time.</p>  <p>Why is this? Ultimately, it's because of the inherent flexibility of the Semantic Web data model (RDF). This flexibility has been described in many different ways. RDF relies on an <a href="http://www.mkbergman.com/974/making-the-argument-for-semantic-technologies/">adaptive, resilient schema</a> (from Mike Bergman); it enables <a href="http://weblog.clarkparsia.com/2011/05/04/how-to-create-business-value-with-semantic-tech/">cooperation without coordination</a> (from David Wood via Kendall Clark); it can be incrementally evolved; changes to one part of a system don't require re-designs to the rest of the system. 
These are all dimensions of the same core flexibility of Semantic Web technologies, and it is this flexibility that lets you do things fast with the Semantic Web.</p>  <hr /><em>(There is a bit of nuance here: if stopwatch performance is below a minimum threshold of acceptability, then no one will use a solution in the first place. Semantic Web technologies have had a bit of a reputation for this in the past, but that hasn't been true for a long time. I'll write more about that in a future post.)</em>
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2011/09/why_semantic_web_technologies_1.html">
      <title>Why Semantic Web Technologies: Common, Coherent, Standard</title>
      <link>http://www.thefigtrees.net/lee/blog/2011/09/why_semantic_web_technologies_1.html</link>
      <description><![CDATA[To paraphrase both Ecclesiastes and Michael Stonebraker &amp; Joseph Hellerstein, there is nothing new under the sun. It's as true with Semantic Web technologies as with anything else—tuples are straightforward, ontologies build on schema languages and description logics that have been around for ages, URIs have been baked into the Web for twenty years, etc. But while the technologies are not new, the circumstances are. In particular, the W3C set of Semantic Web technologies is particularly valuable for having been brought together as a common, coherent set of standards. Common. Semantic Web technologies are broadly applicable to many, many different...]]></description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2011-09-12T10:34:18-05:00</dc:date>
    <content:encoded><![CDATA[<p>To paraphrase both Ecclesiastes and <a href="http://mitpress.mit.edu/books/chapters/0262693143chapm1.pdf">Michael Stonebraker &amp; Joseph Hellerstein</a>, there is nothing new under the sun. </p>  <p>It's as true with Semantic Web technologies as with anything else—tuples are straightforward, ontologies build on schema languages and description logics that have been around for ages, URIs have been baked into the Web for twenty years, etc. But while the technologies are not new, the circumstances are. In particular, the <a href="http://www.w3.org/2001/sw/">W3C set of Semantic Web technologies</a> is particularly valuable for having been brought together as a <em>common, coherent set of standards</em>. </p>  <ul>   <li><em>Common. </em>Semantic Web technologies are broadly applicable to many, many different use cases. People use them to publish pricing data online, to uncover market opportunities, to integrate data in the bowels of corporate IT, to open government data, to promote structured scientific discourse, to build open social networks, to reform supply chain inefficiencies, to search employee skill sets, and to accomplish about ten thousand other tasks. This makes a one-size-fits-all elevator pitch challenging, but it also means that there's a large audience of practitioners who are benefiting from these technologies and so are coming together to create standards, build tool sets, and implement solutions. These are not niche technologies with limited resources for ongoing development or at risk of being hijacked for a purpose at odds with your own.</li>    <li><em>Coherent. </em>Semantic Web technologies are designed to work together. The <a href="http://www.w3.org/2007/03/layerCake.png">infamous layer cake diagram</a> may have many shortcomings, but it does demonstrate that these technologies fit together like jigsaw puzzle pieces. 
This means that I can build an application using the RDF data model, and then incrementally bring new functionality online by adopting other Semantic Web technologies. Without a coherent set of technologies, I'd have to either roll my own solutions for new functionality (expensive, error-prone) or try to overcome impedance mismatches in connecting together multiple unrelated technologies (expensive, error-prone). </li>    <li><em>Standard</em>. Semantic Web technologies are developed in collaborative working groups under the auspices of the World Wide Web Consortium (W3C). The specifications are free (both as in beer and as in not constrained by intellectual property) and are backed by test suites and implementation reports that go a long way to encouraging interoperable tools. </li> </ul>  <p>The technologies are not novel and are not perfect. But they are common, coherent, and standard and that sets them apart from a lot of what's come before and a lot of other options that are currently out there.</p>
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2011/08/the_magic_crank.html">
      <title>The Magic Crank</title>
      <link>http://www.thefigtrees.net/lee/blog/2011/08/the_magic_crank.html</link>
      <description>As a brief addendum to my previous post: I&apos;ve been using this image for a few years now to illustrate what the Semantic Web is not. I call it the magic crank. I imagine that it sits in the corner of the office of some senior pharma executive, and every time their drug development pipeline gets a bit thin or patent protection for the big blockbuster drugs wears off, the executive pulls it out. She dusts off the crank and plugs in the latest databases full of data on genomics, protein interactions, efficacy and safety studies, etc. A few turns...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2011-08-29T10:23:15-05:00</dc:date>
    <content:encoded><![CDATA[<p>As a brief addendum to <a href="http://www.thefigtrees.net/lee/blog/2011/08/why_semantic_web_technologies.html">my previous post</a>: I've been using this image for a few years now to illustrate what the Semantic Web <em>is not</em>. I call it the <em>magic crank</em>. I imagine that it sits in the corner of the office of some senior pharma executive, and every time their drug development pipeline gets a bit thin or patent protection for the big blockbuster drugs wears off, the executive pulls it out. She dusts off the crank and plugs in the latest databases full of data on genomics, protein interactions, efficacy and safety studies, etc. A few turns of the magic crank later, and she's rewarded with a little card that tells her exactly what drug to invest in next.</p>  <p>To me, the magic crank is the unrealized holy grail of the Semantic Web in the pharma industry. And it's an extremely powerful and valuable goal. But it's a bit dangerous as well: every time someone new to the Semantic Web learns that the magic crank is what the Semantic Web is all about, they end up trying to tackle large, unsolved problems. They end up asking &quot;<em>What can I do with Semantic Web technologies that I can't do otherwise?</em>&quot;. 
Once you've latched onto the potential of the magic crank, it's very hard to ratchet your questions back down to the less-impressive-but-practical-and-still-very-valuable, &quot;<em>What can I do with Semantic Web technologies that I <strong>wouldn't</strong> do otherwise?</em>&quot;.</p>  <p><a href="http://www.thefigtrees.net/lee/blog/Windows-Live-Writer/The-Magic-Crank_14B70/screenshot_0698_2.jpg"><img style="background-image: none; border-bottom: 0px; border-left: 0px; padding-left: 0px; padding-right: 0px; display: block; float: none; margin-left: auto; border-top: 0px; margin-right: auto; border-right: 0px; padding-top: 0px" title="screenshot_0698" border="0" alt="screenshot_0698" src="http://www.thefigtrees.net/lee/blog/Windows-Live-Writer/The-Magic-Crank_14B70/screenshot_0698_thumb.jpg" width="240" height="182" /></a></p>  <p><em>Credit for the image goes to <a href="http://idekerlab.ucsd.edu/Pages/default.aspx">Trey Ideker</a> of UCSD. I first saw the image in a presentation by <a href="http://www.linkedin.com/in/eshuang">Enoch Huang</a> at <a href="http://www.iscb.org/cshals2012">CSHALS</a> a few years ago.</em></p>
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2011/08/why_semantic_web_technologies.html">
      <title>Why Semantic Web Technologies: Are We Asking the Wrong Question?</title>
      <link>http://www.thefigtrees.net/lee/blog/2011/08/why_semantic_web_technologies.html</link>
      <description>I haven&apos;t written much lately. I&apos;ve been busy building things. And while I&apos;ve been building things, I&apos;ve been learning things. I&apos;d like to start writing and start sharing some of the things I&apos;ve been learning. I&apos;d say that at least once a week, when talking to prospective customers, I get asked the following: What can I do with Semantic Web technologies that I can&apos;t do otherwise? It&apos;s a question that&apos;s asked in good faith: enterprise software buyers have heard tales of rapid data integration, automated data inference, business-rules engines, etc. time and time again. By now, any corporate IT department...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2011-08-22T10:47:10-05:00</dc:date>
    <content:encoded><![CDATA[<p>I haven't written much lately. I've been busy building things. And while I've been building things, I've been learning things. I'd like to start writing and start sharing some of the things I've been learning. </p>  <p>I'd say that at least once a week, when talking to prospective customers, I get asked the following:</p>  <blockquote>   <p><em>What can I do with Semantic Web technologies that I can't do otherwise?</em></p> </blockquote>  <p>It's a question that's asked in good faith: enterprise software buyers have heard tales of rapid data integration, automated data inference, business-rules engines, etc. time and time again. By now, any corporate IT department likely owns several software packages that purport to accomplish the same things that Semantic Web vendors are selling them. And so a potential buyer learns about Semantic Web technologies and searches for what's new:</p>  <blockquote>   <p><em>What can I do with Semantic Web technologies that I can't do otherwise?</em></p> </blockquote>  <p>The real answer to this question is distressingly simple: not much. IT staff around the world are constantly doing data integration, data inference, data classification, data visualization, etc. using the traditional tools of the trade: Java, RDBMSes, XML…</p>  <p>But the real answer to the question misses the fact that this is the wrong question. We ought instead to ask:</p>  <blockquote>   <p><em>What can I do with Semantic Web technologies that I <strong>wouldn't</strong> do otherwise?</em></p> </blockquote>  <p>Enterprise projects are proposed all the time, and all eventually reach a go/no-go decision point. Businesses regularly consider and reject valuable projects not because they require revolutionary new magic, but because they're simply too expensive for the benefit or they'd take too long to fix the situation that's at hand <em>now. 
You don't need brand new technology to make dramatic changes to your business.</em> </p>  <p>The point of Semantic Web technology is not that it's revolutionary; it's not cold fusion, interstellar flight, or quantum computing. It's an evolutionary advantage: you could do these projects with traditional technologies, but they're just hard enough to be impractical, so IT shops don't. That's what's changing here. Once the technologies and tools are good enough to turn &quot;no-go&quot; into &quot;go&quot;, you can start pulling together the data in your department's three key databases; you can start automating data exchange between your group and a key supply-chain partner; you can start letting your line-of-business managers define their own visualizations, reports, and alerts that change on a daily basis. And when you start solving enough of these sorts of problems, you derive value that can fundamentally affect the way your company does business.</p>  <p>I'll write more in the future about what changes with Semantic Web technologies to let us cross this threshold. But for now, when you're looking for the next &quot;killer application&quot; for Semantic Web in the enterprise, you don't need to look for the impossible, just the not (previously) practical.</p>
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2011/06/anzo_connect_semantic_web_etl.html">
      <title>Anzo Connect: Semantic Web ETL in 5 Minutes</title>
      <link>http://www.thefigtrees.net/lee/blog/2011/06/anzo_connect_semantic_web_etl.html</link>
      <description>At last week&apos;s SemTech conference, my colleague Ben Szekely kicked off the business track of the lightning talks by debuting our new product, Anzo Connect. Ben showed how Anzo Connect can be used in just a few minutes (4.5 to be precise) to pull data from a relational database, map it to an ontology, integrate the data into an existing RDF store, and visualize the results in a Web-based dashboard. We took this video of the lightning demo from the audience, but it gives some idea of what Anzo Connect is all about. If you&apos;re interested in learning more about...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2011-06-27T09:24:44-05:00</dc:date>
    <content:encoded><![CDATA[<p>At last week's <a href="http://semtech2011.semanticweb.com/">SemTech conference</a>, my colleague Ben Szekely kicked off the business track of the lightning talks by debuting our new product, <a href="http://www.cambridgesemantics.com/products/anzo_connect">Anzo Connect</a>. Ben showed how Anzo Connect can be used in just a few minutes (4.5 to be precise) to pull data from a relational database, map it to an ontology, integrate the data into an existing RDF store, and visualize the results in a Web-based dashboard.</p>  <p>We took this video of the lightning demo from the audience, but it gives some idea of what Anzo Connect is all about. If you're interested in learning more about Anzo Connect or any of our other software, please <a href="mailto:lee@cambridgesemantics.com">drop me a note</a>.</p> <iframe height="265" src="http://player.vimeo.com/video/25201456?portrait=0&amp;color=196e87" frameborder="0" width="400"></iframe>  <p>(<a href="http://vimeo.com/25201456">5 Minute ETL with Anzo Connect</a>)</p>
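<p>(To give a sense of the relational-to-RDF step in the demo, here is a generic, illustrative mapping in Python with sqlite3. The table, URIs, and naive column-to-predicate rule are invented for this example; Anzo Connect's real mapping is ontology-driven and considerably richer.)</p>

```python
import sqlite3

# A toy "customers" table standing in for the source relational database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, city TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme', 'Boston'), (2, 'Initech', NULL)")

def table_to_triples(conn, table, base="http://example.org/"):
    """Emit one triple per non-NULL column value, one subject per row."""
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]
    for row in cur:
        subject = f"{base}{table}/{row[0]}"        # key column -> subject URI
        for col, val in zip(cols[1:], row[1:]):
            if val is not None:                    # NULLs simply emit no triple
                yield (subject, f"{base}schema/{col}", val)

triples = list(table_to_triples(conn, "customers"))
print(len(triples))  # 3 -- two triples for Acme, one for Initech
```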
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2011/05/evolution_towards_web_30_the_s.html">
      <title>Evolution Towards Web 3.0: The Semantic Web</title>
      <link>http://www.thefigtrees.net/lee/blog/2011/05/evolution_towards_web_30_the_s.html</link>
      <description><![CDATA[On April 21, 2011, I had the pleasure of speaking to Professor Stuart Madnick's &quot;Evolution Towards Web 3.0&quot; class at the MIT Sloan School of Management. The topic of the lecture was—unsurprisingly—the Semantic Web. I had a great time putting together the material and discussing it with the students, who seemed to be very engaged in the topic. It was a less technical audience than I often speak with, and so I tried to focus on some of the motivating trends, use cases, and challenges involved with Semantic Web technologies and the vision of the Semantic Web. I've now placed...]]></description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2011-05-31T02:52:05-05:00</dc:date>
    <content:encoded><![CDATA[<p>On April 21, 2011, I had the pleasure of speaking to <a href="http://esd.mit.edu/Faculty_Pages/madnick/madnick.htm">Professor Stuart Madnick's</a> <a href="http://mitsloan.mit.edu/academic/courses/15.565.php">&quot;Evolution Towards Web 3.0&quot;</a> class at the MIT Sloan School of Management. The topic of the lecture was—unsurprisingly—the Semantic Web. I had a great time putting together the material and discussing it with the students, who seemed to be very engaged in the topic. It was a less technical audience than I often speak with, and so I tried to focus on some of the motivating trends, use cases, and challenges involved with Semantic Web technologies and the vision of the Semantic Web.</p>  <p>I've now placed the presentation online. It's broken down into three basic parts:</p>  <ul>   <li>What about the development of the Web and enterprise IT motivates the Semantic Web?</li>    <li>How is it being used today?</li>    <li>What are some of the challenges facing the Semantic Web, both on the World Wide Web and within enterprises?</li> </ul>  <p>I found the last of the three sections particularly interesting, and I hope you do too.</p>  <p>The presentation has speaker's notes that add significant commentary to the slides. You can view them by clicking on the &quot;Speaker Notes&quot; tab below the slides. Please let me know what you think: <a href="http://www.slideshare.net/LeeFeigenbaum/evolution-towards-web-30-the-semantic-web">Evolution Towards Web 3.0: The Semantic Web</a>.</p>
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2011/03/describing_the_structure_of_rd.html">
      <title>Describing the Structure of RDF Terms</title>
      <link>http://www.thefigtrees.net/lee/blog/2011/03/describing_the_structure_of_rd.html</link>
      <description><![CDATA[I'm wondering if there are existing vocabularies and best practices that deal with the following use case: How do I write down metadata about the return type of a SPARQL function that returns a URI? Since &quot;returns a URI&quot; can be a bit ambiguous in the face of things like xsd:anyURI typed literals, we can be a bit more precise: How do I write down metadata about the return type of a SPARQL function that returns a term for which the isURI function returns true? Functions like this have all sorts of uses. We use them all the time in...]]></description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2011-03-24T10:20:16-05:00</dc:date>
    <content:encoded><![CDATA[<p>I'm wondering if there are existing vocabularies and best practices that deal with the following use case:</p>  <blockquote>   <p><em>How do I write down metadata about the return type of a SPARQL function that returns a URI?</em></p> </blockquote>  <p>Since &quot;returns a URI&quot; can be a bit ambiguous in the face of things like <tt>xsd:anyURI</tt> typed literals, we can be a bit more precise:</p>  <blockquote>   <p><em>How do I write down metadata about the return type of a SPARQL function that returns a term for which the <tt>isURI</tt> function returns <tt>true</tt>?</em></p> </blockquote>  <p>Functions like this have all sorts of uses. We use them all the time in conjunction with <tt>CONSTRUCT</tt> queries and the SPARQL 1.1 <a href="http://www.w3.org/2009/sparql/docs/query-1.1/rq25.xml#assignment">BIND</a> clause to generate URIs for new resources.</p>  <p>So, when describing this function, how do I write down the return type of one of these URI-generating functions? I want to write something like:</p>  <code>   fn:GenerateURI fn:returns ?? </code>  <p>If I had a function that returned an integer, I'd expect to be able to write something like:</p>  <code>   fn:Floor fn:returns xsd:integer </code>  <p>But in that case, I'm taking advantage of the fact that datatyped literals denote themselves. (Thanks to Andy Seaborne for pointing this out to me.) I can't say this:</p>  <code>   fn:GenerateURI fn:returns xsd:anyURI </code>  <p>This seems to tell me that my function returns something that denotes a URI. (One such thing that denotes a URI is an <tt>xsd:anyURI</tt> literal.) But, again, that's not what I want to say here. I want to say that my function returns something that is syntactically a URI. That is, it returns something that is named by a URI. 
I considered something like:</p>  <code>   fn:GenerateURI fn:returns rdfs:Resource </code>  <p>But <tt>rdfs:Resource</tt> is a class of everything, and as far as I can tell that would mean that my function could return a URI, a literal, or a blank node.</p>  <p>So, any suggestions for how to approach this sort of modeling of the return type (and parameter types) for SPARQL functions?</p>
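<p>As a concrete sketch of the CONSTRUCT-plus-BIND usage mentioned above (the ex: and foaf: namespaces and the person-minting scenario are illustrative, using the built-in SPARQL 1.1 IRI, CONCAT, and ENCODE_FOR_URI functions rather than an actual fn:GenerateURI):</p>

```sparql
# Mint a URI for each person and emit new triples about it.
# IRI(...) returns a term for which isURI() is true -- the very
# return-type property this post is trying to describe in metadata.
PREFIX ex:   <http://example.org/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

CONSTRUCT {
  ?personUri a ex:Person ;
             ex:sourceNode ?s .
}
WHERE {
  ?s foaf:name ?name .
  BIND (IRI(CONCAT("http://example.org/person/",
                   ENCODE_FOR_URI(?name))) AS ?personUri)
}
```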
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2011/01/cambridge_semantics_is_hiring.html">
      <title>Cambridge Semantics is Hiring</title>
      <link>http://www.thefigtrees.net/lee/blog/2011/01/cambridge_semantics_is_hiring.html</link>
      <description>At Cambridge Semantics, we&apos;re excited to be bringing a few new people onto our team. We&apos;re looking to hire: A Web Engineer. If you&apos;re an expert in serious JavaScript, HTML, and CSS development, this is a great position for you. You&apos;ll be working to further Anzo on the Web, our Web-based self-service reporting and data collection tool that uses semantic technologies to put flexible, data-driven visualizations and analytics in the hands of non-technical business users. A Customer Implementation Engineer. We&apos;re looking for a sharp, creative problem solver to join our professional services team and help our customers use Anzo for...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2011-01-17T10:18:06-05:00</dc:date>
    <content:encoded><![CDATA[<p>At <a href="http://www.cambridgesemantics.com">Cambridge Semantics</a>, we're excited to be bringing a few new people onto our team. We're looking to hire:</p>  <ul>   <li><a href="http://www.cambridgesemantics.com/company/jobs/web_engineer">A Web Engineer</a>. If you're an expert in serious JavaScript, HTML, and CSS development, this is a great position for you. You'll be working to further <a href="http://www.cambridgesemantics.com/products/anzo_on_the_web">Anzo on the Web</a>, our Web-based self-service reporting and data collection tool that uses semantic technologies to put flexible, data-driven visualizations and analytics in the hands of non-technical business users.</li>    <li><a href="http://www.cambridgesemantics.com/company/jobs/customer_implementation_engineer">A Customer Implementation Engineer</a>. We're looking for a sharp, creative problem solver to join our professional services team and help our customers use Anzo for Excel, Anzo on the Web, and the rest of our Anzo semantic technologies. You'll work directly with our customers to solve a wide variety of business problems and also work closely with our entire Cambridge Semantics team, from engineering to sales to marketing.</li>    <li><a href="http://www.cambridgesemantics.com/company/jobs/quality_assurance_engineer">A Quality Assurance Engineer</a>. If you're experienced in designing and executing software test plans and are looking for an exciting opportunity to apply your talents to cutting-edge enterprise semantic software, then check out this position. You'll be working to design, execute, and automate test cases for all of our current and new Anzo products to help make the software as good as it can possibly be.</li> </ul>  <p>If you're interested in applying for any of these positions, please send your resume to <a href="mailto:jobs@cambridgesemantics.com">jobs@cambridgesemantics.com</a>. 
If you know anyone who might be interested, please send them our way!</p>
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2010/11/sparql_rdf_datasets_from_from.html">
      <title>SPARQL, RDF Datasets, FROM, FROM NAMED, and GRAPH</title>
      <link>http://www.thefigtrees.net/lee/blog/2010/11/sparql_rdf_datasets_from_from.html</link>
      <description><![CDATA[Bob DuCharme suggested that I share this explanation about the role of FROM, FROM NAMED, and GRAPH within a SPARQL query. So here it is… A SPARQL query goes against an RDF dataset. An RDF dataset has two parts: A single default graph -- a set of triples with no name attached to them Zero or more named graphs -- each named graph is a pair of a name and a set of triples The FROM and FROM NAMED clauses are used to specify the RDF dataset. The statement &quot;FROM u&quot; instructs the SPARQL processor to take the graph that...]]></description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2010-11-24T00:23:41-05:00</dc:date>
    <content:encoded><![CDATA[<p><a href="http://www.snee.com/bobdc.blog/">Bob DuCharme</a> suggested that I share this explanation about the role of FROM, FROM NAMED, and GRAPH within a SPARQL query. So here it is…</p>  <hr />  <p>A SPARQL query goes against an RDF dataset. An RDF dataset has two parts: </p>  <ul>   <li>A single default graph -- a set of triples with no name attached to them </li>    <li>Zero or more named graphs -- each named graph is a pair of a name and a set of triples </li> </ul>  <p>The FROM and FROM NAMED clauses are used to specify the RDF dataset. </p>  <p>The statement &quot;FROM u&quot; instructs the SPARQL processor to take the graph that it knows as &quot;u&quot;, take all the triples from it, and add them to the single default graph. If you then also have &quot;FROM v&quot;, then you take the triples from the graph known as v and also add them to the default graph. </p>  <p>The statement &quot;FROM NAMED x&quot; instructs the SPARQL processor to take the graph that it knows as &quot;x&quot;, take all the triples from it, pair it up with the name &quot;x&quot;, and add that pair (x, triples from x) as a named graph in the RDF dataset.    <br />Note that &quot;known as&quot; is purposefully not specified -- some implementations dereference the URI to get the triples that make up that graph; others just use a graph store that maps names to triples. </p>  <p>All the parts of the query that are outside a GRAPH clause are matched against the single default graph. </p>  <p>All the parts of the query that are inside a GRAPH clause are matched individually against the named graphs. </p>  <p>This is why it sometimes makes sense to specify the same graph for both FROM and FROM NAMED: </p>  <blockquote>   <p><font face="Courier New">FROM x        <br />FROM NAMED x </font></p> </blockquote>  <p>...puts the triples from x in the default graph and also includes x as a named graph. 
That way, later in the query, triple patterns outside a GRAPH clause can match parts of x, and so can triple patterns inside a GRAPH clause. </p>  <p>There's a visual picture of this on slide 13 of my <a href="http://www.slideshare.net/LeeFeigenbaum/sparql-cheat-sheet">SPARQL Cheat Sheet slides</a>.</p>
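<p>Putting the pieces above together in one query (the graph URI and the dc:title vocabulary are illustrative):</p>

```sparql
PREFIX dc: <http://purl.org/dc/elements/1.1/>

SELECT ?title ?g
FROM <http://example.org/books>         # triples merged into the default graph
FROM NAMED <http://example.org/books>   # the same graph, also kept as a named graph
WHERE {
  ?book dc:title ?title .               # matched against the default graph
  GRAPH ?g {                            # matched against each named graph in turn
    ?book dc:title ?title
  }
}
```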
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2010/07/could_semtech_run_on_excel_sem.html">
      <title>Could SemTech Run On Excel? (SemTech Lightning Demo)</title>
      <link>http://www.thefigtrees.net/lee/blog/2010/07/could_semtech_run_on_excel_sem.html</link>
      <description>At SemTech a couple of weeks ago, I participated in the jam-packed lightning talk session, 90 minutes packed with 5-minute talks and moderated with great aplomb by Paul Miller. While most of the speakers presented pithy, informative, and witty slide decks, I opted to go a different route: I&apos;ve long believed that some of the biggest value in Semantic Web technologies lies in their ability to dramatically change the timescales involved in traditional IT projects—to this end, I used my 5 minute slot to give a live demo of using our Anzo software suite to build a solution for running...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2010-07-07T10:02:35-05:00</dc:date>
    <content:encoded><![CDATA[<p>At SemTech a couple of weeks ago, I participated in the jam-packed lightning talk session: 90 minutes of 5-minute talks, moderated with great aplomb by Paul Miller. While most of the speakers presented pithy, informative, and witty slide decks, I opted to go a different route: I've long believed that some of the biggest value in Semantic Web technologies lies in their ability to dramatically change the timescales involved in traditional IT projects—to this end, I used my 5-minute slot to give a live demo of using our Anzo software suite to build a solution for running a conference such as SemTech using just Excel and a Web browser. </p>  <p>When I got back to Boston, I made a recording of the same lightning demo for posterity. Please enjoy it here and <a href="mailto:lee@thefigtrees.net">drop me a note</a> if you have any questions or would like to learn more.</p>  <object width="640" height="385"><param name="movie" value="http://www.youtube.com/v/hJQMlHoUEVU&amp;hl=en_US&amp;fs=1?rel=0&amp;hd=1"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/hJQMlHoUEVU&amp;hl=en_US&amp;fs=1?rel=0&amp;hd=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="640" height="385"></embed></object>  <p>(Best viewed in full screen, 720p.)</p>
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2010/07/early_sparql_reviews.html">
      <title>Early SPARQL Reviews</title>
      <link>http://www.thefigtrees.net/lee/blog/2010/07/early_sparql_reviews.html</link>
      <description>The SPARQL Working Group is still working on all of our specifications. None are yet at Last Call, though we feel our designs are quite stable, and we&apos;re hoping to reach Last Call within a few months. Standard W3C process encourages interested community members to review Working Drafts as they&apos;re produced, but especially encourages reviews of Last Call drafts. While we will of course do this (soliciting as widespread a review of our Last Call drafts as possible), I&apos;d like to put out a call for reviews of our current set of Working Drafts. If you can only do one review,...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2010-07-01T13:59:49-05:00</dc:date>
    <content:encoded><![CDATA[<p><a href="http://www.thefigtrees.net/lee/blog/WindowsLiveWriter/d50358b40700_8F8D/sw-sparql-orange%5B1%5D_2.png"><img style="border-bottom: 0px; border-left: 0px; margin: 5px 2px 0px 0px; display: inline; border-top: 0px; border-right: 0px" title="sw-sparql-orange[1]" border="0" alt="sw-sparql-orange[1]" align="left" src="http://www.thefigtrees.net/lee/blog/WindowsLiveWriter/d50358b40700_8F8D/sw-sparql-orange%5B1%5D_thumb.png" width="84" height="19" /></a>The SPARQL Working Group is still working on all of our specifications. None are yet at Last Call, though we feel our designs are quite stable, and we're hoping to reach Last Call within a few months. Standard W3C process encourages interested community members to review Working Drafts as they're produced, but <em>especially</em> encourages reviews of Last Call drafts.</p>  <p>While we will of course do this (soliciting as widespread a review of our Last Call drafts as possible), I'd like to put out a call for reviews of our current set of Working Drafts. If you can only do one review, you're probably best off waiting for Last Call; but if you have the inclination and time, it would be great to receive reviews of our current set of Working Drafts at our comments list at <a href="mailto:public-rdf-dawg-comments@w3.org">public-rdf-dawg-comments@w3.org</a>. 
The Working Group has committed to responding formally to all comments received from here on out.</p>  <p>Here is our current set of documents, along with a few explicit areas/issues that the Working Group and editors would love to receive feedback about (of course, <em>all</em> reviews &amp; <em>all</em> feedback are welcome):</p>  <h2><a href="http://www.w3.org/TR/sparql11-query/">SPARQL 1.1 Query</a></h2>  <ul>   <li>Feedback on <tt>MINUS</tt> and <tt>NOT EXISTS</tt>, the two new negation constructs in SPARQL 1.1 (section 8) </li>    <li>Feedback on the new functions in SPARQL 1.1 (15.4.14 through 15.4.21) </li>    <li>Feedback on the aggregates (&quot;set functions&quot;) included in SPARQL 1.1 (section 10.2.1) </li>    <li>Feedback on property paths (currently in <a href="http://www.w3.org/TR/sparql11-property-paths/">its own document</a>) </li> </ul>  <h2><a href="http://www.w3.org/TR/sparql11-update/">SPARQL 1.1 Update</a></h2>  <ul>   <li>Handling of RDF datasets in SPARQL Update (particularly the <tt>WITH</tt>, <tt>USING</tt>, and <tt>USING NAMED</tt> clauses) </li> </ul>  <h2><a href="http://www.w3.org/TR/sparql11-service-description/">SPARQL 1.1 Service Description</a></h2>  <ul>   <li>Discovery mechanism for service descriptions (section 2) </li>    <li>Modeling of graphs and RDF datasets (3.2.7 through 3.2.10 and 3.4.11 through 3.4.17) </li>    <li>Service description as related to entailment (3.2.5, 3.2.6 and 3.4.3 through 3.4.5) </li> </ul>  <h2><a href="http://www.w3.org/TR/sparql11-entailment/">SPARQL 1.1 Entailment Regimes</a></h2>  <ul>   <li>The mechanisms for restricting solutions in all regimes </li>    <li>Are the OWL Direct Semantics too general? E.g., they allow variables in complex class expressions </li> </ul>  <h2><a href="http://www.w3.org/TR/sparql11-federated-query/">SPARQL 1.1 Federation Extensions</a></h2>  <ul>   <li>Should support for SERVICE be mandatory in SPARQL 1.1 Query implementations? 
</li>    <li>Should support for BINDINGS be mandatory in SPARQL 1.1 Query implementations? </li> </ul>  <h2><a href="http://www.w3.org/TR/sparql11-http-rdf-update/">SPARQL 1.1 Uniform HTTP Protocol for Managing RDF Graphs</a></h2>  <ul>   <li>Interpretation/translation of HTTP verbs into SPARQL Update statements </li>    <li>Handling of indirect graph identification (section 4.2 et al.) </li> </ul>
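<p>For reviewers new to the two negation constructs mentioned under SPARQL 1.1 Query, here is a small illustrative pair (the foaf: vocabulary and the "people without a mailbox" scenario are my own sketch, not text from the drafts):</p>

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# People with no known mailbox, via FILTER NOT EXISTS ...
SELECT ?person WHERE {
  ?person a foaf:Person .
  FILTER NOT EXISTS { ?person foaf:mbox ?mbox }
}

# ... and via MINUS. The two can differ: MINUS removes solutions
# by joining on shared variables, so a MINUS pattern that shares
# no variables with the outer pattern removes nothing.
SELECT ?person WHERE {
  ?person a foaf:Person .
  MINUS { ?person foaf:mbox ?mbox }
}
```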
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2009/09/does_anyone_use_sparql_over_so.html">
      <title>Does anyone use SPARQL over SOAP?</title>
      <link>http://www.thefigtrees.net/lee/blog/2009/09/does_anyone_use_sparql_over_so.html</link>
      <description>The SPARQL Working Group would like to know if anyone uses SPARQL over SOAP. Please leave a comment if you do. (We know that several implementations support a SOAP implementation of the SPARQL protocol, but we don’t have much evidence that this part of such implementations is ever used.) Thanks!...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2009-09-08T11:32:35-05:00</dc:date>
    <content:encoded><![CDATA[<p>The SPARQL Working Group would like to know if anyone uses SPARQL over SOAP. Please leave a comment if you do. (We know that several implementations support a SOAP implementation of the SPARQL protocol, but we don’t have much evidence that this part of such implementations is ever used.)</p>  <p>Thanks!</p>
]]></content:encoded>

    </item>
  
    <item rdf:about="http://www.thefigtrees.net/lee/blog/2009/07/constructing_quads.html">
      <title>CONSTRUCTing Quads</title>
      <link>http://www.thefigtrees.net/lee/blog/2009/07/constructing_quads.html</link>
      <description>I promised Danny that I’d write this up, so here’s to making good on promises. Open Anzo is a quad store. (I’ve written about this before.) All of the services Open Anzo offers—versioning, replication, real-time updates, access control, etc.—are oriented around named graphs. Time and time again we’ve found named graphs to be invaluable in building applications atop an RDF repository. And while SPARQL took the first steps towards standardizing quads via the named graphs component of the RDF dataset, the CONSTRUCT query result form only returned triples. For our purposes in Open Anzo, this severely limits the usefulness of...</description>
      <dc:subject>semantic web</dc:subject>
      <dc:creator>lee</dc:creator>
      <dc:date>2009-07-07T18:18:08-05:00</dc:date>
    <content:encoded><![CDATA[<p>I promised <a href="http://twitter.com/danja">Danny</a> that I’d write this up, so here’s to making good on promises.</p>  <p><a href="http://openanzo.org">Open Anzo</a> is a quad store. (I’ve written about this <a href="http://www.thefigtrees.net/lee/blog/2009/03/named_graphs_in_open_anzo.html">before</a>.) All of the services Open Anzo offers—versioning, replication, real-time updates, access control, etc.—are oriented around named graphs. Time and time again we’ve found named graphs to be invaluable in <a href="http://www.cambridgesemantics.com/products/anzo_for_excel">building</a> <a href="http://www.cambridgesemantics.com/products/anzo_on_the_web">applications</a> atop an RDF repository.</p>  <p>And while SPARQL took the first steps towards standardizing quads via the named graphs component of the <a href="http://www.w3.org/TR/rdf-sparql-query/#rdfDataset">RDF dataset</a>, the CONSTRUCT query result form only returned triples.</p>  <p>For our purposes in Open Anzo, this severely limits the usefulness of CONSTRUCT. We can’t use it to pull out a subset of the server’s data, as any data returned has been stripped of its named graph component. The solution was pretty simple, and is a good example of practicing what I’ve been preaching recently: a key part of the standards process is for implementations to extend the standards.</p>  <p>In this case, we simply extended Glitter’s (Open Anzo’s SPARQL engine) CONSTRUCT templates to support a GRAPH clause, in exactly the same way that SPARQL query patterns support GRAPH clauses. 
This means that any triple pattern within a CONSTRUCT template will now either output a triple (if it's outside any GRAPH clause) or a quad (if it's inside a GRAPH clause).</p>  <p>Key to making this happen is the fact that both the Open Anzo server and the three client APIs (Java, JavaScript, and .NET) support serializing and deserializing quads to/from the <a href="http://www4.wiwiss.fu-berlin.de/bizer/TriG/">TriG RDF serialization format</a>. TriG's a very straightforward extension of Turtle, and I'd like to see it used more and more throughout Semantic Web circles.</p>  <p>Anyway, here are a few simple examples of CONSTRUCTing quads in practice:</p>  <pre># fix up typo'ed predicates
CONSTRUCT {
  GRAPH ?g {
    ?s rdf:type ?o
  }
} WHERE {
  GRAPH ?g {
    ?s rdf:typo ?o
  }
}

# copy triples into a new graph
CONSTRUCT {
  GRAPH ex:newGraph {
    ?s ?p ?o
  }
} WHERE {
  ?s ?p ?o
}

# more complicated -- place constructed triples in
# a new “inferred” graph and indicate this fact in
# an Open Anzo metadata graph associated with the source graph
CONSTRUCT {
  GRAPH ex:inferredGraph {
    ?p ex:uncle ?uncle
  }
  GRAPH ?mdg {
    ?mdg anzo:hasInferredGraph true
  }
} WHERE {
  GRAPH ?g {
    ?p ex:parent [ ex:brother ?uncle ] .
  }
  GRAPH ?mdg {
    ?mdg a anzo:metadatagraph ; anzo:namedGraph ?g
  }
}   </pre>

<p>Of course, combine this with some of <a href="http://www.openanzo.org/projects/openanzo/wiki/SPARQLExtensions">the other SPARQL extensions that Glitter supports</a>—subqueries, projected expressions, assignment, and aggregates being my favorites—and you’ve got a powerful way to transform and extract quad-based RDF data.</p>
]]></content:encoded>

    </item>
  

</rdf:RDF>