LEADER 05301nam 2200625 450 001 9910813903503321 005 20200903223051.0 010 $a1-61499-337-8 035 $a(CKB)2550000001179582 035 $a(EBL)1589005 035 $a(SSID)ssj0000728176 035 $a(PQKBManifestationID)11426952 035 $a(PQKBTitleCode)TC0000728176 035 $a(PQKBWorkID)10689948 035 $a(PQKB)11383643 035 $a(Au-PeEL)EBL1589005 035 $a(CaPaEBR)ebr10827968 035 $a(CaONFJC)MIL559721 035 $a(OCoLC)868226524 035 $a(MiAaPQ)EBC1589005 035 $a(EXLCZ)992550000001179582 100 $a20140123h20102010 uy 0 101 0 $aeng 135 $aur|n|---||||| 181 $ctxt 182 $cc 183 $acr 200 10$aBenchmarking semantic web technology /$fRaúl García Castro 210 1$aHeidelberg, Germany :$cIOS Press :$cAKA,$d2010. 210 4$d©2010 215 $a1 online resource (338 p.) 225 1 $aStudies on the Semantic Web,$x1868-1158 ;$vVolume 003 300 $aDescription based upon print version of record. 311 $a1-60750-053-1 311 $a1-306-28470-8 320 $aIncludes bibliographical references. 327 $aTitle Page; Acknowledgements; Contents; Introduction; Context; The Semantic Web; Brief introduction to Semantic Web technologies; Semantic Web technology evaluation; The need for benchmarking in the Semantic Web; Semantic Web technology interoperability; Heterogeneity in ontology representation; The interoperability problem; Categorising ontology differences; Thesis contributions; Thesis structure; State of the Art; Software evaluation; Benchmarking; Benchmarking vs evaluation; Benchmarking classifications; Evaluation and improvement methodologies; Benchmarking methodologies 327 $aSoftware Measurement methodologies; Experimental Software Engineering methodologies; Benchmark suites; Previous interoperability evaluations; Conclusions; Work objectives; Thesis goals and open research problems; Contributions to the state of the art; Work assumptions, hypothesis and restrictions; Benchmarking methodology for Semantic Web technologies; Design principles; Research methodology; Selection of relevant processes; Identification of the main tasks; Task adaption and completion; Analysis of task 
dependencies; Benchmarking methodology; Benchmarking actors; Benchmarking process 327 $aPlan phase; Experiment phase; Improvement phase; Recalibration task; Organizing the benchmarking activities; Plan phase; Experiment phase; RDF(S) Interoperability Benchmarking; Experiment definition; RDF(S) Import Benchmark Suite; RDF(S) Export Benchmark Suite; RDF(S) Interoperability Benchmark Suite; Experiment execution; Experiments performed; Experiment automation; RDF(S) import results; KAON RDF(S) import results; Protege-Frames RDF(S) import results; WebODE RDF(S) import results; Corese, Jena and Sesame RDF(S) import results; Evolution of RDF(S) import results; Global RDF(S) import results 327 $aRDF(S) export results; KAON RDF(S) export results; Protege-Frames RDF(S) export results; WebODE RDF(S) export results; Corese, Jena and Sesame RDF(S) export results; Evolution of RDF(S) export results; Global RDF(S) export results; RDF(S) interoperability results; KAON interoperability results; Protege-Frames interoperability results; WebODE interoperability results; Global RDF(S) interoperability results; OWL Interoperability Benchmarking; Experiment definition; The OWL Lite Import Benchmark Suite; Benchmarks that depend on the knowledge model; Benchmarks that depend on the syntax 327 $aDescription of the benchmarks; Towards benchmark suites for OWL DL and Full; Experiment execution: the IBSE tool; IBSE requirements; IBSE implementation; Using IBSE; OWL compliance results; GATE OWL compliance results; Jena OWL compliance results; KAON2 OWL compliance results; Protege-Frames OWL compliance results; Protege-OWL OWL compliance results; SemTalk OWL compliance results; SWI-Prolog OWL compliance results; WebODE OWL compliance results; Global OWL compliance results; OWL interoperability results; OWL interoperability results per tool; Global OWL interoperability results 327 $aEvolution of OWL interoperability results 330 $aThis book addresses the problem of benchmarking Semantic Web 
Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined: 410 0$aStudies on the Semantic Web ;$vv. 3. 606 $aSemantic Web 615 0$aSemantic Web. 676 $a025.04 700 $aGarcía Castro$bRaúl$01678554 801 0$bMiAaPQ 801 1$bMiAaPQ 801 2$bMiAaPQ 906 $aBOOK 912 $a9910813903503321 996 $aBenchmarking semantic web technology$94046298 997 $aUNINA