Experimental Track Research and Development: Objectives and Overview


This article provides an overview of the Experimental Track, with a focus on its objectives and phases. It discusses the importance of conducting experimental studies with scalable and distributed triplestores, and how the lessons learned from the experience can be used to produce effective and efficient solutions.



Presentation Transcript


1. Experimental Track Research and Development

2. Outline
A Brief Overview of the Experimental Track
Phase I: The Imitation Game
Phase II: Can we make the world better?

3. Outline
A Brief Overview of the Experimental Track
Phase I: The Imitation Game
Phase II: Can we make the world better?

4. An Overview of the Experimental Track: Objectives
The objectives of the experimental track are twofold:
Conduct an experimental study with scalable and distributed triplestores to replicate their results and compare their performance.
Use the lessons learned from that experience to produce effective and efficient solutions.

5. An Overview of the Experimental Track: Basic Concepts
Resource Description Framework (RDF): the foundation for defining data structures on the Semantic Web (SW). It defines data in the form of (subject, predicate, object) triples. A collection of RDF data forms a directed, labeled graph.
Blinked Data: a special type of data combining the two most recent notions, Big Data and Linked Data.
Triplestore: a purpose-built database for the storage and retrieval of triples.
[Figure: example graph — CEDAR_PROJECT rdf:type PROJECT; CEDAR_PROJECT fundedBy ANR]
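As a minimal sketch of how a set of triples forms a directed, labeled graph, the following plain Python (no RDF library; names mirror the slide's example) treats each triple as one labeled edge:

```python
# Each RDF triple is an edge: subject --predicate--> object.
# The two triples below mirror the CEDAR example on this slide.
triples = [
    ("CEDAR_PROJECT", "rdf:type", "PROJECT"),
    ("CEDAR_PROJECT", "fundedBy", "ANR"),
]

graph = {}
for s, p, o in triples:
    graph.setdefault(s, []).append((p, o))   # adjacency list per subject

for subject, edges in graph.items():
    for predicate, obj in edges:
        print(f"{subject} --{predicate}--> {obj}")
```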

6. An Overview of the Experimental Track: Basic Concepts
SPARQL: a query language for retrieving and manipulating RDF databases.
Hadoop/MapReduce: the de facto framework for building scalable and distributed applications that store and process Big Data on clusters of commodity hardware. The core of the framework is the Hadoop Distributed File System (HDFS), and it relies on the MapReduce programming paradigm.
Benchmarking tool: facilitates the evaluation of triplestores with respect to queries.
[Figure: the architecture of Hadoop]
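To make the MapReduce paradigm concrete, here is a pure-Python simulation (not Hadoop itself, just an illustration of the map/shuffle/reduce steps) that counts how many triples use each predicate:

```python
from collections import defaultdict

def map_phase(triple):
    s, p, o = triple
    yield (p, 1)                      # emit (key, value) pairs

def shuffle(pairs):
    groups = defaultdict(list)        # group all values by key
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    return (key, sum(values))         # aggregate per key

triples = [("s1", "rdf:type", "Student"),
           ("s2", "rdf:type", "Student"),
           ("s1", "memberOf", "Dept0")]

pairs = [kv for t in triples for kv in map_phase(t)]
print([reduce_phase(k, vs) for k, vs in shuffle(pairs).items()])
# [('rdf:type', 2), ('memberOf', 1)]
```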

7. An Overview of the Experimental Track: Key People
Lead Researcher: Prof. Mohand-Saïd Hacid
Manager/Researcher/Developer: Rafiqul Haque
Lead Developers: Minwei Chen (CedTMart), Tanguy Raynaud (CedCom), Mohammed Ali Hammal (Xadoop)
Third-Party Service Providers: Laurent Pouilloux (Grid5000, École Normale Supérieure, Lyon), Vincent Hurtevent (LIRIS Cloud, Université Claude Bernard Lyon 1, Lyon)
Research Chair: Prof. Hassan Aït-Kaci

8. Outline
A Brief Overview of the Experimental Track
Phase I: The Imitation Game
Experiments with Triplestores: large-scale triplestores, experiment platform, experiment setting, results, lessons learnt

9. Experiments with Triplestores: Large-scale Triplestores
Centralized/commercial triplestores:
1. Scalability can be a challenge
2. Fault tolerance cannot be guaranteed
3. Performance (recorded through benchmarks) is satisfactory
4. Some are commercially successful
Distributed triplestores:
1. Easily scalable
2. Fault tolerance is guaranteed
3. Performance (recorded through benchmarks) is satisfactory
4. They are prototypes/proofs of concept

10. Experiments with Triplestores: Experiment Platform

11. Experiments with Triplestores: Experiment Setting
Triplestores:
SHARD (Scalable, High-Performance, Robust and Distributed): developed by BBN Technologies. SHARD saves intermediate results of query processing to speed up the processing of similar later queries.
RDFPig: developed jointly by VUA and Yahoo. It relies on a skew-resistant optimization technique.
Benchmark: LUBM, a benchmarking tool that facilitates the evaluation of triplestores with respect to extensional queries.
Queries: the 14 SPARQL queries suggested by LUBM.
Dataset: 20 gigabytes.

12. Experiments with Triplestores: Experiment Setting
Hardware specification for node X:
Processor: Intel(R) Core i3, 3.20 GHz, 64-bit | OS: Ubuntu 12.04, 64-bit | HDD: 1 TB SATA | Memory: 8 GB
Hardware specification for node Y:
Processor: Intel(R) Quad-Core i5, 2.70 GHz, 64-bit | OS: Ubuntu 12.04, 64-bit | HDD: 300 GB SATA | Memory: 16 GB
Hardware specification for nodes in the distributed cluster:
Processor: Intel(R) Dual-Core i5, 2.20 GHz, 64-bit | OS: Ubuntu 12.04, 64-bit | HDD: 500 GB SATA | Memory: 8 GB

13. Experiments with Triplestores: Experiment Setting
[Chart: query execution times in seconds for the SHARD triplestore on the 20 GB dataset (800 million triples)]

14. Experiments with Triplestores: Lessons Learnt
Replication of results: myth or reality?
The performance of triplestores depends on several factors:
Efficient data partitioning
Distribution of data within clusters
Efficient query plans
Hardware specification: size of main memory, HDD RPM, clock cycle time
According to our cost model, performance is the most challenging issue for Big Data processing applications.

15. Experiments with Triplestores: Lessons Learnt (cont.)
Insights on Hadoop/MapReduce:
It guarantees scalability for Big Linked Data applications.
It has almost 150 configuration parameters, so configuring a high-performance Hadoop cluster is highly challenging.
The performance-tuning parameters are heavily dependent on each other.
At runtime, failure of slave nodes affects application performance.
Overhead costs affect performance.
The delay between the map, sort, and reduce phases can affect performance severely.
The creators were friendly but not always responsive.
Hadoop/MapReduce can be an ideal framework for building batch-style analytics; however, it is not an ideal technology for real-time analytics.

16. Outline
Phase II: Can we make the world better?
An Overview of the CedTMart Triplestore
The Architecture
CedTMart Components
Query Optimization
Conclusion and Future Work
Demonstration
Implementation credit: the main implementer of CedTMart is Minwei Chen, M.Sc.

17. An Overview of CedTMart
CedTMart is a framework for processing Big Linked Data and queries in both centralized and distributed environments. The main goal of this framework is to guarantee scalability and high performance in processing queries over large-scale datasets. Its key functions are data cleaning, conversion, compression, comparison, distribution, and query processing.

18. The Architecture

19. CedTMart Components: Data Converter
CedTMart is compatible with the Notation 3 (N3) RDF data format. The converter converts any RDF serialization format to N3, using the RDF2RDF library.
RDF2RDF library, developed by Enrico Minack: http://www.l3s.de/~minack/rdf2rdf/
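The slide relies on the RDF2RDF Java library; as a hedged illustration of the same conversion, this sketch uses Python's rdflib (an assumption for illustration only, not what CedTMart ships with) to read RDF/XML and emit N3:

```python
from rdflib import Graph

# Illustrative serialization-format conversion: RDF/XML in, N3 out.
rdf_xml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/">
  <rdf:Description rdf:about="http://example.org/CEDAR_PROJECT">
    <ex:fundedBy rdf:resource="http://example.org/ANR"/>
  </rdf:Description>
</rdf:RDF>"""

g = Graph()
g.parse(data=rdf_xml, format="xml")   # read RDF/XML
print(g.serialize(format="n3"))       # write Notation 3
```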

20. CedTMart Components: Data Converter
[Figure: an example RDF document before and after conversion to N3]

21. CedTMart Components: Data Cleaner
Data cleaning is carried out in two steps: validate triples, then separate invalid triples. It mainly relies on the syntactic rules of RDF recommended by the W3C (a rule-based validator feeding an invalid-triple separator).
Example of invalid triples:
<> a owl:Ontology .
<> owl:imports <htt://www.lehigh.edu/%7Ezhp2/2004/0401/univ-bench.owl> .
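A much-simplified sketch of this two-step process follows; the real cleaner implements the W3C RDF syntax rules, whereas this illustration (an assumption) checks a single rule, that every IRI is non-empty and uses the http(s) scheme:

```python
import re

# One illustrative syntactic rule: an IRI must be non-empty http(s).
IRI_RULE = re.compile(r"^<https?://\S+>$")

def is_valid(triple):
    s, p, o = triple
    return all(IRI_RULE.match(term) for term in (s, p, o))

triples = [
    ("<http://ex.org/s>", "<http://ex.org/p>", "<http://ex.org/o>"),
    # malformed: empty IRI and a bad "htt://" scheme, as on the slide
    ("<>", "<http://ex.org/imports>", "<htt://www.lehigh.edu/univ-bench.owl>"),
]

valid = [t for t in triples if is_valid(t)]
invalid = [t for t in triples if not is_valid(t)]   # separated, not dropped
print(len(valid), "valid,", len(invalid), "invalid")
```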

22. CedTMart Components: Data Partitioner
The data partitioner performs two different types of partitioning:
Predicate Partitioning (PP): splitting triples by predicate; a kind of predicate indexing.
Predicate Object Partitioning (POP): splitting rdf:type triples by their object; a kind of type indexing.
In real-world RDF datasets, the number of distinct predicates is no more than 10 or 20 (M. Stocker et al., 2008). A sketch of both schemes follows.
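This minimal Python sketch groups triples under the two schemes; routing rdf:type triples exclusively to POP is an assumption made here for clarity:

```python
from collections import defaultdict

def partition(triples):
    pp = defaultdict(list)    # PP: predicate -> triples
    pop = defaultdict(list)   # POP: type object -> triples
    for s, p, o in triples:
        if p == "rdf:type":
            pop[o].append((s, p, o))   # type indexing
        else:
            pp[p].append((s, p, o))    # predicate indexing
    return pp, pop

triples = [("s1", "rdf:type", "Student"),
           ("s1", "memberOf", "Dept0"),
           ("s2", "rdf:type", "Professor")]
pp, pop = partition(triples)
print(sorted(pp), sorted(pop))   # ['memberOf'] ['Professor', 'Student']
```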

23. CedTMart Components: Data Partitioner
[Figure: the data reader feeds the predicate partitioner and the predicate object partitioner, producing predicate partitions and predicate object partitions]

24. CedTMart Components: Data Compressor
Mapping triples to a bit matrix (BitMat; M. Atre, 2009).
[Figure: example of the BitMat representation with its index table; our idea: S-O and O-S BitMat tables for the predicate partitions]

25. CedTMart Components: Data Compressor
D-Gap compression translates a bit array into a structure of integers; it is a special type of run-length encoding (RLE). Its key purpose is to compress the sparse BitMat matrices and enable the system to load the S-O (subject-object) and O-S (object-subject) matrices into main memory, which helps process queries faster. It also enables performing logical operations, such as processing queries, directly on the compressed files.

26. CedTMart Components: Data Compressor
The objective is to load compressed data instead of a huge binary sequence.
D-Gap compression process: it starts with the start bit, whose value is either 1 or 0. Then the length of each run of equal bits is counted; the total length is the sum of the run lengths.
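Following the run-length description above, a minimal encode/decode sketch might look like this (an illustration of the idea, not CedTMart's actual implementation):

```python
def dgap_encode(bits):
    """Encode a bit array as (start_bit, list of run lengths)."""
    if not bits:
        return (0, [])
    runs, current, length = [], bits[0], 1
    for b in bits[1:]:
        if b == current:
            length += 1          # extend the current run of equal bits
        else:
            runs.append(length)  # close the run, start a new one
            current, length = b, 1
    runs.append(length)
    return (bits[0], runs)

def dgap_decode(start_bit, runs):
    """Expand (start_bit, run lengths) back into the bit array."""
    bits, value = [], start_bit
    for length in runs:
        bits.extend([value] * length)
        value ^= 1               # runs alternate between 1 and 0
    return bits

bits = [1, 1, 1, 0, 0, 1]
start, runs = dgap_encode(bits)          # (1, [3, 2, 1])
assert dgap_decode(start, runs) == bits  # lossless round trip
```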

27. CedTMart Components: Data Compressor
[Figure: the data compression process — the BitMat S-O and O-S matrices with their index, and the resulting compressed predicate S-O and O-S matrices]

28. CedTMart Components: Data Distributor
The data distributor performs two functions:
Data comparison: a process of comparing subject-object pairs between predicates. Its key purpose is to partition the dataset intelligently so as to reduce communication between data nodes. It calculates the similarity between two predicates by summing their common distinct subjects and objects.
Data distribution: distributes the data within a cluster of nodes.

29. CedTMart Components: Data Distributor
[Figure: predicate comparison process — the subject-object pairs of predicates P1 through P6 are compared pairwise (P1-P2, P1-P3, ..., P5-P6) and a resemblance score (RS) is computed for each pair; a sketch follows]
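As a hedged reading of the slide, the resemblance score is taken here to be the number of common distinct subjects plus the number of common distinct objects between two predicates:

```python
def resemblance(pairs_a, pairs_b):
    """RS for two predicates, each given as a list of (subject, object) pairs."""
    subs_a, objs_a = {s for s, _ in pairs_a}, {o for _, o in pairs_a}
    subs_b, objs_b = {s for s, _ in pairs_b}, {o for _, o in pairs_b}
    return len(subs_a & subs_b) + len(objs_a & objs_b)

# Values loosely modeled on the slide's P1/P2 columns.
p1 = [("S1", "O13"), ("S2", "O28"), ("S3", "O3"), ("S4", "O44")]
p2 = [("S1", "O28"), ("S27", "O28"), ("S31", "O31"), ("S4", "O4")]
print(resemblance(p1, p2))   # common subjects {S1, S4} + common object {O28} -> 3
```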

30. CedTMart Components: Data Distributor
[Figure: the data distribution protocol; DN: directory node, CN: computer node]

31. CedTMart Components: Query Processor
The query processor is composed of three components:
Query analyser: performs analysis and measures the complexity of queries.
Query planner: partitions SPARQL graphs, ranks subgraphs, calculates costs, and generates an efficient plan based on the cost of executing the subgraphs (parts of a query).
Query executor: executes the subgraphs.

32. CedTMart Components: Query Processor
Example of a SPARQL query:
SELECT ?x
WHERE { ?x hasFriend ?y .
        ?x hasEmployer ?z .
        ?y hasEmployer ?z .
        ?x likes ?t .
        ?y likes ?t .
        ?t hasAuthor Martin .
        Promod hasFriend Martin }
[Figure: the same query expressed as a graph, with annotations marking the query form, result variable, clause, triple patterns, and pattern variables]

33. CedTMart Components: Query Processor
Query analysis extracts, among other measures: the number and type of query forms, the number and type of result variables, the number of triple patterns, the number of variables contained in each triple pattern, the number of modifiers, the number of global joins, and the availability of a dataset definition, a query pattern, and modifiers.
Summary of the analysis for the example query above: query form: 1; result variable: 1; triple patterns: 7; pattern variables: 4; global joins: 6; modifiers: 0.
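A rough sketch of part of this analysis step: a real analyser would parse the query, while this illustration (an assumption) simply counts period-separated triple patterns and distinct ?variables in the clause, reproducing two of the figures above:

```python
import re

query = """SELECT ?x WHERE { ?x hasFriend ?y. ?x hasEmployer ?z.
?y hasEmployer ?z. ?x likes ?t. ?y likes ?t.
?t hasAuthor Martin. Promod hasFriend Martin }"""

# Take the clause between the outermost braces.
body = query[query.index("{") + 1 : query.rindex("}")]
patterns = [p.strip() for p in body.split(".") if p.strip()]
variables = set(re.findall(r"\?\w+", body))
print(len(patterns), "triple patterns,", len(variables), "pattern variables")
# 7 triple patterns, 4 pattern variables
```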

34. CedTMart Components: Query Processor
Query planning: SPARQL graph partitioning is the process of decomposing a SPARQL graph (query) into subgraphs (sub-queries) by the number of variables contained in the triple patterns. The aim is to increase parallelization and speed up query processing.

35. CedTMart Components: Query Processor
Query planning: SPARQL subgraph ranking is the process of ordering subgraphs by the number of variables contained in their triple patterns. Its key purposes are to prioritize the execution sequence, assist in calculating costs, and minimize the number of executions.
Example ranking: SG1 -> rank 1, SG2 -> rank 2, SG3 -> rank 3.

36. CedTMart Components: Query Processor
Cost calculation: query processing cost is calculated by summing three cost patterns weighted by their coefficients:
C_Q = α·C_T + β·C_P + γ·C_V
where C_T denotes the cost of the triple pattern, C_P the cost of the predicate, C_V the cost of the predicate variables, and α, β, γ denote coefficients whose values depend on the size of the preprocessed datasets and the nature of the queries.

37. CedTMart Components: Query Processor
Cost of a triple pattern (C_T): determined by the number of variables a subgraph contains.
Cost of a predicate (C_P): determined by the size of the predicate and the cost of the subject and/or object. The size of a predicate is the number of subjects and objects contained in its predicate file; the cost of a subject or object is the number of entries in the S-O or O-S matrix.
Cost of a variable (C_V): determined by the number of times a variable appears in a triple pattern and the total number of variables contained in that triple pattern; the predicate cost is another parameter in calculating variable costs.
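Putting the formula and the three component costs together, a minimal sketch of the cost model follows; the component values and coefficient defaults (a, b, c standing in for α, β, γ) are illustrative placeholders, since in CedTMart they depend on the preprocessed dataset and the query:

```python
def query_cost(c_t, c_p, c_v, a=1.0, b=1.0, c=1.0):
    """C_Q = a*C_T + b*C_P + c*C_V, with illustrative default coefficients."""
    return a * c_t + b * c_p + c * c_v

# e.g. a subgraph with 2 variables, a predicate file of 5,000 entries,
# and a variable cost derived (here, arbitrarily) from the predicate cost:
print(query_cost(c_t=2, c_p=5000, c_v=0.3 * 5000))
```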

38. CedTMart Components Query Processor Generating Execution Plan The planner generates an efficient execution plan based on the estimated cost. Query Execution The executor executes queries according to the generated plan.

39. Query Optimization
Predicate partitioning helps in planning global and local joins efficiently.
Predicate object partitioning is a type of index that enables generating a highly efficient plan for a special type of query.
Intelligent partitioning of data blocks within a cluster can optimize performance.
Multilevel indexing enables faster information retrieval.
We developed a binary tree structure of block locations that enables identifying data locations faster (see the sketch below).
The cost patterns assist in generating efficient query execution plans.
D-Gap compression enables loading a huge index table into memory.
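As a sketch of the binary tree of block locations mentioned above (the keying scheme and node layout are assumptions for illustration), blocks keyed by an identifier can be located in O(log n) hops instead of a linear scan:

```python
class Node:
    def __init__(self, key, location):
        self.key, self.location = key, location
        self.left = self.right = None

def insert(root, key, location):
    """Standard binary-search-tree insert keyed on a block identifier."""
    if root is None:
        return Node(key, location)
    if key < root.key:
        root.left = insert(root.left, key, location)
    else:
        root.right = insert(root.right, key, location)
    return root

def locate(root, key):
    """Walk the tree to find a block's location, if present."""
    while root is not None:
        if key == root.key:
            return root.location
        root = root.left if key < root.key else root.right
    return None

root = None
for k, loc in [(42, "node3:/blk42"), (7, "node1:/blk7"), (99, "node2:/blk99")]:
    root = insert(root, k, loc)
print(locate(root, 7))   # node1:/blk7
```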

40. Experiment
Experiment setting: a 3-node cluster; dataset: 10 GB of LUBM data.
Node specification: Intel Xeon E312xx 8-core processor; OS: Ubuntu 12.04; memory: 16 GB; HDD: 15 GB (root disk) plus 500 GB and 100 GB block storage volumes.
Q1: SELECT ?X WHERE { ?X rdf:type ub:GraduateStudent }
Q2: SELECT ?X WHERE { ?X rdf:type ub:ResearchAssistant . ?X ub:memberOf <http://www.Department0.University3466.edu> }
[Chart: query times in milliseconds]

41. Conclusion & Future Work
Conclusion: the CedTMart framework provides a complete set of functionalities for processing multi-join queries on Big Linked Data. It is built on various methods and techniques to guarantee high performance in query processing. The cost patterns, together with the binary-tree representation of data block locations, are efficient techniques for processing queries faster. The framework is scalable, but this needs further investigation.

42. Conclusion & Future Work
Conclusion: our contribution is experimenting with our own ideas for a distributed triplestore. The query processor has a few limitations: it cannot handle all SPARQL query forms, and it cannot process modifiers or optional-pattern queries. The compressor is memory bound. The performance of CedTMart on Big Linked Data is not yet guaranteed, and it lacks some important functional features, such as update.

43. Conclusion & Future Work
Potential improvements: the data cleaner should be improved by adding more rules. CedTMart should be evaluated on a massive-scale cluster to validate its scalability. The algorithms and optimization techniques should be refined by conducting more experiments across different use cases. Further research is required to optimize the global join of complex queries. The source code of the triplestore needs debugging.

44. Demonstration
