36 results for Hadoop


Relevance:

20.00%

Publisher:

Abstract:

MapReduce frameworks such as Hadoop are well suited to handling large sets of data which can be processed separately and independently, with canonical applications in information retrieval and sales record analysis. Rapid advances in sequencing technology have ensured an explosion in the availability of genomic data, with a consequent rise in the importance of large-scale comparative genomics, often involving operations and data relationships which deviate from the classical MapReduce structure. This work examines the application of Hadoop to patterns of this nature, using as our focus a well-established workflow for identifying promoters (binding sites for regulatory proteins) across multiple gene regions and organisms, coupled with the unifying step of assembling these results into a consensus sequence. Our approach demonstrates the utility of Hadoop for problems of this nature, showing how the tyranny of the "dominant decomposition" can be at least partially overcome. It also demonstrates how load balance and the granularity of parallelism can be optimized by pre-processing that splits and reorganizes input files, allowing a wide range of related problems to be brought under the same computational umbrella.
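
To make the granularity-of-parallelism point concrete, the sketch below shows how the number of records handed to each map task can be tuned once the input has been pre-split into one gene-region record per line. It is only a minimal illustration under that assumption, not the authors' pipeline; PromoterScanDriver and its pass-through mapper are hypothetical names.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class PromoterScanDriver {

        // Stub mapper: the real promoter-identification logic would replace this
        // pass-through. Each call receives one pre-split gene-region record.
        public static class PromoterScanMapper
                extends Mapper<LongWritable, Text, Text, Text> {
            @Override
            protected void map(LongWritable key, Text record, Context ctx)
                    throws java.io.IOException, InterruptedException {
                ctx.write(new Text("region"), record);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "promoter-scan");
            job.setJarByClass(PromoterScanDriver.class);
            job.setInputFormatClass(NLineInputFormat.class);
            // Granularity knob: one map task per 500 records of the reorganised input.
            NLineInputFormat.setNumLinesPerSplit(job, 500);
            job.setMapperClass(PromoterScanMapper.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            NLineInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }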

Relevance:

20.00%

Publisher:

Abstract:

This paper compares parallel and distributed implementations of an iterative, Gibbs-sampling machine learning algorithm. The distributed implementations run under Hadoop on facility computing clouds. The probabilistic model under study is the infinite HMM [1], whose parameters are learnt using instance-blocked Gibbs sampling, with a step consisting of a dynamic program. We apply this model to learn part-of-speech tags from newswire text in an unsupervised fashion. However, our focus here is on runtime performance rather than NLP-relevant scores, as embodied by iteration duration and by ease of development, deployment and debugging. © 2010 IEEE.
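
The iteration structure can be illustrated with a skeleton driver that chains one Hadoop job per Gibbs sweep through HDFS directories, which is also where the measured iteration duration comes from. This is a generic sketch, not the implementation evaluated in the paper; the mapper and reducer bodies are placeholders for the per-sentence resampling and the parameter update.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class IterativeGibbsDriver {

        // Placeholder: resample hidden states for one sentence (the dynamic-program step).
        public static class SweepMapper extends Mapper<LongWritable, Text, Text, Text> {
            @Override
            protected void map(LongWritable key, Text sentenceState, Context ctx)
                    throws java.io.IOException, InterruptedException {
                ctx.write(new Text("state"), sentenceState); // pass-through stub
            }
        }

        // Placeholder: collect sufficient statistics and re-estimate model parameters.
        public static class SweepReducer extends Reducer<Text, Text, Text, Text> {
            @Override
            protected void reduce(Text key, Iterable<Text> states, Context ctx)
                    throws java.io.IOException, InterruptedException {
                for (Text s : states) ctx.write(key, s);     // pass-through stub
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path current = new Path(args[0]);                 // initial sampler state
            int sweeps = Integer.parseInt(args[2]);
            for (int i = 0; i < sweeps; i++) {
                Path next = new Path(args[1] + "/sweep-" + i);
                Job job = Job.getInstance(conf, "gibbs-sweep-" + i);
                job.setJarByClass(IterativeGibbsDriver.class);
                job.setMapperClass(SweepMapper.class);
                job.setReducerClass(SweepReducer.class);
                job.setOutputKeyClass(Text.class);
                job.setOutputValueClass(Text.class);
                FileInputFormat.addInputPath(job, current);
                FileOutputFormat.setOutputPath(job, next);
                if (!job.waitForCompletion(true)) System.exit(1);
                current = next;                               // next sweep reads this output
            }
        }
    }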

Relevance:

20.00%

Publisher:

Abstract:

This work analyses the design architecture of the existing HDFS and highlights its distributed characteristics through a comparison with the LinuxFS architecture. The analysis shows that the current HDFS architecture uses Java's Map interface, which is unfavourable to task decomposition and parallel processing; as a result, HDFS is distributed only in data storage, while data processing remains centralised. This creates a dependency on the NameNode and, as the cluster grows, the NameNode's performance becomes the system bottleneck. Directions for addressing this problem are proposed.
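
To make the NameNode dependency concrete: in the standard HDFS Java client shown below, every namespace operation (creating a directory, listing it, opening a file for writing) is resolved by the NameNode before any block data reaches a DataNode, which is why NameNode performance can bound the whole cluster. This is a small illustrative example, not part of the analysed architecture.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsMetadataDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // reads core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);

            // Namespace operations: each of these is a round trip to the NameNode.
            Path dir = new Path("/demo");
            fs.mkdirs(dir);
            for (FileStatus status : fs.listStatus(dir)) {
                System.out.println(status.getPath() + " " + status.getLen());
            }

            // Creating a file also starts at the NameNode (namespace entry, block allocation);
            // only the byte stream itself goes to the DataNodes.
            try (FSDataOutputStream out = fs.create(new Path(dir, "sample.txt"))) {
                out.writeBytes("hello hdfs\n");
            }
            fs.close();
        }
    }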

Relevance:

20.00%

Publisher:

Abstract:

This thesis concerns the design and development of a Hadoop solution for Big Data Analytics computation. Within a bottle-cooler monitoring project, the needs arising from processing continuously growing volumes of data required the development of a solution able to replace traditional ETL techniques, which are no longer sufficient for Big Data processing. The goal of this work is to evaluate and compare the processing performance obtained, on the one hand, by the traditional ETL flow and, on the other, by the Hadoop solution implemented on top of the MapReduce framework.
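
As an illustration of the kind of ETL step that the MapReduce flow replaces, the mapper/reducer pair below reproduces a simple "sum a measure per grouping key" aggregation, the equivalent of a GROUP BY in the traditional flow. It is a generic sketch rather than the solution developed in the thesis, and it assumes comma-separated input with the key in the first field and a numeric measure in the second.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Mapper: parse one CSV record and emit (key, measure).
    public class AggregationMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] fields = line.toString().split(",");
            if (fields.length >= 2) {
                ctx.write(new Text(fields[0]),
                          new LongWritable(Long.parseLong(fields[1].trim())));
            }
        }
    }

    // Reducer (also usable as a combiner): sum the measure per key, i.e. GROUP BY key.
    class AggregationReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context ctx)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values) sum += v.get();
            ctx.write(key, new LongWritable(sum));
        }
    }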

Relevance:

20.00%

Publisher:

Abstract:

This thesis concerns the performance analysis and the porting of an SBI system to Cloudera's Hadoop distribution. Specifically, the data of the WebPolEU project were ported. The performance of the Impala query engine was then compared with that of ElasticSearch which, unlike Oracle, runs on the same hardware (the cluster).

Relevance:

20.00%

Publisher:

Abstract:

In recent years, biology has increasingly turned to computer science to tackle complex analyses involving large amounts of data. Among the biological sciences that require processing a considerable amount of data is genomics, a branch of molecular biology that studies the structure, content, function and evolution of the genome of living organisms. Data warehouse systems are an information technology that is well suited to supporting certain types of analysis in the genomic field, because they allow exploratory and dynamic analyses, which are useful when summary information must be derived from a large amount of data and when different perspectives and levels of detail need to be explored. This thesis is part of a larger project concerning the design of a data warehouse in the genomic domain. The analyses carried out led to the discovery of functional dependencies and, consequently, to the definition of a hierarchy in the data. By inserting this hierarchy into a multidimensional model of the genomic data, it will be possible to broaden the range of analyses that can be performed on the data warehouse, introducing additional informative content concerning patient characteristics. The steps carried out in this work were, first of all, the loading and filtering of the data. The core of the thesis was the implementation of an algorithm for discovering functional dependencies, with the aim of deriving a hierarchy from the data. In the last phase, the derived hierarchy was inserted into a pre-existing multidimensional model. The entire work was carried out using Apache Spark and Apache Hadoop.
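
The functional-dependency test at the heart of such an algorithm can be sketched with Spark's DataFrame API: an attribute X functionally determines Y exactly when no group of identical X values maps to more than one distinct Y value. The Java snippet below is a simplified illustration of that test, not the thesis implementation; the column names and input path are hypothetical. A full discovery algorithm would repeat this test over candidate attribute pairs or sets, pruning candidates using the usual implications between dependencies.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.countDistinct;

    public class FdCheck {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("fd-check").getOrCreate();
            Dataset<Row> data = spark.read().option("header", "true").csv(args[0]);

            // X -> Y holds iff no group of identical X values maps to more than one Y value.
            long violations = data.groupBy(col("gene_symbol"))            // X (hypothetical column)
                    .agg(countDistinct(col("chromosome")).alias("nY"))    // Y (hypothetical column)
                    .filter(col("nY").gt(1))
                    .count();

            System.out.println("gene_symbol -> chromosome holds: " + (violations == 0));
            spark.stop();
        }
    }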

Relevance:

20.00%

Publisher:

Abstract:

Every day, large volumes of data are generated by different sources. These data, called Big Data, are currently the object of strong interest in the IT (Information Technology) sector. Digitalised processes, social media interactions, sensors and the mobile systems we use every day are only a small subset of all the sources contributing to the production of these data. Many technologies have been developed to analyse and extract information from these large volumes of data, and many of them exploit distributed and parallel approaches. One of the most successful technologies for processing Big Data is Apache Hadoop. Cloud Computing, in particular solutions following the IaaS (Infrastructure as a Service) model, provides a valuable tool for provisioning resources simply and quickly. For this reason, OpenStack is used in this proposal as the IaaS platform. Thanks to the integration of OpenStack and Hadoop through Sahara, the potential offered by a cloud environment can be exploited to improve the performance of distributed and parallel processing. The aim of this work is to obtain a better distribution of the resources used in the cloud system, with load-balancing objectives. To achieve these objectives, modifications were required both to the Hadoop framework and to the Sahara project.

Relevance:

20.00%

Publisher:

Abstract:

Since the beginning of time, human beings have needed to understand and analyse everything around them, relying on tools such as cave paintings, the Library of Alexandria, vast book collections and, today, an enormous amount of computerised information. All of this has always been stored, as far as the technology of each period allowed, in the hope that it would prove useful through consultation and analysis. The same is still true today. Until a few years ago, information was analysed manually or by means of relational databases. Now a new technology has arrived, Big Data, which makes it possible to analyse vast amounts of data of all kinds in relatively short times. Throughout this book, the characteristics and advantages of Big Data are studied, together with a study of the Hadoop platform. Hadoop is a Java-based platform that can analyse large amounts of data in different formats and from different sources. As the reading progresses, the reader is provided with the background knowledge needed for a better understanding, is placed in the historical development of the concept, and is given an overview of its uses, forecasts, and the evolution and growth expected over the coming years.

Relevance:

20.00%

Publisher:

Abstract:

With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of "Big Data", producing data on the order of hundreds of millions of records each day. By analyzing these data and making predictions, better development plans can be made. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. The paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their respective advantages and disadvantages. Because the resource management module is the core of YARN, the paper then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing allocation. It also introduces and compares the FIFO Scheduler, the Capacity Scheduler and the Fair Scheduler. The main work of this paper is to study and analyze YARN's Dominant Resource Fairness (DRF) algorithm and to propose a maximum-resource-utilization algorithm based on it; the paper also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of YARN's DRF algorithm. Because the cluster serves multiple users and multiple resource types, each user's resource request is also multi-dimensional. The DRF algorithm divides a user's resources into the dominant resource and normal resources: for a given user, the dominant resource is the one whose share is highest among all requested resources, and the others are normal resources. DRF requires the dominant resource share of each user to be equal. However, in cases where different users' dominant resource amounts differ greatly, emphasizing "fairness" is not suitable and cannot improve the resource utilization of the cluster. By analyzing these cases, this thesis proposes a new allocation algorithm based on DRF in which fairness is still taken into consideration but is no longer the main principle; maximizing resource utilization is the main principle and goal. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis sets up the YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
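
For reference, the baseline that the proposed algorithm modifies, the Dominant Resource Fairness allocation loop of Ghodsi et al., can be sketched in plain Java as follows. This is a textbook illustration with a hypothetical two-resource (CPU, memory) cluster, not YARN's scheduler code; with the example demands shown it reproduces the classic outcome of both users ending at a 2/3 dominant share.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative Dominant Resource Fairness loop for a <CPU, memory> cluster.
    public class DrfSketch {

        static class User {
            final String name;
            final double[] demand;          // per-task demand: {cpu, memGB}
            double[] allocated = {0, 0};
            double dominantShare = 0;       // max over resources of allocated/capacity
            User(String name, double cpu, double mem) {
                this.name = name;
                this.demand = new double[]{cpu, mem};
            }
        }

        public static void main(String[] args) {
            double[] capacity = {9, 18};    // 9 CPUs, 18 GB (the example from the DRF paper)
            double[] used = {0, 0};
            List<User> users = new ArrayList<>();
            users.add(new User("A", 1, 4)); // CPU-light, memory-heavy
            users.add(new User("B", 3, 1)); // CPU-heavy, memory-light

            while (true) {
                // Pick the user with the smallest dominant share.
                User next = users.get(0);
                for (User u : users) if (u.dominantShare < next.dominantShare) next = u;

                // For simplicity, stop when that user's next task no longer fits.
                if (used[0] + next.demand[0] > capacity[0]
                        || used[1] + next.demand[1] > capacity[1]) break;

                // Launch one task for that user and update its dominant share.
                for (int r = 0; r < 2; r++) {
                    next.allocated[r] += next.demand[r];
                    used[r] += next.demand[r];
                }
                next.dominantShare = Math.max(next.allocated[0] / capacity[0],
                                              next.allocated[1] / capacity[1]);
            }
            for (User u : users)
                System.out.printf("%s: cpu=%.0f memGB=%.0f dominantShare=%.2f%n",
                        u.name, u.allocated[0], u.allocated[1], u.dominantShare);
        }
    }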

Relevance:

20.00%

Publisher:

Abstract:

Over the last decades we have witnessed what is called the "information explosion". With the advent of new technologies and new contexts, the volume, velocity and variety of data have increased exponentially, becoming what is known today as big data. Among the organisations affected, we highlight telecommunications operators, which gather, using network monitoring equipment, millions of network event records, the Call Detail Records (CDRs) and the Event Detail Records (EDRs), commonly known as xDRs. These records are stored and later processed to compute network performance and quality of service metrics. With the ever-increasing number of collected xDRs, the volume of data that needs to be stored has grown exponentially, making current solutions based on relational databases no longer suitable. To tackle this problem, the relational data store can be replaced by the Hadoop File System (HDFS). However, HDFS is simply a distributed file system and therefore does not support any aspect of the relational paradigm. To overcome this difficulty, this paper presents a framework that enables systems that currently insert data into relational databases to keep doing so transparently when migrating to Hadoop. As a proof of concept, the developed platform was integrated with Altaia, a performance and QoS management system for telecommunications networks and services.
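
The idea of keeping a relational-style insert interface while the storage moves to HDFS can be sketched as below. The class is hypothetical and only illustrates the concept (the actual framework integrated with Altaia is not described in the abstract); it appends each record as a delimited line to an HDFS-backed file, assuming append support is enabled on the cluster.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical facade: callers keep issuing relational-style inserts,
    // but each record lands as a delimited line in an HDFS-backed "table" file.
    public class HdfsInsertFacade implements AutoCloseable {
        private final FileSystem fs;
        private final FSDataOutputStream out;

        public HdfsInsertFacade(String tablePath) throws IOException {
            Configuration conf = new Configuration();
            fs = FileSystem.get(conf);
            Path path = new Path(tablePath);
            out = fs.exists(path) ? fs.append(path) : fs.create(path);
        }

        // Equivalent of "INSERT INTO table VALUES (...)": one pipe-separated line per xDR.
        public void insert(String... columns) throws IOException {
            out.writeBytes(String.join("|", columns) + "\n");
        }

        @Override
        public void close() throws IOException {
            out.close();
            fs.close();
        }
    }

A caller would then write try (HdfsInsertFacade t = new HdfsInsertFacade("/xdrs/cdr.psv")) { t.insert("2024-01-01", "caller", "callee", "120"); } in the place where a JDBC insert used to be.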

Relevance:

10.00%

Publisher:

Abstract:

Information overload has become a serious issue for web users. Personalisation can provide effective solutions to overcome this problem, and recommender systems are one popular personalisation tool to help users deal with it. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affect the performance of recommender systems and other personalisation systems. In Web 2.0, emerging user information provides new possible ways to profile users. Folksonomy, or tag information, is a typical kind of Web 2.0 information: it reflects users' topic interests and opinions, and it has become another important source of user information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a lot of noise such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solve the tag quality problem and profile users accurately. Harvesting the wisdom of crowds and of experts, three new user profiling approaches are proposed: a folksonomy-based approach, a taxonomy-based approach, and a hybrid approach based on both folksonomy and taxonomy. The proposed user profiling approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user-based and item-based collaborative filtering approaches, combined with content filtering methods, are proposed to make recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness evaluation experiments were conducted on two real-world datasets collected from the Amazon.com and CiteULike websites. The experimental results demonstrate that the proposed user profiling and recommendation approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability evaluation experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to the effective use of the wisdom of crowds and experts to help users solve information overload issues, by providing more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to better use of the taxonomy information given by experts and the folksonomy information contributed by users in Web 2.0.
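
As a concrete, simplified illustration of what a tag-based profile looks like and how user-based collaborative filtering compares two of them, the plain-Java example below builds tag-frequency profiles and computes their cosine similarity. The profiles and tags are hypothetical and the snippet is not the thesis implementation.

    import java.util.HashMap;
    import java.util.Map;

    // Simplified illustration: tag-frequency user profiles and the cosine
    // similarity typically used to find neighbours in user-based CF.
    public class TagProfileSimilarity {

        // Profile: tag -> how often the user applied it.
        static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
            double dot = 0, normA = 0, normB = 0;
            for (Map.Entry<String, Integer> e : a.entrySet()) {
                dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
                normA += e.getValue() * e.getValue();
            }
            for (int v : b.values()) normB += v * v;
            return (normA == 0 || normB == 0) ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
        }

        public static void main(String[] args) {
            Map<String, Integer> alice = new HashMap<>();
            alice.put("hadoop", 5); alice.put("mapreduce", 3); alice.put("java", 1);
            Map<String, Integer> bob = new HashMap<>();
            bob.put("hadoop", 2); bob.put("spark", 4); bob.put("java", 2);
            System.out.printf("similarity = %.3f%n", cosine(alice, bob));
        }
    }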

Relevance:

10.00%

Publisher:

Abstract:

The large-scale, emerging user-created information in Web 2.0, such as tags, reviews, comments and blogs, can be used to profile users' interests and preferences and to make personalized recommendations. To solve the scalability problem of current user profiling and recommender systems, this paper proposes a parallel user profiling approach and a scalable recommender system. Advanced cloud computing techniques, including Hadoop, MapReduce and Cascading, are employed to implement the proposed approaches. The experiments were conducted on Amazon EC2 Elastic MapReduce and S3 with a real-world, large-scale dataset from the Del.icio.us website.

Relevance:

10.00%

Publisher:

Abstract:

Distributed systems are widely used for solving large-scale and data-intensive computing problems, including all-to-all comparison (ATAC) problems. However, when used for ATAC problems, existing computational frameworks such as Hadoop focus on load balancing for allocating comparison tasks, without careful consideration of data distribution and storage usage. While Hadoop-based solutions provide users with simplicity of implementation, their inherent MapReduce computing pattern does not match the ATAC pattern. This leads to load imbalances and poor data locality when Hadoop's data distribution strategy is used for ATAC problems. Here we present a data distribution strategy which considers data locality, load balancing and storage savings for ATAC computing problems in homogeneous distributed systems. A simulated annealing algorithm is developed for data distribution and task scheduling. Experimental results show a significant performance improvement for our approach over Hadoop-based solutions.
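
The simulated-annealing step can be sketched in plain Java as below. The cost function used here simply penalises load imbalance across machines and stands in for the combined data-locality, load-balancing and storage objective described above; it is an illustrative skeleton, not the published algorithm.

    import java.util.Arrays;
    import java.util.Random;

    // Illustrative simulated annealing for assigning n data files to m machines.
    public class SaDistribution {

        // Cost = spread between the most and least loaded machine.
        static double cost(int[] assign, double[] size, int machines) {
            double[] load = new double[machines];
            for (int f = 0; f < assign.length; f++) load[assign[f]] += size[f];
            return Arrays.stream(load).max().getAsDouble()
                 - Arrays.stream(load).min().getAsDouble();
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            int machines = 4, files = 40;
            double[] size = new double[files];
            for (int f = 0; f < files; f++) size[f] = 1 + rnd.nextInt(100);

            int[] assign = new int[files];
            for (int f = 0; f < files; f++) assign[f] = rnd.nextInt(machines);
            double current = cost(assign, size, machines);

            for (double temp = 100.0; temp > 0.01; temp *= 0.995) {
                int f = rnd.nextInt(files), old = assign[f];
                assign[f] = rnd.nextInt(machines);             // neighbour: move one file
                double next = cost(assign, size, machines);
                // Accept improvements always, worse moves with probability exp(-delta/temp).
                if (next > current && rnd.nextDouble() >= Math.exp((current - next) / temp)) {
                    assign[f] = old;                           // reject: undo the move
                } else {
                    current = next;
                }
            }
            System.out.println("final load spread = " + current);
        }
    }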

Relevance:

10.00%

Publisher:

Abstract:

The requirement for distributed computing of all-to-all comparison (ATAC) problems in heterogeneous systems is increasingly important in various domains. Though Hadoop-based solutions are widely used, they are inefficient for the ATAC pattern, which is fundamentally different from the MapReduce pattern for which Hadoop is designed. They exhibit poor data locality and unbalanced allocation of comparison tasks, particularly in heterogeneous systems. This results in massive data movement at runtime and ineffective utilization of computing resources, significantly affecting overall computing performance. To address these problems, a scalable and efficient data and task distribution strategy is presented in this paper for processing large-scale ATAC problems in heterogeneous systems. It not only saves storage space but also achieves load balancing and good data locality for all comparison tasks. Experiments on bioinformatics examples show that about 89% of the ideal performance capacity of the multiple machines is achieved using the approach presented in this paper.

Relevance:

10.00%

Publisher:

Abstract:

Generally, classifiers tend to overfit if there is noise in the training data or there are missing values. Ensemble learning methods are often used to improve a classifier's classification accuracy. Most ensemble learning approaches aim to improve the classification accuracy of decision trees. However, alternative classifiers to decision trees exist. The recently developed Random Prism ensemble learner for classification aims to improve an alternative classification rule induction approach, the Prism family of algorithms, which addresses some of the limitations of decision trees. However, Random Prism, like any ensemble learner, suffers from a high computational overhead due to replication of the data and the induction of multiple base classifiers. Hence even modest-sized datasets may impose a computational challenge on ensemble learners such as Random Prism. Parallelism is often used to scale up algorithms to deal with large datasets. This paper investigates parallelisation for Random Prism, implements a prototype and evaluates it empirically using a Hadoop computing cluster.
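
One common way to parallelise such an ensemble under Hadoop, sketched below, is to let each map task draw a bootstrap sample from its input split and induce one base classifier, emitting the resulting rule set for a reducer (or the driver) to collect. The induceRules call is a placeholder for the Prism-style rule induction; this skeleton is not the paper's prototype.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Sketch: each mapper buffers its split, draws a bootstrap sample in cleanup()
    // and induces one base classifier, emitting its (serialised) rule set.
    public class EnsembleMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final List<String> records = new ArrayList<>();

        @Override
        protected void map(LongWritable key, Text record, Context ctx) {
            records.add(record.toString());
        }

        @Override
        protected void cleanup(Context ctx) throws IOException, InterruptedException {
            if (records.isEmpty()) return;
            Random rnd = new Random();
            List<String> bootstrap = new ArrayList<>(records.size());
            for (int i = 0; i < records.size(); i++) {
                bootstrap.add(records.get(rnd.nextInt(records.size())));  // sample with replacement
            }
            String ruleSet = induceRules(bootstrap);  // placeholder for the Prism-style base learner
            ctx.write(new Text("classifier-" + ctx.getTaskAttemptID().getTaskID().getId()),
                      new Text(ruleSet));
        }

        private String induceRules(List<String> sample) {
            return "IF ... THEN ... (" + sample.size() + " training records)";  // stub
        }
    }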