875 results for big data storage


Relevance:

80.00%

Publisher:

Abstract:

Aiming to address requirements concerning integration of services in the context of 'big data', this paper presents an innovative approach that (i) ensures a flexible, adaptable and scalable information and computation infrastructure, and (ii) exploits the competences of stakeholders and information workers to meaningfully confront information management issues such as information characterization, classification and interpretation, thus incorporating the underlying collective intelligence. Our approach pays much attention to the issues of usability and ease-of-use, not requiring any particular programming expertise from the end users. We report on a series of technical issues concerning the desired flexibility of the proposed integration framework and we provide related recommendations to developers of such solutions. Evaluation results are also discussed.

Relevance:

80.00%

Publisher:

Abstract:

There is no empirical evidence whatsoever to support most of the beliefs on which software construction is based. We do not yet know the adequacy, limits, qualities, costs and risks of the technologies used to develop software. Experimentation helps to check beliefs and opinions and convert them into facts. This research is concerned with replication, a key component for gathering empirical evidence on software development that can be used in industry to build better software more efficiently. Replication has not been easy in software engineering (SE) because the experimental paradigm applied to software development is still immature. Nowadays, replications are mostly executed using a traditional replication package, but such packages do not appear to have been as effective as expected for transferring information among researchers in SE experimentation. The trouble spot appears to be the replication setup, caused by version management problems with materials, instruments, documents, etc., which has proved to be an obstacle to obtaining enough details about the experiment to reproduce it as exactly as possible. We address the problem of information exchange among experimenters by developing a schema to characterize replications, and we adapt configuration management and product line ideas to support the experimentation process. This will enable researchers to make systematic decisions based on explicit knowledge rather than assumptions about replications. The research will produce a replication support web environment that not only archives but also manages experimental materials flexibly enough to allow both similar and differentiated replications, with massive experimental data storage. The platform should be accessible to several research groups working together on the same families of experiments.
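The abstract does not specify the fields of its replication characterization schema; as a rough, hedged illustration of what such a record might capture (every field name here is hypothetical), a minimal sketch in Python:

```python
from dataclasses import dataclass, field

@dataclass
class ReplicationRecord:
    """Hypothetical characterization of one replication of an experiment."""
    family: str                       # family of experiments it belongs to
    baseline_experiment: str          # identifier of the experiment being replicated
    materials_version: str            # version of the instruments/documents used
    changed_factors: list = field(default_factory=list)  # deliberate deviations
    site: str = ""                    # research group or location running it
    notes: str = ""                   # setup details needed to reproduce it

# Example: a differentiated replication that swaps the subject population
rep = ReplicationRecord(
    family="test-driven-development",
    baseline_experiment="TDD-UPM-2010",
    materials_version="2.1",
    changed_factors=["subjects: professionals instead of students"],
    site="GroupB",
)
print(rep)
```

Versioning the materials explicitly is what would let a configuration-management layer distinguish similar from differentiated replications.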

Relevance:

80.00%

Publisher:

Abstract:

Sensor network deployments have become a primary source of big data about the real world that surrounds us, measuring a wide range of physical properties in real time. With such large amounts of heterogeneous data, a key challenge is to describe and annotate sensor data with high-level metadata, using and extending models such as ontologies. However, to automate this task there is a need to enrich the sensor metadata using the actual observed measurements and to extract useful meta-information from them. This paper proposes a novel approach for the characterization and extraction of semantic metadata through the analysis of raw sensor observations. The approach consists in representing the raw sensor measurements with approximations based on the distributions of the observation slopes, building a classification scheme that automatically infers sensor metadata such as the type of observed property, and integrating the semantic analysis results with existing sensor network metadata.
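The exact features and classifier are not given in the abstract; as a hedged sketch of the general idea (summarizing a raw observation series by the distribution of its slopes, then guessing the observed property with a nearest-centroid rule), using purely synthetic data:

```python
import numpy as np

def slope_histogram(values, timestamps, bins):
    """Approximate a raw observation series by the distribution of its slopes."""
    slopes = np.diff(values) / np.diff(timestamps)
    hist, _ = np.histogram(slopes, bins=bins, density=True)
    return hist

def infer_property(values, timestamps, centroids, bins):
    """Nearest-centroid guess of the observed property (e.g. temperature vs. wind)."""
    feat = slope_histogram(values, timestamps, bins)
    return min(centroids, key=lambda label: np.linalg.norm(feat - centroids[label]))

# Illustrative use with synthetic series (not real sensor observations)
t = np.arange(100.0)
temp_like = np.sin(t / 20.0)                                  # slowly varying signal
wind_like = np.random.default_rng(0).normal(size=100).cumsum() * 0.5
bins = np.linspace(-2, 2, 21)
centroids = {
    "temperature": slope_histogram(temp_like, t, bins),
    "wind_speed": slope_histogram(wind_like, t, bins),
}
print(infer_property(temp_like, t, centroids, bins))
```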

Relevance:

80.00%

Publisher:

Abstract:

The emergence of cloud datacenters enhances the capability of online data storage. Since massive amounts of data are stored in datacenters, it is necessary to effectively locate and access data of interest in such a distributed system. However, traditional search techniques only allow users to search for images over exact-match keywords through a centralized index, which cannot satisfy the requirements of content-based image retrieval (CBIR). In this paper, we propose a scalable image retrieval framework which can efficiently support content similarity search and semantic search in a distributed environment. Its key idea is to integrate image feature vectors into distributed hash tables (DHTs) by exploiting the property of locality-sensitive hashing (LSH). Thus, images with similar content are most likely gathered onto the same node without the knowledge of any global information. For searching semantically close images, relevance feedback is adopted in our system to bridge the gap between low-level features and high-level semantics. We show that our approach yields a high recall rate with good load balance and requires only a small number of hops.
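As an illustrative sketch of the core idea (hashing an image feature vector with random projections so that similar vectors tend to share a bucket, whose key can then be used as a DHT identifier), with all dimensions and parameters chosen arbitrarily:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(42)
DIM, N_HYPERPLANES = 128, 16
planes = rng.normal(size=(N_HYPERPLANES, DIM))    # random hyperplanes for LSH

def lsh_bucket(feature_vec):
    """Sign of each random projection -> bit string; similar vectors collide often."""
    bits = (planes @ feature_vec > 0).astype(int)
    return "".join(map(str, bits))

def dht_key(feature_vec):
    """Hash the LSH bucket to a DHT identifier, so co-bucketed images land on one node."""
    return hashlib.sha1(lsh_bucket(feature_vec).encode()).hexdigest()

v = rng.normal(size=DIM)
v_noisy = v + rng.normal(scale=0.01, size=DIM)    # near-duplicate image
print(dht_key(v) == dht_key(v_noisy))             # usually True for very close vectors
```

In a real system several hash tables with different random planes would be used to trade recall against the number of nodes contacted.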

Relevance:

80.00%

Publisher:

Abstract:

This document details the planning and development of a package that conforms to the S4 programming standard of the R language. The package consists of a set of methods and classes for generating multiple-choice tests and their solutions from an xls file that acts as a database. The proposed design is object-oriented and develops a set of classes representing the contents of a multiple-choice assessment: statements, questions and answers. These classes have been grouped so that the user gets a complete and clear view of them, putting together data structures and functions with similar responsibilities. A simple prototype has been implemented with the basic functions needed to generate the tests. In addition, the documentation required to build the package has been produced: every method has a help page, which can be consulted from an R terminal and includes execution examples.
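The package itself is written in R using S4 classes; purely as a language-neutral illustration of the underlying model (questions with answers loaded from a spreadsheet row, and a randomly assembled test), here is a short Python sketch with hypothetical names:

```python
import random
from dataclasses import dataclass

@dataclass
class Question:
    statement: str
    answers: list          # possible answers, e.g. taken from one spreadsheet row
    correct: int           # index of the correct answer

def build_test(bank, n_questions, seed=None):
    """Pick n questions from the bank and shuffle the answers of each one."""
    rng = random.Random(seed)
    test = []
    for q in rng.sample(bank, n_questions):
        order = list(range(len(q.answers)))
        rng.shuffle(order)
        test.append((q.statement, [q.answers[i] for i in order], order.index(q.correct)))
    return test   # each item: (statement, shuffled answers, index of correct answer)

bank = [Question("2 + 2 = ?", ["3", "4", "5"], 1),
        Question("Capital of Spain?", ["Madrid", "Lisbon", "Paris"], 0)]
for stmt, answers, correct in build_test(bank, 2, seed=7):
    print(stmt, answers, "-> correct:", correct)
```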

Relevance:

80.00%

Publisher:

Abstract:

Since the beginning of the Internet, Internet Service Providers (ISPs) have seen the need to give users' traffic different treatments defined by agreements between the ISP and its customers. This procedure, known as Quality of Service management, has not changed much in recent years (DiffServ and Deep Packet Inspection have been the most commonly chosen mechanisms). However, the incremental growth of Internet users and services, together with the application of recent Machine Learning techniques, opens up the possibility of going one step forward in the smart management of network traffic. In this paper, we first survey current tools and techniques for QoS management. Then we introduce clustering and classifying Machine Learning techniques for traffic characterization and the concept of Quality of Experience. Finally, with all these components, we present a new framework that manages Quality of Service in a smart way in a telecom Big Data scenario, for both mobile and fixed communications.
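The paper's framework is only outlined in the abstract; as a minimal sketch of the kind of clustering step it refers to (grouping flows by simple statistical features with k-means so that each discovered class could be mapped to a QoS policy), assuming scikit-learn is available and using synthetic flow features:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical per-flow features: mean packet size (B), duration (s), packets per second
bulk  = np.column_stack([rng.normal(1200, 100, 200), rng.normal(30, 5, 200), rng.normal(80, 10, 200)])
inter = np.column_stack([rng.normal(200, 50, 200),  rng.normal(2, 0.5, 200), rng.normal(15, 5, 200)])
flows = np.vstack([bulk, inter])

# Cluster flows into traffic classes without any labels
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(flows)
print(np.bincount(model.labels_))   # sizes of the discovered traffic classes
```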

Relevance:

80.00%

Publisher:

Abstract:

In the last decade, the research community has focused on new classification methods that rely on statistical characteristics of Internet traffic, instead of the previously popular port-number-based or payload-based methods, which face ever greater restrictions. Some research works based on statistical characteristics generated large feature sets of Internet traffic; however, it is nowadays impossible to handle hundreds of features in big data scenarios, as this only leads to unacceptable processing times and misleading classification results due to redundant and correlated data. As a consequence, a feature selection procedure is essential in the process of Internet traffic characterization. In this paper, a survey of feature selection methods is presented: feature selection frameworks are introduced, and different categories of methods are briefly explained and compared; several proposals on feature selection in Internet traffic characterization are shown; finally, a future application of feature selection to a concrete project is proposed.
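As a hedged illustration of one filter-style method such a survey would cover (ranking features by mutual information with the class label and keeping the top k), using scikit-learn and synthetic data:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)                                 # traffic class label
informative = y[:, None] + rng.normal(0, 0.3, (500, 3))     # 3 features tied to the class
noise = rng.normal(size=(500, 47))                          # 47 redundant/irrelevant features
X = np.hstack([informative, noise])

# Keep the 5 features with the highest mutual information with the label
selector = SelectKBest(mutual_info_classif, k=5).fit(X, y)
print(np.flatnonzero(selector.get_support()))               # indices of the selected features
```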

Relevance:

80.00%

Publisher:

Abstract:

Multigroup diffusion codes for three-dimensional LWR core analysis use as input data pre-generated homogenized few-group cross sections and discontinuity factors for certain combinations of state variables, such as temperatures or densities. The simplest way of compiling those data is tabulated libraries, where a grid covering the domain of the state variables is defined and the homogenized cross sections are computed at the grid points. Then, during the core calculation, an interpolation algorithm is used to compute the cross sections from the table values. Since interpolation errors depend on the distance between grid points, a certain refinement of the mesh is required to reach a target accuracy, which can lead to large data storage volumes and a large number of lattice transport calculations. In this paper, a simple and effective procedure to optimize the distribution of grid points for tabulated libraries is presented. Optimality is considered in the sense of building a non-uniform point distribution with the minimum number of grid points for each state variable that satisfies a given target accuracy in k-effective. The procedure consists of determining the sensitivity coefficients of k-effective to the cross sections using perturbation theory, and estimating the interpolation errors committed with different mesh steps for each state variable. These results allow evaluating the influence of the interpolation error of each cross section on k-effective for any combination of state variables, and estimating the optimal distance between grid points.
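The paper's actual procedure combines perturbation-theory sensitivity coefficients with per-variable interpolation error estimates; purely as a toy sketch of that arithmetic (the standard linear-interpolation error bound combined with a relative sensitivity to pick the largest admissible grid step), with every number below illustrative rather than taken from any real lattice calculation:

```python
import math

def max_grid_step(sensitivity, sigma, d2sigma_max, target_dk_over_k):
    """
    Largest step h for linear interpolation of a cross section sigma(x) such that the
    induced relative error in k-effective stays below the target, using
    |delta k / k| ~ |S| * e_rel  and  e_rel <= h^2 * max|sigma''| / (8 * sigma).
    """
    return math.sqrt(8.0 * sigma * target_dk_over_k /
                     (abs(sensitivity) * d2sigma_max))

h = max_grid_step(sensitivity=0.3,        # relative sensitivity of k-eff to this cross section
                  sigma=0.05,             # cross section value (cm^-1)
                  d2sigma_max=2.0e-8,     # max curvature w.r.t. fuel temperature (cm^-1 / K^2)
                  target_dk_over_k=1.0e-4)
print(f"max fuel-temperature step ~ {h:.0f} K")
```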

Relevance:

80.00%

Publisher:

Abstract:

Today's healthcare IT systems are diverse, with models ranging from predominant solutions adopted and created by large organizations to ad-hoc solutions developed by any competing company to meet specific needs. All of these systems are under similar financial pressures, not only from the current global economic conditions and rising healthcare costs, but also from a population that has embraced today's technological advances and demands a more personalized healthcare, in line with the technological advances it enjoys in other domains. The purpose of this work is to develop a business model aimed at supporting the exchange of information in the clinical domain. The goal of this business model is to increase competitiveness in the health IT sector without the need to resort to standards experts, by providing less expensive qualified technical profiles with the help of tools that simplify the use of interoperability standards. Existing open specifications such as FHIR, which publishes its documentation and tutorials under open licenses, will be used to enable interoperability between systems. The main advantage of FHIR is that it introduces a shift in the current conception of how clinical information is made available, which until now has been treated as a special case requiring ever more complex standards able to cover any scenario, however specific. The specification allows clinical information to be created and consumed through current web technologies (HTTP, HTML, OAuth2, JSON, XML) that everyone can use without particular training. Starting, therefore, from a market where information integration is almost nonexistent compared with other current environments, spending on clinical integration is expected to increase dramatically, with the technical challenges and their costs receding into the background. Spending will instead focus on the expectations raised by the current trend towards the personalization of patients' clinical data, with access to institutional records together with social, mobile and big data sources.
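Since FHIR exposes its resources over plain HTTP with JSON, a minimal sketch of reading a Patient resource could look like the following (the server base URL and patient id are placeholders, and the `requests` library is assumed to be available):

```python
import requests

FHIR_BASE = "https://example.org/fhir"   # placeholder FHIR server base URL

def get_patient(patient_id, token=None):
    """Fetch a FHIR Patient resource as JSON over plain HTTP."""
    headers = {"Accept": "application/fhir+json"}
    if token:                                   # e.g. an OAuth2 bearer token
        headers["Authorization"] = f"Bearer {token}"
    resp = requests.get(f"{FHIR_BASE}/Patient/{patient_id}", headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Example usage against a real server:
# patient = get_patient("example")
# print(patient["resourceType"], patient.get("name"))
```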

Relevance:

80.00%

Publisher:

Abstract:

Urban economic activities are an essential facet in defining city identity. Traditional approaches very often rely on the theoretical and quantitative side of such studies, excluding de facto a direct association between the findings and the tangible subject of the analysis. To fill this gap, the Big Data era and information visualization methodologies can help analysts, stakeholders and the general audience gain new insight into the field. In this paper, we provide some food for thought about the new opportunities arising in visual urban economies and present some visual results on possible scenarios.

Relevance:

80.00%

Publisher:

Abstract:

The cloud computing paradigm has recently received a great deal of interest from both industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, a company may wish to host its data and services on its own premises, or may have to comply with data protection laws. These circumstances make private cloud infrastructures desirable, either to complement public ones or to replace them entirely. Unfortunately, the lack of standardization has prevented management solutions for private infrastructures from developing adequately, and the multitude of available options has created in customers the fear of depending on one particular technology (technology lock-in). One cause of this problem is the misalignment between academic research and commercial products: the former focuses on studying idealized scenarios with little correspondence to the real world, while the latter consists of solutions developed without considering how they fit with the most common standards, or without disseminating their results. To address this problem, I propose a modular management system for private cloud infrastructures that focuses on the applications rather than only on the hardware resources. The management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with the most common standards. This model splits the environment into two views, which separate the concerns of each stakeholder from the rest of the information while at the same time relating the physical environment to the virtual machines deployed on top of it. In this model, cloud applications are divided into three generic types (Services, Big Data Jobs and Instance Reservations), so that the management system can exploit the specific characteristics of each type. The information model is complemented by a set of atomic, reversible and independent management actions, which determine the operations that can be performed on the environment and are used to realize the environment's scalability. I also describe a management engine that, starting from the state of the environment and using the aforementioned set of actions, is in charge of resource placement. It is divided into two tiers: the Application Managers layer, which deals only with the applications, and the Infrastructure Manager layer, which is responsible for the physical resources. The management engine follows a lifecycle with two phases, in order to better model the behavior of a real infrastructure. The resource placement problem is tackled during one phase (consolidation) by an integer programming solver, and during the other (online) by a purpose-built heuristic; several tests have shown that this combined approach is superior to other strategies. Finally, the management system is coupled with monitoring and actuator architectures: the former collects the necessary information from the environment, while the latter is modular in design, able to interface with several technologies and offering several access modes.
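The thesis does not enumerate its management actions in the abstract; as an illustrative sketch of what an atomic, reversible and independent action interface could look like (all class and method names here are hypothetical), with a simple roll-back loop for a placement plan:

```python
from abc import ABC, abstractmethod

class ManagementAction(ABC):
    """An atomic, reversible operation on the cloud environment."""

    @abstractmethod
    def apply(self, environment): ...

    @abstractmethod
    def revert(self, environment): ...

class MigrateVM(ManagementAction):
    def __init__(self, vm, source_host, target_host):
        self.vm, self.source_host, self.target_host = vm, source_host, target_host

    def apply(self, environment):
        environment.move(self.vm, self.target_host)      # hypothetical environment API

    def revert(self, environment):
        environment.move(self.vm, self.source_host)      # undo: move the VM back

def run_plan(actions, environment):
    """Apply a placement plan; roll back already-applied actions if one fails."""
    done = []
    try:
        for action in actions:
            action.apply(environment)
            done.append(action)
    except Exception:
        for action in reversed(done):
            action.revert(environment)
        raise
```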

Relevance:

80.00%

Publisher:

Abstract:

This Final Degree Project is framed within a system for the control and development of intelligent transport systems (ITS). The work comprises several lines of development within that framework, which arise from the need to improve road safety, traffic flow, road structure and maintenance by incorporating the most recent technologies. First, the work focuses on the development of a new system for processing road traffic data in real time that takes advantage of the Big Data, Cloud Computing and Map-Reduce technologies that have emerged in recent years. To that end, a preliminary study is made of the road traffic data generated by vehicles travelling on roads, focusing on the system used by the Dirección General de Tráfico of Spain and comparing it with that of companies offering location-based services (LBS). The Hadoop model used and the Map-Reduce process implemented in this analyzer system are then presented. Finally, the output data are prepared and sent to a basic web module that acts as a Geographic Information System (GIS).
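The project's own Map-Reduce job is not reproduced here; as a hedged sketch of the kind of job it describes, written in the Hadoop Streaming style (mapper and reducer read stdin and write tab-separated stdout, and in a real job they would live in two separate scripts passed to the streaming jar), assuming hypothetical input lines of the form `road_id,timestamp,speed_kmh`:

```python
#!/usr/bin/env python3
import sys

def mapper():
    """Emit (road_id, speed) pairs from raw traffic records."""
    for line in sys.stdin:
        try:
            road_id, _timestamp, speed = line.rstrip("\n").split(",")
            print(f"{road_id}\t{float(speed)}")
        except ValueError:
            continue            # skip malformed records

def reducer():
    """Average speed per road segment (input arrives sorted by key)."""
    current, total, count = None, 0.0, 0
    for line in sys.stdin:
        road_id, speed = line.rstrip("\n").split("\t")
        if road_id != current:
            if current is not None:
                print(f"{current}\t{total / count:.1f}")
            current, total, count = road_id, 0.0, 0
        total += float(speed)
        count += 1
    if current is not None:
        print(f"{current}\t{total / count:.1f}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```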

Relevance:

80.00%

Publisher:

Abstract:

The technological advances of recent years have increased the need to store enormous amounts of data, leading to disorder in the data storage process, to data becoming outdated, and to more complicated analysis. This situation raised great interest among organizations in finding an approach to obtain relevant information from these huge data stores. This is what is known as business intelligence: a set of tools, procedures and strategies for carrying out 'knowledge extraction', the term commonly used for extracting information that is useful to the organization itself. Specifically, this project uses the Knowledge Discovery in Databases (KDD) approach, which makes it possible to identify patterns and to manage efficiently the anomalies that may appear in a communications network. The approach covers everything from the selection of the raw data to its final analysis to determine patterns. The core of the whole KDD approach is data mining, which provides the technology needed to identify those patterns and extract knowledge. For this purpose, the free version of the RapidMiner tool is used, since it is more complete and easier to use than other tools such as KNIME or WEKA. Network management encompasses the whole deployment, supervision and maintenance process; it is in this process that all the anomalies occurring in the network are collected and monitored, and they can be stored in a repository. The goal of this project is to develop a theoretical approach, implement a prototype and carry out several experiments that allow patterns to be identified in network anomaly records. The MAWI Lab repository, in which daily anomalies are stored, has been studied, looking for characteristic yearly indications by detecting patterns. The different experiments and procedures of this study aim to demonstrate the usefulness of business intelligence for extracting information from a massive data store, for later analysis or future studies.
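The actual experiments are performed in RapidMiner; as a rough sketch of the kind of yearly-pattern check the project describes, here is an equivalent grouping step in Python/pandas over hypothetical MAWI-style anomaly records with `date` and `anomaly_type` columns (the real data would come from the MAWI Lab files):

```python
import pandas as pd

# Hypothetical daily anomaly records
records = pd.DataFrame({
    "date": pd.to_datetime(["2014-01-03", "2014-01-15", "2014-07-02",
                            "2015-01-05", "2015-07-09", "2015-07-21"]),
    "anomaly_type": ["scan", "scan", "flood", "scan", "flood", "flood"],
})

# Count anomalies per (month, type) across years to look for recurring seasonal patterns
records["month"] = records["date"].dt.month
pattern = records.groupby(["month", "anomaly_type"]).size().unstack(fill_value=0)
print(pattern)
```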

Relevance:

80.00%

Publisher:

Abstract:

Due to the huge increase in digital data volumes in recent years, a new parallel computing paradigm has arisen for processing big data efficiently. Many of the systems based on this paradigm, also called data-intensive computing systems, follow Google's MapReduce programming model. The main advantage of MapReduce systems is that they are built on the idea of sending the computation to where the data resides, aiming to provide scalability and efficiency. In failure-free scenarios these frameworks usually achieve good results; however, most of the scenarios in which they are used are characterized by the presence of failures, so these platforms typically incorporate fault tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and the providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault tolerance capabilities (or, more generally, reliability capabilities) and cost. This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems, looking for methodologies that introduce the minimum cost while guaranteeing an appropriate level of reliability. To achieve this, it proposes: (i) the formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and (iii) a novel feedback-based resource allocation system at the container level. These generic contributions have been instantiated for the Hadoop YARN architecture, which is nowadays the reference framework in the data-intensive computing systems community. The thesis demonstrates how all of these contributions outperform Hadoop YARN in terms of both reliability and resource efficiency.
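The failure detector formalized in the thesis is not specified in the abstract; as an illustrative sketch of the classic heartbeat-and-timeout style of detector that such an abstraction typically captures (the timeout and node names are purely illustrative):

```python
import time

class HeartbeatFailureDetector:
    """Suspect a node when no heartbeat has arrived within the timeout window."""

    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_seen = {}            # node id -> time of last heartbeat

    def heartbeat(self, node_id):
        self.last_seen[node_id] = time.monotonic()

    def suspected(self):
        now = time.monotonic()
        return [n for n, t in self.last_seen.items() if now - t > self.timeout_s]

detector = HeartbeatFailureDetector(timeout_s=0.1)
detector.heartbeat("worker-1")
detector.heartbeat("worker-2")
time.sleep(0.2)
detector.heartbeat("worker-2")      # worker-2 is still alive, worker-1 is not
print(detector.suspected())         # -> ['worker-1']
```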

Relevance:

80.00%

Publisher:

Abstract:

One of the most demanding needs in cloud computing and big data is that of having scalable and highly available databases. One way to address these needs is to leverage the scalable replication techniques developed over the last decade, which allow increasing both the availability and the scalability of databases. Many replication protocols have been proposed during this period; the main research challenge has been how to scale under the eager replication model, the one that provides consistency across replicas. This thesis provides an in-depth study of three eager database replication systems based on relational databases: Middle-R, C-JDBC and MySQL Cluster, and three systems based on in-memory data grids: JBoss Data Grid, Oracle Coherence and Terracotta Ehcache. The thesis explores these systems in terms of their architecture, replication protocols, fault tolerance and various other functionalities. It also provides an experimental analysis of these systems using state-of-the-art benchmarks: TPC-C and TPC-W for the relational systems, and the Yahoo! Cloud Serving Benchmark for the in-memory data grids. The thesis also discusses three graph databases, Neo4j, Titan and Sparksee, based on their architecture and transactional capabilities, and highlights the weaker transactional consistency provided by these systems. Finally, it discusses an implementation of snapshot isolation in the Neo4j graph database to provide stronger isolation guarantees for transactions.
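The Neo4j implementation itself is not shown in the abstract; as a minimal, storage-agnostic sketch of the snapshot isolation rules it refers to (reads see a snapshot taken at transaction start, and write-write conflicts are resolved by first-committer-wins), using a toy in-memory versioned store:

```python
import itertools

class SnapshotStore:
    """Toy multi-version store illustrating snapshot isolation semantics."""

    def __init__(self):
        self.versions = {}                 # key -> list of (commit_ts, value)
        self.clock = itertools.count(1)

    def begin(self):
        return {"start_ts": next(self.clock), "writes": {}}

    def read(self, txn, key):
        # Read the newest version committed at or before the transaction's snapshot
        visible = [(ts, v) for ts, v in self.versions.get(key, []) if ts <= txn["start_ts"]]
        return max(visible)[1] if visible else None

    def write(self, txn, key, value):
        txn["writes"][key] = value

    def commit(self, txn):
        # First committer wins: abort if any written key changed after our snapshot
        for key in txn["writes"]:
            versions = self.versions.get(key, [])
            if versions and versions[-1][0] > txn["start_ts"]:
                raise RuntimeError("write-write conflict, transaction aborted")
        commit_ts = next(self.clock)
        for key, value in txn["writes"].items():
            self.versions.setdefault(key, []).append((commit_ts, value))

store = SnapshotStore()
t1, t2 = store.begin(), store.begin()
store.write(t1, "x", 1)
store.write(t2, "x", 2)
store.commit(t1)
try:
    store.commit(t2)                       # conflicts with t1's committed write
except RuntimeError as e:
    print(e)
```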