917 results for big data.


Relevance:

100.00%

Publisher:

Abstract:

Given the recent advent of NGS technologies, capable of sequencing entire human genomes at reduced time and cost, the ability to extract information from data plays a fundamental role in the development of research. The computational problems connected with such analyses currently fall under the Big Data topic, with databases containing several types of experimental data of ever-growing size. This thesis deals with the implementation and benchmarking of the QDANet PRO algorithm, developed by the Biophysics group of the University of Bologna: the method processes high-dimensional data to extract a low-dimensional Signature of features with high classification performance, through an analysis pipeline that includes dimensionality-reduction algorithms. The method also generalizes to non-biological data characterized by high volume and complexity, the typical factors of Big Data. The QDANet PRO algorithm evaluates the performance of all possible pairs of features, estimating their discriminating power with a Naive Bayes Quadratic Classifier and then ranking them. Once a performance threshold has been selected, a network of features is built and its connected components are determined. Each subgraph is analyzed separately and reduced with methods based on network theory until the final Signature is extracted. The method, previously tested with positive results on datasets available to the research group, was compared both with results obtained on omics databases available in the literature, which constitute a reference in the field, and with existing algorithms that perform similar tasks. To reduce computation times, the algorithm was implemented in C++ on HPC systems, with the most critical parts parallelized using OpenMP libraries.
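
As a minimal illustration of the pair-ranking and network-construction steps described above, the following Python sketch uses scikit-learn's quadratic discriminant classifier and networkx; the function names and library choices are assumptions for illustration, not the thesis's actual C++/OpenMP implementation.

```python
# Hedged sketch of the two core steps described in the abstract: score every
# feature pair with a quadratic classifier, then threshold into a network.
from itertools import combinations

import networkx as nx
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score


def rank_feature_pairs(X, y, cv=5):
    """Score every feature pair with a quadratic classifier (cross-validated)."""
    scores = {}
    for i, j in combinations(range(X.shape[1]), 2):
        clf = QuadraticDiscriminantAnalysis()
        scores[(i, j)] = cross_val_score(clf, X[:, [i, j]], y, cv=cv).mean()
    return scores


def feature_components(scores, threshold):
    """Keep pairs above the performance threshold and return the connected
    components of the resulting feature network."""
    g = nx.Graph()
    g.add_edges_from(pair for pair, s in scores.items() if s >= threshold)
    # Each connected component is then reduced separately in the pipeline.
    return list(nx.connected_components(g))
```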

Relevance:

100.00%

Publisher:

Abstract:

Thanks to advanced technologies and social networks that allow data to be widely shared across the Internet, there is an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data in a variety of areas. Toward such demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. The TMCA algorithm is then proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis. In this framework, an affinity propagation-based summarization method is therefore also proposed to transform unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
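
The affinity propagation-based summarization step can be sketched with scikit-learn: cluster the items' feature vectors and keep each cluster's exemplar as the summary. This is a hedged stand-in for the dissertation's method; the helper name and the synthetic input are assumptions.

```python
# Hedged sketch of affinity-propagation-based summarization: cluster feature
# vectors and keep each cluster's exemplar as the summary item.
import numpy as np
from sklearn.cluster import AffinityPropagation


def summarize(features: np.ndarray):
    """Return indices of exemplar items and the cluster label of each item."""
    ap = AffinityPropagation(random_state=0).fit(features)
    return ap.cluster_centers_indices_, ap.labels_


# Usage: rows stand in for feature vectors of video shots or documents.
exemplars, labels = summarize(np.random.rand(50, 16))
```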

Relevance:

100.00%

Publisher:

Abstract:

Semantics, knowledge and Grids represent three spaces where people interact, understand, learn and create. Grids represent advanced cyber-infrastructures and their evolution. Big data influences the evolution of semantics, knowledge and Grids. Exploring semantics, knowledge and Grids on big data helps accelerate the shift of the scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies.

Relevance:

100.00%

Publisher:

Abstract:

Despite the existence of a multitude of studies on sentiment analysis, few works address its practical, real-world deployment and its integration with business intelligence and big data, so that such sentiment analyses are incorporated into an architecture (one supporting the whole process, from data acquisition to exploitation with BI tools) applied to crisis management. This work investigates how to join the worlds of analysis (sentiment and crisis) and technology (everything related to business intelligence, data mining and Big Data) and to create a Business Intelligence solution comprising data mining and sentiment analysis (based on large volumes of data) that helps companies and/or governments with crisis management. The author has studied ways of working with large volumes of data, known today as Big Data Science, or data science applied to large volumes of data (Big Data), and of joining this technology with sentiment analysis of a real situation (in this work, the chosen situation was the impeachment process of the President of Brazil, Dilma Rousseff). This combination used business intelligence techniques for building dashboards and ETL (Extract, Transform, Load) routines, as well as text mining and sentiment analysis techniques. The work was developed in several parts and with different data sources (datasets), owing to the various technology trials over the course of the project. One of the most important datasets of the project is the set of tweets collected between December 2015 and January 2016. The collected messages contained the word "Dilma" and were all gathered through the Twitter Streaming API. It is important to understand that what is published on the social network Twitter cannot be manipulated and represents the opinion of the person or entity publishing the message; for this reason, mining Twitter data can be very efficient and truthful. On 3 December 2015 the petition to open the impeachment process against the President of Brazil, Dilma Rousseff, was accepted by the president of the Chamber of Deputies, Eduardo Cunha (PMDB-RJ), creating expectation about the population's sentiment and the future of Brazil. Data on Google searches for the word Dilma were also collected; based on these data, the objective is to reach a global sentiment analysis (not one based only on the collected tweets). Using just two sources (Twitter and Google searches), a very large amount of data was extracted, but there are many other sources from which information about people's opinions on a particular topic can be obtained. Thus, a tool that can collect, extract and store so much data, and present the information effectively to help and support decision making, contributes to crisis management.
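
As a hedged illustration of the tweet-scoring step, a minimal lexicon-based sentiment sketch follows; the toy Portuguese lexicon and the function names are assumptions standing in for the project's actual text-mining pipeline.

```python
# Minimal lexicon-based sentiment scoring of collected tweets; the tiny
# lexicon below is a placeholder assumption, not the project's resource.
LEXICON = {"bom": 1, "ótimo": 2, "ruim": -1, "péssimo": -2, "crise": -1}


def score(tweet: str) -> int:
    """Sum the lexicon polarities of the words in a tweet."""
    return sum(LEXICON.get(w, 0) for w in tweet.lower().split())


tweets = ["Dilma fez um ótimo discurso", "Dilma e a crise, péssimo momento"]
for t in tweets:
    s = score(t)
    label = "positive" if s > 0 else "negative" if s < 0 else "neutral"
    print(label, "|", t)
```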

Relevance:

100.00%

Publisher:

Abstract:

Objectives: To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. Methods: A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. Results: The health and social care market has lagged behind other markets in the introduction of innovative IT solutions, and it faces a number of challenges as the use of big data increases. First, there is a need for a sustainable and trustworthy information chain through which the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation in which information is handled and transferred independently of where the expertise is located. Finally, there is an opportunity to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing, etc. Conclusions: The interdisciplinary area of big data, smart homes and ambient assisted living is no longer of interest only to IT developers; it also concerns decision makers, as customers make more informed choices among today's services. In the future it will be important to make information usable for managers and improve decision making, to tailor smart home services based on big data, to develop new business models, to increase competition, and to identify policies that ensure privacy, security and liability.

Relevance:

100.00%

Publisher:

Abstract:

The amount of data collected from an individual player during a football match has increased significantly in recent years, following technological evolution in positional tracking. However, given the short time that separates competitions, the common analysis of these data focuses on the magnitude of each player's actions, considering either technical or physical performance. This focus leaves a considerable amount of information out of performance optimization, particularly across a sequence of different matches of the same team. In this presentation, we introduce a tactical performance indicator that considers players' overall positioning and their level of coordination during the match. This indicator will be applied at different time scales, with a particular focus on possible practical applications.
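
One simple positioning-based indicator can be sketched as the per-frame team centroid and the players' mean distance from it; this generic computation is an assumption for illustration, not necessarily the indicator presented here.

```python
# Hedged sketch of a positioning-based indicator: per-frame team centroid
# and mean player distance from it (dispersion). A generic stand-in.
import numpy as np


def centroid_and_dispersion(positions: np.ndarray):
    """positions: array of shape (frames, players, 2), pitch coords in metres."""
    centroid = positions.mean(axis=1)                                # (frames, 2)
    spread = np.linalg.norm(positions - centroid[:, None, :], axis=2).mean(axis=1)
    return centroid, spread


# Usage on fake tracking data: 3000 frames, 10 outfield players.
xy = np.random.rand(3000, 10, 2) * [105, 68]
c, s = centroid_and_dispersion(xy)
```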

Relevance:

100.00%

Publisher:

Abstract:

Big data are reshaping the way we interact with technology, fostering new applications that increase the safety assessment of foods. An extraordinary amount of information is analysed using machine learning approaches aimed at detecting the existence, or predicting the likelihood, of future risks. Food business operators have to share the results of these analyses when applying to place regulated products on the market, while agri-food safety agencies (including the European Food Safety Authority) are exploring new avenues to increase the accuracy of their evaluations by processing Big data. Such an informational endowment brings with it opportunities and risks correlated with the extraction of meaningful inferences from data. However, conflicting interests and tensions among the involved entities - the industry, food safety agencies, and consumers - hinder the finding of shared methods to steer the processing of Big data in a sound, transparent and trustworthy way. A recent reform of the EU sectoral legislation, the lack of trust, and the presence of a considerable number of stakeholders highlight the need for ethical contributions aimed at steering the development and deployment of Big data applications. Moreover, Artificial Intelligence guidelines and charters published by European Union institutions and Member States have to be discussed in light of applied contexts, including the one at stake here. This thesis aims to contribute to these goals by discussing which principles should be put forward when processing Big data in the context of agri-food safety risk assessment. The research focuses on two intertwined topics - data ownership and data governance - by evaluating how the regulatory framework addresses the challenges raised by Big data analysis in these domains. The outcome of the project is a tentative Roadmap that identifies the principles to be observed when processing Big data in this domain and their possible implementations.

Relevance:

100.00%

Publisher:

Abstract:

Big data are characterized by the well-known four Vs: volume, velocity, veracity and variety. The last is of critical importance in schema-less systems, where the concept of a schema is not rigid. NoSQL databases belong to this context; they offer data models other than the classic relational one, namely document, wide-column, graph and key-value. The term multistore refers to the use of databases with different data models exposed through a single query interface, both to exploit the characteristics of a given data model and to benefit from the higher performance of NoSQL databases in distributed contexts. Analyzing data within a multistore is much more complex: the data must be integrated and consistency must be restored. Hence the need for softer, pay-as-you-go approaches: integration is lightweight and incremental, sidesteps the complexity of traditional integration approaches, and returns best-effort or approximate answers. Following this philosophy, the concept of a dataspace arises as a logical, high-level representation of the available datasets. The goal of this thesis is to study, design and implement a way of querying heterogeneous data sources in a multistore context for situational analyses, addressing the variety issues and relying on the integration provided by the dataspace. The final aim is to develop a prototype exposing an interface to query the dataspace with GPSJ semantics, the most common class of queries in OLAP applications. A query on the dataspace must be translated into a series of queries on the sources and, through a middleware layer, the partial results must be integrated so that the query result is both correct and complete.
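
A minimal sketch of the middleware idea follows, assuming two toy partial results (one from a relational source, one flattened from a document store) integrated in pandas with GPSJ semantics (selection, join, group-by with aggregation); all names and data are illustrative, not the prototype's code.

```python
# Hedged sketch of the middleware step: fetch partial results from two
# heterogeneous sources, join them, and answer a GPSJ query.
import pandas as pd

# Partial result from a relational source.
orders = pd.DataFrame({"cust_id": [1, 1, 2], "amount": [10.0, 25.0, 7.5]})
# Partial result from a document store, flattened to tabular form.
customers = pd.DataFrame([{"cust_id": 1, "country": "IT"},
                          {"cust_id": 2, "country": "DE"}])

# GPSJ: join, apply the selection predicate, then group and aggregate.
joined = orders.merge(customers, on="cust_id")           # join
selected = joined[joined["amount"] > 5]                  # selection
result = selected.groupby("country")["amount"].sum()     # generalized projection
print(result)
```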

Relevance:

100.00%

Publisher:

Abstract:

The fast development of Information and Communication Technologies (ICT) offers new opportunities to realize future smart cities. To understand, manage and forecast a city's behavior, it is necessary to analyze different kinds of data from the most varied data acquisition systems. The aim of this research activity, in the framework of Data Science and Complex Systems Physics, is to provide stakeholders with new knowledge tools to improve the sustainability of mobility demand in future cities. From this perspective, the governance of the mobility demand generated by large tourist flows is becoming a vital issue for the quality of life in the historical centers of Italian cities, and it will worsen in the near future due to the continuing globalization process. Another critical theme is sustainable mobility, which aims to reduce the use of private transportation in cities and to improve multimodal mobility. We analyze the statistical properties of urban mobility in Venice, Rimini, and Bologna using different datasets provided by companies and local authorities. We develop algorithms and tools for cartography extraction, trip reconstruction, multimodality classification, and mobility simulation. We show the existence of characteristic mobility paths and statistical properties that depend on the transport means and type of user. Finally, we use our results to model and simulate the overall behavior of cars moving in the Emilia-Romagna Region and of pedestrians moving in Venice, with software able to replicate in silico the demand for mobility and its dynamics.
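
As an illustration of the trip-reconstruction step, a minimal sketch splits a GPS trace into trips wherever the gap between consecutive fixes exceeds a stop threshold; the threshold value and the record layout are assumptions, not the thesis's actual algorithm.

```python
# Hedged sketch of trip reconstruction: cut a time-sorted GPS trace at long
# pauses, treating each resulting segment as one trip.
from datetime import timedelta

STOP_GAP = timedelta(minutes=15)   # assumed pause threshold


def split_trips(fixes):
    """fixes: list of (timestamp, lat, lon) tuples sorted by timestamp."""
    trips, current = [], [fixes[0]]
    for prev, cur in zip(fixes, fixes[1:]):
        if cur[0] - prev[0] > STOP_GAP:   # long pause => trip boundary
            trips.append(current)
            current = []
        current.append(cur)
    trips.append(current)
    return trips
```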

Relevance:

100.00%

Publisher:

Abstract:

The idea behind the project is to develop a methodology for analyzing and developing techniques for diagnosing and predicting the state of charge and state of health of lithium-ion batteries for automotive applications. For lithium-ion batteries, residual functionality is measured in terms of state of health; this value cannot be directly associated with a measurable quantity, so it must be estimated. The development of the algorithms is based on the identification of the causes of battery degradation, in order to model and predict its trend. Models have therefore been developed that are able to predict the electrical, thermal and aging behavior. In addition to the model, it was necessary to develop algorithms capable of monitoring the state of the battery, both online and offline. This was made possible by algorithms based on Kalman filters, which allow the system state to be estimated in real time. Machine learning algorithms, which allow offline analysis of battery deterioration through a statistical approach, make it possible to analyze information from the entire fleet of vehicles. The two systems work in synergy to achieve the best performance. Validation was performed with laboratory tests on different batteries and under different conditions. The development of the model made it possible to shorten the experimental tests: some specific phenomena were tested in the laboratory, while the other cases were generated artificially.
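
The Kalman-filter-based online estimation can be sketched as a scalar filter that predicts state of charge (SOC) by coulomb counting and corrects it with a voltage-derived SOC measurement; all constants below are toy assumptions, not the project's calibrated values.

```python
# Hedged sketch of a scalar Kalman filter for SOC: predict by coulomb
# counting, correct with a voltage-based SOC estimate. Toy constants.
CAP_AS = 2.5 * 3600          # assumed cell capacity in ampere-seconds
Q, R = 1e-7, 1e-3            # assumed process / measurement noise variances


def kalman_soc(soc, p, current_a, dt, soc_meas):
    """One filter step; soc in [0, 1], p is the estimate variance."""
    # Predict: coulomb counting (SOC drops while discharging).
    soc_pred = soc - current_a * dt / CAP_AS
    p_pred = p + Q
    # Update with the voltage-derived SOC measurement.
    k = p_pred / (p_pred + R)
    soc_new = soc_pred + k * (soc_meas - soc_pred)
    return soc_new, (1 - k) * p_pred
```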

Relevance:

100.00%

Publisher:

Abstract:

The fourth industrial revolution, also known as Industry 4.0, has rapidly gained traction in businesses across Europe and the world, becoming a central theme in small, medium, and large enterprises alike. This new paradigm shifts the focus from locally based, barely automated firms to a globally interconnected industrial sector, stimulating economic growth and productivity and supporting the upskilling and reskilling of employees. However, despite the maturity and scalability of information and cloud technologies, the support systems already present in the field are often outdated and lack the necessary security, access control, and advanced communication capabilities. This dissertation proposes architectures and technologies designed to bridge the gap between Operational and Information Technology in a manner that is non-disruptive, efficient, and scalable. The proposal presents cloud-enabled data-gathering architectures that make use of the newest IT and networking technologies to achieve the desired quality of service and non-functional properties. By harnessing industrial and business data, processes can be optimized even before product sale, while the integrated environment enhances data exchange for post-sale support. The architectures have been tested and have shown encouraging performance results, providing a promising solution for companies looking to embrace Industry 4.0, enhance their operational capabilities, and prepare themselves for the upcoming fifth, human-centric revolution.
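
A minimal sketch of the data-gathering idea follows: batch machine readings and forward them to a cloud endpoint as JSON over HTTPS, using only the Python standard library. The endpoint URL and payload shape are placeholder assumptions, not part of the dissertation's architectures.

```python
# Hedged sketch of an OT-to-cloud gateway step: serialize a batch of machine
# readings and POST it to an ingestion endpoint. URL and schema are assumed.
import json
import time
import urllib.request


def push_batch(readings, url="https://cloud.example.com/ingest"):
    """Send one batch of readings; returns the HTTP status code."""
    body = json.dumps({"ts": time.time(), "readings": readings}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status
```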

Relevance:

100.00%

Publisher:

Abstract:

The chapters of the thesis focus on a limited set of selected themes in EU privacy and data protection law. Chapter 1 sets out the general introduction to the research topic. Chapter 2 describes the methodology used in the research. Chapter 3 conceptualises the basic notions from a legal standpoint. Chapter 4 examines the current regulatory regime applicable to digital health technologies, healthcare emergencies, privacy, and data protection. Chapter 5 provides case studies on the applications deployed in the Covid-19 scenario, from the perspective of privacy and data protection. Chapter 6 addresses the post-Covid European regulatory initiatives on the subject matter and their potential effects on privacy and data protection. Chapter 7 is the outcome of a six-month internship with a company in Italy and focuses on the protection of fundamental rights through common standardisation and certification, demonstrating that such standards can serve as supporting tools to guarantee the right to privacy and data protection in digital health technologies. The thesis concludes with the observation that finding and transposing European privacy and data protection standards into scenarios such as public healthcare emergencies where digital health technologies are deployed requires rapid coordination between the European Data Protection Authorities and the Member States to guarantee that individual privacy and data protection rights are ensured.

Relevance:

100.00%

Publisher:

Abstract:

With the advent of new technologies it is increasingly easy to find data of different kinds from ever more accurate sensors that measure the most disparate physical quantities with different methodologies. The collection of data thus becomes progressively more important, taking the form of archiving, cataloging, and online and offline consultation of information. Over time, the amount of data collected can become so large that it contains information that cannot easily be explored manually or with basic statistical techniques. Big Data therefore become the object of more advanced investigation techniques, such as Machine Learning and Deep Learning. This work describes some applications in the world of precision livestock farming, concerning the heat stress suffered by dairy cows. Experimental barns in Italy and Germany were involved in the training and testing of a Random Forest algorithm, which predicts milk production from the microclimatic conditions of the previous days with satisfactory accuracy. Furthermore, in order to define an objective method for identifying production drops, a Robust Statistics technique was used and compared with the Wood model, which is typically used as an analytical model of the lactation curve. Its application to some sample lactations, and the results obtained, give us confidence in the future use of this method.
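
The Wood model mentioned above describes the lactation curve as y(t) = a * t^b * e^(-c*t); a minimal fitting sketch with scipy follows, run on synthetic data (an assumption, not the experimental barns' records).

```python
# Hedged sketch: fit the Wood lactation model y(t) = a * t**b * exp(-c*t)
# to daily milk yields with non-linear least squares. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit


def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)


days = np.arange(1, 306)                    # one ~305-day lactation
yields = wood(days, 18.0, 0.25, 0.004) + np.random.normal(0, 0.8, days.size)
(a, b, c), _ = curve_fit(wood, days, yields, p0=(15, 0.2, 0.003))
print(f"a={a:.2f}, b={b:.3f}, c={c:.4f}")
```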

Relevance:

100.00%

Publisher:

Abstract:

The thesis represents the conclusive outcome of the European Joint Doctorate programme in Law, Science & Technology, funded by the European Commission through the Marie Skłodowska-Curie Innovative Training Networks actions within H2020, grant agreement n. 814177. The tension between data protection and privacy on the one hand, and the need to allow further uses of processed personal data on the other, is investigated, tracing the technological development of the de-anonymization/re-identification risk with an exploratory survey. After assessing its extent, the thesis asks whether a certain degree of anonymity can still be guaranteed, focusing on a double perspective: an objective and a subjective one. The objective perspective focuses on the data processing models per se, while the subjective perspective investigates whether the distribution of roles and responsibilities among stakeholders can ensure data anonymity.