747 results for Healthcare Big Data Analytics


Relevance:

100.00%

Publisher:

Abstract:

Automatic generation of classification rules has become an increasingly popular technique in commercial applications such as Big Data analytics, rule-based expert systems and decision-making systems. However, a principal problem with most methods for generating classification rules is overfitting of the training data. With Big Data, this may result in the generation of a large number of complex rules, which not only increases computational cost but also lowers accuracy in predicting further unseen instances. This has led to the need for pruning methods that simplify rules. In addition, once generated, classification rules are used to make predictions. For efficiency, the first rule that fires should be found as quickly as possible when searching through a rule set; a suitable structure is therefore required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for the construction of rule-based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations. They then introduce some novel methods and techniques they have recently developed, and discuss them in comparison with existing ones with respect to efficient processing of Big Data.
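The two operations the abstract highlights, first-fire prediction over a rule set and pruning for simplification, can be sketched in a few lines. The rule format, threshold, and helper names below are illustrative assumptions, not the chapter's actual methods.

```python
# Minimal sketch: rules as an ordered list, first-fire prediction by linear
# search, and naive accuracy-based pruning. Purely illustrative.

def fires(rule, instance):
    """A rule is a (conditions, label) pair; conditions map attribute -> value."""
    conditions, _ = rule
    return all(instance.get(attr) == val for attr, val in conditions.items())

def predict(rules, instance, default="unknown"):
    """Return the label of the first rule that fires, searching in order."""
    for rule in rules:
        if fires(rule, instance):
            return rule[1]
    return default

def prune(rules, data, min_accuracy=0.6):
    """Naive simplification: keep only rules whose accuracy on the
    training instances they cover meets a threshold."""
    kept = []
    for rule in rules:
        covered = [(x, y) for x, y in data if fires(rule, x)]
        if covered:
            acc = sum(y == rule[1] for _, y in covered) / len(covered)
            if acc >= min_accuracy:
                kept.append(rule)
    return kept

rules = [({"outlook": "sunny"}, "yes"), ({"wind": "strong"}, "no")]
data = [({"outlook": "sunny"}, "yes"), ({"wind": "strong"}, "no"),
        ({"wind": "strong"}, "no")]
pruned = prune(rules, data)
label = predict(pruned, {"outlook": "sunny"})
```

A more compact representation of the pruned rule set (e.g. a decision list or trie, as the chapter's "rule representation" operation suggests) would replace the linear search with something faster on large rule sets.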

Relevance:

100.00%

Publisher:

Abstract:

The new digital technologies have led to widespread use of cloud computing, recognition of the potential of big data analytics, and significant progress in aspects of the Internet of Things, such as home automation, smart cities and grids and digital manufacturing. In addition to closing gaps in respect of the basic necessities of access and usage, now the conditions must be established for using the new platforms and finding ways to participate actively in the creation of content and even new applications and platforms. This message runs through the three chapters of this book. Chapter I presents the main features of the digital revolution, emphasizing that today’s world economy is a digital economy. Chapter II examines the region’s strengths and weaknesses with respect to digital access and consumption. Chapter III reviews the main policy debates and urges countries to take a more proactive approach towards, for example, regulation, network neutrality and combating cybercrime. The conclusion highlights two crucial elements: first, the need to take steps towards a single regional digital market that can compete in a world of global platforms by tapping the benefits of economies of scale and developing network economies; and second, the significance of the next stage of the digital agenda for Latin America and the Caribbean (eLAC2018), which will embody the latest updates to a cooperation strategy that has been in place for over a decade.

Relevance:

100.00%

Publisher:

Abstract:

Context-aware applications, which can adapt their behaviors to changing environments, are attracting increasing attention. To reduce the complexity of developing such applications, context-aware middleware, which introduces context awareness into traditional middleware, provides a homogeneous interface to generic context management solutions. This paper surveys state-of-the-art context-aware middleware architectures proposed between 2009 and 2015. First, preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context-aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security and privacy, context awareness level, and cloud-based big data analytics. The analysis shows that no existing context-aware middleware architecture complies with all requirements. Finally, challenges are pointed out as open issues for future work.

Relevance:

100.00%

Publisher:

Abstract:

This thesis concerns the widespread trend towards the digital transformation of business processes. This evolution, which involves the use of modern information technologies including Cloud Computing, Big Data Analytics and Mobile tools, is not without pitfalls, which must be identified and addressed appropriately as they arise. The thesis focuses on a specific company case, the well-known Bologna-based company FAAC spa, and on its purchasing function. In the procurement area, the company needs to restructure and digitalise its request-for-quotation (RfQ) process towards its suppliers, so that the purchasing function can concentrate on implementing the corporate strategy rather than on day-to-day operations. This work therefore carries out a project to implement a dedicated e-procurement platform for managing RfQs. First, some project management examples from the literature are analysed, and a model for managing this specific project is defined. The work then comprises: a phase defining the company's continuity objectives, an As-Is analysis of the processes, the definition of the specific project objectives and of the KPIs for performance evaluation, the design of the software platform and, finally, some considerations on the risks and the alternatives to the implementation.

Relevance:

100.00%

Publisher:

Abstract:

People make all kinds of decisions throughout their lives, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems must be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models, based on the MEV and mixed logit models, that allow paths to be correlated. The resulting route choice models become expensive to estimate, and we address this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
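The dynamic-programming step at the heart of such models, computing expected maximum utilities by backward induction and deriving logit choice probabilities from them, can be sketched on a toy acyclic network. Node names, utilities, and the simple labeling solver below are illustrative assumptions, not the thesis's actual formulations.

```python
import math

# Illustrative sketch of value functions and choice probabilities in a
# dynamic discrete choice (recursive-logit-style) route choice model on a
# small acyclic network. Purely illustrative; real models are more general.

# arcs[node] = list of (next_node, deterministic utility of taking the arc)
arcs = {
    "o": [("a", -1.0), ("b", -1.5)],
    "a": [("d", -1.0)],
    "b": [("d", -0.5)],
    "d": [],  # destination: no outgoing arcs
}

def value_functions(arcs, dest):
    """Backward induction on an acyclic network: V(k) is the expected
    maximum utility (logsum) from node k to the destination."""
    V = {dest: 0.0}
    while len(V) < len(arcs):  # terminates because the network is acyclic
        for k, out in arcs.items():
            if k in V or any(n not in V for n, _ in out):
                continue
            V[k] = math.log(sum(math.exp(u + V[n]) for n, u in out))
    return V

def choice_probabilities(arcs, V, k):
    """Logit probabilities over the outgoing arcs at node k."""
    weights = [math.exp(u + V[n]) for n, u in arcs[k]]
    total = sum(weights)
    return {n: w / total for (n, _), w in zip(arcs[k], weights)}

V = value_functions(arcs, "d")
probs = choice_probabilities(arcs, "d" and V, "o") if False else choice_probabilities(arcs, V, "o")
```

Here both routes from "o" to "d" have total utility -2, so the model assigns each probability 0.5; on real networks this backward pass is exactly the dynamic programming problem whose cost the thesis's methods aim to reduce.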

Relevance:

100.00%

Publisher:

Abstract:

The era of big data opens up new opportunities in personalised medicine, preventive care, chronic disease management, and in the telemonitoring and management of patients with implanted devices. The rich data accumulating within online services and internet companies provide a microscope to study human behaviour at scale, and to ask completely new questions about the interplay between behavioural patterns and health. In this paper, we shed light on a particular aspect of data-driven healthcare: autonomous decision-making. We first look at three examples where we can expect data-driven decisions to be taken autonomously by technology, with no or limited human intervention. We then discuss some of the technical and practical challenges that can be expected, and sketch the research agenda to address them.

Relevance:

100.00%

Publisher:

Abstract:

Talk of Big Data seems to be everywhere. Indeed, the apparently value-free concept of ‘data’ has seen a spectacular broadening of popular interest, shifting from the dry terminology of labcoat-wearing scientists to the buzzword du jour of marketers. In the business world, data is increasingly framed as an economic asset of critical importance, a commodity on a par with scarce natural resources (Backaitis, 2012; Rotella, 2012). It is social media that has most visibly brought the Big Data moment to media and communication studies, and beyond it, to the social sciences and humanities. Social media data is one of the most important areas of the rapidly growing data market (Manovich, 2012; Steele, 2011). Massive valuations are attached to companies that directly collect and profit from social media data, such as Facebook and Twitter, as well as to resellers and analytics companies like Gnip and DataSift. The expectation attached to the business models of these companies is that their privileged access to data and the resulting valuable insights into the minds of consumers and voters will make them irreplaceable in the future. Analysts and consultants argue that advanced statistical techniques will allow the detection of ongoing communicative events (natural disasters, political uprisings) and the reliable prediction of future ones (electoral choices, consumption)...

Relevance:

100.00%

Publisher:

Abstract:

Environmental monitoring is becoming critical as human activity and climate change place greater pressures on biodiversity, leading to an increasing need for data to make informed decisions. Acoustic sensors can help collect data across large areas for extended periods, making them attractive in environmental monitoring. However, managing and analysing large volumes of environmental acoustic data is a great challenge and consequently hinders effective utilization of the big datasets collected. This paper presents an overview of our current techniques for collecting, storing and analysing large volumes of acoustic data efficiently, accurately, and cost-effectively.

Relevance:

100.00%

Publisher:

Abstract:

Social Media Analytics is a new field of research in which interdisciplinary methods are combined, extended and adapted in order to analyse social media data. Besides answering research questions, a further goal is to provide architectural designs for the development of new information systems and applications based on social media. This article presents the most important aspects of the field of Social Media Analytics and points to the need for an interdisciplinary research agenda, in whose creation and execution information systems research (Wirtschaftsinformatik) has an important role to play.

Relevance:

100.00%

Publisher:

Abstract:

Social Media Analytics is an emerging interdisciplinary research field that aims at combining, extending, and adapting methods for the analysis of social media data. On the one hand, it can support IS and other research disciplines in answering their research questions; on the other hand, it helps to provide architectural designs as well as solution frameworks for new social-media-based applications and information systems. The authors suggest that IS should contribute to this field and help to develop and process an interdisciplinary research agenda.

Relevance:

100.00%

Publisher:

Abstract:

Twitter is a particularly useful source of social media data: using the Twitter API (the Application Programming Interface, which provides structured access to communication data in standardised formats), researchers with a little effort and sufficient technical resources can build very large archives of publicly posted tweets on specific topics, areas of interest, or events. In essence, the API delivers very long lists of hundreds, thousands or millions of tweets together with the metadata attached to them; these data can then be extracted, combined, and visualised in a wide variety of ways in order to understand the dynamics of social media communication. This research is often built around long-established research questions, but is typically conducted at a previously unknown scale. The projects of media and communication scholars such as Papacharissi and de Fatima Oliveira (2012), Wood and Baughman (2012) or Lotan et al. (2011), to name just a handful of recent examples, are fundamentally built on Twitter datasets that now routinely comprise millions of tweets and associated metadata, collected according to a wide range of criteria. What all these cases have in common, however, is the need to break new methodological ground in processing and analysing such large datasets of mediated social interaction.
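One elementary step in analysing such a tweet archive, counting hashtags from the per-tweet metadata, might look like the following. The field names mirror the classic Twitter API JSON layout (`entities.hashtags`), but treat that layout as an assumption about one particular payload format rather than a current API guarantee.

```python
from collections import Counter

# Illustrative sketch: aggregate hashtag frequencies from an archive of
# tweet objects as the API might return them. Field names are assumed.

tweets = [
    {"text": "big data everywhere",
     "entities": {"hashtags": [{"text": "bigdata"}]}},
    {"text": "crisis talk",
     "entities": {"hashtags": [{"text": "bigdata"}, {"text": "crisis"}]}},
]

def hashtag_counts(tweets):
    """Count hashtags across an archive, case-insensitively."""
    counts = Counter()
    for tweet in tweets:
        for tag in tweet.get("entities", {}).get("hashtags", []):
            counts[tag["text"].lower()] += 1
    return counts

counts = hashtag_counts(tweets)
```

At the scale the chapter describes (millions of tweets), the same aggregation would typically run over a stream or a distributed store rather than an in-memory list, but the extraction logic is the same.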

Relevance:

100.00%

Publisher:

Abstract:

In this chapter, we draw out the relevant themes from a range of critical scholarship in the small body of digital media and software studies work that has focused on the politics of Twitter data and the sociotechnical means by which access to it is regulated. We highlight in particular the contested relationships between social media research (in both academic and non-academic contexts) and the data wholesale, retail, and analytics industries that feed on it. In the second major section of the chapter we discuss in detail the pragmatic edge of these politics, in terms of what kinds of scientific research are and are not possible in the current political economy of Twitter data access. Finally, at the end of the chapter we return to the much broader implications of these issues for the politics of knowledge, demonstrating how the apparently microscopic question of how the Twitter API mediates access to Twitter data actually inscribes and influences the macro level of the global political economy of science itself, by re-inscribing institutional and traditional disciplinary privilege. We conclude with some speculations about future developments in data rights and data philanthropy that may at least mitigate some of these negative impacts.