884 results for Connectivity, Connected Car, Big Data, KPI


Relevance:

100.00%

Publisher:

Abstract:

Technologies for Big Data and Data Science are attracting increasing research interest nowadays. This paper introduces the prototype architecture of a tool aimed at solving Big Data optimization problems. Our tool combines the jMetal framework for multi-objective optimization with Apache Spark, a technology that is gaining momentum. In particular, we make use of the streaming facilities of Spark to feed an optimization problem with data from different sources. We demonstrate the use of our tool by solving a dynamic bi-objective instance of the Traveling Salesman Problem (TSP) based on near real-time traffic data from New York City, which is updated several times per minute. Our experiment shows that jMetal and Spark can be integrated to provide a software platform for dealing with dynamic multi-objective optimization problems.
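
The integration pattern described in this abstract can be suggested with a minimal, self-contained Python sketch (not the authors' jMetal/Spark code): a dynamic bi-objective TSP whose cost matrices are swapped in by a background feed that stands in for the Spark streaming consumer. All class and function names, and the random data, are illustrative assumptions.

```python
import random
import threading
import time

class DynamicBiObjectiveTSP:
    """Bi-objective TSP (total distance, total travel time) whose cost
    matrices can be updated while an optimizer runs, mimicking the
    paper's streaming-fed problem (illustrative sketch)."""

    def __init__(self, n_cities):
        self.n = n_cities
        self.lock = threading.Lock()
        self.dist = self._random_matrix()
        self.times = self._random_matrix()

    def _random_matrix(self):
        return [[random.random() for _ in range(self.n)] for _ in range(self.n)]

    def update(self, new_dist, new_times):
        # Called by the streaming consumer (Spark in the paper) whenever
        # fresh traffic data arrives.
        with self.lock:
            self.dist, self.times = new_dist, new_times

    def evaluate(self, tour):
        # Both objectives are computed against the latest data snapshot.
        with self.lock:
            d = sum(self.dist[tour[i]][tour[(i + 1) % self.n]] for i in range(self.n))
            t = sum(self.times[tour[i]][tour[(i + 1) % self.n]] for i in range(self.n))
        return d, t

def traffic_feed(problem, period_s=1.0):
    """Stand-in for the Spark streaming job polling NYC traffic data."""
    while True:
        problem.update(problem._random_matrix(), problem._random_matrix())
        time.sleep(period_s)

problem = DynamicBiObjectiveTSP(n_cities=20)
threading.Thread(target=traffic_feed, args=(problem,), daemon=True).start()
print(problem.evaluate(list(range(20))))  # objective values for one tour
```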

Relevance:

100.00%

Publisher:

Abstract:

This essay is presented as a master's thesis within the information technology law program. It examines various business models whose common characteristic is the commercialization of data in the context of information technology. The commercial practices observed are little known, and one of the objectives is to inform the reader about how these practices operate. To properly frame the issues, the essay first discusses the theoretical concepts of privacy and personal data protection. With this overview in place, the practices of "data brokerage", "cloud computing", and "analytics" solutions are dissected. Throughout this description, the legal issues raised by each aspect of the practice in question are examined. Finally, the last chapter of this essay is devoted to two issues, the role of consent and data security, which do not stem from any specific commercial practice but are above all direct consequences of the evolution of information technology.

Relevance:

100.00%

Publisher:

Abstract:

Virtually every sector of business and industry that uses computing, including financial analysis, search engines, and electronic commerce, incorporates Big Data analysis into its business model. Sophisticated clustering algorithms are popular for deducing the nature of data by assigning labels to unlabeled data. We address two main challenges in Big Data. First, by definition, the volume of Big Data is too large to be loaded into a computer's memory (this volume changes based on the computer used or available, but there is always a data set that is too large for any computer). Second, in real-time applications, the velocity of new incoming data prevents historical data from being stored and future data from being accessed. Therefore, we propose our Streaming Kernel Fuzzy c-Means (stKFCM) algorithm, which significantly reduces both computational complexity and space complexity. The proposed stKFCM requires only O(n²) memory, where n is the (predetermined) size of the data subset (or data chunk) processed at each time step, which makes the algorithm truly scalable (n can be chosen based on the available memory). Furthermore, only 2n² elements of the full N × N kernel matrix (where N >> n) need to be calculated at each time step, reducing both the time spent producing kernel elements and the complexity of the FCM algorithm. Empirical results show that stKFCM, even with relatively small n, can provide clustering performance as accurate as kernel fuzzy c-means run on the entire data set while achieving a significant speedup.
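
To make the chunked computation concrete, here is a hedged numpy sketch of one time step of kernel fuzzy c-means on an n-point chunk, working purely from the n × n chunk kernel. It is a simplified illustration of the memory bound, not the authors' full stKFCM (which also carries weighted summary points between chunks); the RBF kernel and toy data are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_fcm_chunk(K, c=3, m=2.0, n_iter=50, seed=0):
    """Kernel fuzzy c-means on one n-point chunk, using only the
    n x n chunk kernel K (in line with the O(n^2) memory bound)."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    U = rng.random((c, n))
    U /= U.sum(0)                      # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        s = W.sum(1, keepdims=True)
        # Squared feature-space distance of each point to each implicit
        # cluster centre, expressed purely via kernel evaluations.
        d2 = (np.diag(K)[None, :]
              - 2 * (W @ K) / s
              + (W @ K @ W.T).diagonal()[:, None] / s ** 2)
        d2 = np.maximum(d2, 1e-12)
        U = d2 ** (-1 / (m - 1))       # standard FCM membership update
        U /= U.sum(0)
    return U

chunk = np.random.default_rng(1).normal(size=(100, 2))   # n = 100 points
U = kernel_fcm_chunk(rbf_kernel(chunk, chunk))
print(U.argmax(0)[:10])                                   # hard labels
```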

Relevance:

100.00%

Publisher:

Abstract:

Big Data is driving a global revolution. Across all sectors, public and private, and across industries such as retail, healthcare, media, and transport, Big Data is influencing the lives of billions of people. Its impact is substantial, yet so unobtrusive that it goes unnoticed by most people. Business Intelligence and Advanced Analytics applications aim to study and extract information from Big Data. This work studies the transition from the former to the latter, highlighting similarities and differences.

Relevance:

100.00%

Publisher:

Abstract:

As economies, societies, and environments change, official statistics evolve and develop to reflect those changes. In reaction to disruptive innovations arising from globalisation, technological advances, and cultural changes, the pace of change of official statistics will accelerate in the future. The motivation for change may also be more existential than that of the past as official statisticians consider the survival of their discipline. This article examines some of the emerging developments and questions whether they present threats or offer opportunities.

Relevance:

100.00%

Publisher:

Abstract:

As mechatronic devices and components become increasingly integrated with and within wider systems concepts such as Cyber-Physical Systems and the Internet of Things, design engineers are faced with new sets of challenges in areas such as privacy. The paper looks at the current state, and potential future, of privacy legislation, regulations, and standards, and considers how these are likely to impact the way in which mechatronics is perceived and viewed. The emphasis is therefore not on technical issues, though these are brought into consideration where relevant, but on the soft, or human-centred, issues associated with achieving user privacy.

Relevance:

100.00%

Publisher:

Abstract:

Objective: To identify the barriers to the unification of an Electronic Health Record (EHR) in Colombia. Materials and Methods: A qualitative study was conducted, with semi-structured interviews of professionals and experts from 22 health-sector institutions in Bogotá and in the departments of Cundinamarca, Santander, Antioquia, Caldas, Huila, and Valle del Cauca. Results: Colombia is in a structuring phase for the implementation of a Unified Electronic Health Record (UEHR). Unification is currently underway in 42 public IPSs in the department of Cundinamarca; EHR development in the country is private and done in-house, owing to the particular needs of each IPS. Conclusions: Human, financial, legal, organizational, technical, and professional barriers were identified in the departments surveyed. Unification of the EHR was found to depend on an agreement of wills among public- and private-sector IPSs, EPSs, and the national government.

Relevance:

100.00%

Publisher:

Abstract:

Given the high standards expected of diagnostic medical imaging, the analysis of waiting-list information across different information systems is of the utmost importance. Such analysis may, on the one hand, improve diagnostic quality and, on the other, reduce waiting times, with a concomitant increase in service quality and a reduction in the inherent financial costs. Hence, the purpose of this study is to assess waiting times in the delivery of diagnostic medical imaging services such as computed tomography and magnetic resonance imaging. This work is therefore focused on the development of a decision support system for assessing waiting times in diagnostic medical imaging, using operational data on selected attributes extracted from distinct information systems. The computational framework is built on top of a Logic Programming Case-based Reasoning approach to Knowledge Representation and Reasoning that caters for the handling of incomplete, unknown, or even self-contradictory information.
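
The paper's framework is logic-programming based; as a rough illustration of the underlying case-based-reasoning idea only, the Python sketch below estimates a new exam's waiting time from the most similar past cases while tolerating unknown attribute values. The attribute names, case base, and similarity measure are all illustrative assumptions, not the paper's actual schema.

```python
# Toy case base: past exams with observed waiting times.
case_base = [
    {"modality": "CT",  "priority": 2, "inpatient": False, "wait_days": 12},
    {"modality": "MRI", "priority": 1, "inpatient": False, "wait_days": 30},
    {"modality": "CT",  "priority": 1, "inpatient": True,  "wait_days": 25},
]

def similarity(query, case):
    """Fraction of *known* query attributes matching the case; None marks
    unknown information and is skipped, a crude stand-in for the
    framework's handling of incomplete data."""
    score, known = 0, 0
    for attr, value in query.items():
        if value is None:
            continue
        known += 1
        if case[attr] == value:
            score += 1
    return score / known if known else 0.0

def predict_wait(query, k=2):
    # Retrieve the k most similar cases and average their outcomes.
    ranked = sorted(case_base, key=lambda c: similarity(query, c), reverse=True)
    return sum(c["wait_days"] for c in ranked[:k]) / k

query = {"modality": "CT", "priority": 1, "inpatient": None}  # unknown value
print(predict_wait(query))  # 18.5 on this toy case base
```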

Relevance:

100.00%

Publisher:

Abstract:

The 10th European Conference on Information Systems Management is being held at the University of Evora, Portugal, on 8-9 September 2016. The Conference Chair is Paulo Silva and the Programme Chairs are Prof. Rui Quaresma and Prof. António Guerreiro. ECISM provides an opportunity for individuals researching and working in the broad field of information systems management, including IT evaluation, to come together to exchange ideas and discuss current research in the field. This has developed into a particularly important forum for the present era, where the modern challenges of managing information and evaluating the effectiveness of related technologies are constantly evolving in the world of Big Data and Cloud Computing. We hope that this year's conference will provide you with plenty of opportunities to share your expertise with colleagues from around the world. The keynote speakers for the conference are Carlos Zorrinho from the Portuguese Delegation and Isabel Ramos from the University of Minho, Portugal. ECISM 2016 received an initial submission of 84 abstracts. After the double-blind peer review process, 25 academic papers, 7 PhD research papers, 3 Masters research papers and 5 work-in-progress papers have been accepted for publication in these Conference Proceedings. These papers represent research from around the world, including Belgium, Brazil, China, Czech Republic, Kazakhstan, Malaysia, New Zealand, Norway, Oman, Poland, Portugal, South Africa, Sweden, The Netherlands, UK and Vietnam.

Relevance:

100.00%

Publisher:

Abstract:

Analytics is the technology of manipulating data to produce information capable of changing the world we live in every day. Over the last decade, analytics has been widely used to cluster people's behaviour and predict their preferences: items to buy, music to listen to, movies to watch, and even electoral choices. The most advanced companies have succeeded in controlling people's behaviour using analytics. Despite this evidence of the power of analytics, it is rarely applied to the big data collected within supply chain systems (i.e. distribution networks, storage systems, and production plants). This PhD thesis explores the fourth research paradigm (i.e. the generation of knowledge from data) applied to supply chain system design and operations management. An ontology defining the entities and metrics of supply chain systems is used to design data structures for data collection in supply chain systems. The consistency of these data is ensured by mathematical demonstrations inspired by factory physics theory. The availability, quantity, and quality of the data within these data structures define different decision patterns. Ten decision patterns are identified, and validated in the field, to address ten different classes of design and control problems in supply chain systems research.
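
As a loose illustration of ontology-driven data structures paired with a consistency rule, here is a minimal Python sketch; the entities, metrics, and the capacity rule are invented for illustration and are not the thesis's actual ontology.

```python
from dataclasses import dataclass

@dataclass
class StorageSystem:
    """Illustrative supply chain entity with one metric (capacity)."""
    location_id: str
    capacity_units: int

@dataclass
class Shipment:
    """Illustrative flow entity between two supply chain nodes."""
    origin: str
    destination: str
    quantity: int
    lead_time_days: float

def consistency_check(shipments, storage):
    """Example of a consistency rule in the spirit of factory physics:
    total inbound quantity must not exceed destination capacity."""
    inbound = sum(s.quantity for s in shipments
                  if s.destination == storage.location_id)
    return inbound <= storage.capacity_units

wh = StorageSystem("WH-1", capacity_units=1000)
flows = [Shipment("P-1", "WH-1", 400, 2.5), Shipment("P-2", "WH-1", 450, 1.0)]
print(consistency_check(flows, wh))   # True: 850 <= 1000
```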

Relevance:

100.00%

Publisher:

Abstract:

A High-Performance Computing (HPC) job dispatcher is a critical piece of software that assigns the finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time they require cannot exceed some threshold without affecting normal system functioning. In addition, a job dispatcher must deal with considerable uncertainty: submission times, the number of requested resources, and job durations. Heuristic-based techniques have been broadly used in HPC systems, achieving (sub-)optimal solutions in a short time. However, their scheduling and resource allocation components are separate, which produces decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are used for modern applications, such as big data analytics and predictive model building, which in general employ many short jobs. This information is unknown at dispatching time, however, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to meet the challenges of on-line dispatching, such as generating dispatching decisions quickly and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are more suitable for HPC systems running modern applications, generating on-line dispatching decisions in an appropriate time and making effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
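
As a rough sketch of what a CP model for one dispatching round can look like (not the authors' dispatcher), the snippet below uses Google OR-Tools CP-SAT to pack jobs onto a shared core pool via a cumulative constraint and minimise total waiting time; the job data and core counts are invented.

```python
from ortools.sat.python import cp_model

# Illustrative dispatching round: each job requests cores for a
# predicted duration; a cumulative constraint enforces the core pool.
jobs = [(3, 4), (10, 8), (2, 2), (5, 4)]    # (duration, cores), invented
TOTAL_CORES = 12
HORIZON = sum(d for d, _ in jobs)            # trivially sufficient horizon

model = cp_model.CpModel()
starts, intervals, demands = [], [], []
for i, (dur, cores) in enumerate(jobs):
    start = model.NewIntVar(0, HORIZON, f"start_{i}")
    end = model.NewIntVar(0, HORIZON, f"end_{i}")
    intervals.append(model.NewIntervalVar(start, dur, end, f"job_{i}"))
    starts.append(start)
    demands.append(cores)

model.AddCumulative(intervals, demands, TOTAL_CORES)
# Total waiting time (all jobs arrive at t=0 in this toy round).
model.Minimize(sum(starts))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print([solver.Value(s) for s in starts])  # start time of each job
```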

Relevance:

100.00%

Publisher:

Abstract:

Using Big Data and Natural Language Processing (NLP) tools, this dissertation investigates the narrative strategies that atypical actors can leverage to deal with the adverse reactions they often elicit. Extensive research shows that atypical actors, those who fail to abide by established contextual standards and norms, are subject to skepticism and face a higher risk of rejection. Indeed, atypical actors combine features and behaviors in unconventional ways, generating confusion in the audience and instilling doubts about the legitimacy of their propositions. Yet the same atypicality is often cited as a precursor to socio-cultural innovation and a strategic act that expands the capacity to deliver valued goods and services. Contextualizing the conditions under which atypicality is celebrated or punished has been a significant theoretical challenge for scholars interested in reconciling this tension. Nevertheless, prior work has focused on audience-side factors or on actor-side characteristics that are scarcely under an actor's control (e.g., status and reputation). This dissertation demonstrates that atypical actors can use strategically crafted narratives to mitigate the audience's negative response. In particular, when atypical actors evoke conventional features in their story, they are more likely to overcome the illegitimacy discount usually applied to them. Moreover, narratives become successful navigational devices for atypicality when atypical actors use more abstract language, which simplifies classification and gives the audience more flexibility to interpret and understand them.

Relevance:

100.00%

Publisher:

Abstract:

Processing ever-growing quantities of data in reasonable time is one of the main technological challenges of our time. The difficulty lies not only in having efficient processing engines capable of coordinated computation over an enormous amount of data, but also in giving the developers of such applications tools that are intuitive to use and easy to deploy, so as to reduce the time needed to turn an application idea into reality and to lower the entry barriers of the available software tools. This thesis examines the RAM3S project, whose aim is to simplify the development of data-processing applications based on stream-processing platforms such as Spark, Storm, Flink, and Samza, and fulfils that original purpose by providing an abstract, extensible framework for defining stream-processing applications able to run interchangeably on the platforms available on the market.
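
The core framework idea, one platform-agnostic application interface plus thin per-engine adapters, can be suggested with a small Python sketch; the interface and adapter names here are invented for illustration and do not reflect RAM3S's actual API.

```python
from abc import ABC, abstractmethod

class StreamReceiver(ABC):
    """Platform-agnostic application logic: implement once, run on any
    engine through an adapter (illustrative interface)."""
    @abstractmethod
    def on_item(self, item): ...

class WordCounter(StreamReceiver):
    def __init__(self):
        self.counts = {}
    def on_item(self, item):
        for word in item.split():
            self.counts[word] = self.counts.get(word, 0) + 1

class LocalAdapter:
    """Trivial engine stand-in; a real adapter would wire on_item into
    the callback of Spark, Storm, Flink, or Samza."""
    def run(self, receiver, source):
        for item in source:
            receiver.on_item(item)

app = WordCounter()
LocalAdapter().run(app, ["big data", "stream data"])
print(app.counts)   # {'big': 1, 'data': 2, 'stream': 1}
```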

Relevance:

100.00%

Publisher:

Abstract:

Biology is now a "Big Data Science", thanks to technological advancements allowing the characterization of the whole macromolecular content of a cell or a collection of cells. This opens interesting perspectives, but only a small portion of these data can be experimentally characterized. From this derives the demand for accurate and efficient computational tools for the automatic annotation of biological molecules. This is even more true for membrane proteins, on which my research project is focused, leading to the development of two machine-learning-based methods: BetAware-Deep and SVMyr. BetAware-Deep is a tool for the detection and topology prediction of transmembrane beta-barrel proteins found in Gram-negative bacteria. These proteins are involved in many biological processes and are primary candidates as drug targets. BetAware-Deep exploits the combination of a deep learning framework (a bidirectional long short-term memory network) and a probabilistic graphical model (a grammatical-restrained hidden conditional random field). Moreover, it introduces a modified formulation of the hydrophobic moment, designed to include evolutionary information. BetAware-Deep outperformed all available methods in topology prediction and reported high scores in the detection task. Glycine myristoylation in eukaryotes is the binding of a myristic acid to an N-terminal glycine. SVMyr is a fast method based on support vector machines designed to predict this modification in datasets of proteomic scale. It takes octapeptides as input and exploits computational scores derived from experimental examples together with mean physicochemical features. SVMyr outperformed all available methods for co-translational myristoylation prediction. In addition, as a unique feature, it allows the prediction of post-translational myristoylation. Both tools described here are designed with the best practices for developing machine-learning-based tools outlined by the bioinformatics community in mind. Moreover, they are made available via user-friendly web servers. All this makes them valuable tools for filling the gap between sequence and annotated data.
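
To give a flavour of the SVMyr setup, here is a toy scikit-learn sketch that encodes octapeptides with per-residue Kyte-Doolittle hydropathy values and trains an SVM; the feature choice, the training peptides, and their labels are all invented, and the published model uses richer scores derived from experimental examples.

```python
import numpy as np
from sklearn.svm import SVC

# Kyte-Doolittle hydropathy scale, one value per amino acid.
HYDRO = dict(zip("ARNDCQEGHILKMFPSTWYV",
                 [1.8, -4.5, -3.5, -3.5, 2.5, -3.5, -3.5, -0.4, -3.2, 4.5,
                  3.8, -3.9, 1.9, 2.8, -1.6, -0.8, -0.7, -0.9, -1.3, 4.2]))

def encode(octapeptide):
    # One hydropathy feature per residue of the N-terminal octapeptide.
    return [HYDRO[aa] for aa in octapeptide]

# Toy training set: positives start with glycine, as myristoylation
# targets an N-terminal glycine (labels are invented).
X = np.array([encode(p) for p in ["GNAASKKS", "GQQLSTSA", "MKKLLPTA", "PEERRTTA"]])
y = np.array([1, 1, 0, 0])

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([encode("GNCFSKKQ")]))   # predicted class for a new peptide
```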