908 results for predictive analytics


Relevance: 20.00%

Abstract:

The work presented in this thesis concerns the development of an alerting system that proactively monitors one or more corporate data sources and flags any irregular conditions it detects; the system is to be embedded in existing systems dedicated to data analysis and planning, the so-called Decision Support Systems. A decision support system can provide clear information to the entire management of the company, measuring its performance and providing projections of future trends. These systems fall within the broader field of Business Intelligence, which denotes the set of methodologies capable of transforming business data into information useful to the decision-making process. The entire thesis work was carried out during an internship at Iconsulting S.p.A., a Bologna-based IT system integrator specialised mainly in the development of Business Intelligence, Enterprise Data Warehouse and Corporate Performance Management projects. The software described in this thesis was built to fit into a broader context, in order to meet the requirements of a multinational customer that is a leader in the mobile and fixed telephony sector.
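
The abstract gives no implementation details; as a minimal sketch of the kind of rule such an alerting component might evaluate against a monitored data source (the metric name, history values and the three-sigma threshold are illustrative assumptions, not taken from the thesis):

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Alert:
    metric: str
    value: float
    reason: str

def check_metric(name, history, latest, n_sigma=3.0):
    """Flag the latest reading if it deviates by more than n_sigma
    standard deviations from its recent history (illustrative rule)."""
    if len(history) < 2:
        return None
    mu, sigma = mean(history), stdev(history)
    if sigma > 0 and abs(latest - mu) > n_sigma * sigma:
        return Alert(name, latest, f"deviates from mean {mu:.2f} by >{n_sigma} sigma")
    return None

# Example: daily order volume drawn from a business data source
alert = check_metric("daily_orders", [120, 118, 130, 125, 122], latest=40)
if alert:
    print(f"ALERT on {alert.metric}: {alert.value} ({alert.reason})")
```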

Relevance: 20.00%

Abstract:

Serious games (or educational video games) are considered an important tool for the education of the future. For this reason, considerable effort is being invested in analysing whether they are correct and suitable for achieving the intended educational goals. The field of game learning analytics aims to provide tools that verify these characteristics, improving the quality and effectiveness of serious games. This normally requires three stages: 1) tracking the data from the player's interaction with the game; 2) analysing the collected data; and 3) visualising the results. In this context, several important issues must be considered: the level of knowledge about the game, the audience for the final visualisations, and the volume and complexity of the data. These ideas are put into practice with two examples of serious games, focusing on the last two stages of the process. Several analyses and visualisations are carried out on them, taking the different aspects mentioned above into account. Among the conclusions that can be drawn, it stands out that, although some aspects can still be improved, game learning analytics is an essential tool for many users with a wide variety of interests in serious games.
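
As a rough illustration of stages 2 and 3 described above (analysing collected traces and presenting results), the sketch below computes a per-level completion rate from hypothetical interaction events; the event names and the text summary standing in for a real visualisation are assumptions, not part of the cited work:

```python
from collections import Counter

# Illustrative interaction traces: (player_id, event, payload)
traces = [
    ("p1", "level_completed", {"level": 1, "time_s": 42}),
    ("p1", "level_failed",    {"level": 2, "time_s": 120}),
    ("p2", "level_completed", {"level": 1, "time_s": 55}),
    ("p2", "level_completed", {"level": 2, "time_s": 98}),
]

# Stage 2: analyse the collected data (completion rate per level)
attempts = Counter(t[2]["level"] for t in traces)
completions = Counter(t[2]["level"] for t in traces if t[1] == "level_completed")
completion_rate = {lvl: completions[lvl] / attempts[lvl] for lvl in attempts}

# Stage 3: "visualise" the results (a text summary in place of real charts)
for lvl, rate in sorted(completion_rate.items()):
    print(f"Level {lvl}: {rate:.0%} of attempts completed")
```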

Relevance: 20.00%

Abstract:

Corrigendum: European Journal of Human Genetics (2016) 24, 1515; doi:10.1038/ejhg.2016.81. 22 Years of predictive testing for Huntington's disease: the experience of the UK Huntington's Prediction Consortium. Sheharyar S Baig, Mark Strong, Elisabeth Rosser, Nicola V Taverner, Ruth Glew, Zosia Miedzybrodzka, Angus Clarke, David Craufurd, UK Huntington's Disease Prediction Consortium and Oliver W Quarrell. Correction to: European Journal of Human Genetics advance online publication, 11 May 2016; doi:10.1038/ejhg.2016.36. Post online publication, the authors realised that they had made an error. The sentence on page 2, 'In the first 5-year period........but this changed significantly in the last 5-year period with 51% positive and 49% negative (χ²=20.6, P<0.0001)', should read: 'In the first 5-year period........but this changed significantly in the last 5-year period with 49% positive and 51% negative (χ²=20.6, P<0.0001)'.

Relevance: 20.00%

Abstract:

At the moment, the phrases “big data” and “analytics” are often being used as if they were magic incantations that will solve all an organization’s problems at a stroke. The reality is that data on its own, even with the application of analytics, will not solve any problems. The resources that analytics and big data can consume represent a significant strategic risk if applied ineffectively. Any analysis of data needs to be guided, and to lead to action. So while analytics may lead to knowledge and intelligence (in the military sense of that term), it also needs the input of knowledge and intelligence (in the human sense of that term). And somebody then has to do something new or different as a result of the new insights, or the exercise will have served no purpose. Using an analytics example concerning accounts payable in the public sector in Canada, this paper reviews thinking from the domains of analytics, risk management and knowledge management to show some of the pitfalls, and to present a holistic picture of how knowledge management might help tackle the challenges of big data and analytics.

Relevance: 20.00%

Abstract:

As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
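
A toy sketch of the coordinate-frame logic summarised above, not the authors' sheet-based network: it shows how shifting a retinotopic target by the planned saccade vector (the corollary discharge) and pairing it with the predicted eye position leaves the world-centred estimate used for pointing unchanged. All numbers are illustrative.

```python
import numpy as np

def world_from_retinal(retinal_xy, eye_position_xy):
    """Head/world-centred target estimate = retinotopic location
    plus current eye position (toy 2-D model)."""
    return np.asarray(retinal_xy) + np.asarray(eye_position_xy)

def presaccadic_remap(retinal_xy, saccade_vector_xy):
    """Remapping: shift the visual representation by the upcoming
    saccade vector before the eye actually moves."""
    return np.asarray(retinal_xy) - np.asarray(saccade_vector_xy)

target_retinal = np.array([4.0, 1.0])   # degrees, relative to the fovea
eye_position   = np.array([-2.0, 0.0])  # degrees, relative to straight ahead
saccade        = np.array([6.0, 0.0])   # planned eye movement

before = world_from_retinal(target_retinal, eye_position)
# Just before the saccade: remap the target and combine it with the
# predicted (updated) eye position; the world estimate is unchanged.
after = world_from_retinal(presaccadic_remap(target_retinal, saccade),
                           eye_position + saccade)
print(before, after)  # identical -> a stable target for pointing
```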

Relevance: 20.00%

Abstract:

Cumulon is a system aimed at simplifying the development and deployment of statistical analysis of big data in public clouds. Cumulon allows users to program in their familiar language of matrices and linear algebra, without worrying about how to map data and computation to specific hardware and cloud software platforms. Given user-specified requirements in terms of time, monetary cost, and risk tolerance, Cumulon automatically makes intelligent decisions on implementation alternatives, execution parameters, as well as hardware provisioning and configuration settings -- such as what type of machines and how many of them to acquire. Cumulon also supports clouds with auction-based markets: it effectively utilizes computing resources whose availability varies according to market conditions, and suggests best bidding strategies for them. Cumulon explores two alternative approaches toward supporting such markets, with different trade-offs between system and optimization complexity. An experimental study is conducted to show the efficiency of Cumulon's execution engine, as well as the optimizer's effectiveness in finding the optimal plan in the vast plan space.
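
Cumulon's actual optimizer is not described here; the sketch below only illustrates the flavour of the provisioning decision it automates, namely choosing a machine type and count that meet a deadline at minimum cost, under a naive perfect-scaling assumption. The machine names, speeds and prices are made up.

```python
# Hypothetical machine catalogue: (name, relative speed, $/hour)
MACHINES = [("small", 1.0, 0.10), ("large", 4.0, 0.45), ("xlarge", 8.0, 0.95)]

def cheapest_plan(total_work_hours, deadline_hours, max_machines=64):
    """Pick the (machine type, count) whose estimated runtime meets the
    deadline at minimum monetary cost (perfect-scaling assumption)."""
    best = None
    for name, speed, price in MACHINES:
        for n in range(1, max_machines + 1):
            runtime = total_work_hours / (speed * n)
            if runtime > deadline_hours:
                continue
            cost = runtime * n * price
            if best is None or cost < best[0]:
                best = (cost, name, n, runtime)
    return best

cost, machine, count, runtime = cheapest_plan(total_work_hours=200, deadline_hours=2)
print(f"{count} x {machine}: {runtime:.2f} h, ${cost:.2f}")
```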

Relevance: 20.00%

Abstract:

BACKGROUND: It is unclear whether diagnostic protocols based on cardiac markers to identify low-risk chest pain patients suitable for early release from the emergency department can be applied to patients older than 65 years or with traditional cardiac risk factors. METHODS AND RESULTS: In a single-center retrospective study of 231 consecutive patients with high-risk factor burden in which a first cardiac troponin (cTn) level was measured in the emergency department and a second cTn sample was drawn 4 to 14 hours later, we compared the performance of a modified 2-Hour Accelerated Diagnostic Protocol to Assess Patients with Chest Pain Using Contemporary Troponins as the Only Biomarker (ADAPT) rule to a new risk classification scheme that identifies patients as low risk if they have no known coronary artery disease, a nonischemic electrocardiogram, and 2 cTn levels below the assay's limit of detection. Demographic and outcome data were abstracted through chart review. The median age of our population was 64 years, and 75% had a Thrombolysis In Myocardial Infarction risk score ≥2. Using our risk classification rule, 53 (23%) patients were low risk with a negative predictive value for 30-day cardiac events of 98%. Applying a modified ADAPT rule to our cohort, 18 (8%) patients were identified as low risk with a negative predictive value of 100%. In a sensitivity analysis, the negative predictive value of our risk algorithm did not change when we relied only on undetectable baseline cTn and eliminated the second cTn assessment. CONCLUSIONS: If confirmed in prospective studies, this less-restrictive risk classification strategy could be used to safely identify chest pain patients with more traditional cardiac risk factors for early emergency department release.
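
A minimal sketch of the risk classification rule as stated in the abstract (no known coronary artery disease, a nonischemic ECG, and two cTn values below the assay's limit of detection); the function name and the 5 ng/L limit of detection are illustrative assumptions, and this is not a clinical tool.

```python
def is_low_risk(known_cad: bool, ischemic_ecg: bool,
                troponins_ng_l, limit_of_detection_ng_l=5.0):
    """Low risk per the rule described above: no known coronary artery
    disease, a nonischemic ECG, and two cTn values below the assay's
    limit of detection (the 5 ng/L LOD here is illustrative)."""
    two_undetectable = (len(troponins_ng_l) >= 2 and
                        all(v < limit_of_detection_ng_l for v in troponins_ng_l))
    return (not known_cad) and (not ischemic_ecg) and two_undetectable

print(is_low_risk(known_cad=False, ischemic_ecg=False, troponins_ng_l=[3.0, 4.0]))  # True
print(is_low_risk(known_cad=True,  ischemic_ecg=False, troponins_ng_l=[3.0, 4.0]))  # False
```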

Relevance: 20.00%

Abstract:

We analyzed projections of current and future ambient temperatures along the eastern United States in relation to the thermal tolerance of harbor seals in air. Using the earth systems model (HadGEM2-ES) and representative concentration pathways (RCPs) 4.5 and 8.5, which are indicative of two different atmospheric CO2 concentrations, we were able to examine possible shifts in distribution based on three metrics: current preferences, the thermal limit of juveniles, and the thermal limits of adults. Our analysis focused on average ambient temperatures because harbor seals are least effective at regulating their body temperature in air, making them most susceptible to rising air temperatures in the coming years. Our study focused on the months of May, June, and August from 2041-2060 (2050) and 2061-2080 (2070), as these are the historic months in which harbor seals are known to come ashore annually to pup, breed, and molt. May, June, and August are also some of the warmest months of the year. We found that breeding colonies along the eastern United States will be limited by the thermal tolerance of juvenile harbor seals in air, while their foraging range will extend as far south as the thermal tolerance of adult harbor seals in air. Our analysis revealed that in 2070, harbor seal pups should be absent from the United States coastline near the end of the summer due to exceptionally high air temperatures.
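
A schematic sketch of the threshold comparison implied by the three metrics above: a site's projected average air temperature is checked against juvenile and adult thermal limits to judge breeding versus foraging suitability. The limit values and site temperatures below are placeholders, not the study's figures.

```python
# Hypothetical thermal limits in air (degrees C) -- illustrative only,
# not the values used in the study.
JUVENILE_LIMIT_C = 25.0
ADULT_LIMIT_C = 30.0

def classify_site(projected_temp_c):
    """Classify a coastal site by which thermal limit the projected
    average air temperature exceeds (breeding vs foraging suitability)."""
    if projected_temp_c <= JUVENILE_LIMIT_C:
        return "suitable for breeding colonies (within juvenile limit)"
    if projected_temp_c <= ADULT_LIMIT_C:
        return "foraging only (within adult limit, above juvenile limit)"
    return "unsuitable (above adult limit)"

for site, temp in [("Site A", 22.0), ("Site B", 27.5), ("Site C", 31.0)]:
    print(site, "->", classify_site(temp))
```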

Relevance: 20.00%

Abstract:

If patients at risk of admission or readmission to hospital or other forms of care could be identified and offered suitable early interventions, then their lives and long-term health may be improved by reducing the chances of future admission or readmission to care, and hopefully their cost of care reduced. Considerable work has been carried out in this subject area, especially in the USA and the UK. This has led, for instance, to the development of tools such as PARR, PARR-30, and the Combined Predictive Model for prediction of emergency readmission or admission to acute care. Here we perform a structured review of the academic and grey literature on predictive risk tools for social care utilisation, as well as admission and readmission to general hospitals and psychiatric hospitals. This is the first phase of a project in partnership with Docobo Ltd and funded by Innovate UK, in which we seek to develop novel predictive risk tools and dashboards to assist commissioners in Clinical Commissioning Groups with the triangulation of the intelligence available from routinely collected data to optimise integrated care and better understand the complex needs of individuals.
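
Tools such as PARR are, broadly, regression-based risk scores; the sketch below shows a generic logistic score of that kind. The features and weights are entirely illustrative and are not those of PARR, PARR-30 or the Combined Predictive Model.

```python
import math

# Illustrative coefficients for a logistic readmission-risk score;
# these are NOT the PARR / Combined Predictive Model weights.
INTERCEPT = -3.0
WEIGHTS = {"age_over_75": 0.8, "prior_emergency_admissions": 0.6,
           "chronic_conditions": 0.4, "lives_alone": 0.3}

def readmission_risk(patient: dict) -> float:
    """Probability-style score: weighted sum of patient features
    passed through the logistic function."""
    z = INTERCEPT + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

patient = {"age_over_75": 1, "prior_emergency_admissions": 2, "chronic_conditions": 3}
print(f"Estimated readmission risk: {readmission_risk(patient):.1%}")
```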

Relevance: 20.00%

Abstract:

Measuring and tracking athletic performance is crucial to an athlete’s development, and the countermovement vertical jump is often used to measure athletic performance, particularly lower limb power. The linear power developed in the lower limb is estimated through jump height. However, the relationship between angular power, produced by the joints of the lower limb, and jump height is not well understood. This study examined the contributions of the kinetic value of angular power, and its kinematic component, angular velocity, of the lower limb joints to jump height in the countermovement vertical jump. Kinematic and kinetic data were gathered from twenty varsity-level basketball and volleyball athletes as they performed six maximal effort jumps in four arm swing conditions: no-arm involvement, single-non-dominant arm swing, single-dominant arm swing, and two-arm swing. The displacement of the whole body centre of mass, peak joint powers, peak angular velocity, and locations of the peaks as a percentage of the jump’s takeoff period were computed. Linear regressions assessed the relationship of the variables to jump height. Results demonstrated that peak knee power (p = 0.001, β = 0.363, r = 0.363), its location within the takeoff period (p = 0.023, β = -0.256, r = 0.256), and peak knee angular velocity (p = 0.005, β = 0.310, r = 0.310) were moderately linked to increased jump height. Additionally, the locations, within the takeoff period, of the peak angular velocities of the hip (p = 0.003, β = -0.318, r = 0.419) and ankle (p = 0.011, β = 0.270, r = 0.419) were linked to jump height. These results highlight the importance of training the velocity and timing of joint motion beyond traditional power training protocols, as well as the importance of further investigation into appropriate testing protocols that are sensitive to the contributions of individual joints in maximal effort jumping.
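
For context, jump height is commonly estimated from the centre-of-mass takeoff velocity, and the study's linear regressions relate such heights to joint-level variables. The sketch below shows the standard flight-phase formula h = v²/(2g) and a simple regression fit; the numeric values are made up for illustration and are not the study's data.

```python
import numpy as np

G = 9.81  # m/s^2

def jump_height_from_takeoff_velocity(v_takeoff):
    """Flight-phase estimate of jump height: h = v^2 / (2 g)."""
    return v_takeoff ** 2 / (2 * G)

print(f"{jump_height_from_takeoff_velocity(2.6):.3f} m")  # about 0.344 m

# Simple linear regression of jump height on peak knee power
# (values are illustrative, not the study's data).
peak_knee_power_w_per_kg = np.array([18.0, 21.5, 24.0, 26.5, 30.0])
jump_height_m            = np.array([0.30, 0.34, 0.36, 0.40, 0.43])
slope, intercept = np.polyfit(peak_knee_power_w_per_kg, jump_height_m, 1)
r = np.corrcoef(peak_knee_power_w_per_kg, jump_height_m)[0, 1]
print(f"height = {slope:.4f} * power + {intercept:.3f}  (r = {r:.2f})")
```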

Relevance: 20.00%

Abstract:

An investigation into karst hazard in southern Ontario has been undertaken with the intention of leading to the development of predictive karst models for this region. The reason these are not currently feasible is a lack of sufficient karst data, though this is not entirely due to the lack of karst features. Geophysical data was collected at Lake on the Mountain, Ontario as part of this karst investigation. This data was collected in order to validate the long-standing hypothesis that Lake on the Mountain was formed from a sinkhole collapse. Sub-bottom acoustic profiling data was collected in order to image the lake bottom sediments and bedrock. Vertical bedrock features interpreted as solutionally enlarged fractures were taken as evidence for karst processes on the lake bottom. Additionally, the bedrock topography shows a narrower and more elongated basin than was previously identified, and this also lies parallel to a mapped fault system in the area. This suggests that Lake on the Mountain was formed over a fault zone, which also supports the sinkhole hypothesis as it would provide groundwater pathways for karst dissolution to occur. Previous sediment cores suggest that Lake on the Mountain would have formed at some point during the Wisconsinan glaciation, with glacial meltwater and glacial loading as potential contributing factors to sinkhole development. A probabilistic karst model for the state of Kentucky, USA, has been generated using the Weights of Evidence method. This model is presented as an example of the predictive capabilities of these kinds of data-driven modelling techniques and to show how such models could be applied to karst in Ontario. The model was able to classify 70% of the validation dataset correctly while minimizing false positive identifications. This is moderately successful and could stand to be improved. Finally, suggestions for improving the current karst model of southern Ontario are offered, with the goal of increasing investigation into karst in Ontario and streamlining the reporting system for sinkholes, caves, and other karst features so as to improve the current Ontario karst database.
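
The Weights of Evidence calculation itself is standard: for one binary evidence layer it combines the conditional probabilities of the evidence given presence or absence of the predicted feature into positive and negative weights and their contrast. The sketch below uses made-up cell counts purely to show the arithmetic; it is not the Kentucky model.

```python
import math

def weights_of_evidence(n_cells, n_feature, n_evidence, n_overlap):
    """Weights of Evidence for one binary evidence layer.
    n_cells: total unit cells in the study area
    n_feature: cells containing the predicted feature (e.g. sinkholes)
    n_evidence: cells where the evidence layer is present
    n_overlap: cells where both evidence and feature are present
    """
    p_b_given_d     = n_overlap / n_feature
    p_b_given_not_d = (n_evidence - n_overlap) / (n_cells - n_feature)
    w_plus  = math.log(p_b_given_d / p_b_given_not_d)
    w_minus = math.log((1 - p_b_given_d) / (1 - p_b_given_not_d))
    return w_plus, w_minus, w_plus - w_minus  # contrast C

# Illustrative counts: a carbonate-bedrock layer vs. known sinkhole cells
w_plus, w_minus, contrast = weights_of_evidence(
    n_cells=100_000, n_feature=200, n_evidence=30_000, n_overlap=150)
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, C = {contrast:.2f}")
```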

Relevance: 20.00%

Abstract:

Big Data Analytics is an emerging field since massive storage and computing capabilities have been made available by advanced e-infrastructures. Earth and Environmental sciences are likely to benefit from Big Data Analytics techniques supporting the processing of the large number of Earth Observation datasets currently acquired and generated through observations and simulations. However, Earth Science data and applications present specificities in terms of relevance of the geospatial information, wide heterogeneity of data models and formats, and complexity of processing. Therefore, Big Earth Data Analytics requires specifically tailored techniques and tools. The EarthServer Big Earth Data Analytics engine offers a solution for coverage-type datasets, built around a high performance array database technology, and the adoption and enhancement of standards for service interaction (OGC WCS and WCPS). The EarthServer solution, led by the collection of requirements from scientific communities and international initiatives, provides a holistic approach that ranges from query languages and scalability up to mobile access and visualization. The result is demonstrated and validated through the development of lighthouse applications in the Marine, Geology, Atmospheric, Planetary and Cryospheric science domains.
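
As a rough illustration of standards-based interaction with a WCPS-capable server of the kind described above, the sketch below submits a query over HTTP. The endpoint URL and coverage name are placeholders, and the parameter names follow the common WCS 2.0 ProcessCoverages pattern rather than any specific EarthServer deployment.

```python
import requests

# Placeholder endpoint and coverage name; parameter names follow the
# common WCS 2.0 ProcessCoverages pattern used by WCPS-capable servers.
ENDPOINT = "https://example.org/rasdaman/ows"
WCPS_QUERY = """
for c in (AverageTemperature)
return encode(avg(c[ansi("2015-01":"2015-12")]), "json")
"""

response = requests.get(ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": WCPS_QUERY.strip(),
})
print(response.status_code, response.text[:200])
```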
