775 results for Medical data mining
Abstract:
This analysis seeks to understand what characterizes the current wave of technological progress that is changing the labor market. The main negative aspect of this progress is known as "technological unemployment". Although experts disagree about the causes of persistently high unemployment, Brynjolfsson and McAfee point to automation, which has displaced repetitive jobs within firms. It is also true, however, that progress has always brought productivity gains and, above all, new kinds of occupations that have offset job losses over the medium to long term. Keynes observed that unemployment due to the discovery of labour-saving devices outpaces the rate at which we can find new uses for that labour. This creates anxiety about the future, more or less justified. Experts themselves are split between those who trust in the potentially positive outcomes of progress and those who fear it may lead to catastrophic scenarios. Do machines steal our work or free us from it? The goal of this research is to analyze the actual prospects for the coming decades. Chapter 2, the core of the thesis, focuses primarily on the academic work of Frey and Osborne of the Oxford Martin School, entitled "The future of employment: how susceptible are jobs to computerisation?" (2013). They were among the first to study and quantify what the new technologies will mean for employment. Their aim was to identify the occupations at risk in the United States labor market over the next twenty years, and the relationship between their probability of computerisation and their average wages and educational attainment, all assessed with the help of a new methodology that will be examined in detail. Autor later reached conclusions similar in some respects to theirs; he is also frequently cited for other works by Frey and Osborne themselves, who use his categorizations to structure their calculation of job automatability, taking recent advances in engineering science such as ML (machine learning, e.g. data mining, machine vision, computational statistics or, more broadly, AI) and MR (mobile robotics) as assessment tools. Alongside this research, the results of a recent survey by the Pew Research Center are briefly presented, in which leading figures in computer science and economics give their views on the future landscape of the world of work in light of the imminent wave of technological innovation. The thesis concludes with a personal assessment. In this way, the reader becomes aware of the concrete problems that technological progress may bring, but also of its positive aspects.
Abstract:
The spectacular recent advances in computer science applied to geographic information systems (GIS) have favored the emergence of several technological solutions. These developments have given rise to enormous opportunities for digital management of the territory. Among these solutions, the best known, Google Maps, offers free, dynamic, and comprehensive online mapping. To meet the enormous need for geotagged urban-indicator information, we worked on the project "Integration of an urban observatory on Google Maps." The problem of geolocation in the urban observatory is particularly relevant because there are currently no reliable data (descriptive or geographical) on the urban sector; one must extrapolate from old and obsolete data. This curbs the effectiveness of urban management, makes investment programming difficult, and prevents the acquisition of the knowledge needed to make cities engines of growth. The use of a geolocation tool coupled with the data would allow better monitoring of indicators. Our project's objective is to develop an interactive map server (web mapping) whose map layer is built from the resources of the Google Maps servers and matched with information from the field to produce maps of a city's urban equipment and infrastructure on the client's request. To achieve this goal, we will first carry out a GPS survey of strategic sites in our core sector (health facilities); then, using information from the field, we will build a PostgreSQL database that links the field data to the Google Maps base map via appropriate KML and PHP scripts. We limit our work to the city of Douala, Cameroon, and to the health-facilities sector, with the possibility of extension to other sectors and other cities. Keywords: Geographic Information System (GIS), Thematic Mapping, Web Mapping, data mining, Google API.
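The abstract describes linking field data stored in PostgreSQL to the Google Maps layer via KML and PHP scripts. As a minimal illustration of that idea (in Python rather than the PHP mentioned above), the sketch below exports placemarks from a hypothetical health_facilities table to a KML file that Google Maps can overlay; the database credentials, table, and column names are assumptions, not the project's actual schema.

```python
# Minimal sketch: export health-facility points from a PostgreSQL table as a
# KML layer usable by Google Maps. Database name, table, and column names
# below are hypothetical placeholders.
import psycopg2
from xml.sax.saxutils import escape

conn = psycopg2.connect(dbname="urban_observatory", user="observatory",
                        password="secret", host="localhost")
placemarks = []
with conn, conn.cursor() as cur:
    cur.execute("SELECT name, longitude, latitude FROM health_facilities")
    for name, lon, lat in cur.fetchall():
        placemarks.append(
            "<Placemark><name>{}</name>"
            "<Point><coordinates>{},{},0</coordinates></Point>"
            "</Placemark>".format(escape(name), lon, lat)
        )

kml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       + "".join(placemarks) + "</Document></kml>")

with open("health_facilities.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```

The generated file could then be served to the web-mapping client and rendered as a thematic layer on top of the Google Maps base map.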
Abstract:
Accurate seasonal to interannual streamflow forecasts based on climate information are critical for optimal management and operation of water resources systems. Considering most water supply systems are multipurpose, operating these systems to meet increasing demand under the growing stresses of climate variability and climate change, population and economic growth, and environmental concerns could be very challenging. The purpose of this study was to investigate improvements in water resources systems management through the use of seasonal climate forecasts. Hydrological persistence (streamflow and precipitation) and large-scale recurrent oceanic-atmospheric patterns such as the El Niño/Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), Atlantic Multidecadal Oscillation (AMO), the Pacific North American (PNA) pattern, and customized sea surface temperature (SST) indices were investigated for their potential to improve streamflow forecast accuracy and increase forecast lead time in a river basin in central Texas. First, an ordinal polytomous logistic regression approach is proposed as a means of incorporating multiple predictor variables into a probabilistic forecast model. Forecast performance is assessed through a cross-validation procedure, using distributions-oriented metrics, and implications for decision making are discussed. Results indicate that, of the predictors evaluated, only hydrologic persistence and Pacific Ocean sea surface temperature patterns associated with ENSO and PDO provide forecasts that are statistically better than climatology. Second, a class of data mining techniques, known as tree-structured models, is investigated to address the nonlinear dynamics of climate teleconnections and screen promising probabilistic streamflow forecast models for river-reservoir systems. Results show that the tree-structured models can effectively capture the nonlinear features hidden in the data. Skill scores of probabilistic forecasts generated by both classification trees and logistic regression trees indicate that seasonal inflows throughout the system can be predicted with sufficient accuracy to improve water management, especially in the winter and spring seasons in central Texas. Lastly, a simplified two-stage stochastic economic-optimization model was proposed to investigate improvement in water use efficiency and the potential value of using seasonal forecasts, under the assumption of optimal decision making under uncertainty. Model results demonstrate that incorporating the probabilistic inflow forecasts into the optimization model can provide a significant improvement in seasonal water contract benefits over climatology, with lower average deficits (increased reliability) for a given average contract amount, or improved mean contract benefits for a given level of reliability compared to climatology. The results also illustrate the trade-off between the expected contract amount and reliability, i.e., larger contracts can be signed at greater risk.
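To make the forecasting approach concrete, the sketch below shows how a classification tree can issue probabilistic forecasts of seasonal inflow terciles from lagged predictors and be scored against a climatological baseline under cross-validation. It is an illustrative assumption, not the study's data or code: the predictor names and the synthetic records are placeholders, and a simple Brier skill score stands in for the distributions-oriented metrics used in the study.

```python
# Minimal sketch: probabilistic tercile forecasts from a classification tree,
# evaluated against climatology. Data are synthetic stand-ins for persistence,
# ENSO, and PDO predictors.
import numpy as np
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 120  # e.g. 30 years of seasonal records (synthetic here)
X = np.column_stack([
    rng.normal(size=n),   # prior-season streamflow (persistence)
    rng.normal(size=n),   # ENSO index (e.g. Niño 3.4 SST anomaly)
    rng.normal(size=n),   # PDO index
])
# Inflow tercile category: 0 = below, 1 = near, 2 = above normal
y = np.digitize(0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=0.8, size=n),
                bins=[-0.5, 0.5])

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10, random_state=0)
proba = cross_val_predict(tree, X, y, cv=KFold(5, shuffle=True, random_state=0),
                          method="predict_proba")

# Brier-type skill score relative to climatology (equal 1/3 probabilities)
onehot = np.eye(3)[y]
bs_model = np.mean(np.sum((proba - onehot) ** 2, axis=1))
bs_clim = np.mean(np.sum((np.full_like(proba, 1 / 3) - onehot) ** 2, axis=1))
print("Brier skill score vs. climatology:", 1 - bs_model / bs_clim)
```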
Abstract:
The primary goal of this project is to demonstrate the practical use of data mining algorithms to cluster a solved steady-state computational fluid dynamics (CFD) flow domain into a simplified lumped-parameter network. A commercial-quality code, "cfdMine", was created using a volume-weighted k-means clustering that can cluster a 20-million-cell CFD domain on a single CPU in several hours or less. Additionally, agglomeration and Mahalanobis k-means were added as optional post-processing steps to further enhance the separation of the clusters. The resulting nodal network is considered a reduced-order model and can be solved transiently at minimal computational cost. The reduced-order network is then instantiated in the commercial thermal solver MuSES to perform transient conjugate heat transfer, with convection predicted by the lumped network (based on steady-state CFD). When the lumped nodal network is inserted into a MuSES model, the potential for developing a "localized heat transfer coefficient" is shown to be an improvement over existing techniques. The clustering was also found to provide a new flow visualization technique. Finally, fixing clusters near equipment demonstrates a new capability to track temperatures near specific objects (such as equipment in vehicles).
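The sketch below illustrates the general idea of volume-weighted k-means described above, not the cfdMine code itself: CFD cells are grouped into a small number of lumped nodes, with each cell weighted by its volume so that large cells influence the centroids proportionally. The synthetic arrays, the choice of features, and the velocity scaling factor are assumptions made for illustration.

```python
# Minimal sketch of volume-weighted k-means for reducing a CFD domain to a
# lumped nodal network. Synthetic arrays stand in for a solved flow field.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_cells = 50_000                                   # toy stand-in for a multi-million-cell domain
centers = rng.uniform(size=(n_cells, 3))           # cell-center coordinates (x, y, z)
velocity = rng.normal(size=(n_cells, 3))           # cell velocity vectors
volume = rng.uniform(0.5, 2.0, size=n_cells)       # cell volumes used as weights

# Cluster on position and velocity so nodes are spatially compact and
# hydrodynamically similar; scaling the velocity block is a modeling choice.
features = np.hstack([centers, 0.1 * velocity])

km = KMeans(n_clusters=50, n_init=10, random_state=0)
labels = km.fit_predict(features, sample_weight=volume)

# Volume-weighted mean velocity of each lumped node for the reduced-order network
node_velocity = np.array([
    np.average(velocity[labels == k], axis=0, weights=volume[labels == k])
    for k in range(km.n_clusters)
])
print(node_velocity.shape)  # (50, 3)
```

Each cluster then becomes one node of the reduced-order network, carrying volume-averaged flow quantities that a transient thermal solver can use in place of the full CFD field.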
Abstract:
The municipality of San Juan La Laguna, Guatemala, is home to approximately 5,200 people and is located on the western side of the Lake Atitlán caldera. Steep slopes surround all but the eastern side of San Juan. The Lake Atitlán watershed is susceptible to many natural hazards, but the most predictable are the landslides that can occur annually with each rainy season, especially during high-intensity events. Hurricane Stan hit Guatemala in October 2005; the resulting flooding and landslides devastated the Atitlán region. Locations of landslide and non-landslide points were obtained from field observations and orthophotos taken following Hurricane Stan. This study used data from multiple attributes, at every landslide and non-landslide point, and applied different multivariate analyses to optimize a model for landslide prediction during high-intensity precipitation events like Hurricane Stan. The attributes considered in this study are: geology, geomorphology, distance to faults and streams, land use, slope, aspect, curvature, plan curvature, profile curvature, and topographic wetness index. The attributes were pre-evaluated for their ability to predict landslides using four different attribute evaluators, all available in the open-source data mining software Weka: filtered subset, information gain, gain ratio, and chi-squared. Three multivariate algorithms (decision tree J48, logistic regression, and BayesNet) were optimized for landslide prediction using different attributes. The following statistical parameters were used to evaluate model accuracy: precision, recall, F measure, and area under the receiver operating characteristic (ROC) curve. The BayesNet algorithm yielded the most accurate model and was used to build a probability map of landslide initiation points. The probability map developed in this study was also compared to the results of a bivariate landslide susceptibility analysis conducted for the watershed, encompassing Lake Atitlán and San Juan. Landslides from Tropical Storm Agatha (2010) were used to independently validate this study's multivariate model and the bivariate model. The ultimate aim of this study is to share the methodology and results with municipal contacts from the author's time as a U.S. Peace Corps volunteer, to facilitate more effective future landslide hazard planning and mitigation.
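The model-comparison step described above (several classifiers scored with precision, recall, F measure, and ROC AUC) can be sketched as follows. The study used Weka; this Python version is only an illustration under stated assumptions: the features are synthetic stand-ins for the terrain attributes, and Gaussian naive Bayes is used as a rough stand-in for Weka's BayesNet.

```python
# Minimal sketch: compare three classifiers for landslide / non-landslide points
# with precision, recall, F-measure, and ROC AUC under 10-fold cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 11 terrain attributes, binary landslide label
X, y = make_classification(n_samples=400, n_features=11, n_informative=6,
                           random_state=0)

models = {
    "J48-like decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes (BayesNet stand-in)": GaussianNB(),
}
scoring = ["precision", "recall", "f1", "roc_auc"]

for name, model in models.items():
    cv = cross_validate(model, X, y, cv=10, scoring=scoring)
    summary = ", ".join(f"{m}={cv['test_' + m].mean():.3f}" for m in scoring)
    print(f"{name}: {summary}")
```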
Abstract:
This paper describes the design, feature set, and practical experience of the open-source e-learning platform Stud.IP. For each individual course, the features include schedules, upload of term papers, discussion forums, personal homepages, chat rooms, and much more. The aim is to offer a state-of-the-art infrastructure for teaching and learning. Academic institutions also find a powerful environment for managing their staff, maintaining their web pages, and automatically generating course or staff lists. Operators can rely on a dependable support system that lets them participate in further development through the developer and operator community.
Abstract:
In this paper we evaluate the didactic embedding of a CSCL application by means of log file analyses. As an example, we examine the use of the web-based system CommSy in a project-oriented course, which we characterize as an open seminar. We obtain two results: (1) We give guidance on shaping the context of use of a CSCL system and on supporting its initial and continuous use. (2) We describe the analysis of usage occasions and patterns as well as user types on the basis of log files. Log file analyses can serve to validate further evaluation results, but they can themselves be interpreted only in combination with additional information about the context of use.
Abstract:
We describe the use of log file analysis to investigate whether the use of CSCL applications corresponds to their didactic purposes. As an example, we examine the use of the web-based system CommSy as software support for project-oriented university courses. We present two findings: (1) We suggest measures to shape the context of CSCL applications and support their initial and continuous use. (2) We show how log files can be used to analyze how, when, and by whom a CSCL system is used, and thus help to validate further empirical findings. However, log file analyses can only be interpreted reasonably when additional data concerning the context of use are available.
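The kind of log file analysis described in the two abstracts above can be sketched briefly: events are grouped per user to derive simple usage patterns (for example, continuous versus occasional users). The log format "timestamp&lt;TAB&gt;user&lt;TAB&gt;action" is an assumption for illustration, not the actual CommSy log schema, and the threshold separating user types is arbitrary.

```python
# Minimal sketch: group log events per user to derive usage patterns and a
# simple user typology. The log file name and format are assumptions.
from collections import Counter, defaultdict
from datetime import datetime

actions_per_user = defaultdict(Counter)
active_days = defaultdict(set)

with open("commsy_access.log", encoding="utf-8") as log:
    for line in log:
        timestamp, user, action = line.rstrip("\n").split("\t")
        day = datetime.fromisoformat(timestamp).date()
        actions_per_user[user][action] += 1
        active_days[user].add(day)

for user, counter in sorted(actions_per_user.items()):
    days = len(active_days[user])
    kind = "continuous user" if days >= 10 else "occasional user"
    print(user, kind, counter.most_common(3))
```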
Abstract:
In recent years, learning analytics (LA) has attracted a great deal of attention in technology-enhanced learning (TEL) research, as practitioners, institutions, and researchers increasingly see the potential that LA has to shape the future TEL landscape. Generally, LA deals with the development of methods that harness educational data sets to support the learning process. This paper provides a foundation for future research in LA. It provides a systematic overview of this emerging field and its key concepts through a reference model for LA based on four dimensions: data, environments, and context (what?); stakeholders (who?); objectives (why?); and methods (how?). It further identifies various challenges and research opportunities in the area of LA in relation to each dimension.
Abstract:
BACKGROUND Posttraumatic stress disorder (PTSD) may occur in patients after exposure to a life-threatening illness. About one in six patients develops clinically relevant levels of PTSD symptoms after acute myocardial infarction (MI). Symptoms of PTSD are associated with impaired quality of life and increase the risk of recurrent cardiovascular events. The main hypothesis of the MI-SPRINT study is that trauma-focused psychological counseling is more effective than non-trauma-focused counseling in preventing posttraumatic stress after acute MI. METHODS/DESIGN The study is a single-center, randomized controlled psychological trial with two active intervention arms. The sample consists of 426 patients aged 18 years or older who are at 'high risk' of developing clinically relevant posttraumatic stress symptoms. 'High risk' patients are identified with three single-item questions, each on a numeric rating scale (0 to 10), asking about 'pain during MI', 'fear of dying until admission' and/or 'worrying and feeling helpless when being told about having MI'. Exclusion criteria are emergency heart surgery, severe comorbidities, current severe depression, disorientation, cognitive impairment, and suicidal ideation. Patients will be randomly allocated to a single 45-minute counseling session targeting either specific MI-triggered traumatic reactions (that is, the verum intervention) or the general role of psychosocial stress in coronary heart disease (that is, the control intervention). The session will take place at the bedside in the coronary care unit within 48 hours, after patients have reached stable circulatory conditions. Each patient will additionally receive an illustrated information booklet as study material. Sociodemographic factors, psychosocial and medical data, and cardiometabolic risk factors will be assessed during hospitalization. The primary outcome is the interviewer-rated posttraumatic stress level at three-month follow-up, which is hypothesized to be at least 20% lower in the verum group than in the control group (compared using the t-test). Secondary outcomes are posttraumatic stress levels at 12-month follow-up, and psychosocial functioning and cardiometabolic risk factors at both follow-up assessments. DISCUSSION If the verum intervention proves to be effective, the study will be the first to show that a brief trauma-focused psychological intervention delivered within a somatic health care setting can reduce the incidence of posttraumatic stress in acute MI patients. TRIAL REGISTRATION ClinicalTrials.gov: NCT01781247.