983 results for PREDICTIVE MODELING


Relevance:

60.00%

Publisher:

Abstract:

Antigen presentation by major histocompatibility complex class I (MHC I) molecules allows the adaptive immune system to detect and eliminate intracellular pathogens and abnormal cells. Immune surveillance is carried out by CD8 T lymphocytes, which interact with the repertoire of MHC I-associated peptides presented at the surface of all nucleated cells. The main human MHC I genes, HLA-A and HLA-B, are highly polymorphic and consequently differ in how they present antigens. We studied qualitative and quantitative differences in the expression and peptide binding of several HLA allotypes. Using quantitative flow cytometry, we established an expression hierarchy for the four HLA-A,B allotypes investigated. Our results are consistent with an inverse correlation between allotype expression level and peptide diversity, although further studies are needed to consolidate this hypothesis. The global origins of the MHC I-associated peptide repertoire remain a central question, both fundamentally and in the search for immunotherapeutic targets. Using proteogenomic techniques, we identified and analyzed 25,172 MHC I peptides isolated from the B lymphocytes of 18 individuals who collectively express 27 HLA-A,B allotypes. While 58% of genes were the source of 1-64 MHC I peptides per gene, 42% of genes were not represented in the immunopeptidome. Overall, the immunopeptidome presented by the 27 HLA-A,B allotypes covered only 17% of the exomic sequences expressed in the subjects' cells. We identified several transcript and protein features that enhance the production of MHC I peptides.
With these data, we built a logistic regression model that predicts with high accuracy whether a gene from our dataset, or from independent datasets, generates MHC I peptides. Our results show preferential selection of MHC I peptides from a limited repertoire of gene products with distinct features. The notion that the immune system can monitor MHC I peptides covering only a fraction of the protein-coding genome has profound implications in autoimmunity and cancer immunology.
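A minimal sketch of the kind of classifier described above, with hypothetical gene-level features and synthetic labels standing in for the authors' proteogenomic data:

```python
# Illustrative sketch only: a logistic regression predicting whether a
# gene is a source of MHC I peptides. Feature names and data are
# hypothetical stand-ins for the transcript/protein features in the text.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Assumed features: log mRNA abundance, protein length, translation efficiency
X = rng.normal(size=(n, 3))
# Synthetic labeling rule: abundance and translation raise peptide yield
logits = 2.0 * X[:, 0] + 1.0 * X[:, 2] - 0.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
prob_source = model.predict_proba(X)[:, 1]  # P(gene yields MHC I peptides)
```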

Relevance:

60.00%

Publisher:

Abstract:

In this thesis, a machine learning approach was used to develop a predictive model for residual methanol concentration in industrial formalin produced at the Akzo Nobel factory in Kristinehamn, Sweden. The MATLAB™ computational environment, supplemented with the Statistics and Machine Learning Toolbox™ from MathWorks, was used to test various machine learning algorithms on the formalin production data from Akzo Nobel. The Gaussian Process Regression algorithm was found to provide the best results and was used to create the predictive model. The model was compiled into a stand-alone application with a graphical user interface using MATLAB Compiler™.
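As a rough illustration of the chosen method (here with scikit-learn and synthetic data rather than MATLAB and the plant data):

```python
# Hedged sketch: Gaussian Process Regression on synthetic data standing
# in for the formalin process variables; the thesis used MATLAB's
# Statistics and Machine Learning Toolbox on Akzo Nobel plant data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=(60, 1))          # e.g. a process variable
y = np.sin(X).ravel() + rng.normal(0.0, 0.1, 60)  # stand-in for methanol level

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)
mean, std = gpr.predict(np.array([[5.0]]), return_std=True)  # with uncertainty
```

A practical advantage of GPR here is the predictive standard deviation, which flags operating regions where the model is extrapolating.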

Relevance:

40.00%

Publisher:

Abstract:

Predictive distribution modelling of Berberis aristata DC., a rare, threatened plant of high medicinal value, was carried out to understand its potential distribution zones in the Indian Himalayan region. Bioclimatic and topographic variables were used to develop the distribution model with three different algorithms: Genetic Algorithm for Rule-set Production (GARP), Bioclim, and Maximum Entropy (MaxEnt). MaxEnt predicted a wider potential distribution (10.36%) than GARP (4.63%) and Bioclim (2.44%). Validation confirms that these outputs are comparable to the present distribution pattern of B. aristata. This exercise highlights that the species favours the Western Himalaya. However, the Eastern Himalayan states (i.e., Arunachal Pradesh, Nagaland, and Manipur) identified by GARP and MaxEnt as potential occurrence areas require further exploration.
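The Bioclim idea can be sketched in a few lines: a site is deemed suitable if every environmental variable falls inside a percentile envelope of the known presence records. The variables and data below are illustrative assumptions, not the study's layers:

```python
# Illustrative Bioclim-style envelope: a site is "suitable" if every
# bioclimatic variable lies within the 5-95% range of presence records.
# The two variables and the synthetic presences are assumptions.
import numpy as np

rng = np.random.default_rng(2)
# Assumed variables: [mean temperature (C), annual precipitation (mm)]
presences = rng.normal(loc=[12.0, 900.0], scale=[2.0, 100.0], size=(100, 2))

lo = np.percentile(presences, 5, axis=0)
hi = np.percentile(presences, 95, axis=0)

def bioclim_suitable(site):
    """True if every variable lies within the 5-95% presence envelope."""
    return bool(np.all((site >= lo) & (site <= hi)))
```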

Relevance:

40.00%

Publisher:

Abstract:

Assessing prognostic risk is crucial to clinical care and depends critically on both diagnosis and medical interventions. Current methods use this augmented information to build a single prediction rule, but this may not be expressive enough to capture the differential effects of interventions on prognosis. To this end, we propose a supervised Bayesian nonparametric framework that simultaneously discovers latent intervention groups and builds a separate prediction rule for each group. Each prediction rule is learnt from diagnosis data through a Bayesian logistic regression. For inference, we develop an efficient collapsed Gibbs sampler. We demonstrate that our method outperforms baselines in predicting 30-day hospital readmission using two patient cohorts: Acute Myocardial Infarction and Pneumonia. The significance of this model is that it can be applied widely across a broad range of medical prognosis tasks.
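The core idea of a per-group prediction rule can be sketched as follows. Here the intervention groups and data are synthetic and given, whereas the paper infers the latent groups with a collapsed Gibbs sampler and uses Bayesian (not plain) logistic regression:

```python
# Sketch of the core idea only: one readmission rule per intervention
# group instead of a single global rule. Groups, features, and effects
# are synthetic assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
group_effects = {                     # assumed group-specific effects
    "intervention_A": np.array([2.0, -1.0, 0.0, 0.5]),
    "intervention_B": np.array([-1.0, 2.0, 0.5, 0.0]),
}
models = {}
for group, w in group_effects.items():
    X = rng.normal(size=(200, 4))                    # diagnosis features
    y = (X @ w + rng.normal(0.0, 0.5, 200) > 0).astype(int)
    models[group] = LogisticRegression().fit(X, y)   # one rule per group
```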

Relevance:

40.00%

Publisher:

Abstract:

Medical interventions critically determine clinical outcomes, but prediction models either ignore interventions or dilute their impact by building a single prediction rule that amalgamates interventions with other features. One rule across all interventions may not capture their differential effects. Interventions also change over time as innovations are made, requiring prediction models to evolve accordingly. To address these gaps, we propose a prediction framework that explicitly models interventions by extracting a set of latent intervention groups through a Hierarchical Dirichlet Process (HDP) mixture. Data are split into temporal windows, and for each window a separate distribution over the intervention groups is learnt. This ensures that the model evolves with changing interventions. The outcome is modeled as conditional on both the latent grouping and the patient's condition through a Bayesian logistic regression. Learning a distribution for each time window results in an overly complex model when interventions do not change in every window. We show that by replacing the HDP with a dynamic HDP prior, a more compact set of distributions can be learnt. Experiments on two hospital datasets demonstrate the superiority of our framework over many existing clinical and traditional prediction frameworks.
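A small illustration of the Dirichlet process building block underlying the HDP prior: mixture weights drawn by (truncated) stick-breaking, which naturally concentrates mass on a few groups. The concentration parameter is an arbitrary assumption:

```python
# Illustrative only: truncated stick-breaking draw of Dirichlet process
# mixture weights, the building block of the HDP prior described above.
import numpy as np

def stick_breaking(alpha, num_sticks, rng):
    """Truncated stick-breaking weights: beta_k ~ Beta(1, alpha),
    w_k = beta_k * prod_{j<k}(1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=num_sticks)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

rng = np.random.default_rng(4)
weights = stick_breaking(alpha=2.0, num_sticks=20, rng=rng)
# A few weights dominate: interventions concentrate into a few groups
```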

Relevance:

40.00%

Publisher:

Abstract:

Medical outcomes are inextricably linked to patient illness and clinical interventions. Interventions change the course of disease, crucially determining outcome. Traditional outcome prediction models build a single classifier by augmenting disease information with interventions. Interventions, however, differentially affect prognosis, so a single prediction rule may not suffice to capture these variations. Interventions also evolve over time as more advanced interventions replace older ones. To this end, we propose a supervised Bayesian nonparametric framework that models a set of intervention groups through a mixture distribution, builds a separate prediction rule for each group, and allows the mixture distribution to change with time. This is achieved using a hierarchical Dirichlet process mixture model over the interventions. The outcome is then modeled as conditional on both the latent grouping and the disease information through a Bayesian logistic regression. Experiments on synthetic and medical cohorts for 30-day readmission prediction demonstrate the superiority of the proposed model over clinical and data mining baselines.

Relevance:

40.00%

Publisher:

Abstract:

Empirical evidence and theoretical studies suggest that the phenotype, i.e., cellular- and molecular-scale dynamics (including proliferation rate and adhesiveness) governed by microenvironmental factors and gene expression, determines not only tumor growth and invasiveness but also gross tumor-scale morphology. It has been difficult to quantify the relative effect of these links on disease progression and prognosis using conventional clinical and experimental methods and observables. As a result, successful individualized treatment of highly malignant and invasive cancers, such as glioblastoma, via surgical resection and chemotherapy cannot be offered, and outcomes are generally poor. What is needed is a deterministic, quantifiable method for understanding the connections between phenotype and tumor morphology. Here, we critically assess the advantages and disadvantages of recent computational modeling efforts (e.g., continuum, discrete, and cellular automata models) that have pursued this understanding. Based on this assessment, we review a multiscale (molecular to gross tumor scale), mathematical and computational "first-principle" approach based on mass conservation and other physical laws, such as those employed in reaction-diffusion systems. Model variables describe known characteristics of tumor behavior, and parameters and functional relationships across scales are informed by in vitro, in vivo, and ex vivo biology. We review the feasibility of this methodology, which, once coupled to tumor imaging and tumor biopsy or cell culture data, should enable prediction of tumor growth and therapy outcome through quantification of the relation between the underlying dynamics and morphological characteristics.
In particular, morphologic stability analysis of this mathematical model reveals that tumor cell patterning at the tumor-host interface is regulated by cell proliferation, adhesion, and other phenotypic characteristics: histopathological information on the tumor boundary can be input to the mathematical model and used as a phenotype-diagnostic tool to predict collective and individual tumor cell invasion of surrounding tissue. This approach further provides a means to deterministically test the effects of novel and hypothetical therapy strategies on tumor behavior.
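A toy 1-D instance of the reaction-diffusion modeling style reviewed above (Fisher-KPP proliferation plus diffusion); parameter values are arbitrary, and the multiscale tumor models discussed are far richer than this sketch:

```python
# Toy 1-D reaction-diffusion sketch: tumor cell density u grows
# logistically (proliferation) and spreads by diffusion, producing a
# traveling invasion front. Parameters are illustrative, not calibrated.
import numpy as np

D, rho = 0.1, 1.0            # diffusion and proliferation rates (assumed)
dx, dt, steps = 0.5, 0.1, 500
u = np.zeros(100)
u[:5] = 1.0                  # initial tumor mass at the left edge

for _ in range(steps):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0                        # crude no-flux boundaries
    u = u + dt * (D * lap + rho * u * (1.0 - u))  # diffusion + logistic growth
    u = np.clip(u, 0.0, 1.0)
```

The front's advance speed and shape depend on the proliferation and diffusion parameters, which is the phenotype-to-morphology link the review emphasizes.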

Relevance:

40.00%

Publisher:

Abstract:

The usual way of modeling variability, using threshold-voltage shift and drain-current amplification, is becoming inaccurate as new sources of variability appear in sub-22 nm devices. In this work we apply the four-injector approach to variability modeling in the simulation of SRAMs with predictive technology models from the 20 nm down to the 7 nm node. We show that the SRAMs, designed following the ITRS roadmap, present stability metrics at least 20% higher than those obtained with a classical variability modeling approach. Speed estimates are also pessimistic, whereas leakage is underestimated if sub-threshold slope and DIBL mismatch, and their correlations with threshold voltage, are not considered.

Relevance:

40.00%

Publisher:

Abstract:

Groundwater systems of different densities are often mathematically modeled to understand and predict environmental behavior such as seawater intrusion or submarine groundwater discharge. Additional data collection may be justified if it cost-effectively reduces the uncertainty of a model's predictions. Collecting salinity as well as temperature data could reduce predictive uncertainty in a variable-density model. However, before numerical models can be created, the modeling code requires rigorous testing. This research documents the benchmark testing of a new modeling code, SEAWAT Version 4. The benchmark problems include various combinations of density-dependent flow resulting from variations in concentration and temperature. The verified code, SEAWAT, was then applied to two hydrological analyses to explore the capacity of a variable-density model to guide data collection.

The first analysis tested a linear method to guide data collection by quantifying the contribution of different data types and locations toward reducing predictive uncertainty in a nonlinear variable-density flow and transport model. The relative contributions of temperature and concentration measurements, at different locations within a simulated carbonate platform, for predicting movement of the saltwater interface were assessed. Results showed that concentration data had greater worth than temperature data in reducing predictive uncertainty in this case, and indicated that a linear method can quantify data worth in a nonlinear model.

The second hydrological analysis used a model to identify the transient response of the salinity, temperature, age, and amount of submarine groundwater discharge to changes in tidal ocean stage, seasonal temperature variations, and different types of geology. The model was compared against multiple kinds of data to (1) calibrate and verify the model, and (2) explore its potential to guide the collection of data using techniques such as electromagnetic resistivity, thermal imagery, and seepage meters. Results indicated that the model can give insight into submarine groundwater discharge and can be used to guide data collection.

Relevance:

30.00%

Publisher:

Abstract:

Many studies focused on the development of crash prediction models have produced aggregate models that quantify the safety effects of geometric, traffic, and environmental factors on the expected number of total, fatal, injury, and/or property-damage crashes at specific locations. Models that predict different crash types, however, have rarely been developed. Crash type models are useful for at least three reasons. First, they can identify sites that are high risk with respect to specific crash types but that may not be revealed through crash totals. Second, countermeasures are likely to affect only a subset of all crashes, usually called target crashes, so examining crash types improves the ability to identify effective countermeasures. Finally, there is a priori reason to believe that different crash types (e.g., rear-end, angle, etc.) are associated with road geometry, the environment, and traffic variables in different ways, justifying the estimation of individual predictive models. The objectives of this paper are to (1) demonstrate that different crash types are associated with predictor variables in different ways (as theorized) and (2) show that estimating crash type models may yield greater insight into crash occurrence and countermeasure effectiveness. The paper first describes the estimation results of crash prediction models for angle, head-on, rear-end, sideswipe (same-direction and opposite-direction), and pedestrian-involved crash types. Serving as a basis for comparison, a crash prediction model is also estimated for total crashes. Based on 837 motor vehicle crashes collected at two-lane rural intersections in the state of Georgia, six prediction models are estimated: two Poisson (P) models and four negative binomial (NB) models.
The analysis reveals that factors such as annual average daily traffic, the presence of turning lanes, and the number of driveways have a positive association with each crash type, whereas median width and the presence of lighting are negatively associated. In the best-fitting models, covariates are related to crash types in different ways, suggesting that crash types are associated with different precrash conditions and that modeling total crash frequency may not be helpful for identifying specific countermeasures.
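A hedged sketch of one such count model: a Poisson regression of a single crash type on two of the predictors named above, fit to synthetic data (the paper estimates its models on the Georgia intersection data):

```python
# Illustrative crash-type count model: Poisson regression of rear-end
# crash counts on AADT and driveway count. Data and coefficients are
# synthetic assumptions, not the paper's estimates.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(5)
n = 300
aadt = rng.uniform(1.0, 10.0, n)      # annual average daily traffic (1000s)
driveways = rng.integers(0, 6, n).astype(float)
mu = np.exp(-1.0 + 0.2 * aadt + 0.15 * driveways)  # assumed true crash rates
crashes = rng.poisson(mu).astype(float)

X = np.column_stack([aadt, driveways])
model = PoissonRegressor(alpha=1e-4).fit(X, crashes)
# Both fitted coefficients recover the positive associations noted above
```

A negative binomial model would relax the Poisson equal-mean-variance assumption, which is why the paper's best-fitting models are mostly NB.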

Relevance:

30.00%

Publisher:

Abstract:

In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; this sole reliance has led to the development of irrelevant theory and questionable research conclusions ([1], p. 199). We outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and the limitations and problems of these new algorithms. Organisational limitations and restrictions on these initiatives are also discussed.

Relevance:

30.00%

Publisher:

Abstract:

Data collection using Autonomous Underwater Vehicles (AUVs) is increasing in importance within the oceanographic research community. Unlike traditional moored or static platforms, mobile sensors require intelligent planning strategies to manoeuvre through the ocean. However, the ability to navigate to high-value locations and collect data of specific scientific merit is worth the planning effort. In this study, we examine the use of ocean model predictions to determine the locations to be visited by an AUV and to plan the trajectory that the vehicle executes during the sampling mission. The objectives are: (a) to provide near-real-time, in situ measurements to a large-scale ocean model to increase the skill of future predictions, and (b) to utilize ocean model predictions as a component in an end-to-end autonomous prediction and tasking system for aquatic, mobile sensor networks. We present an algorithm designed to generate paths for AUVs to track a dynamically evolving ocean feature using ocean model predictions. It builds on previous work in this area by incorporating predicted current velocities into the path planning, to assist in solving the 3-D motion planning problem of steering an AUV between two selected locations. We present simulation results for tracking a freshwater plume with this algorithm. Additionally, we present experimental results from field trials that test the skill of the model as well as the incorporation of its predictions into an AUV trajectory planner. These results indicate a modest, but measurable, improvement in surfacing error when the model predictions are incorporated into the planner.
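One ingredient of current-aware planning can be sketched in 2-D: compensating the commanded vehicle heading for a predicted current so that the net velocity points along the desired track. This is a toy simplification; the planner in the study uses full 3-D ocean-model velocity fields:

```python
# Toy sketch (2-D, assumed steady current): steer into the current so
# that vehicle velocity + current points along the goal bearing.
import math

def heading_with_current(goal_bearing, speed, current_x, current_y):
    """Return a heading (rad) such that vehicle velocity plus the current
    points along goal_bearing; assumes speed exceeds the cross-current."""
    # Component of the current perpendicular to the desired track
    cross = -current_x * math.sin(goal_bearing) + current_y * math.cos(goal_bearing)
    # Offset the heading to cancel the cross-track drift
    return goal_bearing - math.asin(cross / speed)
```

For example, heading due east at 2 m/s through a 1 m/s northward current requires crabbing about 30 degrees south of east to hold the track.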

Relevance:

30.00%

Publisher:

Abstract:

In this paper a novel controller for stable and precise operation of multi-rotors carrying heavy slung loads is introduced. First, simplified equations of motion for the multi-rotor and slung load are derived. The model is then used to design a Nonlinear Model Predictive Controller (NMPC) that can manage the highly nonlinear dynamics while accounting for system constraints. The controller is shown to track specified waypoints while simultaneously damping large slung-load oscillations. A Linear-Quadratic Regulator (LQR) controller is also derived, and control performance is compared in simulation. Results show the improved performance of the NMPC controller over a larger flight envelope, including aggressive maneuvers and large slung-load displacements, while its computational cost remains small enough for practical implementation. Such systems for small Unmanned Aerial Vehicles (UAVs) may provide significant benefit to applications in agriculture, law enforcement, and construction.
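For reference, the LQR baseline idea can be sketched on a double integrator (1-D position and velocity), not the paper's multi-rotor slung-load model; the weights below are arbitrary assumptions:

```python
# Sketch of an LQR design on a double integrator, illustrating the
# baseline controller class compared against NMPC in the paper.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # x' = v, v' = u
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                 # weight position error over velocity
R = np.array([[1.0]])                    # control-effort weight

P = solve_continuous_are(A, B, Q, R)     # solve the algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P           # optimal state-feedback gain, u = -K x
```

Unlike NMPC, the LQR gain is computed once offline and cannot enforce state or input constraints, which is the trade-off the paper's comparison highlights.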