982 results for Prediction theory.


Abstract:

A new model for dealing with decision making under risk, considering subjective and objective information in the same formulation, is presented here. The uncertain probabilistic weighted average (UPWA) is also presented. Its main advantage is that it unifies the probability and the weighted average in the same formulation while considering the degree of importance that each case has in the analysis. Moreover, it is able to deal with uncertain environments represented in the form of interval numbers. We study some of its main properties and particular cases. The applicability of the UPWA is also studied and shown to be very broad, because all previous studies that use the probability or the weighted average can be revisited with this new approach. Focus is placed on a multi-person decision-making problem regarding the selection of strategies, using the theory of expertons.
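As a rough illustration of how such an operator can be evaluated, the sketch below blends a probability vector and a weight vector with a coefficient beta and aggregates interval-valued arguments; the exact formulation and notation in the paper may differ, and all names and values are illustrative.

```python
# Illustrative sketch of an uncertain probabilistic weighted average (UPWA).
# Assumption: a coefficient beta blends the probability vector with the
# importance-weight vector, and the arguments are interval numbers [lo, hi].

def upwa(intervals, probabilities, weights, beta):
    """Aggregate interval arguments with a blend of probabilities and weights."""
    assert abs(sum(probabilities) - 1.0) < 1e-9 and abs(sum(weights) - 1.0) < 1e-9
    # Blended weighting vector: v_i = beta * p_i + (1 - beta) * w_i
    v = [beta * p + (1.0 - beta) * w for p, w in zip(probabilities, weights)]
    lo = sum(v_i * a[0] for v_i, a in zip(v, intervals))
    hi = sum(v_i * a[1] for v_i, a in zip(v, intervals))
    return (lo, hi)

# Example: interval payoffs of one strategy under three states of nature.
payoffs = [(40, 60), (20, 30), (70, 90)]
p = [0.5, 0.3, 0.2]   # objective probabilities of the states
w = [0.2, 0.3, 0.5]   # subjective importance weights
print(upwa(payoffs, p, w, beta=0.6))
```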

Abstract:

ABSTRACT: BACKGROUND: Chest wall syndrome (CWS), the main cause of chest pain in primary care practice, is most often an exclusion diagnosis. We developed and evaluated a clinical prediction rule for CWS. METHODS: Data from a multicenter clinical cohort of consecutive primary care patients with chest pain were used (59 general practitioners, 672 patients). A final diagnosis was determined after 12 months of follow-up. We used the literature and bivariate analyses to identify candidate predictors, and multivariate logistic regression was used to develop a clinical prediction rule for CWS. We used data from a German cohort (n = 1212) for external validation. RESULTS: From bivariate analyses, we identified six variables characterizing CWS: thoracic pain (neither retrosternal nor oppressive), stabbing pain, well-localized pain, no history of coronary heart disease, absence of general practitioner's concern, and pain reproducible by palpation. This last variable accounted for 2 points in the clinical prediction rule, the others for 1 point each; the total score ranged from 0 to 7 points. The area under the receiver operating characteristic (ROC) curve was 0.80 (95% confidence interval 0.76-0.83) in the derivation cohort (specificity: 89%; sensitivity: 45%; cut-off set at 6 points). Among all patients presenting CWS (n = 284), 71% (n = 201) had pain reproducible by palpation and 45% (n = 127) were correctly diagnosed. For a subset (n = 43) of these correctly classified CWS patients, 65 additional investigations (30 electrocardiograms, 16 thoracic radiographs, 10 laboratory tests, eight specialist referrals, one thoracic computed tomography) had been performed to achieve the diagnosis. False positives (n = 41) included three patients with stable angina (1.8% of all positives). External validation yielded an area under the ROC curve of 0.76 (95% confidence interval 0.73-0.79), with a sensitivity of 22% and a specificity of 93%. CONCLUSIONS: This CWS score offers a useful complement to the usual exclusion-based process for diagnosing CWS. Indeed, for the 127 patients presenting CWS and correctly classified by our clinical prediction rule, 65 additional tests and exams could have been avoided. However, the reproduction of chest pain by palpation, the most important characteristic for diagnosing CWS, is not pathognomonic.
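As a small illustration of how such a rule could be applied, the sketch below totals the points described above and flags scores at or above the 6-point cut-off; the item names are hypothetical shorthand for the predictors listed in the abstract, and the scoring details should be checked against the published rule before any use.

```python
# Illustrative scoring sketch for the chest wall syndrome (CWS) rule described above.
# Pain reproducible by palpation counts 2 points, the five other items 1 point each
# (total 0-7); scores at or above the 6-point cut-off are flagged.

CWS_ITEMS = {
    "thoracic_pain_neither_retrosternal_nor_oppressive": 1,
    "stabbing_pain": 1,
    "well_localized_pain": 1,
    "no_history_of_coronary_heart_disease": 1,
    "absence_of_gp_concern": 1,
    "pain_reproducible_by_palpation": 2,
}

def cws_score(findings, cutoff=6):
    """findings: dict mapping item name -> bool. Returns (score, flagged_as_likely_CWS)."""
    score = sum(pts for item, pts in CWS_ITEMS.items() if findings.get(item, False))
    return score, score >= cutoff

example = {name: True for name in CWS_ITEMS}
example["no_history_of_coronary_heart_disease"] = False
print(cws_score(example))   # -> (6, True)
```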

Abstract:

In IVF, around 70% of embryos fail to implant. Often more than one embryo is transferred in order to enhance the chances of pregnancy, but this comes at the price of an increased risk of multiple pregnancy. With the aim of increasing the success rate with a single embryo, research projects on prognostic factors of embryo viability have been initiated, but no marker has found a routine clinical application to date. Effects of soluble human leukocyte antigen-G (sHLA-G) on both NK cell activity and the Th1/Th2 cytokine balance suggest a role in the embryo implantation process, but findings on the relevance of sHLA-G measurements in embryo culture medium and in follicular fluid (FF) have been inconsistent to date. In this study, we investigated the potential of sHLA-G to predict the achievement of a pregnancy after IVF-ICSI in a large number of patients (n = 221). sHLA-G was determined in media and in FF by ELISA. In both FF and embryo medium, no significant differences in sHLA-G concentrations were observed between the groups "pregnancy" and "implantation failure", or between the groups "ongoing" versus "miscarried pregnancies". Our results do not favour routine sHLA-G determinations in either the FF or embryo-conditioned media with the current assay technology available.

Abstract:

(1) The common shrew Sorex araneus and Millet's shrew S. coronatus are sibling species. They are morphologically and genetically very similar but do not hybridize. Their parapatric distribution throughout south-western Europe, with a few narrow zones of distributional overlap, suggests that they are in competitive parapatry. (2) Two of these contact zones were studied; there was evidence of coexistence over periods of 2 years as well as habitat segregation. In both zones, the species segregated on litter thickness and humidity variables. (3) A simple analysis of spatial distribution showed that habitats visible in the field corresponded to the habitats selected by the species. Habitat selection was found throughout the annual life-cycle of the shrews. (4) In one contact zone, a removal experiment was performed to test whether habitat segregation is induced by interspecific interactions. The experiment showed that the species select habitats differentially when both are present and abandon habitat selection when their competitor is removed. (5) These results confirm the role of resource partitioning in promoting narrow ranges of distributional overlap between such parapatric species and qualitatively support the prediction of habitat selection theory that, in a two-species system, coexistence may be achieved by differential habitat selection to avoid competition. The results also support the view that the common shrew and Millet's shrew are in competitive parapatry.

Abstract:

Summary: Regionalization of the clay content of cultivated soils using geostatistics and point data

Abstract:

The asphalt concrete (AC) dynamic modulus (|E*|) is a key design parameter in mechanistic-based pavement design methodologies such as the American Association of State Highway and Transportation Officials (AASHTO) MEPDG/Pavement-ME Design. The objective of this feasibility study was to develop frameworks for predicting the AC |E*| master curve from falling weight deflectometer (FWD) deflection-time history data collected by the Iowa Department of Transportation (Iowa DOT). A neural network (NN) methodology was developed based on a synthetically generated viscoelastic forward solutions database to predict AC relaxation modulus (E(t)) master curve coefficients from FWD deflection-time history data. According to the theory of viscoelasticity, if the AC relaxation modulus E(t) is known, |E*| can be calculated (and vice versa) through numerical inter-conversion procedures. Several case studies focusing on full-depth AC pavements were conducted to isolate potential backcalculation issues that are related only to the modulus master curve of the AC layer. For the proof-of-concept demonstration, a comprehensive full-depth AC analysis was carried out through 10,000 batch simulations using a viscoelastic forward analysis program. Anomalies were detected in the comprehensive raw synthetic database and were eliminated through the imposition of certain constraints involving the sigmoid master curve coefficients. The surrogate forward modeling results showed that NNs are able to predict deflection-time histories from E(t) master curve coefficients and other layer properties very well. The NN inverse modeling results demonstrated the potential of NNs to backcalculate the E(t) master curve coefficients from single-drop FWD deflection-time history data, although the current prediction accuracies are not sufficient to recommend these models for practical implementation. Considering the complex nature of the problem investigated, with many uncertainties involved, including the possible presence of dynamics during FWD testing (related to the presence and depth of a stiff layer, inertial and wave propagation effects, etc.), the limitations of current FWD technology (integration errors, truncation issues, etc.), and the need for a rapid and simplified approach for routine implementation, future research recommendations have been provided, making a strong case for an expanded research study.
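For context, the |E*| (or E(t)) master curve in MEPDG-type analyses is commonly represented with a sigmoidal function of reduced time; the small sketch below evaluates that standard form with purely illustrative coefficients, since the study's fitted values are not given in the abstract.

```python
import numpy as np

# Standard MEPDG-style sigmoidal master curve:
#   log10|E*| = delta + alpha / (1 + exp(beta + gamma * log10(t_r)))
# where t_r is reduced time after time-temperature shifting.
# All coefficients below are illustrative placeholders.

def dynamic_modulus(t_r, delta=1.0, alpha=3.5, beta=-1.0, gamma=0.5):
    """Return |E*| (in the units implied by delta and alpha) at reduced time t_r."""
    log_e_star = delta + alpha / (1.0 + np.exp(beta + gamma * np.log10(t_r)))
    return 10.0 ** log_e_star

reduced_times = np.logspace(-5, 2, 8)   # sweep of reduced times
print(dynamic_modulus(reduced_times))
```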

Abstract:

Aim and structure of the thesis: In the first article, I focus on the context in which the Homo Economicus was constructed - i.e., the conception of economic actors as fully rational, informed, egocentric, and profit-maximizing. I argue that the Homo Economicus theory was developed in a specific societal context with specific (partly tacit) values and norms. These norms have implicitly influenced the behavior of economic actors and have framed the interpretation of the Homo Economicus. Different factors, however, have weakened this implicit influence of the broader societal values and norms on economic actors. The result is an unbridled interpretation and application of the values and norms of the Homo Economicus in the business environment, and perhaps also in the broader society. In the second article, I show that the morality of many economic actors relies on isomorphism, i.e., the attempt to fit into the group by adopting the moral norms surrounding them. In consequence, if the norms prevailing in a specific group or context (such as a specific region or a specific industry) change, it can be expected that actors with an 'isomorphism morality' will also adapt their ethical thinking and their behavior - for the 'better' or for the 'worse'. The article further describes the process through which corporations could emancipate themselves from the ethical norms prevailing in the broader society, and therefore develop an institution with specific norms and values. These norms mainly rely on mainstream business theories praising the economic actor's self-interest and neglecting moral reasoning. Moreover, because of isomorphism morality, many economic actors have changed their perception of ethics, and have abandoned the values prevailing in the broader society in order to adopt those of the economic theory. Finally, isomorphism morality also implies that these economic actors will change their morality again if the institutional context changes. The third article highlights the role and responsibility of business scholars in promoting a systematic reflection and self-critique of the business system, and develops alternative models to fill the moral void of the business institution and its inherent legitimacy crisis. Indeed, the current business institution relies on assumptions such as scientific neutrality and specialization, which seem at least partly challenged by two factors. First, self-fulfilling prophecy provides scholars with an important (even if sometimes undesired) normative influence over practical life. Second, the increasing complexity of today's (socio-political) world and the interactions between the different elements constituting our society question the strong specialization of science. For instance, economic theories are not unrelated to psychology or sociology, and economic actors influence socio-political structures and processes, e.g., through lobbying (Dobbs, 2006; Rondinelli, 2002), or through marketing, which changes not only the way we consume but, more generally, tries to instill a specific lifestyle (Cova, 2004; M. K. Hogg & Michell, 1996; McCracken, 1988; Muniz & O'Guinn, 2001). In consequence, business scholars are key actors in shaping both tomorrow's economic world and its broader context. A greater awareness of this influence might be a first step toward an increased feeling of civic responsibility and accountability for the models and theories developed or taught in business schools.

Abstract:

Aims: Plasma concentrations of imatinib differ largely between patients despite the same dosage, owing to large inter-individual variability in pharmacokinetic (PK) parameters. As the drug concentration at the end of the dosage interval (Cmin) correlates with treatment response and tolerability, monitoring of Cmin is suggested for therapeutic drug monitoring (TDM) of imatinib. Due to logistic difficulties, random sampling during the dosage interval is, however, often performed in clinical practice, thus rendering the respective results not informative regarding Cmin values. Objectives: (I) To extrapolate randomly measured imatinib concentrations to the more informative Cmin using classical Bayesian forecasting. (II) To extend the classical Bayesian method to account for correlation between PK parameters. (III) To evaluate the predictive performance of both methods. Methods: 31 paired blood samples (random and trough levels) were obtained from 19 cancer patients under imatinib. Two Bayesian maximum a posteriori (MAP) methods were implemented: (A) a classical method ignoring correlation between PK parameters, and (B) an extended one accounting for correlation. Both methods were applied to estimate individual PK parameters, conditional on random observations and covariate-adjusted priors from a population PK model. The PK parameter estimates were used to calculate trough levels. Relative prediction errors (PE) were analyzed to evaluate accuracy (one-sample t-test) and to compare precision between the methods (F-test to compare variances). Results: Both Bayesian MAP methods allowed non-biased predictions of individual Cmin compared with observations: (A) -7% mean PE (CI95% -18 to 4%, p = 0.15) and (B) -4% mean PE (CI95% -18 to 10%, p = 0.69). Relative standard deviations of actual observations from predictions were 22% (A) and 30% (B), i.e., comparable to the intra-individual variability reported. Precision was not improved by taking into account correlation between PK parameters (p = 0.22). Conclusion: Clinical interpretation of randomly measured imatinib concentrations can be assisted by Bayesian extrapolation to the maximum likelihood Cmin. Classical Bayesian estimation can be applied for TDM without the need to include correlation between PK parameters. Both methods could be adapted in the future to evaluate other individual pharmacokinetic measures correlated with clinical outcomes, such as the area under the curve (AUC).
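A minimal sketch of the kind of Bayesian maximum a posteriori (MAP) step described above, under a one-compartment model with first-order absorption: individual clearance and volume are estimated from a single randomly timed concentration by penalizing deviation from population priors, and the trough at the end of the dosing interval is then predicted. The population values, variances, fixed absorption constant, and dosing scenario are illustrative placeholders, not the study's model.

```python
import numpy as np
from scipy.optimize import minimize

# One-compartment oral model with first-order absorption (single dose, for brevity).
def conc(t, dose, cl, v, ka):
    ke = cl / v
    return (dose * ka) / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Illustrative log-normal population priors and residual error (placeholders).
POP_CL, POP_V = 14.0, 350.0    # typical clearance (L/h) and volume (L)
OM2_CL, OM2_V = 0.10, 0.08     # variances of log(CL) and log(V)
SIGMA_PROP = 0.25              # proportional residual error
KA = 0.6                       # absorption rate constant (1/h), fixed here

def map_estimate(t_obs, c_obs, dose):
    """MAP estimate of individual CL and V from one randomly timed observation."""
    def neg_log_post(x):
        log_cl, log_v = x
        pred = conc(t_obs, dose, np.exp(log_cl), np.exp(log_v), KA)
        resid = ((c_obs - pred) / (SIGMA_PROP * pred)) ** 2
        prior = ((log_cl - np.log(POP_CL)) ** 2 / OM2_CL
                 + (log_v - np.log(POP_V)) ** 2 / OM2_V)
        return 0.5 * (resid + prior)
    res = minimize(neg_log_post, x0=[np.log(POP_CL), np.log(POP_V)])
    return np.exp(res.x)

# Example: 400 mg dose, 1.5 mg/L measured 6 h post-dose (illustrative values).
cl_i, v_i = map_estimate(t_obs=6.0, c_obs=1.5, dose=400.0)
cmin_pred = conc(24.0, 400.0, cl_i, v_i, KA)   # extrapolated trough at 24 h
print(cl_i, v_i, cmin_pred)
```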

Abstract:

Remote sensing image processing is nowadays a mature research area. The techniques developed in the field allow many real-life applications with great societal value. For instance, urban monitoring, fire detection or flood prediction can have a great impact on economic and environmental issues. To attain such objectives, remote sensing has turned into a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics, and communications. From a machine learning and signal/image processing point of view, all the applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion or feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing image processing.

Abstract:

AIM: Total imatinib concentrations are currently measured for the therapeutic drug monitoring of imatinib, whereas only the free drug equilibrates with cells for pharmacological action. Due to technical and cost limitations, routine measurement of free concentrations is generally not performed. In this study, free and total imatinib concentrations were measured to establish a model allowing the confident prediction of imatinib free concentrations based on total concentrations and plasma protein measurements. METHODS: One hundred and fifty total and free plasma concentrations of imatinib were measured in 49 patients with gastrointestinal stromal tumours. A population pharmacokinetic model was built to characterize mean total and free concentrations with inter-patient and intra-patient variability, while taking into account α1-acid glycoprotein (AGP) and human serum albumin (HSA) concentrations, in addition to other demographic and environmental covariates. RESULTS: A one-compartment model with first-order absorption was used to characterize total and free imatinib concentrations. Only AGP influenced imatinib total clearance. Imatinib free concentrations were best predicted using a non-linear binding model to AGP, with a dissociation constant Kd of 319 ng ml(-1), assuming a 1:1 molar binding ratio. The addition of HSA to the equation did not improve the prediction of imatinib unbound concentrations. CONCLUSION: Although free concentration monitoring is probably more appropriate than monitoring of total concentrations, it requires an additional ultrafiltration step and sensitive analytical technology, not always available in clinical laboratories. The model proposed might represent a convenient approach to estimating imatinib free concentrations. However, therapeutic ranges for free imatinib concentrations remain to be established before this approach enters routine practice.
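To make the binding idea concrete, here is a small sketch of a generic 1:1 saturable binding calculation that recovers the free concentration from the total concentration, an AGP level, and a dissociation constant; the molar conversions and the assumption that binding capacity equals the molar AGP concentration are illustrative simplifications, not the fitted model from the paper.

```python
# Illustrative 1:1 saturable binding: total = free + Bmax * free / (Kd + free).
# Solving for the free concentration gives a quadratic with one positive root.
# Molecular weights and the one-site-per-AGP-molecule capacity are assumptions.

MW_IMATINIB = 493.6     # g/mol
MW_AGP = 41_000.0       # g/mol (approximate)

def free_concentration(total_ng_ml, agp_g_l, kd_ng_ml=319.0):
    """Estimate the free drug concentration (ng/mL) from total drug (ng/mL) and AGP (g/L)."""
    # Binding capacity expressed in ng/mL of drug: mol/L of AGP times drug molar mass.
    bmax = (agp_g_l / MW_AGP) * MW_IMATINIB * 1e6   # g/L -> ng/mL conversion factor is 1e6
    b = bmax + kd_ng_ml - total_ng_ml
    return (-b + (b * b + 4.0 * kd_ng_ml * total_ng_ml) ** 0.5) / 2.0

# Example: total 2500 ng/mL with AGP at 1.0 g/L (illustrative values).
print(free_concentration(2500.0, 1.0))
```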

Abstract:

Schizophrenia is postulated to be the prototypical dysconnection disorder, in which hallucinations are the core symptom. Due to high heterogeneity in methodology across studies and in the clinical phenotype, it remains unclear whether the structural brain dysconnection is global or focal and whether clinical symptoms result from this dysconnection. In the present work, we attempt to clarify this issue by studying a population considered a homogeneous genetic subtype of schizophrenia, namely the 22q11.2 deletion syndrome (22q11.2DS). Cerebral MRIs were acquired for 46 patients and 48 age- and gender-matched controls (aged 6-26; mean age 15.20 ± 4.53 and 15.28 ± 4.35 years, respectively). Using the Connectome mapper pipeline (connectomics.org), which combines structural and diffusion MRI, we created a whole-brain network for each individual. Graph theory was used to quantify the global and local properties of the brain network organization for each participant. A global degree loss of 6% was found in patients' networks, along with an increased characteristic path length. After identifying and comparing hubs, a significant loss of degree was found in 58% of the patients' hubs. Based on Allen's brain network model for hallucinations, we explored the association between local efficiency and symptom severity. Negative correlations were found in Broca's area (p < 0.004) and Wernicke's area (p < 0.023), and a positive correlation was found in the dorsolateral prefrontal cortex (DLPFC) (p < 0.014). In line with the dysconnection findings in schizophrenia, our results provide preliminary evidence for a targeted alteration in the organization of brain network hubs in individuals with a genetic risk for schizophrenia. The study of specific disorganization in language, speech and thought regulation networks sharing similar network properties may help to understand their role in the hallucination mechanism.
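A small sketch of the kinds of graph measures mentioned above (degree, characteristic path length, local efficiency, and a simple degree-based hub criterion), computed with networkx on a toy network; the hub threshold of one standard deviation above the mean degree is an assumption, not necessarily the definition used in the study.

```python
import networkx as nx
import numpy as np

# Toy stand-in for a subject-specific structural network; in practice this would be
# the whole-brain network produced by the Connectome Mapper pipeline.
G = nx.erdos_renyi_graph(n=80, p=0.15, seed=0)

degrees = np.array([d for _, d in G.degree()])
char_path_length = nx.average_shortest_path_length(G)   # assumes a connected graph
local_eff = nx.local_efficiency(G)

# Simple hub definition (assumption): degree greater than mean + 1 SD.
hub_threshold = degrees.mean() + degrees.std()
hubs = [node for node, d in G.degree() if d > hub_threshold]

print(f"mean degree: {degrees.mean():.2f}")
print(f"characteristic path length: {char_path_length:.3f}")
print(f"mean local efficiency: {local_eff:.3f}")
print(f"hubs ({len(hubs)}): {hubs}")
```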

Abstract:

High-throughput prioritization of cancer-causing mutations (drivers) is a key challenge of cancer genome projects, due to the number of somatic variants detected in tumors. One important step in this task is to assess the functional impact of tumor somatic mutations. A number of computational methods have been employed for that purpose, although most were originally developed to distinguish disease-related nonsynonymous single nucleotide variants (nsSNVs) from polymorphisms. Our new method, transformed Functional Impact score for Cancer (transFIC), improves the assessment of the functional impact of tumor nsSNVs by taking into account the baseline tolerance of genes to functional variants.
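As a hedged sketch of the general idea of adjusting a raw functional impact score by a gene group's baseline tolerance, the snippet below z-transforms a tumor variant's score against the score distribution of presumed-neutral variants in the same group; the grouping variable, column names, and the plain z-score are illustrative assumptions, not the published transFIC definition.

```python
import pandas as pd

# Toy data: raw functional impact scores (e.g., from an upstream predictor) for
# presumed-neutral germline variants, grouped by a gene category that proxies
# baseline tolerance. Column names and grouping are illustrative.
neutral = pd.DataFrame({
    "gene_group": ["A", "A", "A", "B", "B", "B", "B"],
    "raw_score":  [0.2, 0.4, 0.3, 1.5, 1.8, 1.2, 1.6],
})

baseline = neutral.groupby("gene_group")["raw_score"].agg(["mean", "std"])

def transformed_score(gene_group, raw_score):
    """Z-transform a tumor variant's raw score against its gene group's neutral baseline."""
    mu, sd = baseline.loc[gene_group, "mean"], baseline.loc[gene_group, "std"]
    return (raw_score - mu) / sd

# The same raw score means more in a group whose neutral variants score low (intolerant genes).
print(transformed_score("A", 1.0), transformed_score("B", 1.0))
```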

Abstract:

The cross-recognition of peptides by cytotoxic T lymphocytes is a key element in immunology and in particular in peptide-based immunotherapy. Here we develop three-dimensional (3D) quantitative structure-activity relationships (QSARs) to predict cross-recognition by Melan-A-specific cytotoxic T lymphocytes of peptides bound to HLA A*0201 (hereafter referred to as HLA A2). First, we predict the structure of a set of self- and pathogen-derived peptides bound to HLA A2 using a previously developed ab initio structure prediction approach [Fagerberg et al., J. Mol. Biol., 521-46 (2006)]. Second, shape and electrostatic energy calculations are performed on a 3D grid to produce similarity matrices which are combined with a genetic neural network method [So et al., J. Med. Chem., 4347-59 (1997)] to generate 3D-QSAR models. The models are extensively validated using several different approaches. During the model generation, the leave-one-out cross-validated correlation coefficient (q²) is used as the fitness criterion and all obtained models are evaluated based on their q² values. Moreover, the best model obtained for a partitioned data set is evaluated by its correlation coefficient (r = 0.92 for the external test set). The physical relevance of all models is tested using a functional dependence analysis and the robustness of the models obtained for the entire data set is confirmed using y-randomization. Finally, the validated models are tested for their utility in the setting of rational peptide design: their ability to discriminate between peptides that only contain side-chain substitutions in a single secondary anchor position is evaluated. In addition, the predicted cross-recognition of the mono-substituted peptides is confirmed experimentally in chromium-release assays. These results underline the utility of 3D-QSARs in peptide mimetic design and suggest that the properties of the unbound epitope are sufficient to capture most of the information to determine the cross-recognition.
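Since model selection above hinges on the leave-one-out cross-validated q², here is a small sketch of that statistic, computed as 1 - PRESS/TSS with a generic regressor standing in for the similarity-matrix plus genetic neural network pipeline; the ridge regressor and the toy data are placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

# Leave-one-out cross-validated q^2 = 1 - PRESS / TSS.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                      # toy descriptors (e.g., similarity features)
y = X @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + rng.normal(scale=0.2, size=30)

press = 0.0
for train_idx, test_idx in LeaveOneOut().split(X):
    model = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
    press += (y[test_idx][0] - model.predict(X[test_idx])[0]) ** 2

tss = ((y - y.mean()) ** 2).sum()
q2 = 1.0 - press / tss
print(f"q^2 = {q2:.3f}")
```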

Abstract:

Inbreeding load affects not only the average fecundity of philopatric individuals but also its variance. From bet-hedging theory, this should add further dispersal pressures to those stemming from the mere avoidance of inbreeding. Pressures on both sexes are identical under monogamy or promiscuity. Under polygyny, by contrast, the variance in reproductive output decreases with dispersal rate in females but increases in males, which should induce a female-biased dispersal. To test this prediction, we performed individual-based simulations. From our results, a female-biased dispersal indeed emerges as both polygyny and inbreeding load increase. We conclude that sex-biased dispersal may be selected for as a bet-hedging strategy.
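The bet-hedging argument above rests on the fact that fitness compounds multiplicatively across generations, so reducing the variance of reproductive output raises geometric-mean fitness even when the arithmetic mean is unchanged; the toy calculation below illustrates only that point and is not the individual-based simulation used in the study.

```python
import numpy as np

# Two strategies with the same arithmetic mean fecundity (2 offspring) but different
# variance; the high-variance case mimics philopatry under inbreeding load.
rng = np.random.default_rng(1)
generations = 10_000

low_var = np.full(generations, 2.0)                   # always 2 offspring
high_var = rng.choice([1.0, 3.0], size=generations)   # 1 or 3 with equal probability

def geometric_mean(w):
    return float(np.exp(np.mean(np.log(w))))

print(f"arithmetic means: {low_var.mean():.2f} vs {high_var.mean():.2f}")            # ~2.0 vs ~2.0
print(f"geometric means:  {geometric_mean(low_var):.2f} vs {geometric_mean(high_var):.2f}")  # 2.00 vs ~1.73
```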

Abstract:

Due to advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, extended climatological analyses, the assimilation of data into numerical weather prediction models, the preparation of inputs to hydrological models, and real-time monitoring and short-term forecasting of weather.

In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression and target detection, known as support vector machines (SVM), are adapted for mapping meteorological variables in complex orography.

With the advent of high-resolution digital elevation models, the field of spatial prediction has gained new horizons. By exploiting image processing tools along with physical heuristics, a large number of terrain features accounting for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed and cold air pools are known to be correlated with particular terrain forms, e.g. convex/concave surfaces and upwind sides of mountain slopes.

Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed as measured at the weather stations or the occurrence of orographic rainfall patterns as extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial auto-correlation in the original space, which makes the use of classical geostatistics cumbersome.

The challenges explored in the thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Secondly, a multiple kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of the predictions.

The resulting maps of average wind speed find applications in renewable resource assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
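As a rough illustration of the regression side of such a framework, the sketch below fits a kernel-based support vector regression on a few geographic and terrain covariates to predict station wind speed and then predicts at unsampled grid cells; the feature set, kernel choice, and synthetic data are assumptions for illustration, not the thesis's actual setup.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for station data: coordinates plus terrain features
# (elevation, slope, curvature); the target is mean wind speed at the station.
rng = np.random.default_rng(42)
n_stations = 200
X = np.column_stack([
    rng.uniform(0, 100, n_stations),     # x coordinate (km)
    rng.uniform(0, 100, n_stations),     # y coordinate (km)
    rng.uniform(300, 3000, n_stations),  # elevation (m)
    rng.uniform(0, 40, n_stations),      # slope (degrees)
    rng.normal(0, 1, n_stations),        # curvature (convex > 0, concave < 0)
])
# Toy dependence: wind increases with elevation and convexity, plus noise.
y = 2.0 + 0.002 * X[:, 2] + 0.8 * X[:, 4] + rng.normal(0, 0.5, n_stations)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.2))
model.fit(X, y)

# Predict on a few unsampled grid cells described by the same covariates.
grid = np.array([[50, 50, 1500, 10, 0.5],
                 [20, 80, 2500, 25, -0.8]])
print(model.predict(grid))
```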