982 results for Modeling problems
Abstract:
The methodology for generating a homology model of the T1 TCR-PbCS-K(d) major histocompatibility complex (MHC) class I complex is presented. The resulting model provides a qualitative explanation of the effect of over 50 different mutations in the region of the complementarity determining region (CDR) loops of the T cell receptor (TCR), the peptide and the MHC's alpha(1)/alpha(2) helices. The peptide is modified by an azido benzoic acid photoreactive group, which is part of the epitope recognized by the TCR. The construction of the model makes use of closely related homologs (the A6 TCR-Tax-HLA A2 complex, the 2C TCR, the 14.3.d TCR Vbeta chain, the 1934.4 TCR Valpha chain, and the H-2 K(b)-ovalbumin peptide), ab initio sampling of CDR loop conformations, and experimental data to select from the set of possibilities. The model shows a complex arrangement of the CDR3alpha, CDR1beta, CDR2beta and CDR3beta loops that leads to the highly specific recognition of the photoreactive group. The protocol can be applied systematically to a series of related sequences, permitting structural-level analysis of the large TCR repertoire specific for a given peptide-MHC complex.
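As a rough illustration of the final selection step of such a protocol (using experimental data to choose among sampled CDR loop conformations), the sketch below filters candidate conformations against distance restraints. The atom names, threshold and data layout are invented for illustration and are not taken from the paper.

```python
# A generic, hedged sketch of "use experimental data to select among sampled
# loop conformations": candidate CDR loop conformations (here just dictionaries
# of atom coordinates) are kept only if they satisfy a set of upper-bound
# distance restraints, e.g. derived from mutagenesis or photo-crosslinking data.

import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def filter_conformations(conformations, restraints):
    """conformations: list of {atom_name: (x, y, z)} candidate loop models.
    restraints: list of (atom_i, atom_j, max_dist) upper-bound restraints.
    Returns the candidates compatible with every restraint."""
    kept = []
    for conf in conformations:
        if all(distance(conf[i], conf[j]) <= dmax for i, j, dmax in restraints):
            kept.append(conf)
    return kept

# Toy example: two candidate conformations, one restraint (<= 6.0 angstroms).
cand = [{"CDR3b_tip": (0.0, 0.0, 0.0), "photo_group": (4.0, 1.0, 0.5)},
        {"CDR3b_tip": (0.0, 0.0, 0.0), "photo_group": (9.0, 3.0, 2.0)}]
print(len(filter_conformations(cand, [("CDR3b_tip", "photo_group", 6.0)])))  # -> 1
```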
Abstract:
Synaptic plasticity involves a complex molecular machinery with various protein interactions, but it is not yet clear how its components give rise to the different aspects of synaptic plasticity. Here we ask whether it is possible to mathematically model synaptic plasticity by making use of known substances only. We present a model of a multistable biochemical reaction system and use it to simulate the plasticity of synaptic transmission in long-term potentiation (LTP) or long-term depression (LTD) after repeated excitation of the synapse. According to our model, we can distinguish between two phases: first, a "viscosity" phase after the first excitation, the effects of which, such as the activation of NMDA receptors and CaMKII, fade out in the absence of further excitations; second, a "plasticity" phase actuated by an identical subsequent excitation that follows after a short time interval and causes the temporarily altered concentrations of AMPA subunits in the postsynaptic membrane to be stabilized. We show that positive feedback is the crucial element in the core chemical reaction, i.e. the activation of the short-tail AMPA subunit by NEM-sensitive factor, which allows multiple stable equilibria to be generated. Three stable equilibria are related to LTP, LTD and a third, unfixed state called ACTIVE. Our mathematical approach shows that modeling synaptic multistability is possible by making use of known substances such as NMDA and AMPA receptors, NEM-sensitive factor, glutamate, CaMKII and brain-derived neurotrophic factor. Furthermore, we show that the heteromeric combination of short- and long-tail AMPA receptor subunits fulfills the function of a memory tag.
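As a minimal illustration of the core claim that positive feedback can generate multiple stable equilibria, the sketch below simulates a single-variable autocatalytic model. It is not the authors' full reaction network, and all rate constants are invented; depending on the initial condition, the trajectory settles into either a low ("LTD-like") or a high ("LTP-like") stable state.

```python
# A minimal sketch: one "active" species x with basal production, a Hill-type
# positive-feedback (autocatalytic) term, and first-order decay.  With these
# illustrative parameters the system is bistable.

def dxdt(x, basal=0.02, vmax=1.0, K=0.5, n=4, decay=1.0):
    """Rate of change of the active species x.
    basal  - constitutive activation
    vmax   - maximal rate of the positive-feedback term
    K, n   - half-saturation constant and Hill coefficient of the feedback
    decay  - first-order deactivation"""
    return basal + vmax * x**n / (K**n + x**n) - decay * x

def simulate(x0, t_end=30.0, dt=0.01):
    """Forward-Euler integration from initial condition x0."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * dxdt(x)
    return x

if __name__ == "__main__":
    # Initial conditions below the unstable threshold relax to the low state,
    # those above it are captured by the high state: two stable equilibria.
    for x0 in (0.1, 0.3, 0.6, 0.9):
        print(f"x0 = {x0:.1f}  ->  x(t_end) = {simulate(x0):.3f}")
```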
Abstract:
The right to be treated humanely when detained is universally recognized. Deficiencies in detention conditions and violence, however, subvert this right. When this occurs, proper medico-legal investigations are critical irrespective of the nature of the death. The aim of this manuscript is to identify and discuss the practical and ethical difficulties encountered in medico-legal investigations following deaths in custody. Data for this manuscript come from a larger project on Death in Custody that examined the causes of deaths in custody and the conditions under which these deaths should be investigated and prevented. A total of 33 stakeholders from forensic medicine, law, prison administration or national human rights administration were interviewed. Data obtained were analyzed qualitatively. Forensic experts are an essential part of the criminal justice process, as they offer evidence for the subsequent indictment and eventual punishment of perpetrators. Their independence when investigating a death in custody was deemed critical, and a lack thereof problematic. When experts were not independent, concerns arose in relation to conflicts of interest, biased perspectives, and low-quality forensic reports. The solutions to ensure independent forensic investigations of deaths in custody must be structural and simple: setting binding standards of practice rather than detailed procedures, and relying on preexisting national practices as opposed to encouraging new practices that are unattainable for countries with limited resources.
Abstract:
The forensic two-trace problem is a perplexing inference problem introduced by Evett (J Forensic Sci Soc 27:375-381, 1987). Different possible ways of wording the competing pair of propositions (i.e., one proposition advanced by the prosecution and one proposition advanced by the defence) led to different quantifications of the value of the evidence (Meester and Sjerps in Biometrics 59:727-732, 2003). Here, we re-examine this scenario with the aim of clarifying the interrelationships that exist between the different solutions and, in this way, producing a global view of the problem. We propose to investigate the different expressions for evaluating the value of the evidence by using a graphical approach, i.e. Bayesian networks, to model the rationale behind each of the proposed solutions and the assumptions made about the unknown parameters in this problem.
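To make the issue concrete, the sketch below reproduces the well-known numerical consequence of the choice of propositions in the two-trace setting (it is not the paper's Bayesian-network models): with a matching-profile frequency gamma, offender-level propositions lead to a likelihood ratio close to 1/(2*gamma), whereas stain-level propositions give 1/gamma.

```python
# A hedged numerical sketch of the two-trace scenario: two stains, left by two
# different offenders, and the suspect's profile matches the examined stain.
# 'gamma' is the assumed population frequency of the matching profile.

def lr_offender_level(gamma):
    """Hp: 'the suspect is one of the two offenders' vs.
    Hd: 'two unknown persons left the stains'.
    E: 'the examined stain matches the suspect's profile'.
    Under Hp the suspect left the examined stain with probability 1/2."""
    p_e_given_hp = 0.5 * 1.0 + 0.5 * gamma  # suspect's own stain, or a chance match
    p_e_given_hd = gamma                    # chance match by an unknown person
    return p_e_given_hp / p_e_given_hd      # ~ 1/(2*gamma) for small gamma

def lr_stain_level(gamma):
    """Hp: 'the suspect left the matching stain' vs.
    Hd: 'an unknown person left the matching stain'."""
    return 1.0 / gamma

if __name__ == "__main__":
    gamma = 0.01
    print(f"offender-level propositions: LR ~ {lr_offender_level(gamma):.1f}")
    print(f"stain-level propositions:    LR = {lr_stain_level(gamma):.1f}")
```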
Abstract:
In today's competitive markets, the importance of good scheduling strategies in manufacturing companies leads to the need to develop efficient methods to solve complex scheduling problems. In this paper, we study two production scheduling problems with sequence-dependent setup times. Setup times are one of the most common complications in scheduling problems, and are usually associated with cleaning operations and with changing tools and shapes in machines. The first problem considered is single-machine scheduling with release dates, sequence-dependent setup times and delivery times, where the performance measure is the maximum lateness. The second problem is a job-shop scheduling problem with sequence-dependent setup times where the objective is to minimize the makespan. We present several priority dispatching rules for both problems, followed by a study of their performance. Finally, conclusions and directions for future research are presented.
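As an illustration of the kind of priority dispatching rule studied for the first problem, the sketch below schedules a single machine with release dates, sequence-dependent setup times and delivery times by always dispatching the released job with the largest delivery time. The rule and the toy data are illustrative; they are not the specific rules evaluated in the paper.

```python
# A minimal sketch: single machine, release dates r[j], sequence-dependent
# setup times s[i][j], delivery times q[j]; objective Lmax = max_j (C[j] + q[j]).

def dispatch_ldt(r, p, q, s):
    """Greedy dispatching: whenever the machine is free, pick the released job
    with the largest delivery time.  Returns (sequence, Lmax)."""
    n = len(p)
    unscheduled = set(range(n))
    t, prev, seq, lmax = 0, None, [], float("-inf")
    while unscheduled:
        ready = [j for j in unscheduled if r[j] <= t]
        if not ready:                                # machine idles until next release
            t = min(r[j] for j in unscheduled)
            continue
        j = max(ready, key=lambda k: q[k])           # priority rule: largest q
        setup = 0 if prev is None else s[prev][j]
        t = t + setup + p[j]                         # completion time C[j]
        lmax = max(lmax, t + q[j])
        seq.append(j)
        prev = j
        unscheduled.remove(j)
    return seq, lmax

if __name__ == "__main__":
    r = [0, 2, 4]                 # release dates
    p = [5, 3, 4]                 # processing times
    q = [6, 9, 2]                 # delivery times
    s = [[0, 2, 1],               # sequence-dependent setup times s[i][j]
         [2, 0, 3],
         [1, 3, 0]]
    print(dispatch_ldt(r, p, q, s))
```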
Abstract:
In many areas of economics there is a growing interest in how expertise and preferences drive individual and group decision making under uncertainty. Increasingly, we wish to estimate such models to quantify which of these drive decision making. In this paper we propose a new channel through which we can empirically identify expertise and preference parameters by using variation in decisions over heterogeneous priors. Relative to existing estimation approaches, our "Prior-Based Identification" extends the possible environments which can be estimated, and also substantially improves the accuracy and precision of estimates in those environments which can be estimated using existing methods.
Abstract:
Objectives: The aim of this study was to evaluate the efficacy of brief motivational intervention (BMI) in reducing alcohol use and related problems among binge drinkers randomly selected from a census of 20-year-old French-speaking Swiss men, and to test the hypothesis that BMI contributes to maintaining low-risk drinking among non-bingers. Methods: Randomized controlled trial comparing the impact of BMI on weekly alcohol use, frequency of binge drinking and occurrence of alcohol-related problems. Setting: Army recruitment center. Participants: A random sample of 622 men were asked to participate; 178 either refused, missed their appointment, or had to follow military assessment procedures instead, resulting in 418 men randomized into BMI or control conditions, of whom 88.7% completed the 6-month follow-up assessment. Intervention: A single face-to-face BMI session exploring alcohol use and related problems in order to stimulate a behaviour-change perspective in a non-judgmental, empathic manner based on the principles of motivational interviewing (MI). Main outcome measures: Weekly alcohol use, binge drinking frequency and the occurrence of 12 alcohol-related consequences. Results: Among binge drinkers, we observed a 20% change in drinking induced by BMI, with a reduction in weekly drinking of 1.5 drinks in the BMI group, compared to an increase of 0.8 drinks per week in the control group (incidence rate ratio 0.8, 95% confidence interval 0.66 to 0.98, p = 0.03). BMI did not influence the frequency of binge drinking or the occurrence of the 12 possible alcohol-related consequences. However, BMI induced a reduction in the alcohol use of participants who had experienced alcohol-related consequences of their drinking over the past 12 months, i.e., hangover (-20%), missed a class (-53%), got behind at school (-54%), argued with friends (-38%), engaged in unplanned sex (-45%) or did not use protection when having sex (-64%). BMI did not reduce weekly drinking in those who experienced the six other problems screened. Among non-bingers, BMI did not contribute to maintaining low-risk drinking. Conclusions: At army conscription, BMI reduced alcohol use in binge drinkers, particularly in those who had recently experienced alcohol-related adverse consequences. No preventive effect of BMI was observed among non-bingers. BMI is an interesting preventive option for young binge drinkers, particularly in countries with mandatory army recruitment.
Abstract:
The P-median problem is a classical location model par excellence. In this paper we first examine the early origins of the problem, formulated independently by Louis Hakimi and Charles ReVelle, two of the fathers of the burgeoning multidisciplinary field of research known today as Facility Location Theory and Modelling. We then examine some of the traditional heuristic and exact methods developed to solve the problem. In the third section we analyze the impact of the model in the field. We end the paper by proposing new lines of research related to such a classical problem.
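For reference, the standard integer-programming formulation of the P-median problem (notation is generic, not taken from the paper): y_j indicates whether a facility is opened at candidate site j, x_ij assigns demand point i to site j, w_i is the demand at i, d_ij the distance between i and j, and p the number of facilities to open.

```latex
\begin{align*}
\min \quad & \sum_{i \in I} \sum_{j \in J} w_i \, d_{ij} \, x_{ij} \\
\text{s.t.} \quad & \sum_{j \in J} x_{ij} = 1 \quad \forall i \in I \\
& x_{ij} \le y_j \quad \forall i \in I,\ j \in J \\
& \sum_{j \in J} y_j = p \\
& x_{ij},\ y_j \in \{0,1\} \quad \forall i \in I,\ j \in J
\end{align*}
```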
Abstract:
The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most importantly, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), in a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor represented the structure of the WISC-IV better than the 4-factor structure and the higher-order models did. Because the direct hierarchical CHC model was more adequate, it was concluded that the general factor should be considered a breadth factor rather than a superordinate factor. Because it was possible to estimate the influence of each latent variable on the 15 subtest scores, BSEM improved the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
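In symbols, the key difference lies in the treatment of cross-loadings: classical CFA fixes them exactly to zero, whereas BSEM assigns them an informative small-variance prior (the prior variance shown below is only an example of a "small" value, not necessarily the one used in the study).

```latex
\[
\text{classical CFA:}\quad \lambda_{jk} = 0
\qquad\text{vs.}\qquad
\text{BSEM:}\quad \lambda_{jk} \sim \mathcal{N}\!\left(0,\ \sigma^{2}\right),\quad
\sigma^{2}\ \text{small (e.g.}\ 0.01\text{)},
\]
```

for every indicator j and non-target factor k, so that cross-loadings are shrunk towards, but not forced to, zero.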
Abstract:
There is a large and growing literature that studies the effects of weak enforcement institutions on economic performance. This literature has focused almost exclusively on primary markets, in which assets are issued and traded to improve the allocation of investment and consumption. The general conclusion is that weak enforcement institutions impair the workings of these markets, giving rise to various inefficiencies. But weak enforcement institutions also create incentives to develop secondary markets, in which the assets issued in primary markets are retraded. This paper shows that trading in secondary markets counteracts the effects of weak enforcement institutions and, in the absence of further frictions, restores efficiency.
Abstract:
This monthly report from the Iowa Department of Natural Resources is about the water quality management of Iowa's rivers, streams and lakes.
Abstract:
We present a polyhedral framework for establishing general structural properties of optimal solutions of stochastic scheduling problems in which multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), where optimal priorities are determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's) that takes as input the linear objective coefficients and (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.
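For orientation, the run-time behaviour of a class-ranking index policy is very simple; the paper's contribution is characterizing when such a policy is optimal and how to compute the indices, which is not reproduced here. A hedged sketch:

```python
# A minimal sketch of what a class-ranking index policy does at run time:
# at every decision epoch, serve a waiting job of the class with the highest
# priority index.  The indices themselves (e.g. from an adaptive-greedy
# computation) are taken as given input here.

def index_policy(waiting_jobs, index):
    """waiting_jobs: iterable of (job_id, job_class) pairs currently in queue.
    index: dict mapping job_class -> priority index (higher = served first).
    Returns the job_id to serve next, or None if the queue is empty."""
    waiting_jobs = list(waiting_jobs)
    if not waiting_jobs:
        return None
    job_id, _ = max(waiting_jobs, key=lambda jc: index[jc[1]])
    return job_id

# Example with three classes and hypothetical indices; class "b" has priority.
print(index_policy([(1, "a"), (2, "b"), (3, "c")],
                   {"a": 2.0, "b": 5.5, "c": 1.3}))  # -> 2
```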