990 results for "Context modeling"
Abstract:
Synaptic plasticity involves a complex molecular machinery with various protein interactions, but it is not yet clear how its components give rise to the different aspects of synaptic plasticity. Here we ask whether it is possible to model synaptic plasticity mathematically using known substances only. We present a model of a multistable biochemical reaction system and use it to simulate the plasticity of synaptic transmission in long-term potentiation (LTP) or long-term depression (LTD) after repeated excitation of the synapse. According to our model, two phases can be distinguished: first, a "viscosity" phase after the first excitation, whose effects, such as the activation of NMDA receptors and CaMKII, fade out in the absence of further excitations; second, a "plasticity" phase triggered by an identical subsequent excitation that follows after a short time interval and causes the temporarily altered concentrations of AMPA subunits in the postsynaptic membrane to be stabilized. We show that positive feedback in the core chemical reaction, i.e. the activation of the short-tail AMPA subunit by NEM-sensitive factor, is the crucial element that allows multiple stable equilibria to be generated. The three stable equilibria are related to LTP, LTD and a third, unfixed state called ACTIVE. Our mathematical approach shows that modeling synaptic multistability is possible using known substances such as NMDA and AMPA receptors, NEM-sensitive factor, glutamate, CaMKII and brain-derived neurotrophic factor. Furthermore, we show that the heteromeric combination of short- and long-tail AMPA receptor subunits fulfills the function of a memory tag.
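The abstract's central claim, that positive feedback produces multiple stable equilibria, can be illustrated with a toy one-dimensional dynamical system. This is not the authors' reaction network; the three stable fixed points merely play the roles of the LTD, ACTIVE and LTP states:

```python
# Toy sketch of multistability (not the paper's reaction system):
# a 1-D ODE dx/dt = f(x) whose f has five roots, three of them stable.
def f(x):
    # Stable fixed points at x = 0, 2, 4 (loosely "LTD", "ACTIVE", "LTP");
    # the unstable roots at x = 1 and 3 act as excitation thresholds.
    return -x * (x - 1) * (x - 2) * (x - 3) * (x - 4)

def settle(x0, dt=0.01, steps=20000):
    """Forward-Euler integration until the state settles at an equilibrium."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

# A perturbation that fails to cross a threshold decays back, while one
# that crosses it is stabilized -- loosely mirroring the viscosity /
# plasticity distinction in the abstract.
print(settle(0.8))   # below threshold 1: relaxes back to 0
print(settle(1.2))   # past threshold 1: settles at the next stable state, 2
print(settle(3.4))   # past threshold 3: settles at 4
```

The qualitative point is that with positive feedback (a non-monotone f), the state reached depends on the history of excitations, not only on the current input.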
Abstract:
OBJECTIVE: To analyze the beliefs and actions of nurses in exercising patient advocacy in a hospital context. METHOD: A quantitative, cross-sectional, exploratory and descriptive study conducted with 153 nurses from two hospitals in southern Brazil, one public and one philanthropic, applying the Protective Nursing Advocacy Scale - Brazilian version. Data were analyzed using descriptive statistics and analysis of variance. RESULTS: Nurses believe they are advocating for patients in their workplaces, and agree that they should advocate, especially when vulnerable patients need their protection. Personal values and professional skills were identified as major sources of support for the practice of advocacy. CONCLUSION: Nurses neither agree nor disagree that advocating for patients in their working environments can bring them negative consequences. It is necessary to recognize how the characteristics of public and private institutions help or hinder nurses in exercising patient advocacy.
Abstract:
Context: Heart failure (HF) is the most common complication of infective endocarditis. However, the clinical characteristics of HF in patients with infective endocarditis, the use of surgical therapy, and their associations with patient outcome are not well described. Objectives: To determine the clinical, echocardiographic, and microbiological variables associated with HF in patients with definite infective endocarditis and to examine variables independently associated with in-hospital and 1-year mortality for patients with infective endocarditis and HF, including the use and association of surgery with outcome. Design, Setting, and Patients: The International Collaboration on Endocarditis-Prospective Cohort Study, a prospective, multicenter study enrolling 4166 patients with definite native- or prosthetic-valve infective endocarditis from 61 centers in 28 countries between June 2000 and December 2006. Main Outcome Measures: In-hospital and 1-year mortality. Results: Of 4075 patients with infective endocarditis and known HF status enrolled, 1359 (33.4% [95% CI, 31.9%-34.8%]) had HF, and 906 (66.7% [95% CI, 64.2%-69.2%]) were classified as having New York Heart Association class III or IV symptom status. Within the subset with HF, 839 (61.7% [95% CI, 59.2%-64.3%]) underwent valvular surgery during the index hospitalization. In-hospital mortality was 29.7% (95% CI, 27.2%-32.1%) for the entire HF cohort, with lower mortality observed in patients undergoing valvular surgery compared with medical therapy alone (20.6% [95% CI, 17.9%-23.4%] vs 44.8% [95% CI, 40.4%-49.0%], respectively; P < .001). One-year mortality was 29.1% (95% CI, 26.0%-32.2%) in patients undergoing valvular surgery vs 58.4% (95% CI, 54.1%-62.6%) in those not undergoing surgery (P < .001).
Cox proportional hazards modeling with propensity score adjustment for surgery showed that advanced age, diabetes mellitus, health care-associated infection, causative microorganism (Staphylococcus aureus or fungi), severe HF (New York Heart Association class III or IV), stroke, and paravalvular complications were independently associated with 1-year mortality, whereas valvular surgery during the initial hospitalization was associated with lower mortality. Conclusion: In this cohort of patients with infective endocarditis complicated by HF, severity of HF was strongly associated with surgical therapy and subsequent mortality, whereas valvular surgery was associated with lower in-hospital and 1-year mortality.
Abstract:
The educational system in Spain is undergoing a reorganization. At present, high-school graduates who want to enroll at a public university must take a set of examinations, the Pruebas de Aptitud para el Acceso a la Universidad (PAAU). A "new formula" (components, weights, type of exam, ...) for university admission is being discussed. The present paper summarizes part of the research done by the author in her PhD thesis. The context for the thesis is the evaluation of large-scale, complex assessment systems. The main objectives were: to achieve a deep knowledge of the entire university admissions process in Spain, to discover the main sources of uncertainty, and to promote empirical research for the continual improvement of the entire process. Focusing on statistical models and strategies suitable for highlighting the imperfections of the system and reducing them, the paper develops, among other approaches, some applications of multilevel modeling.
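A reason multilevel models suit large-scale admissions data is partial pooling: each school's mean is shrunk toward the overall mean in proportion to its reliability, so small schools do not dominate comparisons through sampling noise. A minimal sketch with invented score data, assuming the within-school variance sigma2 and between-school variance tau2 are known (a real multilevel analysis would estimate them):

```python
# Partial pooling of group means, the core idea behind a
# random-intercept multilevel model. All numbers are hypothetical.
schools = {                     # school -> list of exam scores
    "A": [6.1, 5.8, 6.4, 6.0, 5.9, 6.2],
    "B": [7.5, 7.9],            # small school: little data
    "C": [5.0, 5.4, 5.2, 4.9, 5.1, 5.3, 5.0],
}
sigma2 = 0.25   # assumed within-school variance
tau2 = 0.10     # assumed between-school variance

scores = [s for g in schools.values() for s in g]
grand = sum(scores) / len(scores)

pooled = {}
for name, g in schools.items():
    n = len(g)
    ybar = sum(g) / n
    w = tau2 / (tau2 + sigma2 / n)       # reliability of the group mean
    shrunk = w * ybar + (1 - w) * grand  # empirical-Bayes estimate
    pooled[name] = (ybar, w, shrunk)
    print(f"{name}: raw {ybar:.2f} -> pooled {shrunk:.2f} (weight {w:.2f})")
```

The small school "B" receives the lowest weight and is pulled most strongly toward the grand mean, which is exactly the behaviour that protects system-wide comparisons from small-sample artifacts.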
Abstract:
The forensic two-trace problem is a perplexing inference problem introduced by Evett (J Forensic Sci Soc 27:375-381, 1987). Different possible ways of wording the competing pair of propositions (i.e., one proposition advanced by the prosecution and one proposition advanced by the defence) led to different quantifications of the value of the evidence (Meester and Sjerps in Biometrics 59:727-732, 2003). Here, we re-examine this scenario with the aim of clarifying the interrelationships that exist between the different solutions, and in this way, produce a global vision of the problem. We propose to investigate the different expressions for evaluating the value of the evidence by using a graphical approach, i.e. Bayesian networks, to model the rationale behind each of the proposed solutions and the assumptions made on the unknown parameters in this problem.
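The competing quantifications discussed above are all likelihood ratios, V = P(E|Hp) / P(E|Hd), and the disagreement comes from what the propositions let the unknown offenders be. A stripped-down, single-trace version shows the machinery; the profile frequency gamma is invented for illustration, and the paper's two-trace networks involve more terms:

```python
# Evidence value as a likelihood ratio, obtained by marginalizing over
# the unknown offender's possible profiles. Hypothetical numbers only.
from fractions import Fraction

gamma = Fraction(1, 100)          # assumed frequency of the matching profile
profiles = {"match": gamma, "other": 1 - gamma}

def p_evidence(offender_is_suspect):
    """P(trace matches the suspect | hypothesis)."""
    if offender_is_suspect:       # Hp: the suspect left the trace
        return Fraction(1)
    # Hd: an unknown person left it -> sum over that person's profiles
    return sum(freq for prof, freq in profiles.items() if prof == "match")

V = p_evidence(True) / p_evidence(False)
print(V)   # 1/gamma = 100
```

In the two-trace problem the same sum runs over two unknown offenders and two stains, and rewording the propositions (e.g. "the suspect left stain 1" versus "the suspect is one of the two offenders") changes which terms enter each sum; that is precisely why the published solutions differ, and what the Bayesian networks make explicit.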
Abstract:
The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most importantly, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), with a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor represented the structure of the WISC-IV better than the 4-factor structure and the higher-order models did. Because the direct hierarchical CHC model was more adequate, it was concluded that the general factor should be considered a breadth factor rather than a superordinate factor. Because we were able to estimate the influence of each latent variable on the 15 subtest scores, BSEM improved both the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
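The effect of replacing an exact-zero cross-loading with an approximate zero can be seen in a one-parameter caricature: a Gaussian prior N(0, tau^2) on a loading acts like a ridge penalty, shrinking the estimate toward (but not exactly to) zero. This conveys only the flavour of the small-variance-prior idea, not full BSEM estimation; all numbers are made up:

```python
import random

random.seed(1)
n = 200
f = [random.gauss(0, 1) for _ in range(n)]          # latent factor scores
true_loading = 0.0                                   # true cross-loading is zero
y = [true_loading * fi + random.gauss(0, 1) for fi in f]

sff = sum(fi * fi for fi in f)
sfy = sum(fi * yi for fi, yi in zip(f, y))

ml = sfy / sff                       # unpenalized (ML) loading estimate
tau2 = 0.01                          # small-variance prior N(0, 0.01)
sigma2 = 1.0                         # residual variance, assumed known
map_ = sfy / (sff + sigma2 / tau2)   # MAP estimate under the prior

print(f"ML: {ml:+.4f}   MAP with N(0, 0.01) prior: {map_:+.4f}")
```

Unlike a hard zero, the MAP estimate can still move away from zero when the data demand it, which is why inappropriate zero constraints (and the fit distortions they cause) are avoided.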
Abstract:
This monthly report from the Iowa Department of Natural Resources is about the water quality management of Iowa's rivers, streams and lakes.
Abstract:
The activation of the specific immune response against tumor cells is based on the recognition, by CD8+ Cytotoxic T Lymphocytes (CTL), of antigenic peptides (p) presented at the surface of the cell by the class I major histocompatibility complex (MHC). The ability of the so-called T-Cell Receptors (TCR) to discriminate between self and non-self peptides constitutes the most important specific control mechanism against infected cells. The TCR/pMHC interaction has received much attention in cancer therapy since the design of the adoptive transfer approach, in which T lymphocytes presenting an interesting response against tumor cells are extracted from the patient, expanded in vitro, and reinfused after immunodepletion, possibly leading to cancer regression. In the last decade, major progress has been achieved through the introduction of engineered lymphocytes. Understanding the molecular aspects of the TCR/pMHC interaction has therefore become essential to guide in vitro and in vivo studies. In 1996, the determination of the first structure of a TCR/pMHC complex by X-ray crystallography revealed the molecular basis of the interaction. Since then, molecular modeling techniques have taken advantage of crystal structures to study the conformational space of the complex and to understand the specificity of the recognition of the pMHC by the TCR. Meanwhile, experimental techniques used to determine the sequences of TCR that bind to a pMHC complex have been used intensively, leading to the collection of large repertoires of TCR sequences that are specific for a given pMHC. There is a growing need for computational approaches capable of predicting the molecular interactions that occur upon TCR/pMHC binding without relying on the time-consuming resolution of a crystal structure. This work presents new approaches to analyze the molecular principles that govern the recognition of the pMHC by the TCR and the subsequent activation of the T-cell.
We first introduce TCRep 3D, a new method to model and study the structural properties of TCR repertoires, based on homology and ab initio modeling. We discuss the methodology in detail and demonstrate that it outperforms state-of-the-art modeling methods in predicting relevant TCR conformations. Two successful applications of TCRep 3D that supported experimental studies on TCR repertoires are presented. Second, we present a rigid-body study of TCR/pMHC complexes that gives insight into the approach of the TCR toward the pMHC. We show that the binding mode of the TCR is correctly described by long-distance interactions. Finally, the last section is dedicated to a detailed analysis of an experimental hydrogen-exchange study, which suggests that some regions of the constant domain of the TCR are subject to conformational changes upon binding to the pMHC. We propose a hypothesis of the structural signaling of TCR molecules leading to the activation of the T-cell, based on the analysis of correlated motions in the TCR/pMHC structure.
Abstract:
This paper presents a two-factor (Vasicek-CIR) model of the term structure of interest rates and develops its pricing and empirical properties. We assume that default-free discount bond prices are determined by the time to maturity and two factors: the long-term interest rate and the spread. Assuming a certain process for both factors, a general bond pricing equation is derived and a closed-form expression for bond prices is obtained. Empirical evidence on the model's performance in comparison with a double-Vasicek model is presented. The main conclusion is that modeling the volatility in the long-term rate process substantially improves the fit to the observed data and reasonably improves the prediction of future movements in medium- and long-term interest rates. However, for shorter maturities the pricing errors are essentially negligible, and it is not clear which model is best.
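The closed-form structure referred to above is standard for affine models: under each factor's dynamics the discount bond price is exponential-affine, P(tau) = A(tau) exp(-B(tau) x), and with independent factors the two-factor price is the product of the one-factor prices. A sketch using the textbook Vasicek and CIR bond-price formulas; the parameter values are invented, and the paper's own specification of the long rate and the spread may differ:

```python
import math

def vasicek_price(r, tau, a, b, sigma):
    """Zero-coupon bond price under dr = a(b - r)dt + sigma dW."""
    B = (1 - math.exp(-a * tau)) / a
    lnA = (B - tau) * (a * a * b - sigma * sigma / 2) / (a * a) \
          - sigma * sigma * B * B / (4 * a)
    return math.exp(lnA) * math.exp(-B * r)

def cir_price(x, tau, k, theta, sigma):
    """Zero-coupon bond price under dx = k(theta - x)dt + sigma*sqrt(x) dW."""
    h = math.sqrt(k * k + 2 * sigma * sigma)
    denom = 2 * h + (k + h) * (math.exp(h * tau) - 1)
    A = (2 * h * math.exp((k + h) * tau / 2) / denom) ** (2 * k * theta / (sigma * sigma))
    B = 2 * (math.exp(h * tau) - 1) / denom
    return A * math.exp(-B * x)

def two_factor_price(r, x, tau):
    # Illustrative parameters only; the factors are assumed independent,
    # so the two-factor price factorizes.
    return vasicek_price(r, tau, a=0.5, b=0.05, sigma=0.01) \
         * cir_price(x, tau, k=0.3, theta=0.04, sigma=0.05)

for tau in (1, 5, 10):
    print(f"P(0,{tau:>2}) = {two_factor_price(0.03, 0.02, tau):.4f}")
```

With all rates positive, prices lie in (0, 1) and decline with maturity; the empirical question the paper addresses is which factor dynamics best track observed yields.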
Abstract:
With extremely diverse morphological and soil-climatic characteristics, the island of Santo Antão in Cape Verde presents a recognized environmental vulnerability together with a marked shortage of scientific studies that address this reality and provide a basis for an integrated understanding of the phenomena. Digital cartography and geographic information technologies have brought technological advances in the collection, storage and processing of spatial data. Several tools now available make it possible to model a multiplicity of factors, to locate and quantify phenomena, and to define the contribution of different factors to the final result. The present study, developed within the postgraduate and master's program in Geographic Information Systems of the Universidade de Trás-os-Montes e Alto Douro, aims to help reduce the information deficit concerning the island's biophysical characteristics by applying geographic information technologies and remote sensing combined with multivariate statistical analysis. In this context, thematic maps were produced and analyzed and a model for integrated data analysis was developed. Indeed, the multiplicity of spatial variables produced, among them 29 continuously varying variables capable of influencing the region's biophysical characteristics, together with possible mutually antagonistic or synergistic effects, makes interpretation from the original data relatively complex. To circumvent this problem, a systematic sampling network totaling 921 points (replicates) was used to extract the values of the 29 variables at the sampling points, followed by multivariate statistical analysis, namely principal component analysis.
Applying these techniques made it possible to simplify and interpret the original variables, normalizing them and condensing the information contained in the diverse, mutually correlated original variables into a set of orthogonal (uncorrelated) variables of decreasing importance: the principal components. A target was set of concentrating 75% of the variance of the original data in the first three principal components, and an iterative process was carried out in stages, successively eliminating the least representative variables. In the final stage, the first three PCs explained 74.54% of the variance of the original data but later proved insufficient to portray reality. The fourth PC (PC4) was therefore included, raising the explained variance to 84% and representing eight biophysical variables: altitude, drainage density, density of geological fracturing, precipitation, vegetation index, temperature, water resources, and distance to the hydrographic network. Subsequent interpolation of the first principal component (PC1), with the main variables associated with PC2, PC3 and PC4 as auxiliary variables, using geostatistical techniques in an ArcGIS environment, yielded a map representing 84% of the variation of the biophysical characteristics across the territory. Cluster analysis, validated by Student's t test, made it possible to reclassify the territory into six homogeneous biophysical units.
It is concluded that currently available geographic information technologies, besides facilitating interactive and flexible analyses in which themes and criteria can be varied, new information integrated, and improvements introduced into models built on the information available in a given context, make it possible, when combined with multivariate statistical techniques and on the basis of scientific criteria, to carry out an integrated analysis of multiple biophysical variables whose mutual correlation makes an integrated understanding of the phenomena complex.
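The dimension-reduction step described above, i.e., condensing correlated variables into orthogonal components ranked by explained variance, is ordinary principal component analysis on standardized variables. A minimal sketch with a made-up 3-variable sample (the study itself used 29 variables at 921 points):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Made-up correlated "biophysical" variables (e.g. altitude driving
# temperature and precipitation); purely illustrative.
alt = rng.normal(size=n)
temp = -0.8 * alt + 0.3 * rng.normal(size=n)
prec = 0.6 * alt + 0.5 * rng.normal(size=n)
X = np.column_stack([alt, temp, prec])

# Standardize, then diagonalize the covariance of the standardized data.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(evals)[::-1]          # components by decreasing variance
evals, evecs = evals[order], evecs[:, order]

explained = evals / evals.sum()
print("explained variance by PC:", np.round(explained, 3))
scores = Z @ evecs                        # orthogonal PC scores
```

Because the toy variables are strongly correlated, the first component absorbs most of the variance, mirroring the study's criterion of retaining enough PCs (here four of twenty-nine-variable PCs in the original) to reach a target share of explained variance.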
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how to best quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was to first develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrologic parameter of interest. In this regard a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically assuming first that the relationship between porosity and hydraulic conductivity was well-defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data.
Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than other more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results allow us to have confidence for future developments in integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
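The SA-based conditional simulation described above can be caricatured in one dimension: start from a random field, hold the conditioning (borehole) values fixed, and repeatedly propose swaps of unconditioned values, accepting those that reduce the mismatch with a target statistic. Here a lag-1 correlation stands in for the spatial structure that would come from GPR data; this is a generic sketch, not the thesis algorithm:

```python
import math
import random

random.seed(42)
n = 60
target_corr = 0.9          # assumed target lag-1 correlation (stand-in for
                           # structure inferred from geophysical data)
cond = {0: 0.30, 30: 0.35, 59: 0.25}   # "borehole" porosities, held fixed

# Initial realization: conditioning data plus random porosities elsewhere.
field = [cond.get(i, random.uniform(0.2, 0.4)) for i in range(n)]

def lag1_corr(x):
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

def objective(x):
    return (lag1_corr(x) - target_corr) ** 2

free = [i for i in range(n) if i not in cond]
T = 1e-3                                   # initial "temperature"
obj = objective(field)
start_obj = obj
for step in range(20000):
    i, j = random.sample(free, 2)          # propose swapping two free cells
    field[i], field[j] = field[j], field[i]
    new = objective(field)
    if new < obj or random.random() < math.exp(-(new - obj) / T):
        obj = new                          # accept the swap
    else:
        field[i], field[j] = field[j], field[i]   # reject: undo the swap
    T *= 0.9995                            # cool down toward greedy search

print(f"objective: {start_obj:.4f} -> {obj:.4f}")
```

Swapping preserves the field's histogram exactly, so the annealing only rearranges values to honor the target spatial statistic while the conditioning points stay untouched; real implementations use full variograms and 2-D/3-D grids.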
Abstract:
The present paper makes progress in explaining the role of capital in inflation and output dynamics. We follow Woodford (2003, Ch. 5) in assuming Calvo pricing combined with a convex capital adjustment cost at the firm level. Our main result is that capital accumulation affects inflation dynamics primarily through its impact on marginal cost. This mechanism is much simpler than the one implied by the analysis in Woodford's text. The reason, as we show, is that his analysis suffers from a conceptual mistake, which obscures the economic mechanism through which capital affects inflation and output dynamics in the Calvo model, as discussed in Woodford (2004).
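The channel the authors emphasize is the standard Calvo one: inflation depends on expected future inflation and current real marginal cost, so capital matters only insofar as it moves marginal cost. In its familiar textbook form (with \(\alpha\) the Calvo probability of not resetting the price and \(\beta\) the discount factor):

```latex
% New Keynesian Phillips curve implied by Calvo pricing:
\pi_t \;=\; \beta\,\mathbb{E}_t\,\pi_{t+1}
      \;+\; \frac{(1-\alpha)(1-\alpha\beta)}{\alpha}\,\widehat{mc}_t
```

Here \(\widehat{mc}_t\) is the log deviation of real marginal cost from its steady state; under the paper's mechanism, capital accumulation enters through \(\widehat{mc}_t\) (via the real wage and the rental rate of capital) rather than as a separate term in the inflation equation.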