894 results for Unified Model Reference
Abstract:
This thesis is mainly concerned with a model calculation for generalized parton distributions (GPDs). We calculate vector and axial GPDs for the N→N and N→Δ transitions in the framework of a light-front quark model. This requires elaborating a connection between transition amplitudes and GPDs. We provide the first quark model calculations for N→Δ GPDs. The examination of transition amplitudes leads to various model-independent consistency relations. These relations are not exactly obeyed by our model calculation, since the use of the impulse approximation in the light-front quark model leads to a violation of Poincaré covariance. We explore the impact of this covariance breaking on the GPDs and form factors determined in our model calculation and find large effects. The reference-frame dependence of our results, which originates from the breaking of Poincaré covariance, can be eliminated by introducing spurious covariants. We extend this formalism in order to obtain frame-independent results from our transition amplitudes.
Abstract:
This thesis presents a universal model of documents and deltas. The model formalizes what it means to find differences between documents and provides a single shared formalization that any algorithm can use to describe the differences found between any kind of comparable documents. The main scientific contribution of this thesis is a universal delta model that can be used to represent the changes found by an algorithm. The main parts of this model are the formal definitions of changes (the pieces of information that record that something has changed), operations (the definitions of the kind of change that happened) and deltas (coherent summaries of what has changed between two documents). The fundamental mechanism that makes the universal delta model a very expressive tool is the use of encapsulation relations between changes. In the universal delta model, changes are not always simple records of what has changed; they can also be combined into more complex changes that reflect the detection of more meaningful modifications. In addition to the main entities (i.e., changes, operations and deltas), the model also describes and defines documents and the concept of equivalence between documents. As a corollary to the model, there is also an extensible catalog of possible operations that algorithms can detect, used to create a common library of operations, and a UML serialization of the model, useful as a reference when implementing APIs that deal with deltas. The universal delta model presented in this thesis acts as the formal groundwork upon which algorithms can be based and libraries can be implemented. It removes the need to create a new delta model and terminology whenever a new algorithm is devised. It also alleviates the problems that toolmakers face when adapting their software to new diff algorithms.
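The three main entities and the encapsulation relation between changes can be sketched as a minimal data model. The class and field names below are illustrative, not the thesis's actual UML serialization:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:
    """The kind of change that happened (e.g. 'insert', 'delete', 'move')."""
    name: str

@dataclass
class Change:
    """A record that something changed; it may encapsulate finer-grained
    changes, reflecting the detection of a more meaningful modification."""
    operation: Operation
    target: str  # locator of the affected document fragment (illustrative)
    children: List["Change"] = field(default_factory=list)  # encapsulated changes

@dataclass
class Delta:
    """A coherent summary of what has changed between two documents."""
    source_id: str
    target_id: str
    changes: List[Change] = field(default_factory=list)

# Two low-level changes encapsulated into one more meaningful 'move'
ins = Change(Operation("insert"), "/sec[2]/p[1]")
dele = Change(Operation("delete"), "/sec[1]/p[3]")
move = Change(Operation("move"), "/sec[1]/p[3]", children=[ins, dele])
delta = Delta("doc-v1", "doc-v2", [move])
```

The encapsulation list is what lets an algorithm report either the raw insert/delete pair or the combined move, depending on how much semantics it can detect.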
Abstract:
The transmembrane potential difference Δφm is directly linked to the catalytic activity of cytochrome c oxidase (CcO). CcO is the terminal enzyme (complex IV) of the mitochondrial respiratory chain. The enzyme catalyzes the reduction of O2 to 2 H2O, with electrons transferred from the natural substrate cytochrome c to CcO. Electron transfer within CcO is coupled to proton translocation across the membrane. As a result, a difference in proton concentration builds up across the inner mitochondrial membrane, and in addition a potential difference Δφm is generated.

The transmembrane potential Δφm can be measured by fluorescence spectroscopy using a potential-sensitive dye. To derive quantitative conclusions from such measurements, calibration measurements on the membrane system must first be performed.

This work presents calibration measurements of Δφm in a model membrane with incorporated CcO. For this purpose, a biomimetic membrane system, the protein-tethered bilayer lipid membrane (ptBLM), was developed on a transparent, conductive substrate (indium tin oxide, ITO). ITO permits the simultaneous use of electrochemical methods and fluorescence or optical waveguide spectroscopy. The Δφm in the ptBLM was induced by externally applied, defined electric potentials.

A thin hydrogel layer was used as a soft cushion for the ptBLM on ITO. The polymer network contains NTA functional groups for the oriented immobilization of CcO on the hydrogel surface via the Ni-NTA technique. The ptBLM was formed after immobilization of CcO by in-situ dialysis. Electrochemical impedance measurements showed a high electrical resistance (≈ 1 MΩ) of the ptBLM. Optical waveguide spectra (SPR/OWS) showed an increased anisotropy of the system after formation of the lipid bilayer. Cyclic voltammetry of reduced cytochrome c confirmed the activity of CcO in the hydrogel-supported ptBLM. The membrane potential in the hydrogel-supported ptBLM, induced by defined electric potentials, was measured by ratiometric fluorescence spectroscopy. Reference measurements with a simple tethered bilayer lipid membrane (tBLM) yielded a conversion factor between the ratiometric parameter Rn and the membrane potential (0.05 / 100 mV). The detection limit for the membrane potential in a hydrogel-supported ptBLM was ≈ 80 mV. These data provide a good basis for future investigations of the self-generated Δφm of CcO in a ptBLM.
Abstract:
The research hypothesis of the thesis is that “open participation in the co-creation of services and environments makes life easier for vulnerable groups”, assuming that participatory and emancipatory approaches are processes of possible actions and changes aimed at facilitating people’s lives. The adoption of these approaches is put forward as the common denominator of socially innovative practices that, by supporting inclusive processes, allow a shift from a medical model to a civil and human rights approach to disability. The theoretical basis of this assumption finds support in many principles of Inclusive Education, and the main focus of the research hypothesis is on participation and emancipation as approaches aimed at facing emerging and existing problems related to inclusion. The framework of reference for the research is represented by the perspectives adopted by several international documents concerning policies and interventions to promote and support the leadership and participation of vulnerable groups. In the first part, an in-depth analysis of the main academic publications on the central themes of the thesis is carried out. After investigating the framework of reference, the analysis focuses on the main tools of participatory and emancipatory approaches, which connect with the concepts of active citizenship and social innovation. In the second part, two case studies concerning participatory and emancipatory approaches in the areas of concern are presented and analyzed as examples of the improvement of inclusion through the involvement and participation of persons with disability. The research has been developed using a holistic and interdisciplinary approach, aimed at providing a knowledge base that fosters a shift from a situation of passivity and care towards a new scenario based on the person’s commitment to the elaboration of his or her own project of life.
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region of Italy is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Rain gauge direct measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting the rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing the COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. The communication and evaluation of probabilistic, point and interval predictions is investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively).
Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
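The two-part semicontinuous structure described above — a probit model for rain occurrence joined to a Gamma model for positive amounts, both linked to radar on the log scale — can be sketched as a simulator. All coefficients below are illustrative placeholders, not the fitted posterior values, and the spatial random effects are omitted:

```python
import math
import random

def simulate_hourly_rain(log_radar, beta0_p=-0.5, beta1_p=0.8,
                         beta0_a=0.1, beta1_a=0.9, shape=2.0, rng=None):
    """Draw one hourly rainfall value from a two-part semicontinuous model:
    probit link for rain probability, Gamma distribution for positive amounts
    (mean linked to radar on the log scale). Coefficients are illustrative."""
    rng = rng or random.Random(42)
    # Part 1 (occurrence): P(rain) = Phi(beta0_p + beta1_p * log_radar)
    eta = beta0_p + beta1_p * log_radar
    p_rain = 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))
    if rng.random() >= p_rain:
        return 0.0  # the zero part of the semicontinuous distribution
    # Part 2 (amount): Gamma with mean exp(beta0_a + beta1_a * log_radar)
    mean = math.exp(beta0_a + beta1_a * log_radar)
    return rng.gammavariate(shape, mean / shape)
```

Joining the two parts this way reproduces the abundance of exact zeros and the right-skewed positive amounts described in the abstract.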
Abstract:
This study aims at a comprehensive understanding of aerosol-cloud interactions and their effects on cloud properties and climate using the chemistry-climate model EMAC. In this study, CCN activation is regarded as the dominant driver in aerosol-cloud feedback loops in warm clouds. The CCN activation is calculated prognostically using two different cloud droplet nucleation (CDN) parameterizations, the STN and HYB schemes. Both CDN schemes account for size and chemistry effects on droplet formation based on the same aerosol properties. The calculation of the solute effect (hygroscopicity) is the main difference between the CDN schemes. The kappa-method is for the first time incorporated into the Abdul-Razzak and Ghan (ARG) activation scheme to calculate the hygroscopicity and critical supersaturation of aerosols (HYB), and the performance of the modified scheme is compared with the osmotic coefficient model (STN), which is the standard in the ARG scheme. Reference simulations (REF) with a prescribed cloud droplet number concentration have also been carried out in order to understand the effects of aerosol-cloud feedbacks. In addition, since the calculated cloud coverage is an important determinant of cloud radiative effects and influences the nucleation process, two cloud cover parameterizations (a relative humidity threshold, RH-CLC, and a statistical cloud cover scheme, ST-CLC) have been examined together with the CDN schemes, and their effects on the simulated cloud properties and relevant climate parameters have been investigated. The distinct cloud droplet spectra show strong sensitivity to aerosol composition effects on cloud droplet formation across all particle sizes, especially for the Aitken mode.
As Aitken particles are the major component of the total aerosol number concentration and CCN, and are most sensitive to the effect of aerosol chemical composition (solute effect) on droplet formation, the activation of Aitken particles contributes strongly to total cloud droplet formation, thereby yielding different cloud droplet spectra. These different spectra influence cloud structure, cloud properties, and climate, and show regionally varying sensitivity to meteorological and geographical conditions as well as to spatiotemporal aerosol properties (i.e., particle size, number, and composition). The changes in response to the different CDN schemes are more pronounced at lower altitudes than at higher altitudes. Among regions, the subarctic regions show the strongest changes, as the lower surface temperature amplifies the effects of the activated aerosols; in contrast, the Sahara desert, an extremely dry area, is less influenced by changes in CCN number concentration. The aerosol-cloud coupling effects have been examined by comparing the prognostic CDN simulations (STN, HYB) with the reference simulation (REF). The most pronounced effects are found in the cloud droplet number concentration, the cloud water distribution, and the cloud radiative effect. The aerosol-cloud coupling generally increases the cloud droplet number concentration; this decreases the efficiency of the formation of weak stratiform precipitation and increases the cloud water loading. These large-scale changes lead to larger cloud cover and longer cloud lifetime, and contribute to high optical thickness and strong cloud cooling effects. This cools the Earth's surface, increases atmospheric stability, and reduces convective activity. These changes corresponding to aerosol-cloud feedbacks are also simulated differently depending on the cloud cover scheme.
The ST-CLC scheme is more sensitive to aerosol-cloud coupling, since it uses a tighter linkage between local dynamics and cloud water distributions in the cloud formation process than the RH-CLC scheme. For the calculated total cloud cover, the RH-CLC scheme simulates a pattern more similar to observations than the ST-CLC scheme does, but the overall properties (e.g., total cloud cover, cloud water content) in the RH simulations are overestimated, particularly over the ocean. This originates mainly from the difference in the simulated skewness in each scheme: the RH simulations calculate negatively skewed distributions of cloud cover and the associated cloud water, similar to the observations, while the ST simulations yield positively skewed distributions, resulting in lower mean values than the RH-CLC scheme produces. The underestimation of total cloud cover over the ocean, particularly over the intertropical convergence zone (ITCZ), relates to a systematic deficiency in the prognostic calculation of skewness in the current set-up of the ST-CLC scheme.

Overall, the current EMAC model set-ups perform better over continents for all combinations of the cloud droplet nucleation and cloud cover schemes. To account for aerosol-cloud feedbacks, the HYB scheme is a better method than the STN scheme for predicting cloud and climate parameters with both cloud cover schemes. The RH-CLC scheme offers a better simulation of total cloud cover and the relevant parameters with the HYB scheme and single-moment microphysics (REF) than the ST-CLC scheme does, but it is not very sensitive to aerosol-cloud interactions.
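The kappa-method mentioned above maps a single hygroscopicity parameter to a critical supersaturation. A minimal sketch of the standard single-parameter kappa-Köhler approximation follows; the constants are for water near 298 K and the values are illustrative, not the EMAC/ARG implementation:

```python
import math

def critical_supersaturation(d_dry, kappa, T=298.15):
    """Approximate critical supersaturation (as a fraction) for a dry
    particle of diameter d_dry [m] with hygroscopicity parameter kappa,
    using the single-parameter kappa-Koehler approximation."""
    sigma_w = 0.072   # surface tension of water [J/m^2]
    M_w = 0.018       # molar mass of water [kg/mol]
    rho_w = 1000.0    # density of water [kg/m^3]
    R = 8.314         # universal gas constant [J/(mol K)]
    A = 4.0 * sigma_w * M_w / (R * T * rho_w)  # Kelvin (curvature) term [m]
    return math.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry**3))

# An Aitken-mode particle of 50 nm with kappa = 0.6 activates
# at roughly 0.4-0.5 % supersaturation
sc = critical_supersaturation(50e-9, 0.6)
```

Larger or more hygroscopic particles activate at lower supersaturation, which is why the Aitken mode is the most sensitive to the composition (solute) effect.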
Abstract:
This paper presents a kernel density correlation based non-rigid point set matching method and shows its application in statistical model based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated X-ray radiograph. In this method, both the reference point set and the floating point set are first represented using kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field that moves the floating point set onto the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for robust point set matching. Incorporating this non-rigid point set matching method into a statistical model based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the X-ray radiograph by an edge detector. Our experiment, conducted on datasets of two patients and six cadavers, demonstrates a mean reconstruction error of 1.9 mm.
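The correlation measure between two kernel density estimates can be illustrated with a minimal Gaussian-kernel version: it is large when the floating points sit on top of the reference points and decays as they drift apart. This is a sketch of the general idea, not the paper's exact measure, regularization, or optimizer:

```python
import math

def kernel_correlation(ref, flt, sigma=1.0):
    """Correlation between the Gaussian kernel density estimates of two
    point sets, computed as the normalized sum of pairwise Gaussian
    kernels. Higher values indicate better alignment."""
    total = 0.0
    for x in ref:
        for y in flt:
            d2 = sum((a - b) ** 2 for a, b in zip(x, y))
            total += math.exp(-d2 / (2.0 * sigma * sigma))
    return total / (len(ref) * len(flt))
```

In the full method, this measure would be maximized over a displacement field applied to the floating set, subject to the deformation and smoothness energy penalties.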
Abstract:
Discusses the cooperative effort between librarians and science faculty at Bucknell University in developing an effective library use education course for incoming undergraduate science and engineering students. Describes course structure and activities, and includes a library instruction bibliography. (five references) (EA)
Abstract:
Using the mouse pilocarpine model of temporal lobe epilepsy (TLE), we showed that when CA3 pyramidal neurons in the caudal 80% of the dorsal hippocampus had almost completely disappeared, the efferent pathway of CA3 was rarely detectable. We iontophoretically injected the anterograde tracer Phaseolus vulgaris leucoagglutinin (PHA-L) into the gliotic CA3, the medial septum and the nucleus of the diagonal band of Broca, the median raphe, and the lateral supramammillary nuclei, or the retrograde tracer cholera toxin B subunit (CTB) into the gliotic CA3 area of the hippocampus. In the afferent pathway, the number of neurons projecting to CA3 from the medial septum and the nucleus of the diagonal band of Broca, the median raphe, and the lateral supramammillary nuclei increased significantly. In the hippocampus, where CA3 pyramidal neurons were partially lost, calbindin-, calretinin- and parvalbumin-immunopositive back-projection neurons from the CA1-CA3 area were observed. Sprouting of Schaffer collaterals, with an increased number of large boutons on both sides of the CA1 area, particularly in the stratum pyramidale, was found. When CA3 pyramidal neurons in the caudal 80% of the dorsal hippocampus have almost completely disappeared, surviving CA3 neurons in the rostral 20% of the dorsal hippocampus may play an important role in transmitting hyperactivity of granule cells to surviving CA1 neurons or to the dorsal part of the lateral septum. We concluded that reorganization of the CA3 area with its downstream or upstream nuclei may be involved in the occurrence of epilepsy.
Abstract:
Many seemingly disparate approaches for marginal modeling have been developed in recent years. We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to those of the copula-based models proposed herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data. Moreover, we propose a nomenclature and a set of model relationships that substantially elucidate the complex area of marginalized models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate the concepts.
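The latent threshold construction underlying these copula models can be illustrated with a Gaussian copula for a pair of correlated binary outcomes: latent bivariate normal variables are thresholded at the marginal probit cutoffs, so each margin keeps its specified probability while the latent correlation induces association. The marginal probabilities and correlation below are arbitrary example values:

```python
import math
import random
from statistics import NormalDist

def correlated_binary_pair(p1, p2, rho, rng):
    """Draw correlated binary outcomes (y1, y2) from a Gaussian copula:
    a latent bivariate standard normal pair with correlation rho is
    thresholded at the probit cutoffs Phi^{-1}(p1) and Phi^{-1}(p2)."""
    nd = NormalDist()
    g1, g2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    z1 = g1
    z2 = rho * g1 + math.sqrt(1.0 - rho * rho) * g2  # corr(z1, z2) = rho
    y1 = 1 if z1 < nd.inv_cdf(p1) else 0  # P(y1 = 1) = p1 marginally
    y2 = 1 if z2 < nd.inv_cdf(p2) else 0  # P(y2 = 1) = p2 marginally
    return y1, y2
```

The marginal fixed effects stay interpretable (each margin is a probit model) while the copula carries the dependence, which is the structural point of the models discussed in the abstract.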
Abstract:
Drug-induced respiratory depression is a common side effect of the agents used in anesthesia practice to provide analgesia and sedation. Depression of the ventilatory drive in the spontaneously breathing patient can lead to severe cardiorespiratory events and is considered a primary cause of morbidity. Reliable predictions of respiratory inhibition in the clinical setting would therefore provide a valuable means of improving the safety of drug delivery. Although multiple studies have investigated the regulation of breathing in man, both in the presence and in the absence of ventilatory depressant drugs, a unified description of respiratory pharmacodynamics is not available. This study proposes a mathematical model of human metabolism and cardiorespiratory regulation that integrates several isolated physiological and pharmacological aspects of acute drug-induced ventilatory depression into a single theoretical framework. The description of respiratory regulation has a parsimonious yet comprehensive structure with substantial predictive capability. Simulations of the synergistic interaction of the hypercarbic and hypoxic respiratory drives and of the global effect of drugs on the control of breathing are in good agreement with published experimental data. Besides providing clinically relevant predictions of respiratory depression, the model can also serve as a test bed to investigate issues of drug tolerability and dose finding/control under non-steady-state conditions.
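The kind of structure described — a multiplicative hypercapnic-hypoxic interaction scaled by a drug effect — can be illustrated with a generic sketch. This is not the thesis's actual model: the ventilatory drive below follows the classical multiplicative CO2/O2 form, the drug effect is a standard sigmoid Emax term, and every parameter value is an illustrative assumption:

```python
def ventilation(p_co2, p_o2, c_drug, c50=0.1, gamma=1.5,
                gain=2.0, b=36.0, k_o2=28.0, p_o2_min=32.0):
    """Illustrative minute ventilation [L/min]: a hypercapnic drive
    (linear above threshold b [mmHg]) multiplied by a hypoxic gain term,
    then depressed by a sigmoid Emax drug effect. All parameters are
    placeholder values, not fitted constants."""
    # Multiplicative hypercapnic-hypoxic interaction (synergy: hypoxia
    # steepens the CO2 response)
    drive = gain * max(p_co2 - b, 0.0) * (1.0 + k_o2 / (p_o2 - p_o2_min))
    # Sigmoid Emax depression by the drug at effect-site concentration c_drug
    depression = c_drug**gamma / (c50**gamma + c_drug**gamma)
    return drive * (1.0 - depression)
```

Even this toy form reproduces the two qualitative behaviors named in the abstract: hypoxia amplifies the hypercarbic response, and increasing drug concentration monotonically suppresses the total drive.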
Abstract:
Comments on an article by Kashima et al. (see record 2007-10111-001). In their target article, Kashima and colleagues try to show how a connectionist model conceptualization of the self is best suited to capture the self's temporal and socio-culturally contextualized nature. They propose a new model and, to support it, conduct computer simulations of psychological phenomena whose importance for the self has long been clear, even if not formally modeled, such as imitation and the learning of sequence and narrative. As explicated when we advocated connectionist models as a metaphor for the self in Mischel and Morf (2003), we fully endorse the utility of such a metaphor, as these models have some of the processing characteristics necessary for capturing key aspects and functions of a dynamic cognitive-affective self-system. As elaborated in that chapter, we see as their principal strength that connectionist models can take account of multiple simultaneous processes without invoking a single central control. All outputs reflect a distributed pattern of activation across a large number of simple processing units, the nature of which depends on (and changes with) the connection weights between the links and the satisfaction of mutual constraints across these links (Rumelhart & McClelland, 1986). This allows a simple account of why certain input features will at times predominate, while others take over on other occasions. (PsycINFO Database Record (c) 2008 APA, all rights reserved)
Abstract:
This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities following a stochastic approach, with intensities generated by a Monte Carlo method from a lognormal distribution whose parameters have been predetermined from engine tests and depend on spark timing, engine speed and load. The lognormal distribution has been shown in previous studies to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of the high and low knock intensity levels, which characterize the knock and reference levels respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of the knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method for detecting engine knock under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in response to a step increase in spark advance resulting in the onset of engine knock. The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
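The KSS's Monte Carlo generation of lognormally distributed knock intensities can be sketched as follows. The distribution parameters here are illustrative; in the report they are predetermined from engine tests as functions of spark timing, engine speed and load:

```python
import random

def knock_intensities(n, mu, sigma, rng=None):
    """Generate n cycle-to-cycle knock intensities by Monte Carlo sampling
    from a lognormal distribution with log-mean mu and log-std sigma,
    mimicking the KSS plant model (parameter values illustrative)."""
    rng = rng or random.Random(1)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

# 1000 simulated cycles at one fixed operating point; the median
# intensity of a lognormal sample sits near exp(mu)
samples = knock_intensities(1000, mu=0.0, sigma=0.5)
```

A downstream KDM-style estimator would then fit the distribution of such samples to separate the high (knock) and low (reference) intensity levels.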
Abstract:
BACKGROUND: Reperfusion injury is the leading cause of early graft dysfunction after lung transplantation. Activation of neutrophilic granulocytes, with the generation of free oxygen radicals, appears to play a key role in this process. The efficacy of ascorbic acid as an antioxidant in the amelioration of reperfusion injury after lung transplantation has not yet been studied. METHODS: An in situ autotransplantation model in sheep is presented. The left lung was flushed (Euro-Collins solution) and reperfused after 2 hours of cold storage, with the right hilus clamped (group R [reference], n = 6). Group AA animals (n = 6) were treated with 1 g/kg ascorbic acid before reperfusion. Controls (group C, n = 6) underwent hilar preparation and instrumentation only. RESULTS: In group R, the arterio-alveolar oxygen difference (AaDO2) and pulmonary vascular resistance (PVR) were significantly elevated after reperfusion. Five of 6 animals developed frank alveolar edema. All biochemical parameters showed significant activation of polymorphonuclear neutrophils (PMNs). In group AA, AaDO2, PVR, work of breathing, and the level of PMN activation were significantly lower. CONCLUSIONS: The experimental model reliably reproduces all aspects of lung reperfusion injury. Ascorbic acid was able to attenuate reperfusion injury in this experimental setup.
Abstract:
BACKGROUND: Wheezing disorders in childhood vary widely in clinical presentation and disease course. In recent years, several ways to classify wheezing children into different disease phenotypes have been proposed and are increasingly used for clinical guidance, but validation of these hypothetical entities is difficult. METHODOLOGY/PRINCIPAL FINDINGS: The aim of this study was to develop a testable disease model which reflects the full spectrum of wheezing illness in preschool children. We performed a qualitative study among a panel of 7 experienced clinicians from 4 European countries working in primary, secondary and tertiary paediatric care. In a series of questionnaire surveys and structured discussions, we found a general consensus that preschool wheezing disorders consist of several phenotypes, with great heterogeneity of specific disease concepts between clinicians. Initially, 24 disease entities were described among the 7 physicians. In structured discussions, these could be narrowed down to three entities linked to proposed mechanisms: a) allergic wheeze, b) non-allergic wheeze due to structural airway narrowing and c) non-allergic wheeze due to an increased immune response to viral infections. This disease model will serve to create an artificial dataset that allows the validation of data-driven multidimensional methods, such as cluster analysis, which have been proposed for the identification of wheezing phenotypes in children. CONCLUSIONS/SIGNIFICANCE: While there appears to be wide agreement among clinicians that wheezing disorders consist of several diseases, there is less agreement regarding their number and nature. A great diversity of disease concepts exists, but a unified phenotype classification reflecting the underlying disease mechanisms is lacking. We propose a disease model which may help guide future research so that proposed mechanisms are measured at the right time and their role in disease heterogeneity can be studied.