971 results for Generalized JKR Model
Abstract:
Background: The generalized odds ratio (GOR) was recently suggested as a genetic model-free measure for association studies. However, its properties have not been extensively investigated. We used Monte Carlo simulations to investigate type-I error rates, power, and bias in both effect-size and between-study variance estimates of meta-analyses using the GOR as a summary effect, and compared these results with those obtained by the usual approaches to model specification. We further applied the GOR in a real meta-analysis of three genome-wide association studies in Alzheimer's disease. Findings: For bi-allelic polymorphisms, the GOR performs virtually identically to a standard multiplicative model of analysis (e.g. the per-allele odds ratio) for variants acting multiplicatively, slightly increases the power to detect variants with a dominant mode of action, and reduces the probability of detecting recessive variants. Although there were differences between the GOR and the usual approaches in terms of bias and type-I error rates, both the simulation-based and real-data results provided little indication that these differences will be substantial in practice for meta-analyses involving bi-allelic polymorphisms. However, the GOR may be slightly more powerful for the synthesis of data from tri-allelic variants, particularly when susceptibility alleles are less common in the populations (≤10%). This gain in power may depend on knowledge of the direction of the effects. Conclusions: For the synthesis of data from bi-allelic variants, the GOR may be regarded as a multiplicative-like model of analysis. The GOR may be slightly more powerful in the tri-allelic case, particularly when susceptibility alleles are less common in the populations.
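As a hedged illustration of the quantities being compared (not code from the paper; the genotype counts are hypothetical), a per-allele odds ratio and a rank-based generalized odds ratio can both be computed from a 2×3 genotype table:

```python
# Hypothetical genotype counts, indexed by copies of the risk allele (0, 1, 2).
cases = [240, 180, 60]
controls = [300, 150, 30]

def per_allele_or(cases, controls):
    """Multiplicative (per-allele) odds ratio from 2x3 genotype counts."""
    a_risk = cases[1] + 2 * cases[2]        # risk alleles carried by cases
    a_ref = 2 * cases[0] + cases[1]         # reference alleles carried by cases
    b_risk = controls[1] + 2 * controls[2]
    b_ref = 2 * controls[0] + controls[1]
    return (a_risk * b_ref) / (a_ref * b_risk)

def generalized_or(cases, controls):
    """Rank-based generalized odds ratio: the odds that a randomly drawn
    case carries more risk alleles than a randomly drawn control."""
    higher = sum(cases[i] * controls[j]
                 for i in range(3) for j in range(3) if i > j)
    lower = sum(cases[i] * controls[j]
                for i in range(3) for j in range(3) if i < j)
    return higher / lower

print(per_allele_or(cases, controls))
print(generalized_or(cases, controls))
```

For a variant acting multiplicatively the two measures give similar answers, which is the sense in which the GOR behaves like a multiplicative model.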
Abstract:
Aspects related to users' cooperative work are not considered in the traditional approach of software engineering, since the user is viewed independently of his or her workplace environment or group, with an individual model generalized to the study of the collective behavior of all users. This work proposes a software requirements process to address issues involving cooperative work in information systems in which coordination of the users' actions is distributed and communication among them occurs indirectly, through the data entered while using the software. To achieve this goal, the research draws on ergonomics, the 3C cooperation model, awareness, and software engineering concepts. Action research is used as the research methodology, applied in three cycles during the development of a corporate workflow system in a technological research company. This article discusses the third cycle, which corresponds to the refinement of the cooperative work requirements with the software in actual use in the workplace, where the introduction of a computer system changes the users' workplace from face-to-face interaction to interaction mediated by the software. The results showed that a higher degree of users' awareness of their own activities and of the other system users contributes to a decrease in errors and in inappropriate use of the system.
Abstract:
The first part of the thesis concerns the study of inflation in the context of a theory of gravity called "Induced Gravity", in which the gravitational coupling varies in time according to the dynamics of the very same scalar field (the "inflaton") driving inflation, and settles to the value measured today at the end of inflation. Through the analytical and numerical analysis of scalar and tensor cosmological perturbations, we show that the model leads to consistent predictions for a broad variety of symmetry-breaking inflaton potentials, once a dimensionless parameter entering the action is properly constrained. We also discuss the average expansion of the Universe after inflation (when the inflaton undergoes coherent oscillations about the minimum of its potential) and determine the effective equation of state. Finally, we analyze the resonant and perturbative decay of the inflaton during (p)reheating. The second part is devoted to the study of a proposal for a quantum theory of gravity dubbed "Horava-Lifshitz (HL) Gravity", which relies on power-counting renormalizability while explicitly breaking Lorentz invariance. We test two variants of the theory ("projectable" and "non-projectable") on a cosmological background and with the inclusion of scalar-field matter. By inspecting the quadratic action for the linear scalar cosmological perturbations, we determine the actual number of propagating degrees of freedom and find that the theory, being endowed with fewer symmetries than General Relativity, admits an extra gravitational degree of freedom which is potentially unstable. More specifically, we conclude that in projectable HL Gravity the extra mode is either a ghost or a tachyon, whereas in non-projectable HL Gravity the extra mode can be made well behaved for suitable choices of two free dimensionless parameters and, moreover, turns out to decouple from the low-energy physics.
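For orientation only (the notation is our assumption, not taken from the thesis), induced-gravity models are typically built on an action in which a σ² factor replaces the Planck mass:

```latex
S \;=\; \int d^{4}x \,\sqrt{-g}\,\Big[\tfrac{1}{2}\,\gamma\,\sigma^{2} R
  \;-\; \tfrac{1}{2}\, g^{\mu\nu}\partial_{\mu}\sigma\,\partial_{\nu}\sigma
  \;-\; V(\sigma)\Big]
```

Here σ is the inflaton and γ plays the role of the dimensionless parameter to be constrained; recovering the gravitational coupling measured today requires γσ² at the minimum of V to match the squared reduced Planck mass.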
Abstract:
Since the development of quantum mechanics it has been natural to analyze the connection between classical and quantum mechanical descriptions of physical systems. In particular, one expects that, in some sense, when quantum mechanical effects become negligible the system will behave as dictated by classical mechanics. One famous relation between classical and quantum theory is due to Ehrenfest. This result was later developed and put on firm mathematical foundations by Hepp, who proved that matrix elements of bounded functions of quantum observables between suitable coherent states (which depend on Planck's constant h) converge to classical values evolving according to the expected classical equations as h goes to zero. His results were later generalized by Ginibre and Velo to bosonic systems with infinitely many degrees of freedom and to scattering theory. In this thesis we study the classical limit of the Nelson model, which describes nonrelativistic particles, whose evolution is dictated by the Schrödinger equation, interacting via a Yukawa-type potential with a scalar relativistic field, whose evolution is dictated by the Klein-Gordon equation. The classical limit is a mean-field and weak-coupling limit. We prove that the transition amplitude of a creation or annihilation operator between suitable coherent states converges, in the classical limit, to the solution of the system of differential equations that describes the classical evolution of the theory. The quantum evolution operator converges to the evolution operator of the fluctuations around the classical solution. Transition amplitudes of normal-ordered products of creation and annihilation operators between coherent states converge to suitable products of the classical solutions. Transition amplitudes of normal-ordered products of creation and annihilation operators between fixed-particle states converge to an average of products of classical solutions, corresponding to different initial conditions.
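Schematically (our notation; ultraviolet form factors and coupling constants are suppressed, so this is a sketch rather than the thesis's exact system), the classical limit couples a Schrödinger field u to a Klein-Gordon field A through the Yukawa-type interaction:

```latex
i\,\partial_t u = -\tfrac{1}{2}\,\Delta u + A\,u , \qquad
\left(\partial_t^{2} - \Delta + m^{2}\right) A = -\,|u|^{2}
```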
Abstract:
This thesis is mainly concerned with a model calculation for generalized parton distributions (GPDs). We calculate vector and axial GPDs for the N → N and N → Δ transitions in the framework of a light-front quark model. This requires the elaboration of a connection between transition amplitudes and GPDs. We provide the first quark-model calculations for N → Δ GPDs. The examination of transition amplitudes leads to various model-independent consistency relations. These relations are not exactly obeyed by our model calculation, since the use of the impulse approximation in the light-front quark model leads to a violation of Poincaré covariance. We explore the impact of this covariance breaking on the GPDs and form factors determined in our model calculation and find large effects. The reference-frame dependence of our results, which originates from the breaking of Poincaré covariance, can be eliminated by introducing spurious covariants. We extend this formalism in order to obtain frame-independent results from our transition amplitudes.
Abstract:
In this work, the Generalized Beam Theory (GBT) is used as the main tool to analyze the mechanics of thin-walled beams. After an introduction to the subject and a quick review of some of the best-known approaches to describing the behaviour of thin-walled beams, a novel formulation of the GBT is presented. This formulation contains the classic shear-deformable GBT available in the literature and adds a description of cross-section warping that varies along the wall thickness as well as along the wall midline. Shear deformation is introduced in such a way that the classical shear strain components of Timoshenko beam theory are recovered exactly. In accordance with the proposed kinematics, a revised form of the cross-section analysis procedure is devised, based on a unique modal decomposition. Next, a procedure is presented for the a posteriori reconstruction of all three-dimensional stress components in the finite element analysis of thin-walled beams using the GBT. The reconstruction is simple and based on the use of the three-dimensional equilibrium equations and of the RCP procedure. With the stress reconstruction procedure in place, several open issues concerning the constitutive relations in the GBT are examined. Specifically, a constitutive law based on mirroring the kinematic constraints of the GBT model into a specific stress-field assumption is proposed. It is shown that this method is equally valid for isotropic and orthotropic beams and coincides with the conventional GBT approach available in the literature. An analogous procedure is then presented for the case of laminated beams. Lastly, to improve the inherently poor description of shear deformability in the GBT, the introduction of shear correction factors is proposed. Throughout this work, numerous examples are provided to validate the proposed contributions to the field.
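For reference (standard notation, assumed rather than quoted from the thesis), the Timoshenko kinematics whose shear strain the new formulation reproduces exactly read

```latex
u(x,z) = u_0(x) + z\,\varphi(x), \qquad w(x,z) = w_0(x), \qquad
\gamma_{xz} = \partial_z u + \partial_x w = \varphi(x) + w_0'(x)
```

so that the transverse shear strain γ_xz is constant through the wall thickness, the benchmark against which the richer GBT warping description is calibrated.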
Abstract:
Radiotherapy has shown some efficacy for epilepsies, but the insufficient confinement of the radiation dose to the pathological target limits its indications. Synchrotron-generated X-rays overcome this limitation and allow the delivery of focused radiation doses to discrete brain volumes via interlaced arrays of microbeams (IntMRT). Here, we used IntMRT to target brain structures involved in seizure generation in a rat model of absence epilepsy (GAERS). We addressed the issue of whether and how synchrotron radiotherapeutic treatment suppresses epileptic activities in neuronal networks. IntMRT was used to target the somatosensory cortex (S1Cx), a region involved in seizure generation in the GAERS. The antiepileptic mechanisms were investigated by recording multisite local field potentials and the intracellular activity of irradiated S1Cx pyramidal neurons in vivo. MRI and histopathological images displayed precise and sharp dose deposition and revealed no impairment of the surrounding tissues. Local field potentials from behaving animals demonstrated a quasi-total abolition of epileptiform activities within the target. The irradiated S1Cx was unable to initiate seizures, whereas neighboring non-irradiated cortical and thalamic regions could still produce pathological oscillations. In vivo intracellular recordings showed that irradiated pyramidal neurons were strongly hyperpolarized and displayed decreased excitability and reduced spontaneous synaptic activity. These functional alterations explain the suppression of large-scale synchronization within irradiated cortical networks. Our work provides the first post-irradiation electrophysiological recordings of individual neurons. Altogether, our data are a critical step towards understanding how X-ray radiation impacts neuronal physiology and epileptogenic processes.
Abstract:
Filaggrin loss-of-function mutations resulting in C-terminal protein truncations are strong predisposing factors in human atopic dermatitis (AD). To assess the possibility of similar truncations in canine AD, an exclusion strategy was designed using 16 control and 18 AD dogs of various breeds. Comparative immunofluorescence microscopy was performed with an antibody raised against the canine filaggrin C-terminus and a commercial N-terminal antibody. Concurrent with human AD-like features such as generalized NF-κB activation and hyperproliferation, four distinctive filaggrin expression patterns were identified in non-lesional skin. It was found that 10/18 AD dogs exhibited an identical pattern for both antibodies, with expression comparable to that of controls (category I, 3/18) or reduced (category II, 7/18). In contrast, 4/18 dogs displayed aberrant large vesicles revealed by the C-terminal but not the N-terminal antibody (category III), while 4/18 showed control-like N-terminal expression but lacked the C-terminal protein (category IV). The missing C-terminal filaggrin in category IV strongly points towards loss-of-function mutations in 4/18 (22%) of the AD dogs analysed.
Abstract:
Marginal generalized linear models can be used for clustered and longitudinal data by fitting a model as if the data were independent and using an empirical estimator of the parameter standard errors. We extend this approach to data where the number of observations correlated with a given one grows with the sample size, and show that the parameter estimates are consistent and asymptotically normal, with a slower convergence rate than for independent data, and that an information-sandwich variance estimator is consistent. We present two problems that motivated this work: the modelling of patterns of HIV genetic variation, and the behaviour of clustered-data estimators when clusters are large.
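As a minimal sketch of this approach (not the authors' code; the data are simulated and all variable names are ours), statsmodels can fit a marginal logistic model under working independence and report the empirical sandwich standard errors:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_clusters, size = 200, 8                       # hypothetical clustered data
groups = np.repeat(np.arange(n_clusters), size)
x = rng.normal(size=n_clusters * size)
u = np.repeat(rng.normal(scale=0.5, size=n_clusters), size)  # shared cluster effect
p = 1.0 / (1.0 + np.exp(-(0.5 * x + u)))
y = (rng.random(n_clusters * size) < p).astype(int)

X = sm.add_constant(x)
# Fit as if observations were independent; the empirical ("sandwich")
# covariance estimator then accounts for the within-cluster correlation.
model = sm.GEE(y, X, groups=groups,
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Independence())
result = model.fit()
print(result.summary())   # standard errors are robust (sandwich) by default
```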
Abstract:
Generalized linear mixed models (GLMMs) provide an elegant framework for the analysis of correlated data. Because the likelihood has no closed form, GLMMs are often fit by computational procedures such as penalized quasi-likelihood (PQL). Special cases of these models are generalized linear models (GLMs), which are often fit using algorithms such as iterative weighted least squares (IWLS). High computational costs and memory constraints often make it difficult to apply these iterative procedures to data sets with a very large number of cases. This paper proposes a computationally efficient strategy based on the Gauss-Seidel algorithm that iteratively fits sub-models of the GLMM to subsetted versions of the data. Additional gains in efficiency are achieved for Poisson models, commonly used in disease-mapping problems, because their special collapsibility property allows data reduction through summaries. Convergence of the proposed iterative procedure is guaranteed for canonical link functions. The strategy is applied to investigate the relationship between ischemic heart disease, socioeconomic status and age/gender category in New South Wales, Australia, based on outcome data consisting of approximately 33 million records. A simulation study demonstrates the algorithm's reliability in analyzing a data set with 12 million records for a (non-collapsible) logistic regression model.
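To make one building block concrete, here is a minimal IWLS sketch for a Poisson GLM with the canonical log link (simulated data; an illustration of the standard algorithm, not the paper's Gauss-Seidel code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.2, 0.5])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(X.shape[1])
for _ in range(25):                        # IWLS for Poisson, log link
    eta = X @ beta
    mu = np.exp(eta)                       # inverse link
    W = mu                                 # working weights: Var(y) = mu
    z = eta + (y - mu) / mu                # working response
    WX = X * W[:, None]
    beta_new = np.linalg.solve(X.T @ WX, X.T @ (W * z))
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new
print(beta)   # close to beta_true
```

The proposed strategy applies steps of this kind to sub-models on subsets of the data; for Poisson models, collapsibility means each subset can first be reduced to summary counts before the update.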
Abstract:
BACKGROUND: Many HIV-infected patients on highly active antiretroviral therapy (HAART) experience metabolic complications, including dyslipidaemia and insulin resistance, which may increase their coronary heart disease (CHD) risk. We developed a prognostic model for CHD tailored to the changes in risk factors observed in patients starting HAART. METHODS: Data from five cohort studies (British Regional Heart Study, Caerphilly and Speedwell Studies, Framingham Offspring Study, Whitehall II) on 13,100 men aged 40-70, with 114,443 person-years of follow-up, were used. CHD was defined as myocardial infarction or death from CHD. Model fit was assessed using the Akaike Information Criterion; generalizability across cohorts was examined using internal-external cross-validation. RESULTS: A parametric model based on the Gompertz distribution generalized best. Variables included in the model were systolic blood pressure, total cholesterol, high-density lipoprotein cholesterol, triglyceride, glucose, diabetes mellitus, body mass index and smoking status. Compared with patients not on HAART, the estimated CHD hazard ratio (HR) for patients on HAART was 1.46 (95% CI 1.15-1.86) for moderate and 2.48 (95% CI 1.76-3.51) for severe metabolic complications. CONCLUSIONS: The change in the risk of CHD in HIV-infected men starting HAART can be estimated from typical changes in risk factors, assuming that HRs estimated using data from non-infected men are applicable to HIV-infected men. Based on this model, the risk of CHD is likely to increase, but the increases may often be modest and could be offset by lifestyle changes.
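For concreteness (a hedged sketch of the functional form, with notation assumed rather than taken from the paper), a Gompertz proportional-hazards model can be written as

```latex
h(t \mid \mathbf{x}) \;=\; \lambda\, e^{\gamma t}
\exp\!\big(\mathbf{x}^{\top}\boldsymbol{\beta}\big), \qquad \lambda > 0
```

where λ and γ parameterize the Gompertz baseline hazard and x collects the risk factors listed above; a HAART-related shift Δx in the risk factors then scales the hazard by exp(Δxᵀβ) at every age, which is the multiplicative sense in which HRs such as 1.46 and 2.48 apply.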
Abstract:
27-channel EEG potential map series were recorded from 12 normal subjects with eyes closed and eyes open. Intracerebral dipole-model source locations in the frequency domain were computed. Eye opening (visual input) caused a centralization (convergence and elevation) of the source locations of the seven frequency bands, indicative of generalized activity; in particular, there was a clear anteriorization of the α-2 (10.5–12 Hz) and β-2 (18.5–21 Hz) sources (α-2 also to the left). The complexity of the map series' trajectories in state space (assessed by Global Dimensional Complexity and Global OMEGA Complexity) increased significantly with eye opening, indicative of more independent, parallel, active processes. In contrast to PET and fMRI findings, these results suggest that brain activity is more distributed and independent during visual input than after eye closing (when it is more localized and more posterior).
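As a hedged sketch of the kind of measure involved (OMEGA complexity is commonly defined as the exponential of the entropy of the normalized eigenvalue spectrum of the spatial covariance of the map series; we assume that definition here, and the data below are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
maps = rng.normal(size=(2000, 27))         # time points x 27 channels (random stand-in)
maps -= maps.mean(axis=1, keepdims=True)   # average-reference each map

cov = np.cov(maps, rowvar=False)           # 27 x 27 spatial covariance
lam = np.linalg.eigvalsh(cov)
lam = lam[lam > 0] / lam.sum()             # normalized eigenvalue spectrum
omega = np.exp(-(lam * np.log(lam)).sum()) # exp of spectral entropy
print(omega)   # ~1 = one global process; ~27 = fully independent channels
```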
Abstract:
This study examines the relationship among psychological resources (generalized resistance resources), care demands (demands for care, competing demands, perception of burden) and cognitive stress in a selected population of primary family caregivers. The study utilizes Antonovsky's Salutogenic Model of Health, specifically the concept of generalized resistance resources (GRRs), to analyze the relative effect of these resources in mediating cognitive stress, controlling for other care demands. The study is based on a sample of 784 eligible caregivers who (1) were relatives, (2) had the main responsibility for care, defined as a primary caregiver, and (3) provided a scaled stress score for the amount of overall care given to the care recipient (family member). The sample was drawn from the 1982 National Long-Term Care Survey (NLTCS) of individuals who assisted a given NLTCS sample person with ADL limitations. The study tests the following hypotheses: (a) there will be a negative relationship between generalized resistance resources (GRRs) and cognitive stress, controlling for care demands (demands for care, competing demands, and perceptions of burden); (b) of the specific GRRs (material, cognitive, social, cultural-environmental), the social domain will be the most significant factor predicting a decrease in cognitive stress; and (c) the social domain will be more significant for female than for male primary family caregivers in decreasing cognitive stress. The study found that GRRs had a statistically significant mediating effect on cognitive stress, but that GRRs were a less significant predictor of stress than perception of burden and demands for care. Thus, although the analysis supported the underlying hypothesis, the specific hypothesis regarding the GRRs' greater significance in buffering cognitive stress was not supported. Second, the results did not demonstrate statistically significant differences among the GRR domains; the hypothesis that the social GRR domain was most significant in mediating the stress of family caregivers was not supported. Finally, the results confirmed that there are gender differences in the importance of social-support help in mediating stress: gender and social-support help were related to cognitive stress, and gender had a statistically significant interaction effect with social-support help. Implications for clinical practice, public health policy, and research are discussed.
Abstract:
This paper reports a comparison of three modeling strategies for the analysis of hospital mortality in a sample of general medicine inpatients in a Department of Veterans Affairs medical center. Logistic regression, a Markov chain model, and longitudinal logistic regression were evaluated on predictive performance, as measured by the c-index, and on the accuracy of the expected numbers of deaths compared with those observed. The logistic regression used patient information collected at admission; the Markov model comprised two absorbing states, for discharge and death, and three transient states reflecting increasing severity of illness as measured by laboratory data collected during the hospital stay; the longitudinal regression employed Generalized Estimating Equations (GEE) to model the covariance structure of the repeated binary outcome. Results showed that the logistic regression predicted hospital mortality as well as the alternative methods but was limited in its scope of application. The Markov chain provides insights into how day-to-day changes in illness severity lead to discharge or death. The longitudinal logistic regression showed that an increasing illness trajectory is associated with hospital mortality. The conclusion is that for standard applications in modeling hospital mortality logistic regression is adequate, but for the new challenges facing health services research today, alternative methods are equally predictive, practical, and can provide new insights.
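A minimal sketch of the Markov-chain component (the transition probabilities are hypothetical, not the paper's estimates): with three transient severity states and two absorbing states, the probability of eventually dying and the expected length of stay follow from the fundamental matrix.

```python
import numpy as np

# Daily transition matrix over [mild, moderate, severe, discharge, death];
# the probabilities below are hypothetical.
P = np.array([
    [0.70, 0.15, 0.02, 0.12, 0.01],
    [0.20, 0.55, 0.12, 0.10, 0.03],
    [0.02, 0.20, 0.58, 0.05, 0.15],
    [0.00, 0.00, 0.00, 1.00, 0.00],   # discharge (absorbing)
    [0.00, 0.00, 0.00, 0.00, 1.00],   # death (absorbing)
])

Q = P[:3, :3]                      # transient -> transient
R = P[:3, 3:]                      # transient -> absorbing
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected state visits
B = N @ R                          # absorption probabilities
print(B[:, 1])                     # P(death) from mild, moderate, severe
print(N.sum(axis=1))               # expected length of stay by initial severity
```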
Abstract:
During the generalization of epileptic seizures, pathological activity in one brain area recruits distant brain structures into joint synchronous discharges. However, it remains unknown whether specific changes in local circuit activity are related to the aberrant recruitment of anatomically distant structures into epileptiform discharges. Further, it is not known whether aberrant areas recruit or entrain healthy ones into pathological activity. Here we study the dynamics of local circuit activity during the spread of epileptiform discharges in the zero-magnesium in vitro model of epilepsy. We employ high-speed multi-photon imaging in combination with dual whole-cell recordings in acute thalamocortical (TC) slices of the juvenile mouse to characterize the generalization of epileptic activity between neocortex and thalamus. We find that, although both structures are exposed to zero-magnesium conditions, the initial onset of focal epileptiform discharge occurs in cortex. This suggests that the local recurrent connectivity that is particularly prevalent in cortex is important for the initiation of seizure activity. Subsequent recruitment of thalamus into joint, generalized discharges coincides with an increase in the coherence of local cortical circuit activity that itself does not depend on thalamus. Finally, the intensity of population discharges is positively correlated between the two brain areas. This suggests that during and after seizure generalization, not only the timing but also the amplitude of epileptiform discharges in thalamus is entrained by cortex. Together these results suggest a central role of neocortical activity in the onset and structure of the pathological recruitment of thalamus into joint synchronous epileptiform discharges.