963 results for Generalised Linear Modeling
Abstract:
Recent studies have pointed out a similarity between tectonic structures and those induced by slope deformation. Numerous studies have demonstrated that structures and fabrics previously interpreted as being of purely geodynamic origin are instead the result of large slope deformation, which has led to erroneous interpretations in the past. Nevertheless, the boundary between the two seems not clearly defined but rather transitional: some studies point out a continuity between failures developing at the surface and upper-crust movements. In this contribution, the main studies examining the link between rock structures and slope movements are reviewed. Aspects regarding the model and scale of observation are discussed, together with the role of pre-existing weaknesses in the rock mass. As slope failures can develop through progressive failure, structures and their changes in time and space can be recognized. Furthermore, recognizing the origin of these structures can help avoid misinterpretations of regional geology. This also suggests the importance of integrating different slope-movement classifications based on the distribution and pattern of deformation, and of applying structural geology techniques. A structural geology approach in the landslide community can greatly support the quantification of hazard and related risks, because most of the physical parameters used for landslide modeling are derived from geotechnical tests or from emerging geophysical approaches.
Abstract:
Natural selection is typically exerted at specific life stages. If natural selection takes place before a trait can be measured, conventional models can yield incorrect inferences about population parameters. When the missing-data process is related to the trait of interest, valid inference requires explicit modeling of the missing process. We propose a joint modeling approach, a shared parameter model, to account for nonrandom missing data. It consists of an animal model for the phenotypic data and a logistic model for the missing process, linked by the additive genetic effects. A Bayesian approach is taken, and inference is made using integrated nested Laplace approximations. A simulation study shows that wrongly assuming that missing data are missing at random can result in severely biased estimates of additive genetic variance. Using real data from a wild population of Swiss barn owls (Tyto alba), our model indicates that the missing individuals would have displayed large black spots; we conclude that genes affecting this trait are already under selection before it is expressed. Our model is a tool for correctly estimating the magnitude of both natural selection and additive genetic variance.
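A minimal simulation sketch of the core issue, in a toy setting with made-up parameters (not the authors' animal-model/INLA implementation): when the probability of observing an individual depends on its additive genetic effect, a naive variance estimate that treats the data as missing at random is biased.

```python
# Toy illustration of nonrandom missingness biasing a variance estimate.
# All parameter values are assumptions chosen for demonstration.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
sigma2_g, sigma2_e = 1.0, 1.0

g = rng.normal(0.0, np.sqrt(sigma2_g), n)        # additive genetic effects
y = g + rng.normal(0.0, np.sqrt(sigma2_e), n)    # phenotypes

# Shared-parameter missingness: individuals with small g are less likely
# to survive to the stage where the trait is measured.
p_observed = 1.0 / (1.0 + np.exp(-2.0 * g))
observed = rng.uniform(size=n) < p_observed

print("true phenotypic variance :", sigma2_g + sigma2_e)
print("variance among observed  :", y[observed].var())  # biased downward
```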
Abstract:
HIV-1 infects CD4+ T cells and completes its replication cycle in approximately 24 hours. We employed repeated measurements in a standardized cell system and rigorous mathematical modeling to characterize the emergence of the viral replication intermediates and their impact on the cellular transcriptional response with high temporal resolution. We observed 7,991 (73%) of the 10,958 expressed genes to be modulated in concordance with key steps of viral replication. Fifty-two percent of the overall variability in the host transcriptome was explained by linear regression on the viral life cycle. This profound perturbation of cellular physiology was investigated in the light of several regulatory mechanisms, including transcription factors, miRNAs, host-pathogen interaction, and proviral integration. Key features were validated in primary CD4+ T cells, and with viral constructs using alternative entry strategies. We propose a model of early massive cellular shutdown and progressive upregulation of the cellular machinery to complete the viral life cycle.
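As a rough illustration of the regression summary above, the sketch below computes the overall fraction of transcriptome variance explained by per-gene least-squares regression on viral life-cycle covariates; the data and covariates are synthetic placeholders, not the study's measurements.

```python
# Sketch: variance in a gene-expression matrix explained by linear
# regression on viral life-cycle covariates (synthetic placeholders).
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_genes = 12, 500
# Hypothetical viral intermediates measured over time:
X = np.column_stack([np.ones(n_timepoints),
                     rng.random((n_timepoints, 2))])
Y = rng.normal(size=(n_timepoints, n_genes))   # gene expression matrix

beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # per-gene least squares
resid = Y - X @ beta
ss_res = (resid ** 2).sum()
ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum()
print("overall variance explained:", 1.0 - ss_res / ss_tot)
```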
Abstract:
The pathogenesis of Schistosoma mansoni infection is largely determined by host T-cell mediated immune responses, such as the granulomatous response to tissue-deposited eggs and subsequent fibrosis. The major egg antigens have a valuable role in desensitizing the CD4+ Th cells that mediate granuloma formation, which may prevent or ameliorate clinical signs of schistosomiasis. The S. mansoni major egg antigen Smp40 was expressed and completely purified. The expressed Smp40 was found to react specifically with an anti-Smp40 monoclonal antibody in Western blotting. The three-dimensional structure was elucidated by molecular modeling, based on the similarity of Smp40 with the small heat shock protein deposited in the protein database as 1SHS, which was used as a template. The C-terminus of the Smp40 protein (residues 130 onward) contains two alpha-crystallin domains. The fold consists of eight beta strands sandwiched in two sheets forming a Greek key. The purified Smp40 was used for in vitro stimulation of peripheral blood mononuclear cells from patients infected with S. mansoni, using phytohemagglutinin mitogen as a positive control. The results showed no statistical difference in interferon-γ, interleukin (IL)-4 and IL-13 levels obtained with Smp40 stimulation compared with the control group (P > 0.05 for each). On the other hand, there were significant differences after Smp40 stimulation in IL-5 (P = 0.006) and IL-10 levels (P < 0.001) compared with the control group. A review of the literature indicates that the overall cytokine profile obtained with Smp40 stimulation is reported to be associated with reduced collagen deposition, decreased fibrosis and inhibition of granuloma formation. This may reflect its future prospect as a leading anti-pathology schistosomal vaccine candidate.
Abstract:
By 2002, dengue virus serotype 1 (DENV-1) and DENV-2 had circulated for more than a decade in Brazil. In 2002, the introduction of DENV-3 in the state of Bahia produced a massive epidemic and the first cases of dengue hemorrhagic fever. Based on the standardized frequency, timing and location of viral isolations by the state's Central Laboratory, DENV-3 probably entered Bahia through its capital, Salvador, and then rapidly disseminated to other cities, following the main roads. A linear regression model that included traffic flow, distance from the capital and DENV-1 circulation (r² = 0.24, p = 0.001) supported this hypothesis. This pattern was not seen for serotypes already in circulation and was not seen for DENV-3 in the following year. Human population density was another important factor in the intensity of viral circulation. Neither DENV-1 nor DENV-2 fit this model for 2001 or 2003. Since the vector has limited flight range and vector densities fail to correlate with intensity of viral circulation, this distribution represents the movement of infected people and to some extent mosquitoes. This pattern may mimic person-to-person spread of a new infection.
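A sketch of the kind of regression described above, fitted with synthetic placeholder data (the predictor values and coefficients are assumptions, not the study's):

```python
# Linear model of DENV-3 circulation intensity on traffic flow, distance
# from the capital, and prior DENV-1 circulation (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n_cities = 80
traffic = rng.gamma(2.0, 50.0, n_cities)      # vehicles/day (hypothetical)
distance = rng.uniform(10, 900, n_cities)     # km from Salvador
denv1 = rng.poisson(5.0, n_cities)            # prior DENV-1 isolations
y = 0.01 * traffic - 0.002 * distance + 0.1 * denv1 + rng.normal(0, 1, n_cities)

X = np.column_stack([np.ones(n_cities), traffic, distance, denv1])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("coefficients:", beta.round(4), " r^2:", round(r2, 2))
```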
Abstract:
Observations in daily practice are sometimes registered as positive values larger than a given threshold α. The sample space is in this case the interval (α, +∞), α > 0, which can be structured as a real Euclidean space in different ways. This fact opens the door to alternative statistical models depending not only on the assumed distribution function, but also on the metric which is considered appropriate, i.e. the way differences, and thus variability, are measured.
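For instance, one standard construction (a sketch of one possible choice, not necessarily the one adopted in the paper) pulls the Euclidean structure of the real line back onto (α, +∞) through a log-shift transformation:

```latex
\[
  h : (\alpha, +\infty) \to \mathbb{R}, \qquad h(x) = \ln(x - \alpha),
\]
\[
  x \oplus y = \alpha + (x-\alpha)(y-\alpha), \qquad
  \lambda \odot x = \alpha + (x-\alpha)^{\lambda},
\]
\[
  d(x, y) = \lvert h(x) - h(y) \rvert
          = \left\lvert \ln \frac{x-\alpha}{y-\alpha} \right\rvert .
\]
```

Under this metric, differences near the threshold count for more than equal absolute differences far from it, which changes what "variability" means for the resulting statistical model.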
Abstract:
This paper is a first draft of the principles of statistical modelling on coordinates. Several causes, which would take too long to detail, have led to this situation close to the deadline for submitting papers to CODAWORK'03. The main one is the fast development of the approach over the last months, which has made previous drafts appear obsolete. The present paper contains the essential parts of the state of the art of this approach from my point of view. I would like to acknowledge many clarifying discussions with the group of people working in this field in Girona, Barcelona, Carrick Castle, Firenze, Berlin, Göttingen, and Freiberg. They have given a lot of suggestions and ideas. Nevertheless, there might still be errors or unclear aspects, which are exclusively my fault. I hope this contribution serves as a basis for further discussions and new developments.
Abstract:
Aitchison and Bacon-Shone (1999) considered convex linear combinations of compositions. In other words, they investigated compositions of compositions, where the mixing composition follows a logistic Normal distribution (or a perturbation process) and the compositions being mixed follow a logistic Normal distribution. In this paper, I investigate the extension to situations where the mixing composition varies with a number of dimensions. Examples would be where the mixing proportions vary with time or distance, or a combination of the two. Practical situations include a river where the mixing proportions vary along the river, or across a lake, and possibly with a time trend. This is illustrated with a dataset similar to that used in the Aitchison and Bacon-Shone paper, which looked at how pollution in a loch depended on the pollution in the three rivers that feed the loch. Here, I explicitly model the variation in the linear combination across the loch, assuming that the mean of the logistic Normal distribution depends on the river flows and the relative distance from the source origins.
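A minimal sketch of the idea with hypothetical numbers: the additive log-ratio coordinates of the mixing composition are modelled linearly in the distance across the loch, and the resulting proportions form a convex combination of the three river compositions.

```python
# Spatially varying mixing of compositions: alr coordinates of the mixing
# composition are linear in distance (all coefficients are illustrative).
import numpy as np

def alr_inv(z):
    """Inverse additive log-ratio: R^{D-1} -> D-part compositions."""
    e = np.exp(np.column_stack([z, np.zeros(len(z))]))
    return e / e.sum(axis=1, keepdims=True)

distance = np.linspace(0.0, 1.0, 5)    # relative position across the loch
b0 = np.array([0.5, -0.2])             # alr intercepts (hypothetical)
b1 = np.array([-1.0, 0.8])             # alr slopes (hypothetical)
weights = alr_inv(distance[:, None] * b1 + b0)   # rows sum to 1

rivers = np.array([[0.7, 0.2, 0.1],    # pollutant compositions of the
                   [0.2, 0.5, 0.3],    # three rivers (hypothetical)
                   [0.1, 0.3, 0.6]])
mixture = weights @ rivers             # composition at each location
print(mixture.round(3))
```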
Abstract:
In the context of the investigation of the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study presents investigations into the variability of scores from an AFIS system when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios, which allows the evaluation of mark-to-print comparisons. In particular, through its use of AFIS technology, this model benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source and showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR, which we refer to by the generic term between-finger variability. The issues addressed in relation to between-finger variability are the required sample size and the influence of the finger number and general pattern, as well as of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Furthermore, strategies for obtaining between-finger variability when these elements cannot be conclusively determined from the mark (or, for finger number, from its position with respect to other marks) have been presented. These results immediately allow case-by-case estimation of between-finger variability in an operational setting.
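A sketch of how the LR denominator might be assigned from such scores (the density model and all numbers are assumptions, not the paper's exact procedure):

```python
# Fit a density to scores from comparisons of the mark against a database
# of non-matching sources, then evaluate it at the observed AFIS score.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# Synthetic stand-ins for 10,000 between-finger (non-match) AFIS scores:
nonmatch_scores = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)

density = gaussian_kde(nonmatch_scores)    # between-finger score density
observed_score = 150.0                     # hypothetical mark-to-print score

denominator = density(observed_score)[0]  # P(score | different sources)
print("LR denominator estimate:", denominator)
# The LR would divide a same-source (numerator) density, evaluated at the
# same score, by this value.
```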
Abstract:
This research work deals with the problem of modeling and design of a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational tool. On one hand, the interest in using the open mobile platform PRIM lies in integrating several subjects highly related to automatic control theory in an educational context, embracing communications, signal processing, sensor fusion and hardware design, amongst others. On the other hand, the idea is to implement useful navigation strategies such that the robot can serve as a mobile multimedia information point. It is in this context, with navigation strategies oriented towards goal achievement, that a local model predictive control is developed. Such studies present a very interesting control strategy for developing the future capabilities of the system.
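A generic sketch of a local model predictive controller for goal-directed navigation, under assumed unicycle dynamics and made-up horizon, weights and limits (not the PRIM implementation):

```python
# Receding-horizon MPC for a unicycle-type robot driving to a goal.
import numpy as np
from scipy.optimize import minimize

DT, H = 0.1, 10                     # time step [s], prediction horizon

def rollout(state, controls):
    """Integrate unicycle dynamics over the horizon."""
    x, y, th = state
    traj = []
    for v, w in controls.reshape(H, 2):
        x += DT * v * np.cos(th)
        y += DT * v * np.sin(th)
        th += DT * w
        traj.append((x, y))
    return np.array(traj)

def cost(controls, state, goal):
    traj = rollout(state, controls)
    return ((traj - goal) ** 2).sum() + 0.01 * (controls ** 2).sum()

state, goal = np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.0])
res = minimize(cost, np.zeros(2 * H), args=(state, goal),
               method="L-BFGS-B",
               bounds=[(-0.5, 0.5)] * (2 * H))  # speed/turn-rate limits
v, w = res.x[:2]   # apply only the first control, then re-plan
print("first command: v =", round(v, 3), " w =", round(w, 3))
```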
Abstract:
This paper analyses the associations of the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI) with the prevalence of schistosomiasis and the presence of Biomphalaria glabrata in the state of Minas Gerais (MG), Brazil. Additionally, vegetation, soil and shade fraction images were created using a Linear Spectral Mixture Model (LSMM) from the blue, red and infrared channels of the Moderate Resolution Imaging Spectroradiometer spaceborne sensor, and the relationship between these images and the prevalence of schistosomiasis and the presence of B. glabrata was analysed. We found a high correlation, first, between the vegetation fraction image and the EVI and, second, between the soil fraction image and the NDVI. The results also indicate a positive correlation between prevalence and the vegetation fraction image (July 2002), a negative correlation between prevalence and the soil fraction image (July 2002) and a positive correlation between B. glabrata and the shade fraction image (July 2002). This paper demonstrates that the LSMM variables can be used as a substitute for the standard vegetation indices (EVI and NDVI) to determine and delimit risk areas for B. glabrata and schistosomiasis in MG, which can be used to improve the allocation of resources for disease control.
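A minimal sketch of the unmixing step behind such fraction images, with illustrative endmember spectra (not MODIS calibration values):

```python
# Linear Spectral Mixture Model: a pixel's reflectance in the blue, red
# and near-infrared channels is a linear combination of vegetation, soil
# and shade endmembers.
import numpy as np

# Columns: vegetation, soil, shade; rows: blue, red, near-infrared.
endmembers = np.array([[0.04, 0.10, 0.01],
                       [0.05, 0.25, 0.01],
                       [0.45, 0.30, 0.02]])

pixel = np.array([0.06, 0.12, 0.30])   # observed reflectances (made up)

# Least-squares fractions; a full LSMM would also enforce non-negativity
# and sum-to-one constraints on the solution.
fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
print("vegetation, soil, shade fractions:", fractions.round(3))
```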
Abstract:
MOTIVATION: Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of the stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to an over-representation of noise in GRNs and hence to non-correspondence with biological observations. RESULTS: In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation for the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. AVAILABILITY: Algorithms are made available in our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
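A toy sketch contrasting the two noise models on a two-gene Boolean network (the network and flip probability are illustrative assumptions, not the GenYsis implementation):

```python
# SIN flips each gene's resulting state with probability P, regardless of
# its update rule; SIF instead makes the update function's output itself
# faulty with probability P, tying noise to the regulatory logic.
import random

random.seed(0)
P = 0.01
functions = [lambda s: s[0] and not s[1],   # gene 0 update rule
             lambda s: s[0] or s[1]]        # gene 1 update rule

def step_sin(state):
    nxt = [f(state) for f in functions]
    return [not v if random.random() < P else v for v in nxt]

def step_sif(state):
    return [not f(state) if random.random() < P else f(state)
            for f in functions]

state_sin = state_sif = [True, False]
for _ in range(5):
    state_sin = step_sin(state_sin)
    state_sif = step_sif(state_sif)
print("after 5 steps  SIN:", state_sin, " SIF:", state_sif)
```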
Abstract:
Aim: The aim of the study was to investigate the influence of dietary intake of commercial hydrolyzed collagen (Gelatine Royal®) on bone remodeling in pre-pubertal children. Methods: A randomized double-blind study was carried out in 60 children (9.42 ± 1.31 years) divided into three groups according to the amount of partially hydrolyzed collagen taken daily for 4 months: placebo (G-I, n = 18), collagen (G-II, n = 20) and collagen + calcium (G-III, n = 22) groups. The following biochemical markers were analysed: total and bone alkaline phosphatase (tALP and bALP), osteocalcin, tartrate-resistant acid phosphatase (TRAP), type I collagen carboxy-terminal telopeptide, lipids, calcium, 25-hydroxyvitamin D, insulin-like growth factor 1 (IGF-1), thyroid-stimulating hormone, free thyroxine and intact parathormone. Results: There was a significantly greater increase in serum IGF-1 in G-III than in G-II (p < 0.01) or G-I (p < 0.05) during the study period, and a significantly greater increase in plasma tALP in G-III than in G-I (p < 0.05). Serum bALP behavior significantly (p < 0.05) differed between G-II (increase) and G-I (decrease). Plasma TRAP behavior significantly differed between G-II and G-I (p < 0.01) and between G-III and G-II (p < 0.05). Conclusion: Daily dietary intake of hydrolyzed collagen seems to have a potential role in enhancing bone remodeling at key stages of growth and development.
Abstract:
Metabolic problems lead to numerous failures during clinical trials, and much effort is now devoted to developing in silico models that predict metabolic stability and metabolites. Such models are well known for cytochromes P450 and some transferases, whereas less has been done to predict the activity of human hydrolases. The present study was undertaken to develop a computational approach able to predict the hydrolysis of novel esters by human carboxylesterase hCES2. The study first involved homology modeling of the hCES2 protein based on the model of hCES1, since the two proteins share a high degree of homology (≈73%). A set of 40 known substrates of hCES2 was taken from the literature; the ligands were docked in both their neutral and ionized forms using GriDock, a parallel tool based on the AutoDock 4.0 engine which can perform efficient and easy virtual screening analyses of large molecular databases exploiting multi-core architectures. Useful statistical models (e.g., r² = 0.91 for substrates in their unprotonated state) were calculated by correlating experimental pKm values with the distance between the carbon atom of the substrate's ester group and the hydroxy function of Ser228. Additional parameters in the equations accounted for hydrophobic and electrostatic interactions between substrates and contributing residues. The negatively charged residues in the hCES2 cavity explained the preference of the enzyme for neutral substrates and, more generally, suggested that ligands which interact too strongly by ionic bonds (e.g., ACE inhibitors) cannot be good CES2 substrates because they are trapped in the cavity in unproductive modes and behave as inhibitors. The effects of protonation on substrate recognition and the contrasting behavior of substrates and products were finally investigated by MD simulations of some CES2 complexes.
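A sketch of the kind of correlation model reported above, with synthetic placeholder values (not the study's data):

```python
# Linear fit of experimental pKm against the docked distance between the
# substrate's ester carbon and the Ser228 hydroxyl (numbers are made up).
import numpy as np

distance = np.array([3.1, 3.4, 3.6, 4.0, 4.5, 5.2])   # angstroms
pkm = np.array([4.8, 4.6, 4.3, 4.0, 3.5, 2.9])        # hypothetical pKm

slope, intercept = np.polyfit(distance, pkm, 1)
pred = slope * distance + intercept
r2 = 1 - ((pkm - pred) ** 2).sum() / ((pkm - pkm.mean()) ** 2).sum()
print(f"pKm ≈ {slope:.2f} * d + {intercept:.2f},  r^2 = {r2:.2f}")
```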