958 results for Classical orthogonal polynomials of a discrete variable
Abstract:
BACKGROUND: The ambition of most molecular biologists is to understand the intricate network of molecular interactions that control biological systems. As scientists uncover the components and the connectivity of these networks, it becomes possible to study their dynamical behavior as a whole and to discover the specific role of each of their components. Since the behavior of a network is by no means intuitive, computational models are necessary to understand it and to make predictions about it. Unfortunately, most current computational models describe small networks due to the scarcity of available kinetic data. To overcome this problem, we previously published a methodology to convert a signaling network into a dynamical system, even in the total absence of kinetic information. In this paper we present a software implementation of that methodology. RESULTS: We developed SQUAD, a software tool for the dynamic simulation of signaling networks using the standardized qualitative dynamical systems approach. SQUAD converts the network into a discrete dynamical system and uses a binary decision diagram algorithm to identify all the steady states of the system. The software then creates a continuous dynamical system and localizes its steady states, which lie near those of the discrete system. Simulations can be run on the continuous system, with several parameters open to modification. Importantly, SQUAD includes a framework for perturbing networks in a manner similar to what is performed in experimental laboratory protocols, for example by activating receptors or knocking out molecular components. Using this software we have been able to successfully reproduce the behavior of the regulatory network implicated in T-helper cell differentiation. CONCLUSION: The simulation of regulatory networks aims at predicting the behavior of a whole system when subjected to stimuli, such as drugs, or at determining the role of specific components within the network. The predictions can then be used to interpret and/or drive laboratory experiments. SQUAD provides a user-friendly graphical interface, accessible to both computational and experimental biologists, for the fast qualitative simulation of large regulatory networks for which kinetic data are not necessarily available.
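To make the two-step conversion concrete, the sketch below is a minimal, hypothetical Python illustration (not SQUAD's actual implementation): it enumerates the steady states of a toy three-node Boolean network by brute force, standing in for the binary decision diagram step, and then integrates a sigmoidal continuous system built from the same interaction weights, whose attractors lie near the discrete steady states. The network, weights, gain h and decay rate are illustrative assumptions.

```python
import itertools
import numpy as np
from scipy.integrate import solve_ivp

# Toy three-node cascade (hypothetical): Receptor -> Kinase -> TF, activations only (+1).
nodes = ["Receptor", "Kinase", "TF"]
W = np.array([[0, 0, 0],    # Receptor: treated as an external input (no regulators)
              [1, 0, 0],    # Kinase: activated by Receptor
              [0, 1, 0]])   # TF: activated by Kinase

def boolean_update(state):
    """Synchronous Boolean update: a regulated node turns ON if its net input is positive."""
    new = state.copy()
    for i in range(len(nodes)):
        if W[i].any():                      # input nodes keep their value
            new[i] = int(W[i] @ state > 0)
    return new

# Brute-force enumeration of discrete steady states (a stand-in for SQUAD's BDD algorithm).
discrete_steady = [np.array(s) for s in itertools.product([0, 1], repeat=len(nodes))
                   if np.array_equal(boolean_update(np.array(s)), np.array(s))]

def continuous_rhs(t, x, h=10.0, gamma=1.0):
    """Sigmoidal interpolation of the Boolean rules plus first-order decay."""
    w = W @ x
    act = 1.0 / (1.0 + np.exp(-h * (w - 0.5)))
    act[~W.any(axis=1)] = x[~W.any(axis=1)]  # hold input nodes constant
    return act - gamma * x

# Relax the continuous system starting from the "Receptor ON" discrete steady state.
x0 = discrete_steady[-1].astype(float)
sol = solve_ivp(continuous_rhs, (0.0, 50.0), x0)
print("discrete steady states:", [s.tolist() for s in discrete_steady])
print("continuous end state:  ", np.round(sol.y[:, -1], 3))
```

In SQUAD itself the discrete steady states are computed symbolically with binary decision diagrams rather than by enumeration, which is what allows the tool to handle large networks.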
Abstract:
This paper explores how absorptive capacity affects the innovative performance and productivity dynamics of Spanish firms. A firm's efficiency levels are measured using two variables: labour productivity and Total Factor Productivity (TFP). The theoretical framework is based on the seminal contributions of Cohen and Levinthal (1989, 1990) regarding absorptive capacity, and the applied framework is based on the four-stage structural model proposed by Crépon, Duguet and Mairesse (1998), which links the determinants of R&D, the effects of R&D activities on innovation outputs, and the impacts of innovation on firm productivity. The present study uses a two-stage structural model. In the first stage, a probit estimation is used to investigate how the sources of R&D, absorptive capacity and a vector of the firm's individual features influence the firm's likelihood of developing innovations in products or processes. In the second stage, a quantile regression is used to analyze the effect of R&D sources, absorptive capacity and firm characteristics on productivity. This method yields the elasticity of productivity with respect to each exogenous variable at different levels of firm efficiency, and thus allows us to distinguish between firms that are close to the technological frontier and those that are further away from it. We used extensive firm-level panel data from 5,575 firms for the 2004-2009 period. The results show that internal absorptive capacity has a strong impact on the productivity of firms, whereas the role of external absorptive capacity differs according to the nature of each industry and to the distance of firms from the technological frontier. Key words: R&D sources, innovation strategies, absorptive capacity, technological distance, quantile regression.
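As a rough illustration of the two-stage set-up, the sketch below fits a first-stage probit for the probability of innovating and second-stage quantile regressions of (log) productivity at several quantiles, using statsmodels on simulated data. Variable names and coefficients are hypothetical, and the sketch deliberately ignores the selection and endogeneity corrections of the full CDM framework.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 2000  # simulated firms (illustrative only)

# Hypothetical firm-level covariates: internal/external R&D and absorptive capacity.
df = pd.DataFrame({
    "internal_rd": rng.normal(size=n),
    "external_rd": rng.normal(size=n),
    "absorptive":  rng.normal(size=n),
    "size":        rng.normal(size=n),
})
latent = 0.8 * df["internal_rd"] + 0.4 * df["absorptive"] + rng.normal(size=n)
df["innovates"] = (latent > 0).astype(int)
df["log_productivity"] = (0.3 * df["innovates"] + 0.2 * df["absorptive"]
                          + 0.1 * df["size"] + rng.normal(size=n))

# Stage 1: probit for the probability of introducing a product/process innovation.
X1 = sm.add_constant(df[["internal_rd", "external_rd", "absorptive", "size"]])
probit_res = sm.Probit(df["innovates"], X1).fit(disp=False)
print("probit coefficients:", probit_res.params.round(3).to_dict())

# Stage 2: quantile regressions of (log) productivity, comparing firms near the
# technological frontier (high quantiles) with those far from it (low quantiles).
X2 = sm.add_constant(df[["innovates", "absorptive", "size"]])
for q in (0.1, 0.5, 0.9):
    qres = QuantReg(df["log_productivity"], X2).fit(q=q)
    print(f"q={q}:", qres.params.round(3).to_dict())
```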
Abstract:
OBJECTIVE: Insulin-like growth factor-I (IGF-I) is an important regulator of fetal growth and its bioavailability depends on insulin-like growth factor binding proteins (IGFBPs). Genes coding for IGF-I and IGFBP3 are polymorphic. We hypothesized that either amniotic fluid protein concentration at the beginning of the second trimester or the genotype of one of these two genes could be predictive of abnormal fetal growth. STUDY DESIGN: Amniotic fluid samples (14-18 weeks of pregnancy) from 123 patients with appropriate for gestational age (AGA) fetuses, 39 patients with small for gestational age (SGA) fetuses and 34 patients with large for gestational age (LGA) fetuses were analyzed. Protein concentrations were evaluated by ELISA and gene polymorphisms by PCR. RESULTS: Amniotic fluid IGFBP3 concentrations were significantly higher in the SGA group than in the AGA group (P=0.030), and the difference was even more significant when adjusted for gestational age at the time of amniocentesis and other covariates (ANCOVA analysis: P=0.009). The genotypic distribution of the IGF-I variable number of tandem repeats (VNTR) polymorphism differed significantly between the SGA and AGA groups (P=0.029). The frequency of the 19CA/20CA genotype was threefold lower in the SGA group than in the AGA group, and the risk of SGA associated with this genotype was reduced accordingly: OR=0.289, 95%CI=0.1-0.9, P=0.032. The genotype distribution of the IGFBP3 (A-202C) polymorphism was similar in all three groups. CONCLUSIONS: High IGFBP3 concentrations in amniotic fluid at the beginning of the second trimester are associated with an increased risk of SGA, while the 19CA/20CA genotype of the IGF-I VNTR polymorphism is associated with a reduced risk of SGA. Neither IGFBP3 concentrations nor IGF-I/IGFBP3 polymorphisms are associated with a modified risk of LGA.
Abstract:
Integrons play a role in the horizontal acquisition and expression of genes and act as gene reservoirs, contributing to the resistance phenotype; this is particularly relevant in bacteria of clinical importance. We aimed to determine the composition and organization of the class 1 integron variable region present in Pseudomonas aeruginosa clinical isolates from Brazil. Strains carrying class 1 integrons were resistant to the majority of antibiotics tested, except imipenem and ceftazidime. Sequence analysis of the integron variable region revealed the presence of the blaCARB-4 gene in two distinct cassette arrays: aacA4-dhfrXVb-blaCARB-4 and aadB-aacA4-blaCARB-4. The dhfrXVb gene cassette, which is rare in Brazil and in the P. aeruginosa species, was found in one isolate. PFGE analysis showed the spread of blaCARB-4 among P. aeruginosa clones. The occurrence of blaCARB-4 and dhfrXVb in Brazil may contribute to the development of resistance to clinically important antibiotics, and reveals a diversified scenario of these elements in Amazon clinical settings, where no study of integron dynamics had been performed to date.
Abstract:
A version of Matheron's discrete Gaussian model is applied to cell composition data. The examples are for map patterns of felsic metavolcanics in two different areas. Q-Q plots of the model for cell values representing the proportion of a 10 km x 10 km cell area underlain by this rock type are approximately linear, and the line of best fit can be used to estimate the parameters of the model. It is also shown that felsic metavolcanics in the Abitibi area of the Canadian Shield can be modeled as a fractal.
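A minimal sketch of the Q-Q-plot estimation idea follows, assuming simulated logit-normal cell proportions rather than the paper's data and transform (Matheron's model uses a Gaussian anamorphosis; the logit here is only a stand-in): if the transformed values plot linearly against normal quantiles, the slope and intercept of the best-fit line estimate the scale and location parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical cell data: proportion of each 10 km x 10 km cell underlain by the rock type,
# simulated here from a logit-normal model purely for illustration.
z = rng.normal(loc=-1.0, scale=0.8, size=400)
proportions = 1.0 / (1.0 + np.exp(-z))

# Transform to an (approximately) Gaussian scale, then compare empirical quantiles with
# standard normal quantiles; a straight Q-Q plot supports the model, and the slope and
# intercept of the best-fit line estimate its scale and location parameters.
transformed = np.log(proportions / (1.0 - proportions))    # logit transform (an assumption)
(osm, osr), (slope, intercept, r) = stats.probplot(transformed, dist="norm", fit=True)

print(f"estimated location = {intercept:.3f}, scale = {slope:.3f}, R = {r:.3f}")
```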
Abstract:
One area which has been largely neglected when studying the acquisition of addiction to smoking with the transtheoretical model is whether the individual had previously experimented with smoking. The importance of including the experimentation variable was supported by this research.
Abstract:
In human Population Genetics, routine applications of principal component techniques are often required. Population biologists make widespread use of certain discrete classifications of human samples into haplotypes, the monophyletic units of phylogenetic trees constructed from several hierarchically ordered single nucleotide bimorphisms. Compositional frequencies of the haplotypes are recorded within the different samples. Principal component techniques are then required as a dimension-reducing strategy to bring the dimension of the problem to a manageable level, say two, to allow for graphical analysis. Population biologists at large are not aware of the special features of compositional data and normally make use of the crude covariance of compositional relative frequencies to construct principal components. In this short note we present our experience with using traditional linear principal components or compositional principal components based on logratios, with reference to a specific dataset.
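The contrast between the two strategies can be sketched in a few lines, here on simulated Dirichlet haplotype frequencies rather than the note's dataset: a "crude" PCA on the raw relative frequencies versus a compositional PCA on centred logratio (clr) transformed frequencies.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Hypothetical data: haplotype frequency compositions for 50 population samples,
# 6 haplotypes per sample (rows sum to 1), simulated from a Dirichlet for illustration.
freqs = rng.dirichlet(alpha=[8, 5, 3, 2, 1, 1], size=50)

# "Crude" approach: PCA directly on the relative frequencies.
crude_scores = PCA(n_components=2).fit_transform(freqs)

# Compositional approach: centred logratio (clr) transform, then PCA.
# A small pseudo-count guards against zero frequencies (an assumption of this sketch).
x = freqs + 1e-6
clr = np.log(x) - np.log(x).mean(axis=1, keepdims=True)
clr_scores = PCA(n_components=2).fit_transform(clr)

print("crude PC1/PC2 for first sample:", np.round(crude_scores[0], 3))
print("clr   PC1/PC2 for first sample:", np.round(clr_scores[0], 3))
```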
Abstract:
The classical statistical study of wind speed in the atmospheric surface layer is generally made from the analysis of the three habitual components of the wind data, that is, the W-E component, the S-N component and the vertical component, considering these components independent. When the goal of the study of these data is Aeolian energy, that is, when wind is studied from an energetic point of view, the squares of the wind components can be considered as compositional variables. To do so, each component has to be divided by the module of the corresponding vector. In this work the theoretical analysis of the components of the wind as compositional data is presented, together with the conclusions that can be obtained from the point of view of practical applications as well as those that can be derived from the application of this technique under different weather conditions.
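A minimal numerical illustration of that normalization (with a hypothetical wind vector): dividing each component by the module of the vector makes the squared components a composition that sums to one.

```python
import numpy as np

# Hypothetical wind vector: W-E, S-N and vertical components in m/s.
wind = np.array([3.2, -1.5, 0.4])

module = np.linalg.norm(wind)          # module (magnitude) of the wind vector
normalized = wind / module             # each component divided by the module
composition = normalized ** 2          # squared components form a composition

print("composition:", np.round(composition, 4), "sum =", composition.sum())  # sum == 1
```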
Abstract:
The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods. The author shows that the shooting method exhibits a lower complexity than the gathering one, and under some constraints, it has a linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. The author gives and compares three unbiased estimators for each method, and obtains closed forms and bounds for their variances. The author also bounds the expected value of the mean square error (MSE). Some of the results obtained are also shown
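The radiosity system has the linear form B = E + diag(ρ)F B, so the shooting idea can be illustrated, in a deliberately simplified way, with a generic collision-type random walk estimator for x = b + Hx. The uniform transition proposals and fixed absorption probability below are assumptions of this sketch, not the estimators analyzed by the author.

```python
import numpy as np

def shooting_walk_estimate(H, b, n_walks=20000, p_absorb=0.25, rng=None):
    """Collision-type random walk estimator for x = b + H x (convergent Neumann series).

    For radiosity, x holds the patch radiosities, b the emissions, and
    H = diag(reflectances) @ F for form-factor matrix F. Walks start at patches
    sampled proportionally to b ("shooting" from the sources) and deposit their
    current weight at every patch they visit.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(b)
    estimate = np.zeros(n)
    start_p = b / b.sum()                      # assumes b >= 0 with at least one source
    for _ in range(n_walks):
        i = rng.choice(n, p=start_p)
        w = b[i] / start_p[i]                  # importance weight for the starting patch
        estimate[i] += w                       # k = 0 term of the Neumann series
        while rng.random() > p_absorb:         # Russian-roulette termination
            j = rng.integers(n)                # uniform transition proposal
            w *= H[j, i] / ((1.0 / n) * (1.0 - p_absorb))
            i = j
            estimate[i] += w
    return estimate / n_walks

# Tiny illustrative system (not a real scene): check against the direct solve.
H = np.array([[0.0, 0.3, 0.2], [0.3, 0.0, 0.2], [0.2, 0.2, 0.0]])
b = np.array([1.0, 0.0, 0.5])
print("MC estimate:", np.round(shooting_walk_estimate(H, b), 3))
print("exact      :", np.round(np.linalg.solve(np.eye(3) - H, b), 3))
```

Starting walks in proportion to the sources b is the essence of shooting; in gathering, walks start instead at the patch whose radiosity is being estimated.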
Abstract:
Participation is a key indicator of the potential effectiveness of any population-based intervention. Defining, measuring and reporting participation in cancer screening programmes has become more heterogeneous as the number and diversity of interventions have increased, and the purposes of this benchmarking parameter have broadened. This study, centred on colorectal cancer, addresses current issues that affect the increasingly complex task of comparing screening participation across settings. Reports from programmes with a defined target population and active invitation scheme, published between 2005 and 2012, were reviewed. Differences in defining and measuring participation were identified and quantified, and participation indicators were grouped by aims of measure and temporal dimensions. We found that consistent terminology, clear and complete reporting of participation definition and systematic documentation of coverage by invitation were lacking. Further, adherence to definitions proposed in the 2010 European Guidelines for Quality Assurance in Colorectal Cancer Screening was suboptimal. Ineligible individuals represented 1% to 15% of invitations, and variable criteria for ineligibility yielded differences in participation estimates that could obscure the interpretation of colorectal cancer screening participation internationally. Excluding ineligible individuals from the reference population enhances comparability of participation measures. Standardised measures of cumulative participation to compare screening protocols with different intervals and inclusion of time since invitation in definitions are urgently needed to improve international comparability of colorectal cancer screening participation. Recommendations to improve comparability of participation indicators in cancer screening interventions are made.
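A tiny worked example (with purely illustrative numbers, not figures from the review) of how the treatment of ineligible individuals moves the participation estimate:

```python
# Hypothetical invitation round (numbers are illustrative, not from the review).
invited = 10_000
ineligible = 800            # e.g. prior colectomy, recent colonoscopy, moved away
participants = 4_600

crude = participants / invited                          # all invitations in the denominator
eligible_based = participants / (invited - ineligible)  # ineligible individuals excluded

print(f"crude participation:          {crude:.1%}")           # 46.0%
print(f"eligible-based participation: {eligible_based:.1%}")  # 50.0%
```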
Abstract:
The EU has been one of the main actors involved in the construction of an international climate change regime, adopting it as an identity sign in the international arena. This activism has fed back into the European political agenda and into those of its Member States. Climate change has therefore become a driver of the EU's growing involvement in energy policy and of the evolution of its governance. In this context, much attention has been paid to the integration of climate and energy policies agreed after the 2007 spring European Council. Apparently, this decision meant a decisive step towards the incorporation of the environmental variable into energy policy-making. Moreover, the Action Plan [2007-2009] “Energy Policy for Europe” outlined priority actions in a variety of energy-related areas, marking the commencement of the new European Energy Policy. Against this background, there is still much left to understand about its formulation and its further development. Rooted in the Environmental Policy Integration approach, this paper traces the increasing proximity between environmental and energy policies in order to understand the green contribution to the construction of the European Energy Policy.
Abstract:
Report on the scientific sojourn carried out at the Institute for Computational Molecular Science of Temple University, United States, from 2010 to 2012. Two-component systems (TCS) are used by pathogenic bacteria to sense the environment within a host and to activate mechanisms related to virulence and antimicrobial resistance. A prototypical example is the PhoQ/PhoP system, which is the major regulator of virulence in Salmonella. Hence, PhoQ is an attractive target for the design of new antibiotics against foodborne diseases. Inhibition of PhoQ-mediated bacterial virulence does not result in growth inhibition, exerting less selective pressure for the generation of antibiotic resistance. Moreover, PhoQ is a histidine kinase (HK), a class of proteins absent in animals. Nevertheless, the design of satisfactory HK inhibitors has proven to be a challenge. To compete with intracellular ATP concentrations, the affinity of an HK inhibitor must be in the micromolar-nanomolar range, whereas the current lead compounds have at best millimolar affinities. Moreover, drug selectivity depends on the conformation of a highly variable loop, referred to as the “ATP-lid”, which is difficult to study by X-ray crystallography due to its flexibility. I have investigated the binding of different HK inhibitors to PhoQ. In particular, all-atom molecular dynamics simulations have been combined with enhanced sampling techniques in order to provide structural and dynamic information on the conformation of the ATP-lid. Transient interactions between these drugs and the ATP-lid have been identified and the free energies of the different binding modes have been estimated. The results obtained pinpoint the importance of protein flexibility in HK-inhibitor binding and constitute a first step towards developing more potent and selective drugs. The computational resources of the hosting institution as well as the experience of the members of the group in drug binding and free energy methods have been crucial to carrying out this work.
Abstract:
Upon infection with the protozoan parasite Leishmania major (L. major), susceptible BALB/c mice develop non-healing lesions associated with the maturation of CD4+ TH2 cells secreting IL-4. In contrast, resistant C57BL/6 mice are able to heal their lesions, because of CD4+ TH1 cell expansion and production of high levels of IFNγ, which synergizes with tumour necrosis factor (TNF) in activating macrophages to their microbicidal state. In our study we showed that C57BL/6 mice lacking both TNF and Fas ligand (FasL) infected with L. major neither resolved their lesions nor controlled L. major replication despite a strong TH1 response. Although comparable inducible nitric oxide synthase (iNOS) activity was measured in single and double deficient mice, only mice deficient in FasL failed to control parasite replication. Moreover, FasL synergized with IFNγ for the induction of leishmanicidal activity within macrophages infected with L. major in vitro. Addition of FasL to IFNγ-stimulated macrophages led to their activation, as reflected by the secretion of tumour necrosis factor and nitric oxide, as well as the induction of their microbicidal activity, resulting in the killing of intracellular L. major. While FasL along with IFNγ and iNOS appeared to be essential for the complete control of intracellular pathogen replication, the contribution of TNF appeared more important in controlling the inflammation at the site of infection.
Macrophage activation via the Fas pathway preceded cell death, which occurred a few days after Fas-mediated activation. This programmed cell death was independent of caspase enzymatic activities, as revealed by the lack of effect of ZVAD-fmk, a pan-caspase inhibitor. These results suggested that the Fas-FasL pathway, as part of the classical activation pathway of macrophages, is essential in the stimulation of macrophages leading to a microbicidal state and to AICD, and may thus contribute to the pathogenesis of L. major infection.
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviate from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require the computation of sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
Abstract:
Much of empirical economics involves regression analysis. However, does the presentation of results affect economists' ability to make inferences for decision-making purposes? In a survey, 257 academic economists were asked to make probabilistic inferences on the basis of the outputs of a regression analysis presented in a standard format. Questions concerned the distribution of the dependent variable conditional on known values of the independent variable. However, many respondents underestimated uncertainty by failing to take into account the standard deviation of the estimated residuals. The addition of graphs did not substantially improve inferences. On the other hand, when only graphs were provided (i.e., with no statistics), respondents were substantially more accurate. We discuss implications for improving practice in reporting results of regression analyses.
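The central point about the residual standard deviation can be illustrated with a short, hypothetical statsmodels example: the interval for a new observation of the dependent variable (the quantity the survey questions asked about) is much wider than the confidence interval for the conditional mean, because it includes the residual variance.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated data for illustration: y = 2 + 0.5 x + noise with residual SD = 3.
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=3.0, size=200)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

# Inference at a known value of the independent variable, x = 5 (exog = [const, x]).
pred = fit.get_prediction(np.array([[1.0, 5.0]]))
frame = pred.summary_frame(alpha=0.05)

# mean_ci_*: interval for the conditional mean (narrow);
# obs_ci_*:  interval for a new observation of y, which adds the residual variance
#            and is the one relevant to probabilistic inferences about y itself.
print(frame[["mean", "mean_ci_lower", "mean_ci_upper",
             "obs_ci_lower", "obs_ci_upper"]].round(2))
```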