884 results for Problem analysis
Abstract:
The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties; and (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model for the network controlling CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented in the software GINsim, which enables the definition, analysis, and simulation of logical regulatory graphs.
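For illustration only (not from the paper): a minimal sketch of how an asynchronous state transition graph can be built for a small, hypothetical Boolean network and its attractors recovered as terminal strongly connected components, assuming the networkx library is available.

```python
# Minimal sketch: asynchronous state transition graph of a toy Boolean network,
# with attractors identified as terminal strongly connected components.
# The three-gene rules below are hypothetical, chosen only for illustration.
from itertools import product
import networkx as nx

# Boolean update functions for components A, B, C (hypothetical rules).
rules = [
    lambda s: int(not s[2]),            # A is on unless C is present
    lambda s: s[0],                     # B follows A
    lambda s: int(s[1] and not s[0]),   # C needs B but is repressed by A
]

stg = nx.DiGraph()
for state in product((0, 1), repeat=len(rules)):
    stg.add_node(state)
    for i, f in enumerate(rules):       # asynchronous updating: one component at a time
        target = f(state)
        if target != state[i]:
            successor = state[:i] + (target,) + state[i + 1:]
            stg.add_edge(state, successor)

# Terminal SCCs of the STG are the attractors (stable states or cyclic attractors).
condensation = nx.condensation(stg)
attractors = [condensation.nodes[n]["members"]
              for n in condensation if condensation.out_degree(n) == 0]
print(attractors)
```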
Abstract:
Low efficiency of transfection is often the limiting factor for acquiring conclusive data in reporter assays. It is especially difficult to efficiently transfect and characterize promoters in primary human cells. To overcome this problem we have developed a system in which reporter gene expression is quantified by flow cytometry. In this system, green fluorescent protein (GFP) reporter constructs are co-transfected with a reference plasmid that codes for the mouse cell surface antigen Thy-1.1 and serves to determine transfection efficiency. Comparison of mean GFP expression of the total transfected cell population with the activity of an analogous luciferase reporter showed that the sensitivity of the two reporter systems is similar. However, because GFP expression can be analyzed at the single-cell level, and because expression of the reference plasmid can be monitored in the same cells by two-color fluorescence, the GFP reporter system is in fact more sensitive, particularly in cells that can only be transfected at low efficiency.
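As a rough sketch of the per-cell analysis this enables (illustrative only, with hypothetical event data and an arbitrary gating threshold, not the authors' protocol): reporter activity can be read as the mean GFP signal of the Thy-1.1-positive (transfected) subpopulation, and the Thy-1.1-positive fraction gives the transfection efficiency.

```python
# Sketch: two-colour gating of flow cytometry events (hypothetical arrays and threshold).
import numpy as np

rng = np.random.default_rng(0)
gfp = rng.lognormal(mean=1.0, sigma=1.0, size=10_000)    # GFP fluorescence per event
thy1 = rng.lognormal(mean=0.5, sigma=1.0, size=10_000)   # Thy-1.1 fluorescence per event

thy1_gate = 5.0                                          # hypothetical gating threshold
transfected = thy1 > thy1_gate

transfection_efficiency = transfected.mean()             # fraction of Thy-1.1+ cells
reporter_activity = gfp[transfected].mean()              # mean GFP of transfected cells only
bulk_signal = gfp.mean()                                 # what a lysate-based assay would see

print(f"efficiency={transfection_efficiency:.2%}, "
      f"per-cell reporter={reporter_activity:.2f}, bulk={bulk_signal:.2f}")
```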
Abstract:
This paper explores two major issues from biophysical and historical viewpoints. We examine land management, which we define as the long-term fertility maintenance of land in relation to agriculture, fishery and forestry. We also explore humans’ positive role as agents aiming to reinforce harmonious material circulation within the land. Liebig’s view of nature, agriculture and land emphasizes the maintenance of long-term land fertility, based on his agronomical thought that the circulation of matter in agricultural fields must be maintained with manure as much as possible. The thoughts of several classical economists on nature, agriculture and land are reassessed from Liebig’s viewpoint. Then, the land management problem is discussed at a much more fundamental level, to understand the necessary conditions for life in relation to land management. This point is analyzed in terms of two mechanisms: entropy disposal on the earth, and material circulation against the gravitational field. Finally, from the historical example of the metropolis of Edo, it is shown that there is yet another necessary condition for the sustainable management of land, based on the creation of harmonious material cycles among cities, farm land, forests and surrounding sea areas, in which humans play a vital role as agents.
Abstract:
Water scarcity is a long-standing problem in Catalonia, as there are significant differences in the spatial and temporal distribution of water through the territory. There has consequently been a debate for many years about whether the solution to water scarcity must be considered in terms of efficiency or equity, the role that the public sector must play and the role that market-based instruments should play in water management. The aim of this paper is to use a Computable General Equilibrium (CGE) model to analyze the advantages and disadvantages associated with different policy instruments, from both a supply and a demand viewpoint, which can be applied to water management in Catalonia. We also introduce an ecological sector in our CGE model, allowing us to analyze the environmental impact of the alternative policies simulated. The calibration of the exogenous variables of the CGE model is performed by using a Social Accounting Matrix (SAM) for the Catalan economy with 2001 data. The results suggest that, taking into account the principle of sustainability of the resource, the policy debate between supply and demand in water policies is obsolete, and that a new combination of policies is required to respect the different values associated with water.
Keywords: Water Policies; Computable General Equilibrium Model; Economic Effects; Environmental Effects.
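A minimal sketch of the calibration step mentioned here (not the authors' model; the SAM figures are hypothetical): under a Cobb-Douglas specification, value shares taken directly from a Social Accounting Matrix pin down the share parameters, so the benchmark data are reproduced as an equilibrium.

```python
# Sketch: calibrating Cobb-Douglas share parameters from a toy SAM column (hypothetical figures).
# The column lists payments made by one sector; only its cost structure is used here.
toy_sam_column = {
    "labour": 40.0,
    "capital": 30.0,
    "water_resource": 10.0,
    "intermediates": 20.0,
}

total_cost = sum(toy_sam_column.values())
shares = {inp: val / total_cost for inp, val in toy_sam_column.items()}

# With Cobb-Douglas technology, these value shares are the calibrated exponents alpha_i,
# and the scale parameter is chosen so that benchmark output equals benchmark cost.
print(shares)   # e.g. {'labour': 0.4, 'capital': 0.3, ...}
```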
Abstract:
In this study I try to explain the systemic problem of the low economic competitiveness of nuclear energy for the production of electricity by carrying out a biophysical analysis of its production process. Given the fact that neither econometric approaches nor one-dimensional methods of energy analysis are effective, I introduce the concept of biophysical explanation as a quantitative analysis capable of handling the inherent ambiguity associated with the concept of energy. In particular, the quantities of energy considered as relevant for the assessment can only be measured and aggregated after having agreed on a pre-analytical definition of a grammar characterizing a given set of finite transformations. Using this grammar it becomes possible to provide a biophysical explanation for the low economic competitiveness of nuclear energy in the production of electricity. When comparing the various unit operations of the process of production of electricity with nuclear energy to the analogous unit operations of the process of production of fossil energy, we see that the various phases of the process are the same. The only difference is related to the characteristics of the process associated with the generation of heat, which are completely different in the two systems. Since the cost of production of fossil energy provides the baseline of economic competitiveness of electricity, the (lack of) economic competitiveness of the production of electricity from nuclear energy can be studied by comparing the biophysical costs associated with the different unit operations taking place in nuclear and fossil power plants when generating process heat or net electricity. In particular, the analysis focuses on fossil-fuel requirements and labor requirements for those phases that both nuclear plants and fossil energy plants have in common: (i) mining; (ii) refining/enriching; (iii) generating heat/electricity; (iv) handling the pollution/radioactive wastes. By adopting this approach, it becomes possible to explain the systemic low economic competitiveness of nuclear energy in the production of electricity, because of: (i) its dependence on oil, limiting its possible role as a carbon-free alternative; (ii) the choices made in relation to its fuel cycle, especially whether it includes reprocessing operations or not; (iii) the unavoidable uncertainty in the definition of the characteristics of its process; (iv) its large inertia (lack of flexibility) due to issues of time scale; and (v) its low power level.
Abstract:
Although the sensitivity to light of thioridazine and its metabolites has been described, the problem does not seem to be widely acknowledged. Indeed, a survey of the literature shows that assays of these compounds under light-protected conditions have been performed only in a few of the numerous analytical studies on this drug. In the present study, thioridazine, its metabolites, and 18 other neuroleptics were tested for their sensitivity to light under conditions used for their analysis. The results show that light significantly affects the analysis of thioridazine and its metabolites. It readily causes the racemization of the isomeric pairs of thioridazine 5-sulphoxide and greatly decreases the concentration of thioridazine. This sensitivity to light varied with the medium used (most sensitive in acidic media) and also with the molecule (in order of decreasing sensitivity: thioridazine > mesoridazine > sulforidazine). Degradation in neutral or basic media was slow, with the exception of mesoridazine in a neutral medium. Twelve other phenothiazines tested, as well as chlorprotixene, a thioxanthene drug, were found to be sensitive to light in acidic media, whereas flupenthixol and zuclopenthixol (two thioxanthenes), clozapine, fluperlapine, and haloperidol (a butyrophenone) did not seem to be affected. In addition to being sensitive to light, some compounds may be readily oxidized by peroxide-containing solvents.
Abstract:
In the last decade, dengue fever (DF) in Brazil has been recognized as an important public health problem, and an increasing number of dengue haemorrhagic fever (DHF) cases have been reported since the introduction of dengue virus type 2 (DEN-2) into the country in 1990. In order to analyze the complete genome sequence of a DEN-2 Brazilian strain (BR64022/98), we designed primers to amplify contiguous segments of approximately 500 base pairs across the entire sequence of the viral genome. Twenty fragments amplified by reverse transcriptase-PCR were cloned, and the complete nucleotide and the deduced amino acid sequences were determined. This constitutes the first complete genetic characterization of a DEN-2 strain from Brazil. All amino acid changes differentiating strains related to the Asian/American-Asian genotype were observed in BR64022/98, indicating the Asiatic origin of the strain.
Abstract:
1.6 STRUCTURE OF THIS THESIS
- Chapter 1 presents the motivations of this dissertation by illustrating two gaps in the current body of knowledge that are worth filling, describes the research problem addressed by this thesis and presents the research methodology used to achieve this goal.
- Chapter 2 reviews the existing literature, showing that environment analysis is a vital strategic task, that it should be supported by adapted information systems, and that there is thus a need for a conceptual model of the environment providing a reference framework for better integrating the various existing methods and a more formal definition of the various aspects to support the development of suitable tools.
- Chapter 3 proposes a conceptual model that specifies the various environmental aspects that are relevant for strategic decision making, describes how they relate to each other, and defines them in a more formal way that is better suited to information systems development.
- Chapter 4 is dedicated to the evaluation of the proposed model on the basis of its application to a concrete environment, in order to assess its suitability for describing the current conditions and potential evolution of a real environment and to get an idea of its usefulness.
- Chapter 5 goes a step further by assembling a toolbox describing a set of methods that can be used to analyze the various environmental aspects put forward by the model, and by providing more detailed specifications for a number of them to show how our model can be used to facilitate their implementation as software tools.
- Chapter 6 describes a prototype of a strategic decision support tool that allows the analysis of some aspects of the environment that are not well supported by existing tools, namely the relationships between multiple actors and issues. The usefulness of this prototype is evaluated on the basis of its application to a concrete environment.
- Chapter 7 concludes this thesis by summarizing its various contributions and by proposing further interesting research directions.
Abstract:
We introduce an algebraic operator framework to study discounted penalty functions in renewal risk models. For inter-arrival and claim size distributions with rational Laplace transform, the usual integral equation is transformed into a boundary value problem, which is solved by symbolic techniques. The factorization of the differential operator can be lifted to the level of boundary value problems, amounting to iteratively solving first-order problems. This leads to an explicit expression for the Gerber-Shiu function in terms of the penalty function.
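For readers unfamiliar with the object being solved for, the Gerber-Shiu discounted penalty function takes the standard form below (standard notation, not taken from this paper); here U(t) is the risk reserve process, T the ruin time, delta the discount rate and w the penalty applied to the surplus prior to ruin and the deficit at ruin.

```latex
m(u) \;=\; \mathbb{E}\!\left[\, e^{-\delta T}\, w\!\bigl(U(T^-),\, |U(T)|\bigr)\, \mathbf{1}_{\{T<\infty\}} \,\middle|\, U(0)=u \,\right], \qquad u \ge 0 .
```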
Abstract:
Quantification is a major problem when using histology to study the influence of ecological factors on tree structure. This paper presents a method to prepare and to analyse transverse sections of the cambial zone and of the conductive phloem in bark samples. The following paper (II) presents the automated measurement procedure. Part I here describes and discusses the preparation method, and the influence of tree age on the observed structure. Highly contrasted images of samples extracted at breast height during dormancy were analysed with an automatic image analyser. Comparing three young (38 years) and three old (147 years) trees, age-related differences were identified by size and shape parameters, at both cell and tissue levels. In the cambial zone, older trees had larger and more rectangular fusiform initials. In the phloem, sieve tubes were also larger, but their shape did not change and the area for sap conduction was similar in both categories. Nevertheless, alterations were limited, and required statistical analysis to be identified and confirmed. The physiological implications of the structural changes are discussed.
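As a rough sketch of the kind of automated measurement involved (illustrative only; the actual procedure is described in the companion paper II, and the image file name is hypothetical): per-cell size and shape parameters such as area and the aspect ratio of the fitted ellipse can be extracted from a segmented, high-contrast section image with standard region-labelling tools, here assuming scikit-image is available.

```python
# Sketch: per-cell size and shape parameters from a binarized section image
# (hypothetical image file; assumes scikit-image is installed).
from skimage import io, measure
from skimage.filters import threshold_otsu

image = io.imread("section.png", as_gray=True)   # hypothetical high-contrast image
binary = image > threshold_otsu(image)           # segment cell lumina from walls
labels = measure.label(binary)

for region in measure.regionprops(labels):
    if region.area < 20:                         # discard segmentation noise
        continue
    aspect = region.major_axis_length / max(region.minor_axis_length, 1e-9)
    print(f"cell {region.label}: area={region.area}, aspect_ratio={aspect:.2f}")
```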
Abstract:
In an earlier investigation (Burger et al., 2000) five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and evaluating the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used and the cosine-theta coefficient as similarity measure. During the last decade considerable progress in compositional data analysis was made and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check if the application of suitable data transformations, reduction of the D-part simplex to two or three factors and visual interpretation of the factor scores would lead to a revision of earlier results and to answers to open questions. In this paper we follow the lines of a paper of R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transformation of the components and visualization of the factor scores in a spatial context: the compositional factors will be plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process.
Keywords: compositional data analysis, biplot, deep sea sediments
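A minimal sketch of the ilr step referred to above (not the authors' code; the four-part composition is hypothetical): an ilr transformation with respect to an orthonormal basis maps a D-part composition to D-1 unconstrained coordinates, which can then be reduced to a few factors and plotted against core depth.

```python
# Sketch: isometric log-ratio (ilr) coordinates of a D-part composition (hypothetical data).
import numpy as np

def ilr(x):
    """ilr coordinates w.r.t. a standard orthonormal (pivot) basis."""
    x = np.asarray(x, dtype=float)
    x = x / x.sum(axis=-1, keepdims=True)               # close the composition
    D = x.shape[-1]
    z = np.empty(x.shape[:-1] + (D - 1,))
    for i in range(1, D):
        gm = np.exp(np.log(x[..., :i]).mean(axis=-1))    # geometric mean of the first i parts
        z[..., i - 1] = np.sqrt(i / (i + 1)) * np.log(gm / x[..., i])
    return z

# Hypothetical geochemical composition of one core sample (fractions of four components).
sample = [0.55, 0.25, 0.15, 0.05]
print(ilr(sample))   # three unconstrained coordinates, ready for PCA / factor analysis
```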
Abstract:
Factor analysis as a frequent technique for multivariate data inspection is widely used also for compositional data analysis. The usual way is to use a centered logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then y = Λf + e (1), with the factors f of dimension k < D, the error term e, and the loadings matrix Λ. Using the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as Cov(y) = ΛΛ^T + ψ (2), where ψ = Cov(e) has a diagonal form. The diagonal elements of ψ as well as the loadings matrix Λ are estimated from an estimation of Cov(y). Given observed clr transformed data Y as realizations of the random vector y, outliers or deviations from the idealized model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y will lead to robust estimates of Λ and ψ in (2), see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, like the MCD or the S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem. The data matrix Y is transformed to a matrix Z by using an orthonormal basis of lower dimension. Using the ilr transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by C(Y) = V C(Z) V^T, where the matrix V with orthonormal columns comes from the relation between the clr and the ilr transformation. Now the parameters in the model (2) can be estimated (Basilevsky, 1994) and the results have a direct interpretation since the links to the original variables are still preserved. The above procedure will be applied to data from geochemistry. Our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
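A sketch of this workflow under stated assumptions (hypothetical Dirichlet-generated compositions; scikit-learn's MinCovDet used as the MCD estimator; a simple eigendecomposition standing in for a full factor-model fit): robustly estimate the covariance of the ilr coordinates, back-transform it to the clr space via C(Y) = V C(Z) V^T, and read off loadings.

```python
# Sketch: robust covariance of ilr coordinates, back-transformed to clr space
# (hypothetical compositional data; assumes numpy and scikit-learn are installed).
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
X = rng.dirichlet(alpha=[4, 3, 2, 1], size=200)      # hypothetical D=4 compositions

D = X.shape[1]
# Orthonormal basis matrix V (D x (D-1)) linking clr and ilr: Z = clr(X) @ V.
V = np.zeros((D, D - 1))
for i in range(1, D):
    V[:i, i - 1] = 1.0 / i
    V[i, i - 1] = -1.0
    V[:, i - 1] *= np.sqrt(i / (i + 1))

clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
Z = clr @ V                                          # ilr coordinates (full rank)

C_Z = MinCovDet(random_state=0).fit(Z).covariance_   # robust MCD covariance in ilr space
C_Y = V @ C_Z @ V.T                                  # back-transform: C(Y) = V C(Z) V^T

# Loadings for k factors from the eigendecomposition of C(Y) (principal-factor stand-in).
k = 2
eigval, eigvec = np.linalg.eigh(C_Y)
loadings = eigvec[:, ::-1][:, :k] * np.sqrt(np.maximum(eigval[::-1][:k], 0))
print(loadings)
```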
Abstract:
The application of compositional data analysis through log ratio transformations corresponds to a multinomial logit model for the shares themselves. This model is characterized by the property of Independence of Irrelevant Alternatives (IIA). IIA states that the odds ratio, in this case the ratio of shares, is invariant to the addition or deletion of outcomes to the problem. It is exactly this invariance of the ratio that underlies the commonly used zero replacement procedure in compositional data analysis. In this paper we investigate using the nested logit model, which does not embody IIA, and an associated zero replacement procedure, and compare its performance with that of the more usual approach of using the multinomial logit model. Our comparisons exploit a data set that combines voting data by electoral division with corresponding census data for each division for the 2001 Federal election in Australia.
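To make the IIA property concrete (standard multinomial logit algebra, not notation from this paper): when shares are generated by a multinomial logit, the ratio of any two shares depends only on those two alternatives, so it is unaffected by adding or deleting other outcomes.

```latex
s_j = \frac{\exp(x'\beta_j)}{\sum_{k=1}^{J}\exp(x'\beta_k)},
\qquad
\frac{s_j}{s_l} = \exp\!\bigl(x'(\beta_j-\beta_l)\bigr)
\quad\text{(independent of the remaining alternatives).}
```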
Abstract:
Compositional random vectors are fundamental tools in the Bayesian analysis of categorical data. Many of the issues that are discussed with reference to the statistical analysis of compositional data have a natural counterpart in the construction of a Bayesian statistical model for categorical data. This note builds on the idea of cross-fertilization of the two areas recommended by Aitchison (1986) in his seminal book on compositional data. Particular emphasis is put on the problem of what parameterization to use.
Abstract:
At CoDaWork'03 we presented work on the analysis of archaeological glass compositional data. Such data typically consist of geochemical compositions involving 10-12 variables and approximate completely compositional data if the main component, silica, is included. We suggested that what has been termed 'crude' principal component analysis (PCA) of standardized data often identified interpretable pattern in the data more readily than analyses based on log-ratio transformed data (LRA). The fundamental problem is that, in LRA, minor oxides with high relative variation, that may not be structure carrying, can dominate an analysis and obscure pattern associated with variables present at higher absolute levels. We investigate this further using sub-compositional data relating to archaeological glasses found on Israeli sites. A simple model for glass-making is that it is based on a 'recipe' consisting of two 'ingredients', sand and a source of soda. Our analysis focuses on the sub-composition of components associated with the sand source. A 'crude' PCA of standardized data shows two clear compositional groups that can be interpreted in terms of different recipes being used at different periods, reflected in absolute differences in the composition. LRA analysis can be undertaken either by normalizing the data or defining a 'residual'. In either case, after some 'tuning', these groups are recovered. The results from the normalized LRA are differently interpreted as showing that the source of sand used to make the glass differed. These results are complementary. One relates to the recipe used. The other relates to the composition (and presumed sources) of one of the ingredients. It seems to be axiomatic in some expositions of LRA that statistical analysis of compositional data should focus on relative variation via the use of ratios. Our analysis suggests that absolute differences can also be informative.
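A compact sketch of the two analyses being contrasted (illustrative only, with hypothetical glass-like compositions; assumes NumPy and scikit-learn): a 'crude' PCA of column-standardized data versus a PCA of clr-transformed (log-ratio) data.

```python
# Sketch: 'crude' PCA on standardized compositions vs PCA on clr-transformed data
# (hypothetical compositional data; assumes numpy and scikit-learn are installed).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.dirichlet(alpha=[60, 20, 10, 5, 3, 2], size=150)   # hypothetical 6-oxide compositions

# 'Crude' PCA: centre and scale each component to unit variance, then PCA.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
crude_scores = PCA(n_components=2).fit_transform(Xs)

# Log-ratio PCA: centred log-ratio transform, then PCA.
clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
lra_scores = PCA(n_components=2).fit_transform(clr)

# The two score plots can group the same samples differently: minor components with high
# relative variation dominate the log-ratio analysis, while absolute differences in the
# major components drive the 'crude' one.
print(crude_scores[:3])
print(lra_scores[:3])
```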