34 results for Hierarchical dynamic models

in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

100.00%

Publisher:

Abstract:

Gas sensing systems based on low-cost chemical sensor arrays are gaining interest for the analysis of multicomponent gas mixtures. These sensors suffer from several problems, e.g., nonlinearities and slow time response, which can be partially solved by digital signal processing. Our approach is based on building a nonlinear inverse dynamic system. Results for different identification techniques, including artificial neural networks and Wiener series, are compared in terms of measurement accuracy.
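The idea of an inverse dynamic model can be illustrated with a minimal sketch. The sensor below (first-order lag plus a tanh nonlinearity), the lag count, and all parameter values are made-up stand-ins, not taken from the paper; the polynomial expansion of lagged readings plays the role of a truncated Volterra/Wiener series fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor: slow first-order dynamics followed by a static
# nonlinearity (all values illustrative, not from the paper).
n = 2000
u = rng.uniform(0.0, 1.0, n)            # true gas concentration
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + 0.1 * u[t]  # slow time response
y = np.tanh(1.5 * x)                    # nonlinear sensor reading

# Inverse dynamic model: predict u[t] from current and lagged readings
# using a second-order polynomial expansion (a truncated Wiener-style
# series) fitted by linear least squares.
lags = 3
rows = []
for t in range(lags, n):
    v = y[t - lags:t + 1]
    feats = [1.0] + list(v) + [a * b for i, a in enumerate(v) for b in v[i:]]
    rows.append(feats)
Phi = np.array(rows)
coef, *_ = np.linalg.lstsq(Phi, u[lags:], rcond=None)
u_hat = Phi @ coef
rmse = float(np.sqrt(np.mean((u_hat - u[lags:]) ** 2)))
print(f"inverse-model RMSE: {rmse:.3f}")
```

Because the inverse of a first-order lag depends only on the current and previous state, a short lag window with quadratic cross terms already recovers the input far better than a static calibration would.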

Relevance:

100.00%

Publisher:

Abstract:

Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we developed in previous work a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization can be performed on the recast GMA model. With this technique, optimization is greatly facilitated and the results are transferable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models, which extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps to overcome some of the numerical difficulties that arise during the global optimization task.
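The recasting step can be shown on a deliberately tiny example. The one-pool Michaelis-Menten system and all rate constants below are invented for illustration: introducing the auxiliary variable w = Km + S turns the saturable rate into an exact power-law (GMA) product, and integrating both forms side by side confirms they trace the same trajectory.

```python
# Hypothetical one-pool example: dS/dt = inflow - Vmax*S/(Km + S).
Vmax, Km, inflow = 1.0, 0.5, 0.4

def f_original(s):
    return inflow - Vmax * s / (Km + s)

# Recast: introduce w = Km + s, so the Michaelis-Menten rate becomes the
# power-law (GMA) product Vmax * s**1 * w**-1, with dw/dt = ds/dt.
def f_gma(s, w):
    ds = inflow - Vmax * s * w ** -1.0
    return ds, ds

dt, steps = 1e-3, 20000
s1 = s2 = 1.0
w = Km + s2
for _ in range(steps):              # Euler integration of both forms
    s1 += dt * f_original(s1)
    ds, dw = f_gma(s2, w)
    s2 += dt * ds
    w += dt * dw
print(s1, s2)                       # the two trajectories coincide
```

The recast system is exactly equivalent, not an approximation: w tracks Km + S identically, so any optimum found for the GMA form maps back to the original kinetic model.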

Relevance:

100.00%

Publisher:

Abstract:

Development of research methods requires a systematic review of their status. This study focuses on the use of Hierarchical Linear Modeling methods in psychiatric research. The evaluation covers 207 documents published up to 2007 and indexed in the ISI Web of Knowledge databases; the analyses focus on the 194 articles in the sample. Bibliometric methods are used to describe the publication patterns. Results indicate a growing interest in applying the models and an establishment of the methods after 2000. Both Lotka's and Bradford's distributions are fitted to the data.
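Fitting Lotka's distribution is a one-regression exercise. The productivity counts below are invented for illustration (they are not the study's data); the point is only the mechanics: Lotka's law predicts the number of authors with k publications falls off as a power of k, and the exponent is estimated on a log-log scale.

```python
import numpy as np

# Hypothetical author-productivity counts: authors[i] = number of authors
# with exactly k[i] articles, following roughly an inverse-square pattern.
k = np.arange(1, 9)
authors = np.array([120, 30, 13, 8, 5, 3, 2, 2])

# Lotka's law posits authors(k) ~ C * k**(-a); estimate a by least
# squares on the log-log scale.
A = np.column_stack([np.ones(k.size), np.log(k)])
(logC, neg_a), *_ = np.linalg.lstsq(A, np.log(authors), rcond=None)
a = -neg_a
print(f"estimated Lotka exponent: {a:.2f}")
```

An exponent near 2 reproduces Lotka's classical inverse-square result; a bibliometric study would additionally test goodness of fit rather than rely on the regression alone.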

Relevance:

90.00%

Publisher:

Abstract:

Background: Single nucleotide polymorphisms (SNPs) are the most frequent type of sequence variation between individuals, and represent a promising tool for finding genetic determinants of complex diseases and understanding the differences in drug response. In this regard, it is of particular interest to study the effect of non-synonymous SNPs in the context of biological networks such as cell signalling pathways. UniProt provides curated information about the functional and phenotypic effects of sequence variation, including SNPs, as well as of mutations of protein sequences. However, no strategy has been developed to integrate this information with biological networks, with the ultimate goal of studying the impact of the functional effect of SNPs on the structure and dynamics of biological networks. Results: First, we identified the different challenges posed by the integration of the phenotypic effect of sequence variants and mutations with biological networks. Second, we developed a strategy for the combination of data extracted from public resources, such as UniProt, NCBI dbSNP, Reactome and BioModels. We generated attribute files containing phenotypic and genotypic annotations for the nodes of biological networks, which can be imported into network visualization tools such as Cytoscape. These resources allow the mapping and visualization of mutations and natural variations of human proteins and their phenotypic effect on biological networks (e.g. signalling pathways, protein-protein interaction networks, dynamic models). Finally, an example of the use of the sequence variation data in the dynamics of a network model is presented. Conclusion: In this paper we present a general strategy for the integration of pathway and sequence variation data for visualization, analysis and modelling purposes, including the study of the functional impact of protein sequence variations on the dynamics of signalling pathways. This is of particular interest when the SNP or mutation is known to be associated with disease. We expect that this approach will help in the study of the functional impact of disease-associated SNPs on the behaviour of cell signalling pathways, which ultimately will lead to a better understanding of the mechanisms underlying complex diseases.
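The node attribute file described in the Results is, mechanically, a tab-separated table keyed by a node identifier. The sketch below writes one such table; the three variant rows are hard-coded, well-known examples chosen for illustration, whereas the paper's pipeline would extract them from UniProt and dbSNP.

```python
import csv
import io

# Hypothetical variant annotations keyed by UniProt accession; in the
# paper these are extracted from UniProt/dbSNP rather than hard-coded.
variants = [
    ("P04637", "rs28934578", "R175H", "disease-associated"),
    ("P00533", "rs121434568", "L858R", "disease-associated"),
    ("P01112", "rs104894230", "G13D", "disease-associated"),
]

# Write a Cytoscape-importable node attribute table: one header row,
# then one row per network node carrying its genotypic annotations.
buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t")
writer.writerow(["shared name", "dbSNP_id", "aa_change", "phenotype"])
for row in variants:
    writer.writerow(row)
table = buf.getvalue()
print(table)
```

Loaded as a node table, these columns become node attributes that Cytoscape visual mappings can use, e.g. colouring disease-associated nodes in a pathway view.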

Relevance:

80.00%

Publisher:

Abstract:

Although research has documented the importance of emotion in risk perception, little is known about its prevalence in everyday life. Using the Experience Sampling Method, 94 part-time students were prompted at random via cellular telephones to report on mood state and three emotions and to assess risk on thirty occasions during their working hours. The emotions valence, arousal, and dominance were measured using self-assessment manikins (Bradley & Lang, 1994). Hierarchical linear models (HLM) revealed that mood state and emotions explained significant variance in risk perception. In addition, valence and arousal accounted for variance over and above reason (measured by severity and possibility of risks). Six risks were reassessed in a post-experimental session and found to be lower than their real-time counterparts. The study demonstrates the feasibility and value of collecting representative samples of data with simple technology. Evidence for the statistical consistency of the HLM estimates is provided in an Appendix.
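The core of the HLM analysis, a level-1 slope estimated while person-level intercepts vary, can be sketched with simulated data. Everything below (effect sizes, variances, the single valence predictor) is invented; within-person centering is used as a simple stand-in for the maximum-likelihood estimation a full HLM package would perform.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data in the spirit of the study: 94 persons x 30
# occasions, risk perception driven by occasion-level valence plus a
# person-level random intercept.
n_persons, n_occ = 94, 30
person = np.repeat(np.arange(n_persons), n_occ)
valence = rng.normal(0.0, 1.0, n_persons * n_occ)
intercepts = rng.normal(5.0, 1.0, n_persons)        # level-2 variation
risk = intercepts[person] - 0.6 * valence + rng.normal(0.0, 0.5, person.size)

# Within-person centering removes the person intercepts, so OLS on the
# centered data recovers the level-1 (occasion-level) slope of an HLM.
def center(x):
    means = np.bincount(person, weights=x) / n_occ
    return x - means[person]

slope = (center(valence) @ center(risk)) / (center(valence) @ center(valence))
print(f"estimated within-person slope: {slope:.2f}")  # true value is -0.6
```

With 2,820 observations the slope is recovered tightly; a full HLM would additionally report the variance of the random intercepts and allow random slopes.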

Relevance:

80.00%

Publisher:

Abstract:

We discuss practical issues related to the use of the Parameterized Expectations Approach (PEA) for solving non-linear stochastic dynamic models with rational expectations. This approach has been applied in models of macroeconomics, financial economics, economic growth, contract theory, etc. It turns out to be a convenient algorithm, especially when there is a large number of state variables and stochastic shocks in the conditional expectations. We address the practical issues that arise when applying the algorithm, describe a Fortran program implementing it that is available through the internet, and illustrate these issues in a battery of six examples.
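The PEA loop, parameterize the conditional expectation, simulate, regress the realized expectation term on the state, and update with damping, can be sketched on the Brock-Mirman growth model. This model (not necessarily one of the paper's six examples) is chosen because its exact solution c = (1 - αβ)zk^α makes the fixed point checkable; all parameter values and the initial guess are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Brock-Mirman growth model: log utility, full depreciation.
alpha, beta, rho, sigma = 0.33, 0.95, 0.90, 0.02
T = 2000
lnz = np.zeros(T)
for t in range(1, T):
    lnz[t] = rho * lnz[t - 1] + rng.normal(0.0, sigma)
z = np.exp(lnz)

# Parameterize the conditional expectation in the Euler equation as
# exp(c0 + c1*ln k + c2*ln z) and iterate on the coefficients.
coefs = np.array([0.2, -alpha, -1.0])   # illustrative initial guess
for _ in range(500):
    k = np.empty(T + 1)
    k[0] = (alpha * beta) ** (1.0 / (1.0 - alpha))  # det. steady state
    c = np.empty(T)
    for t in range(T):
        psi = np.exp(coefs[0] + coefs[1] * np.log(k[t]) + coefs[2] * lnz[t])
        y = z[t] * k[t] ** alpha
        c[t] = min(1.0 / (beta * psi), 0.99 * y)    # keep capital positive
        k[t + 1] = y - c[t]
    # realized counterpart of the term inside the expectation
    e = alpha * z[1:] * k[1:T] ** (alpha - 1.0) / c[1:]
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), lnz[:T - 1]])
    new, *_ = np.linalg.lstsq(X, np.log(e), rcond=None)
    if np.max(np.abs(new - coefs)) < 1e-9:
        break
    coefs = 0.5 * coefs + 0.5 * new     # damped fixed-point update

coefs_true = np.array([-np.log(beta * (1 - alpha * beta)), -alpha, -1.0])
print(coefs, coefs_true)
```

The damping factor of 0.5 matters here: the undamped fixed-point map is locally unstable for this calibration, which is exactly the kind of practical issue the paper discusses.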

Relevance:

80.00%

Publisher:

Abstract:

This study offers a statistical analysis of the persistence of annual profits across a sample of firms from different European Union (EU) countries. To this end, a Bayesian dynamic model is used that breaks the annual behaviour of those profits down into a permanent structural component on the one hand and a transitory component on the other, while also distinguishing between general effects affecting the industry as a whole to which each firm belongs and specific effects affecting each firm in particular. This breakdown enables the relative importance of those fundamental components to be evaluated. The data analysed come from a sample of 23,293 firms in EU countries selected from the AMADEUS database. The period analysed ran from 1999 to 2007 and 21 sectors were analysed, chosen in such a way that there was a sufficiently large number of firms in each country-sector combination for the industry effects to be estimated accurately enough for meaningful comparisons to be made by sector and country. The analysis has been conducted by sector and by country from a Bayesian perspective, thus making the study more flexible and realistic since the estimates obtained do not depend on asymptotic results. In general terms, the study finds that, although the industry effects are significant, the specific effects are more important. That importance varies depending on the sector or the country in which the firm carries out its activity. The influence of firm effects accounts for more than 90% of total variation and displays a significantly lower degree of persistence, with adjustment speeds oscillating around 51.1%. However, this pattern is not homogeneous but depends on the sector and country analysed. Industry effects have a more marginal importance, being significantly more persistent, with adjustment speeds oscillating around 10%, and this degree of persistence is more homogeneous at both country and sector levels.
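The decomposition the study performs can be mimicked on simulated data. The two-component process below, a slow industry-wide AR(1) plus faster-reverting firm-specific AR(1) effects, uses made-up variances and persistence values chosen to echo the reported pattern (firm effects dominate the variation and adjust faster); the Bayesian machinery is replaced by simple moment estimators.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative two-component profit process: an industry-wide AR(1)
# shared by all firms plus firm-specific AR(1) effects.
n_firms, T = 200, 9                 # 1999-2007 spans nine annual values
rho_ind, rho_firm = 0.90, 0.49      # adjustment speeds of 10% and 51%
ind = np.zeros(T)
for t in range(1, T):
    ind[t] = rho_ind * ind[t - 1] + rng.normal(0.0, 0.1)
firm = np.zeros((n_firms, T))
for t in range(1, T):
    firm[:, t] = rho_firm * firm[:, t - 1] + rng.normal(0.0, 1.0, n_firms)
profits = ind[None, :] + firm

# Share of the total variation due to firm-specific effects
share_firm = firm.var() / (firm.var() + ind.var())

# Deviations from the cross-sectional mean strip out the industry
# component; a pooled AR(1) regression on them estimates firm persistence.
dev = profits - profits.mean(axis=0)
rho_hat = (dev[:, :-1] * dev[:, 1:]).sum() / (dev[:, :-1] ** 2).sum()
print(f"firm share: {share_firm:.2f}, firm persistence: {rho_hat:.2f}")
```

The cross-sectional demeaning trick works because the industry component is common to every firm in a given year; the study's hierarchical Bayesian model does this separation jointly with full uncertainty quantification.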

Relevance:

80.00%

Publisher:

Abstract:

The objective of this paper is to show the additional insight that results from adding greenhouse gas (GHG) emissions to plant performance evaluation criteria, such as the effluent quality (EQI) and operational cost (OCI) indices, when evaluating (plant-wide) control/operational strategies in wastewater treatment plants (WWTPs). The proposed GHG evaluation is based on a set of comprehensive dynamic models that estimate the most significant potential on-site and off-site sources of CO2, CH4 and N2O. The study calculates and discusses the changes in EQI, OCI and GHG emissions as a consequence of varying four process variables: (i) the set point of the aeration control in the activated sludge section; (ii) the removal efficiency of total suspended solids (TSS) in the primary clarifier; (iii) the temperature in the anaerobic digester (AD); and (iv) the control of the flow of AD supernatants coming from sludge treatment. Based upon the assumptions built into the model structures, simulation results highlight the potential undesirable effect of increased GHG production when carrying out local energy optimization of the aeration system in the activated sludge section and energy recovery from the AD. Although off-site CO2 emissions may decrease, the effect is counterbalanced by increased N2O emissions, especially since N2O has a roughly 300-fold stronger greenhouse effect than CO2. The reported results emphasize the importance and usefulness of multiple evaluation criteria for comparing and evaluating (plant-wide) control strategies in a WWTP for more informed operational decision making.
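The counterbalancing effect is simple arithmetic once emissions are put on a CO2-equivalent scale. The inventories below are invented round numbers, and the CH4 factor of 25 is a commonly used 100-year value rather than one stated in the abstract (which only cites the roughly 300-fold factor for N2O); the point is that a modest N2O increase can wipe out a 10% CO2 saving.

```python
# Global warming potentials (kg CO2-eq per kg of gas); illustrative
# 100-year values - the abstract itself cites ~300-fold for N2O.
gwp = {"CO2": 1.0, "CH4": 25.0, "N2O": 300.0}

def co2_equivalent(emissions_kg):
    """Aggregate a gas inventory (kg) into kg CO2-equivalent."""
    return sum(gwp[gas] * kg for gas, kg in emissions_kg.items())

# Hypothetical plant inventories: the "optimized" strategy cuts CO2
# but raises N2O, mirroring the trade-off reported in the study.
baseline = {"CO2": 1000.0, "CH4": 2.0, "N2O": 0.5}
optimized = {"CO2": 900.0, "CH4": 2.0, "N2O": 1.0}
print(co2_equivalent(baseline), co2_equivalent(optimized))
```

Here the "optimized" strategy emits 1250 kg CO2-eq against a 1200 kg baseline despite the 100 kg CO2 saving, which is precisely why the paper argues for multi-criteria evaluation.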

Relevance:

40.00%

Publisher:

Abstract:

Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable (DLV) models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimating general DLV models.

Relevance:

40.00%

Publisher:

Abstract:

Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
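The pipeline, simulate at a trial parameter, kernel-smooth the simulation to get conditional moments, then match them to the data, can be sketched end to end. An observable AR(1) stands in for a latent-variable model purely so the true parameter (0.5) is known; the bandwidth, grid, instrument, and sample sizes are all arbitrary illustrative choices.

```python
import numpy as np

def simulate(rho, T, seed):
    # AR(1) stand-in for the model to be estimated.
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
    return y

y_data = simulate(0.5, 2000, seed=5)            # "observed" data

def nw_conditional_mean(x_sim, y_sim, x_eval, h=0.2):
    # Nadaraya-Watson kernel regression of y_sim on x_sim, evaluated
    # at the observed conditioning points x_eval.
    w = np.exp(-0.5 * ((x_eval[:, None] - x_sim[None, :]) / h) ** 2)
    return (w @ y_sim) / w.sum(axis=1)

def objective(rho):
    sim = simulate(rho, 5000, seed=6)           # common shocks across trials
    m = nw_conditional_mean(sim[:-1], sim[1:], y_data[:-1])
    g = np.mean((y_data[1:] - m) * y_data[:-1])  # instrumented moment
    return g ** 2

grid = np.linspace(0.0, 0.9, 19)
rho_hat = grid[np.argmin([objective(r) for r in grid])]
print(f"estimated rho: {rho_hat:.2f}")
```

Nothing here conditions the simulation on the data, only the kernel regression uses the observed conditioning points, which is exactly the property that makes the estimator applicable when simulating subject to the conditioning information is infeasible.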

Relevance:

40.00%

Publisher:

Abstract:

A parts-based model is a parametrization of an object class using a collection of landmarks that follow the object structure. The matching of parts-based models is one of the problems where pairwise Conditional Random Fields have been successfully applied, mainly because the simplicity of the graphs involved, usually trees, keeps inference and learning tractable. However, these models do not consider possible statistical patterns among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking the whole hierarchy into account. To preserve tractable inference we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers, one for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
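The stacked-classification idea in the second part can be sketched with two feature views standing in for two kernels. Everything below is a toy construction: least-squares linear classifiers replace whatever base learners the thesis uses, and the data are synthetic, but the structure, per-kernel classifiers whose responses feed a second-level classifier instead of being combined with fixed linear weights, is the one described.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy two-class problem; two hypothetical "kernels" = two feature views.
n = 400
X1 = rng.normal(0.0, 1.0, (n, 3))        # view 1
X2 = rng.normal(0.0, 1.0, (n, 3))        # view 2
y = np.sign(X1[:, 0] + 0.5 * X2[:, 1] + 0.3 * rng.normal(0.0, 1.0, n))

def fit_linear(X, y):
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def score(X, w):
    return np.column_stack([X, np.ones(len(X))]) @ w

train = slice(0, 300)
test = slice(300, None)

# Level 1: an independent classifier per kernel/view.
w1 = fit_linear(X1[train], y[train])
w2 = fit_linear(X2[train], y[train])

# Level 2 (stacking): the per-view responses become the inputs of a
# second classifier, rather than being averaged with fixed weights.
S_train = np.column_stack([score(X1[train], w1), score(X2[train], w2)])
w_stack = fit_linear(S_train, y[train])
S_test = np.column_stack([score(X1[test], w1), score(X2[test], w2)])
acc = float(np.mean(np.sign(score(S_test, w_stack)) == y[test]))
print(f"stacked accuracy: {acc:.2f}")
```

In practice the level-1 responses should be produced out-of-fold to avoid the stacker overfitting to in-sample scores; the toy version skips that refinement.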

Relevance:

40.00%

Publisher:

Abstract:

In this paper we address the issue of locating hierarchical facilities in the presence of congestion. Two hierarchical models are presented, in which lower-level servers attend requests first and then refer some of the served customers to higher-level servers. In the first model, the objective is to find the minimum number of servers, and their locations, that cover a given region within a distance or time standard. The second model is cast as a Maximal Covering Location formulation. A heuristic procedure is then presented together with computational experience. Finally, some extensions of these models that address other types of spatial configurations are offered.
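The first model's objective, the minimum number of servers covering every demand point within a distance standard, can be stated in a few lines. The instance below is a made-up toy with six demand points and four candidate sites; it is solved by brute force over subsets, which is only viable at this size (the paper develops a heuristic for realistic instances).

```python
import itertools

# Toy instance: demand points and candidate server sites in the plane,
# with a coverage (distance) standard. All coordinates are invented.
demand = [(0, 0), (1, 0), (4, 0), (5, 1), (9, 0), (10, 1)]
sites = [(0, 0), (4, 0), (9, 0), (5, 5)]
radius = 2.0

def covers(site, point):
    return (site[0] - point[0]) ** 2 + (site[1] - point[1]) ** 2 <= radius ** 2

# Smallest subset of sites covering every demand point, by exhaustive
# search over subsets of increasing size.
best = None
for size in range(1, len(sites) + 1):
    for combo in itertools.combinations(sites, size):
        if all(any(covers(s, p) for s in combo) for p in demand):
            best = combo
            break
    if best:
        break
print(best)
```

The Maximal Covering variant flips the question: with the number of servers fixed, place them to cover as much demand as possible, which is why a greedy or exchange heuristic applies naturally to both.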

Relevance:

40.00%

Publisher:

Abstract:

Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, the development of compact dynamic thermal models that can be used for electrothermal simulation is of prime importance. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports: each individual element Z_ij is obtained from the analysis of the temperature transient at node i after a power step at node j. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
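Extracting a multiexponential model from one such step transient can be sketched as follows. The two-term response and its resistances and time constants are invented numbers, and a grid search with linear least squares for the amplitudes (variable projection) stands in for the constrained NLSQ method the paper evaluates.

```python
import numpy as np

# Synthetic thermal step response: a sum of two exponential terms, as
# in a Foster-network compact model. All values are illustrative.
t = np.linspace(0.0, 5.0, 200)
R1, tau1, R2, tau2 = 1.0, 0.1, 0.5, 1.0
z = R1 * (1 - np.exp(-t / tau1)) + R2 * (1 - np.exp(-t / tau2))

# Variable projection: grid-search the time constants and solve the
# amplitudes by linear least squares (a stand-in for constrained NLSQ).
taus = np.logspace(-2, 1, 40)
best = (np.inf, None)
for ta in taus:
    for tb in taus:
        if tb <= ta:
            continue
        A = np.column_stack([1 - np.exp(-t / ta), 1 - np.exp(-t / tb)])
        amp, *_ = np.linalg.lstsq(A, z, rcond=None)
        err = float(np.sum((A @ amp - z) ** 2))
        if err < best[0]:
            best = (err, (amp[0], ta, amp[1], tb))
print(best[1])   # recovered (R1, tau1, R2, tau2)
```

Model-order selection, the point the abstract stresses, would repeat this with one, two, and three terms and keep the smallest order whose residual on a separate validation transient stops improving.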

Relevance:

30.00%

Publisher:

Abstract:

We consider a dynamic model where traders in each period are matched randomly into pairs who then bargain about the division of a fixed surplus. When agreement is reached the traders leave the market. Traders who do not come to an agreement return next period in which they will be matched again, as long as their deadline has not expired yet. New traders enter exogenously in each period. We assume that traders within a pair know each other's deadline. We define and characterize the stationary equilibrium configurations. Traders with longer deadlines fare better than traders with short deadlines. It is shown that the heterogeneity of deadlines may cause delay. It is then shown that a centralized mechanism that controls the matching protocol, but does not interfere with the bargaining, eliminates all delay. Even though this efficient centralized mechanism is not as good for traders with long deadlines, it is shown that in a model where all traders can choose which mechanism to