939 results for Markov process modeling
Abstract:
The general assumption under which the X̄ chart is designed is that the process mean has a constant in-control value. However, there are situations in which the process mean wanders. When it wanders according to a first-order autoregressive (AR(1)) model, a complex approach involving Markov chains and integral equation methods is used to evaluate the properties of the X̄ chart. In this paper, we propose a pure Markov chain approach to study the performance of the X̄ chart. The performance of the X̄ chart with variable parameters and of the X̄ chart with double sampling are compared. (C) 2011 Elsevier B.V. All rights reserved.
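The setting of the abstract above can be illustrated with a rough Monte Carlo sketch (not the paper's pure Markov chain method): the process mean wanders as an AR(1) series, and the chart's average run length (ARL) is estimated by simulation. All parameter values (phi, sigma_eta, sample size, limit width) are illustrative assumptions.

```python
import random

def arl_xbar_ar1(phi=0.5, sigma_eta=0.5, n=5, L=3.0, reps=2000, seed=1):
    """Monte Carlo estimate of the average run length of an X-bar chart
    when the process mean wanders as an AR(1) process:
        mu_t = phi * mu_{t-1} + eta_t,  eta_t ~ N(0, sigma_eta^2).
    Individual observations are N(mu_t, 1); the control limits sit at
    +/- L / sqrt(n) around the in-control target of 0."""
    rng = random.Random(seed)
    limit = L / n ** 0.5
    total = 0
    for _ in range(reps):
        mu, t = 0.0, 0
        while True:
            t += 1
            mu = phi * mu + rng.gauss(0.0, sigma_eta)       # wandering mean
            xbar = mu + rng.gauss(0.0, 1.0) / n ** 0.5      # sample mean
            if abs(xbar) > limit:                           # chart signals
                break
        total += t
    return total / reps
```

With sigma_eta = 0 the mean does not wander and the estimate approaches the classical in-control ARL of roughly 370 for 3-sigma limits; any wandering shortens the run length sharply.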
Abstract:
The conventional Newton and fast decoupled power flow (FDPF) methods have been considered inadequate for obtaining the maximum loading point of power systems due to ill-conditioning problems at and near this critical point. It is well known that the P-V and Q-theta decoupling assumptions of the fast decoupled power flow formulation no longer hold in the vicinity of the critical point. Moreover, the Jacobian matrix of the Newton method becomes singular at this point. However, the maximum loading point can be efficiently computed through the parameterization techniques of continuation methods. In this paper it is shown that by using either theta or V as a parameter, the new fast decoupled power flow versions (XB and BX) become adequate for the computation of the maximum loading point with only a few small modifications. The possible use of reactive power injection at a selected PV bus (Q_PV) as the continuation parameter (mu) for the computation of the maximum loading point is also shown. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, theta, or mu) as an estimate for the next solution, is used in the predictor step. These new versions are compared to each other in order to point out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approach for the IEEE test systems (14, 30, 57, and 118 buses) are presented and discussed in the companion paper. The results show that the characteristics of the conventional method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that parameters can be switched during the tracing process in order to efficiently determine all the PV curve points with few iterations. (C) 2003 Elsevier B.V. All rights reserved.
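As a toy illustration of the continuation idea (not the paper's power flow equations), the sketch below traces the quadratic curve lambda = x(1 - x), whose nose at (x, lambda) = (0.5, 0.25) plays the role of the maximum loading point. Using the state x itself as the continuation parameter with a fixed-increment predictor lets the trace pass through the nose, where a lambda-parameterized Newton iteration would face a singular Jacobian. All names and values here are illustrative.

```python
def trace_pv_curve(steps=20, dx=0.05):
    """Trace lambda = x * (1 - x) using x as the continuation parameter.

    Predictor: trivial secant / fixed increment in the parameter (the
    current solution plus dx).  Corrector: one-dimensional Newton
    iteration on the residual f(lam) = lam - x * (1 - x) for fixed x."""
    x, lam = 0.0, 0.0
    curve = [(x, lam)]
    for _ in range(steps):
        x += dx                          # predictor: step the parameter
        for _ in range(20):              # Newton corrector on lam
            f = lam - x * (1.0 - x)
            if abs(f) < 1e-12:
                break
            lam -= f                     # df/dlam = 1 for this toy residual
        curve.append((x, lam))
    return curve
```

With the default step the trace walks up the stable branch, over the nose at lambda = 0.25, and down the lower branch without any singularity.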
Abstract:
Bolted joints are a form of mechanical coupling widely used in machinery due to their reliability and low cost. Failure of bolted joints can lead to catastrophic events such as leaks, train derailments, and aircraft crashes. Most of these failures occur due to a reduction of the pre-load, induced by mechanical vibration or by human error in the assembly or maintenance process. This article investigates the application of shape memory alloy (SMA) washers as actuators to increase the pre-load on loosened bolted joints. The application of the SMA washer follows a structural health monitoring procedure that identifies the occurrence of damage (a reduction in pre-load). In this article, a thermo-mechanical model is presented to predict the final pre-load achieved with this kind of actuator, based on the heat input and the SMA washer dimensions. This model extends and improves on the previous model of Ghorashi and Inman [2004, "Shape Memory Alloy in Tension and Compression and its Application as Clamping Force Actuator in a Bolted Joint: Part 2 - Modeling," J. Intell. Mater. Syst. Struct., 15:589-600] by eliminating the pre-load term related to nut turning, making the system more practical. This complete model is a powerful but complex tool for designers. A novel modeling approach for self-healing bolted joints, based on curve fitting of experimental data, is presented. The article concludes with an experimental application that leads to a change in the joint assembly to increase system reliability by removing the ceramic washer component. Further research topics are also suggested.
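The curve-fitting idea mentioned above can be sketched generically: fit a simple empirical model to heat-input vs. achieved pre-load data by ordinary least squares. The linear form and every data value below are invented for illustration; they are not the article's measurements or its actual fitted model.

```python
def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = a + b * x."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical calibration data: heat input (J) vs. achieved pre-load (N).
heat = [100.0, 200.0, 300.0, 400.0]
preload = [1050.0, 2010.0, 2950.0, 4010.0]
a, b = fit_line(heat, preload)
predicted = a + b * 250.0   # interpolated pre-load for a 250 J heat input
```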
Abstract:
Research in Requirements Engineering has been growing in recent years. Researchers are concerned with a set of open issues such as communication between the several user profiles involved in software engineering, scope definition, and volatility and traceability. To cope with these issues, a set of works concentrates on (i) defining processes to collect clients' specifications in order to solve scope issues; (ii) defining models to represent requirements in order to address communication and traceability issues; and (iii) working on mechanisms and processes to be applied to requirements modeling in order to facilitate requirements evolution and maintenance, addressing volatility and traceability issues. We propose an iterative Model-Driven process to solve these issues, based on a double-layered CIM to communicate requirements-related knowledge to a wider range of stakeholders. We also present a tool to help the requirements engineer through the RE process. Finally, we present a case study to illustrate the benefits and usage of the process and the tool.
Abstract:
In this work we study Hidden Markov Models with finite as well as general state space. In the finite case, the forward and backward algorithms are considered and the probability of a given observed sequence is computed. Next, we use the EM algorithm to estimate the model parameters. In the general case, kernel estimators are used to build a sequence of estimators that converges in the L1-norm to the density function of the observable process.
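For the finite case, the forward algorithm mentioned above fits in a few lines: the recursion below computes the probability of an observed sequence for a discrete HMM. The two states, two symbols, and all probabilities in the usage example are made up for illustration.

```python
def forward(pi, A, B, obs):
    """Forward algorithm for a discrete HMM: returns P(obs | model).
    pi[i]   - initial probability of state i
    A[i][j] - transition probability from state i to state j
    B[i][k] - probability of emitting symbol k in state i
    obs     - observed symbol sequence (list of ints)"""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]        # initialization
    for o in obs[1:]:                                        # induction step
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)                                        # termination

# Hypothetical two-state model with two observable symbols.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
p = forward(pi, A, B, [0, 1, 0])
```

Summing `forward` over all possible sequences of a fixed length returns 1, which is a convenient correctness check for the recursion.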
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We present a generic spatially explicit modeling framework to estimate carbon emissions from deforestation (INPE-EM). The framework incorporates the temporal dynamics of the deforestation process and accounts for the biophysical and socioeconomic heterogeneity of the region under study. We build an emission model for the Brazilian Amazon combining annual maps of new clearings, four maps of biomass, and a set of alternative parameters based on the recent literature. The most important results are as follows: (a) Using different biomass maps leads to large differences in emission estimates; for the entire Brazilian Amazon in the last decade, emission estimates from primary forest deforestation range from 0.21 to 0.26 Pg C yr⁻¹. (b) Secondary vegetation growth has a small impact on the emission balance because of the short duration of secondary vegetation; on average, the balance is only 5% smaller than the primary forest deforestation emissions. (c) Deforestation rates decreased significantly in the Brazilian Amazon in recent years, from 27 × 10³ km² in 2004 to 7 × 10³ km² in 2010. INPE-EM process-based estimates reflect this decrease, even though the agricultural frontier is moving to areas of higher biomass. The decrease is slower than a non-process, instantaneous model would estimate, as the process-based model considers residual emissions (slash, wood products, and secondary vegetation). The average balance, considering all biomass, decreases from 0.28 Pg C yr⁻¹ in 2004 to 0.15 Pg C yr⁻¹ in 2009; the non-process model estimates a decrease from 0.33 to 0.10 Pg C yr⁻¹. We conclude that INPE-EM is a powerful tool for representing deforestation-driven carbon emissions. Biomass estimates are still the largest source of uncertainty in the effective use of this type of model for informing mechanisms such as REDD+.
The results also indicate that efforts to reduce emissions should focus not only on controlling primary forest deforestation but also on creating incentives for the restoration of secondary forests.
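The difference between a process-based and an instantaneous ("non-process") emission estimate can be sketched with a toy bookkeeping model: a fraction of the cleared carbon is released in the clearing year and the remainder (slash, wood products) decays over subsequent years. The fractions, decay rate, and biomass series below are invented for illustration and are not INPE-EM's actual parameters.

```python
def instantaneous_emissions(cleared_biomass):
    """Non-process model: all carbon is emitted in the clearing year."""
    return list(cleared_biomass)

def process_emissions(cleared_biomass, frac_now=0.5, decay=0.3, years_out=3):
    """Process-based sketch: frac_now of each year's cleared carbon is
    emitted immediately; the residual decays geometrically over the
    following years_out years."""
    em = [0.0] * (len(cleared_biomass) + years_out)
    for t, b in enumerate(cleared_biomass):
        em[t] += frac_now * b
        residual = (1.0 - frac_now) * b
        for k in range(1, years_out + 1):
            em[t + k] += residual * decay * (1.0 - decay) ** (k - 1)
    return em
```

Because residual carbon is spread over later years, annual emissions under the process model lag behind a drop in clearing rates, which is exactly the qualitative behavior the abstract describes.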
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
It can be stated that technological evolution (the development of new measurement instruments such as software, satellites, and computers, as well as cheaper storage media) allows organizations to produce and acquire large amounts of data in a short time. Because of this data volume, research organizations become potentially vulnerable to the impacts of the information explosion. One solution adopted by some organizations is the use of information system tools to support data documentation, retrieval, and analysis. In the scientific domain, these tools are developed to store different metadata (data about data) standards. During the development of these tools, the adoption of standards such as the Unified Modeling Language (UML) stands out; its diagrams support the modeling of different aspects of the software. The goal of this study is to present an information system tool that supports the documentation of organizations' data through metadata, and to highlight the software modeling process using UML. We discuss the Digital Geospatial Metadata Standard, widely used for data cataloging by scientific organizations around the world, and UML's dynamic and static diagrams, such as use case, sequence, and class diagrams. The development of information system tools can be a way to promote the organization and dissemination of scientific data. However, the modeling process requires special attention to the development of interfaces that will encourage the use of these tools.
Abstract:
Luminescent spectra of Eu3+-doped sol-gel glasses have been analyzed during the densification process and compared according to the presence or absence of aluminum as a codoping ion. The transition temperature from hydrated to dehydroxylated environments was found to differ between doped and codoped samples. However, only slight modifications were revealed by luminescence measurements beyond this transition. To support the experimental analysis, molecular dynamics simulations were performed to model the doped and codoped glass structures. Although no evidence of a reduction in rare-earth clustering due to aluminum was found, the modeled structures showed that the luminescent ions are mainly located in aluminum-rich domains. The synthesis of the experimental and numerical analyses has led us to interpret the aluminum effect as responsible for differences in the structure of the luminescent sites rather than for an effective dispersion of the rare-earth ions. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Immobilized-cell utilization in a tower-type bioreactor is one of the main alternatives being studied to improve industrial bioprocesses. Other alternatives for the production of beta-lactam antibiotics, such as a cephalosporin C fed-batch process in an aerated stirred-tank bioreactor with free cells of Cephalosporium acremonium or a tower-type bioreactor with immobilized cells of this fungus, have proven to be more efficient than the batch process. In the fed-batch process, it is possible to minimize the catabolite repression exerted by the rapid utilization of carbon sources (such as glucose) in the synthesis of antibiotics by using a suitable flow rate of supplementary medium. In this study, several runs for cephalosporin C production, each lasting 200 h, were conducted in a fed-batch tower-type bioreactor using different hydrolyzed sucrose concentrations. For this study's model, modifications were introduced to take into account the influence of the supplementary medium flow rate. The balance equations considered the effect of oxygen limitation inside the bioparticles. In the Monod-type rate equations, cell concentration, substrate concentration, and dissolved oxygen were included as reactants affecting the bioreaction rate. The set of differential equations was solved numerically, and the values of the parameters were estimated by the classic nonlinear regression method following Marquardt's procedure with a 95% confidence interval. The simulation results showed that the proposed model fit the experimental data well, and based on the experimental data and the mathematical model, an optimal mass flow rate to maximize the bioprocess productivity could be proposed.
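The balance-equation structure described above can be sketched with a minimal Monod-type fed-batch model (cell mass, substrate, and volume only, ignoring the oxygen limitation and the antibiotic itself), integrated with explicit Euler steps. Every parameter value is an illustrative assumption, not a fitted estimate from the study.

```python
def fed_batch_monod(tf=200.0, dt=0.1, F=0.01, Sf=100.0,
                    mu_max=0.1, Ks=1.0, Yxs=0.5,
                    X0=1.0, S0=20.0, V0=1.0):
    """Explicit-Euler sketch of fed-batch growth with Monod kinetics.
    X - cell concentration, S - substrate concentration, V - volume;
    feed enters at flow rate F with substrate concentration Sf.
    Returns the final (X, S, V)."""
    X, S, V = X0, S0, V0
    t = 0.0
    while t < tf:
        mu = mu_max * S / (Ks + S)                 # Monod specific growth rate
        dX = mu * X - (F / V) * X                  # growth minus dilution
        dS = -(mu / Yxs) * X + (F / V) * (Sf - S)  # consumption plus feed
        X += dX * dt
        S += dS * dt
        V += F * dt                                # fed-batch volume increase
        S = max(S, 0.0)                            # concentrations stay non-negative
        t += dt
    return X, S, V
```

Varying `F` in such a sketch is the toy analogue of searching for the optimal supplementary-medium flow rate discussed in the abstract.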
Abstract:
When the X̄ chart is in use, samples are regularly taken from the process, and their means are plotted on the chart. In some cases, it is too expensive to obtain the X values, but not the values of a correlated variable Y. This paper presents a model for the economic design of a two-stage control chart, that is, a control chart based on both a performance (X) and a surrogate (Y) variable. The process is monitored by the surrogate variable until it signals out-of-control behavior, and then a switch is made to the X̄ chart. The X̄ chart is built with central, warning, and action regions. If an X sample mean falls in the central region, process surveillance returns to the Ȳ chart. Otherwise, the process remains under the X̄ chart's surveillance until an X̄ sample mean falls outside the control limits. The search for an assignable cause is undertaken when the performance variable signals out-of-control behavior. In this way, the two variables are used in an alternating fashion. The assumption of an exponential distribution for the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A study is performed to examine the economic advantages of using performance and surrogate variables. (C) 2003 Elsevier B.V. All rights reserved.
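The Markov chain machinery behind such a cost function can be sketched abstractly: given a transition matrix over the surveillance states and a per-period cost for each state, the long-run average cost is the stationary distribution weighted by the costs. The three states and all numbers below are hypothetical, not the paper's actual model.

```python
def stationary(P, iters=1000):
    """Stationary distribution of a finite Markov chain by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def long_run_cost(P, cost):
    """Long-run expected cost per period: sum_i pi_i * cost_i."""
    return sum(p * c for p, c in zip(stationary(P), cost))

# Hypothetical states: 0 = Y-bar surveillance (cheap), 1 = X-bar
# surveillance (more expensive), 2 = search for an assignable cause.
P = [[0.90, 0.10, 0.00],
     [0.60, 0.30, 0.10],
     [0.80, 0.00, 0.20]]
cost = [1.0, 5.0, 50.0]
avg = long_run_cost(P, cost)
```

An economic design would then minimize this long-run cost over the chart parameters (sample sizes, limits, warning regions) that shape the transition probabilities.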
Abstract:
MODSI is a multi-model tool for information systems modeling. A modeling process in MODSI can be driven according to three different approaches: informal, semi-formal, and formal. The MODSI tool is therefore based on the linked usage of these three modeling approaches. It can be employed at two different levels: the meta-modeling of a method and the modeling of an information system. In this paper we start by presenting the different types of modeling and analyzing their particular features. Then we introduce the meta-model defined in our tool, as well as the tool's functional architecture. Finally, we describe and illustrate the various usage levels of this tool.