63 results for Markov process modeling
Abstract:
It can be stated that technological evolution (the development of new measurement instruments such as software, satellites, and computers, as well as the falling cost of storage media) allows organizations to produce and acquire large amounts of data in a short time. Because of this data volume, research organizations become potentially vulnerable to the impacts of the information explosion. One solution adopted by some organizations is the use of information system tools to support data documentation, retrieval, and analysis. In the scientific domain, these tools are developed to store different metadata standards (data about data). During the development of these tools, the adoption of standards such as the Unified Modeling Language (UML) stands out, whose diagrams support the modeling of different aspects of the software. The objective of this study is to present an information system tool to support the documentation of organizational data through metadata and to highlight the software modeling process using UML. The Geospatial Digital Metadata Standard, widely used for data cataloging by scientific organizations worldwide, is addressed, along with the dynamic and static UML diagrams such as use case, sequence, and class diagrams. The development of information system tools can be a way to promote the organization and dissemination of scientific data. However, the modeling process requires special attention to the development of interfaces that will encourage the use of the information system tools.
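As a rough illustration of how an information system tool might represent a catalog entry following a geospatial metadata standard, the sketch below defines a hypothetical, heavily simplified record type with a trivial retrieval helper; the `GeoMetadataRecord` class and its field names are illustrative assumptions, not the structure of the standard or of the tool described above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GeoMetadataRecord:
    """Hypothetical, simplified catalog entry inspired by geospatial
    metadata standards (fields are illustrative only)."""
    title: str
    abstract: str
    originator: str          # person or organization that produced the data
    publication_date: str    # ISO 8601 date string, e.g. "2006-05-01"
    bounding_box: tuple      # (west, east, south, north) in decimal degrees
    keywords: List[str] = field(default_factory=list)

    def matches(self, term: str) -> bool:
        """Very simple retrieval helper: case-insensitive title/keyword search."""
        term = term.lower()
        return term in self.title.lower() or any(term in k.lower() for k in self.keywords)

# Usage: build a small in-memory catalog and query it.
catalog = [
    GeoMetadataRecord(
        title="Soil moisture grid",
        abstract="Gridded soil moisture estimates.",
        originator="Example Research Organization",
        publication_date="2006-05-01",
        bounding_box=(-54.0, -44.0, -25.0, -19.0),
        keywords=["soil", "moisture", "grid"],
    )
]
print([r.title for r in catalog if r.matches("soil")])
```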
Abstract:
Luminescent spectra of Eu3+-doped sol-gel glasses have been analyzed during the densification process and compared according to the presence or absence of aluminum as a codoping ion. A transition temperature from hydrated to dehydroxylated environments has been found to differ between doped and codoped samples. However, only slight modifications have been observed in the luminescence measurements beyond this transition. To support the experimental analysis, molecular dynamics simulations have been performed to model the doped and codoped glass structures. Although no evidence of rare-earth clustering reduction due to aluminum has been found, the modeled structures have shown that the luminescent ions are mainly located in aluminum-rich domains. The synthesis of both experimental and numerical analyses has led us to interpret the aluminum effect as responsible for differences in the structure of the luminescent sites rather than for an effective dispersion of the rare-earth ions. (C) 2004 Elsevier B.V. All rights reserved.
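To make the structural analysis concrete, the sketch below counts aluminum neighbors within a cutoff around each Eu3+ ion, which is one simple way to test whether luminescent ions sit in aluminum-rich domains of a modeled structure. The coordinates, box size, and cutoff are synthetic placeholders, not data or code from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical atomic positions (nm) in a cubic simulation box -- stand-ins
# for coordinates that would come from a molecular dynamics trajectory.
box = 5.0
eu_xyz = rng.uniform(0.0, box, size=(10, 3))    # Eu3+ ions
al_xyz = rng.uniform(0.0, box, size=(200, 3))   # Al ions

def count_neighbors(centers, others, cutoff, box):
    """Count atoms of `others` within `cutoff` of each center,
    using the minimum-image convention for periodic boundaries."""
    counts = []
    for c in centers:
        d = others - c
        d -= box * np.round(d / box)            # periodic wrap
        r = np.linalg.norm(d, axis=1)
        counts.append(int(np.sum(r < cutoff)))
    return np.array(counts)

al_around_eu = count_neighbors(eu_xyz, al_xyz, cutoff=0.45, box=box)
print("Al neighbors per Eu ion:", al_around_eu)
print("Mean Al coordination around Eu:", al_around_eu.mean())
```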
Abstract:
Immobilized cell utilization in a tower-type bioreactor is one of the main alternatives being studied to improve the industrial bioprocess. Other alternatives for the production of beta-lactam antibiotics, such as a cephalosporin C fed-batch process in an aerated stirred-tank bioreactor with free cells of Cephalosporium acremonium or a tower-type bioreactor with immobilized cells of this fungus, have proven to be more efficient than the batch process. In the fed-batch process, it is possible to minimize the catabolite repression exerted by the rapid utilization of carbon sources (such as glucose) on the synthesis of antibiotics by using a suitable flow rate of supplementary medium. In this study, several runs for cephalosporin C production, each lasting 200 h, were conducted in a fed-batch tower-type bioreactor using different hydrolyzed sucrose concentrations. For this study's model, modifications were introduced to take into account the influence of the supplementary medium flow rate. The balance equations considered the effect of oxygen limitation inside the bioparticles. In the Monod-type rate equations, cell concentrations, substrate concentrations, and dissolved oxygen were included as reactants affecting the bioreaction rate. The set of differential equations was solved numerically, and the values of the parameters were estimated by the classic nonlinear regression method following Marquardt's procedure with a 95% confidence interval. The simulation results showed that the proposed model fit the experimental data well, and based on the experimental data and the mathematical model, an optimal mass flow rate to maximize the bioprocess productivity could be proposed.
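A minimal sketch of the modeling workflow described above, assuming a generic single-substrate Monod fed-batch model rather than the paper's oxygen-limited, multi-variable balances: the differential equations are integrated numerically and the kinetic parameters are estimated by nonlinear least squares (SciPy's bounded trust-region solver is used here in place of Marquardt's procedure). All initial conditions and parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical fed-batch balances with a single Monod-type rate.
def fed_batch(t, y, mu_max, Ks, Yxs, F, Sf):
    X, S, V = y                      # biomass, substrate, volume
    mu = mu_max * S / (Ks + S)       # Monod specific growth rate
    dX = mu * X - (F / V) * X        # growth minus dilution by the feed
    dS = -(mu / Yxs) * X + (F / V) * (Sf - S)
    dV = F
    return [dX, dS, dV]

def simulate(params, t_eval):
    mu_max, Ks, Yxs = params
    sol = solve_ivp(fed_batch, (t_eval[0], t_eval[-1]), [0.5, 20.0, 1.0],
                    t_eval=t_eval, args=(mu_max, Ks, Yxs, 0.01, 100.0))
    return sol.y[0]                  # biomass trajectory

# Synthetic "experimental" data generated from known parameters plus noise.
t_obs = np.linspace(0.0, 50.0, 20)
true = (0.12, 2.0, 0.45)
rng = np.random.default_rng(1)
X_obs = simulate(true, t_obs) + rng.normal(0.0, 0.05, t_obs.size)

# Nonlinear least-squares estimation of mu_max, Ks, and Yxs.
fit = least_squares(lambda p: simulate(p, t_obs) - X_obs,
                    x0=[0.2, 1.0, 0.5],
                    bounds=([0.01, 0.01, 0.1], [1.0, 10.0, 1.0]))
print("Estimated mu_max, Ks, Yxs:", fit.x)
```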
Abstract:
When the X̄ chart is in use, samples are regularly taken from the process, and their means are plotted on the chart. In some cases, it is too expensive to obtain the X values, but not the values of a correlated variable Y. This paper presents a model for the economic design of a two-stage control chart, that is, a control chart based on both performance (X) and surrogate (Y) variables. The process is monitored by the surrogate variable until it signals an out-of-control behavior, and then a switch is made to the X̄ chart. The X̄ chart is built with central, warning, and action regions. If an X̄ sample mean falls in the central region, process surveillance returns to the Ȳ chart. Otherwise, the process remains under the X̄ chart's surveillance until an X̄ sample mean falls outside the control limits. The search for an assignable cause is undertaken when the performance variable signals an out-of-control behavior. In this way, the two variables are used in an alternating fashion. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A study is performed to examine the economic advantages of using performance and surrogate variables. (C) 2003 Elsevier B.V. All rights reserved.
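The Markov-chain machinery behind such cost models can be sketched generically: define the states of the surveillance scheme, write a one-step transition matrix, compute the stationary distribution, and weight per-state costs by it. The states, probabilities, and costs below are illustrative placeholders, not the paper's model.

```python
import numpy as np

# States of the surveillance scheme (illustrative only):
#   0: monitoring with the surrogate Ybar chart, process in control
#   1: monitoring with the performance Xbar chart, process in control
#   2: out-of-control signal / search for the assignable cause
states = ["Ybar surveillance", "Xbar surveillance", "signal & search"]

# Placeholder one-step transition probabilities between sampling epochs.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.60, 0.35, 0.05],
    [0.80, 0.00, 0.20],
])
assert np.allclose(P.sum(axis=1), 1.0)

# Stationary distribution pi solves pi P = pi with the entries summing to one.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dict(zip(states, np.round(pi, 3))))

# With per-epoch costs attached to each state, the long-run cost per epoch
# is the stationary-weighted average of the state costs.
cost_per_state = np.array([1.0, 5.0, 50.0])   # illustrative sampling/search costs
print("expected cost per epoch:", float(pi @ cost_per_state))
```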
Abstract:
MODSI is a multi-model tool for information systems modeling. A modeling process in MODSI can be driven according to three different approaches: informal, semi-formal, and formal. The MODSI tool is therefore based on the linked usage of these three modeling approaches. It can be employed at two different levels: the meta-modeling of a method and the modeling of an information system. In this paper we start by presenting different types of modeling, analyzing their particular features. Then, we introduce the meta-model defined in our tool, as well as the tool's functional architecture. Finally, we describe and illustrate the various usage levels of this tool.
Abstract:
This work presents a new three-phase transformer model suitable for simulations in the Pspice environment, which represents the electrical characteristics of a real transformer. A comparison is proposed with a three-phase transformer model available in the EMTP-ATP program, which includes both electrical and magnetic characteristics. In addition, a setup including non-linear loads and a real three-phase transformer was prepared in order to compare and validate the results of this new proposed model. The three-phase Pspice transformer model, unlike the conventional one based on inductance coupling, is remarkable for its simplicity and ease of simulation, since it uses the voltage and current sources available in the Pspice program, enabling simulations of three-phase network systems with the most common configuration: three wires on the primary side and four wires on the secondary side (three phases and neutral). Finally, the proposed model becomes a powerful tool for three-phase network simulations due to its simplicity and accuracy, being able to simulate and analyze harmonic flow in three-phase systems under balanced and unbalanced conditions.
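Outside Pspice, the controlled-source idea can be illustrated with a small phasor calculation in which each secondary voltage is a scaled copy of the corresponding primary voltage and each primary current is a scaled copy of the secondary current; the ideal wye-wye connection, turns ratio, and load below are assumptions for illustration only, not the proposed model.

```python
import numpy as np

# Ideal transformer behavior expressed with "controlled sources": each
# secondary phase voltage copies the primary voltage divided by the turns
# ratio, and each primary phase current copies the secondary current divided
# by the turns ratio. Values and the wye-wye connection are illustrative.
a = 380.0 / 220.0                      # turns ratio (primary/secondary)
ang = np.exp(1j * np.deg2rad([0.0, -120.0, 120.0]))
Vp = (380.0 / np.sqrt(3)) * ang        # primary phase voltages (V, phasors)

Zload = 10.0 + 2.0j                    # per-phase load on the secondary (ohm)
Vs = Vp / a                            # controlled voltage sources on the secondary
Is = Vs / Zload                        # secondary line currents
Ip = Is / a                            # controlled current sources on the primary

for ph, vp, vs, ip in zip("abc", Vp, Vs, Ip):
    print(f"phase {ph}: |Vp|={abs(vp):6.1f} V  |Vs|={abs(vs):6.1f} V  "
          f"|Ip|={abs(ip):5.2f} A")
```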
Abstract:
This paper presents an economic design of X̄ control charts with variable sample sizes, variable sampling intervals, and variable control limits. The sample size n, the sampling interval h, and the control limit coefficient k vary between minimum and maximum values, tightening or relaxing the control. The control is relaxed when an X̄ value falls close to the target and is tightened when an X̄ value falls far from the target. A cost model is constructed that involves the cost of false alarms, the cost of finding and eliminating the assignable cause, the cost associated with production in an out-of-control state, and the cost of sampling and testing. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A comprehensive study is performed to examine the economic advantages of varying the X̄ chart parameters.
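The general shape of such an economic-design cost function can be sketched as follows: the expected cost per hour combines false-alarm, repair, out-of-control operation, and sampling costs, each weighted by how often it is incurred during a quality cycle. All figures below are placeholders, not the paper's parameter values or its actual cost expression.

```python
# Generic sketch of an economic-design cost function (placeholder figures).
def expected_cost_per_hour(cycle_hours,        # expected length of a quality cycle
                           false_alarms,       # expected false alarms per cycle
                           hours_out_of_control,
                           samples_per_cycle,
                           mean_sample_size,
                           c_false=50.0,       # cost per false alarm
                           c_repair=25.0,      # cost of finding/removing the cause
                           c_out=100.0,        # hourly cost while out of control
                           c_fixed=1.0,        # fixed cost per sample
                           c_var=0.1):         # cost per unit sampled
    cycle_cost = (c_false * false_alarms
                  + c_repair
                  + c_out * hours_out_of_control
                  + samples_per_cycle * (c_fixed + c_var * mean_sample_size))
    return cycle_cost / cycle_hours

# Compare a "relaxed" and a "tightened" design of the adaptive chart.
print(expected_cost_per_hour(120.0, 0.4, 2.5, 60, 4))
print(expected_cost_per_hour(120.0, 0.9, 1.2, 80, 7))
```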
Abstract:
Purpose - The aim of this paper is to present a synthetic chart based on the non-central chi-square statistic that is operationally simpler and more effective than the joint X̄ and R charts in detecting assignable cause(s). This chart also assists in identifying which parameter (mean or variance) changed due to the occurrence of the assignable causes. Design/methodology/approach - The approach used is based on the non-central chi-square statistic, and the steady-state average run length (ARL) of the developed chart is evaluated using a Markov chain model. Findings - The proposed chart always detects process disturbances faster than the joint X̄ and R charts. The developed chart allows the process to be monitored with a single chart instead of two separate charts. Originality/value - The most important advantage of using the proposed chart is that practitioners can monitor the process by looking at only one chart instead of two charts separately. © Emerald Group Publishing Limited.
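The steady-state ARL evaluation via a Markov chain typically reduces to ARL = s'(I - Q)^(-1)1, where Q collects the transition probabilities among the transient (non-signalling) chart states and s is the initial-state vector. The two-state chain below is purely illustrative, not the proposed synthetic chart.

```python
import numpy as np

# Illustrative transient-state transition matrix Q of a chart modeled as a
# Markov chain (the absorbing state is the out-of-control signal). These
# probabilities are placeholders, not the proposed chart's values.
Q = np.array([
    [0.93, 0.04],    # e.g. "conforming sample" -> {conforming, warning}
    [0.60, 0.30],    # e.g. "warning sample"    -> {conforming, warning}
])
start = np.array([1.0, 0.0])          # chain starts in the first transient state

# Expected number of samples until absorption (the average run length):
# ARL = start' (I - Q)^{-1} 1
fundamental = np.linalg.inv(np.eye(2) - Q)
arl = start @ fundamental @ np.ones(2)
print("ARL:", round(float(arl), 2))
```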
Abstract:
When the food supply finishes, or when the larvae of blowflies complete their development and migrate prior to the total removal of the larval substrate, they disperse to find adequate places for pupation, a process known as post-feeding larval dispersal. Based on experimental data on the initial and final configuration of the dispersion, the reproduction of such spatio-temporal behavior is achieved here by means of an evolutionary search for cellular automata with a distinct transition rule associated with each cell, also known as nonuniform cellular automata, with two states per cell in the lattice. Two-dimensional regular lattices and multivalued states are considered, and a practical question is the necessity of discovering a proper set of transition rules. Given that the number of rules is related to the number of cells in the lattice, the search space is very large, and an evolution strategy is therefore used to optimize the parameters of the transition rules, with two transition rules per cell. As the parameters to be optimized admit a physical interpretation, the obtained computational model can be analyzed to raise hypothetical explanations of the observed spatio-temporal behavior. © 2006 IEEE.
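A toy version of this setup is sketched below: a nonuniform two-state cellular automaton in which every cell has its own rule parameter (here a simple neighbor-count threshold, not the rule family of the paper), and a (1+1) evolution strategy that mutates the per-cell parameters so that the automaton's final configuration approaches a target one. Lattice size, rule form, and ES settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 8                                 # small lattice for illustration
T = 5                                 # number of CA update steps

def step(grid, thresholds):
    """One synchronous update of a nonuniform, two-state CA: a cell becomes
    occupied when its Moore-neighborhood count reaches its own threshold."""
    padded = np.pad(grid, 1, mode="constant")
    neigh = sum(np.roll(np.roll(padded, i, 0), j, 1)
                for i in (-1, 0, 1) for j in (-1, 0, 1)) - padded
    neigh = neigh[1:-1, 1:-1]
    return (neigh >= thresholds).astype(int)

def run(initial, thresholds):
    g = initial.copy()
    for _ in range(T):
        g = step(g, thresholds)
    return g

# Synthetic "observed" dispersal data: initial and final configurations.
initial = (rng.random((L, L)) < 0.4).astype(int)
target = run(initial, rng.integers(1, 5, size=(L, L)).astype(float))

# (1+1) evolution strategy over the per-cell thresholds.
parent = rng.uniform(1.0, 5.0, size=(L, L))
parent_fit = np.sum(run(initial, parent) != target)
for gen in range(500):
    child = parent + rng.normal(0.0, 0.3, size=(L, L))
    child_fit = np.sum(run(initial, child) != target)
    if child_fit <= parent_fit:
        parent, parent_fit = child, child_fit
print("cells mismatching the final configuration:", int(parent_fit))
```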
Abstract:
Modeling ERP software means capturing the information necessary for supporting enterprise management. This modeling process goes down through different abstraction layers, from enterprise modeling to code generation. Thus, ERP is the kind of system where enterprise engineering undoubtedly has, or should have, a strong influence. In the case of Free/Open Source ERP, the lack of proper modeling methods and tools can jeopardize the advantage brought by source code availability. Therefore, the aim of this paper is to present a development process proposal for the Open Source ERP5 system. The proposed development process aims to cover different abstraction levels, taking into account well-established standards and common practices, as well as platform issues. Its main goal is to provide an adaptable meta-process to ERP5 adopters. © 2006 IEEE.
Abstract:
The objective of this article is to apply the Design of Experiments technique along with the Discrete Event Simulation technique in an automotive process. The benefits of design of experiments in simulation include the possibility of improving the performance of the simulation process, avoiding trial and error in the search for solutions. The methodology of the joint use of Design of Experiments and Computer Simulation is presented to assess the effects of the variables and their interactions involved in the process. In this paper, the efficacy of the use of process mapping and design of experiments in the conception and analysis phases is confirmed.
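As a sketch of the joint use of the two techniques, the code below runs a 2^3 full factorial design against a stand-in simulation function and estimates the main effect of each factor; the factors, levels, and response model are hypothetical, not the automotive process studied in the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

def simulate(buffer_size, machines, lot_size):
    """Stand-in for a discrete-event simulation run: returns a noisy
    throughput figure. Purely illustrative, not a real process model."""
    base = 10.0 + 2.0 * buffer_size + 3.5 * machines - 1.0 * lot_size
    return base + rng.normal(0.0, 0.5)

# 2^3 full factorial design in coded units (-1, +1) for three factors.
levels = {"buffer_size": (2, 6), "machines": (1, 3), "lot_size": (10, 30)}
design = list(itertools.product([-1, 1], repeat=3))

responses = []
for coded in design:
    actual = [lo if c < 0 else hi
              for c, (lo, hi) in zip(coded, levels.values())]
    responses.append(simulate(*actual))
responses = np.array(responses)

# Main effect of each factor: mean response at +1 minus mean response at -1.
X = np.array(design)
for k, name in enumerate(levels):
    effect = responses[X[:, k] == 1].mean() - responses[X[:, k] == -1].mean()
    print(f"main effect of {name}: {effect:+.2f}")
```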
Abstract:
This paper is concerned with ℋ2 and ℋ∞ filter design for discrete-time Markov jump systems. The usual assumption of mode-dependent design, in which the current Markov mode is available to the filter at every instant of time, is replaced by the case where that availability is itself governed by another Markov chain. In other words, the mode is transmitted to the filter through a network with given transmission failure probabilities. The problem is solved by modeling a system with N modes as another with 2N modes and cluster availability. We also treat the case where the transition probabilities are not exactly known and demonstrate that our conditions for calculating an ℋ∞ norm bound are less conservative than the available results in the current literature. Numerical examples show the applicability of the proposed results. ©2010 IEEE.
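The 2N-mode construction can be sketched by augmenting the plant's Markov chain with a flag recording whether the current mode was delivered to the filter; for simplicity the sketch uses an i.i.d. loss probability (a special case of a channel Markov chain), and both the transition matrix and the loss probability are placeholders, not the paper's data.

```python
import numpy as np

# Plant Markov chain with N = 2 modes (illustrative transition matrix).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
N = P.shape[0]
beta = 0.2          # probability that the mode is NOT delivered to the filter
                    # (placeholder; an i.i.d. special case of a channel chain)

# Augmented chain over pairs (plant mode, delivered?): 2N states in total.
# State index 2*i + d, with d = 1 when the filter receives mode i.
Paug = np.zeros((2 * N, 2 * N))
for i in range(N):
    for d in (0, 1):
        for j in range(N):
            Paug[2 * i + d, 2 * j + 0] = P[i, j] * beta
            Paug[2 * i + d, 2 * j + 1] = P[i, j] * (1.0 - beta)

assert np.allclose(Paug.sum(axis=1), 1.0)
print(Paug)
```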
Abstract:
The purpose of this work is to present a frequency-domain model to demonstrate the operation of an electromagnetic arrangement for controlling the injection of zero-sequence currents into the electrical system. Considering the diversity of the sequential distribution of the harmonic components of a current, the proposed device can be used in the mitigation of zero-sequence components. This device, here called an electromagnetic suppressor, consists of an electromagnetic blocker and an electromagnetic filter, whose joint operation can provide paths of high and low impedance that can be conveniently adjusted in the search for a desired performance. This study presents physical considerations, mathematical modeling, and computer simulations that clearly demonstrate the feasibility of this application as an attractive alternative in the conception of filtering systems. The performance analysis is based on the frequency response of harmonic transmittances. The efficacy of this technique in direct actions to maximize the harmonic mitigation process is demonstrated. ©2010 IEEE.
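A rough frequency-domain illustration of the blocker/filter idea: a series blocking impedance and a shunt tuned branch form a divider whose transmittance is evaluated at the harmonic frequencies. The circuit topology and component values below are assumptions for illustration, not the proposed suppressor's parameters.

```python
import numpy as np

# Illustrative R-L blocking branch in series with the zero-sequence path and
# a series R-L-C filter branch in shunt; component values are placeholders.
f = np.arange(60.0, 60.0 * 16, 60.0)          # fundamental and harmonics (Hz)
w = 2.0 * np.pi * f

Zblock = 0.05 + 1j * w * 10e-3                           # series blocker: R + jwL
Zfilter = 0.02 + 1j * (w * 1e-3 - 1.0 / (w * 50e-6))     # shunt R-L-C tuned branch

# Transmittance of a simple current divider: fraction of the injected
# zero-sequence current that still reaches the upstream system instead of
# being absorbed by the shunt filter branch.
H = Zfilter / (Zfilter + Zblock)

for fk, hk in zip(f, np.abs(H)):
    print(f"{fk:6.0f} Hz  |H| = {hk:.3f}")
```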
Abstract:
Distributed Generation, microgrid technologies, two-way communication systems, and demand response programs are issues that have been studied in recent years within the concept of smart grids. At a sufficient penetration level, Distributed Generators (DGs) can provide benefits for sub-transmission and transmission systems through the so-called ancillary services. This work focuses on the ancillary service of reactive power support provided by DGs, specifically Wind Turbine Generators (WTGs), with a high level of impact on transmission systems. The main objective of this work is to propose an optimization methodology to price this service by determining the costs that a DG incurs when it loses the opportunity to sell active power, i.e., by determining the Loss of Opportunity Costs (LOC). LOC occur when more reactive power is required than is available, and the active power generation has to be reduced in order to increase the reactive power capacity. In the optimization process, three objectives are considered: the active power generation costs of the DGs, the voltage stability margin of the system, and the losses in the lines of the network. Uncertainties of the WTGs are reduced by solving multi-objective optimal power flows in multiple probabilistic scenarios constructed by Monte Carlo simulations and by modeling the time series associated with the active power generation of each WTG via Fuzzy Logic and Markov Chains. The proposed methodology was tested using the IEEE 14-bus test system with two WTGs installed. © 2011 IEEE.
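The Markov-chain part of the scenario construction can be sketched as follows: discretize a wind-power time series into states (plain equal-width bins stand in here for the paper's fuzzy-logic states), estimate a first-order transition matrix by counting, and sample Monte Carlo trajectories from it. The series itself is synthetic, and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic hourly wind-power series in per-unit (stand-in for measured data).
p = np.clip(0.5 + 0.3 * np.sin(np.arange(2000) / 12.0)
            + 0.15 * rng.standard_normal(2000), 0.0, 1.0)

# Discretize into power states (equal-width bins replace the fuzzy-logic
# state definition of the paper for simplicity).
n_states = 5
states = np.minimum((p * n_states).astype(int), n_states - 1)

# First-order Markov chain: estimate the transition matrix by counting.
P = np.full((n_states, n_states), 1e-6)   # small pseudo-count avoids empty rows
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1.0
P /= P.sum(axis=1, keepdims=True)

# Monte Carlo scenario generation: sample a 24-hour trajectory of states and
# map each state back to a representative per-unit output (bin midpoint).
def sample_scenario(hours=24, start=2):
    s, out = start, []
    for _ in range(hours):
        s = rng.choice(n_states, p=P[s])
        out.append((s + 0.5) / n_states)
    return np.array(out)

print(np.round(sample_scenario(), 2))
```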