32 results for Production Inventory Model with Switching Time
Abstract:
We develop a stochastic lattice model to describe vector-borne diseases such as yellow fever or dengue. The model is spatially structured, and its dynamical rules take into account the diffusion of vectors. We consider a bipartite lattice in which one sub-lattice is occupied by humans and the other by mosquitoes. With each lattice site we associate a stochastic variable that describes the occupation and health state of a single individual (mosquito or human). Disease transmission in the human population follows dynamics similar to those of the Susceptible-Infected-Recovered (SIR) model, while transmission in the mosquito population follows dynamics analogous to those of the Susceptible-Infected-Susceptible (SIS) model with mosquito diffusion. The occurrence of an epidemic is directly related to the conditional probability of finding infected mosquitoes (humans) in the neighborhood of susceptible humans (mosquitoes). Mosquito diffusion can facilitate the formation of Susceptible-Infected pairs, enabling an increase in the size of the epidemic. Using an asynchronous update dynamics, we study disease transmission in a population initially formed by susceptible individuals after the introduction of a single infected mosquito (human). We find that the model exhibits a continuous phase transition related to the existence or non-existence of an epidemic. By means of mean-field approximations and Monte Carlo simulations, we investigate the epidemic threshold and the phase diagram in terms of the diffusion probability and the infection probability.
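A minimal Monte Carlo sketch of such a bipartite SIR/SIS dynamics (a toy implementation on a ring with illustrative probabilities, not the authors' code) could look like:

```python
import random

# Toy sketch: humans on one sub-lattice (SIR), mosquitoes on the other (SIS),
# paired site-by-site on a ring. All probabilities are illustrative assumptions.
S, I, R = 0, 1, 2

def simulate(n=200, p_inf=0.5, p_rec=0.3, p_diff=0.2, steps=20000, seed=1):
    rng = random.Random(seed)
    humans = [S] * n
    mosqs = [S] * n
    mosqs[0] = I  # a single infected mosquito is introduced
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < p_diff:
            # mosquito diffusion: swap with a random nearest neighbour
            j = (i + rng.choice((-1, 1))) % n
            mosqs[i], mosqs[j] = mosqs[j], mosqs[i]
        elif rng.random() < 0.5:
            # human update: infection by the local mosquito, or recovery (SIR)
            if humans[i] == S and mosqs[i] == I and rng.random() < p_inf:
                humans[i] = I
            elif humans[i] == I and rng.random() < p_rec:
                humans[i] = R
        else:
            # mosquito update: infection by the local human, or recovery (SIS)
            if mosqs[i] == S and humans[i] == I and rng.random() < p_inf:
                mosqs[i] = I
            elif mosqs[i] == I and rng.random() < p_rec:
                mosqs[i] = S
    return humans.count(R)  # number of recovered humans, a proxy for epidemic size

size = simulate()
```

Scanning `p_diff` and `p_inf` over a grid and averaging the epidemic size over many seeds would give a rough numerical picture of the phase diagram described in the abstract.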
Abstract:
Bronchial hyperresponsiveness is a hallmark of asthma, and many factors modulate bronchoconstriction episodes. A potential correlation between formaldehyde (FA) inhalation and asthma has been observed; however, the exact role of FA remains controversial. We investigated the effects of FA inhalation on ovalbumin (OVA) sensitisation using parameters of respiratory mechanics. The involvement of nitric oxide (NO) and cyclooxygenase-derived products was also evaluated. Rats were either exposed or not exposed to FA inhalation (1%, 90 min/day, 3 days) and were OVA-sensitised and challenged 14 days later. Our data showed that prior FA exposure in allergic rats reduced bronchial responsiveness, respiratory resistance (Rrs) and elastance (Ers) to methacholine. FA exposure in allergic rats also increased iNOS gene expression and reduced COX-1. L-NAME treatment exacerbated the bronchial hyporesponsiveness and did not modify Ers and Rrs, while indomethacin partially reversed all of the parameters studied. The L-NAME and indomethacin treatments reduced leukotriene B4 levels while increasing thromboxane B2 and prostaglandin E2. In conclusion, FA exposure prior to OVA sensitisation reduces the respiratory mechanics responses, and the interaction of NO and PGE2 may represent a compensatory mechanism that protects the lung from bronchoconstriction effects.
Abstract:
We propose a new Skyrme-like model with fields taking values on the sphere S3 or, equivalently, on the group SU(2). The action of the model contains a quadratic kinetic term plus a quartic term identical to that of the Skyrme-Faddeev model. The novelty of the model is that it possesses a first-order Bogomolny-type equation whose solutions automatically satisfy the second-order Euler-Lagrange equations. It also possesses a lower bound on the static energy which is saturated by the Bogomolny solutions. This Bogomolny equation is equivalent to the so-called force-free equation used in plasma and solar physics, which possesses large classes of solutions. An old result due to Chandrasekhar rules out finite-energy solutions of the force-free equation on the entire three-dimensional space R3. We construct new exact finite-energy solutions of the Bogomolny equations for the case where the space is the three-sphere S3, using toroidal-like coordinates.
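For reference, the force-free equation mentioned in the abstract has the standard form (our notation, not necessarily that of the paper):

```latex
\nabla \times \vec{B} = \lambda\, \vec{B}, \qquad \vec{B}\cdot\nabla\lambda = 0,
```

so that \(\lambda\) is constant along field lines. According to the abstract, the model's first-order Bogomolny equation is equivalent to this condition.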
Abstract:
This paper studies the asymptotic optimality of discrete-time Markov decision processes (MDPs) with general state and action spaces and with weak and strong interactions. Using an approach similar to that developed by Liu, Zhang, and Yin [Appl. Math. Optim., 44 (2001), pp. 105-129], the idea of this paper is to consider an MDP with general state and action spaces and to reduce the dimension of the state space by considering an averaged model. This formulation is often described by introducing a small parameter epsilon > 0 in the definition of the transition kernel, leading to a singularly perturbed Markov model with two time scales. Our objective is twofold. First, it is shown that the value function of the control problem for the perturbed system converges to the value function of a limit averaged control problem as epsilon goes to zero. In the second part of the paper, it is proved that a feedback control policy for the original control problem, defined by using an optimal feedback policy for the limit problem, is asymptotically optimal. Our work extends existing results in the literature in two directions: the underlying MDP is defined on general state and action spaces, and we do not impose strong conditions on the recurrence structure of the MDP, such as Doeblin's condition.
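The averaging idea can be illustrated with a toy two-state fast mode (hypothetical numbers, not from the paper): as epsilon goes to zero, the quickly mixing mode is replaced by its stationary distribution, and rewards are averaged against it.

```python
# Illustrative sketch: a fast 2-state mode mixing at rate ~ 1/epsilon is
# averaged out; the limit model sees only the stationary-averaged reward.
def stationary_2state(p01, p10):
    # stationary distribution of a 2-state chain with flip probabilities p01, p10
    return (p10 / (p01 + p10), p01 / (p01 + p10))

pi = stationary_2state(0.6, 0.4)   # fast mode flips quickly between 0 and 1
r = (1.0, 3.0)                     # reward of a fixed action in each fast state
r_bar = pi[0] * r[0] + pi[1] * r[1]  # averaged reward used by the limit model
```

Here `pi = (0.4, 0.6)`, so the limit model replaces the pair of rewards by `r_bar = 2.2`.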
Abstract:
This work addresses the problem of robust model predictive control (MPC) of systems with model uncertainty. The case of zone control of multivariable stable systems with multiple time delays is considered. The usual approach to this kind of problem is the inclusion of a non-linear cost constraint in the control problem. The control action is then obtained at each sampling time as the solution to a non-linear programming (NLP) problem, which for high-order systems can be computationally expensive. Here, the robust MPC problem is formulated as a linear matrix inequality problem that can be solved in real time with a fraction of the computational effort. The proposed approach is compared with conventional robust MPC and tested through the simulation of a reactor system from the process industry.
Abstract:
In this paper, we propose three novel mathematical models for the two-stage lot-sizing and scheduling problems present in many process industries. The problem combines a continuous or quasi-continuous production feature upstream with a discrete manufacturing feature downstream, and the two stages must be synchronized. Different time-scale representations are discussed. The first formulation uses a discrete-time representation. The second is a hybrid continuous-discrete model. The last formulation is based on a continuous-time representation. Computational tests with a state-of-the-art MIP solver show that the discrete-time representation provides better feasible solutions in short running times. On the other hand, the hybrid model achieves better solutions over longer computational times and was able to prove optimality more often. The continuous-time model is the most flexible of the three for incorporating additional operational requirements, at the cost of the worst computational performance. Journal of the Operational Research Society (2012) 63, 1613-1630. doi:10.1057/jors.2011.159, published online 7 March 2012.
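As a toy illustration of the discrete-time building block (not the paper's two-stage models), the classic single-item uncapacitated lot-sizing problem can be solved by the Wagner-Whitin dynamic program; the demand, setup and holding costs below are made up:

```python
# Wagner-Whitin dynamic program for single-item uncapacitated lot-sizing:
# choose in which periods to produce so as to minimize setup + holding costs.
def wagner_whitin(demand, setup, hold):
    T = len(demand)
    # best[t] = minimum cost of covering demand in periods 0..t-1
    best = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        for s in range(t):  # last production run happens in period s, covers s..t-1
            cost = best[s] + setup
            for k in range(s, t):
                cost += hold * (k - s) * demand[k]  # carry demand[k] for k-s periods
            best[t] = min(best[t], cost)
    return best[T]

# three periods, illustrative data: one setup in period 0 covering everything wins
total = wagner_whitin([20, 30, 10], setup=50.0, hold=1.0)  # -> 100.0
```

Real two-stage models add synchronization, sequencing and capacity constraints and are solved as MIPs rather than by this simple recursion.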
Abstract:
An out-of-equilibrium Ising model subjected to an irreversible dynamics is analyzed by means of stochastic dynamics, in an effort to understand the observed critical behavior as a consequence of the intrinsic microscopic characteristics. The study focuses on the kinetic phase transitions that take place in a lattice model with inversion symmetry under the influence of two competing Glauber dynamics, and aims to describe the stationary states using the entropy production, which characterizes the system behavior and clarifies its reversibility conditions. We consider a square lattice formed by two interconnected sublattices, each of which is in contact with a heat bath at a temperature different from the other. Analytical and numerical treatments are employed, using mean-field approximations and Monte Carlo simulations. For the one-dimensional model, exact results for the entropy production were obtained, although in this case the phase transition that takes place in the two-dimensional counterpart is not observed, in accordance with the behavior shared by lattice models with inversion symmetry. Results for the stationary state show critical behavior in the same class as the equilibrium Ising model, with a second-order phase transition evidenced by a divergence of the entropy-production derivative with an exponent µ = 0.003.
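A minimal sketch of competing Glauber dynamics with two heat baths (checkerboard sublattices; the lattice size, temperatures and sweep count are illustrative assumptions, and no entropy-production measurement is included):

```python
import math, random

# Ising square lattice whose two checkerboard sublattices are attached to
# baths at temperatures T1 and T2, evolving under Glauber single-spin flips.
def glauber_two_baths(L=16, T1=1.5, T2=1.5, sweeps=200, seed=2):
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        T = T1 if (i + j) % 2 == 0 else T2  # bath depends on the sublattice
        # sum of the four nearest neighbours (periodic boundaries, J = 1)
        h = s[(i+1) % L][j] + s[(i-1) % L][j] + s[i][(j+1) % L] + s[i][(j-1) % L]
        # Glauber flip probability 1 / (1 + exp(beta * dE)) with dE = 2 s h
        p = 1.0 / (1.0 + math.exp(2.0 * s[i][j] * h / T))
        if rng.random() < p:
            s[i][j] = -s[i][j]
    return abs(sum(sum(row) for row in s)) / (L * L)  # magnetization per spin

m = glauber_two_baths()
```

With T1 = T2 this reduces to the equilibrium Glauber dynamics; setting T1 != T2 makes the stationary state a genuine nonequilibrium one, which is the regime the abstract studies.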
Abstract:
In this paper we propose a hybrid hazard regression model with threshold stress, which includes the proportional hazards and accelerated failure time models as particular cases. To describe the behavior of the lifetimes, the generalized gamma distribution is assumed, and an inverse power law model with a threshold stress is considered. For parameter estimation we develop a sampling-based posterior inference procedure based on Markov chain Monte Carlo techniques. We assume proper but vague priors for the parameters of interest. A simulation study investigates the frequentist properties of the proposed estimators obtained under the assumption of vague priors. Further, some discussion of model selection criteria is given. The methodology is illustrated on simulated and real lifetime data sets.
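The sampling-based inference step can be sketched with a generic random-walk Metropolis sampler; the exponential likelihood and vague normal prior below are simplifying stand-ins, not the paper's generalized gamma threshold model:

```python
import math, random

# Stand-in model: exponential lifetimes with rate exp(theta) and a vague
# N(0, 10^2) prior on theta = log(rate). The data are made up.
def log_post(theta, data):
    rate = math.exp(theta)
    loglik = sum(math.log(rate) - rate * t for t in data)
    logprior = -theta * theta / (2.0 * 100.0)  # vague normal prior
    return loglik + logprior

def metropolis(data, n=5000, step=0.3, seed=3):
    rng = random.Random(seed)
    theta, chain = 0.0, []
    lp = log_post(theta, data)
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)      # random-walk proposal
        lp_prop = log_post(prop, data)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

data = [0.8, 1.2, 0.5, 2.0, 1.1]
chain = metropolis(data)
# posterior mean of the rate after discarding burn-in
post_mean_rate = sum(math.exp(t) for t in chain[1000:]) / len(chain[1000:])
```

For these data the maximum-likelihood rate is 5 / 5.6, roughly 0.9, and the posterior mean under the vague prior lands in the same neighborhood.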
Abstract:
After completion of the LHC8 run in 2012, the plan is to upgrade the LHC for operation close to its design energy sqrt(s) = 14 TeV, with a goal of collecting hundreds of fb^-1 of integrated luminosity. The time is propitious to begin thinking about what is gained by even further LHC upgrades. In this report, we compute an LHC14 reach for supersymmetry in the mSUGRA/CMSSM model with an anticipated high-luminosity upgrade. We find that LHC14 with 300 (3000) fb^-1 has a reach for supersymmetry via gluino/squark searches of m(gluino) ~ 3.2 TeV (3.6 TeV) for m(squark) ~ m(gluino), and a reach of m(gluino) ~ 1.8 TeV (2.3 TeV) for m(squark) >> m(gluino). In the case where m(squark) >> m(gluino), the LHC14 reach for chargino-neutralino production with decay into the Wh + missing-E_T final state extends to m(gluino) ~ 2.6 TeV for 3000 fb^-1.
Abstract:
We study the charge dynamic structure factor of the one-dimensional Hubbard model with finite on-site repulsion U at half-filling. Numerical results from the time-dependent density matrix renormalization group are analyzed by comparison with the exact spectrum of the model. The evolution of the line shape as a function of U is explained in terms of a relative transfer of spectral weight between the two-holon continuum that dominates in the limit U -> infinity and a subset of the two-holon-two-spinon continuum that reconstructs the electron-hole continuum in the limit U -> 0. Power-law singularities along boundary lines of the spectrum are described by effective impurity models that are explicitly invariant under spin and eta-spin SU(2) rotations. The Mott-Hubbard metal-insulator transition is reflected in a discontinuous change of the exponents of edge singularities at U = 0. The sharp feature observed in the spectrum for momenta near the zone boundary is attributed to a van Hove singularity that persists as a consequence of integrability.
Abstract:
We propose an alternative, nonsingular cosmic scenario based on gravitationally induced particle production. The model is an attempt to evade the coincidence and cosmological constant problems of the standard model (Lambda CDM) and also to connect the early and late time accelerating stages of the Universe. Our space-time emerges from a pure initial de Sitter stage, thereby providing a natural solution to the horizon problem. Subsequently, due to an instability provoked by the production of massless particles, the Universe evolves smoothly to the standard radiation-dominated era, thereby ending the production of radiation, as required by conformal invariance. Next, the radiation becomes subdominant and the Universe enters the cold dark matter dominated era. Finally, the negative pressure associated with the creation of cold dark matter particles (the CCDM model) accelerates the expansion and drives the Universe to a final de Sitter stage. The late-time cosmic expansion history of the CCDM model is exactly like that of the standard Lambda CDM model; however, there is no dark energy. The model evolves between two limiting (early and late time) de Sitter regimes. All the stages are also discussed in terms of a scalar field description. This complete scenario is fully determined by two extreme energy densities, or equivalently, by the associated de Sitter Hubble scales, connected by rho_I / rho_f = (H_I / H_f)^2 ~ 10^122, a result that has no correlation with the cosmological constant problem. We also study the linear growth of matter perturbations at the final accelerating stage. It is found that the CCDM growth index can be written as a function of the Lambda growth index, gamma_Lambda ~ 6/11. In this framework, we also compare the observed growth rate of clustering with that predicted by the current CCDM model. Performing a chi^2 statistical test, we show that the CCDM model provides growth rates that match the observed growth rate of structure sufficiently well.
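As a quick numeric illustration of the growth-index relation, the linear growth rate is commonly approximated as f(z) ~ Omega_m(z)^gamma with gamma ~ 6/11 for Lambda CDM (standard flat-model expressions; the value Omega_m = 0.27 is illustrative):

```python
# Growth-rate approximation f ~ Omega_m(z)^gamma in flat Lambda CDM.
def omega_m_of_z(om0, z):
    e2 = om0 * (1 + z) ** 3 + (1 - om0)   # E^2(z) = H^2(z)/H0^2, flat LCDM
    return om0 * (1 + z) ** 3 / e2

gamma = 6.0 / 11.0
f0 = omega_m_of_z(0.27, 0.0) ** gamma     # growth rate today
f_high_z = omega_m_of_z(0.27, 3.0) ** gamma  # matter domination: f -> 1
```

At high redshift the Universe is matter dominated, Omega_m(z) approaches 1, and the growth rate approaches 1, as expected.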
Abstract:
Sugarcane (Saccharum spp.) and palm tree (Elaeis guineensis) are crops with high biofuel yields: 7.6 m^3 ha^-1 y^-1 of ethanol and 4 Mg ha^-1 y^-1 of oil, respectively. The joint production of these crops enhances the sustainability of ethanol. The objective of this work was to compare a traditional sugarcane ethanol production system (TSES) with a joint production system (JSEB), in which ethanol and biodiesel are produced at the same biorefinery but only ethanol is traded. The comparison is based on ISO 14040:2006 and ISO 14044:2006 and appropriate indicators. Production systems in Cerrado (typical savannah), Cerradao (woody savannah) and pastureland ecosystems were considered. Energy and carbon balances and land-use change impacts were evaluated. The joint system includes 100% substitution of biodiesel for diesel, which is entirely consumed in the different cropping stages. Data were collected by direct field observation and by questionnaires applied at Brazilian facilities. Three sugarcane mills in Sao Paulo State and one palm oil refinery in Para State were surveyed, and the information was supplemented by secondary sources. Results demonstrated that fossil fuel use and greenhouse gas emissions decreased, whereas energy efficiency increased, when JSEB was compared to TSES. The energy balance of JSEB was 1.7 times greater than that of TSES. In addition, JSEB released 23% fewer GHG emissions than TSES. The ecosystem carbon payback times of JSEB for Cerrado, Cerradao, and degraded grassland were 4, 7.7 and -7.6 years, respectively. These are typical land-use types of the Brazilian Cerrado region, for which JSEB was conceived.
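The two headline comparisons reduce to simple arithmetic; the absolute baselines below are arbitrary placeholders, and only the reported ratios (1.7x energy balance, 23% fewer emissions) come from the abstract:

```python
# Back-of-envelope sketch of the reported TSES vs JSEB comparisons.
tses_energy_ratio = 9.0                        # assumed output/input ratio for TSES
jseb_energy_ratio = 1.7 * tses_energy_ratio    # JSEB balance is 1.7x greater

tses_ghg = 100.0                               # arbitrary baseline emissions
jseb_ghg = tses_ghg * (1.0 - 0.23)             # JSEB emits 23% less
```

Any consistent units work here; the point is that the comparison is relative, so the placeholders cancel out of the ratios.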
Abstract:
We propose a new general Bayesian latent class model for evaluating the performance of multiple diagnostic tests in situations in which no gold standard test exists, based on a computationally intensive approach. The modeling represents an interesting and suitable alternative to models with complex structures, covering the general case of several conditionally independent diagnostic tests, covariates, and strata with different disease prevalences. Stratifying the population according to different disease prevalence rates does not add marked complexity to the modeling, but it makes the model more flexible and interpretable. To illustrate the general model, we evaluate the performance of six diagnostic screening tests for Chagas disease, considering several epidemiological variables. Serology at the time of donation (negative, positive, inconclusive) was used as a stratification factor in the model. The general model with stratification of the population performed better than its counterparts without stratification. The pair of tests from the Biomanguinhos FIOCRUZ kit (c-ELISA and rec-ELISA) is the best option in the confirmation process, presenting a false-negative rate of 0.0002% under the serial scheme: the donor is declared healthy when both tests are negative and chagasic when both are positive.
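The serial-scheme arithmetic can be sketched as follows, assuming conditional independence of the two tests; the individual false-negative rates are hypothetical, chosen so their product matches the reported 0.0002% (2 x 10^-6):

```python
# Serial confirmation scheme: a true case is missed only if BOTH tests miss it,
# so under conditional independence the scheme's false-negative rate is the
# product of the individual rates. The individual rates below are assumptions.
fn1 = 0.002          # hypothetical false-negative rate of c-ELISA (0.2%)
fn2 = 0.001          # hypothetical false-negative rate of rec-ELISA (0.1%)
serial_fn = fn1 * fn2  # both must miss the case

serial_fn_percent = serial_fn * 100.0  # expressed as a percentage
```

This multiplicative behavior is what makes serial confirmation with two reasonably sensitive tests yield such a tiny combined false-negative rate.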
Abstract:
LHC searches for supersymmetry currently focus on strongly produced sparticles, which are copiously produced if gluinos and squarks have masses of a few hundred GeV. However, in supersymmetric models with heavy scalars, as favored by the decoupling solution to the SUSY flavor and CP problems, and with m(gluino) >~ 500 GeV as indicated by recent LHC results, chargino-neutralino (W1 Z2) production is the dominant cross section for m(W1) ~ m(Z2) < m(gluino)/3 at the LHC with sqrt(s) = 7 TeV (LHC7). Furthermore, if m(Z1) + m(Z) <~ m(Z2) <~ m(Z1) + m(h), then Z2 dominantly decays via Z2 -> Z1 Z, while W1 decays via W1 -> Z1 W. We investigate the LHC7 reach in the WZ + missing-E_T channel (for both leptonic and hadronic decays of the W boson) in models with and without the assumption of gaugino mass universality. In the case of the mSUGRA/CMSSM model with heavy squark masses, the LHC7 discovery reach in the WZ + missing-E_T channel becomes competitive with the reach in the canonical missing-E_T + jets channel for integrated luminosities ~ 30 fb^-1. We also present the LHC7 reach for a simplified model with arbitrary m(Z1) and m(W1) ~ m(Z2). Here, we find a reach of up to m(W1) ~ 200 (250) GeV for 10 (30) fb^-1. (Here W1 and Z2 denote the lighter chargino and the second-lightest neutralino, and Z1 the lightest neutralino.)
Abstract:
The existence of inhomogeneities in the observed Universe modifies the distance-redshift relations, thereby affecting the results of cosmological tests in comparison with those derived assuming spatially uniform models. By modeling the inhomogeneities through a Zeldovich-Kantowski-Dyer-Roeder approach, phenomenologically characterized by a smoothness parameter alpha, we rediscuss the constraints on the cosmic parameters based on type Ia supernovae (SNe Ia) and gamma-ray burst (GRB) data. The present analysis is restricted to a flat Lambda CDM model with the reasonable assumption that Lambda does not clump. A chi^2 analysis using 557 SNe Ia from the Union2 compilation (R. Amanullah et al., Astrophys. J. 716, 712 (2010)) constrains the pair of parameters (Omega_m, alpha) to Omega_m = 0.27 +0.08/-0.03 (2 sigma) and alpha >= 0.25. A similar analysis based only on 59 Hymnium GRBs (H. Wei, J. Cosmol. Astropart. Phys. 08 (2010) 020) constrains the matter density parameter to Omega_m = 0.35 +0.62/-0.24 (2 sigma), while all values of the smoothness parameter are allowed. A joint analysis yields Omega_m = 0.27 +0.06/-0.06 and alpha >= 0.52. As a general result, although current GRB data alone cannot constrain the smoothness parameter alpha, our analysis provides an interesting cosmological probe for dark energy even in the presence of inhomogeneities.
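A skeletal version of such a chi^2 analysis can be sketched as follows (standard flat Lambda CDM distances in the fully smooth alpha = 1 case; noiseless mock data generated from Omega_m = 0.27 stand in for the Union2 sample, so the grid scan should recover that value):

```python
import math

# Dimensionless luminosity distance in flat LCDM (H0/c factor dropped,
# which only shifts the distance modulus by a constant).
def lum_dist(z, om, n=200):
    f = lambda zz: 1.0 / math.sqrt(om * (1 + zz) ** 3 + (1 - om))
    h = z / n
    s = 0.5 * (f(0.0) + f(z)) + sum(f(k * h) for k in range(1, n))  # trapezoid rule
    return (1 + z) * s * h

zs = [0.1, 0.3, 0.5, 0.8, 1.0]
obs = [5.0 * math.log10(lum_dist(z, 0.27)) for z in zs]  # noiseless mock "moduli"

def chi2(om, sigma=0.1):
    return sum(((5.0 * math.log10(lum_dist(z, om)) - m) / sigma) ** 2
               for z, m in zip(zs, obs))

grid = [0.1 + 0.01 * k for k in range(41)]  # scan Omega_m over [0.1, 0.5]
best = min(grid, key=chi2)
```

The real analysis additionally varies alpha, which enters through the ZKDR distance equation rather than the homogeneous integral used here, and marginalizes over nuisance parameters.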