9 results for continuous-time models
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
In this paper, we propose three novel mathematical models for the two-stage lot-sizing and scheduling problems present in many process industries. The problem combines a continuous or quasi-continuous production feature upstream with a discrete manufacturing feature downstream, and the two stages must be synchronized. Different time-scale representations are discussed. The first formulation uses a discrete-time representation. The second is a hybrid continuous-discrete model. The last formulation is based on a continuous-time representation. Computational tests with a state-of-the-art MIP solver show that the discrete-time representation provides better feasible solutions in short running times. On the other hand, the hybrid model achieves better solutions for longer computational times and was able to prove optimality more often. The continuous-time model is the most flexible of the three for incorporating additional operational requirements, at the cost of the worst computational performance. Journal of the Operational Research Society (2012) 63, 1613-1630. doi:10.1057/jors.2011.159, published online 7 March 2012
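The abstract's two-stage MIP models require a solver, but the core discrete-time lot-sizing trade-off they build on (one setup cost per production period versus holding cost for carried inventory) can be sketched with the classic single-stage Wagner-Whitin dynamic program. This is an illustrative simplification, not the authors' formulation; demands and costs below are made up.

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Minimum cost of meeting all demands in order, with no backlogging.

    f[t] = best cost for periods 1..t; a batch produced in period j+1
    covers periods j+1..t, paying one setup plus holding for carried units.
    """
    n = len(demand)
    f = [0.0] + [float("inf")] * n
    for t in range(1, n + 1):
        for j in range(t):  # last batch starts in period j+1
            carry = sum((k - (j + 1)) * holding_cost * demand[k - 1]
                        for k in range(j + 1, t + 1))
            f[t] = min(f[t], f[j] + setup_cost + carry)
    return f[n]

# Example: 3 periods, setup cost 50, holding cost 1 per unit per period.
# Best plan: produce for periods 1-2 together, then period 3 alone.
print(wagner_whitin([20, 30, 40], 50, 1))  # -> 130.0
```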
Abstract:
In this paper we propose a hybrid hazard regression model with threshold stress, which includes the proportional hazards and accelerated failure time models as particular cases. To express the behavior of lifetimes, the generalized-gamma distribution is assumed, and an inverse power law model with a threshold stress is considered. For parameter estimation we develop a sampling-based posterior inference procedure based on Markov chain Monte Carlo (MCMC) techniques. We assume proper but vague priors for the parameters of interest. A simulation study investigates the frequentist properties of the proposed estimators obtained under the assumption of vague priors. Further, some discussion of model selection criteria is given. The methodology is illustrated on simulated and real lifetime data sets.
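The sampling-based posterior inference the abstract describes can be sketched, in heavily simplified form, with a random-walk Metropolis sampler under a vague normal prior. This is not the authors' sampler or likelihood; the toy data, prior scale, and step size are illustrative assumptions.

```python
import math
import random

def log_post(theta, data, prior_sd=100.0):
    # Vague N(0, prior_sd^2) prior plus a normal likelihood with unit variance
    # (a stand-in for the paper's generalized-gamma lifetime likelihood).
    lp = -0.5 * (theta / prior_sd) ** 2
    ll = sum(-0.5 * (x - theta) ** 2 for x in data)
    return lp + ll

def metropolis(data, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis: propose, then accept with prob min(1, ratio)."""
    rng = random.Random(seed)
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_post(prop, data) - log_post(theta, data):
            theta = prop
        chain.append(theta)
    return chain

data = [1.8, 2.1, 2.4, 1.9, 2.2]                   # toy "lifetime" data
chain = metropolis(data)
post_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

With a vague prior the posterior mean should land near the sample mean of the data, which is the frequentist behavior the simulation study in the paper checks.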
Abstract:
In this paper we investigate the solubility of a hard-sphere gas in a solvent modeled as an associating lattice gas. The solution phase diagram for solute at 5% is compared with the phase diagram of the original solute-free model. Model properties are investigated both through Monte Carlo simulations and a cluster approximation. The model solubility is computed via simulations and is shown to exhibit a minimum as a function of temperature. The line of minimum solubility (TmS) coincides with the line of maximum density (TMD) for different solvent chemical potentials, in accordance with the literature on continuous realistic models and on the "cavity" picture. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4743635]
Abstract:
The theoretical E-curve for the laminar flow of non-Newtonian fluids in circular tubes may not be accurate for real tubular systems with diffusion, mechanical vibration, wall roughness, pipe fittings, curves, coils, or corrugated walls. Deviations from the idealized laminar flow reactor (LFR) cannot be well represented by the axial dispersion or tanks-in-series models of residence time distribution (RTD). In this work, four RTD models derived from non-ideal velocity profiles in segregated tube flow are proposed. They were used to represent the RTD of three tubular systems working with Newtonian and pseudoplastic fluids. Other RTD models were considered for comparison. The proposed models provided good fits, and it was possible to determine the active volumes. These models are expected to be useful for the analysis of LFRs and for the evaluation of continuous thermal processing of viscous foods.
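The idealized baseline the proposed models deviate from can be written down directly: for Newtonian laminar tube flow in segregated flow, the normalized E-curve is E(θ) = 1/(2θ³) for dimensionless residence time θ ≥ 0.5 (a standard textbook result, not taken from this paper). A quick numerical check confirms that it integrates to 1 with mean residence time 1.

```python
def e_curve_lfr(theta):
    """Normalized RTD of an ideal laminar flow reactor (segregated flow)."""
    return 0.0 if theta < 0.5 else 1.0 / (2.0 * theta ** 3)

def integrate(f, a, b, n=200000):
    """Midpoint-rule quadrature, good enough for this smooth tail."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

area = integrate(e_curve_lfr, 0.5, 500.0)                   # ~1: E is normalized
mean = integrate(lambda t: t * e_curve_lfr(t), 0.5, 500.0)  # ~1: mean residence time
```

Real systems with diffusion or wall effects shift mass away from this curve, which is what the four proposed models are built to capture.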
Abstract:
Objective: The purpose of this study was to investigate the rat skin penetration abilities of two commercially available low-level laser therapy (LLLT) devices during 150 sec of irradiation. Background data: Effective LLLT irradiation typically lasts from 20 sec up to a few minutes, but the LLLT time-profiles for skin penetration of light energy have not yet been investigated. Materials and methods: Sixty-two skin flaps overlying rats' gastrocnemius muscles were harvested and immediately irradiated with LLLT devices. Irradiation was performed either with an 810 nm, 200 mW continuous wave laser or with a 904 nm, 60 mW superpulsed laser, and the amount of penetrating light energy was measured by an optical power meter and registered at seven time points (range, 1-150 sec). Results: With the continuous wave 810 nm laser probe in skin contact, the amount of penetrating light energy was stable at ~20% (SEM ± 0.6) of the initial optical output during 150 sec of irradiation. However, irradiation with the superpulsed 904 nm, 60 mW laser showed a linear increase in penetrating energy from 38% (SEM ± 1.4) to 58% (SEM ± 3.5) during 150 sec of exposure. The skin penetration abilities were significantly different (p < 0.01) between the two lasers at all measured time points. Conclusions: LLLT irradiation through rat skin leaves sufficient subdermal light energy to influence pathological processes and tissue repair. The finding that superpulsed 904 nm LLLT light energy penetrates the rat skin barrier 2-3 times more easily than 810 nm continuous wave LLLT corresponds well with the results of LLLT dose analyses in systematic reviews of LLLT in musculoskeletal disorders. This may explain why differentiation between these laser types has been needed in the clinical dosage recommendations of the World Association for Laser Therapy.
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage like the GAS.
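The two-step idea (fit a time-series model per monitoring well, then interpolate the point predictions spatially) can be sketched in miniature. This is only illustrative of the workflow, not the paper's method: a trivial AR(1) forecast stands in for the time-series model, and inverse-distance weighting stands in for the geostatistical interpolation; wells, coordinates, and levels are invented.

```python
def ar1_forecast(levels, phi=0.8):
    """One-step-ahead AR(1) forecast: next = mean + phi * (last - mean)."""
    mean = sum(levels) / len(levels)
    return mean + phi * (levels[-1] - mean)

def idw(point, wells, values, power=2.0):
    """Inverse-distance-weighted interpolation of per-well forecasts."""
    num = den = 0.0
    for (x, y), v in zip(wells, values):
        d2 = (point[0] - x) ** 2 + (point[1] - y) ** 2
        if d2 == 0.0:
            return v  # exactly at a well: return its forecast
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

wells = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]           # monitoring wells
histories = [[12.0, 11.5, 11.8], [9.0, 9.4, 9.2], [15.0, 14.6, 14.9]]
forecasts = [ar1_forecast(h) for h in histories]
level_here = idw((3.0, 3.0), wells, forecasts)           # unmonitored location
```

A geostatistical treatment would replace IDW with kriging, which also delivers the prediction uncertainty the abstract emphasizes.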
Abstract:
Tropical regions, especially the Amazon region, account for large emissions of methane (CH4). Here, we present CH4 observations from two airborne campaigns conducted within the BARCA (Balanco Atmosferico Regional de Carbono na Amazonia) project in the Amazon basin in November 2008 (end of the dry season) and May 2009 (end of the wet season). We performed continuous measurements of CH4 onboard an aircraft for the first time in the Amazon region, covering the whole Amazon basin with over 150 vertical profiles between altitudes of 500 m and 4000 m. The observations support the finding of previous ground-based, airborne, and satellite measurements that the Amazon basin is a large source of atmospheric CH4. Isotope analysis verified that the majority of emissions can be attributed to CH4 emissions from wetlands, while urban CH4 emissions could also be traced back to biogenic origin. A comparison of five TM5-based global CH4 inversions with the observations clearly indicates that the inversions using SCIAMACHY observations represent the BARCA observations best. The CH4 flux estimate obtained from the mismatch between the observations and the TM5-modeled CH4 fields ranges from 36 to 43 mg m⁻² d⁻¹ for the Amazon lowland region.
Abstract:
The assessment of the thermal process impact in terms of food safety and quality is of great importance for process evaluation and design. This can be accomplished from the analysis of the residence time and temperature distributions coupled with the kinetics of thermal change, or from the use of a proper time-temperature integrator (TTI) as an indicator of safety and quality. The objective of this work was to develop and test enzymic TTIs with rapid detection for the evaluation of continuous HTST pasteurization processes (70-85 °C, 10-60 s) of low-viscosity liquid foods, such as milk and juices. The enzymes peroxidase, lactoperoxidase and alkaline phosphatase in phosphate buffer were tested, and activity was determined with commercial reflectometric strips. Discontinuous thermal treatments at various time-temperature combinations were performed in order to fit a first-order kinetic model of a two-component system. The measured time-temperature history was considered instead of assuming isothermal conditions. Experiments with slow heating and cooling were used to validate the fitted model. Only the alkaline phosphatase TTI showed potential for the evaluation of pasteurization processes. The choice was based on the obtained z-values of the thermostable and thermolabile fractions, on cost and on the validation tests. (C) 2012 Elsevier Ltd. All rights reserved.
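The two-component first-order model the abstract describes can be sketched as follows: residual enzyme activity is the sum of a thermolabile and a thermostable fraction, each decaying first-order with a rate that depends on temperature through its own D- and z-value, integrated over the measured time-temperature history rather than assumed isothermal. All kinetic constants and the history below are illustrative assumptions, not the paper's fitted values.

```python
import math

def rate(T, D_ref, z, T_ref=75.0):
    """First-order rate constant k(T) from a D-value at T_ref and a z-value."""
    D = D_ref * 10.0 ** (-(T - T_ref) / z)  # decimal reduction time, s
    return math.log(10.0) / D

def residual_activity(history, frac_labile=0.7,
                      D_lab=15.0, z_lab=6.0, D_sta=120.0, z_sta=10.0):
    """Integrate activity loss over a measured (dt, T) time-temperature history."""
    a_lab, a_sta = frac_labile, 1.0 - frac_labile
    for dt, T in history:
        a_lab *= math.exp(-rate(T, D_lab, z_lab) * dt)
        a_sta *= math.exp(-rate(T, D_sta, z_sta) * dt)
    return a_lab + a_sta

# A 30 s ramp-and-hold history instead of an isothermal assumption.
history = [(5.0, 60.0), (5.0, 70.0), (20.0, 75.0)]
remaining = residual_activity(history)
```

The labile fraction is destroyed quickly at holding temperature while the stable fraction survives, which is what lets one indicator span the whole 70-85 °C process window.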
Abstract:
In this work we compared the parameter estimates of ARCH models obtained with a complete Bayesian method and with an empirical Bayesian method, adopting a non-informative prior distribution and an informative prior distribution, respectively. We also considered a reparameterization of those models in order to map the parameter space into the real space. This procedure permits choosing normal prior distributions for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated using the Telebras series from the Brazilian financial market. The results show that both methods are able to fit ARCH models with different numbers of parameters. The empirical Bayesian method provided a more parsimonious model and a better fit to the data than the complete Bayesian method.
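The reparameterization idea can be sketched for an ARCH(1) model: the constraints omega > 0 and 0 < alpha < 1 are removed by working with log(omega) and logit(alpha), so normal priors can be placed on the unconstrained parameters. This is a generic illustration, not the authors' exact parameterization; the return series and parameter values are invented.

```python
import math
import random

def transform(log_omega, logit_alpha):
    """Map unconstrained parameters back to the ARCH(1) parameter space."""
    omega = math.exp(log_omega)                    # omega > 0
    alpha = 1.0 / (1.0 + math.exp(-logit_alpha))   # 0 < alpha < 1
    return omega, alpha

def arch1_loglik(returns, log_omega, logit_alpha):
    """Gaussian ARCH(1) log-likelihood: sigma_t^2 = omega + alpha * r_{t-1}^2."""
    omega, alpha = transform(log_omega, logit_alpha)
    ll, prev_r2 = 0.0, returns[0] ** 2
    for r in returns[1:]:
        var = omega + alpha * prev_r2
        ll += -0.5 * (math.log(2.0 * math.pi * var) + r * r / var)
        prev_r2 = r * r
    return ll

rng = random.Random(0)
returns = [rng.gauss(0.0, 1.0) for _ in range(200)]  # toy return series
ll = arch1_loglik(returns, math.log(1.0), 0.0)       # omega = 1, alpha = 0.5
```

An MCMC sampler can then propose moves in (log_omega, logit_alpha) freely, without rejecting proposals for leaving the constrained region.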