923 results for Mixed integer programming model


Relevance: 30.00%

Publisher:

Abstract:

Mixed oxide compounds, such as the TiO2-SnO2 system, are widely used as gas sensors, and modifying the TiO2 surface should also provide varistor properties. Therefore, a theoretical investigation has been carried out to characterize the effect of SnO2 addition on the electronic structure of TiO2 by means of all-electron ab initio SCF-LCAO calculations. In order to take the finite size of the cluster into account, we used a point-charge model around the (TiO2)15 cluster to study the effect of doping on the electronic structure of the TiO2 (110) surface. Contracted basis sets for the titanium (4322/42/3), oxygen (33/3) and tin (43333/4333/43) atoms were used. The charge distributions, dipole moments and densities of states for doped TiO2 and for vacancy formation are reported and analysed. (C) 2003 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

A non-linear model is presented that optimizes the layout, design and management of trickle irrigation systems so as to achieve maximum net benefit. The model consists of an objective function that maximizes profit at the farm level, subject to appropriate geometric and hydraulic constraints, and it can be applied to rectangular fields with uniform or zero slope. The software used is the GAMS-MINOS package. The basic inputs are the crop-water-production function, the cost function and costs of system components, and the design variables; the main outputs are the annual net benefit and the pipe diameters and lengths. To illustrate the capability of the model, a sensitivity analysis of the annual net benefit for a citrus field is carried out with respect to irrigated area, ground slope, micro-sprinkler discharge and field shape. The sensitivity analysis suggests that the greatest benefit is obtained with the smallest micro-sprinkler discharge, the largest area, a square field and zero ground slope. Investment and energy costs are the components of the objective function with the greatest effect in the 120 situations evaluated. (C) 1996 Academic Press Limited
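To illustrate the kind of optimization the abstract describes, the following minimal sketch maximizes a hypothetical farm-level net benefit over micro-sprinkler discharge and irrigated area using SciPy rather than GAMS-MINOS; the crop-water-production function, costs and bounds are invented for illustration and are not the paper's data.

```python
# Minimal sketch (not the authors' GAMS-MINOS model): maximize a hypothetical
# net benefit -- revenue from a made-up crop-water-production function minus
# assumed investment and energy costs -- over emitter discharge and area.
import numpy as np
from scipy.optimize import minimize

PRICE = 0.35          # hypothetical crop price ($/kg)
PIPE_COST = 1200.0    # hypothetical annualized investment cost ($/ha)
ENERGY_COST = 90.0    # hypothetical energy cost ($ per L/h of discharge per ha)

def net_benefit(x):
    """Annual net benefit ($); x = [discharge (L/h), area (ha)]."""
    discharge, area = x
    # Hypothetical diminishing-returns crop-water-production function (kg/ha).
    yield_per_ha = 40000.0 * (1.0 - np.exp(-0.08 * discharge))
    revenue = PRICE * yield_per_ha * area
    cost = (PIPE_COST + ENERGY_COST * discharge) * area
    return revenue - cost

# Simple bounds stand in for the geometric and hydraulic constraints of the real model.
res = minimize(lambda x: -net_benefit(x), x0=[20.0, 5.0],
               bounds=[(5.0, 60.0), (1.0, 10.0)], method="L-BFGS-B")
print("discharge (L/h), area (ha):", res.x, "net benefit ($):", -res.fun)
```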

Relevance: 30.00%

Publisher:

Abstract:

The main properties of realistic models for manganites are studied using analytic mean-field approximations and computational numerical methods, focusing on the two-orbital model with electrons interacting through Jahn-Teller (JT) phonons and/or Coulombic repulsions. Analyzing the model including both interactions by a combination of the mean-field approximation and the exact diagonalization method, it is argued that the spin-charge-orbital structure in the insulating phase of the purely JT-phononic model with a large Hund coupling J_H is not qualitatively changed by the inclusion of the Coulomb interactions. As an important application of the present mean-field approximation, the CE-type antiferromagnetic state, the charge-stacked structure along the z axis, and (3x^2-r^2)/(3y^2-r^2)-type orbital ordering are successfully reproduced on the basis of the JT-phononic model with large J_H for the half-doped manganite, in agreement with recent Monte Carlo simulation results. Topological arguments and the relevance of the Heisenberg exchange among localized t_2g spins explain why the inclusion of the nearest-neighbor Coulomb interaction does not destroy the charge-stacked structure. It is also verified that a phase-separation tendency is observed in both the purely JT-phononic (large J_H) and purely Coulombic models in the vicinity of the hole-undoped region, as long as realistic hopping matrices are used. This highlights the qualitative similarities of both approaches and the relevance of mixed-phase tendencies in the context of manganites. In addition, the rich and complex phase diagram of the two-orbital Coulombic model in one dimension is presented. Our results provide robust evidence that the Coulombic and JT-phononic approaches to manganites are not qualitatively different ways of carrying out theoretical calculations, but share a variety of common features.

Relevance: 30.00%

Publisher:

Abstract:

Linear mixed effects models are frequently used to analyse longitudinal data because of their flexibility in modelling the covariance structure between and within observations. Furthermore, they easily accommodate data that are unbalanced, either in the number of observations per subject or per time period, and with varying time intervals between observations. In most applications of mixed models in the biological sciences, a normal distribution is assumed both for the random effects and for the residuals, which makes inferences vulnerable to the presence of outliers. Here, linear mixed models employing thick-tailed distributions for robust inference in longitudinal data analysis are described. Specific distributions discussed include the Student-t, the slash and the contaminated normal. A Bayesian framework is adopted, and the Gibbs sampler and Metropolis-Hastings algorithms are used to carry out the posterior analyses. An example with data on orthodontic distance growth in children is discussed to illustrate the methodology, contrasting analyses based on the Student-t distribution with those based on the usual Gaussian assumption. The thick-tailed distributions provide an appealing robust alternative to the Gaussian process for modelling the distributions of the random effects and of the residuals in linear mixed models, and the MCMC implementation allows the computations to be performed in a flexible manner.
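As a rough illustration of robust Bayesian inference with a thick-tailed likelihood (not the paper's mixed model), the sketch below runs a random-walk Metropolis sampler for a single location parameter under a Student-t likelihood; the data, degrees of freedom and prior are invented.

```python
# Minimal sketch: random-walk Metropolis for a location parameter under a
# heavy-tailed Student-t likelihood, showing how thick tails damp the
# influence of an outlier relative to the plain sample mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = np.array([22.0, 23.5, 21.8, 24.1, 23.0, 35.0])  # hypothetical data; last point is an outlier

def log_post(mu, df=4.0, scale=1.5):
    # Student-t likelihood with fixed df/scale plus a vague normal prior on mu.
    return stats.t.logpdf(y, df, loc=mu, scale=scale).sum() + stats.norm.logpdf(mu, 0.0, 100.0)

mu, draws = y.mean(), []
for _ in range(20000):
    prop = mu + rng.normal(scale=0.5)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                              # accept
    draws.append(mu)

print("posterior mean of mu (t likelihood):", np.mean(draws[5000:]))
print("sample mean (Gaussian analogue):    ", y.mean())
```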

Relevance: 30.00%

Publisher:

Abstract:

The specific delayed-type hypersensitivity (DTH) response was evaluated in resistant (A/SN) and susceptible (B10.A) mice infected intraperitoneally with yeast cells from a virulent (Pb18) or a non-virulent (Pb265) Paracoccidioides brasiliensis isolate. Both strains of mice were footpad-challenged with homologous antigens. Pb18-infected A/SN mice developed an evident and persistent DTH response late in the course of the disease (from the 90th day on), whereas B10.A animals mounted a discrete and ephemeral DTH response on the 14th day post-infection. A/SN mice infected with Pb265 developed cellular immune responses, whereas B10.A mice were almost always anergic. Histological analysis of the footpads of infected mice 48 hours after challenge showed a mixed infiltrate consisting predominantly of mononuclear cells. Previous infection of resistant and susceptible mice with Pb18 did not alter their DTH responses against heterologous unrelated antigens (sheep red blood cells and dinitrofluorobenzene), indicating that the observed cellular anergy was antigen-specific. When related fungal antigens (candidin and histoplasmin) were tested in resistant mice, no cross-reactivity was noted. Thus, specific DTH responses against P. brasiliensis depend both on the host's genetically determined resistance and on the virulence of the fungal isolate.

Relevance: 30.00%

Publisher:

Abstract:

Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. The setups that are typically present in lot sizing problems are relaxed, together with the integer frequencies of the cutting patterns in the cutting problem. A large-scale linear optimization problem therefore arises, which is solved exactly by a column generation technique. It is worth noting that this combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analysed over three different scenarios. The results show that, by combining the problems and using an exact method, it is possible to obtain significant gains compared with the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
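The column generation idea mentioned in the abstract can be illustrated on the plain cutting stock LP relaxation (without the lot sizing side of the combined model): a restricted master LP is re-solved while a knapsack pricing subproblem keeps proposing cutting patterns with attractive reduced cost. The stock length, part sizes and demands below are invented.

```python
# Minimal sketch of column generation for the one-dimensional cutting stock
# LP relaxation: SciPy solves the restricted master, a DP knapsack prices
# new patterns, and columns are added while their reduced cost is negative.
import numpy as np
from scipy.optimize import linprog

STOCK = 100                            # hypothetical raw-material length
sizes = np.array([45, 36, 31, 14])     # hypothetical part lengths
demand = np.array([97, 610, 395, 211]) # hypothetical demands

# Start with one trivial pattern per part size.
patterns = [np.eye(len(sizes))[i] * (STOCK // s) for i, s in enumerate(sizes)]

def knapsack(values, weights, capacity):
    """Unbounded knapsack by dynamic programming; returns best value and item counts."""
    best = np.zeros(capacity + 1)
    choice = -np.ones(capacity + 1, dtype=int)
    for c in range(1, capacity + 1):
        for i, w in enumerate(weights):
            if w <= c and best[c - w] + values[i] > best[c]:
                best[c], choice[c] = best[c - w] + values[i], i
    counts, c = np.zeros(len(weights)), capacity
    while c > 0 and choice[c] >= 0:
        counts[choice[c]] += 1
        c -= weights[choice[c]]
    return best[capacity], counts

while True:
    A = np.column_stack(patterns)
    # Restricted master LP: minimize rolls cut while covering all demands.
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
    duals = -res.ineqlin.marginals          # dual prices of the demand rows
    value, new_pat = knapsack(duals, sizes, STOCK)
    if value <= 1.0 + 1e-9:                 # no pattern with negative reduced cost
        break
    patterns.append(new_pat)

print("LP lower bound on number of rolls:", res.fun)
```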

Relevance: 30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Publisher:

Abstract:

The generation expansion planning (GEP) problem consists in determining the type of technology, size, location and time at which new generation units must be integrated into the system, over a given planning horizon, to satisfy the forecasted energy demand. Over the past few years, owing to an increasing awareness of environmental issues, different approaches to the GEP problem have included some sort of environmental policy, typically based on emission constraints. This paper presents a linear model, in a dynamic version, for the GEP problem. The main difference between the proposed model and most of the works in the specialized literature is the way the environmental policy is envisaged. The policy includes: i) the taxation of CO2 emissions, ii) an annual Emissions Reduction Rate (ERR) for the overall system, and iii) the gradual retirement of old, inefficient generation plants. The proposed model is applied to an 11-region system to design the most cost-effective and sustainable 10-technology US energy portfolio for the next 20 years.
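As a toy illustration of the ingredients the abstract lists (investment and operating costs, a CO2 tax and an emissions limit), the sketch below solves a single-period linear expansion problem with SciPy; the technologies, costs and limits are invented and the dynamic, multi-region structure of the paper's model is omitted.

```python
# Minimal sketch of a single-period linear generation-expansion toy: choose
# new capacity and generation of three hypothetical technologies to meet
# demand at minimum investment-plus-operating cost, with taxed CO2 emissions
# and an overall emissions cap.
import numpy as np
from scipy.optimize import linprog

# Technologies: coal, gas, wind (all numbers hypothetical).
invest = np.array([1200.0, 700.0, 1500.0])   # $/MW of new capacity
fuel = np.array([25.0, 40.0, 0.0])           # $/MWh generated
emis = np.array([0.95, 0.40, 0.0])           # tCO2/MWh
tax = 30.0                                   # $/tCO2
hours = 8760.0
demand = 5_000_000.0                         # MWh/year
emis_cap = 1_500_000.0                       # tCO2/year

# Variables: x = [cap_coal, cap_gas, cap_wind, gen_coal, gen_gas, gen_wind]
c = np.concatenate([invest, fuel + tax * emis])          # objective coefficients

A_ub = np.vstack([
    np.hstack([-np.diag([hours, hours, 0.35 * hours]), np.eye(3)]),  # gen <= cap * hours (wind at 35% capacity factor)
    np.hstack([np.zeros(3), emis]),                                  # total emissions cap
])
b_ub = np.concatenate([np.zeros(3), [emis_cap]])
A_eq = np.array([[0, 0, 0, 1, 1, 1]])                                # meet annual demand
b_eq = np.array([demand])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print("new capacity (MW):", res.x[:3].round(1))
print("generation (MWh): ", res.x[3:].round(0))
print("total cost ($):   ", round(res.fun, 0))
```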

Relevance: 30.00%

Publisher:

Abstract:

The increase in the computing power of microcomputers has stimulated the development of direct-manipulation interfaces that allow the graphical representation of Linear Programming (LP) models. This work discusses the components of such a graphical interface as the basis for a system to assist users in the process of formulating LP problems. In essence, it proposes a methodology that divides the modelling task into three stages: specification of the Data Model, of the Conceptual Model and of the LP Model. The need for Artificial Intelligence techniques in problem conceptualisation and in supporting the model formulation task is illustrated.

Relevance: 30.00%

Publisher:

Abstract:

Linear mixed effects models have been widely used in the analysis of data where responses are clustered around some random effects, so that it is not reasonable to assume independence between observations in the same cluster. In most biological applications, the distributions of the random effects and of the residuals are assumed to be Gaussian, which makes inferences vulnerable to the presence of outliers. Here, linear mixed effects models with normal/independent residual distributions for robust inference are described. Specific distributions examined include univariate and multivariate versions of the Student-t, the slash and the contaminated normal. A Bayesian framework is adopted and Markov chain Monte Carlo is used to carry out the posterior analysis. The procedures are illustrated using birth weight data on rats from a toxicological experiment. Results from the Gaussian and robust models are contrasted, and it is shown how the implementation can be used for outlier detection. The thick-tailed distributions provide an appealing robust alternative to the Gaussian process in linear mixed models, and they are easily implemented using data augmentation and MCMC techniques.
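The normal/independent construction the abstract relies on can be illustrated directly: each heavy-tailed residual is a normal draw whose variance is divided by a latent mixing weight, which is exactly what data augmentation exploits. The sketch below simulates the three cases mentioned (Student-t, slash, contaminated normal) with invented parameters.

```python
# Minimal sketch of the normal/independent (scale-mixture-of-normals) family:
# gamma mixing weights give the Student-t, Beta(nu, 1) weights the slash, and
# a two-point weight the contaminated normal.
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 100_000, 1.0

# Student-t with df degrees of freedom: w ~ Gamma(df/2, rate df/2), y | w ~ N(0, sigma^2 / w)
df = 4.0
w_t = rng.gamma(df / 2.0, 2.0 / df, size=n)
y_t = rng.normal(0.0, sigma / np.sqrt(w_t))

# Slash: w ~ Beta(nu, 1), generated as Uniform(0,1)**(1/nu)
nu = 2.0
w_s = rng.uniform(size=n) ** (1.0 / nu)
y_s = rng.normal(0.0, sigma / np.sqrt(w_s))

# Contaminated normal: w = 1 with prob 1 - eps, w = lam (< 1) with prob eps
eps, lam = 0.1, 0.2
w_c = np.where(rng.uniform(size=n) < eps, lam, 1.0)
y_c = rng.normal(0.0, sigma / np.sqrt(w_c))

for name, y in [("Student-t", y_t), ("slash", y_s), ("contaminated", y_c)]:
    print(f"{name:12s}: P(|y| > 3*sigma) ~ {np.mean(np.abs(y) > 3 * sigma):.4f}  (Gaussian: 0.0027)")
```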

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a toolbox developed to read files describing a SIMULINK® model and translate them into a structural VHDL-AMS description. During the translation process, all files and the directory structure needed to simulate the translated model in the SystemVision™ environment are generated. The toolbox, named MS2SV, was tested on three models of commercially available digital-to-analogue converters. All models use an R-2R ladder network for the conversion, but the functionality of the three components differs. The model conversion methodology is presented together with a short overview of the R-2R ladder network. To evaluate the translated models, we used a sine-wave input signal, and the waveform generated by the D/A conversion process was compared by FFT analysis. The results show the viability of this type of approach. This work addresses some of the challenges set by the electronics industry for the further development of simulation methodologies and tools in the field of mixed-signal technology. © 2007 IEEE.
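As a small illustration of the kind of check the abstract describes (not the MS2SV tool itself), the sketch below models an ideal N-bit R-2R ladder DAC as Vout = Vref · code / 2^N, drives it with a sampled sine wave and inspects the output spectrum with an FFT; all parameters are invented.

```python
# Minimal sketch: ideal R-2R ladder DAC driven by a sampled sine wave, with
# the reconstructed output inspected in the frequency domain.
import numpy as np

N_BITS, VREF = 12, 3.3
FS, F_IN, N_SAMPLES = 48_000.0, 997.0, 4096

t = np.arange(N_SAMPLES) / FS
analog_in = 0.5 * VREF * (1.0 + np.sin(2.0 * np.pi * F_IN * t))   # 0..Vref sine

# Quantize to digital codes, then reconstruct through the ideal R-2R ladder.
codes = np.clip(np.round(analog_in / VREF * (2**N_BITS - 1)), 0, 2**N_BITS - 1)
dac_out = VREF * codes / 2**N_BITS

# Compare input and DAC output spectra.
spectrum = np.abs(np.fft.rfft(dac_out * np.hanning(N_SAMPLES)))
freqs = np.fft.rfftfreq(N_SAMPLES, d=1.0 / FS)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant output tone: {peak:.1f} Hz (input tone: {F_IN} Hz)")
```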

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a nonlinear model, with individual representation of plants, for the centralized long-term hydrothermal scheduling problem over multiple areas. In addition to the common aspects of long-term scheduling, this model takes transmission constraints into account. The ability to optimize hydropower exchange among multiple areas is important because it enables further minimization of complementary thermal generation costs. Moreover, by considering transmission constraints in long-term scheduling, a more precise coupling with shorter-horizon schedules can be expected, an important characteristic from both operational and economic viewpoints. The proposed model is solved with a sequential quadratic programming approach, implemented as a prototype system, for different case studies. An analysis of the benefits provided by the model is also presented. ©2009 IEEE.
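To give a flavour of the solution approach (not the paper's multi-area model), the sketch below schedules a single hypothetical reservoir over four periods with SciPy's SLSQP sequential quadratic programming routine, minimizing a quadratic thermal cost subject to a water balance and demand; all data are invented.

```python
# Minimal sketch of a toy single-reservoir hydrothermal schedule solved with
# an SQP method (SLSQP): choose hydro release per period to minimize a
# quadratic thermal-generation cost while respecting the reservoir water balance.
import numpy as np
from scipy.optimize import minimize

T = 4
demand = np.array([90.0, 110.0, 120.0, 100.0])   # MW per period (hypothetical)
inflow = np.array([40.0, 30.0, 20.0, 35.0])      # hm3 per period (hypothetical)
rho = 1.2                                        # MW per hm3 of release (hypothetical)
v0, v_min, v_max = 100.0, 50.0, 200.0            # reservoir storage limits (hm3)

def thermal_cost(release):
    thermal = demand - rho * release             # thermal generation fills the gap
    return np.sum(0.02 * thermal**2 + 5.0 * thermal)

def storage(release):
    return v0 + np.cumsum(inflow - release)      # water balance per period

constraints = [
    {"type": "ineq", "fun": lambda r: storage(r) - v_min},   # v_t >= v_min
    {"type": "ineq", "fun": lambda r: v_max - storage(r)},   # v_t <= v_max
    {"type": "ineq", "fun": lambda r: demand - rho * r},     # thermal >= 0
]
res = minimize(thermal_cost, x0=np.full(T, 20.0), method="SLSQP",
               bounds=[(0.0, 60.0)] * T, constraints=constraints)
print("hydro release per period (hm3):", res.x.round(1))
print("total thermal cost:", round(res.fun, 1))
```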

Relevance: 30.00%

Publisher:

Abstract:

This article describes the challenges programming apprentices face and identifies the elements and processes that set them apart from experienced programmers. It also explains why a conventional programming-language teaching approach fails to map onto the programming mental model. The purpose of this discussion is to draw on ideas and cognitive theories that can be embedded in programming learning tools. Cognitive components are modelled as elements to be handled by apprentices in tutoring systems while performing a programming task. In this process, a mental-level solution (the mental model of the program) and an implementation-level solution (the program) are created, and in this approach the student explicitly follows the mapping between these two representations. © 2011 IEEE.

Relevance: 30.00%

Publisher:

Abstract:

Background: Early trauma care is dependent on subjective assessments and sporadic vital sign measurements. We hypothesized that near-infrared spectroscopy-measured cerebral oxygenation (regional oxygen saturation [rSO2]) would provide a tool to detect cardiovascular compromise during active hemorrhage. We compared rSO2 with invasively measured mixed venous oxygen saturation (SvO2), mean arterial pressure (MAP), cardiac output, heart rate, and calculated pulse pressure. Methods: Six propofol-anesthetized, instrumented swine were subjected to a fixed-rate hemorrhage until cardiovascular collapse. rSO2 was monitored with noninvasive cerebral oximetry; SvO2 was measured with a fiber-optic pulmonary arterial catheter. As an assessment of the time responsiveness of each variable, we recorded the time in minutes from the start of the hemorrhage at which each variable achieved a 5%, 10%, 15%, and 20% change from baseline. Results: Mean time to cardiovascular collapse was 35 minutes ± 11 minutes (54 ± 17% of total blood volume). Cerebral rSO2 began a steady decline at an average MAP of 78 mm Hg ± 17 mm Hg, well above the expected autoregulatory threshold of cerebral blood flow. The 5%, 10%, and 15% decreases in rSO2 during hemorrhage occurred at similar times to those in SvO2, but rSO2 lagged 6 minutes behind the equivalent percentage decreases in MAP. There was a higher correlation between rSO2 and MAP (R = 0.72) than between SvO2 and MAP (R = 0.55). Conclusions: Near-infrared spectroscopy-measured rSO2 provided reproducible decreases during hemorrhage that were similar in time course to invasively measured cardiac output and SvO2 but delayed 5 to 9 minutes compared with MAP and pulse pressure. rSO2 may provide an earlier warning of worsening hemorrhagic shock, allowing prompt intervention in trauma patients when continuous arterial blood pressure measurements are unavailable. © 2012 Lippincott Williams & Wilkins.

Relevance: 30.00%

Publisher:

Abstract:

The deterministic Optimal Reactive Power Dispatch problem has been studied extensively under the assumption that the power demand and the availability of shunt reactive power compensators are known and fixed. Against this background, a two-stage stochastic optimization model is first formulated under the presumption that the load demand can be modelled by specified random parameters. A second, stochastic chance-constrained model is then presented, considering uncertainty in both the demand and the equivalent availability of shunt reactive power compensators. Simulations on six-bus and 30-bus test systems are used to illustrate the validity and essential features of the proposed models. These simulations show that the proposed models can warn the power system operator about a reactive power deficit in the system and suggest that shunt reactive sources should be dispatched to hedge against the unavailability of any reactive source. © 2012 IEEE.
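The two-stage idea can be illustrated with a tiny scenario-based stochastic linear program (far simpler than the paper's dispatch model): a first-stage shunt compensation level is fixed before the reactive demand is known, and per-scenario recourse covers the remainder; costs, scenarios and probabilities are invented.

```python
# Minimal sketch of a scenario-based two-stage stochastic LP: cheap first-stage
# shunt compensation is reserved up front, costly recourse generation covers
# whatever reactive demand remains in each scenario, and expected cost is minimized.
import numpy as np
from scipy.optimize import linprog

scenarios = np.array([80.0, 100.0, 130.0])   # hypothetical reactive demand (MVAr)
prob = np.array([0.3, 0.5, 0.2])
c_shunt, c_recourse = 1.0, 4.0               # $/MVAr: planned shunt vs. recourse
shunt_max = 90.0

# Variables: x = [shunt, r1, r2, r3]  (first-stage decision + one recourse per scenario)
c = np.concatenate([[c_shunt], c_recourse * prob])       # expected total cost
A_ub = np.hstack([-np.ones((3, 1)), -np.eye(3)])         # shunt + r_s >= demand_s
b_ub = -scenarios
bounds = [(0.0, shunt_max)] + [(0.0, None)] * 3

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("first-stage shunt (MVAr):", round(res.x[0], 1))
print("recourse per scenario:   ", res.x[1:].round(1))
print("expected cost ($):       ", round(res.fun, 2))
```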