135 results for Hyperbolic Boundary-Value Problem


Relevance: 20.00%

Publisher:

Abstract:

This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space R^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter epsilon > 0) and a slow behavior. Using an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as epsilon goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as epsilon goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
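The aggregation step described above can be sketched numerically: for a fast generator Q restricted to one class of regimes, the quasi-stationary distribution solves pi Q = 0 with pi summing to one, and regime-dependent coefficients are averaged through it. The generator and drift values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical generator of the fast chain within one class of regimes.
Q = np.array([
    [-2.0,  1.0,  1.0],
    [ 1.0, -3.0,  2.0],
    [ 0.5,  0.5, -1.0],
])

def quasi_stationary(Q):
    """Stationary distribution pi of generator Q: pi @ Q = 0, sum(pi) = 1."""
    n = Q.shape[0]
    # Stack the transposed balance equations with the normalization constraint.
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = quasi_stationary(Q)

# Regime-dependent coefficients are averaged through pi in the limit model.
drift = np.array([0.3, 1.2, 0.7])   # hypothetical per-regime drift rates
averaged_drift = float(pi @ drift)
```

In the averaged model, the three regimes of this class collapse into a single state whose dynamics use `averaged_drift`.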

Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions to be made: the optimal number of hub nodes, their locations and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP) hub nodes have no capacity constraints and non-hub nodes must be assigned to only one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic as well as a two-stage integrated tabu search heuristic to solve this problem. With multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the location and allocation parts of the problem. Computational experiments using typical benchmark problems (Civil Aeronautics Board (CAB) and Australian Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus allowing the possibility of efficiently solving larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs. Published by Elsevier Ltd.
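The allocation-improvement stage described above can be illustrated with a minimal tabu search over single-node reassignments. This is a simplified sketch, not the authors' implementation: the cost function, move set, and tenure rule below are assumptions for a toy instance.

```python
import random

def allocation_cost(alloc, dist, hubs, alpha=0.75):
    """Toy single-allocation cost: collection plus discounted inter-hub transfer."""
    n = len(alloc)
    cost = 0.0
    for i in range(n):
        for j in range(n):
            hi, hj = hubs[alloc[i]], hubs[alloc[j]]
            cost += dist[i][hi] + alpha * dist[hi][hj] + dist[hj][j]
    return cost

def tabu_search(dist, hubs, iters=200, tenure=5, seed=0):
    """Improve a random allocation by reassigning one node per iteration."""
    rng = random.Random(seed)
    n = len(dist)
    alloc = [rng.randrange(len(hubs)) for _ in range(n)]
    best, best_cost = alloc[:], allocation_cost(alloc, dist, hubs)
    tabu = {}                                   # (node, hub) -> expiry iteration
    for it in range(iters):
        cand = None
        for i in range(n):
            for h in range(len(hubs)):
                if h == alloc[i] or tabu.get((i, h), -1) >= it:
                    continue                    # skip null and tabu moves
                trial = alloc[:]
                trial[i] = h
                c = allocation_cost(trial, dist, hubs)
                if cand is None or c < cand[2]:
                    cand = (i, h, c)
        if cand is None:
            break
        i, h, c = cand
        tabu[(i, alloc[i])] = it + tenure       # forbid reversing the move
        alloc[i] = h
        if c < best_cost:
            best, best_cost = alloc[:], c
    return best, best_cost
```

A multi-start variant simply runs `tabu_search` from several seeds and keeps the cheapest allocation found.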

The facility location problem for companies with global operations is very complex and not well explored in the literature. This work proposes a MILP model that solves the problem through minimization of the total logistic cost. The main contributions of the model are the pioneering carrying-cost calculation and the treatment given to take-or-pay costs and to international tax benefits such as drawback and value-added taxes in Brazil. The model was successfully applied to a real case of a chemical company with industrial plants and sales all over the world. The model application recommended a totally new sourcing model for the company.
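On a toy scale, the trade-off such a model captures can be shown by brute force: pay each opened plant's fixed cost and serve every market from its cheapest open plant. All numbers below are invented for illustration; the real MILP also handles carrying, take-or-pay, and tax terms.

```python
from itertools import combinations

def best_plants(fixed, supply_cost, demand):
    """Exhaustive search over plant subsets (a toy stand-in for the MILP).

    fixed[p]          -- fixed cost of opening plant p
    supply_cost[p][k] -- unit cost of serving market k from plant p
    demand[k]         -- demand of market k
    """
    m = len(fixed)
    best = (float("inf"), None)
    for r in range(1, m + 1):
        for subset in combinations(range(m), r):
            cost = sum(fixed[p] for p in subset)
            # Each market buys from its cheapest open plant.
            cost += sum(d * min(supply_cost[p][k] for p in subset)
                        for k, d in enumerate(demand))
            if cost < best[0]:
                best = (cost, subset)
    return best
```

For realistic instance sizes this enumeration explodes, which is exactly why the paper formulates the problem as a MILP and hands it to a solver.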

In the 1970s and 1980s, cogeneration plants in sugarcane mills were primarily designed to consume all bagasse, and to produce steam and electricity for the process. The plants used medium-pressure steam boilers (21 bar and 300 degrees C) and backpressure steam turbines. Some plants also needed an additional fuel, as the boilers were very inefficient. At that time, sugarcane bagasse did not have an economic value, and it was considered a problem by most mills. During the 1990s and the beginning of the 2000s, the sugarcane industry faced an open market perspective; thus, there was a great necessity to reduce costs in the production processes. In addition, the economic value of by-products (bagasse, molasses, etc.) increased, and there was a possibility of selling electricity to the grid. This new scenario led to a search for more advanced cogeneration systems, based mainly on higher steam parameters (40-80 bar and 400-500 degrees C). Some authors suggest that, in the future, biomass integrated gasification combined cycles will be the best alternative for cogeneration plants in sugarcane mills. These systems might attain 35-40% efficiency for the power conversion. However, supercritical steam cycles might also attain these efficiency values, which makes them an alternative to gasification-based systems. This paper presents a comparative thermoeconomic study of these systems for sugarcane mills. The configurations studied are based on real systems that could be adapted to biomass use. Different steam consumptions in the process are considered, in order to better integrate these configurations in the mill. (C) 2009 Elsevier Ltd. All rights reserved.

This paper addresses the single machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Due to its complexity, most of the previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. With the intention of contributing to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a computational comparative study with 280 problems involving different due date scenarios. In addition, the values of optimal solutions for small problems from a known benchmark are provided.
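A minimal sketch of the branch-and-bound idea follows, under simplifying assumptions: jobs start at time zero with no inserted idle time and unit penalties, and the only pruning rule is that the partial cost of a prefix is a valid lower bound (every unscheduled job can only add a nonnegative penalty). The paper's lower bounds and pruning rules are stronger than this.

```python
def et_cost(seq, p, d, alpha=1.0, beta=1.0):
    """Total earliness/tardiness penalty of a job sequence starting at t = 0."""
    t, cost = 0, 0.0
    for j in seq:
        t += p[j]
        cost += alpha * max(0, d - t) + beta * max(0, t - d)
    return cost

def branch_and_bound(p, d, alpha=1.0, beta=1.0):
    """Depth-first branch-and-bound over job sequences with partial-cost pruning."""
    n = len(p)
    best = [float("inf"), None]

    def extend(seq, t, cost):
        if cost >= best[0]:        # prune: cost can only grow along this branch
            return
        if len(seq) == n:
            best[0], best[1] = cost, seq[:]
            return
        for j in range(n):
            if j in seq:
                continue
            c = t + p[j]
            extend(seq + [j], c,
                   cost + alpha * max(0, d - c) + beta * max(0, c - d))

    extend([], 0, 0.0)
    return best[1], best[0]
```

Because pruned branches cannot contain an improvement, the search returns the same optimum as full enumeration while visiting far fewer nodes.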

The functional relation between the decline in the rate of a physiological process and the magnitude of a stress related to soil physical conditions is an important tool for uses as diverse as assessment of the stress-related sensitivity of different plant cultivars and characterization of soil structure. Two of the most pervasive sources of stress are soil resistance to root penetration (SR) and matric potential (psi). However, the assessment of these sources of stress on physiological processes in different soils can be complicated by other sources of stress and by the strong relation between SR and psi in a soil. A multivariate boundary line approach was assessed as a means of reducing these complications. The effects of SR and psi stress conditions on plant responses were examined under growth chamber conditions. Maize plants (Zea mays L.) were grown in soils at different water contents and having different structures arising from variation in texture, organic carbon content and soil compaction. Measurements of carbon exchange (CE), leaf transpiration (LT), plant transpiration (PT), leaf area (LA), leaf + shoot dry weight (LSDW), root total length (RTL), root surface area (RSA) and root dry weight (RDW) were determined after plants reached the 12-leaf stage. The LT, PT and LA were described as a function of SR and psi with a double S-shaped function using the multivariate boundary line approach. The CE and LSDW were described by the combination of an S-shaped function for SR and a linear function for psi. The root parameters were described by a single S-shaped function for SR. The sensitivity to SR and psi depended on the plant parameter. Values of PT, LA and LSDW were most sensitive to SR. Among those parameters exhibiting a significant response to psi, PT was most sensitive.
The boundary line approach was found to be a useful tool to describe the functional relation between the decline in the rate of a physiological process and the magnitude of a stress related to soil physical conditions. (C) 2009 Elsevier B.V. All rights reserved.
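A double S-shaped boundary line of the kind fitted above can be sketched with two logistic terms, one per stress axis. All thresholds and slopes below are invented for illustration; in the study these parameters are fitted to the measured envelope of the data.

```python
import math

def s_shaped(x, x50, k):
    """Logistic decline: near 1 on the unstressed side of x50, near 0 beyond it."""
    return 1.0 / (1.0 + math.exp(k * (x - x50)))

def boundary_response(sr, psi, sr50=2.5, k_sr=2.0, psi50=-300.0, k_psi=-0.01):
    """Hypothetical double S-shaped boundary line for a relative plant response.

    sr  -- soil resistance to penetration (MPa); response falls as sr rises
    psi -- matric potential (hPa, negative); response falls as psi becomes
           more negative (note the negative slope k_psi)
    """
    return s_shaped(sr, sr50, k_sr) * s_shaped(psi, psi50, k_psi)
```

The multiplicative form encodes the boundary-line assumption that the observed rate is capped by the most limiting combination of the two stresses.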

Banana flour obtained from unripe banana (Musa acuminata, var. Nanico) under specific drying conditions was evaluated regarding its chemical composition and nutritional value. Results are expressed in dry weight (dw). The unripe banana flour (UBF) presented a high amount of total dietary fiber (DF) (56.24 g/100 g), which consisted of resistant starch (RS) (48.99 g/100 g), fructans (0.05 g/100 g) and DF without RS or fructans (7.2 g/100 g). The contents of available starch (AS) (27.78 g/100 g) and soluble sugars (1.81 g/100 g) were low. The main phytosterols found were campesterol (4.1 mg/100 g), stigmasterol (2.5 mg/100 g) and beta-sitosterol (6.2 mg/100 g). The total polyphenol content was 50.65 mg GAE/100 g. Antioxidant activity, by the FRAP and ORAC methods, was moderate: 358.67 and 261.00 μmol of Trolox equivalents/100 g, respectively. The contents of Zn, Ca and Fe and the mineral dialyzability were low. The procedure used to obtain UBF resulted in the recovery of undamaged starch granules and in a low-energy product (597 kJ/100 g).

Chagas disease is a serious health problem in Latin America. Hydroxymethylnitrofurazone (NFOH) is a nitrofurazone prodrug more active than nitrofurazone against Trypanosoma cruzi. However, NFOH presents low aqueous solubility, high photodecomposition and high toxicity. The present work is focused on the characterization of an inclusion complex of NFOH in 2-hydroxypropyl-beta-cyclodextrin (HP-beta-CD). The complexation with HP-beta-CD was investigated using reversed-phase liquid chromatography, solubility isotherms and nuclear magnetic resonance. The retention behavior was analyzed on a reversed-phase C18 column, using acetonitrile-water (20/80, v/v) as the mobile phase, in which HP-beta-CD was incorporated as a mobile phase additive. The decrease in the retention times with increasing concentrations of HP-beta-CD enabled the determination of the apparent stability constant of the complex (K = 6.2 +/- 0.3 M^-1) by HPLC. The solubility isotherm was studied and the value for the apparent stability constant (K = 7.9 +/- 0.2 M^-1) was calculated. Application of the continuous variation method indicated the presence of a complex with 1:1 NFOH:HP-beta-CD stoichiometry. The photostability study showed that the formation of an inclusion complex had a destabilizing effect on the photodecomposition of NFOH when compared to that of the "free" molecule in solution. The mobility investigation (by NMR longitudinal relaxation time) gave information about the complexation of NFOH with HP-beta-CD. In preliminary toxicity studies, cell viability tests revealed that inclusion complexes were able to decrease the toxic effect (p < 0.01) caused by NFOH. (c) 2008 Elsevier B.V. All rights reserved.
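For a 1:1 complex, the apparent stability constant from a linear (A_L-type) phase-solubility isotherm follows the standard Higuchi-Connors relation, K = slope / (S0 * (1 - slope)). The values in the check below are arbitrary, not the paper's data.

```python
def stability_constant(slope, s0):
    """Higuchi-Connors estimate of K for a 1:1 inclusion complex.

    slope -- slope of the linear phase-solubility diagram (dimensionless)
    s0    -- intrinsic solubility of the guest molecule (M)
    Returns K in M^-1.
    """
    return slope / (s0 * (1.0 - slope))
```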

Purpose - Using Brandenburger and Nalebuff's 1995 co-opetition model as a reference, the purpose of this paper is to seek to develop a tool that, based on the tenets of classical game theory, would enable scholars and managers to identify which games may be played in response to the different conflict of interest situations faced by companies in their business environments. Design/methodology/approach - The literature on game theory and business strategy is reviewed and a conceptual model, the strategic games matrix (SGM), is developed. Two novel games are described and modeled. Findings - The co-opetition model is not sufficient to realistically represent most of the conflict of interest situations faced by companies. This paper seeks to address that problem through development of the SGM, which expands upon Brandenburger and Nalebuff's model by providing a broader perspective, through incorporation of an additional dimension (power ratio between players) and three novel postures (rival, individualistic, and associative). Practical implications - This proposed model, based on the concepts of game theory, should be used to train decision- and policy-makers to better understand, interpret and formulate conflict management strategies. Originality/value - A practical and original tool to use game models in conflict of interest situations is generated. Basic classical games, such as Nash, Stackelberg, Pareto, and Minimax, are mapped on the SGM to suggest in which situations they could be useful. Two innovative games are described to fit four different types of conflict situations that so far have no corresponding game in the literature. A test application of the SGM to a classic Intel Corporation strategic management case, in the complex personal computer industry, shows that the proposed method is able to describe, to interpret, to analyze, and to prescribe optimal competitive and/or cooperative strategies for each conflict of interest situation.
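The classical games mapped on the SGM can be checked mechanically. As an illustration (not taken from the paper), a small helper that finds pure-strategy Nash equilibria of a bimatrix game; the test uses the textbook prisoner's-dilemma payoffs.

```python
def pure_nash(payoffs):
    """Pure-strategy Nash equilibria of a two-player bimatrix game.

    payoffs[i][j] = (row player's payoff, column player's payoff)
    when row plays strategy i and column plays strategy j.
    """
    rows, cols = len(payoffs), len(payoffs[0])
    eq = []
    for i in range(rows):
        for j in range(cols):
            r, c = payoffs[i][j]
            # Neither player can gain by deviating unilaterally.
            best_row = all(payoffs[k][j][0] <= r for k in range(rows))
            best_col = all(payoffs[i][k][1] <= c for k in range(cols))
            if best_row and best_col:
                eq.append((i, j))
    return eq
```

For the prisoner's dilemma, mutual defection is the unique pure equilibrium even though mutual cooperation pays both players more, which is precisely the kind of tension the SGM postures are meant to classify.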

Purpose - The purpose of this paper is to discuss the economic crisis of 2008/2009 and its major impacts on developing nations and food-producing countries. Within this macro-environment of food chains, there is concern that food inflation might come back sooner than expected. The role of China, as one of the major food consumers in the future, and of Brazil, as the major food producer, is described as the food bridge, and an agenda of common development for these countries is suggested. Design/methodology/approach - This paper reviews the literature on causes of food inflation and production shortages and investigates programs to solve the problem in the future; it is also based on the author's personal insights and experience of working in this field in the last 15 years, and on recent discussions in forums and interviews. Findings - The major factors that jointly caused the food price increase in 2007/2008 were population growth, income distribution, urbanization, dollar devaluations, commodity funds, social programs, production shortages, and biofuels. A list of ten policies is suggested: horizontal expansion of food production, vertical expansion, reduction in transaction costs, in protectionism and in other taxes, investment in logistics, technology and better coordination, contracts, a new generation of fertilizers, and use of the best sources of biofuels. Originality/value - Two major outputs of this paper are the "food demand model", which brings together in one model the trends and causes of food inflation and the solutions, and the "food bridge" concept, which aligns in one framework the imminent major food chain cooperation between China and Brazil.

This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we force the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and fewer explicit technological options, likely have effects on the results.
Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented. (C) 2009 Elsevier B.V. All rights reserved.
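The efficiency result imposed on the recursive version, a CO2 price rising at the interest rate, can be illustrated with quadratic abatement costs: equalizing discounted marginal costs across periods makes the abatement path grow geometrically. This is a stylized sketch, not the EPPA model.

```python
def optimal_abatement(total, periods, r, c=1.0):
    """Allocate a fixed abatement total so discounted marginal costs are equal.

    With cost c*a_t**2 per period, the first-order condition 2*c*a_t =
    lam*(1+r)**t makes a_t (and hence the marginal cost, the CO2 price)
    grow at the interest rate r.
    """
    weights = [(1.0 + r) ** t for t in range(periods)]
    s = sum(weights)
    return [total * w / s for w in weights]
```

Along this path the marginal cost in each period, 2*c*a_t, rises by exactly a factor (1+r) per period, the Hotelling-style rule referenced in the abstract.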

This study brings a reflection on aesthetic values, trying to consider connections between universality, social exclusion and contemporary violence.

Although a new protocol of dobutamine stress echocardiography with the early injection of atropine (EA-DSE) has been demonstrated to be useful in reducing adverse effects and increasing the number of effective tests and to have similar accuracy for detecting coronary artery disease (CAD) compared with conventional protocols, no data exist regarding its ability to predict long-term events. The aim of this study was to determine the prognostic value of EA-DSE and the effects of the long-term use of beta blockers on it. A retrospective evaluation of 844 patients who underwent EA-DSE for known or suspected CAD was performed; 309 (37%) were receiving beta blockers. During a median follow-up period of 24 months, 102 events (12%) occurred. On univariate analysis, predictors of events were the ejection fraction (p <0.001), male gender (p <0.001), previous myocardial infarction (p <0.001), angiotensin-converting enzyme inhibitor therapy (p = 0.021), calcium channel blocker therapy (p = 0.034), and abnormal results on EA-DSE (p <0.001). On multivariate analysis, the independent predictors of events were male gender (relative risk [RR] 1.78, 95% confidence interval [CI] 1.13 to 2.81, p = 0.013) and abnormal results on EA-DSE (RR 4.45, 95% CI 2.84 to 7.01, p <0.0001). Normal results on EA-DSE with beta blockers were associated with a nonsignificant higher incidence of events than normal results on EA-DSE without beta blockers (RR 1.29, 95% CI 0.58 to 2.87, p = 0.54). Abnormal results on EA-DSE with beta blockers had an RR of 4.97 (95% CI 2.79 to 8.87, p <0.001) compared with normal results, while abnormal results on EA-DSE without beta blockers had an RR of 5.96 (95% CI 3.41 to 10.44, p <0.001) for events, with no difference between groups (p = 0.36). In conclusion, the detection of fixed or inducible wall motion abnormalities during EA-DSE was an independent predictor of long-term events in patients with known or suspected CAD.
The prognostic value of EA-DSE was not affected by the long-term use of beta blockers. (C) 2008 Elsevier Inc. All rights reserved. (Am J Cardiol 2008;102:1291-1295)
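Relative risks with 95% confidence intervals of the kind reported above follow the standard log-scale formula for a 2x2 table. The counts in the check below are illustrative, not the study's data.

```python
import math

def relative_risk(a, n1, c, n2, z=1.96):
    """RR and its 95% CI from a 2x2 table.

    a / n1 -- events / total in the exposed group
    c / n2 -- events / total in the unexposed group
    Uses the usual standard error of log(RR):
    sqrt(1/a - 1/n1 + 1/c - 1/n2).
    """
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```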

Real time three-dimensional echocardiography (RT3DE) has been demonstrated to be an accurate technique to quantify left ventricular (LV) volumes and function in different patient populations. We sought to determine the value of RT3DE for evaluating patients with hypertrophic cardiomyopathy (HCM), in comparison with cardiac magnetic resonance imaging (MRI). Methods: We studied 20 consecutive patients with HCM who underwent two-dimensional echocardiography (2DE), RT3DE, and MRI. Parameters analyzed by echocardiography and MRI included: wall thickness, LV volumes, ejection fraction (LVEF), mass, geometric index, and dyssynchrony index. Statistical analysis was performed using the Lin agreement coefficient, Pearson linear correlation, and the Bland-Altman model. Results: There was excellent agreement between 2DE and RT3DE (Rc = 0.92), 2DE and MRI (Rc = 0.85), and RT3DE and MRI (Rc = 0.90) for linear measurements. Agreement indexes for LV end-diastolic and end-systolic volumes were Rc = 0.91 and Rc = 0.91 between 2DE and RT3DE, Rc = 0.94 and Rc = 0.95 between RT3DE and MRI, and Rc = 0.89 and Rc = 0.88 between 2DE and MRI, respectively. Satisfactory agreement was observed between 2DE and RT3DE (Rc = 0.75), RT3DE and MRI (Rc = 0.83), and 2DE and MRI (Rc = 0.73) for determining LVEF, with a mild underestimation of LVEF by 2DE, and smaller variability between RT3DE and MRI. Regarding LV mass, excellent agreement was observed between RT3DE and MRI (Rc = 0.96), with a bias of -6.3 g (limits of concordance = 42.22 to -54.73 g). Conclusion: In patients with HCM, RT3DE demonstrated performance superior to that of 2DE for the evaluation of myocardial hypertrophy, LV volumes, LVEF, and LV mass.
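The Lin agreement coefficient used above is the concordance correlation coefficient, which penalizes both scatter and systematic bias between two measurement methods. A minimal implementation follows; the example data in the check are arbitrary, not the study's measurements.

```python
def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements.

    Equals 1 only for perfect agreement (y == x); a constant offset or a
    slope different from 1 lowers it even when Pearson correlation is 1.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)
```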

Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.
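The paper's algorithm is a sequence of linear programming problems over a topology-optimization parameterization. As a much simpler stand-in for one linearized inversion step, a Tikhonov-regularized least-squares update from a sensitivity matrix J and a boundary-voltage residual can be sketched as follows (J and the residual below are placeholders, not finite element output):

```python
import numpy as np

def regularized_update(J, residual, lam=1e-2):
    """One linearized inverse step: solve (J^T J + lam I) dx = J^T residual.

    J        -- sensitivity of boundary voltages to resistivity parameters
    residual -- measured minus simulated boundary voltages
    lam      -- Tikhonov regularization weight (stabilizes the ill-posed step)
    """
    n = J.shape[1]
    A = J.T @ J + lam * np.eye(n)
    return np.linalg.solve(A, J.T @ residual)
```

In a full reconstruction this update would be applied iteratively, recomputing J with the finite element model (and, in the paper's method, with constraints handled by the linear programming formulation) at each step.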