967 results for Hyperbolic Boundary-Value Problem
Abstract:
We model a buyer who wishes to combine objects owned by two separate sellers in order to realize higher value. Sellers are able to avoid entering into negotiations with the buyer, so that the order in which they negotiate is endogenous. Holdout occurs if at least one of the sellers is not present in the first round of negotiations. We demonstrate that complementarity of the buyer's technology is a necessary condition for equilibrium holdout. Moreover, a rise in complementarity leads to an increased likelihood of holdout, and an increased efficiency loss. Applications include patents, the land assembly problem, and mergers.
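The necessary condition stated above (complementarity of the buyer's technology) has a simple arithmetic core: the combined objects are worth more to the buyer than the sum of their stand-alone values. A minimal sketch, with hypothetical valuation numbers not taken from the paper:

```python
# Hypothetical illustration (not the paper's model): a buyer values two
# objects at v1 and v2 separately and at v12 when combined.
def is_complementary(v1: float, v2: float, v12: float) -> bool:
    """True if the combined value exceeds the sum of stand-alone values."""
    return v12 > v1 + v2

def complementarity_degree(v1: float, v2: float, v12: float) -> float:
    """Excess of combined value over the sum of stand-alone values."""
    return v12 - (v1 + v2)

# Example: strong complementarity, as with two adjacent land parcels.
print(is_complementary(3.0, 4.0, 10.0))        # True
print(complementarity_degree(3.0, 4.0, 10.0))  # 3.0
```

In the land assembly application, for instance, two parcels worth 3 and 4 on their own may support a development worth 10 only when combined, and the paper's result says holdout can arise only in such cases.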
Abstract:
Purpose - Using Brandenburger and Nalebuff's 1995 co-opetition model as a reference, the purpose of this paper is to develop a tool that, based on the tenets of classical game theory, enables scholars and managers to identify which games may be played in response to the different conflict-of-interest situations faced by companies in their business environments. Design/methodology/approach - The literature on game theory and business strategy is reviewed and a conceptual model, the strategic games matrix (SGM), is developed. Two novel games are described and modeled. Findings - The co-opetition model is not sufficient to realistically represent most of the conflict-of-interest situations faced by companies. This paper seeks to address that problem through the development of the SGM, which expands upon Brandenburger and Nalebuff's model by providing a broader perspective, through the incorporation of an additional dimension (power ratio between players) and three novel postures (rival, individualistic, and associative). Practical implications - The proposed model, based on the concepts of game theory, can be used to train decision- and policy-makers to better understand, interpret and formulate conflict management strategies. Originality/value - A practical and original tool for applying game models to conflict-of-interest situations is generated. Basic classical games, such as Nash, Stackelberg, Pareto, and Minimax, are mapped on the SGM to suggest in which situations they could be useful. Two innovative games are described to fit four different types of conflict situations that so far have no corresponding game in the literature. A test application of the SGM to a classic Intel Corporation strategic management case, in the complex personal computer industry, shows that the proposed method is able to describe, interpret, analyze, and prescribe optimal competitive and/or cooperative strategies for each conflict-of-interest situation.
Abstract:
Purpose - The purpose of this paper is to discuss the economic crisis of 2008/2009 and its major impacts on developing nations and food-producing countries. Within this macro-environment of food chains, there is concern that food inflation might return sooner than expected. The role of China as one of the major food consumers of the future, and of Brazil as the major food producer, is described as the food bridge, and an agenda for the common development of these countries is suggested. Design/methodology/approach - This paper reviews the literature on the causes of food inflation and production shortages and investigates programs to solve the problem in the future. It is also based on the author's personal insights and experience of working in this field over the last 15 years, and on recent discussions in forums and interviews. Findings - The major factors that jointly caused the food price increases of 2007/2008 were population growth, income distribution, urbanization, dollar devaluation, commodity funds, social programs, production shortages, and biofuels. A list of ten policies is suggested: horizontal expansion of food production, vertical expansion, reduction in transaction costs, in protectionism and other taxes, investment in logistics, technology and better coordination, contracts, a new generation of fertilizers, and use of the best sources of biofuels. Originality/value - Two major outputs of this paper are the "food demand model", which brings together in one model the trends and causes of food inflation and the solutions, and the "food bridge" concept, which aligns in one framework the imminent food chain cooperation between China and Brazil.
Abstract:
This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we force the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate.) The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and fewer explicit technological options, likely have effects on the results.
Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented. (C) 2009 Elsevier B.V. All rights reserved.
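The efficiency condition imposed on the recursive version (that with banking and borrowing of allowances the CO2 price rises at the interest rate, a Hotelling-type path) can be sketched numerically. The interest rate, initial price, and linear marginal-abatement-cost curve below are illustrative assumptions, not EPPA outputs:

```python
# Sketch of the inter-temporal efficiency condition: allowance banking
# equalizes the discounted CO2 price, so the nominal price grows at the
# interest rate. All parameter values are assumed for illustration.
r = 0.05                  # interest rate (assumed)
p0 = 20.0                 # initial CO2 price, $/tonne (assumed)
prices = [p0 * (1 + r) ** t for t in range(5)]

# With an assumed linear marginal abatement cost MAC(a) = c * a,
# efficient abatement in each period is a_t = p_t / c.
c = 2.0
abatement = [p / c for p in prices]

# Discounting the price back to period 0 recovers a constant value,
# which is why no period offers an arbitrage from shifting abatement.
discounted = [p / (1 + r) ** t for t, p in enumerate(prices)]
print([round(p, 2) for p in prices])
```

Under this condition, shifting a tonne of abatement between any two periods leaves the discounted cost unchanged, which is the sense in which the recursive model is forced to mimic the forward-looking allocation.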
Abstract:
This study offers a reflection on aesthetic values, considering connections between universality, social exclusion, and contemporary violence.
Abstract:
The solidification of intruded magma in porous rocks can result in the following two consequences: (1) the heat release due to the solidification of the interface between the rock and intruded magma and (2) the mass release of the volatile fluids in the region where the intruded magma is solidified into the rock. Traditionally, the intruded magma solidification problem is treated as a moving interface (i.e. the solidification interface between the rock and intruded magma) problem to consider these consequences in conventional numerical methods. This paper presents an alternative new approach to simulate thermal and chemical consequences/effects of magma intrusion in geological systems, which are composed of porous rocks. In the proposed new approach and algorithm, the original magma solidification problem with a moving boundary between the rock and intruded magma is transformed into a new problem without the moving boundary but with the proposed mass source and physically equivalent heat source. The major advantage in using the proposed equivalent algorithm is that a fixed mesh of finite elements with a variable integration time-step can be employed to simulate the consequences and effects of the intruded magma solidification using the conventional finite element method. The correctness and usefulness of the proposed equivalent algorithm have been demonstrated by a benchmark magma solidification problem. Copyright (c) 2005 John Wiley & Sons, Ltd.
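A minimal one-dimensional sketch of the idea (a fixed grid with an equivalent heat source standing in for the moving solidification front) is given below. It uses an explicit finite-difference scheme rather than the paper's finite element formulation, and all material parameters are assumed for illustration:

```python
import numpy as np

# 1D sketch (not the paper's FE algorithm): instead of tracking the
# moving solidification interface, latent heat is released through an
# equivalent source term on a fixed grid (an enthalpy-style approach).
nx, dx, dt = 50, 0.02, 1e-4
alpha = 0.1            # thermal diffusivity (assumed)
L_over_c = 2.0         # latent heat / specific heat (assumed)
T_solidus = 0.0        # solidification temperature (assumed)

T = np.full(nx, 1.0)              # hot "magma" initially
T[0] = T[-1] = -1.0               # cold country rock at the boundaries
latent = np.full(nx, L_over_c)    # remaining latent-heat budget per node

for _ in range(2000):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T_new = T + dt * alpha * lap
    # Equivalent heat source: a node crossing the solidus releases latent
    # heat instead of cooling further, until its budget is exhausted.
    freezing = (T_new < T_solidus) & (latent > 0)
    release = np.minimum(T_solidus - T_new, latent)
    T_new[freezing] += release[freezing]
    latent[freezing] -= release[freezing]
    T_new[0] = T_new[-1] = -1.0
    T = T_new
```

The point mirrored from the paper is that the mesh never changes: the interface position is implicit in where the latent-heat budget has been exhausted, so a conventional fixed-mesh solver with a variable time step suffices.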
Abstract:
Numerical methods are used to simulate the double-diffusion driven convective pore-fluid flow and rock alteration in three-dimensional fluid-saturated geological fault zones. The double diffusion is caused by a combination of both the positive upward temperature gradient and the positive downward salinity concentration gradient within a three-dimensional fluid-saturated geological fault zone, which is assumed to be more permeable than its surrounding rocks. In order to ensure the physical meaningfulness of the obtained numerical solutions, the numerical method used in this study is validated by a benchmark problem, for which the analytical solution to the critical Rayleigh number of the system is available. The theoretical value of the critical Rayleigh number of a three-dimensional fluid-saturated geological fault zone system can be used to judge whether or not the double-diffusion driven convective pore-fluid flow can take place within the system. After the possibility of triggering the double-diffusion driven convective pore-fluid flow is theoretically validated for the numerical model of a three-dimensional fluid-saturated geological fault zone system, the corresponding numerical solutions for the convective flow and temperature are directly coupled with a geochemical system. Through the numerical simulation of the coupled system between the convective fluid flow, heat transfer, mass transport and chemical reactions, we have investigated the effect of the double-diffusion driven convective pore-fluid flow on the rock alteration, which is the direct consequence of mineral redistribution due to its dissolution, transportation and precipitation, within the three-dimensional fluid-saturated geological fault zone system. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Although a new protocol of dobutamine stress echocardiography with the early injection of atropine (EA-DSE) has been demonstrated to be useful in reducing adverse effects and increasing the number of effective tests and to have similar accuracy for detecting coronary artery disease (CAD) compared with conventional protocols, no data exist regarding its ability to predict long-term events. The aim of this study was to determine the prognostic value of EA-DSE and the effects of the long-term use of beta blockers on it. A retrospective evaluation of 844 patients who underwent EA-DSE for known or suspected CAD was performed; 309 (37%) were receiving beta blockers. During a median follow-up period of 24 months, 102 events (12%) occurred. On univariate analysis, predictors of events were the ejection fraction (p <0.001), male gender (p <0.001), previous myocardial infarction (p <0.001), angiotensin-converting enzyme inhibitor therapy (p = 0.021), calcium channel blocker therapy (p = 0.034), and abnormal results on EA-DSE (p <0.001). On multivariate analysis, the independent predictors of events were male gender (relative risk [RR] 1.78, 95% confidence interval [CI] 1.13 to 2.81, p = 0.013) and abnormal results on EA-DSE (RR 4.45, 95% CI 2.84 to 7.01, p <0.0001). Normal results on EA-DSE with beta blockers were associated with a nonsignificant higher incidence of events than normal results on EA-DSE without beta blockers (RR 1.29, 95% CI 0.58 to 2.87, p = 0.54). Abnormal results on EA-DSE with beta blockers had an RR of 4.97 (95% CI 2.79 to 8.87, p <0.001) compared with normal results, while abnormal results on EA-DSE without beta blockers had an RR of 5.96 (95% CI 3.41 to 10.44, p <0.001) for events, with no difference between groups (p = 0.36). In conclusion, the detection of fixed or inducible wall motion abnormalities during EA-DSE was an independent predictor of long-term events in patients with known or suspected CAD.
The prognostic value of EA-DSE was not affected by the long-term use of beta blockers. (C) 2008 Elsevier Inc. All rights reserved. (Am J Cardiol 2008;102:1291-1295)
Abstract:
Real time three-dimensional echocardiography (RT3DE) has been demonstrated to be an accurate technique for quantifying left ventricular (LV) volumes and function in different patient populations. We sought to determine the value of RT3DE for evaluating patients with hypertrophic cardiomyopathy (HCM), in comparison with cardiac magnetic resonance imaging (MRI). Methods: We studied 20 consecutive patients with HCM who underwent two-dimensional echocardiography (2DE), RT3DE, and MRI. Parameters analyzed by echocardiography and MRI included: wall thickness, LV volumes, ejection fraction (LVEF), mass, geometric index, and dyssynchrony index. Statistical analysis was performed with the Lin agreement coefficient, Pearson linear correlation, and the Bland-Altman model. Results: There was excellent agreement between 2DE and RT3DE (Rc = 0.92), 2DE and MRI (Rc = 0.85), and RT3DE and MRI (Rc = 0.90) for linear measurements. Agreement indexes for LV end-diastolic and end-systolic volumes were Rc = 0.91 and Rc = 0.91 between 2DE and RT3DE, Rc = 0.94 and Rc = 0.95 between RT3DE and MRI, and Rc = 0.89 and Rc = 0.88 between 2DE and MRI, respectively. Satisfactory agreement was observed between 2DE and RT3DE (Rc = 0.75), RT3DE and MRI (Rc = 0.83), and 2DE and MRI (Rc = 0.73) for determining LVEF, with a mild underestimation of LVEF by 2DE, and smaller variability between RT3DE and MRI. Regarding LV mass, excellent agreement was observed between RT3DE and MRI (Rc = 0.96), with a bias of -6.3 g (limits of agreement = 42.22 to -54.73 g). Conclusion: In patients with HCM, RT3DE demonstrated better performance than 2DE for the evaluation of myocardial hypertrophy, LV volumes, LVEF, and LV mass.
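The agreement statistics named above can be computed directly. The sketch below implements Lin's concordance correlation coefficient (Rc) and the Bland-Altman bias with 95% limits, on hypothetical LV-mass values, since the study's patient data are not reproduced here:

```python
import numpy as np

# Agreement statistics on synthetic data (not the study's measurements).
def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()           # population variances
    sxy = ((x - mx) * (y - my)).mean()    # covariance
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

def bland_altman(x, y):
    """Bland-Altman bias and 95% limits of agreement (upper, lower)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias + 1.96 * sd, bias - 1.96 * sd

rt3de = np.array([150.0, 180.0, 120.0, 160.0, 140.0])  # hypothetical LV mass, g
mri   = np.array([155.0, 186.0, 125.0, 168.0, 148.0])
print(round(lin_ccc(rt3de, mri), 3))
print(bland_altman(rt3de, mri))
```

Unlike the Pearson correlation, Lin's coefficient penalizes both scatter and systematic offset between methods, which is why the abstract reports it alongside the Bland-Altman bias.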
Abstract:
Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.
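As a toy analogue of the estimation loop described (far simpler than the paper's three-dimensional topology-optimization method, and without its linear-programming constraints), the sketch below recovers two resistances of a voltage divider from boundary voltage measurements by repeatedly linearizing the forward model. All circuit values are invented for illustration:

```python
import numpy as np

# Toy nonlinear inverse problem: estimate (R1, R2) of a loaded voltage
# divider from voltages measured under three assumed load resistors.
V_IN = 10.0
LOADS = np.array([50.0, 200.0, 1000.0])

def forward(r):
    """Measured output voltage for each load: R2 || load, in series with R1."""
    r1, r2 = r
    rp = r2 * LOADS / (r2 + LOADS)
    return V_IN * rp / (r1 + rp)

true_r = np.array([100.0, 300.0])
measured = forward(true_r)            # noise-free synthetic "measurements"

r = np.array([50.0, 50.0])            # initial guess
for _ in range(30):                   # Gauss-Newton iterations
    res = measured - forward(r)
    # Finite-difference Jacobian of the forward model.
    J = np.empty((len(measured), 2))
    for j in range(2):
        dr = np.zeros(2)
        dr[j] = 1e-4 * max(r[j], 1.0)
        J[:, j] = (forward(r + dr) - forward(r)) / dr[j]
    step, *_ = np.linalg.lstsq(J, res, rcond=None)
    r = np.maximum(r + step, 1.0)     # keep resistances physical
```

Each iteration mirrors the structure in the abstract: a forward solve (here a closed-form circuit, there a finite element potential field) followed by a constrained linear update of the impedance parameters.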
Abstract:
Background. Many resource-limited countries rely on clinical and immunological monitoring without routine virological monitoring for human immunodeficiency virus (HIV)-infected children receiving highly active antiretroviral therapy (HAART). We assessed whether HIV load had independent predictive value in the presence of immunological and clinical data for the occurrence of new World Health Organization (WHO) stage 3 or 4 events (hereafter, WHO events) among HIV-infected children receiving HAART in Latin America. Methods. The NISDI (Eunice Kennedy Shriver National Institute of Child Health and Human Development International Site Development Initiative) Pediatric Protocol is an observational cohort study designed to describe HIV-related outcomes among infected children. Eligibility criteria for this analysis included perinatal infection, age < 15 years, and continuous HAART for >= 6 months. Cox proportional hazards modeling was used to assess time to new WHO events as a function of immunological status, viral load, hemoglobin level, and potential confounding variables; laboratory tests repeated during the study were treated as time-varying predictors. Results. The mean duration of follow-up was 2.5 years; new WHO events occurred in 92 (15.8%) of 584 children. In proportional hazards modeling, a most recent viral load > 5000 copies/mL was associated with a nearly doubled risk of developing a WHO event (adjusted hazard ratio, 1.81; 95% confidence interval, 1.05-3.11; P = .033), even after adjustment for immunological status defined on the basis of CD4 T lymphocyte value, hemoglobin level, age, and body mass index. Conclusions. Routine virological monitoring using the WHO virological failure threshold of 5000 copies/mL adds independent predictive value to immunological and clinical assessments for identification of children receiving HAART who are at risk for significant HIV-related illness.
To provide optimal care, periodic virological monitoring should be considered for all settings that provide HAART to children.
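The Cox proportional-hazards machinery used in the analysis can be illustrated at small scale. The sketch below fits a single binary covariate (viral load above the 5000 copies/mL threshold) by Newton-Raphson on the partial likelihood; the event times and covariates are hypothetical, not NISDI data:

```python
import math

# Hypothetical cohort: months to event and a baseline indicator for
# viral load above the 5000 copies/mL threshold (no censoring, no ties).
times   = [2, 3, 5, 7, 8, 11, 12, 15]
high_vl = [1, 1, 0, 1, 0, 1, 0, 0]

def score_and_hessian(beta):
    """First and second derivatives of the Cox log partial likelihood."""
    g = h = 0.0
    order = sorted(range(len(times)), key=lambda i: times[i])
    for k, i in enumerate(order):
        risk = order[k:]                      # subjects still event-free
        s0 = sum(math.exp(beta * high_vl[j]) for j in risk)
        s1 = sum(high_vl[j] * math.exp(beta * high_vl[j]) for j in risk)
        g += high_vl[i] - s1 / s0
        h -= (s1 / s0) * (1 - s1 / s0)        # binary covariate: x**2 == x
    return g, h

beta = 0.0
for _ in range(25):                            # Newton-Raphson
    g, h = score_and_hessian(beta)
    beta -= g / h
print(round(math.exp(beta), 2))               # estimated hazard ratio
```

The exponentiated coefficient is the hazard ratio, which is the quantity the abstract reports (adjusted hazard ratio 1.81 for high viral load); the full analysis additionally treats repeated laboratory values as time-varying covariates and adjusts for immunological status, hemoglobin, age, and body mass index.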
Abstract:
Background Congenital adrenal hyperplasia caused by classic 21-hydroxylase deficiency (21OHD) is an autosomal recessive disorder with a high prevalence of asymptomatic heterozygote carriers (HTZ) in the general population, making case detection desirable by routine methodology. HTZ for classic and nonclassic (NC) forms have basal and ACTH-stimulated values of 17-hydroxyprogesterone (17OHP) that fail to discriminate them from the general population. 21-Deoxycortisol (21DF), an 11-hydroxylated derivative of 17OHP, is an alternative approach to identify 21OHD HTZ. Objective To determine the discriminating value of basal and ACTH-stimulated serum levels of 21DF in comparison with 17OHP in a population of HTZ for 21OHD (n = 60), as well as in NC patients (n = 16) and in genotypically normal control subjects (CS, n = 30), using fourth generation tandem mass spectrometry after HPLC separation (LC-MS/MS). Results Basal 21DF levels were not different between HTZ and CS, but stimulated values were increased in the former and virtually nonresponsive in CS. Only 17.7% of the ACTH-stimulated 21DF levels overlapped with CS, compared to 46.8% for 17OHP. For 100% specificity, the sensitivities achieved for ACTH-stimulated 21DF, 17OHP and the quotient [(21DF + 17OHP)/F] were 82.3%, 53.2% and 87%, using cut-offs of 40 ng/dl, 300 ng/dl and 46 (unitless), respectively. Similar to 17OHP, ACTH-stimulated 21DF levels did not overlap between HTZ and NC patients. A positive and highly significant correlation (r = 0.846; P < 0.001) was observed between 21DF and 17OHP pairs of values from NC and HTZ. Conclusion This study confirms the superiority of ACTH-stimulated 21DF, compared to 17OHP, both measured by LC-MS/MS, in identifying carriers for 21OHD. Serum 21DF is a useful tool in genetic counselling to screen carriers among relatives in families with affected subjects, giving support to molecular results.
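The cut-off logic behind the reported sensitivities is straightforward: at 100% specificity the cut-off must lie above every control value, and sensitivity is then the fraction of carriers exceeding it. A sketch with hypothetical hormone values (not the study's measurements):

```python
# Hypothetical illustration of "sensitivity at 100% specificity".
def sensitivity_at_full_specificity(controls, carriers):
    """Smallest cut-off with zero false positives, and the resulting
    fraction of carriers classified correctly."""
    cutoff = max(controls)                 # no control may exceed the cut-off
    sens = sum(v > cutoff for v in carriers) / len(carriers)
    return cutoff, sens

controls = [5, 8, 12, 20, 35, 40]          # ACTH-stimulated 21DF, ng/dl (invented)
carriers = [30, 45, 60, 80, 95, 120, 150, 38, 200, 90]
cutoff, sens = sensitivity_at_full_specificity(controls, carriers)
print(cutoff, sens)   # 40 0.8
```

In these invented numbers the cut-off lands at 40 ng/dl and two carriers fall below it, giving 80% sensitivity; the study reports the analogous figures (cut-off 40 ng/dl, sensitivity 82.3%) for its real 21DF data.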
Abstract:
An efficient system is now in place for improving diverse sugarcane cultivars by genetic transformation, that is, the insertion of useful new genes into single cells followed by the regeneration of genetically modified (transgenic) plants. The method has already been used to introduce genes for resistance to several major diseases, insect pests and a herbicide. Field testing has begun, and research is underway to identify other genes for increased environmental stress resistance, agronomic efficiency and yield of sucrose or other valuable products. Experience in other crops has shown that genetically improved varieties which provide genuine environmental and consumer benefits are welcomed by producers and consumers. Substantial research is still needed, but these new gene technologies will reshape the sugar industry and determine the international competitive efficiency of producers.
Abstract:
In this paper, experiments to detect turbulent spots in the transitional boundary layers formed on a flat plate in a free-piston shock tunnel flow are reported. The experiments indicate that thin-film heat-transfer gauges are suitable for identifying turbulent-spot activity and can be used to identify parameters such as the convection rate of spots and the intermittency of turbulence.
Abstract:
The value of a seasonal forecasting system based on phases of the Southern Oscillation was estimated for a representative dryland wheat grower in the vicinity of Goondiwindi. In particular, the effects of risk attitude and planting conditions on this estimate were examined. A recursive stochastic programming approach was used to identify the grower's utility-maximising action set in the event of each of the climate patterns over the period 1894-1991 recurring in the imminent season. The approach was repeated with and without use of the forecasts. The choices examined were, at planting, nitrogen application rate and cultivar and, later in the season, the choice of proceeding with or abandoning each wheat activity. The value of the forecasting system was estimated as the maximum amount the grower could afford to pay for its use without expected utility being lowered relative to its non-use.
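The valuation logic of the final sentence can be sketched directly: the forecast's value is the largest fee that leaves expected utility no lower than the no-forecast baseline. The per-season payoffs, utility function, and risk parameter below are illustrative assumptions, not the study's data:

```python
import math

# Stylized sketch of forecast valuation under risk aversion.
def utility(w, r=0.0005):
    """Constant-absolute-risk-aversion utility (assumed form)."""
    return -math.exp(-r * w)

# Profit in each historical climate year under the best fixed action
# (no forecast) and under the utility-maximising action per SOI phase.
profits_no_forecast   = [800, 1200, 400, 1500, 900]
profits_with_forecast = [950, 1300, 700, 1500, 1000]

def expected_utility(profits, fee=0.0):
    return sum(utility(p - fee) for p in profits) / len(profits)

baseline = expected_utility(profits_no_forecast)

# Bisect for the fee at which the grower is indifferent between paying
# for the forecasts and farming without them.
lo, hi = 0.0, 1000.0
for _ in range(60):
    mid = (lo + hi) / 2
    if expected_utility(profits_with_forecast, mid) >= baseline:
        lo = mid
    else:
        hi = mid
print(round(lo, 2))   # forecast value, $ per season
```

With a risk-averse utility, this indifference fee generally differs from the simple difference in mean profits, which is how risk attitude enters the estimate in the abstract.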