966 results for Two-point boundary value problems
Abstract:
Purpose: To evaluate the feasibility, determine the optimal b-value, and assess the utility of 3-T diffusion-weighted MR imaging (DWI) of the spine in differentiating benign from pathologic vertebral compression fractures. Methods and Materials: Twenty patients with 38 vertebral compression fractures (24 benign, 14 pathologic) and 20 controls (total: 23 men, 17 women, mean age 56.2 years) were included from December 2010 to May 2011 in this IRB-approved prospective study. MR imaging of the spine was performed on a 3-T unit with T1-w, fat-suppressed T2-w, gadolinium-enhanced fat-suppressed T1-w and zoomed-EPI (2D RF excitation pulse combined with reduced field-of-view single-shot echo-planar readout) diffusion-w (b-values: 0, 300, 500 and 700 s/mm²) sequences. Two radiologists independently assessed zoomed-EPI image quality in random order using a 4-point scale: 1 = excellent to 4 = poor. They subsequently measured apparent diffusion coefficients (ADCs) in normal vertebral bodies and compression fractures, in consensus. Results: Lower b-values correlated with better image quality scores, with significant differences between b = 300 (mean ± SD = 2.6 ± 0.8), b = 500 (3.0 ± 0.7) and b = 700 (3.6 ± 0.6) (all p < 0.001). Mean ADCs of normal vertebral bodies (n = 162) were 0.23, 0.17 and 0.11 × 10⁻³ mm²/s with b = 300, 500 and 700 s/mm², respectively. In contrast, mean ADCs were 0.89, 0.70 and 0.59 × 10⁻³ mm²/s for benign vertebral compression fractures and 0.79, 0.66 and 0.51 × 10⁻³ mm²/s for pathologic fractures with b = 300, 500 and 700 s/mm², respectively. No significant difference was found between the ADCs of benign and pathologic fractures. Conclusion: 3-T DWI of the spine is feasible and lower b-values (300 s/mm²) are recommended. However, our preliminary results show no advantage of DWI in differentiating benign from pathologic vertebral compression fractures.
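The reported ADCs come from the monoexponential diffusion model S(b) = S0·exp(−b·ADC). A minimal sketch of how such values can be computed from per-voxel signal measurements (the study's exact fitting procedure is not stated; the log-linear least-squares fit below is an assumption, and the synthetic signals are illustrative):

```python
import math

def adc_from_signals(b_values, signals):
    """ADC from a monoexponential fit S(b) = S0 * exp(-b * ADC):
    ADC is minus the least-squares slope of ln S against b."""
    n = len(b_values)
    mean_b = sum(b_values) / n
    log_s = [math.log(s) for s in signals]
    mean_log = sum(log_s) / n
    slope = (sum((b - mean_b) * (y - mean_log) for b, y in zip(b_values, log_s))
             / sum((b - mean_b) ** 2 for b in b_values))
    return -slope  # units of 1/b, e.g. mm^2/s when b is in s/mm^2

# Noise-free synthetic signals for ADC = 0.8e-3 mm^2/s at the study's b-values
b = [0, 300, 500, 700]
s = [math.exp(-bi * 0.8e-3) for bi in b]
adc = adc_from_signals(b, s)
```

With real data the fit would be done per voxel or per region of interest, and noise at high b-values is one reason image quality (and ADC reliability) degrades there.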
Application of standard and refined heat balance integral methods to one-dimensional Stefan problems
Abstract:
The work in this paper concerns the study of conventional and refined heat balance integral methods for a number of phase change problems. These include standard test problems, both with one and two phase changes, which have exact solutions to enable us to test the accuracy of the approximate solutions. We also consider situations where no analytical solution is available and compare these to numerical solutions. It is popular to use a quadratic profile as an approximation of the temperature, but we show that a cubic profile, seldom considered in the literature, is far more accurate in most circumstances. In addition, the refined integral method can give greater improvement still and we develop a variation on this method which turns out to be optimal in some cases. We assess which integral method is better for various problems, showing that it is largely dependent on the specified boundary conditions.
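Accuracy for the standard one-phase test problem is judged against the exact (Neumann) similarity solution, whose melt front is s(t) = 2λ√(κt) with λ solving the transcendental equation √π λ e^{λ²} erf(λ) = Ste. A minimal sketch of computing this benchmark value (the bisection bracket and the Stefan number are illustrative assumptions):

```python
import math

def neumann_lambda(stefan, lo=1e-9, hi=5.0):
    """Solve sqrt(pi)*lam*exp(lam**2)*erf(lam) = Ste by bisection;
    the exact melt front is then s(t) = 2*lam*sqrt(kappa*t)."""
    def f(lam):
        return math.sqrt(math.pi) * lam * math.exp(lam ** 2) * math.erf(lam) - stefan
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam = neumann_lambda(0.1)  # Stefan number 0.1 (illustrative)
```

For small Stefan numbers the equation gives λ ≈ √(Ste/2), which is also the limit the heat balance integral approximations should reproduce.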
Abstract:
The paper develops a stability theory for the optimal value and the optimal set mapping of optimization problems posed in a Banach space. The problems considered have an arbitrary number of inequality constraints involving lower semicontinuous (not necessarily convex) functions and one closed abstract constraint set. The considered perturbations lead to problems of the same type as the nominal one (with the same space of variables and the same number of constraints), where the abstract constraint set can also be perturbed. The spaces of functions involved in the problems (objective and constraints) are equipped with the metric of uniform convergence on bounded sets, while in the space of closed sets we consider, coherently, the Attouch-Wets topology. The paper examines, in a unified way, the lower and upper semicontinuity of the optimal value function, and the closedness and the lower and upper semicontinuity (in the sense of Berge) of the optimal set mapping. This paper can be seen as a second part of the stability theory presented in [17], where we studied the stability of the feasible set mapping (completed here with the analysis of the Lipschitz-like property).
Abstract:
In this paper we present a new, accurate form of the heat balance integral method, termed the Combined Integral Method (or CIM). The application of this method to Stefan problems is discussed. For simple test cases the results are compared with exact and asymptotic limits. In particular, it is shown that the CIM is more accurate than the second order, large Stefan number, perturbation solution for a wide range of Stefan numbers. In the initial examples it is shown that the CIM reduces the standard problem, consisting of a PDE defined over a domain specified by an ODE, to the solution of one or two algebraic equations. The latter examples, where the boundary temperature varies with time, reduce to a set of three first order ODEs.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution.
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
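A gradual-deformation proposal combines the current Gaussian realization with an independent draw using cos/sin weights, so the proposal remains consistent with the Gaussian prior while the angle controls the perturbation strength. A minimal sketch of one such update (standard-normal parameters and the single-angle form are simplifying assumptions; the thesis variant need not match this exactly):

```python
import math
import random

def gradual_deformation(m_current, theta, rng):
    """Propose a new standard-Gaussian realization by blending the current
    one with an independent draw; since cos^2 + sin^2 = 1, the prior
    variance is preserved, and theta sets the perturbation strength."""
    m_indep = [rng.gauss(0.0, 1.0) for _ in m_current]
    c, s = math.cos(theta), math.sin(theta)
    return [c * a + s * b for a, b in zip(m_current, m_indep)]

rng = random.Random(0)
m0 = [rng.gauss(0.0, 1.0) for _ in range(1000)]
m1 = gradual_deformation(m0, 0.3, rng)  # small angle -> small MCMC step
```

Because theta tunes the step size continuously, the acceptance rate of the MCMC chain can be controlled without resimulating whole subsets of the model, which is where sequential resampling tends to become inefficient under strong spatial correlation.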
Abstract:
It is well established that immunity to malaria is short-lived and is maintained by continuous contact with the parasite. We now show that the stable transmission of malaria in Yanomami Amerindian communities maintains a degree of immunity in the exposed population capable of reducing the prevalence and morbidity of malaria. We examined 508 Yanomami Amerindians living along the Orinoco (407) and Mucajaí (101) rivers, in the Venezuelan and Brazilian Amazon regions, respectively. At the Orinoco villages, malaria was hyperendemic and presented stable transmission, while at the Mucajaí villages it was mesoendemic and showed unstable transmission. The frequency of Plasmodium vivax and P. falciparum was roughly comparable in the Venezuelan and Brazilian communities. Malaria presented different profiles at the Orinoco and Mucajaí villages. In the former communities, malaria showed a lower prevalence (16% vs. 40.6%), particularly among those over 10 years old (5.2% vs. 34.8%), a higher frequency of asymptomatic cases (38.5% vs. 4.9%), and a lower frequency of cases of severe malaria (9.2% vs. 36.5%). Orinoco villagers also showed a higher reactivity of the immune system, measured by the frequency of splenomegaly (72.4% vs. 29.7%) and by the splenic index (71.4% vs. 28.6% over level 1), and a higher prevalence (91.1% vs. 72.1%) and mean titer (1243 vs. 62) of antiplasmodial IgG antibodies, as well as a higher prevalence (77.4% vs. 24.7%) and mean titer (120 vs. 35) of antiplasmodial IgM antibodies. Our findings show that in isolated Yanomami communities the stability of malaria transmission, and the consequent continuous activation of the immune system of the exposed population, leads to a reduction of malaria prevalence and morbidity.
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
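The statistical learning algorithm in such adaptive-learning models is typically recursive least squares. A scalar constant-gain sketch of one belief update (the gain value, scalar state, and convergence exercise below are illustrative assumptions, not the paper's calibration):

```python
def learning_update(b, R, x, y, gain):
    """One recursive least-squares step with constant gain: update the
    second-moment estimate R of the regressor, then move the belief b
    toward the forecast error (y - b*x), scaled by gain and regressor."""
    R = R + gain * (x * x - R)
    b = b + gain * (x / R) * (y - b * x)
    return b, R

# With stationary data y = 1.0 * x, beliefs converge toward the truth
b, R = 0.0, 1.0
for _ in range(200):
    b, R = learning_update(b, R, x=1.0, y=1.0, gain=0.05)
```

In a two-sided setting each side runs its own recursion on its own observed data, which is how asymmetric information sets can produce the divergent dynamics the paper documents.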
Abstract:
In a competitive world, the way a firm establishes its organizational arrangements may determine the enhancement of its core competences and the possibility of reaching new markets. Firms that find their skills to be applicable in just one type of market encounter constraints in expanding their markets, and through alliances may find a competitive form of value capture. Hybrid forms of organization appear primarily as an alternative to capturing value and managing joint assets when the market and hierarchy modes do not present any yields for the firm's competitiveness. As a result, this form may present other challenging issues, such as the allocation of rights and principal-agent problems. The biofuel market has presented a strong pattern of changes over the last 10 years. New intra-firm arrangements have appeared as a path to participate or survive among global competition. Given the need for capital to achieve better results, there has been a consistent movement of mergers and acquisitions in the Biofuel sector, especially since the 2008 financial crisis. In 2011 there were five major groups in Brazil with a grinding capacity of more than 15 million tons per year: Raízen (joint venture formed by Cosan and Shell), Louis Dreyfus, Tereos Petrobras, ETH, and Bunge. Major oil companies have implemented the strategy of diversification as a hedge against the rising cost of oil. Using the alliance of Cosan and Shell in the Brazilian biofuel market as a case study, this paper analyses the governance mode and challenging issues raised by strategic alliances when firms aim to reach new markets through the sharing of core competences with local firms. The article is based on documentary research and interviews with Cosan's Investor Relations staff, and examines the main questions involving hybrid forms through the lens of the Transaction Cost Economics (TCE), Agency Theory, Resource Based View (RBV), and dynamic capabilities theoretical approaches. 
One focal point is knowledge "appropriability" and the specific assets originated by the joint venture. Once the alliance is formed, it is expected that competences will be shared and new capabilities will expand the limits of the firm. In the case studied, Cosan and Shell shared a number of strategic assets related to their competences. Raízen was formed with economizing incentives, as well as to continue marshalling internal resources to enhance the company's presence in the world energy sector. Therefore, some challenges relate to controlling and monitoring agents' behavior, considering the two-part organism formed by distinctive organizational cultures, tacit knowledge, and long-term incentives. The case study analyzed illustrates the hybrid arrangement as a middle form for organizing the transaction: neither in the market nor in the hierarchy mode, but rather a more flexible commitment agreement with a strategic central authority. The corporate governance devices are also a challenge, since the alignment between the parent companies in a joint venture is far more complex. These characteristics have led to an organism with bilateral dependence, offering favorable conditions for developing dynamic capabilities. However, these conditions might rely on the partners' long-term interest in the joint venture.
Abstract:
The application of the Fry method to measure strain in deformed porphyritic granites is discussed. The method requires that the distribution of markers satisfy at least two conditions: it must be homogeneous and isotropic. Homogeneity can easily be tested with statistics on the point distribution using a Morishita diagram. Isotropy can be checked with a cumulative histogram of the angles between points. Application of these tests to an undeformed (Mte Capanne granite, Elba) and a deformed (Randa orthogneiss, Alps of Switzerland) porphyritic granite reveals that their K-feldspar phenocrysts satisfy both conditions and can be used as strain markers with the Fry method. Other problems are also examined; one is the possible distribution of deformation on discrete shear bands. Provided these tests are met, we conclude that the Fry method can be used to estimate strain in deformed porphyritic granites. (c) 2006 Elsevier Ltd. All rights reserved.
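The Fry method itself plots every inter-point difference vector of the marker centres about a common origin; the central vacancy of the resulting cloud then approximates the finite-strain ellipse. A minimal sketch of generating the Fry-plot points (the centre coordinates are assumed given, e.g. digitized phenocryst centres; this is only the plotting step, not the homogeneity/isotropy tests):

```python
def fry_plot_points(centres):
    """All ordered inter-point difference vectors between marker centres.
    Plotted about the origin, their central vacancy approximates the
    finite-strain ellipse."""
    return [(xj - xi, yj - yi)
            for i, (xi, yi) in enumerate(centres)
            for j, (xj, yj) in enumerate(centres)
            if i != j]

pts = fry_plot_points([(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)])
```

For n markers this yields n(n−1) points, symmetric about the origin; the homogeneity and isotropy tests described above are what justify reading strain from the vacancy.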
Abstract:
Four-lane undivided roadways in urban areas can experience a degradation of service and/or safety as traffic volumes increase. In fact, the presence of turning vehicles on this type of roadway has a dramatic effect on both of these factors. The solution identified for these problems is typically the addition of a raised median or a two-way left-turn lane (TWLTL). The mobility and safety benefits of these actions have been proven and are discussed in the “Past Research” chapter of this report, along with some general cross section selection guidelines. The cost and right-of-way impacts of these actions are also widely accepted. These guidelines focus on the evaluation and analysis of an alternative to the typical four-lane undivided cross section improvement approach described above. It has been found that the conversion of a four-lane undivided cross section to three lanes (i.e., one lane in each direction and a TWLTL) can improve safety and maintain an acceptable level of service. These guidelines summarize the results of past research in this area (which is almost nonexistent) and the qualitative/quantitative before-and-after safety and operational impacts of case study conversions located throughout the United States and Iowa. Past research confirms that this type of conversion is acceptable or feasible in some situations but for the most part fails to specifically identify those situations. In general, the reviewed case study conversions resulted in a reduction of average or 85th percentile speeds (typically less than five miles per hour), a relatively dramatic reduction in excessive speeding (a 60 to 70 percent reduction in the number of vehicles traveling five miles per hour faster than the posted speed limit was measured in two cases), and a reduction in total crashes (reductions of 17 to 62 percent were measured).
The 13 roadway conversions considered had average daily traffic volumes of 8,400 to 14,000 vehicles per day (vpd) in Iowa and 9,200 to 24,000 vehicles per day elsewhere. In addition to past research and case study results, a simulation sensitivity analysis was completed to investigate and/or confirm the operational impacts of a four-lane undivided to three-lane conversion. First, the advantages and disadvantages of different corridor simulation packages were identified for this type of analysis. Then, the CORridor SIMulation (CORSIM) software was used to investigate and evaluate several characteristics related to the operational feasibility of a four-lane undivided to three-lane conversion. Simulated speed and level of service results for both cross sections were documented for different total peak-hour traffic volumes, access densities, and access-point left-turn volumes (for a case study corridor defined by the researchers). These analyses assisted with the identification of the considerations for the operational feasibility determination of a four-lane to three-lane conversion. The results of the simulation analyses primarily confirmed the case study impacts. The CORSIM results indicated that only a slight decrease in average arterial speed for through vehicles can be expected for a large range of peak-hour volumes, access densities, and access-point left-turn volumes (given the assumptions and design of the corridor case study evaluated). Typically, the reduction in the simulated average arterial speed (which includes both segment and signal delay) was between zero and four miles per hour when a roadway was converted from a four-lane undivided to a three-lane cross section. The simulated arterial level of service for a converted roadway, however, showed a decrease when the bi-directional peak-hour volume was about 1,750 vehicles per hour (or 17,500 vehicles per day if 10 percent of the daily volume is assumed to occur in the peak hour).
Past research by others, however, indicates that 12,000 vehicles per day may be the operational capacity (i.e., level of service E) of a three-lane roadway due to vehicle platooning. The simulation results, along with past research and case study results, appear to support the following volume-related feasibility suggestions for four-lane undivided to three-lane cross section conversions. It is recommended that a four-lane undivided to three-lane conversion be considered a feasible (with respect to volume only) option when bi-directional peak-hour volumes are less than 1,500 vehicles per hour, but that some caution begin to be exercised when the roadway has a bi-directional peak-hour volume between 1,500 and 1,750 vehicles per hour. At and above 1,750 vehicles per hour, the simulation indicated a reduction in arterial level of service. Therefore, at least in Iowa, the feasibility of a four-lane undivided to three-lane conversion should be questioned and/or considered much more closely when a roadway has (or is expected to have) a peak-hour volume of more than 1,750 vehicles. Assuming that 10 percent of the daily traffic occurs during the peak hour, these volume recommendations would correspond to 15,000 and 17,500 vehicles per day, respectively. These suggestions, however, are based on the results from one idealized case study corridor analysis. Individual operational analyses and/or simulations should be completed in detail once a four-lane undivided to three-lane cross section conversion is considered feasible (based on the general suggestions above) for a particular corridor. All of the simulations completed as part of this project also incorporated the optimization of signal timing to minimize vehicle delay along the corridor. A number of feasibility determination factors were identified from a review of the past research, before-and-after case study results, and the simulation sensitivity analysis.
The existing and expected (i.e., design period) statuses of these factors are described and should be considered. The characteristics of these factors should be compared to each other, to the impacts of other potentially feasible cross section improvements, and to the goals/objectives of the community. The factors discussed in these guidelines include:
• roadway function and environment
• overall traffic volume and level of service
• turning volumes and patterns
• frequent-stop and slow-moving vehicles
• weaving, speed, and queues
• crash type and patterns
• pedestrian and bike activity
• right-of-way availability, cost, and acquisition impacts
• general characteristics, including
  - parallel roadways
  - offset minor street intersections
  - parallel parking
  - corner radii
  - at-grade railroad crossings
The characteristics of these factors are documented in these guidelines, and their relationship to four-lane undivided to three-lane cross section conversion feasibility is identified. This information is summarized, along with some evaluative questions, in this executive summary and Appendix C. In summary, the results of past research, numerous case studies, and the simulation analyses done as part of this project support the conclusion that in certain circumstances a four-lane undivided to three-lane conversion can be a feasible alternative for the mitigation of operational and/or safety concerns. This feasibility, however, must be determined by an evaluation of the factors identified in these guidelines (along with any others that may be relevant for an individual corridor). The expected benefits, costs, and overall impacts of a four-lane undivided to three-lane conversion should then be compared to the impacts of other feasible alternatives (e.g., adding a raised median) at a particular location.
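The volume thresholds in the report tie peak-hour and daily figures together through the assumed 10 percent peak-hour share of daily traffic. A one-line sketch of that conversion (the k-factor is the report's stated assumption; real corridors should use a measured value):

```python
def daily_volume(peak_hour_volume, k_factor=0.10):
    """Daily traffic implied by a bi-directional peak-hour volume when the
    peak hour carries k_factor of daily traffic (the report assumes 10%)."""
    return peak_hour_volume / k_factor

# The report's 1,500 and 1,750 vph thresholds expressed as daily volumes
caution = daily_volume(1500)  # roughly 15,000 vpd
limit = daily_volume(1750)    # roughly 17,500 vpd
```

A corridor with a lower k-factor (flatter peaking) would tolerate a higher daily volume for the same peak-hour threshold, which is one reason the report recommends corridor-specific analysis.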
Abstract:
In today's competitive markets, the importance of good scheduling strategies in manufacturing companies leads to the need to develop efficient methods for solving complex scheduling problems. In this paper, we study two production scheduling problems with sequence-dependent setup times. Setup times are one of the most common complications in scheduling problems, and are usually associated with cleaning operations and changing tools and shapes in machines. The first problem considered is single-machine scheduling with release dates, sequence-dependent setup times and delivery times, where the performance measure is the maximum lateness. The second problem is a job-shop scheduling problem with sequence-dependent setup times where the objective is to minimize the makespan. We present several priority dispatching rules for both problems, followed by a study of their performance. Finally, conclusions and directions for future research are presented.
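A priority dispatching rule builds the schedule greedily: whenever the machine is free, it picks the next job from those already released according to a priority index. A minimal sketch for the single-machine variant, using largest-delivery-time-first (an EDD-like rule chosen here for illustration; the paper's specific rules may differ, and the job data is made up):

```python
def dispatch_by_delivery(jobs, setup):
    """Greedy dispatching for a single machine with release dates,
    sequence-dependent setups and delivery times.
    jobs: {name: (release, processing, delivery)};
    setup[i][j]: setup time incurred when job j follows job i."""
    t, last, sequence, obj = 0, None, [], float("-inf")
    pending = dict(jobs)
    while pending:
        released = [j for j, (r, p, q) in pending.items() if r <= t]
        if not released:  # idle until the next release
            t = min(r for r, p, q in pending.values())
            continue
        # priority rule: released job with the largest delivery time first
        j = max(released, key=lambda name: pending[name][2])
        r, p, q = pending.pop(j)
        t += (setup[last][j] if last is not None else 0) + p
        obj = max(obj, t + q)  # completion time plus delivery time
        sequence.append(j)
        last = j
    return sequence, obj

jobs = {"A": (0, 3, 5), "B": (0, 2, 10)}   # (release, processing, delivery)
setups = {"A": {"B": 1}, "B": {"A": 1}}
seq, lmax = dispatch_by_delivery(jobs, setups)
```

Minimizing the maximum of completion plus delivery time is equivalent to minimizing maximum lateness; the greedy rule gives a fast, though not optimal, baseline of the kind such studies compare.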
Abstract:
The paper develops a method to solve higher-dimensional stochastic control problems in continuous time. A finite-difference-type approximation scheme is used on a coarse grid of low-discrepancy points, while the value function at intermediate points is obtained by regression. The stability properties of the method are discussed, and applications are given to test problems of up to 10 dimensions. Accurate solutions to these problems can be obtained on a personal computer.
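The regression step (recovering the value function at points off the coarse grid from the values computed on it) can be sketched with a simple least-squares fit. One dimension and a linear basis are used here purely for illustration; the paper's scheme works on low-discrepancy point sets in up to 10 dimensions, and the grid values below are made up:

```python
def fit_line(xs, ys):
    """Least-squares line a + b*x through grid values (xs, ys); evaluating
    it off-grid approximates the value function at intermediate points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Value-function samples on a coarse grid, queried at an intermediate point
a, b = fit_line([0.0, 0.5, 1.0], [1.0, 2.0, 3.0])
v_mid = a + b * 0.25
```

In higher dimensions the same idea applies with a multivariate basis; the regression smooths over the coarse grid, which is what keeps the memory and run-time cost low enough for a personal computer.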