999 results for Recursive analysis
Abstract:
Traditionally, it is assumed that the population size of cities in a country follows a Pareto distribution. This assumption is typically supported by finding evidence of Zipf's Law. Recent studies question this finding, highlighting that, while the Pareto distribution may fit reasonably well when the data is truncated at the upper tail, i.e. for the largest cities of a country, the log-normal distribution may apply when all cities are considered. Moreover, conclusions may be sensitive to the choice of a particular truncation threshold, a yet overlooked issue in the literature. In this paper, then, we reassess the city size distribution in relation to its sensitivity to the choice of truncation point. In particular, we look at US Census data and apply a recursive-truncation approach to estimate Zipf's Law, together with a non-parametric alternative test, considering each possible truncation point of the distribution of all cities. The results confirm the sensitivity of the estimates to the truncation point. Moreover, repeating the analysis over simulated data confirms the difficulty of distinguishing a Pareto tail from the tail of a log-normal and, in turn, of identifying the city size distribution as a false or a weak Pareto law.
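As a concrete illustration of the recursive-truncation idea, the sketch below re-estimates the Zipf exponent at every upper-tail truncation point using a plain rank-size OLS regression. This is a minimal sketch under assumed choices: the estimator and the min_tail cutoff are illustrative, not the paper's exact specification.

```python
import numpy as np

def zipf_exponent(sizes):
    """OLS fit of log(rank) on log(size); Zipf's Law predicts an exponent near 1."""
    sizes = np.sort(np.asarray(sizes, dtype=float))[::-1]  # largest city has rank 1
    ranks = np.arange(1, len(sizes) + 1)
    slope, _ = np.polyfit(np.log(sizes), np.log(ranks), 1)
    return -slope

def recursive_truncation(sizes, min_tail=50):
    """Re-estimate the exponent for every possible upper-tail truncation point."""
    ordered = np.sort(np.asarray(sizes, dtype=float))[::-1]
    return {k: zipf_exponent(ordered[:k]) for k in range(min_tail, len(ordered) + 1)}
```

Plotting the returned estimates against the truncation point k shows directly how sensitive the exponent is to where the tail is cut.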
Abstract:
This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we impose the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and fewer explicit technological options, likely affect the results. Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented.
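The parenthetical efficiency condition (the CO2 price rising at the interest rate) is a Hotelling-style exponential path, sketched below with invented numbers; neither the initial price nor the rate is EPPA output.

```python
# Efficient abatement over time implies the CO2 price grows at the interest rate.
p0, r = 25.0, 0.04                          # hypothetical $/tCO2 and interest rate
co2_price = [p0 * (1 + r) ** t for t in range(0, 50, 10)]
print([round(p, 2) for p in co2_price])     # ~[25.0, 37.01, 54.78, 81.08, 120.03]
```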
Abstract:
Leakage detection is an important issue in many chemical sensing applications. Leakage detection by thresholds suffers from important drawbacks when sensors have serious drifts or are affected by cross-sensitivities. Here we present an adaptive method based on Dynamic Principal Component Analysis that models the relationships between the sensors in the array. In normal conditions, a certain variance distribution characterizes the sensor signals. However, in the presence of a new source of variance the PCA decomposition changes drastically. To prevent the influence of sensor drifts, the model is adaptive and is calculated recursively with minimum computational effort. The behavior of this technique is studied with synthetic signals and with real signals arising from oil vapor leakages in an air compressor. The results clearly demonstrate the efficiency of the proposed method.
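A compact sketch of how such a drift-adaptive monitor can be organized, assuming an exponentially weighted recursive covariance update and a Q-statistic (squared residual off the principal subspace) as the alarm signal; the forgetting factor and component count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

class RecursivePCA:
    """Exponentially weighted PCA model of the inter-sensor correlations."""

    def __init__(self, n_sensors, n_components=2, forget=0.99):
        self.mean = np.zeros(n_sensors)
        self.cov = np.eye(n_sensors)
        self.k, self.lam = n_components, forget

    def update(self, x):
        # Recursive mean/covariance update discounts old samples, so slow
        # sensor drift is absorbed into the model instead of raising alarms.
        self.mean = self.lam * self.mean + (1 - self.lam) * x
        d = x - self.mean
        self.cov = self.lam * self.cov + (1 - self.lam) * np.outer(d, d)

    def q_statistic(self, x):
        # Squared residual after projecting onto the leading eigenvectors;
        # a sustained jump signals a new source of variance, e.g. a leak.
        _, vecs = np.linalg.eigh(self.cov)      # eigenvalues in ascending order
        P = vecs[:, -self.k:]                   # retained principal directions
        d = x - self.mean
        resid = d - P @ (P.T @ d)
        return float(resid @ resid)
```

In use, update would run on every new sensor reading, while a threshold on q_statistic flags candidate leaks.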
Abstract:
PURPOSE: The European Organisation for Research and Treatment of Cancer and National Cancer Institute of Canada trial on temozolomide (TMZ) and radiotherapy (RT) in glioblastoma (GBM) has demonstrated that the combination of TMZ and RT conferred a significant and meaningful survival advantage compared with RT alone. We evaluated in this trial whether the recursive partitioning analysis (RPA) retains its overall prognostic value, and what the benefit of the combined modality is in each RPA class. PATIENTS AND METHODS: Five hundred seventy-three patients with newly diagnosed GBM were randomly assigned to standard postoperative RT or to the same RT with concomitant TMZ followed by adjuvant TMZ. The primary end point was overall survival. The European Organisation for Research and Treatment of Cancer RPA used here accounts for age, WHO performance status, extent of surgery, and the Mini-Mental Status Examination. RESULTS: Overall survival differed significantly among RPA classes III, IV, and V, with median survival times of 17, 15, and 10 months, respectively, and 2-year survival rates of 32%, 19%, and 11%, respectively (P < .0001). Survival with combined TMZ/RT was higher in RPA class III, with a median survival time of 21 months and a 43% 2-year survival rate, versus 15 months and 20% for RT alone (P = .006). In RPA class IV, the survival advantage remained significant, with median survival times of 16 v 13 months, respectively, and 2-year survival rates of 28% v 11%, respectively (P = .0001). In RPA class V, however, the survival advantage of RT/TMZ was of borderline significance (P = .054). CONCLUSION: RPA retains its prognostic significance overall, as well as in patients receiving RT with or without TMZ for newly diagnosed GBM, particularly in classes III and IV.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Recently, vision-based advanced driver-assistance systems (ADAS) have received renewed interest as a means to enhance driving safety. In particular, due to their high performance–cost ratio, mono-camera systems are emerging as the main focus of this field of work. In this paper we present a novel on-board road modeling and vehicle detection system, developed as part of the European I-WAY project. The system relies on a robust estimation of the perspective of the scene, which adapts to the dynamics of the vehicle and generates a stabilized rectified image of the road plane. This rectified plane is used by a recursive Bayesian classifier, which classifies pixels as belonging to different classes corresponding to the elements of interest in the scenario. This stage works as an intermediate layer that isolates subsequent modules, since it absorbs the inherent variability of the scene. The system has been tested on-road in different scenarios, including varied illumination and adverse weather conditions, and the results have proved remarkable even for such complex scenarios.
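The recursive Bayesian stage can be pictured as a per-pixel posterior update in which the previous frame's belief serves as the prior. A minimal sketch, with placeholder classes and an assumed likelihood model (the paper's actual features come from the rectified road-plane image):

```python
import numpy as np

def bayes_update(posterior_prev, likelihood):
    """One recursive Bayesian step, applied independently at every pixel.

    Both arrays have shape (H, W, n_classes); the classes (e.g. pavement,
    lane marking, vehicle, background) are illustrative placeholders.
    """
    post = likelihood * posterior_prev               # last posterior acts as prior
    return post / post.sum(axis=-1, keepdims=True)   # renormalize per pixel
```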
Abstract:
Precise modeling of the program heap is fundamental for understanding the behavior of a program, and is thus of significant interest for many optimization applications. One of the fundamental properties of the heap that can be used in a range of optimization techniques is the sharing relationship between the elements in an array or collection. If an analysis can determine that the memory locations pointed to by different entries of an array (or collection) are disjoint, then in many cases loops that traverse the array can be vectorized or transformed into a thread-parallel version. This paper introduces several novel sharing properties over the concrete heap and corresponding abstractions to represent them. In conjunction with an existing shape analysis technique, these abstractions allow us to precisely resolve the sharing relations in a wide range of heap structures (arrays, collections, recursive data structures, composite heap structures) in a computationally efficient manner. The effectiveness of the approach is evaluated on a set of challenge problems from the JOlden and SPECjvm98 suites. Sharing information obtained from the analysis is used to achieve substantial thread-level parallel speedups.
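The disjointness property that enables parallelization can be illustrated dynamically (a sketch only; the paper's analysis establishes it statically over abstract heaps). The reachable helper is a hypothetical stand-in for points-to information:

```python
def entries_disjoint(array, reachable):
    """True if no heap object is reachable from two different entries.

    reachable(entry) is assumed to return the set of ids of heap objects
    reachable from that entry.
    """
    seen = set()
    for entry in array:
        objs = reachable(entry)
        if seen & objs:
            return False      # sharing detected: parallel traversal is unsafe
        seen |= objs
    return True
```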
Abstract:
The aim of this study was to evaluate the sustainability of farm irrigation systems in the Cébalat district in northern Tunisia. It addressed the challenging topic of sustainable agriculture through a bio-economic approach linking a biophysical model to an economic optimisation model. A crop growth simulation model (CropSyst) was used to build a database to determine the relationships between agricultural practices, crop yields and environmental effects (salt accumulation in soil and leaching of nitrates) in a context of high climatic variability. The database was then fed into a recursive stochastic model set over a 10-year plan, which allowed us to analyse the effects of cropping patterns on farm income, salt accumulation and nitrate leaching. We assumed that the long-term sustainability of soil productivity might conflict with farm profitability in the short term. Assuming a discount rate of 10% (the base scenario), the model closely reproduced the current system and predicted the degradation of soil quality due to long-term salt accumulation. The results showed more salt accumulation in the soil for the base scenario than for the alternative scenario (discount rate of 0%), because a higher quantity of water per hectare was applied in the alternative scenario than in the base scenario. The results also showed that nitrogen leaching is very low for both discount rates and all climate scenarios. In conclusion, the results show that the difference in farm income between the alternative and base scenarios increases over time, reaching 45% after 10 years.
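Why the discount rate drives the salt result: income losses from salinization arriving years from now weigh far less at 10% than at 0%, so the base scenario has a weaker incentive to apply extra leaching water today. A one-line worked example with invented numbers:

```python
# Present value of a hypothetical 1000 (per ha) income loss in year 10.
loss_year10 = 1000.0
for r in (0.10, 0.0):
    print(f"r={r:.0%}: present value = {loss_year10 / (1 + r) ** 10:.1f}")
# r=10%: present value = 385.5   r=0%: present value = 1000.0
```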
Abstract:
We derive an easy-to-compute approximate bound for the range of step-sizes for which the constant-modulus algorithm (CMA) will remain stable if initialized close to a minimum of the CM cost function. Our model highlights the influence of the signal constellation used in the transmission system: for smaller variation in the modulus of the transmitted symbols, the algorithm will be more robust, and the steady-state misadjustment will be smaller. The theoretical results are validated through several simulations, for long and short filters and channels.
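For reference, the recursion whose stability the bound describes is the standard CMA 2-2 update, sketched below; the filter length, step size, and center-spike initialization are illustrative, and R2 is the constellation's dispersion constant (1 for a constant-modulus alphabet such as QPSK).

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    """Constant-modulus algorithm (CMA 2-2) run over received samples x."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                     # center-spike initialization
    y_out = np.empty(len(x) - n_taps, dtype=complex)
    for n in range(len(y_out)):
        xn = x[n:n + n_taps][::-1]           # regressor, most recent sample first
        y = np.vdot(w, xn)                   # equalizer output y = w^H x
        e = y * (R2 - abs(y) ** 2)           # constant-modulus error
        w = w + mu * np.conj(e) * xn         # stochastic-gradient update
        y_out[n] = y
    return y_out, w
```

The bound in the paper concerns how large mu may be before this recursion loses local stability; the smaller the modulus variation of the constellation, the larger the admissible step-size range.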
Abstract:
We develop a forward-looking version of the recursive dynamic MIT Emissions Prediction and Policy Analysis (EPPA) model, and apply it to examine the economic implications of proposals in the US Congress to limit greenhouse gas (GHG) emissions. We find that shocks in the consumption path are smoothed out in the forward-looking model and that the lifetime welfare cost of GHG policy is lower than in the recursive model, since the forward-looking model can fully optimize over time. The forward-looking model allows us to explore issues for which it is uniquely well suited, including revenue-recycling and early action crediting. We find capital tax recycling to be more welfare-cost reducing than labor tax recycling because of its long-term effect on economic growth. Also, there are substantial incentives for early action credits; however, when spread over the full horizon of the policy they do not have a substantial effect on lifetime welfare costs.
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
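The branch-and-prune pattern named above can be sketched generically (schematic only, not the paper's NoC interference model): enumerate interference scenarios recursively, and cut any branch whose optimistic bound cannot exceed the worst case found so far.

```python
def branch_and_prune(remaining, cost, bound, partial=(), best=0.0):
    """Schematic worst-case search. cost() evaluates a complete scenario;
    bound() gives an optimistic upper bound for a partial one. Both are
    hypothetical callables standing in for the traversal-time model."""
    if not remaining:
        return max(best, cost(partial))       # leaf: exact scenario cost
    if bound(partial, remaining) <= best:     # cannot beat the current worst
        return best                           # case, so prune this branch
    head, *tail = remaining
    best = branch_and_prune(tail, cost, bound, partial + (head,), best)  # include head
    best = branch_and_prune(tail, cost, bound, partial, best)            # exclude head
    return best
```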
Abstract:
Dissertation submitted for the degree of Master in Electrical and Computer Engineering.