899 results for Multi-scale Fractal Dimension
Abstract:
Purpose – Multinationals have always needed an operating model that works – an effective plan for executing their most important activities at the right levels of their organization, whether globally, regionally or locally. The choices involved in these decisions have never been obvious, since international firms have consistently faced trade-offs between tailoring approaches for diverse local markets and leveraging their global scale. This paper seeks a more in-depth understanding of how successful firms manage the global-local trade-off in a multipolar world.
Design/methodology/approach – This paper utilizes a case study approach based on in-depth senior executive interviews at several telecommunications companies including Tata Communications. The interviews probed the operating models of the companies we studied, focusing on their approaches to organization structure, management processes, management technologies (including information technology (IT)) and people/talent.
Findings – Successful companies balance global-local trade-offs by taking a flexible and tailored approach toward their operating-model decisions. The paper finds that successful companies, including Tata Communications, which is profiled in depth, are breaking up the global-local conundrum into a set of more manageable strategic problems – what the authors call “pressure points” – which they identify by assessing their most important activities and capabilities and determining the global and local challenges associated with them. They then design a different operating model solution for each pressure point, and repeat this process as new strategic developments emerge. By doing so they not only enhance their agility, but they also continually calibrate that crucial balance between global efficiency and local responsiveness.
Originality/value – This paper takes a unique approach to operating model design, finding that an operating model is better viewed as several distinct solutions to specific “pressure points” rather than a single and inflexible model that addresses all challenges equally. Now more than ever, developing the right operating model is at the top of multinational executives' priorities, and an area of increasing concern; the international business arena has changed drastically, requiring thoughtfulness and flexibility instead of standard formulas for operating internationally. Old adages like “think global and act local” no longer provide the universal guidance they once seemed to.
Abstract:
A statistical-dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO-CLM model. Future projections are computed for two time periods (2021–2060 and 2061–2100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. Except for the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decrease for the summer season, resulting in a strong increase of the intra-annual variability for most of Europe. The latter is particularly probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061–2100 compared to 2021–2060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter-annual variability of Eout for Central Europe, the future projections vary strongly between individual models and also between future periods and scenarios within single models. This study showed, for an ensemble of 22 CMIP5 models, that changes in the wind energy potentials over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi-model ensembles are needed to provide a better quantification and understanding of the future changes.
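The central quantity above, the wind energy output (Eout) of a benchmark turbine, is obtained by passing downscaled wind speeds through a turbine power curve. A minimal Python sketch of that generic step is given below; the cut-in, rated and cut-out speeds and the rated power are illustrative placeholders, not the study's benchmark turbine or its statistical-dynamical downscaling chain.

import numpy as np

def wind_energy_output(wind_speed, v_cutin=3.5, v_rated=13.0, v_cutout=25.0, p_rated=2.5e6):
    """Idealized turbine power curve: cubic ramp between cut-in and rated speed,
    constant rated power up to cut-out, zero otherwise. Returns power in W."""
    v = np.asarray(wind_speed, dtype=float)
    power = np.zeros_like(v)
    ramp = (v >= v_cutin) & (v < v_rated)
    power[ramp] = p_rated * (v[ramp]**3 - v_cutin**3) / (v_rated**3 - v_cutin**3)
    power[(v >= v_rated) & (v <= v_cutout)] = p_rated
    return power

# Example: hourly wind speeds for one year -> annual energy output (MWh)
rng = np.random.default_rng(0)
hourly_v = rng.weibull(2.0, size=8760) * 8.0          # synthetic Weibull-like wind climate
annual_eout_mwh = wind_energy_output(hourly_v).sum() / 1e6
print(f"Annual Eout: {annual_eout_mwh:.0f} MWh")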
Abstract:
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using the meteorological output from these to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than has previously been possible. Furthermore, developments in, for example, modelling capabilities, data and resources in recent years have made it possible to produce global scale flood forecasting systems. In this paper, the current state of operational large scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium-range and disseminate forecasts and, in some cases, early warning products, in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement of developing multi-hazard early warning systems for disaster risk reduction.
Abstract:
Decadal predictions on timescales from one year to one decade are gaining importance since this time frame falls within the planning horizon of politics, economy and society. The present study examines the decadal predictability of regional wind speed and wind energy potentials in three generations of the MiKlip (‘Mittelfristige Klimaprognosen’) decadal prediction system. The system is based on the global Max-Planck-Institute Earth System Model (MPI-ESM), and the three generations differ primarily in the ocean initialisation. Ensembles of uninitialised historical and yearly initialised hindcast experiments are used to assess the forecast skill for 10 m wind speeds and wind energy output (Eout) over Central Europe with lead times from one year to one decade. With this aim, a statistical-dynamical downscaling (SDD) approach is used for the regionalisation. Its added value is evaluated by comparing skill scores for MPI-ESM large-scale wind speeds and SDD-simulated regional wind speeds. All three MPI-ESM ensemble generations show some forecast skill for annual mean wind speed and Eout over Central Europe on yearly and multi-yearly time scales. This forecast skill is mostly limited to the first years after initialisation. Differences between the three ensemble generations are generally small. The regionalisation preserves and sometimes increases the forecast skill of the global runs, but results depend on lead time and ensemble generation. Moreover, regionalisation often improves the ensemble spread. Seasonal Eout skill is generally lower than for annual means. Skill scores are lowest during summer and persist longest in autumn. A large-scale westerly weather type with strong pressure gradients over Central Europe is identified as a potential source of the skill for wind energy potentials, showing a similar forecast skill and a high correlation with Eout anomalies. These results are promising towards the establishment of a decadal prediction system for wind energy applications over Central Europe.
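Forecast skill of initialised hindcasts is typically quantified against an uninitialised reference with a skill score. The Python sketch below uses the mean squared error skill score (MSESS) on synthetic annual-mean anomalies as an illustration; it is not taken from the MiKlip evaluation itself, and the study's exact metrics may differ.

import numpy as np

def msess(forecast, reference, obs):
    """Mean squared error skill score: 1 - MSE(forecast)/MSE(reference).
    Positive values mean the initialised forecast beats the reference."""
    mse_f = np.mean((forecast - obs) ** 2)
    mse_r = np.mean((reference - obs) ** 2)
    return 1.0 - mse_f / mse_r

# Example with synthetic annual-mean wind speed anomalies (m/s) over Central Europe
rng = np.random.default_rng(1)
obs = rng.normal(0.0, 0.4, size=30)                   # 30 verification years
uninitialised = rng.normal(0.0, 0.4, size=30)         # "historical" reference run
initialised = obs + rng.normal(0.0, 0.25, size=30)    # hindcasts partly tracking obs
print(f"MSESS (lead year 1): {msess(initialised, uninitialised, obs):.2f}")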
Abstract:
Precipitation is expected to respond differently to various drivers of anthropogenic climate change. We present the first results from the Precipitation Driver and Response Model Intercomparison Project (PDRMIP), where nine global climate models have perturbed CO2, CH4, black carbon, sulfate, and solar insolation. We divide the resulting changes to global mean and regional precipitation into fast responses that scale with changes in atmospheric absorption and slow responses scaling with surface temperature change. While the overall features are broadly similar between models, we find significant regional intermodel variability, especially over land. Black carbon stands out as a component that may cause significant model diversity in predicted precipitation change. Processes linked to atmospheric absorption are less consistently modeled than those linked to top-of-atmosphere radiative forcing. We identify a number of land regions where the model ensemble consistently predicts that fast precipitation responses to climate perturbations dominate over the slow, temperature-driven responses.
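The fast/slow split described above can be illustrated with a simple regression of precipitation change on surface temperature change: the slope approximates the slow, temperature-driven response and the intercept the fast response. The Python sketch below uses synthetic numbers and is only a schematic of the idea; PDRMIP's actual protocol (for example, fixed-SST versus coupled experiments) is more involved.

import numpy as np

# Regress global-mean precipitation change on global-mean temperature change;
# slope ~ slow (temperature-driven) response, intercept ~ fast (absorption-driven)
# response. All numbers below are synthetic and purely illustrative.
rng = np.random.default_rng(2)
years = 100
dT = np.linspace(0.0, 3.0, years) + rng.normal(0, 0.1, years)     # K
true_fast, hydro_sens = -0.4, 2.7                                  # % and %/K (illustrative)
dP = true_fast + hydro_sens * dT + rng.normal(0, 0.3, years)       # % change

slope, intercept = np.polyfit(dT, dP, 1)
print(f"Slow response: {slope:.2f} %/K, fast response: {intercept:.2f} %")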
Abstract:
This paper introduces the special issue of Climatic Change on the QUEST-GSI project, a global-scale multi-sectoral assessment of the impacts of climate change. The project used multiple climate models to characterise plausible climate futures with consistent baseline climate and socio-economic data and consistent assumptions, together with a suite of global-scale sectoral impacts models. It estimated impacts across sectors under specific SRES emissions scenarios, and also constructed functions relating impact to change in global mean surface temperature. This paper summarises the objectives of the project and its overall methodology, outlines how the project approach has been used in subsequent policy-relevant assessments of future climate change under different emissions futures, and summarises the general lessons learnt in the project about model validation and the presentation of multi-sector, multi-region impact assessments and their associated uncertainties to different audiences.
Abstract:
Running hydrodynamic models interactively allows both visual exploration and change of model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example respond to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, which are used mainly for demonstration and educational purposes. It would be useful if interactive modeling would also work for models typically used in consultancy projects involving large scale simulations. This results in a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, implementation of an appropriate API for control and access to the internal state). While model parallelisation is increasingly addressed by the environmental modeling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains such as 3D animation, gaming, virtual globes (Autodesk 3ds Max, Second Life, Google Earth) that also focus on efficient interaction with 3D environments? In these domains high efficiency is usually achieved by the use of computer graphics algorithms such as surface simplification depending on current view, distance to objects, and efficient caching of the aggregated representation of object meshes. We investigate how these algorithms can be re-used in the context of interactive hydrodynamic modeling without significant changes to the model code and allowing model operation on both multi-core CPU personal computers and high-performance computer clusters.
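The computer-graphics techniques mentioned above, such as view-dependent surface simplification, boil down to choosing a coarser pre-aggregated mesh for regions far from the camera. The following Python sketch shows a distance-based level-of-detail selection of that kind; the class, thresholds and cell counts are hypothetical and not part of the interactive modelling environment described in the abstract.

from dataclasses import dataclass

@dataclass
class MeshLOD:
    """Pre-aggregated representations of one model region at decreasing resolution."""
    cell_counts: tuple          # e.g. (1_000_000, 100_000, 10_000)
    switch_distances: tuple     # camera distances (m) at which to drop a level

def select_lod(lod: MeshLOD, camera_distance: float) -> int:
    """Pick the finest level whose switch distance is still beyond the camera,
    so nearby regions render at full resolution and distant ones are simplified."""
    for level, d in enumerate(lod.switch_distances):
        if camera_distance < d:
            return level
    return len(lod.switch_distances)  # farthest: coarsest representation

region = MeshLOD(cell_counts=(1_000_000, 100_000, 10_000),
                 switch_distances=(500.0, 5_000.0))
for dist in (100.0, 2_000.0, 50_000.0):
    lvl = select_lod(region, dist)
    print(f"camera at {dist:>8.0f} m -> LOD {lvl} ({region.cell_counts[lvl]:,} cells)")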
Abstract:
This paper describes the formulation of a Multi-objective Pipe Smoothing Genetic Algorithm (MOPSGA) and its application to the least cost water distribution network design problem. Evolutionary Algorithms have been widely utilised for the optimisation of both theoretical and real-world non-linear optimisation problems, including water system design and maintenance problems. In this work we present a pipe smoothing based approach to the creation and mutation of chromosomes which utilises engineering expertise with a view to increasing the performance of the algorithm whilst promoting engineering feasibility within the population of solutions. MOPSGA is based upon the standard Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and incorporates a modified population initialiser and mutation operator which directly target elements of a network with the aim of increasing network smoothness (in terms of progression from one diameter to the next) using network element awareness and an elementary heuristic. The pipe smoothing heuristic used in this algorithm is based upon a fundamental principle employed by water system engineers when designing water distribution pipe networks: the diameter of any pipe is never greater than the sum of the diameters of the pipes directly upstream, resulting in a transition from large to small diameters from the source to the extremities of the network. MOPSGA is assessed on a number of water distribution network benchmarks from the literature, including some real-world, large-scale systems. The performance of MOPSGA is directly compared to that of NSGA-II with regard to solution quality, engineering feasibility (network smoothness) and computational efficiency. MOPSGA is shown to promote both engineering and hydraulic feasibility whilst attaining good infrastructure costs compared to NSGA-II.
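The pipe smoothing principle stated above can be expressed as a simple feasibility check: flag any pipe whose diameter exceeds the summed diameters of the pipes directly upstream of it. The Python sketch below illustrates only this check on a toy network; it is not the MOPSGA initialiser or mutation operator itself, and the identifiers and diameters are hypothetical.

def smoothness_violations(diameters, upstream):
    """Return pipes whose diameter exceeds the summed diameters of their
    directly upstream pipes - the condition the pipe-smoothing heuristic
    is designed to avoid. Source pipes (no upstream) are ignored.

    diameters: {pipe_id: diameter_mm}, upstream: {pipe_id: [upstream pipe ids]}.
    """
    violations = []
    for pipe, ups in upstream.items():
        if ups and diameters[pipe] > sum(diameters[u] for u in ups):
            violations.append(pipe)
    return violations

# Toy branched network: P1 feeds P2 and P3; P3 feeds P4 (values are hypothetical)
diameters = {"P1": 300, "P2": 200, "P3": 150, "P4": 200}
upstream = {"P1": [], "P2": ["P1"], "P3": ["P1"], "P4": ["P3"]}
print(smoothness_violations(diameters, upstream))   # ['P4']: 200 mm > 150 mm upstream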
Abstract:
Although it has been proposed that the retinal vasculature has a fractal structure, no standardization of the segmentation method or of the method for calculating fractal dimensions has been established. This study aimed to determine whether the estimation of the fractal dimensions of the retinal vasculature depends on the vascular segmentation method and on the dimension calculation method.
Methods: Ten retinal photographs were segmented to extract their vascular trees by four computational methods ("multithreshold", "scale-space", "pixel classification" and "ridge based detection"). Their "information", "mass-radius" and "box-counting" fractal dimensions were then calculated and compared with the dimensions of the same vascular trees obtained by manual segmentation (gold standard).
Results: The mean fractal dimensions varied across the groups of different segmentation methods: from 1.39 to 1.47 for the box-counting dimension, from 1.47 to 1.52 for the information dimension and from 1.48 to 1.57 for the mass-radius dimension. The use of different computational methods of vascular segmentation, as well as of different dimension calculation methods, introduced statistically significant differences in the fractal dimension values of the vascular trees.
Conclusion: The estimation of the fractal dimensions of the retinal vasculature depended both on the vascular segmentation methods and on the dimension calculation methods used.
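Of the three dimensions compared in the study, the box-counting dimension can be illustrated compactly: count the boxes of size s that contain vessel pixels and fit log N(s) against log s. The Python sketch below applies this to a synthetic binary image; it is not the authors' segmentation or measurement pipeline.

import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting fractal dimension of a binary image:
    count occupied boxes N(s) at each box size s and fit log N(s) ~ -D log s."""
    counts = []
    for s in box_sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((blocks.any(axis=(1, 3))).sum())
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Synthetic "vessel tree": a few random-walk branches on a 256x256 grid
rng = np.random.default_rng(3)
mask = np.zeros((256, 256), dtype=bool)
for _ in range(20):
    y, x = 128, 128
    for _ in range(400):
        mask[y % 256, x % 256] = True
        y += rng.integers(-1, 2)
        x += rng.integers(-1, 2)
print(f"Estimated box-counting dimension: {box_counting_dimension(mask):.2f}")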
Abstract:
Microstrip antennas feature prominently in current research due to the several advantages they offer. Fractal geometry, combined with the good performance and convenience of planar structures, is an excellent basis for the design and analysis of ever smaller, multi-resonant and broadband structures. This geometry has been applied to microstrip patch antennas to reduce their size and bring out their multi-band behavior. Compared with conventional microstrip antennas, quasi-fractal patch antennas have lower resonant frequencies, enabling the manufacture of more compact antennas. The aim of this work is the design of quasi-fractal patch antennas using Koch and Minkowski fractal curves applied to the radiating and non-radiating edges of a conventional rectangular patch fed by an inset microstrip line, initially designed for a frequency of 2.45 GHz. The inset-feed technique is investigated for the impedance matching of the fractal antennas, which are fed through microstrip lines. The efficiency of this technique is investigated experimentally and compared with simulations carried out in the commercial software Ansoft Designer, which provides precise analysis of the electromagnetic behavior of the antennas by the method of moments, and with the proposed neural model. The dissertation also reviews the literature on microstrip antenna theory and on fractal geometry, with emphasis on its various forms, techniques for generating fractals and its applicability. A study on artificial neural networks is also presented, covering the types and architectures of the networks used and their characteristics, as well as the training algorithms used for their implementation. The parameter-update equations for the networks used in this study were derived from the gradient method. Research is also carried out with emphasis on the miniaturization of the proposed new structures, showing how an antenna designed with fractal contours can miniaturize a conventional rectangular patch antenna. The study also includes the modeling, through artificial neural networks, of various electromagnetic parameters of the quasi-fractal antennas. The presented results demonstrate the excellent modeling capacity of the neural techniques for microstrip antennas; all algorithms used in this work to obtain the proposed models were implemented in the commercial simulation software Matlab 7. In order to validate the results, several antenna prototypes were built, measured on a vector network analyzer and simulated in software for comparison.
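The Koch contour applied to the patch edges can be generated with a short recursive construction: each straight segment is replaced by four segments of one third the length, adding a triangular bump. The Python sketch below produces such a contour for a hypothetical 37 mm edge; it is purely geometric and does not replace the full-wave analysis performed in Ansoft Designer.

import numpy as np

def koch_curve(p0, p1, iterations=2):
    """Return the points of a Koch curve replacing the straight segment p0->p1.
    Each iteration splits every segment into four, adding a triangular bump -
    the kind of contour applied to a patch edge to lengthen the current path."""
    points = np.array([p0, p1], dtype=float)
    for _ in range(iterations):
        new_pts = [points[0]]
        for a, b in zip(points[:-1], points[1:]):
            d = b - a
            # 60-degree rotation of the middle third to form the bump
            rot = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])
            peak = a + d / 3 + rot @ (d / 3)
            new_pts += [a + d / 3, peak, a + 2 * d / 3, b]
        points = np.array(new_pts)
    return points

# One edge of a 2.45 GHz rectangular patch (length in mm, purely illustrative)
edge = koch_curve((0.0, 0.0), (37.0, 0.0), iterations=2)
path_length = np.sum(np.linalg.norm(np.diff(edge, axis=0), axis=1))
print(f"{len(edge)} points; edge path lengthened by factor {path_length / 37.0:.2f}")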
Abstract:
Fiber reinforced polymer composites have been widely applied in the aeronautical field. However, composite processing that uses unlocked molds should be avoided in view of the tight requirements and also due to possible environmental contamination. To produce high performance structural frames meeting aeronautical reproducibility and low cost criteria, the Brazilian industry has shown interest in investigating the resin transfer molding (RTM) process, a closed-mold pressure injection system which allows faster gel and cure times. Due to the anisotropy and non-homogeneity of fibrous composites, their fatigue behavior is a complex phenomenon, quite different from that of metallic materials, and crucial to investigate for aeronautical applications. Fatigue sub-scale specimens of intermediate modulus carbon fiber non-crimp multi-axial reinforcement and a mono-component epoxy system were produced according to ASTM D 3039. Axial fatigue tests were carried out according to ASTM D 3479, with a sinusoidal load at a frequency of 10 Hz and a load ratio R = 0.1. A high fatigue life interval was observed for the NCF/RTM6 composites. Weibull statistical analysis was applied to describe the failure probability of the materials under cyclic loads, and fracture patterns were observed by scanning electron microscopy.
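The Weibull analysis mentioned above amounts to fitting a two-parameter Weibull distribution to the cycles-to-failure data and reading off failure probabilities. The Python sketch below does this for synthetic values using SciPy; the numbers are not the specimen results reported in the study.

import numpy as np
from scipy import stats

# Fit a two-parameter Weibull distribution to cycles-to-failure data and
# evaluate the failure probability at a given life. Data are synthetic.
cycles_to_failure = np.array([1.2e5, 1.8e5, 2.1e5, 2.6e5, 3.0e5, 3.9e5, 4.4e5])

shape, loc, scale = stats.weibull_min.fit(cycles_to_failure, floc=0)  # 2-parameter fit
n = 2.0e5
p_fail = stats.weibull_min.cdf(n, shape, loc=loc, scale=scale)
print(f"Weibull shape={shape:.2f}, scale={scale:.3g} cycles")
print(f"P(failure by {n:.0e} cycles) = {p_fail:.2f}")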
Abstract:
The purpose of this study was to investigate the social-environmental implications, for the nearby communities, of the first large scale wind farm built in Brazil (2006), Parque Eólico de Rio do Fogo (PERF). The research was based on an adaptation of the DIS/BCN tool to analyze social impact and was linked to a multi-method approach. Applying the autophotography strategy, cameras were given to five children from the district of Zumbi, the nearest location to PERF, and they were asked to individually photograph the six places they liked the most and the six places they liked the least in their community. These children were then interviewed individually and collectively about the photographs. Adult locals in Zumbi, residents of the Zumbi/Rio do Fogo settlement, members of the State and Municipal governments and representatives of PERF were also interviewed with the aid of some of the pictures taken by the children and other pictures chosen to prompt comments, a strategy called the sample function. The five children presented a positive image of PERF; all of them chose to photograph it as one of the places they liked. The adult population of Zumbi presented a positive visual evaluation of PERF. A small number of the interviewees were aware of the environmental and social benefits of wind energy production. Residents did not participate in the decision-making process regarding PERF. They approved the project, especially because of the jobs provided during construction. Nowadays, PERF is something apart from their lives because it no longer provides jobs or any other interaction between the facility and the locals. Residents relate to the land, not to the facility. However, there is no evidence of rejection of PERF; it is simply seen as something neutral in their lives. The low levels of education, a traditional lack of social commitment and citizenship, and the experience accumulated by PERF's planners and builders in other countries may be contributing factors to the fact that Zumbi residents did not oppose PERF. It is clear that the country needs legislation which seriously considers the psycho-social dimension involved in the implementation of wind farms.
Abstract:
This work presents the application of a multiobjective evolutionary algorithm (MOEA) to the optimal power flow (OPF) problem. The OPF is modeled as a constrained nonlinear optimization problem, non-convex and large-scale, with continuous and discrete variables. The violated inequality constraints are treated as objective functions of the problem. This strategy allows the physical and operational restrictions to be met without compromising the quality of the solutions found. The developed MOEA is based on Pareto optimality theory and employs a diversity-preserving mechanism to overcome premature convergence of the algorithm to locally optimal solutions. Fuzzy set theory is employed to extract the best compromises from the Pareto set. Results for the IEEE-30, RTS-96 and IEEE-354 test systems are presented to validate the efficiency of the proposed model and solution technique.
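A common way to implement the Pareto-based selection and the fuzzy extraction of the best compromise is sketched below in Python: a dominance test plus linear fuzzy membership functions over the normalised objectives. The membership function and the toy objective values are assumptions for illustration; the paper's exact formulation may differ.

import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def best_compromise(front):
    """Pick the best compromise from a Pareto front using linear fuzzy
    memberships, a common choice: mu_i = (f_max - f_i) / (f_max - f_min);
    the solution with the highest normalized membership sum wins."""
    front = np.asarray(front, dtype=float)
    f_min, f_max = front.min(axis=0), front.max(axis=0)
    mu = (f_max - front) / np.where(f_max > f_min, f_max - f_min, 1.0)
    scores = mu.sum(axis=1) / mu.sum()
    return int(np.argmax(scores))

# Toy Pareto front: (generation cost, constraint-violation objective)
front = [(802.1, 0.12), (815.4, 0.05), (840.9, 0.01)]
print(dominates(np.array([800.0, 0.04]), np.array(front[0])))  # True
print("Best compromise index:", best_compromise(front))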
Abstract:
In Percolation Theory, functions such as the probability that a given site belongs to the infinite cluster, the average size of clusters, etc. are described through power laws and critical exponents. This dissertation uses a method called Finite Size Scaling to provide an estimate of those exponents. The dissertation is divided into four parts. The first briefly presents the main results of site percolation theory in d = 2 dimensions, and defines some quantities that are important for determining the critical exponents and for understanding phase transitions. The second part gives an introduction to the concept of fractals, their dimension and classification. With the basis of our study established, the third part presents scaling theory, which relates the critical exponents to the quantities described in Chapter 2. In the last part, using the Finite Size Scaling method, we determine two of the critical exponents and, based on them, use the scaling relations from the previous chapter to determine the remaining critical exponents.
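The finite-size scaling idea can be illustrated with a short simulation: at the critical occupation probability, the mass of the largest cluster on an L x L lattice grows as a power of L, and the slope of log M versus log L estimates the cluster fractal dimension (91/48 in two dimensions). The Python sketch below, which uses SciPy cluster labelling and small lattices, only illustrates the method; it is not the dissertation's calculation of its specific exponents.

import numpy as np
from scipy import ndimage

def largest_cluster_mass(L, p=0.5927, rng=None):
    """Occupy sites with probability p on an LxL lattice and return the mass
    (site count) of the largest cluster, using 4-neighbour connectivity."""
    if rng is None:
        rng = np.random.default_rng()
    occupied = rng.random((L, L)) < p
    labels, n = ndimage.label(occupied)
    if n == 0:
        return 0
    return np.bincount(labels.ravel())[1:].max()

# At p_c the largest cluster scales as M ~ L^{d_f}; fit the slope of log M vs log L.
# Small lattices and few samples give only a rough estimate.
rng = np.random.default_rng(4)
sizes = np.array([32, 64, 128, 256])
masses = [np.mean([largest_cluster_mass(L, rng=rng) for _ in range(20)]) for L in sizes]
slope, _ = np.polyfit(np.log(sizes), np.log(masses), 1)
print(f"Estimated fractal dimension d_f ~ {slope:.2f}")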
Abstract:
In this work, using the fact that in 3-3-1 models the same leptonic bilinear contributes to the masses of both charged leptons and neutrinos, we develop an effective operator mechanism to generate mass for all leptons. The effective operators have dimension five for the case of charged leptons and dimension seven for neutrinos. By adding extra scalar multiplets and imposing the discrete symmetry Z(9)xZ(2) we are able to generate realistic textures for the leptonic mixing matrix. This mechanism requires new physics at the TeV scale.
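As a schematic illustration of why the different operator dimensions produce the desired hierarchy (the specific 3-3-1 multiplet structure and vacuum expectation values are omitted here), generic dimensional analysis of the effective operators gives, after symmetry breaking,

m_\ell \sim \frac{\langle\phi\rangle^{2}}{\Lambda}, \qquad m_\nu \sim \frac{\langle\phi\rangle^{4}}{\Lambda^{3}},

so that for \langle\phi\rangle near the electroweak scale and \Lambda at the TeV scale, the neutrino masses are suppressed relative to the charged-lepton masses by a factor of order \langle\phi\rangle^{2}/\Lambda^{2}.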