949 results for Inter-region power flow
Abstract:
Li, Xing, 'Transition region, coronal heating and the fast solar wind', Astronomy and Astrophysics (2003) 406, pp. 345-356.
Abstract:
A well-known paradigm for load balancing in distributed systems is the "power of two choices," whereby an item is stored at the less loaded of two (or more) random alternative servers. We investigate the power of two choices in natural settings for distributed computing where items and servers reside in a geometric space and each item is associated with the server that is its nearest neighbor. This is in fact the backdrop for distributed hash tables such as Chord, where the geometric space is determined by clockwise distance on a one-dimensional ring. Theoretically, we consider the following load balancing problem. Suppose that servers are initially hashed uniformly at random to points in the space. Sequentially, each item then considers d candidate insertion points also chosen uniformly at random from the space, and selects the insertion point whose associated server has the least load. For the one-dimensional ring, and for Euclidean distance on the two-dimensional torus, we demonstrate that when n data items are hashed to n servers, the maximum load at any server is log log n / log d + O(1) with high probability. While our results match the well-known bounds in the standard setting in which each server is selected equiprobably, our applications do not have this feature, since the sizes of the nearest-neighbor regions around servers are non-uniform. Therefore, the novelty in our methods lies in developing appropriate tail bounds on the distribution of nearest-neighbor region sizes and in adapting previous arguments to this more general setting. In addition, we provide simulation results demonstrating the load balance that results as the system size scales into the millions.
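The stated bound is easy to probe empirically. Below is a minimal simulation sketch (not the authors' code) of power-of-d-choices placement on the one-dimensional ring, assuming Chord-style clockwise-successor assignment; `simulate`, `successor`, and the parameter values are illustrative names, not from the paper.

```python
import bisect
import math
import random

def simulate(n_servers, n_items, d, seed=0):
    """Power-of-d-choices placement on a one-dimensional ring.

    Servers are hashed uniformly at random to [0, 1); each item draws d
    candidate points and is stored at the candidate point's owning server
    (clockwise successor, as in Chord) with the least current load.
    """
    rng = random.Random(seed)
    servers = sorted(rng.random() for _ in range(n_servers))
    load = [0] * n_servers

    def successor(point):
        # Index of the first server clockwise from the point, wrapping
        # around the ring at 1.0.
        return bisect.bisect_left(servers, point) % n_servers

    for _ in range(n_items):
        candidates = [successor(rng.random()) for _ in range(d)]
        best = min(candidates, key=lambda i: load[i])  # least-loaded choice
        load[best] += 1
    return max(load)

if __name__ == "__main__":
    n, d = 1_000_000, 2
    print("max load:", simulate(n, n, d))
    print("log log n / log d ≈ %.2f" % (math.log(math.log(n)) / math.log(d)))
```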
Abstract:
Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization – two problems commonly associated with the basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach.
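As a generic illustration of the uncertainty-aware fusion idea (not the paper's algorithm), the sketch below converts a per-pixel matching-cost volume into a probability distribution and fuses two cues by inverse-variance weighting; all function names and the softmax temperature `beta` are assumptions.

```python
import numpy as np

def cost_to_distribution(cost, beta=1.0):
    """Convert a per-pixel matching-cost volume (H x W x K hypotheses) into
    per-pixel probability distributions with a softmax, then summarise each
    distribution by its mean hypothesis and its variance (uncertainty)."""
    p = np.exp(-beta * (cost - cost.min(axis=-1, keepdims=True)))
    p /= p.sum(axis=-1, keepdims=True)
    k = np.arange(cost.shape[-1], dtype=float)
    mean = (p * k).sum(axis=-1)
    var = (p * (k - mean[..., None]) ** 2).sum(axis=-1)
    return mean, var

def fuse(mean_a, var_a, mean_b, var_b, eps=1e-9):
    """Inverse-variance fusion of two uncertain estimates: the less certain
    cue contributes less to the combined estimate."""
    w_a, w_b = 1.0 / (var_a + eps), 1.0 / (var_b + eps)
    return (w_a * mean_a + w_b * mean_b) / (w_a + w_b)

# Toy usage: two cost volumes over 32 hypotheses for a 4x4 image patch.
rng = np.random.default_rng(0)
ca, cb = rng.random((4, 4, 32)), rng.random((4, 4, 32))
ma, va = cost_to_distribution(ca)
mb, vb = cost_to_distribution(cb)
print(fuse(ma, va, mb, vb).shape)  # (4, 4)
```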
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay, yet most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while power consumption is often optimised with heuristics specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or delay. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay; similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints.
A good approximation to the global optimum under the energy constraint is obtained. Uniform Cost Search (UCS) is an algorithm for traversing or searching a weighted tree or graph. We have used Uniform Cost Search to find, within the AIG network, a specific AIG node order for applying the reordering rules. After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay are achieved with minimal overhead, compared to the best known ABC results. Our approach is also applied to a number of processors with combinational and sequential components, where significant savings are achieved.
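As a rough sketch of how Simulated Annealing can drive the delay-constrained power optimisation described above (a hedged illustration, not the thesis implementation), consider the loop below; `neighbour`, `power`, `delay`, and all parameters are placeholder names for a reordering-move generator and cost estimators.

```python
import math
import random

def simulated_annealing(initial, neighbour, power, delay, delay_budget,
                        t0=1.0, cooling=0.995, steps=10_000, seed=0):
    """Minimise estimated power under a delay constraint.

    `neighbour(state, rng)` applies one reordering move to a candidate
    netlist; `power` and `delay` return cost estimates for a state."""
    rng = random.Random(seed)
    current = best = initial
    t = t0
    for _ in range(steps):
        candidate = neighbour(current, rng)
        if delay(candidate) > delay_budget:
            continue  # reject moves that violate the delay constraint
        d_cost = power(candidate) - power(current)
        # Accept improving moves outright; accept worsening moves with a
        # probability that decays as the temperature cools.
        if d_cost < 0 or rng.random() < math.exp(-d_cost / t):
            current = candidate
            if power(current) < power(best):
                best = current
        t *= cooling
    return best
```

Swapping the roles of `power` and `delay` gives the dual, power-constrained delay optimisation described in the abstract.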
Abstract:
The great demand for power-optimized devices shows promising economic potential and has drawn considerable attention in industry and research. Due to the continuously shrinking CMOS process, not only dynamic power but also static power has emerged as a major concern in power reduction. Beyond power optimization, average-case power estimation is significant for power budget allocation but also challenging in terms of time and effort. In this thesis, we introduce a methodology to support modular quantitative analysis for estimating the average power of circuits, based on two concepts: Random Bag Preserving and Linear Compositionality. It shortens simulation time while sustaining high accuracy, making power estimation of large systems more feasible. For power saving, we first take advantage of the low-power characteristics of adiabatic logic and asynchronous logic to achieve ultra-low dynamic and static power. We propose two memory cells which can run in adiabatic and non-adiabatic modes. About 90% of dynamic power can be saved in adiabatic mode when compared to other up-to-date designs, and about 90% of leakage power is saved. Secondly, a novel logic family, named Asynchronous Charge Sharing Logic (ACSL), is introduced, in which the realization of completion detection is simplified considerably. Beyond the power reduction, ACSL brings another promising feature for average power estimation, data independence, which makes power estimation effortless and meaningful for modular quantitative average-case analysis. Finally, we present a new asynchronous Arithmetic Logic Unit (ALU) with a ripple-carry adder implemented using the logically reversible/bidirectional characteristic, exhibiting ultra-low power dissipation with a sub-threshold operating point. The proposed adder is able to operate multi-functionally.
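Average-case power estimation of the kind discussed above is commonly grounded in switching-activity statistics. The toy sketch below (a generic Monte Carlo baseline, not the thesis's Random Bag Preserving method) estimates per-node switching activity of a two-gate netlist from random input vectors; the netlist and all names are invented for illustration.

```python
import random

# Toy gate-level netlist: each entry is (output, boolean function, inputs).
NETLIST = [
    ("n1", lambda a, b: a & b, ("x0", "x1")),
    ("n2", lambda a, b: a ^ b, ("n1", "x2")),
]
PRIMARY_INPUTS = ("x0", "x1", "x2")

def evaluate(inputs):
    """Evaluate the netlist for one input vector (gates in topological order)."""
    values = dict(inputs)
    for out, fn, ins in NETLIST:
        values[out] = fn(*(values[i] for i in ins))
    return values

def average_switching_activity(n_samples=10_000, seed=0):
    """Monte Carlo estimate of per-node switching activity: apply random
    consecutive input vectors and count output toggles. Average dynamic
    power then follows as roughly 0.5 * activity * C * V^2 * f per node."""
    rng = random.Random(seed)
    toggles = {out: 0 for out, _, _ in NETLIST}
    prev = evaluate({n: rng.randint(0, 1) for n in PRIMARY_INPUTS})
    for _ in range(n_samples):
        cur = evaluate({n: rng.randint(0, 1) for n in PRIMARY_INPUTS})
        for out in toggles:
            toggles[out] += prev[out] != cur[out]
        prev = cur
    return {out: t / n_samples for out, t in toggles.items()}

print(average_switching_activity())
```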
Abstract:
The sudden decrease of plasma stored energy and subsequent power deposition on the first wall of a tokamak due to edge localised modes (ELMs) is potentially detrimental to the success of a future fusion reactor. Understanding and control of ELMs is critical for the longevity of these devices and also to maximise their performance. The commonly accepted picture of ELMs posits a critical pressure gradient and current density in the plasma edge, above which coupled magnetohydrodynamic peeling-ballooning modes become unstable. Much analysis has been presented in recent years on the spatial and temporal evolution of the edge pressure gradient. However, the edge current density has typically been overlooked due to the difficulties in measuring this quantity. In this thesis, a novel method of current density recovery is presented, using the equilibrium solver CLISTE to reconstruct a high-resolution equilibrium utilising both external magnetic and internal edge kinetic data measured on the ASDEX Upgrade tokamak. The evolution of the edge current density relative to an ELM crash is presented, showing that a resistive delay in the buildup of the current density is unlikely. An uncertainty analysis shows that the edge current density can be determined with an accuracy consistent with that of the kinetic data used. A comparison with neoclassical theory demonstrates excellent agreement between the current density determined by CLISTE and the calculated profiles. Three ELM mitigation regimes are investigated: Type-II ELMs, ELMs suppressed by external magnetic perturbations, and nitrogen-seeded ELMs. In the first two cases, the current density is found to decrease as mitigation onsets, indicating a more ballooning-like plasma behaviour. In the latter case, the flux surface averaged current density can decrease while the local current density increases, providing a mechanism to suppress both the peeling and ballooning modes.
Abstract:
This thesis explores the inter-related attempts to secure the legitimation of risk and democracy with regard to Bt cotton, a genetically modified crop, in the state of Andhra Pradesh in India. The research included nine months of ethnographic fieldwork, extensive library and newspaper research, as well as university attendance in India, undertaken between June, 2010 and March, 2011. This comparative study (involving organic, NPM and Bt cotton cultivation) was conducted in three villages in Telangana, a region which was granted secession from Andhra Pradesh in July, 2013, and in Hyderabad, the state capital. Andhra Pradesh is renowned for its agrarian crisis and farmer suicides, as well as for the conflict which Bt cotton represents. This study adopts the categories of legitimation developed by Van Leeuwen (2007; 2008) in order to explore the theory of risk society (Beck, 1992; 1994; 1999; 2009), and the Habermasian (1996: 356-366) core-periphery model as means of theoretically analysing democratic legitimacy. The legitimation of risk and democracy in relation to Bt cotton refers to normative views on the way in which power should be exercised with regard to risk differentiation, construction and definition. The analysis finds that the more legitimate the exercise of power, the lower the exposure to risk as a concern for the collective. This also has consequences for the way in which resources are distributed, knowledge constructed, and democratic praxis institutionalised as a concern for social and epistemic justice. The thesis argues that the struggle to legitimate risk and democracy has implications not only for the constitution of the new state of Telangana and the region’s development, but also for the emergence of global society and the future development of humanity as a whole.
Abstract:
We measured the midlatitude daytime ionospheric D region electron density profile height variations in July and August 2005 near Duke University by using radio atmospherics (sferics for short), the high-power, broadband very low frequency (VLF) signals launched by lightning discharges. As expected, the measured daytime D region electron density profile heights showed temporal variations quantitatively correlated with solar zenith angle changes. In the midlatitude geographical regions near Duke University, the observed quiet-time heights decreased from ∼80 km near sunrise to ∼71 km near noon, when the solar zenith angle was minimum. The measured quantitative dependence of height on solar zenith angle was slightly different from the low-latitude measurement reported in a previous work. We also observed unexpected spatial variations not linked to the solar zenith angle on some days, with 15% of days exhibiting regional differences larger than 0.5 km. In these 2 months, 14 days had sudden height drops caused by solar flare X-rays, with a minimum height of 63.4 km observed. The induced height change during a solar flare event was approximately proportional to the logarithm of the X-ray flux. In the long waveband (wavelength 1-8 Å), an increase in flux by a factor of 10 resulted in a 6.3 km decrease of the height at the flux peak time, in near-perfect agreement with the previous measurement. During the rising and decaying phases of the solar flare, the height changes correlated more consistently with the short, rather than the long, wavelength X-ray flux changes. © 2010 by the American Geophysical Union.
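The logarithmic flux dependence stated above can be written as a worked formula. Here h₀ and Φ₀ are an assumed quiet-time reference height and long-band (1-8 Å) flux, and 6.3 km per decade is the coefficient reported in the abstract:

```latex
h(\Phi) \;\approx\; h_0 - 6.3\,\mathrm{km} \cdot \log_{10}\!\left(\frac{\Phi}{\Phi_0}\right)
```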
Abstract:
© 2015 Published by Elsevier B.V. Tree growth resources and the efficiency of resource-use for biomass production determine the productivity of forest ecosystems. In nutrient-limited forests, nitrogen (N)-fertilization increases foliage [N], which may increase photosynthetic rates, leaf area index (L), and thus light interception (I
Abstract:
The pseudo-spectral solution method offers a flexible and fast alternative to the more usual finite element/volume/difference methods, particularly when the long-time transient behaviour of a system is of interest. Since the exact solution is obtained at the grid collocation points, superior accuracy can be achieved on a modest grid resolution. Furthermore, the grid can be freely adapted in time and in space to particular flow conditions or geometric variations. This is especially advantageous where strongly coupled, time-dependent, multi-physics solutions are investigated. Examples include metallurgical applications involving the interaction of electromagnetic fields and conducting liquids with a free surface. The electromagnetic field then determines the instantaneous liquid volume shape, and the liquid shape in turn affects the electromagnetic field. In AC applications a thin "skin effect" region results on the free surface that dominates grid requirements. Infinitesimally thin boundary cells can be introduced using Chebyshev polynomial expansions without detriment to the numerical accuracy. This paper presents a general methodology of the pseudo-spectral approach and outlines the solution procedures used. Several instructive example applications are given: the aluminium electrolysis MHD problem, induction melting and stirring, and the dynamics of magnetically levitated droplets in AC and DC fields. Comparisons to available analytical solutions and to experimental measurements are discussed.
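For readers unfamiliar with the approach, a minimal sketch of the Chebyshev collocation ingredient follows. This is the standard differentiation-matrix construction (as in Trefethen's "Spectral Methods in MATLAB"), not the paper's solver; `cheb` and the grid size are illustrative.

```python
import numpy as np

def cheb(n):
    """Chebyshev collocation points and differentiation matrix on [-1, 1]
    (the standard construction used in spectral collocation)."""
    if n == 0:
        return np.array([1.0]), np.zeros((1, 1))
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))  # fix the diagonal via negative row sums
    return x, D

# Spectral accuracy on a smooth test function: the derivative error decays
# faster than any power of n, which is why modest grids suffice when the
# solution is exact at the collocation points.
x, D = cheb(24)
print("max derivative error:", np.max(np.abs(D @ np.exp(x) - np.exp(x))))
```

Clustering of the points x = cos(πj/n) near the endpoints is also what allows the infinitesimally thin boundary cells mentioned above for resolving the AC skin-effect region.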
Abstract:
Heat is extracted from an electronic package by convection, conduction, and/or radiation. The amount of heat extracted by forced convection using air is highly dependent on the characteristics of the airflow around the package, including its velocity and direction. Turbulence in the air is also important and must be modeled accurately in thermal design codes that use computational fluid dynamics (CFD). During air cooling the flow can be classified as laminar, transitional, or turbulent. In electronics systems, the flow around the packages is usually in the transition region, which lies between laminar and turbulent flow. This requires a low-Reynolds-number numerical model to fully capture the impact of turbulence on the fluid flow calculations. This paper compares a number of turbulence models with experimental data. These models include LVEL (based on the distance from the nearest wall and the local velocity), Wolfshtein, Norris and Reynolds, k-ε, k-ω, shear-stress transport (SST), and k-ε/k-l models. Results show that in terms of the fluid flow calculations most of the models capture the difficult wake recirculation region behind the package reasonably well, although for packages whose heights cause a high degree of recirculation behind the package the SST model appears to struggle. The paper also demonstrates the sensitivity of the models to changes in the mesh density; this study is aimed specifically at thermal design engineers, as mesh-independent simulations are rarely conducted in an industrial environment.
Abstract:
The growth of computer power allows the solution of complex problems related to compressible flow, an important class of problems in modern-day CFD. Over the last 15 years or so, many review works on CFD have been published. This book concerns both mathematical and numerical methods for compressible flow. In particular, it provides a clear-cut introduction as well as an in-depth treatment of modern numerical methods in CFD. The book is organised in two parts. The first part consists of Chapters 1 and 2 and is mainly devoted to theoretical discussions and results. Chapter 1 concerns fundamental physical concepts and theoretical results in gas dynamics. Chapter 2 describes the basic mathematical theory of compressible flow using the inviscid Euler equations and the viscous Navier–Stokes equations. Existence and uniqueness results are also included. The second part presents modern numerical methods for the Euler and Navier–Stokes equations. Chapter 3 is devoted entirely to the finite volume method for the numerical solution of the Euler equations and covers fundamental concepts such as the order of numerical schemes, stability, and high-order schemes. The finite volume method is illustrated for the 1-D as well as multidimensional Euler equations. Chapter 4 covers the theory of the finite element method and its application to compressible flow. A section is devoted to the combined finite volume–finite element method, and its background theory is also included. Throughout the book numerous examples are included to demonstrate the numerical methods. The book provides good insight into numerical schemes, theoretical analysis, and the validation of test problems. It is a very useful reference for applied mathematicians, numerical analysts, and practising engineers, as well as an important reference for postgraduate researchers in the field of scientific computing and CFD.
Abstract:
The position and structure of the North Atlantic Subtropical Front is studied using Lagrangian flow tracks and remote sensing (AVHRR imagery; TOPEX/POSEIDON altimetry; SeaWiFS) in a broad region (∼31° to ∼36°N) of marked gradient of dynamic height (Azores Current) that extends from the Mid-Atlantic Ridge (MAR), near ∼40°W, to the Eastern Boundary (∼10°W). Drogued Argos buoy and ALACE tracks are superposed on infrared satellite images in the Subtropical Front region. Cold (cyclonic) structures, called Storms, and warm (anticyclonic) structures of 100-300 km in size can be found on the south side of the Subtropical Front outcrop, which has a temperature contrast of about 1°C that can be followed for ∼2500 km near 35°N. Warmer water adjacent to the outcrop is flowing eastward (Azores Current) but some warm water is returned westward about 300 km to the south (southern Counterflow). Estimates of horizontal diffusion in a Storm (D = 2.2×10² m² s⁻¹) and in the Subtropical Front region near 200 m depth (D_x = 1.3×10⁴ m² s⁻¹, D_y = 2.6×10³ m² s⁻¹) are made from the Lagrangian tracks. Altimeter and in situ measurements show that Storms track westwards. Storms are separated by about 510 km and move westward at 2.7 km d⁻¹. Remote sensing reveals that some initial structures start evolving as far east as 23°W but are more organized near 29°W, and therefore Storms are about 1 year old when they reach the MAR, having travelled a distance of 1000 km (at 2.7 km d⁻¹, roughly 370 days). Structure and seasonality in SeaWiFS data in the region are examined.
Abstract:
The purpose of this study was to mathematically characterize the effects of defined experimental parameters (probe speed and the ratio of the probe diameter to the diameter of the sample container) on the textural/mechanical properties of model gel systems. In addition, this study examined the applicability of dimensional analysis for the rheological interpretation of textural data in terms of shear stress and rate of shear. Aqueous gels (pH 7) were prepared containing 15% w/w poly(methylvinylether-co-maleic anhydride) and poly(vinylpyrrolidone) (PVP) (0, 3, 6, or 9% w/w). Texture profile analysis (TPA) was performed using a Stable Micro Systems texture analyzer (model TA-XT 2; Surrey, UK) in which an analytical probe was twice compressed into each formulation to a defined depth (15 mm) and at defined rates (1, 3, 5, 8, and 10 mm s⁻¹), allowing a delay period (15 s) between the end of the first and the beginning of the second compression. Flow rheograms were obtained using a Carri-Med CSL2-100 rheometer (TA Instruments, Surrey, UK) with parallel plate geometry under controlled shearing stresses at 20.0 ± 0.1°C. All formulations exhibited pseudoplastic flow with no thixotropy. Increasing concentrations of PVP significantly increased formulation hardness, compressibility, adhesiveness, and consistency. Increased hardness, compressibility, and consistency were ascribed to enhanced polymeric entanglements, thereby increasing the resistance to deformation. Increasing probe speed increased formulation hardness in a linear manner, because of the effects of probe speed on probe displacement and surface area. The relationship between formulation hardness and probe displacement was linear and was dependent on probe speed. Furthermore, the proportionality constant (gel strength) increased as a function of PVP concentration. The relationship between formulation hardness and diameter ratio was biphasic and was statistically defined by two linear relationships relating to diameter ratios from 0 to 0.4 and from 0.4 to 0.563. The dramatically increased hardness associated with diameter ratios in excess of 0.4 was attributed to boundary effects, that is, the effect of the container wall on product flow. Using dimensional analysis, the hardness and probe displacement in TPA were mathematically transformed into the corresponding rheological parameters, namely shearing stress and rate of shear, thereby allowing the application of the power law (τ = kγ̇ⁿ, where τ is the shear stress, γ̇ the rate of shear, k the consistency, and n the flow index) to textural data. Importantly, the consistencies (k) of the formulations, calculated using transformed textural data, were statistically similar to those obtained using flow rheometry. In conclusion, this study has, firstly, characterized the relationships between textural data and two key instrumental parameters in TPA and, secondly, described a method by which rheological information may be derived using this technique. This will enable a greater application of TPA for the rheological characterization of pharmaceutical gels and will enable efficient interpretation of textural data under different experimental parameters.
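The final transformation lends itself to a short worked example. The sketch below (illustrative only; the synthetic k and n values are assumptions, not the study's data) fits the power-law model τ = kγ̇ⁿ to stress/shear-rate pairs by linear regression in log-log space, which is how the consistency k and flow index n are conventionally recovered.

```python
import numpy as np

def fit_power_law(shear_rate, shear_stress):
    """Fit the power-law (Ostwald-de Waele) model tau = k * gamma_dot**n by
    linear regression in log-log space, returning (k, n). The inputs stand
    in for the shear rates / stresses obtained by dimensional analysis from
    TPA hardness and probe displacement (hypothetical values)."""
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
    return float(np.exp(intercept)), float(slope)  # (consistency k, index n)

# Toy check: synthetic pseudoplastic data (n < 1) recovers its parameters.
gamma_dot = np.array([1.0, 3.0, 5.0, 8.0, 10.0])  # s^-1, like the probe speeds
tau = 4.2 * gamma_dot ** 0.45                     # Pa, assumed k = 4.2, n = 0.45
print(fit_power_law(gamma_dot, tau))              # ≈ (4.2, 0.45)
```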