923 results for Radon mitigation
Abstract:
Shipboard X-band radar images acquired on 24 June 2009 are used to study nonlinear internal wave characteristics in the northeastern South China Sea. The images show a packet of three nonlinear internal waves. A method based on the Radon transform is introduced to calculate internal wave parameters, such as the propagation direction and the internal wave velocity, from the backscatter images. Assuming the ocean is a two-layer system of finite depth, the mixed-layer depth is derived by substituting the internal wave velocity into the mixed-layer depth formula. The results show reasonably good agreement with in-situ thermistor chain and conductivity-temperature-depth data sets.
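A two-layer long-wave phase-speed relation of the kind this abstract invokes can be inverted for the mixed-layer depth. The sketch below assumes the simplest relation, c^2 = g' h1 h2 / (h1 + h2) with h2 = H - h1 and g' the reduced gravity; the finite-depth formula actually used in the paper may include further corrections, and all numbers are illustrative.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def mixed_layer_depth(c, H, drho_over_rho):
    """Shallower root h1 of c^2 = g' * h1 * (H - h1) / H, with g' = g * drho/rho.
    A sketch of the inversion step, not the paper's exact formula."""
    gp = g * drho_over_rho                     # reduced gravity g'
    disc = gp * H * (gp * H - 4.0 * c**2)      # discriminant of the quadratic in h1
    if disc < 0:
        raise ValueError("wave speed too high for this stratification")
    return (gp * H - math.sqrt(disc)) / (2.0 * gp)
```

With illustrative values (total depth 500 m, relative density jump 0.002), a measured phase speed of about 0.94 m/s maps back to a 50 m mixed layer.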
Abstract:
The recovery and fate of three dinoflagellate species, Alexandrium tamarense, Cochlodinium polykrikoides and Scrippsiella trochoidea, after sedimentation by yellow clay were investigated in the laboratory. The effects of the burial period in the yellow clay pellet, and of mixing, on the recovery of settled algal cells were studied, and the morphological changes of algal cells within the clay pellet were also tracked. There was almost no recovery for A. tamarense and C. polykrikoides; their cells decomposed within 2-3 days, after visible changes in morphology and chloroplasts. There was some recovery for S. trochoidea. Moreover, S. trochoidea cysts formed in the clay pellet over a period of about 14 days, with a maximum abundance of 87 000 cysts g(-1) clay and a cyst-formation incidence of 6.5%, which is considered a potential threat of further algal blooms. S. trochoidea cysts isolated from the yellow clay and incubated to test their viability gave a germination ratio of more than 30% after one month of incubation. These results show that the mitigation effect of yellow clay is species-specific. It is suggested that caution be exercised with some harmful species and that thorough risk assessments be conducted before this mitigation strategy is used in the field.
Abstract:
Previous attempts to remove the brown tide organism, Aureococcus anophagefferens, through flocculation with clays have been unsuccessful, despite adopting concentrations and dispersal protocols that yielded excellent cell removal efficiency (RE > 90%) with other species, so a study was planned to improve cell removal. Four modifications to clay preparation and dispersal were explored: 1) varying the salinity of the clay suspension; 2) mixing the clay-cell suspension after clay addition; 3) varying the concentration of the initial clay stock; 4) pulsed loading of the clay slurry. The effect of salinity depended on the clay mineral type: phosphatic clay (IMC-P2) had a higher RE than kaolinite (H-DP) when seawater was used to disperse the clay, but H-DP removed cells more efficiently when suspended in distilled water prior to application. Mixing after dispersal approximately doubled the RE of both clays compared with layering the slurry over the culture surface. Lowering the concentration of the clay stock and pulsing the clay loading increased RE regardless of mineral type, and this increase was more apparent for clays dispersed in seawater than in distilled water. In general, application procedures that decrease the rate of self-aggregation among the clay particles and increase the collision frequency between clay particles and A. anophagefferens achieve higher cell removal efficiency. These empirical studies demonstrate that clays can be an important control option for the brown tide organism, given proper attention to preparation, dispersal methods, environmental impacts, and the hydrodynamic properties of the system being treated. Implications for the treatment of brown tides in the field are discussed.
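The removal-efficiency metric quoted above (RE > 90%) is conventionally computed from cell concentrations before and after treatment. A minimal sketch, with the function name and the numbers chosen for illustration rather than taken from the paper:

```python
def removal_efficiency(c_initial, c_final):
    """Cell removal efficiency in percent, from cell concentrations (cells/mL)
    before and after clay treatment. Illustrative helper, not the paper's code."""
    return 100.0 * (1.0 - c_final / c_initial)
```

For example, a culture reduced from 1.0e6 to 5.0e4 cells/mL corresponds to RE = 95%.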
Abstract:
This paper discusses the composition and working principle of a weld-seam topography inspection system based on machine vision and structured light. To meet the demanding real-time requirements of seam inspection in tailored-blank laser welding, a fast algorithm for detecting weld mismatch defects is proposed. Its main steps are: first, thresholds for the inspection indices are set according to the relevant standards; next, the images acquired online are preprocessed, mainly by windowing and median filtering; finally, the Radon transform is applied and the mismatch is detected. The algorithm reduces the computational load and thus yields the mismatch indices quickly. Experiments give the magnitude and distribution of the linear mismatch in butt welding of plates of unequal thickness, and verify the effectiveness of the algorithm.
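The median-filter-then-Radon-transform pipeline described here can be sketched on a toy image, using a rotation-based discrete Radon transform (sinogram). This assumes `scipy` is available and is not the authors' implementation; the dominant orientation of a linear feature shows up as the projection angle with the sharpest peak.

```python
import numpy as np
from scipy.ndimage import median_filter, rotate

# Toy seam image (illustrative, not the authors' data): a bright horizontal band.
img = np.zeros((64, 64))
img[31:34, :] = 1.0
img = median_filter(img, size=3)   # the median-filtering preprocessing step

# Discrete Radon transform: rotate and sum columns at each candidate angle.
angles = np.arange(0, 180)
sino = np.array([rotate(img, a, reshape=False, order=1).sum(axis=0)
                 for a in angles])
best_angle = angles[sino.max(axis=1).argmax()]   # sharpest projection peak
```

Here the horizontal band projects most sharply once rotated to vertical, so the detected angle is close to 90 degrees; a real mismatch detector would go on to compare the offsets of line segments found this way.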
Abstract:
The unique geologic, geomorphic and climatic conditions of southeast Tibet have produced varied and frequent geologic hazards, especially collapses, landslides and debris flows along the Ranwu-Lulang section of the Sichuan-Tibet highway. Most of these hazards are closely associated with loose accumulations; that is, loose accumulations are the main carrier of most geologic hazards. This thesis therefore takes the huge-thick accumulations along the highway as its subject, studying their geologic background, hazard models and mitigation methods comprehensively, drawing on multi-disciplinary theory and earlier material. First, based on field engineering-geologic investigations, the genetic types and spatiotemporal distribution of the huge-thick loose accumulations along the highway are analysed in terms of regional geology, geomorphology and climate, and of how these factors couple in the inoculation and eruption of accumulation-related geologic hazards. The huge-thick loose accumulations have complex genetic types and a distinctive spatiotemporal distribution, closely controlled by the regional environment. The accumulations are composed of soil and boulders, with disordered structure, poor sorting, and particular forming environments and depositional conditions; their physical and mechanical properties differ greatly from those of rock and of common inland soils. When the Sichuan-Tibet highway was first constructed along the north bank of the Purlung Tsangpo River, the huge-thick loose accumulations were cut into many high, steep slopes. Through surveys of the cut-slopes and systematic investigation of their failures, the combinations of height and angle of the accumulation slopes have been obtained.
At the same time, the genetic-structure types of these cut-slopes and their failure modes are analysed and classified: the genetic structures fall into piaster, duality, multi-element and complexity types, and the failure modes into rip-dump-repose, rip-shear-slip and weathering-flake types. Present engineering methods and techniques for dealing with deformation and failure of the accumulation cut-slopes are briefly introduced, and the application of several new slope-reinforcement techniques and of landslide- and rockfall-avoidance methods is suggested. The research on the high, steep cut-slopes along the highway broadens understanding of the combination of cut-slope height and angle. In particular, the dissertation makes monographic studies of the geologic background, hazard models and prevention methods of several classic but difficult accumulation-related geologic hazards: (1) The engineering-geologic background of the 102 landslide group and key problems of the tunnel project. The 102 landslide group is a well-known accumulational landslide group composed of glacial tills and glaciofluvial deposits; a tunnel is a feasible option that could resolve the present plight of "sliding again just after remediation" in the 102 section. Based on the glacial geomorphology and its depositional character, the distribution of the seepage line, a few borehole records and some survey data, the position of the contact surface between the gneiss and the accumulations has been recognized, the retreat velocities on three time scales (short, medium and long term) have been approximately calculated, and the weathering thickness of the gneiss has been estimated. On this basis, a new engineering geomechanical model is established.
Numerical analysis of the stability of the No. 2 landslide with the FLAC program indicates that the landslide develops periodically. Accordingly, four tunnel alignments through the landslide are put forward, and the safe distance of the tunnel from the slip zone is analysed numerically. (2) The geologic setting, disaster model and hazard mitigation of the sliding-sand-slope. From the geologic setting of the talus cone, the sliding-sand-slope is shown to be the re-transportation and re-deposition of sand from the talus cone under gravity; it is essentially a failure of the talus cone. The layered structure of the sliding-sand-slope is identified, and models of its movement and failure are proposed. The technique of "abamurus + grass-bush fence + degradable culture pan" is suggested for reinforcing and re-vegetating the sliding-sand-slope. (3) Characteristics, hazard models and disaster mitigation of debris flows. The solid-material sources of three oversize debris flows are analysed; a large amount of moraine in the glacial valleys and the breaching of large landslide dams are found to be the two key preconditions for oversize debris flows. Disaster models for oversize and for common debris flows are generalized respectively, the former better interpreting the breaching of the Yigong super-large landslide dam. The features of common debris flows along the highway section (scouring, silting, burying and impacting) are formulated carefully. Check dams are suggested as a better engineering structure for protecting valleys from severe scouring by debris flows, and their function in reinforcing the slope is calculated numerically with the FLAC program. (4) The Songzong ancient ice-dammed lake and its slope stability.
The lacustrine profile in the Songzong landslide, more than 88 m thick, is carefully described and measured. Optically Stimulated Luminescence (OSL) ages at the bottom and top of the silty clay layer are 22.5±3.3 ka B.P. and 16.1±1.7 ka B.P., respectively, indicating that the lacustrine deposits formed during the Last Glacial Maximum, from about 25 ka B.P. to 15 ka B.P. The characteristics of the lacustrine sediment and of the ancient lake shoreline in the Songzong basin indicate that the sediment is related to the blocking of the Purlung Tsangpo River by a glacier from the Dongqu valley during the Last Glacial Maximum, and that the Songzong ice-dammed lake may have persisted through the Last Glacial Maximum. Two-dimensional numerical modelling with the FLAC program simulates the slope stability under natural and earthquake conditions. The factor of safety of the lacustrine slope is 1.04 under natural conditions, but under earthquake loading horizontal flow would occur owing to liquefaction of the 18.33 m silt layer. Realignment of the road is suggested to protect it from the landslide.
Abstract:
I address the reconstruction of spatially irregularly sampled seismic data onto regular grids. Irregular spatial sampling impairs prestack migration, multiple attenuation and spectral estimation. Prestack 5-D volumes are often divided into sub-sections for further processing, and shot gathers are easy to obtain from irregularly sampled volumes. My reconstruction strategy is as follows: I re-sort the irregularly sampled gathers into a form that is easy to bin, perform bin regularization, and then use F-K inversion to reconstruct the seismic data. Because F-K regularization fills large gaps poorly, I sort the regularized gathers into CMP gathers and propose a high-resolution parabolic Radon transform to interpolate data and extrapolate offsets; for the strong interfering noise of multiples, I use a hybrid-domain high-resolution parabolic Radon transform to attenuate them. F-K regularization ultimately demands low computing cost, so I propose several methods to further improve the efficiency of the F-K inversion: I introduce 1-D and 2-D NFFT algorithms for rapid evaluation of the DFT operators; develop fast 1-D and 2-D conjugate-gradient (CG) solvers for the least-squares equations, with a preconditioner to accelerate CG convergence; and use Delaunay triangulation for weight calculation together with band-limited frequency and varying-bandwidth techniques for competitive computation. 2-D and 3-D numerical examples verify that the results are reasonable and the computation more efficient. The hybrid-domain high-resolution parabolic Radon transforms developed on the CMP gathers can be used either to interpolate null traces and extrapolate near and far offsets, or to suppress multiples; I use them to attenuate multiples to verify the performance of the algorithm, and propose routines for industrial application. Numerical and field-data examples show that the method performs well.
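The "least-squares inversion solved by preconditioned CG" step is generic enough to sketch. Below is a minimal Jacobi-preconditioned conjugate-gradient solver for damped normal equations, illustrative only: the thesis works with DFT/NFFT operators rather than the dense matrix used here, and all names are my own.

```python
import numpy as np

def pcg_least_squares(A, d, eps=1e-8, niter=100):
    """Solve min ||A m - d||^2 + eps ||m||^2 by conjugate gradients on the
    normal equations (A^T A + eps I) m = A^T d, with a diagonal (Jacobi)
    preconditioner. A sketch of the strategy, not the thesis code."""
    diag = (A * A).sum(axis=0) + eps       # diagonal of A^T A + eps I
    m = np.zeros(A.shape[1])
    r = A.T @ d                            # normal-equation residual at m = 0
    z = r / diag                           # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(niter):
        Ap = A.T @ (A @ p) + eps * p
        alpha = rz / (p @ Ap)
        m = m + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < 1e-12:
            break
        z = r / diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return m
```

The preconditioner simply rescales each unknown by the corresponding diagonal of the normal operator, which is often enough to cut the CG iteration count substantially for poorly scaled operators.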
Abstract:
At present, to image complex structures more accurately, seismic migration methods have been extended from isotropic to anisotropic media. This dissertation systematically develops a prestack time migration algorithm for complex structures, together with its applications. For transversely isotropic media with a vertical symmetry axis (VTI media), starting from the view that prestack time migration is an approximation of prestack depth migration, and building on the one-way wave equation and the VTI time-migration dispersion relation combined with stationary-phase theory, the dissertation gives a wave-equation-based VTI prestack time migration algorithm. With this algorithm we can obtain analytic traveltime and amplitude expressions in VTI media and establish how the anisotropy parameter influences time migration; by analysing the normal moveout of far-offset seismic data and the lateral inhomogeneity of velocity, we can update the velocity model and estimate the anisotropy-parameter model through the migration itself. When the anisotropy parameter is zero, the algorithm degenerates naturally to isotropic time migration, so an isotropic imaging procedure can also be proposed. The procedure keeps the main virtues of time migration, high computational efficiency and velocity estimation through migration, and additionally partially compensates geometric divergence by adopting the deconvolution imaging condition of wave-equation migration. Application of the algorithm to a complicated synthetic dataset and to field data demonstrates the effectiveness of the approach. The dissertation also presents an approach for estimating the velocity model and the anisotropy-parameter model.
After analysing how velocity and the anisotropy parameter affect time migration, and based on the normal moveout of far-offset data and the lateral inhomogeneity of velocity, we update the velocity model and estimate the anisotropy-parameter model through migration by combining the advantages of velocity analysis in isotropic media with anisotropy-parameter estimation in VTI media. Tests on synthetic and field data show the method to be effective and very stable. A large synthetic dataset, a 2-D marine dataset and 3-D field datasets are migrated with VTI prestack time migration and compared with the NMO-stacked section and the prestack isotropic time-migration stack, demonstrating that the VTI prestack time migration of this paper achieves better focusing and smaller positioning errors for complicated dipping reflectors. When the subsurface is more complex, primaries and multiples cannot be separated in the Radon domain because they can no longer be described by simple (parabolic) functions. We propose a multiple-attenuation method in the image domain to resolve this problem. For a given velocity model, since time migration takes wavefield propagation through the complex structures into account, primaries and multiples have different offset-domain moveouts after migration, and they can then be separated using techniques similar to pre-migration Radon filtering. Since every offset-domain common-reflection-point gather incorporates complex 3-D propagation effects, the method has the advantage of working with 3-D data and complicated geology. Tests on synthetic and real data demonstrate its power in discriminating primaries from multiples after prestack time migration; multiples can be attenuated considerably in the image space.
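For context on how the anisotropy parameter enters the far-offset moveout that the velocity and parameter estimation exploits, the standard nonhyperbolic (Alkhalifah-Tsvankin) moveout approximation for VTI media can be evaluated directly. This is a textbook form, not necessarily the dissertation's exact traveltime expression:

```python
import math

def vti_moveout_time(t0, x, v_nmo, eta):
    """Nonhyperbolic VTI moveout (Alkhalifah-Tsvankin approximation):
    t^2 = t0^2 + x^2/V^2 - 2*eta*x^4 / (V^2 * (t0^2*V^2 + (1+2*eta)*x^2)).
    t0 in s, offset x in m, NMO velocity v_nmo in m/s, eta dimensionless."""
    t2 = (t0**2 + x**2 / v_nmo**2
          - 2.0 * eta * x**4
          / (v_nmo**2 * (t0**2 * v_nmo**2 + (1.0 + 2.0 * eta) * x**2)))
    return math.sqrt(t2)
```

Setting eta = 0 recovers the familiar hyperbolic moveout, which is exactly the degeneration to the isotropic case noted in the abstract.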
Abstract:
Rockfall is a geological process in which blocks or boulders detach from a slope face, then free-fall, bounce, roll or slide, and finally come to rest near the toe of the slope. Rockfalls can endanger people and should be regarded as a geological hazard. A rockfall event may involve a single boulder or several. When people, buildings or other man-made facilities lie within the rockfall trajectory, losses of human life or damage to the facilities may result. Research into the mechanism, kinematics, dynamics, hazard assessment, risk analysis and mitigation of rockfalls is therefore necessary and important. The occurrence of rockfall is controlled by many conditions, mainly topographic, geomorphic and geological ones together with triggering factors. Rockfalls, especially in mountainous areas, have varied origins and tend to be frequent, unexpected, uncertain, clustered, periodic and sectional. Characterizing and classifying rockfalls not only improves knowledge of rockfall mechanisms but also guides hazard mitigation. In addition, the stability of potential rockfalls shows varying sensitivity to different triggering factors and to changes in geometrical conditions. Through theoretical analysis, laboratory experiments and field tests, the author presents back-analysis methods for the sliding and rolling friction coefficients and for the restitution coefficients; the required input data can be obtained economically and accurately in the field. Building on a study of hazard-assessment methods and of the factors influencing rockfall hazard, this thesis presents a new assessment methodology consisting of a preliminary assessment and a detailed one.
Application to a 430 km stretch of the highway between Paksho and Nyingtri in Tibet shows that the methodology is applicable to rockfall hazard assessment in complex and difficult terrain. In addition, risk analyses along the stretch are conducted by computing the probability of encountering rockfalls and the life losses resulting from rockfall impacts. Rockfall hazards may be mitigated by avoiding hazardous areas, removing dangerous rocks, reinforcement, obstructing or deflecting the rockfalls, and warning and monitoring systems. Given the present state of rockfall remediation, further development is needed in mitigation measures, economical and effective buffering units, monitoring techniques, and awareness of environmental protection in rockfall mitigation.
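Back-analysis of restitution and rolling-friction coefficients from simple field observations, as the thesis advocates, can be sketched with elementary mechanics. The formulas below are standard idealizations (drop-test rebound heights, and an energy balance along a planar slope), not the author's calibrated procedures:

```python
import math

def restitution_coefficient(h_before, h_after):
    """Normal restitution coefficient from successive rebound heights of a
    free-falling block: e = sqrt(h_after / h_before). Heights in metres."""
    return math.sqrt(h_after / h_before)

def rolling_friction_coefficient(slope_deg, v_start, v_end, travel_len):
    """Equivalent rolling-friction coefficient from the per-unit-mass energy
    balance 0.5*(v_end^2 - v_start^2) = g*L*(sin a - mu*cos a) along a slope
    of angle a and length L. Velocities in m/s, length in m."""
    a = math.radians(slope_deg)
    g = 9.81
    return math.tan(a) - (v_end**2 - v_start**2) / (2.0 * g * travel_len * math.cos(a))
```

Both functions only need quantities that are cheap to measure in the field (rebound heights, slope geometry, entry and exit speeds), which is the practical appeal of back-analysis.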
Abstract:
The dissertation addresses signal reconstruction and data restoration in seismic data processing, taking signal-representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For representation on natural bases, I present the fundamentals and algorithms of independent component analysis (ICA) and its first applications to the separation of natural-earthquake signals and of exploration seismic signals. For representation on deterministic bases, the dissertation proposes least-squares inversion regularization methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate-gradient (PCG) methods, and their applications to seismic deconvolution, the Radon transform, and related problems. The core content is a de-aliased reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two kinds of signal representation, they fit into one framework: both restore signals or information, the former reconstructing original signals from mixtures, the latter reconstructing complete data from sparse or irregular data, and both provide pre- and post-processing methods for seismic prestack depth migration. ICA can separate original signals from their mixtures, or extract the basic structure of the analysed data. I survey the fundamentals, algorithms and applications of ICA and, by comparison with the KL transform, propose the concept of an independent-components transform (ICT). Using negentropy as the measure of independence, I implement FastICA and improve it via the covariance matrix. After analysing the characteristics of seismic signals, I introduce ICA into seismic signal processing for the first time in the geophysical community, and implement the separation of noise from seismic signal.
Synthetic and real data examples show that ICA is usable for seismic signal processing, and initial results are achieved. ICA is applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, leading to a more reasonable interpretation of subsurface discontinuities. The results show the promise of ICA for geophysical signal processing. Using the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospect of applying ICA to it, with two possible solutions. The relationship among PCA, ICA and the wavelet transform is set out, and it is proved that the reconstruction of wavelet prototype functions is a Lie-group representation. In passing, an over-sampled wavelet transform is proposed to enhance seismic data resolution, validated by numerical examples. The key to prestack depth migration is the regularization of prestack seismic data, for which seismic interpolation and missing-data reconstruction are necessary steps. I first review seismic imaging methods to argue the critical role of regularization; reviewing seismic interpolation algorithms, I note that de-aliased reconstruction of unevenly sampled data remains a challenge. The fundamentals of seismic reconstruction are discussed first. Then sparseness-constrained least-squares inversion and the preconditioned conjugate-gradient solver are studied and implemented. Choosing a Cauchy-distributed constraint term, I program a PCG algorithm and implement sparse seismic deconvolution and high-resolution Radon transforms by PCG, in preparation for seismic data reconstruction. As for seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well separately, but the two could not previously be combined.
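The FastICA algorithm surveyed here can be sketched from scratch for two synthetic mixtures (tanh nonlinearity, symmetric decorrelation). This illustrates the separation principle only; the sources, mixing matrix and all parameters are synthetic, and this is not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(2 * np.pi * t),                # source 1: sinusoid
               np.sign(np.sin(3 * np.pi * t))])      # source 2: square wave
A = np.array([[1.0, 0.6], [0.5, 1.0]])               # illustrative mixing matrix
X = A @ S                                            # observed mixtures

X = X - X.mean(axis=1, keepdims=True)                # center
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)) @ E.T @ X                       # whiten: cov(Z) = I

W = rng.standard_normal((2, 2))
for _ in range(200):                                 # fixed-point iterations
    G = np.tanh(W @ Z)                               # nonlinearity g = tanh
    W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)
    W = u @ vt                                       # symmetric orthogonalization
Y = W @ Z                                            # recovered sources (up to sign/order)
```

As usual with ICA, the recovered components match the sources only up to sign, scale and permutation, which is why applications compare waveform shape rather than amplitude.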
In this paper, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that are both unevenly sampled and aliased. I formulate band-limited data reconstruction as a minimum-norm least-squares inversion with an adaptive, DFT-weighted norm regularization term. The inverse problem is solved by a preconditioned conjugate-gradient method, which makes the solution stable and quickly convergent. On the assumption that seismic data consist of a finite number of linear events, it follows from the sampling theorem that aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three applications are discussed: interpolation across even gaps, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real-data examples show that the proposed method is valid, efficient and applicable, and the research is valuable for seismic data regularization and crosswell seismics. To meet the data requirements of 3-D shot-profile depth migration, the data must be made even and consistent with the velocity dataset. The methods of this paper are used to interpolate and extrapolate the shot gathers instead of simply embedding zero traces, so the migration aperture is enlarged and the migration result improved. The results show the method's effectiveness and practicability.
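The core idea, band-limited minimum-norm least-squares reconstruction from irregular samples, can be sketched without the adaptive DFT weighting or the PCG solver (a plain `lstsq` stands in). Grid size, band limit and seed are illustrative, not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                                  # regular output grid length
pos = np.sort(rng.choice(N, size=40, replace=False))    # irregular sample positions
k = np.arange(-4, 5)                                    # assumed band limit: 9 wavenumbers

F_irr = np.exp(2j * np.pi * np.outer(pos, k) / N)       # DFT synthesis at irregular points
c_true = rng.standard_normal(k.size)                    # true Fourier coefficients
d = F_irr @ c_true                                      # observed irregular data

c_hat, *_ = np.linalg.lstsq(F_irr, d, rcond=None)       # minimum-norm least squares
F_reg = np.exp(2j * np.pi * np.outer(np.arange(N), k) / N)
signal_on_grid = F_reg @ c_hat                          # data reconstructed on the grid
```

Because the signal is assumed band-limited and the irregular samples over-determine the few active wavenumbers, the coefficients (and hence the regular-grid data) are recovered essentially exactly; the thesis's weighting and low-frequency prediction extend this to aliased, noisy field data.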
Abstract:
Web threats are becoming a major issue for both governments and companies. Web threats in general increased by as much as 600% during the last year (WebSense, 2013). This is a significant issue, since many major businesses appear to provide these services. Denial of Service (DoS) attacks are among the most significant web threats, and their aim is generally to exhaust the resources of the target machine (Mirkovic & Reiher, 2004). Distributed Denial of Service (DDoS) attacks are typically executed from many sources and can produce very large traffic flows; during the last year, 11% of DDoS attacks exceeded 60 Gbps (Prolexic, 2013a). DDoS attacks are usually launched from large botnets, which are networks of remotely controlled computers. Increasing efforts by governments and companies to shut down botnets (Dittrich, 2012) have led attackers to look for alternative DDoS attack methods, and one technique to which they are returning is the DDoS amplification attack. Amplification attacks use intermediate devices, called amplifiers, to amplify the attacker's traffic. This work outlines an evaluation tool and evaluates an amplification attack based on the Trivial File Transfer Protocol (TFTP). The attack can achieve an amplification factor of approximately 60, which rates highly alongside other researched amplification attacks, and it could be a substantial issue globally because the protocol is exposed by approximately 599,600 publicly open TFTP servers. Mitigation methods for this threat are also considered and a variety of countermeasures are proposed. The effects of the attack on both amplifier and target were analysed using the proposed metrics. While it has been reported that breaching TFTP would be possible (Schultz, 2013), this paper provides a complete methodology for setting up the attack, together with its verification.
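The amplification factor quoted above (approximately 60) is a simple bandwidth ratio: bytes reflected toward the victim per byte the attacker sends to the amplifier. The sketch below uses hypothetical packet sizes, not measurements from the paper:

```python
def amplification_factor(request_bytes, response_bytes):
    """Bandwidth amplification factor of a reflection attack: reflected response
    size divided by the triggering request size. Illustrative metric sketch."""
    return response_bytes / request_bytes
```

For example, a 20-byte read request that elicits 1200 bytes of reflected data blocks would give a factor of 60; real measurements must also account for retransmissions by the amplifier.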
Abstract:
In the Spallation Neutron Source (SNS) facility at Oak Ridge National Laboratory (ORNL), the deposition of a high-energy proton beam into the liquid mercury target forms bubbles whose asymmetric collapse causes Cavitation Damage Erosion (CDE) of the container walls, thereby reducing their usable lifetime. One proposed mitigation is to inject a population of microbubbles into the mercury, yielding a compliant and attenuative medium that reduces the resulting cavitation damage. This potential solution presents the task of creating a diagnostic tool to monitor the bubble population in the mercury flow in order to correlate void fraction and damage. Details of an acoustic waveguide for the eventual measurement of the void fraction of two-phase mercury-helium flow are discussed. The assembly's waveguide is a vertically oriented stainless steel cylinder with 5.08 cm ID, 1.27 cm wall thickness and 40 cm length. For water experiments, a 2.54 cm thick stainless steel plate at the bottom supports the fluid, provides an acoustically rigid boundary condition, and is the mounting point for a hydrophone. A port near the bottom is the inlet for the fluid of interest. A spillover reservoir welded to the upper portion of the main tube allows a flow-through design, yielding a pressure-release top boundary condition for the waveguide. A cover on the reservoir supports an electrodynamic shaker that is driven by linear frequency sweeps to excite the tube, and the hydrophone captures the frequency response of the waveguide. The sound speed of the flowing medium is calculated assuming a linear dependence of modal frequency on axial mode number (plane waves). Assuming that the medium has an effective-mixture sound speed, and that it contains bubbles much smaller than the resonance radii at the highest frequency of interest (Wood's limit), the void fraction of the flow is calculated.
Results for water and bubbly water of varying void fraction are presented, and serve to demonstrate the accuracy and precision of the apparatus.
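Inverting an effective-medium sound speed for void fraction in Wood's limit, the final step described above, can be sketched as below. The property values are illustrative water/air defaults (appropriate for the water experiments, not the mercury-helium values), and `scipy` is assumed available:

```python
import numpy as np
from scipy.optimize import brentq

def wood_sound_speed(phi, rho_l=998.0, c_l=1481.0, rho_g=1.2, c_g=343.0):
    """Effective sound speed of a bubbly liquid in Wood's limit (bubbles far
    below resonance size): mixture density times mixture compressibility."""
    rho_mix = (1.0 - phi) * rho_l + phi * rho_g
    compressibility = (1.0 - phi) / (rho_l * c_l**2) + phi / (rho_g * c_g**2)
    return 1.0 / np.sqrt(rho_mix * compressibility)

def void_fraction_from_speed(c_measured):
    """Invert Wood's equation for void fraction on the monotone branch phi < 0.5."""
    return brentq(lambda p: wood_sound_speed(p) - c_measured, 1e-9, 0.5)
```

Even a 1% void fraction drops the mixture sound speed of water by an order of magnitude, which is why the waveguide's modal frequencies are such a sensitive probe of bubble content.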
Abstract:
The case for energy policy modelling is strong in Ireland, where stringent EU climate targets are projected to be overshot by 2015. Policy commitments to greenhouse gas and renewable energy targets have been made, but it is unclear what savings are to be achieved and from which sectors. Concurrently, the growth of personal mobility has caused an astonishing increase in CO2 emissions from private cars in Ireland, a 37% rise between 2000 and 2008, and while there have been improvements in the efficiency of car technology, there was no decrease in the energy intensity of the car fleet in the same period. This thesis increases the capacity for evidence-based policymaking in Ireland by developing techno-economic transport energy models and using them to analyse historical trends and to project possible future scenarios. A central focus of this thesis is to understand the effect of the car fleet's evolving technical characteristics on energy demand. A car stock model is developed to analyse this question from three angles. Firstly, analysis of car registration and activity data between 2000 and 2008 examines the trends which brought about the surge in energy demand. Secondly, the car stock is modelled into the future and is used to populate a baseline "no new policy" scenario, looking at the impact of recent (2008-2011) policy and purchasing developments on projected energy demand and emissions. Thirdly, a range of technology efficiency, fuel switching and behavioural scenarios are developed up to 2025 in order to indicate the emissions abatement and renewable energy penetration potential of alternative policy packages. In particular, an ambitious car fleet electrification target for Ireland is examined. The car stock model's functionality is extended by linking it with other models: LEAP-Ireland, a bottom-up energy demand model for all energy sectors in the country; Irish TIMES, a linear optimisation energy system model; and COPERT, a pollution model.
The methodology is also adapted to analyse trends in freight energy demand in a similar way. Finally, this thesis addresses the gap in the representation of travel behaviour in linear energy systems models: a novel methodology is developed and case studies for Ireland and California are presented using the TIMES model.
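The core loop of a car-stock model of the kind described, cohorts of annual sales surviving via a scrappage curve and multiplied by annual activity and energy intensity, can be sketched as follows. All parameter values and names are illustrative and not calibrated to the Irish fleet:

```python
import math

def survival(age, half_life=14.0, k=0.3):
    """Logistic scrappage curve: fraction of a sales cohort still on the road
    at a given age. Parameters are illustrative, not fitted values."""
    return 1.0 / (1.0 + math.exp(k * (age - half_life)))

def fleet_energy(sales_by_vintage, mj_per_km_by_vintage, km_per_car, year):
    """Total fleet energy demand (MJ) in `year`: sum over vintages of surviving
    stock times annual distance times per-km energy intensity."""
    total = 0.0
    for vintage, sales in sales_by_vintage.items():
        age = year - vintage
        if age < 0:
            continue                      # cohort not yet sold
        total += sales * survival(age) * km_per_car * mj_per_km_by_vintage[vintage]
    return total
```

Scenario analysis then amounts to varying the inputs: efficiency scenarios lower the intensity of new vintages, behavioural scenarios change the annual kilometres, and electrification swaps cohorts onto a different fuel and intensity track.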
Abstract:
The sudden decrease of plasma stored energy and subsequent power deposition on the first wall of a tokamak due to edge localised modes (ELMs) is potentially detrimental to the success of a future fusion reactor. Understanding and control of ELMs is critical for the longevity of these devices and also to maximise their performance. The commonly accepted picture of ELMs posits a critical pressure gradient and current density in the plasma edge, above which coupled magnetohydrodynamic peeling-ballooning modes become unstable. Much analysis has been presented in recent years on the spatial and temporal evolution of the edge pressure gradient. However, the edge current density has typically been overlooked due to the difficulties in measuring this quantity. In this thesis, a novel method of current density recovery is presented, using the equilibrium solver CLISTE to reconstruct a high resolution equilibrium utilising both external magnetic and internal edge kinetic data measured on the ASDEX Upgrade tokamak. The evolution of the edge current density relative to an ELM crash is presented, showing that a resistive delay in the buildup of the current density is unlikely. An uncertainty analysis shows that the edge current density can be determined with an accuracy consistent with that of the kinetic data used. A comparison with neoclassical theory demonstrates excellent agreement between the current density determined by CLISTE and the calculated profiles. Three ELM mitigation regimes are investigated: Type-II ELMs, ELMs suppressed by external magnetic perturbations, and nitrogen-seeded ELMs. In the first two cases, the current density is found to decrease as mitigation onsets, indicating a more ballooning-like plasma behaviour. In the latter case, the flux-surface-averaged current density can decrease while the local current density increases, providing a mechanism to suppress both the peeling and ballooning modes.
Resumo:
The International Energy Agency has repeatedly identified increased end-use energy efficiency as the quickest, least costly method of greenhouse gas mitigation, most recently in the 2012 World Energy Outlook, and urges all governing bodies to increase efforts to promote energy efficiency policies and technologies. The residential sector is recognised as a major potential source of cost-effective energy efficiency gains. Within the EU this relative importance can be seen from a review of the National Energy Efficiency Action Plans (NEEAP) submitted by member states, which in all cases place a large emphasis on the residential sector. This is particularly true for Ireland, whose residential sector has historically had higher energy consumption and CO2 emissions than the EU average and whose first NEEAP targeted 44% of the energy savings to be achieved in 2020 from this sector. This thesis develops a bottom-up engineering archetype modelling approach to analyse the Irish residential sector and to estimate the technical energy savings potential of a number of policy measures. First, a model of space and water heating energy demand for new dwellings is built and used to estimate the technical energy savings potential due to the introduction of the 2008 and 2010 changes to Part L of the building regulations governing energy efficiency in new dwellings. Next, the author makes use of a valuable new dataset of Building Energy Rating (BER) survey results, first to characterise the highly heterogeneous stock of existing dwellings, and then to estimate the technical energy savings potential of an ambitious national retrofit programme targeting up to 1 million residential dwellings. This thesis also presents work carried out by the author as part of a collaboration to produce a bottom-up, multi-sector LEAP model for Ireland. Overall this work highlights the challenges faced in successfully implementing both sets of policy measures.
It points to the wide range of final savings achievable from particular policy measures and to the resulting high degree of uncertainty as to whether particular targets will be met, and it identifies the key factors on which the success of these policies will depend. It makes recommendations on further modelling work and on the improvements necessary in the data available to researchers and policy makers alike, in order to develop increasingly sophisticated residential energy demand models and better inform policy.
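The bottom-up archetype approach summarised above can be illustrated with a minimal sketch: the dwelling stock is partitioned into archetypes, each carrying a representative unit demand, and stock-level totals follow by aggregation. All archetype names, counts, demands and retrofit saving fractions below are hypothetical placeholders for illustration, not the BER-derived figures used in the thesis.

```python
# Minimal sketch of bottom-up archetype aggregation for residential
# heating demand. Every number here is an invented placeholder, not
# data from the thesis or the BER survey.

from dataclasses import dataclass


@dataclass
class Archetype:
    name: str
    dwellings: int           # number of dwellings this archetype represents
    unit_demand_kwh: float   # annual space + water heating demand per dwelling
    retrofit_saving: float   # fraction of unit demand a retrofit could save


def stock_demand_gwh(archetypes):
    """Total annual heating demand of the stock, in GWh."""
    return sum(a.dwellings * a.unit_demand_kwh for a in archetypes) / 1e6


def retrofit_potential_gwh(archetypes):
    """Technical savings potential if every dwelling were retrofitted."""
    return sum(a.dwellings * a.unit_demand_kwh * a.retrofit_saving
               for a in archetypes) / 1e6


# Hypothetical three-archetype stock, purely for illustration.
stock = [
    Archetype("pre-1980 detached", 400_000, 25_000.0, 0.45),
    Archetype("1980-2005 semi-detached", 450_000, 18_000.0, 0.30),
    Archetype("post-2005 apartment", 150_000, 9_000.0, 0.10),
]

print(f"Stock demand: {stock_demand_gwh(stock):.0f} GWh")
print(f"Retrofit potential: {retrofit_potential_gwh(stock):.0f} GWh")
```

A real archetype model would draw the archetype definitions and unit demands from the BER dataset and apply measure-specific saving factors, but the aggregation step has this basic shape.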
Resumo:
Due to growing concerns regarding anthropogenic interference with the climate system, countries across the world are being challenged to develop effective strategies to mitigate climate change by reducing or preventing greenhouse gas (GHG) emissions. The European Union (EU) is committed to contributing to this challenge by setting a number of climate and energy targets for the years 2020, 2030 and 2050 and then agreeing effort sharing amongst Member States. This thesis focuses on one Member State, Ireland, which faces specific challenges and is not on track to meet the targets agreed to date. Before this work commenced, there were no projections of energy demand or supply for Ireland beyond 2020. This thesis uses techno-economic energy modelling instruments to address this knowledge gap. It builds and compares robust, comprehensive policy scenarios, providing a means of assessing the implications of different future energy and emissions pathways for the Irish economy, Ireland’s energy mix and the environment. A central focus of this thesis is to explore the dynamics of the energy system as it moves towards a low carbon economy. The thesis develops an energy systems model (the Irish TIMES model) to assess the implications of a range of energy and climate policy targets and target years. It also compares the results generated from the least-cost scenarios with official projections and target pathways, and provides useful metrics and indicators to identify key drivers and to support both policy makers and stakeholders in identifying cost-optimal strategies. The thesis also extends the functionality of energy systems modelling by developing and applying new methodologies that provide additional insights into particular issues emerging from the scenario analysis carried out.
Firstly, the thesis develops a methodology for soft-linking an energy systems model (Irish TIMES) with a power systems model (PLEXOS) to improve the interpretation of the electricity sector results in the energy systems model. The soft-linking enables higher temporal resolution and improved characterisation of power plants and power system operation. Secondly, the thesis develops a methodology for the integration of agriculture and energy systems modelling to enable coherent economy-wide climate mitigation scenario analysis. This provides a very useful starting point for considering the trade-offs between the energy system and agriculture in the context of a low carbon economy and for enabling analysis of land-use competition. Three time-scale perspectives are examined in this thesis (2020, 2030, 2050), aligning with key policy target horizons. The results indicate that Ireland’s short-term mandatory emissions reduction target will not be achieved without a significant reassessment of renewable energy policy, and that the current dominant policy focus on wind-generated electricity is misplaced. In the medium to long term, the results suggest that energy efficiency is the first cost-effective measure to deliver emissions reduction; that biomass and biofuels are likely to be the most significant fuel source for Ireland in a low carbon future, prompting the need for a detailed assessment of possible implications for sustainability and competition with the agri-food sectors; that significant changes are required in infrastructure to deliver deep emissions reductions (to enable the electrification of heat and transport, to accommodate carbon capture and storage (CCS) facilities, and for biofuels); and that competition between energy and agriculture for land use will become a key issue. The purpose of this thesis is to strengthen the evidence base underpinning energy and climate policy decisions in Ireland. The methodology is replicable in other Member States.
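The soft-linking idea described above, passing an annual least-cost capacity mix from the energy systems model down to a finer-resolution dispatch check, can be sketched schematically. The technology names, capacities and demand profile below are invented for illustration; the real workflow exchanges scenario files between TIMES and PLEXOS rather than calling a toy function.

```python
# Schematic soft-link: take an annual capacity mix (as an energy systems
# model might report it) and test it against a short hourly demand profile
# with a toy merit-order dispatch. All figures are invented placeholders.

def dispatch(capacity_mw, hourly_demand_mw, merit_order):
    """Greedy merit-order dispatch; returns total unserved energy (MWh)."""
    unserved = 0.0
    for demand in hourly_demand_mw:
        remaining = demand
        for tech in merit_order:
            used = min(remaining, capacity_mw.get(tech, 0.0))
            remaining -= used
        unserved += max(remaining, 0.0)
    return unserved


# Hypothetical capacity mix handed down from the energy systems model.
capacity = {"wind": 3000.0, "gas": 4000.0, "peat": 300.0}
merit = ["wind", "gas", "peat"]

# A toy four-hour demand profile; a real soft-link uses 8760 hourly values.
demand_profile = [5000.0, 6500.0, 7400.0, 4200.0]

shortfall = dispatch(capacity, demand_profile, merit)
print(f"Unserved energy: {shortfall:.0f} MWh")
```

Any shortfall or infeasibility detected at the hourly level would be fed back to constrain the next energy systems model run; that feedback loop, together with realistic plant characteristics (availability, ramping, reserve), is what the full TIMES-PLEXOS methodology provides and this sketch deliberately omits.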