940 results for future energy scenario


Relevance:

30.00%

Publisher:

Abstract:

Among the different approaches to the construction of a fundamental quantum theory of gravity, the Asymptotic Safety scenario conjectures that quantum gravity can be defined within the framework of conventional quantum field theory, but only non-perturbatively. In this case its high energy behavior is controlled by a non-Gaussian fixed point of the renormalization group flow, such that its infinite cutoff limit can be taken in a well defined way. A theory of this kind is referred to as non-perturbatively renormalizable. In the last decade a considerable amount of evidence has been collected that in four dimensional metric gravity such a fixed point, suitable for the Asymptotic Safety construction, indeed exists. This thesis extends the Asymptotic Safety program of quantum gravity by three independent studies that differ in the fundamental field variables the investigated quantum theory is based on, but all exhibit a gauge group of the same semi-direct product structure. This allows, for the first time, a direct comparison of three asymptotically safe theories of gravity constructed from different field variables. The first study investigates metric gravity coupled to SU(N) Yang-Mills theory. In particular, the gravitational corrections to the running of the gauge coupling are analyzed and their implications for QED and the Standard Model are discussed. The second analysis is the first investigation of an asymptotically safe theory of gravity in a pure tetrad formulation. Its renormalization group flow is compared to the corresponding approximation of the metric theory, and the influence of its enlarged gauge group on the UV behavior of the theory is analyzed. The third study explores Asymptotic Safety of gravity in the Einstein-Cartan setting. Here, besides the tetrad, the spin connection is considered a second fundamental field.
The larger number of independent field components and the enlarged gauge group render any RG analysis of this system much more difficult than the analogous metric analysis. In order to reduce the complexity of this task, a novel functional renormalization group equation is proposed that allows for an evaluation of the flow in a purely algebraic manner. As a first test of its suitability it is applied to a three-dimensional truncation of the form of the Holst action, with the Newton constant, the cosmological constant and the Immirzi parameter as its running couplings. A detailed comparison of the resulting renormalization group flow with a previous study of the same system demonstrates the reliability of the new equation and suggests its use for future studies of extended truncations in this framework.
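For orientation (these formulas are standard in the field, not quoted from the thesis, and sign/normalization conventions vary), such studies are built on the exact flow equation for the effective average action, and the truncation of the third study is of Holst type:

```latex
% Functional renormalization group (Wetterich) equation for \Gamma_k:
\partial_t \Gamma_k \;=\; \tfrac{1}{2}\,\mathrm{STr}\!\left[\left(\Gamma_k^{(2)} + \mathcal{R}_k\right)^{-1}\partial_t \mathcal{R}_k\right],
\qquad t \equiv \ln k ,

% Schematic Holst-type truncation in tetrad e and spin connection \omega,
% with Newton constant G, cosmological constant \Lambda and Immirzi parameter \gamma:
S_{\mathrm{Ho}}[e,\omega] \;=\; -\frac{1}{16\pi G}\int\! d^4x\; e\, e^{\mu}_{a} e^{\nu}_{b}
\left( F^{ab}{}_{\mu\nu} \;-\; \frac{1}{2\gamma}\,\varepsilon^{ab}{}_{cd}\, F^{cd}{}_{\mu\nu} \right)
\;+\; \frac{\Lambda}{8\pi G}\int\! d^4x\; e .
```

Here F denotes the curvature of the spin connection; the three couplings G, Λ and γ are the running couplings mentioned above.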

Relevance:

30.00%

Publisher:

Abstract:

The rapid development in the field of lighting and illumination has enabled low energy consumption and driven rapid growth in the use and development of solid-state sources. As the efficiency of these devices increases and their cost decreases, there are predictions that they will become the dominant source for general illumination in the short term. The objective of this thesis is to study, through extensive simulations in realistic scenarios, the feasibility and exploitation of visible light communication (VLC) for vehicular ad hoc network (VANET) applications. A brief introduction presents the new scenario of smart cities, in which visible light communication will become a fundamental enabling technology for future communication systems. Specifically, this thesis focuses on the acquisition of several, frequent, and small data packets from vehicles, exploited as sensors of the environment. The use of vehicles as sensors is a new paradigm to enable efficient environmental monitoring and improved traffic management. In most cases, the sensed information must be collected at a remote control centre, and one of the most challenging aspects is the uplink acquisition of data from vehicles. This thesis discusses the opportunity to take advantage of short range vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications to offload the cellular networks. More specifically, it discusses the system design and assesses the obtainable cellular resource saving, considering the impact of the percentage of vehicles equipped with short range communication devices, of the number of deployed road side units, and of the adopted routing protocol. For short range communications, WAVE/IEEE 802.11p is considered as the standard for VANETs. Its use together with VLC is considered in urban vehicular scenarios to let vehicles communicate without involving the cellular network.
The study is conducted by simulation, considering both a simulation platform (SHINE, simulation platform for heterogeneous interworking networks) developed within the Wireless Communication Laboratory (WiLab) of the University of Bologna and CNR, and the network simulator ns-3, aiming to realistically represent all aspects of wireless network communication. Specifically, the vehicular system was simulated in ns-3 by creating a new module for the simulator. This module will help to study VLC applications in VANETs. The final observations should encourage further research in the area and help optimize the performance of VLC system applications in the future.
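To illustrate the physical layer underlying such V2V optical links, the line-of-sight received power of a Lambertian LED transmitter is commonly modeled via the channel DC gain. The sketch below is a minimal, generic VLC link-budget example; the detector area, semi-angle, field of view and transmit power are assumed values, not parameters from the thesis:

```python
import math

def lambertian_order(semi_angle_deg):
    """Lambertian mode number m from the LED half-power semi-angle."""
    return -math.log(2) / math.log(math.cos(math.radians(semi_angle_deg)))

def vlc_dc_gain(d, phi_deg, psi_deg, area=1e-4, semi_angle_deg=60.0, fov_deg=70.0):
    """Line-of-sight channel DC gain of a Lambertian LED -> photodiode link.

    d: distance (m), phi: emission angle, psi: incidence angle.
    """
    if psi_deg > fov_deg:
        return 0.0  # receiver outside its field of view sees nothing
    m = lambertian_order(semi_angle_deg)
    return ((m + 1) * area / (2 * math.pi * d ** 2)
            * math.cos(math.radians(phi_deg)) ** m
            * math.cos(math.radians(psi_deg)))

# Received power falls off with the square of the inter-vehicle distance.
p_tx = 1.0  # W, assumed optical transmit power of a headlamp/taillight
for d in (5.0, 10.0, 20.0):
    print(d, p_tx * vlc_dc_gain(d, phi_deg=0.0, psi_deg=0.0))
```

Doubling the distance quarters the received power, which is why the abstract's short-range V2V/V2R multi-hop relaying matters for uplink offloading.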

Relevance:

30.00%

Publisher:

Abstract:

This thesis work was carried out during an Erasmus exchange period at the Université Paris 6 – Pierre et Marie Curie, in the “Edifices PolyMétalliques – EPOM” team led by Prof. Anna Proust, belonging to the Institut Parisien de Chimie Moléculaire, under the supervision of Dr. Guillaume Izzet and Dr. Geoffroy Guillemot. The redox properties of functionalized Keggin and Dawson POMs have been exploited in photochemical, catalytic and reactivity tests. For the photochemical purposes, the selected POMs were functionalized with different photoactive functional groups, and the resulting products were characterized by CV analyses, luminescence tests and UV-Vis analyses. In the future, these materials will be tested for hydrogen photoproduction and for the polymerization of photoactive films. For the catalytic purposes, POMs were first functionalized with silanol moieties, to obtain original coordination sites, and then post-functionalized with transition metals (TMs) such as V, Ti and Zr in their highest oxidation states. In this way, the catalytic properties of the TMs were coupled to the redox properties of the POM frameworks. The redox behavior of some of these hybrids was studied by spectro-electrochemical and EPR methods. Catalytic epoxidation tests were carried out on allylic alcohols and n-olefins, employing different catalysts in variable amounts. The performances of the POM-V hybrids were compared to those of VO(iPrO)3. Finally, the reactivity of POM-VIII hybrids was studied, using styrene oxide and ethyl 2-diazoacetate as substrates. All the obtained products were analyzed via NMR techniques. Cyclic voltammetry analyses were carried out in order to determine the redox behavior of selected hybrids.

Relevance:

30.00%

Publisher:

Abstract:

The development of next generation microwave technology for backhauling systems is driven by an increasing capacity demand. In order to provide higher data rates and throughputs over a point-to-point link, a cost-effective performance improvement is enabled by an enhanced energy efficiency of the transmit power amplification stage, whereas a combination of spectrally efficient modulation formats and wider bandwidths is supported by amplifiers that fulfil strict constraints in terms of linearity. An optimal trade-off between these conflicting requirements can be achieved by resorting to flexible digital signal processing techniques at baseband. In such a scenario, adaptive digital pre-distortion is a well-known linearization method and a potentially widespread solution, since it can be easily integrated into base stations. Its operation can effectively compensate for the inter-modulation distortion introduced by the power amplifier, keeping up with the frequency-dependent, time-varying behaviour of the amplifier's nonlinear characteristic. In particular, the impact of memory effects becomes more relevant, and their equalisation more challenging, as the discrete input signal features a wider bandwidth and a faster envelope to pre-distort. This thesis project involves the research, design and simulation of a pre-distorter implementation at RTL based on a novel polyphase architecture, which makes it capable of operating over very wideband signals at a sampling rate that complies with the clock speeds actually available in current digital devices. The motivation behind this structure is to carry out feasible pre-distortion of the multi-band, spectrally efficient complex signals carrying multiple channels that will be transmitted in near-future high-capacity, high-reliability microwave backhaul links.
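A widely used baseband model for digital pre-distortion with memory is the memory polynomial. The sketch below is a generic, illustrative implementation of that model only; the coefficient values and dimensions are placeholders, and it does not reproduce the thesis's adaptive polyphase RTL architecture:

```python
def mp_output(x, coeffs, K, M):
    """Memory-polynomial output on a complex baseband sequence x:

        y[n] = sum_{k=0..K-1} sum_{m=0..M-1} coeffs[k][m] * x[n-m] * |x[n-m]|**k

    K is the nonlinearity order, M the memory depth (in samples).
    """
    y = []
    for n in range(len(x)):
        acc = 0.0 + 0.0j
        for m in range(M):
            if n - m < 0:
                continue  # no samples before the start of the sequence
            xm = x[n - m]
            for k in range(K):
                acc += coeffs[k][m] * xm * abs(xm) ** k
        y.append(acc)
    return y
```

In an indirect-learning pre-distorter the coefficients are fit so that the cascade pre-distorter -> amplifier is linear; a polyphase realization splits this computation across parallel branches so each branch runs at a fraction of the signal sampling rate.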

Relevance:

30.00%

Publisher:

Abstract:

This thesis highlights some of the issues facing exascale machines (systems delivering one exaflops of computing power) and the evolution of the software that will run on them, focusing primarily on the need for their development, since they are indispensable for studying scientific and technological problems of ever larger size. Particular attention is paid to materials science, one of the fields that has advanced most thanks to supercomputers, and to one of the most widely used HPC codes in this context: Quantum ESPRESSO. On the software side, the first energy-efficiency measurements of Quantum ESPRESSO on a hybrid architecture are presented, obtained on the EURORA prototype cluster. These are the first such measurements to be published for materials-science software and will serve as a baseline for future energy-efficiency optimizations. Indeed, on exascale machines one of the access requirements will be energy efficiency, just as code scalability is a requirement today. Another very important aspect of exascale machines is the reduction of the number of communications, which lowers the energy cost of a parallel algorithm, since on these new systems moving data will cost more energy than computing on it. For this reason, this work presents a strategy, and its implementation, to increase data locality in one of the most computationally expensive algorithms in Quantum ESPRESSO: the Fast Fourier Transform (FFT). To bring current software to an exascale machine, one must start testing the robustness of that software and its workflows on test cases that maximally stress the machines available today.
In this thesis, to test the workflow of Quantum ESPRESSO and WanT, a transport-calculation code, a scientifically relevant system was characterized: a PDI-FCN2 crystal used in the fabrication of organic field-effect transistors (OFETs). Finally, an ideal device consisting of two gold electrodes with a single organic molecule between them was simulated.
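To make the communication-cost argument concrete: in a distributed 3D FFT, each transposition is an all-to-all exchange in which every rank ships almost all of its share of the grid. The back-of-the-envelope sketch below (assumed complex-double elements and an idealized slab decomposition; this is not code from Quantum ESPRESSO) estimates the per-rank transpose volume:

```python
def transpose_volume_per_rank(n, p, bytes_per_elem=16):
    """Bytes each rank sends in one all-to-all transpose of an n^3 grid
    of complex doubles distributed over p ranks (each rank keeps 1/p of
    its own slab and ships the remaining (p-1)/p)."""
    elems = n ** 3 / p              # grid elements owned by one rank
    sent = elems * (p - 1) / p      # fraction that must move
    return sent * bytes_per_elem

# A single transpose of a 128^3 grid on 64 ranks already moves ~0.5 MB
# per rank; increasing data locality (e.g. grouping FFTs so transposes
# happen inside smaller rank subsets) shrinks both message count and
# the energy spent on data movement.
print(transpose_volume_per_rank(128, 64))
```

This is why, on machines where moving a word costs more energy than computing on it, reorganizing the FFT for locality pays off directly in the energy bill.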

Relevance:

30.00%

Publisher:

Abstract:

In the year 2013, the detection of a diffuse astrophysical neutrino flux with the IceCube neutrino telescope, constructed at the geographic South Pole, was announced by the IceCube collaboration. However, the origin of these neutrinos is still unknown, as no sources have been identified to this day. Promising neutrino source candidates are blazars, a subclass of active galactic nuclei with radio jets pointing towards the Earth. In this thesis, the neutrino flux from blazars is tested with a maximum likelihood stacking approach, analyzing the combined emission from uniform groups of objects. The stacking enhances the sensitivity with respect to the so-far unsuccessful single-source searches. The analysis utilizes four years of IceCube data, including one year from the completed detector. As all results presented in this work are compatible with background, upper limits on the neutrino flux are given. It is shown that, under certain conditions, some hadronic blazar models can be challenged or even rejected. Moreover, the sensitivity of this analysis, and of any other future IceCube point source search, was enhanced by the development of a new angular reconstruction method. It is based on a detailed simulation of the photon propagation in the Antarctic ice. The median resolution for muon tracks induced by high-energy neutrinos is improved for all neutrino energies above IceCube's lower threshold of 0.1 TeV. By reprocessing the detector data and simulation from the year 2010, it is shown that the new method improves IceCube's discovery potential by 20% to 30%, depending on the declination.
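In standard point-source notation (a schematic form, not quoted from the thesis; conventions vary), the stacking analysis fits the number of signal events n_s in an unbinned likelihood of the form

```latex
\mathcal{L}(n_s)
  \;=\; \prod_{i=1}^{N}\left[\frac{n_s}{N}\,\mathcal{S}_i
        \;+\; \left(1-\frac{n_s}{N}\right)\mathcal{B}_i\right],
\qquad
\mathcal{S}_i \;=\; \frac{\sum_{j} w_j\, S_i(\vec{x}_j)}{\sum_{j} w_j},
```

where the product runs over the N observed events, S_i and B_i are the signal and background probability densities for event i, and the stacked signal term sums the per-source densities S_i(x_j) over the catalog with model-dependent weights w_j. Upper limits follow when the best-fit n_s is compatible with zero.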

Relevance:

30.00%

Publisher:

Abstract:

Graphene, the thinnest two-dimensional material possible, is considered a realistic candidate for numerous applications in electronic, energy storage and conversion devices due to its unique properties, such as high optical transmittance, high conductivity, and excellent chemical and thermal stability. However, the electronic and chemical properties of graphene depend strongly on its preparation method. Therefore, the development of novel chemical exfoliation processes aiming at high-yield synthesis of high-quality graphene, while maintaining good solution processability, is of great interest. This thesis focuses on the solution production of high-quality graphene by wet-chemical exfoliation methods and addresses the applications of the chemically exfoliated graphene in organic electronics and energy storage devices.

Platinum is the most commonly used catalyst for fuel cells, but it suffers from sluggish electron transfer kinetics. On the other hand, heteroatom-doped graphene is known to enhance not only electrical conductivity but also long-term operational stability. In this regard, a simple synthetic method is developed for the preparation of nitrogen-doped graphene (NG). Moreover, iron (Fe) can be incorporated into the synthetic process. As-prepared NG, with and without Fe, shows excellent catalytic activity and stability compared to that of Pt-based catalysts.

High electrical conductivity is one of the most important requirements for the application of graphene in electronic devices. Therefore, for the fabrication of electrically conductive graphene films, a novel methane plasma assisted reduction of GO is developed. The high electrical conductivity of the plasma-reduced GO films yields excellent electrochemical performance in terms of high power and energy densities when used as an electrode in micro-supercapacitors.

Although GO can be prepared at bulk scale, its high defect density and low electrical conductivity are major drawbacks. To overcome the intrinsic quality limitations of GO and reduced GO, a novel protocol is established for the mass production of high-quality graphene by means of electrochemical exfoliation of graphite. The prepared graphene shows high electrical conductivity, low defect density and good solution processability. Furthermore, when used as electrodes in organic field-effect transistors and/or in supercapacitors, the electrochemically exfoliated graphene shows excellent device performance. The low-cost and environmentally friendly production of such high-quality graphene is of great importance for future generations of electronics and energy storage devices.

Relevance:

30.00%

Publisher:

Abstract:

Hybrid electrode materials (HEM) are key to fundamental advances in energy storage and energy conversion systems, including lithium-ion batteries (LIBs), supercapacitors (SCs) and fuel cells (FCs). Graphene's fascinating properties make it a good starting material for the preparation of HEM. However, traditional methods for producing graphene HEM (GHEM) frequently fail for lack of control over the morphology and its uniformity, leading to insufficient interfacial interactions and poor material performance. This work focuses on the preparation of GHEM via controlled synthesis methods and addresses the use of well-defined GHEM for energy storage and conversion. Large volume expansion is the main drawback of prospective lithium-storage materials. First, a three-dimensional graphene foam hybrid is prepared to reinforce the basic structure and improve the electrochemical performance of the Fe3O4 anode material. The use of graphene shells and graphene networks thereby provides double protection against the volume changes of Fe3O4 during the electrochemical process. The performance of SCs and FCs depends on the pore structure and the accessible surface area, or the catalytic sites, of the electrode materials. We show that controlling the porosity via graphene-based carbon nanosheets (HPCN) increases the accessible surface area and the ion transport/charge storage for SC applications. Furthermore, nitrogen-doped carbon nanosheets (NDCN) were prepared for the cathodic oxygen reduction reaction (ORR). Tailored mesoporosity combined with heteroatom doping (nitrogen) promotes the exposure of the active sites and the ORR performance of the metal-free catalysts.
High-quality electrochemically exfoliated graphene (EEG) is a promising candidate for the preparation of GHEM. However, the controlled preparation of EEG hybrids remains a major challenge. Finally, a bottom-up strategy is presented for the preparation of EEG sheets with a series of functional nanoparticles (Si, Fe3O4 and Pt NPs). This work demonstrates a promising route for the economical synthesis of EEG and EEG-based materials.

Relevance:

30.00%

Publisher:

Abstract:

Motivation: Thanks to a scholarship offered by Alma Mater Studiorum, I spent six months in Denmark performing physical tests on the Gyro PTO device at the Department of Civil Engineering of Aalborg University. Aim: The goal of this thesis is a hydraulic evaluation of the Gyro PTO, a gyroscopic device for converting the mechanical energy of ocean surface waves into electrical energy. The principle of the system is the application of the gyroscopic moment of flywheels mounted on a swinging float excited by waves. The laboratory activities were carried out by Morten Kramer, Jan Olsen, Irene Guaraldi, Morten Thøtt and Nikolaj Holk. The main purpose of the tests was to investigate the power absorption performance in irregular waves, but testing also included performance measurements in regular waves and simple tests to characterize the device, facilitating future numerical simulations and optimizations. Methodology: To generate the waves and measure the performance of the device, a workstation was set up in the laboratory, consisting of four computers, each running a different program. The programs used were AwaSys6, LabVIEW, WaveLab, Motive OptiTrack, Matlab and AutoCAD. Main results: From the data obtained in the tank tests, the wave analysis was carried out. The significant wave height and period were obtained through a Matlab script, and from them the power produced and the energy efficiency of the device for two types of waves, regular and irregular. We also obtained results such as physical size, weight, moments of inertia, hydrostatics, eigenperiods, mooring stiffness, friction, hydrodynamic coefficients, etc. After determining the significant parameters of the laboratory prototype, the results were scaled up for two future applications: one in Nissum Bredning and one in the North Sea.
Conclusions: The main conclusion of the testing is that more focus should be put on ensuring a stable and positive power output in a variety of wave conditions. In the irregular waves the power production was negative, and therefore it does not make sense to scale up those results directly. The average measured capture width in the regular waves was 0.21 m. As the device width is 0.63 m, this corresponds to a capture width ratio of 0.21/0.63 × 100 ≈ 33%. Assuming that the device can be made to produce equally well in irregular waves under any wave conditions, and further assuming that the yearly absorbed energy can be converted into electricity at a PTO efficiency of 90%, the results in the table are found: a Nissum Bredning device would produce 0.87 MWh/year and a North Sea device 85 MWh/year.
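The capture-width arithmetic quoted above can be reproduced directly. A trivial sketch; the 90% PTO efficiency is the assumption stated in the conclusions, and the function names are ours:

```python
def capture_width_ratio(capture_width_m, device_width_m):
    """Fraction of the incident wave-front power absorbed by the device."""
    return capture_width_m / device_width_m

def annual_electricity(absorbed_mwh_per_year, pto_efficiency=0.90):
    """Electrical yield under the assumed PTO conversion efficiency."""
    return absorbed_mwh_per_year * pto_efficiency

cwr = capture_width_ratio(0.21, 0.63)
print(f"capture width ratio: {cwr:.0%}")
```

The same two-step scaling (absorbed wave power x PTO efficiency) is what turns the tank-test measurements into the Nissum Bredning and North Sea yearly figures.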

Relevance:

30.00%

Publisher:

Abstract:

Solid-state shear pulverization (SSSP) is a unique processing technique for mechanochemical modification of polymers, compatibilization of polymer blends, and exfoliation and dispersion of fillers in polymer nanocomposites. A systematic parametric study of the SSSP technique is conducted to elucidate the detailed mechanism of the process and establish the basis for a range of current and future operation scenarios. Using neat, single component polypropylene (PP) as the model material, we varied machine type, screw design, and feed rate to achieve a range of shear and compression applied to the material, which can be quantified through specific energy input (Ep). As a universal processing variable, Ep reflects the level of chain scission occurring in the material, which correlates well to the extent of the physical property changes of the processed PP. Additionally, we compared the operating cost estimates of SSSP and conventional twin screw extrusion to determine the practical viability of SSSP.
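Specific energy input is commonly estimated as the net mechanical power absorbed by the material divided by the mass throughput. The sketch below follows that common definition under our own assumptions (the numbers are placeholders, not data from the study):

```python
def specific_energy_input(motor_power_kw, idle_power_kw, feed_rate_kg_per_h):
    """Ep in kWh/kg: net mechanical power delivered to the polymer,
    normalized by the mass fed through the pulverizer per hour."""
    return (motor_power_kw - idle_power_kw) / feed_rate_kg_per_h

# Halving the feed rate at fixed net power doubles Ep: each gram of PP
# absorbs more shear/compression work, so more chain scission occurs.
print(specific_energy_input(5.0, 1.0, 2.0))  # kWh/kg, illustrative values
```

This is the sense in which Ep acts as the "universal processing variable" of the parametric study: machine type, screw design and feed rate all collapse onto a single energy-per-mass axis.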


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: In order to optimise the cost-effectiveness of active surveillance to substantiate freedom from disease, a new approach using targeted sampling of farms was developed and applied to the examples of infectious bovine rhinotracheitis (IBR) and enzootic bovine leucosis (EBL) in Switzerland. Relevant risk factors (RF) for the introduction of IBR and EBL into Swiss cattle farms were identified and their relative risks defined based on literature review and expert opinion. A quantitative model based on the scenario tree method was subsequently used to calculate the required sample size of a targeted sampling approach (TS) for a given sensitivity. We compared this sample size with that of a stratified random sample (sRS) with regard to efficiency. RESULTS: The required sample sizes to substantiate disease freedom were 1,241 farms for IBR and 1,750 farms for EBL to detect 0.2% herd prevalence with 99% sensitivity. Using conventional sRS, the required sample sizes were 2,259 farms for IBR and 2,243 for EBL. Considering the additional administrative expenses required for the planning of TS, the risk-based approach was still more cost-effective than a sRS (40% reduction on the full survey costs for IBR and 8% for EBL) due to the considerable reduction in sample size. CONCLUSIONS: As the model depends on RF selected through literature review and was parameterised with values estimated by experts, it is subject to some degree of uncertainty. Nevertheless, this approach provides the veterinary authorities with a promising tool for future cost-effective sampling designs.
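For comparison, the quoted stratified-random-sample sizes are close to the textbook freedom-from-disease approximation for an effectively infinite population. A sketch under simplifying assumptions (perfect test specificity, no clustering; the paper's scenario-tree model is more elaborate):

```python
import math

def freedom_sample_size(design_prevalence, herd_sensitivity, confidence=0.99):
    """Approximate number of herds to test so that finding no positives
    demonstrates prevalence < design_prevalence with the given confidence,
    assuming an effectively infinite population and perfect specificity."""
    p_detect = design_prevalence * herd_sensitivity  # chance one tested herd flags
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_detect))

# 0.2% design prevalence, 99% confidence -> roughly the sRS figures above.
print(freedom_sample_size(0.002, 1.0))
```

Targeted sampling beats this figure by oversampling high-risk herds: the risk-weighted detection probability per sampled herd rises, so fewer herds are needed for the same 99% sensitivity.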

Relevance:

30.00%

Publisher:

Abstract:

Potential future changes in tropical cyclone (TC) characteristics are among the more serious regional threats of global climate change. Therefore, a better understanding is required of how anthropogenic climate change may affect TCs and how these changes translate into socio-economic impacts. Here, we apply a TC detection and tracking method that was developed for ERA-40 data to time-slice experiments of two atmospheric general circulation models, namely the fifth version of the European Centre Hamburg model (MPI, Hamburg, Germany, T213) and the Japan Meteorological Agency/Meteorological Research Institute model (MRI, Tsukuba city, Japan, TL959). For each model, two climate simulations are available: a control simulation for present-day conditions to evaluate the model against observations, and a scenario simulation to assess future changes. The evaluation of the control simulations shows that the number of intense storms is underestimated due to the model resolution. To overcome this deficiency, simulated cyclone intensities are scaled to the best track data, leading to a better representation of the TC intensities. Both models project an increased number of major hurricanes and modified trajectories in their scenario simulations. These changes affect the projected loss potentials. However, these state-of-the-art models still yield contradicting results, and therefore they are not yet suitable to provide robust estimates of losses, due to uncertainties in simulated hurricane intensity, location and frequency.
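The intensity-scaling step can be illustrated with a generic moment-matching rescale (our own minimal sketch; the paper's actual calibration against best-track data may use a different functional form):

```python
def scale_intensities(simulated, obs_mean, obs_std):
    """Linearly rescale simulated maximum winds so that their mean and
    standard deviation match the observed (best-track) statistics,
    stretching the under-resolved tail toward realistic intensities."""
    n = len(simulated)
    mu = sum(simulated) / n
    sd = (sum((v - mu) ** 2 for v in simulated) / n) ** 0.5
    return [obs_mean + (v - mu) * obs_std / sd for v in simulated]

# Coarse-resolution models compress the intensity distribution; the
# rescale widens it so major-hurricane counts become comparable.
print(scale_intensities([30.0, 40.0, 50.0], obs_mean=60.0, obs_std=15.0))
```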

Relevance:

30.00%

Publisher:

Abstract:

The 1s-2s interval has been measured in the muonium (μ⁺e⁻) atom by Doppler-free two-photon pulsed laser spectroscopy. The frequency separation of the states was determined to be 2 455 528 941.0(9.8) MHz, in good agreement with quantum electrodynamics. The result may be interpreted as a measurement of the muon-electron charge ratio as −1 − 1.1(2.1) × 10⁻⁹. We expect significantly higher accuracy at future high-flux muon sources and from cw laser technology.

Relevance:

30.00%

Publisher:

Abstract:

The past decade has seen the energy consumption in servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of reducing both the energy consumption of server systems and the cost for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30% of the total power consumption. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for more energy savings. Meanwhile, corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs to conserve and manage their own electricity costs and lower their carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy efficient strategies are applied to the server system and the cooling system respectively.
With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, and server reliability as well as energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
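The energy-proportionality argument can be illustrated with the common linear server power model (P_idle and P_peak below are assumed values, and this sketch is not the dissertation's constrained Markov model):

```python
def cluster_power(active_servers, utilization, p_idle=100.0, p_peak=250.0):
    """Power draw (W) of a homogeneous cluster under a linear utilization
    model; servers not in the active set are switched off (VOVF)."""
    per_server = p_idle + (p_peak - p_idle) * utilization
    return active_servers * per_server

def servers_needed(total_load, per_server_capacity):
    """Smallest number of powered-on servers that can carry the load."""
    return max(1, -(-total_load // per_server_capacity))  # ceiling division

# Because real servers burn P_idle even when doing nothing, consolidating
# load onto fewer machines (and turning the rest off) beats spreading it:
spread = cluster_power(10, 0.2)   # 10 servers each at 20% utilization
packed = cluster_power(2, 1.0)    # the same total work on 2 busy servers
print(spread, packed)
```

DV/FS then attacks the remaining gap: lowering frequency/voltage on the powered-on servers reduces P_peak for workloads that do not need full speed, at the price of the transition overheads the abstract mentions.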