949 results for POWER DENSITY


Relevance: 30.00%

Abstract:

In the present paper we numerically study the instrumental impact on the statistical properties of a quasi-CW Raman fiber laser using a simple model of multimode laser radiation. The effects with the most influence are the limited electrical bandwidth of the measurement equipment and noise. To check this influence, we developed a simple model of multimode quasi-CW generation with exponential statistics (i.e. uncorrelated modes). We found that the area near zero intensity in the probability density function (PDF) is strongly affected by both factors; for example, both lead to the formation of a negative wing in the intensity distribution. The far-wing slope of the PDF, however, is not affected by noise and, for a moderate mismatch between the optical and electrical bandwidths, is only slightly affected by the bandwidth limitation. In experiments the generation spectrum often broadens at higher power, so the spectral/electrical bandwidth mismatch grows with power, which can lead to an artificial dependence of the PDF slope on power. We also found that both effects influence the ACF background level: noise decreases it, while the limited bandwidth increases it. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
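The two measurement effects discussed above can be reproduced with a few lines of NumPy. The sketch below uses illustrative parameters only (an 8-sample moving average standing in for the limited electrical bandwidth, Gaussian noise with standard deviation 0.1 of the mean intensity): additive noise creates a negative wing, while bandwidth limitation depletes the PDF near zero intensity.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Many uncorrelated modes with random phases sum to a complex Gaussian
# field, so the ideal intensity I = |E|^2 is exponentially distributed.
field = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
intensity = np.abs(field) ** 2            # mean ~ 1, PDF ~ exp(-I)

# Limited electrical bandwidth, modelled crudely as a moving average
# (a stand-in for the measurement chain's low-pass response).
kernel = np.ones(8) / 8
filtered = np.convolve(intensity, kernel, mode="same")

# Additive detector/amplifier noise.
noisy = intensity + rng.normal(0.0, 0.1, size=n)

frac_near_zero = (intensity < 0.05).mean()
frac_near_zero_filtered = (filtered < 0.05).mean()
print(frac_near_zero, frac_near_zero_filtered, noisy.min())
```

Plotting histograms of `intensity`, `filtered`, and `noisy` reproduces the qualitative PDF distortions described in the abstract: the noisy trace extends below zero, and the filtered trace has almost no samples near zero.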

Relevance: 30.00%

Abstract:

This thesis describes an investigation of the effects of ocular supplements with different levels of nutrients on the macular pigment optical density (MPOD) in participants with healthy eyes. A review of the literature highlighted that ocular supplements are produced in various combinations of nutrients and concentrations; the ideal concentrations of nutrients such as lutein (L) have not been established. It was unclear whether different stages of eye disease require different concentrations of key nutrients, which led to the design of this study. The primary aim was to determine the effects of ocular supplements with different concentrations of nutrients on the MPOD of healthy participants. The secondary aim was to determine L and zeaxanthin (Z) intake at the start and end of the study through completion of food diaries. The primary study was split into two experiments: experiment 1 was an exploratory study to determine sample size, and experiment 2 the main study. Statistical power was calculated and a sample size of 38 was specified. Block stratification for age, gender and smoking habit was applied, and of 101 volunteers, 42 completed the study, 31 with both sets of food diaries. Four confounders were accounted for in the design of the study: gender, smoking habit, age and diet. Further factors that could affect comparability of results between studies were identified during the study and were not monitored: ethnicity, gastro-intestinal health, alcohol intake, body mass index and genetics. Comparisons were made between the sample population and the general Sheffield population according to recent demographic results in the public domain. The food diaries were analysed and showed no statistically significant difference between baseline and final results; the average L and Z intake for the 31 participants who returned both sets of food diaries was 1.96 mg initially and 1.51 mg in the final diaries. The effect of the two ocular supplements with different levels of xanthophyll (6 mg lutein/zeaxanthin and 10 mg lutein only) on MPOD was not significantly different over a four-month period.
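For context on the sample-size statement above, a standard normal-approximation power calculation for comparing two group means runs as follows. The effect size and standard deviation are invented for illustration and are not the thesis's actual inputs.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample,
    two-sided comparison of means."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = z.inv_cdf(power)           # 0.84 for 80% power
    return 2 * ((z_a + z_b) * sd / delta) ** 2

# Illustrative numbers only: detect a 0.10 MPOD-unit difference
# assuming a between-subject standard deviation of 0.15.
n = n_per_group(delta=0.10, sd=0.15)
print(math.ceil(n))  # 36 per group, before allowing for dropout
```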

Relevance: 30.00%

Abstract:

For micro gas turbines (MGT) of around 1 kW or less, a commercially suitable recuperator must be used to achieve a thermal efficiency suitable for UK Domestic Combined Heat and Power (DCHP). This paper uses computational fluid dynamics (CFD) to investigate a recuperator design based on a helically coiled pipe-in-pipe heat exchanger which utilises industry-standard stock materials and manufacturing techniques. A suitable mesh strategy was established by geometrically modelling separate boundary layer volumes to satisfy y+ near-wall conditions; a higher mesh density was then used to resolve the core flow. A coiled pipe-in-pipe recuperator solution for a 1 kW MGT DCHP unit was established within a volume envelope suitable for a domestic wall-hung boiler. Using a low MGT pressure ratio (necessitated by the turbocharger oil-cooled journal bearing platform) meant the unit size was larger than anticipated; raising the MGT pressure ratio from 2.15 to 2.5 could significantly reduce recuperator volume. Dimensional reasoning confirmed the existence of optimum pipe diameter combinations for minimum pressure drop. Maximum heat exchanger effectiveness was achieved using an optimum (minimum pressure drop) pipe combination with a large pipe length, as opposed to a large pressure drop pipe combination with a shorter pipe length. © 2011 Elsevier Ltd. All rights reserved.
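The dimensional argument for an optimum pipe diameter combination can be illustrated with a Darcy-Weisbach estimate: shrinking the inner pipe raises its own pressure drop, while enlarging it chokes the annulus, so the total has an interior minimum. All fluid properties and dimensions below are assumed placeholders (inner pipe wall thickness is ignored), not the paper's design point.

```python
import numpy as np

rho, mu = 1.1, 2.0e-5          # air-like gas [kg/m^3], [Pa s] (assumed)
m_hot, m_cold = 0.010, 0.010   # mass flows [kg/s] (assumed)
L, D_outer = 2.0, 0.030        # pipe length [m], outer pipe ID [m] (assumed)

def dp_pipe(m, D, L):
    """Darcy-Weisbach pressure drop in a circular pipe, Blasius friction."""
    A = np.pi * D**2 / 4
    v = m / (rho * A)
    Re = rho * v * D / mu
    f = 0.316 * Re**-0.25
    return f * (L / D) * rho * v**2 / 2

def dp_annulus(m, D_in, D_out, L):
    """Same correlation with the annulus hydraulic diameter D_out - D_in."""
    A = np.pi * (D_out**2 - D_in**2) / 4
    Dh = D_out - D_in
    v = m / (rho * A)
    Re = rho * v * Dh / mu
    f = 0.316 * Re**-0.25
    return f * (L / Dh) * rho * v**2 / 2

D_inner = np.linspace(0.008, 0.026, 200)   # sweep the inner pipe diameter
total_dp = dp_pipe(m_hot, D_inner, L) + dp_annulus(m_cold, D_inner, D_outer, L)
i_opt = int(np.argmin(total_dp))
print(D_inner[i_opt], total_dp[i_opt])
```

The minimum falls strictly inside the sweep range, which is the qualitative result the paper's dimensional reasoning establishes.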

Relevance: 30.00%

Abstract:

The author models the carbon-dioxide emissions of a gas-fuelled power plant subject to compliance under the EU Emissions Trading System, using a real-option model on four underlying instruments (off-peak and peak electricity prices, the gas price, and the emission quota). The profit-maximizing plant operates, and emits, only when its margin on the energy produced is positive. Total emissions over a future period can therefore be valued as a sum of European binary spread options. Within the model, the expected value and the probability density function of emissions can be derived; from the latter, the Value at Risk of the emission-quota position can be determined, which gives the plant's cost of meeting its compliance obligation at a given confidence level. In the stochastic model the underlying instruments follow geometric Ornstein-Uhlenbeck processes, which the author fitted to publicly available price data from the German energy exchange (EEX). Based on the simulation model, the effects of ceteris paribus changes in various technological and market factors on the cost of compliance and its Value at Risk are analysed.
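A miniature Monte Carlo version of this valuation can be sketched as follows. The mean-reversion speeds, volatilities, heat rate, and emission factor are invented for illustration (the paper fits its parameters to EEX data), and the quota price and the peak/off-peak split are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_gou(x0, mu, theta, sigma, n_steps, n_paths, dt=1 / 365):
    """Geometric Ornstein-Uhlenbeck: the log-price mean-reverts."""
    x = np.full(n_paths, np.log(x0))
    out = np.empty((n_steps, n_paths))
    for t in range(n_steps):
        x += theta * (np.log(mu) - x) * dt \
             + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        out[t] = np.exp(x)
    return out

# Illustrative parameters, not fitted EEX values.
power = simulate_gou(60.0, 60.0, 5.0, 0.8, n_steps=365, n_paths=5000)
gas = simulate_gou(25.0, 25.0, 3.0, 0.5, n_steps=365, n_paths=5000)

heat_rate, emis_rate = 2.0, 0.4   # MWh_gas per MWh_el, tCO2 per MWh_el
capacity = 24.0                   # MWh_el per day

# The plant runs on days when the spark spread is positive, so daily
# emissions behave like a binary (digital) spread option payoff.
spread = power - heat_rate * gas
runs = spread > 0
emissions = emis_rate * capacity * runs.sum(axis=0)   # tCO2 per path

expected = emissions.mean()
var95 = np.quantile(emissions, 0.95)   # emission position at 95% confidence
print(expected, var95)
```

Multiplying the 95% emission quantile by a quota price would give the compliance cost at that confidence level, mirroring the Value-at-Risk construction in the abstract.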

Relevance: 30.00%

Abstract:

Over the past few decades, we have enjoyed tremendous benefits from the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly sophisticated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperature, which not only adversely impacts the system's cost, performance and reliability, but also increases the leakage and thus the overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow the continuous evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit level and the logic level to the architectural and system levels. In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained, power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization. The novelty of this work is that we integrate cutting-edge research on power and thermal behavior at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
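The leakage/temperature interdependency such system-level models capture can be sketched with a lumped thermal circuit: temperature raises the exponential leakage term, which raises total power, which raises temperature again, converging to a fixed point. All coefficients below are illustrative assumptions, not the dissertation's values.

```python
import math

T_AMB = 45.0      # ambient temperature [C] (assumed)
R_TH = 0.8        # lumped thermal resistance [C/W] (assumed)
P_DYN = 40.0      # dynamic power [W] (assumed)
P_LEAK0 = 10.0    # leakage at the reference temperature [W] (assumed)
K = 0.02          # exponential leakage sensitivity [1/C] (assumed)
T_REF = 45.0

def leakage(temp):
    """Leakage grows roughly exponentially with temperature."""
    return P_LEAK0 * math.exp(K * (temp - T_REF))

# Fixed-point iteration: T = T_amb + R_th * (P_dyn + P_leak(T)).
temp = T_AMB
for _ in range(200):
    temp = T_AMB + R_TH * (P_DYN + leakage(temp))

steady_leak = leakage(temp)
print(temp, steady_leak)
```

The steady-state leakage ends up well above its reference value, which is exactly the positive feedback loop that makes leakage-aware thermal models necessary.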

Relevance: 30.00%

Abstract:

Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption translates directly into high chip temperature, which not only raises packaging/cooling costs, but also degrades the performance, reliability and life span of computing systems. Moreover, high chip temperature greatly increases the leakage power consumption, which is becoming more and more significant with the continued scaling of transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we addressed the power/thermal issues from the system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We then proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research from single-core to multi-core platforms: we investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption of a given voltage schedule. Finally, we concluded the dissertation with an elaborated discussion of future extensions of our research.
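The intuition behind oscillating speed schedules can be sketched with a toy first-order thermal model. This is only a caricature of the M-Oscillations idea, with invented constants: delivering the same work in short interleaved slices produces a lower peak temperature than running flat-out and then idling.

```python
import numpy as np

# Toy thermal model dT/dt = a*P - b*T with P = s^3 (illustrative constants).
a, b, dt = 1.0, 0.05, 0.01

def peak_temperature(speed_profile):
    T, peak = 0.0, 0.0
    for s in speed_profile:
        T += (a * s**3 - b * T) * dt   # forward-Euler step
        peak = max(peak, T)
    return peak

n = 100_000
# Schedule A: all the work up front, then idle.
batch = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])
# Schedule B: the same total work, delivered in short oscillating slices.
slice_ = np.concatenate([np.ones(50), np.zeros(50)])
oscillating = np.tile(slice_, n // 100)
assert batch.sum() == oscillating.sum()   # equal work delivered

p_batch = peak_temperature(batch)
p_osc = peak_temperature(oscillating)
print(p_batch, p_osc)
```

Because each hot slice is much shorter than the thermal time constant, the oscillating schedule hovers near the average-power temperature instead of climbing to the full-power steady state.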

Relevance: 30.00%

Abstract:

Orthogonal Frequency-Division Multiplexing (OFDM) has proved to be a promising technology for transmitting higher data rates. Multicarrier Code-Division Multiple Access (MC-CDMA) is a transmission technique which combines the advantages of both OFDM and Code-Division Multiple Access (CDMA), so as to allow high transmission rates over severely time-dispersive multi-path channels without the need for a complex receiver implementation. MC-CDMA also exploits frequency diversity via the different subcarriers, and therefore allows high-code-rate systems to achieve good Bit Error Rate (BER) performance. Furthermore, the spreading in the frequency domain makes the time synchronization requirement much less stringent than in traditional direct-sequence CDMA schemes. There are still some problems with MC-CDMA. One is the high Peak-to-Average Power Ratio (PAPR) of the transmitted signal: high PAPR leads to nonlinear distortion in the amplifier and results in inter-carrier self-interference plus out-of-band radiation. Suppressing the Multiple Access Interference (MAI) is another crucial problem in MC-CDMA systems. Imperfect cross-correlation characteristics of the spreading codes and multipath fading destroy the orthogonality among users and cause MAI, which produces serious BER degradation. Moreover, in the uplink the received signals at a base station are always asynchronous; this also destroys the orthogonality among users and hence generates MAI, which degrades system performance. Beyond those two problems, external interference must always be considered seriously for any communication system. In this dissertation, we design a novel MC-CDMA system with low PAPR and mitigated MAI. A new semi-blind channel estimation and multi-user data detection scheme based on Parallel Interference Cancellation (PIC) is applied in the system. Low-Density Parity-Check (LDPC) codes are also introduced to improve performance. Different interference models are analyzed for multi-carrier communication systems, and effective interference suppression for MC-CDMA systems is employed. The experimental results indicate that our system not only significantly reduces the PAPR and MAI but also effectively suppresses outside interference with low complexity. Finally, we present a practical cognitive application of the proposed system on a software-defined radio platform.
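The PAPR problem mentioned above is easy to demonstrate numerically: a constant-modulus single-carrier constellation has 0 dB PAPR, while summing many independently modulated subcarriers produces large peaks. The sketch below is plain OFDM with random QPSK data, for illustration only (no spreading, precoding, or PAPR reduction).

```python
import numpy as np

rng = np.random.default_rng(7)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# One multicarrier symbol: N subcarriers carrying random QPSK data,
# time-domain signal formed by an IFFT, scaled to unit average power.
N = 256
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
signal = np.fft.ifft(qpsk) * np.sqrt(N)

papr_single = papr_db(qpsk)    # constant modulus: 0 dB
papr_multi = papr_db(signal)   # typically around 8-12 dB for N = 256
print(papr_single, papr_multi)
```

The roughly 10 dB gap is why the amplifier back-off (or a PAPR-reduction scheme such as the one proposed in the dissertation) matters so much for multicarrier transmitters.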


Relevance: 30.00%

Abstract:

This work looks at the effect on mid-gap interface state defect density estimates for In0.53Ga0.47As semiconductor capacitors when different AC voltage amplitudes are selected for a fixed voltage bias step size (100 mV) during room temperature only electrical characterization. Results are presented for Au/Ni/Al2O3/In0.53Ga0.47As/InP metal–oxide–semiconductor capacitors with (1) n-type and p-type semiconductors, (2) different Al2O3 thicknesses, (3) different In0.53Ga0.47As surface passivation concentrations of ammonium sulphide, and (4) different transfer times to the atomic layer deposition chamber after passivation treatment on the semiconductor surface—thereby demonstrating a cross-section of device characteristics. The authors set out to determine the importance of the AC voltage amplitude selection on the interface state defect density extractions and whether this selection has a combined effect with the oxide capacitance. These capacitors are prototypical of the type of gate oxide material stacks that could form equivalent metal–oxide–semiconductor field-effect transistors beyond the 32 nm technology node. The authors do not attempt to achieve the best scaled equivalent oxide thickness in this work, as our focus is on accurately extracting device properties that will allow the investigation and reduction of interface state defect densities at the high-k/III–V semiconductor interface. The operating voltage for future devices will be reduced, potentially leading to an associated reduction in the AC voltage amplitude, which will force a decrease in the signal-to-noise ratio of electrical responses and could therefore result in less accurate impedance measurements. 
A concern thus arises regarding the accuracy of the electrical property extractions using such impedance measurements for future devices, particularly in relation to the mid-gap interface state defect density estimated from the conductance method and from the combined high–low frequency capacitance–voltage method. The authors apply a fixed voltage step of 100 mV for all voltage sweep measurements at each AC frequency. Each of these measurements is repeated 15 times for the equidistant AC voltage amplitudes between 10 mV and 150 mV. This provides the desired AC voltage amplitude to step size ratios from 1:10 to 3:2. Our results indicate that, although the selection of the oxide capacitance is important both to the success and accuracy of the extraction method, the mid-gap interface state defect density extractions are not overly sensitive to the AC voltage amplitude employed regardless of what oxide capacitance is used in the extractions, particularly in the range from 50% below the voltage sweep step size to 50% above it. Therefore, the use of larger AC voltage amplitudes in this range to achieve a better signal-to-noise ratio during impedance measurements for future low operating voltage devices will not distort the extracted interface state defect density.
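As an illustration of the combined high-low frequency capacitance-voltage method referenced above, the textbook extraction takes the interface-trap capacitance as the difference between the low- and high-frequency branches. The capacitance values below are invented for illustration, not measured data from this work.

```python
q = 1.602e-19  # elementary charge [C]

def dit_high_low(c_lf, c_hf, c_ox):
    """Combined high-low frequency C-V estimate of interface state density.
    All capacitances are per unit area [F/cm^2]; result in [cm^-2 eV^-1]."""
    c_it_lf = c_lf * c_ox / (c_ox - c_lf)   # trap response present at low f
    c_it_hf = c_hf * c_ox / (c_ox - c_hf)   # residual response at high f
    return (c_it_lf - c_it_hf) / q

# Invented example values, roughly the right order for a thin Al2O3 stack.
c_ox = 8.0e-7   # oxide capacitance [F/cm^2]
c_lf = 5.0e-7   # low-frequency capacitance near mid-gap
c_hf = 4.0e-7   # high-frequency capacitance at the same bias

dit = dit_high_low(c_lf, c_hf, c_ox)
print(f"{dit:.2e}")  # of order 1e12 cm^-2 eV^-1
```

Note how the result depends on `c_ox` through both branches, which is why the paper emphasizes that the choice of oxide capacitance matters to the accuracy of the extraction.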

Relevance: 30.00%

Abstract:

Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. It is robust to outliers, which can strongly affect the least-squares estimator in linear regression. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and the covariates; it can therefore be widely applied to problems in econometrics, environmental science and health science. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined by either precision analysis or power analysis with closed-form formulas. There are also methods that calculate sample size for QR based on precision analysis, such as C. Jennen-Steinmetz and S. Wellek (2005), and a method based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on a power analysis under hypothesis tests of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers have to make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of the sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the proposed method is implemented in R, the user can choose either a parametric distribution or nonparametric kernel density estimation for the error distribution, and specifies the covariate structure and effect size to carry out the sample size and power calculation. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers close to the nominal power level, for example, 80%.
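The asymptotics underlying such a power analysis can be sketched with the standard iid-error variance of a QR coefficient, Var(beta_hat) = tau(1 - tau) / (n f^2 var_x), where f is the error density at the tau-th error quantile. This is the generic large-sample formula, not necessarily the Shao-Wang procedure; the effect size below is invented for illustration.

```python
import math
from statistics import NormalDist

def qr_sample_size(tau, effect, var_x, f_err, alpha=0.05, power=0.80):
    """Sample size to test a single QR slope (H0: beta = 0 vs |beta| = effect),
    using the iid-error asymptotic variance tau*(1 - tau)/(n * f^2 * var_x)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return (z_a + z_b) ** 2 * tau * (1 - tau) / (f_err**2 * effect**2 * var_x)

# Median regression with standard normal errors: f is the error density
# at the 0.5 quantile, i.e. the normal density at zero.
f_median = NormalDist().pdf(0.0)
n_qr = qr_sample_size(tau=0.5, effect=0.5, var_x=1.0, f_err=f_median)

# OLS counterpart for the same effect; the ratio is pi/2, the classic
# efficiency loss of the median relative to the mean under normal errors.
z = NormalDist()
n_ols = (z.inv_cdf(0.975) + z.inv_cdf(0.80)) ** 2 * 1.0 / (0.5**2 * 1.0)
ratio = n_qr / n_ols
print(math.ceil(n_qr), ratio)
```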

Relevance: 30.00%

Abstract:

Thermoelectric materials are being revisited for various applications, including power generation. The direct conversion of temperature differences into electric voltage, and vice versa, is known as the thermoelectric effect. Possible applications of thermoelectric materials include eco-friendly refrigeration, electric power generation from waste heat, infrared sensors, temperature-controlled seats and portable picnic coolers. Thermoelectric materials are also extensively researched as an alternative to compression-based refrigeration; this utilizes the principle of Peltier cooling. The performance of a thermoelectric material is characterized by its figure of merit, ZT = S²σT/κ, a function of the transport coefficients: the electrical conductivity (σ), the thermal conductivity (κ) and the Seebeck coefficient (S), with T the absolute temperature. A large Seebeck coefficient, high electrical conductivity and low thermal conductivity are necessary to realize a high-performance thermoelectric material. The best-known thermoelectric materials are phonon-glass electron-crystal (PGEC) systems, in which phonons are scattered within the unit cell by a rattling structure while electrons are scattered little, as in crystals, giving high electrical conductivity. A survey of the literature reveals that correlated semiconductors and Kondo insulators containing rare earth or transition metal ions are potential thermoelectric materials. The structural, magnetic and charge transport properties of manganese oxides having the general formula RE1−xAExMnO3 (RE = rare earth; AE = Ca, Sr, Ba) are determined by the mixed-valence (3+/4+) state of the Mn ions. In strongly correlated electron systems, magnetism and charge transport properties are strongly coupled.
Within strongly correlated electron systems, the study of manganese oxides, widely known as manganites, which exhibit unique magneto-electric transport properties, is an active area of research. Strongly correlated systems like perovskite manganites, characterized by their narrow localized bands and hopping conduction, were found to be good candidates for thermoelectric applications. Manganites represent a highly correlated electron system and exhibit a variety of phenomena such as charge, orbital and magnetic ordering, colossal magnetoresistance and the Jahn-Teller effect. The strong interdependence between the magnetic order parameters and the transport coefficients in manganites has generated much research interest in their thermoelectric properties. Here, large thermal motion, or rattling, of rare earth atoms with localized magnetic moments is believed to be responsible for the low thermal conductivity of these compounds. The 4f levels in these compounds, lying near the Fermi energy, create a large density of states at the Fermi level, and hence these materials are likely to exhibit a fairly large Seebeck coefficient.
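As a quick worked example of the figure of merit ZT = S²σT/κ, with illustrative transport coefficients typical of a good commercial thermoelectric (roughly the Bi2Te3 class, not a manganite):

```python
# ZT = S^2 * sigma * T / kappa, with illustrative (assumed) values.
S = 200e-6      # Seebeck coefficient [V/K]
sigma = 1.0e5   # electrical conductivity [S/m]
kappa = 1.5     # thermal conductivity [W/(m K)]
T = 300.0       # absolute temperature [K]

ZT = S**2 * sigma * T / kappa
print(ZT)  # 0.8
```

The formula makes the design trade-off explicit: doubling S quadruples ZT, while halving the thermal conductivity only doubles it, which is why large-Seebeck correlated systems are attractive.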

Relevance: 30.00%

Abstract:

The hydrodynamic processes in large lakes directly influence their dynamics and their physical and chemical behaviour. Circulation and temperature models depend on the motion of the fluid; circulation in the Caspian Sea is analysed here because the lack of observations means the circulations and their outcomes must be determined numerically. Three-dimensional (large-scale) simulations were planned and performed following the work of Smolarkiewicz and Margolin. The model is non-hydrostatic, and the Boussinesq approximation is used in its formulation, on the basis of Lipps (1990) and curvilinear coordinates; the fluid is adiabatic and stratified, and the wind forcing is set to zero. The velocity profile at depth, ahead of the ridge, can be drawn on the basis of the density difference between the northern and southern ridges. The circulation field ranges from 3 cm/s to 13 cm/s on the plane z = 5 cm, and the vertical variation of speed on that plane is 0.02 m/s. Vertical profiles of horizontal speed before, over, and after the ridges are drawn at different locations; the speed changes from 0.5 cm/s to 1 cm/s before the ridges.

Relevance: 30.00%

Abstract:

Current space exploration has transpired through the use of chemical rockets, and they have served us well, but they have their limitations. Exploration of the outer solar system, Jupiter and beyond, will most likely require a new generation of propulsion system. One potential technology class for spacecraft propulsion and power systems involves thermonuclear fusion plasmas. In this class it is well accepted that d-3He fusion is the most promising fuel candidate for spacecraft applications, as the 14.7 MeV protons carry up to 80% of the total fusion power while the alpha particles have energies less than 4 MeV. The other minor fusion products from secondary d-d reactions, consisting of 3He, n, p, and 3H, also have energies less than 4 MeV. There are two main fusion subsets: Magnetic Confinement Fusion devices and Inertial Electrostatic Confinement (IEC) fusion devices. Magnetic Confinement Fusion devices are characterized by complex geometries and prohibitive structural mass, compromising spacecraft use at this stage of exploration. While generating energy from a lightweight and reliable fusion source is important, another critical issue is harnessing this energy into usable power and/or propulsion. IEC fusion is a method of fusion plasma confinement that uses a series of biased electrodes to accelerate a uniform spherical beam of ions into a hollow cathode, typically a gridded structure with high transparency. The inertia of the imploding ion beam compresses the ions at the center of the cathode, increasing the density to the point where fusion occurs. Since the velocity distributions of fusion particles in an IEC are essentially isotropic and carry no net momentum, a means of redirecting the particles' velocity is necessary to efficiently extract energy and provide power or create thrust.
There are classes of advanced-fuel fusion reactions where direct energy conversion based on electrostatically biased collector plates is impossible due to potential limits, material structure limitations, and IEC geometry. Thermal conversion systems are also inefficient for this application. A method of converting the isotropic IEC output into a collimated flow of fusion products solves these issues and allows direct energy conversion. An efficient traveling-wave direct energy converter has been proposed and studied by Momota and Shu, and further evaluated with numerical simulations by Ishikawa and others. One conventional method of collimating charged particles is to surround the particle source with an applied magnetic channel: charged particles are trapped and move along the lines of flux. By introducing gradually expanding lines of force along the magnetic channel, the velocity component perpendicular to the lines of force is transferred to the parallel one. However, efficient operation of the IEC requires a null magnetic field at the core of the device. To achieve this, Momota and Miley have proposed a pair of coils anti-parallel to the magnetic channel, creating the null hexapole magnetic field region necessary for the IEC fusion core. Numerically, collimation of 300 eV electrons without a stabilization coil was demonstrated to approach 95% at a profile corresponding to Vsolenoid = 20.0 V, Ifloating = 2.78 A, Isolenoid = 4.05 A, while collimation of electrons with a stabilization coil present was demonstrated to reach 69% at a profile corresponding to Vsolenoid = 7.0 V, Istab = 1.1 A, Ifloating = 1.1 A, Isolenoid = 1.45 A.
Experimentally, collimation of electrons with a stabilization coil present was demonstrated to be 35% at 100 eV, reaching a peak of 39.6% at 50 eV with a profile corresponding to Vsolenoid = 7.0 V, Istab = 1.1 A, Ifloating = 1.1 A, Isolenoid = 1.45 A, and collimation of 300 eV electrons without a stabilization coil was demonstrated to approach 49% at a profile corresponding to Vsolenoid = 20.0 V, Ifloating = 2.78 A, Isolenoid = 4.05 A. Of the 300 eV electrons' initial velocity, 6.4% is directed to the collector plates. The remaining electrons are trapped by the collimator's magnetic field; these particles oscillate around the null field region several hundred times and eventually escape to the collector plates. At a solenoid voltage profile of 7 V, 100 eV electrons are collimated with wall and perpendicular-component losses of 31%. Increasing the electron energy beyond 100 eV increases the wall losses, by 25% at 300 eV. Ultimately it was determined that a field strength deriving from 9.5 MAT/m would be required to collimate the 14.7 MeV fusion protons from a d-3He fueled IEC fusion core. The concept of the proton collimator has thus been shown to be effective in transforming an isotropic source into a collimated flow of particles ripe for direct energy conversion.
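The gap between the electron experiments and the proton requirement can be cross-checked with the gyroradius r = p/(qB). The particle energies come from the text above, but the two field strengths (10 mT for the electron case, 1 T for the proton case) are assumptions chosen only to illustrate the scale.

```python
import math

Q = 1.602e-19                      # elementary charge [C]
MP_MEV, ME_MEV = 938.272, 0.511    # proton/electron rest energies [MeV]
C = 2.998e8                        # speed of light [m/s]

def momentum_si(ke_mev, m0_mev):
    """Relativistic momentum from kinetic energy: pc = sqrt(E(E + 2 m0 c^2))."""
    pc_mev = math.sqrt(ke_mev * (ke_mev + 2 * m0_mev))
    return pc_mev * 1e6 * Q / C    # [kg m/s]

def gyroradius(ke_mev, m0_mev, b_tesla):
    return momentum_si(ke_mev, m0_mev) / (Q * b_tesla)

# 300 eV electrons (the published collimation tests) vs the 14.7 MeV
# d-3He protons the converter ultimately has to handle.
r_electron = gyroradius(300e-6, ME_MEV, 0.01)   # at an assumed 10 mT
r_proton = gyroradius(14.7, MP_MEV, 1.0)        # at an assumed 1 T
print(r_electron, r_proton)
```

Even at a hundred times the field, the proton's gyroradius is about two orders of magnitude larger than the electron's, which is consistent with the MAT/m-scale coil requirement quoted above.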

Relevance: 30.00%

Abstract:

Context. In February-March 2014, the MAGIC telescopes observed the high-frequency-peaked BL Lac 1ES 1011+496 (z=0.212) in a flaring state at very high energies (VHE, E>100 GeV). The flux reached a level more than 10 times higher than any previously recorded flaring state of the source. Aims. We describe the characteristics of the flare, presenting the light curve and the spectral parameters of the night-wise spectra and the average spectrum of the whole period. From these data we aim to detect the imprint of the Extragalactic Background Light (EBL) in the VHE spectrum of the source, in order to constrain its intensity in the optical band. Methods. We analyzed the gamma-ray data from the MAGIC telescopes using the standard MAGIC software to produce the light curve and the spectra. To constrain the EBL we implemented the method developed by the H.E.S.S. collaboration, in which the intrinsic energy spectrum of the source is modeled with a simple function (< 4 parameters) and the EBL-induced optical depth is calculated using a template EBL model. The likelihood of the observed spectrum is then maximized, including a normalization factor for the EBL opacity among the free parameters. Results. The collected data allowed us to describe the flux changes night by night and to produce differential energy spectra for all nights of the observed period. The estimated intrinsic spectra of all the nights could be fitted by power-law functions. Evaluating the changes in the fit parameters, we conclude that the spectral shapes for most of the nights were compatible, regardless of the flux level, which enabled us to produce an average spectrum from which the EBL imprint could be constrained. The likelihood ratio test shows that the model with an EBL density of 1.07 (−0.20, +0.24)stat+sys, relative to the one in the tested EBL template (Domínguez et al. 2011), is preferred at the 4.6σ level over the no-EBL hypothesis, under the assumption that the intrinsic source spectrum can be modeled as a log-parabola. This translates into a constraint on the EBL density in the wavelength range [0.24 μm, 4.25 μm], with a peak value at 1.4 μm of λF_λ = 12.27^(+2.75)_(−2.29) nW m^(−2) sr^(−1), including systematics.
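The fitting procedure outlined in the Methods can be sketched as a fit in which the EBL optical depth from a template is scaled by a free normalization α, alongside the intrinsic-spectrum parameters. The following is a toy illustration on synthetic data: the power-law intrinsic spectrum, the τ(E) template, and all numerical values are invented stand-ins, not MAGIC data or the Domínguez et al. (2011) model, and least-squares is used in place of the full likelihood.

```python
import numpy as np
from scipy.optimize import curve_fit

def tau_template(energy_tev):
    """Toy EBL optical-depth template (hypothetical stand-in for a real model)."""
    return 0.8 * energy_tev**1.2

def observed_flux(energy_tev, norm, index, alpha):
    """Intrinsic power law attenuated by an alpha-scaled EBL opacity."""
    intrinsic = norm * energy_tev**(-index)
    return intrinsic * np.exp(-alpha * tau_template(energy_tev))

# Synthetic "observed" spectrum with 3% multiplicative noise
rng = np.random.default_rng(42)
energy = np.logspace(-1, 0.6, 25)          # 0.1 - 4 TeV
truth = (1.0, 2.0, 1.07)                   # norm, index, EBL scaling
flux = observed_flux(energy, *truth) * rng.normal(1.0, 0.03, energy.size)

# Fit norm, index, and the EBL normalization alpha simultaneously
popt, pcov = curve_fit(observed_flux, energy, flux, p0=(1.0, 2.0, 1.0))
alpha_fit = popt[2]
```

The curvature that EBL absorption imprints on the spectrum is what breaks the degeneracy between the intrinsic index and α, which is why a smooth (< 4 parameter) intrinsic model is assumed.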

Relevância:

30.00% 30.00%

Publicador:

Resumo:

As the semiconductor industry struggles to maintain its momentum along the path of Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow that directly optimizes for clock power. We also investigate a design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activity. Unlike the common assumption in 2D ICs that shutdown gates are cheap and can therefore be applied at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies that produce the optimal allocation and placement of clock and control TSVs so that the clock power is minimized, and we show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past.
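The clock-power objective discussed above reduces to the standard dynamic-power relation P = α·C·V²·f summed over the tree's wires and sinks, where gating a subtree zeroes its activity factor α at the cost of the shutdown gate's (and its control TSV's) own overhead. The sketch below uses entirely hypothetical capacitance and overhead numbers to illustrate the trade-off.

```python
def clock_power(node_caps_ff, vdd=1.0, freq_hz=1e9, activity=1.0):
    """Dynamic power (W): sum of a*C*V^2*f over clock nodes (C in fF)."""
    return sum(activity * c * 1e-15 * vdd**2 * freq_hz for c in node_caps_ff)

# Hypothetical clock tree: per-node capacitance (wire + sink load), in fF
subtree_a = [120.0, 80.0, 95.0]
subtree_b = [110.0, 70.0, 60.0]

ungated = clock_power(subtree_a + subtree_b)

# Gating subtree_b when idle: its activity drops to 0, but the shutdown
# gate and its control TSV add a small always-switching capacitance overhead.
gate_overhead_ff = [25.0]
gated = clock_power(subtree_a + gate_overhead_ff) + clock_power(subtree_b, activity=0.0)
```

Gating pays off only when the subtree's idle capacitance exceeds the gate-plus-TSV overhead, which is why the 3D placement of control TSVs enters the optimization rather than being treated as free.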
In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and reliability properties, improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles and the application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of the CPU, providing high bandwidth and short latency. However, non-uniform voltage fluctuations and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform distribution of bit-cell leakage (and thereby bit flips). We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition at runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances DRAM resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
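The resilience borrow-in idea can be illustrated with a minimal control loop: when an estimate of the DRAM error rate (driven here by a toy model of frequency-dependent heat and voltage-noise coupling) exceeds a target, the manager scales the CPU frequency down; when there is headroom, it borrows performance back by scaling up. All models, thresholds, and constants below are hypothetical and stand in for the dissertation's actual DRM policy.

```python
def dram_error_rate(cpu_freq_ghz):
    """Toy coupling model: DRAM error rate grows with CPU frequency
    through the heat and voltage noise injected into the stacked layers."""
    return 1e-9 * cpu_freq_ghz**3

def drm_step(cpu_freq_ghz, target_rate=1e-8, f_min=1.0, f_max=4.0, step=0.2):
    """One DRM epoch: throttle when DRAM errors exceed the target,
    otherwise borrow performance back up to f_max."""
    if dram_error_rate(cpu_freq_ghz) > target_rate:
        return max(f_min, cpu_freq_ghz - step)
    return min(f_max, cpu_freq_ghz + step)

freq = 4.0                   # start at maximum frequency (GHz)
for _ in range(20):          # run the manager for a few epochs
    freq = drm_step(freq)    # settles near the resilience-limited frequency
```

The loop converges to the highest frequency whose induced error rate stays under the target, which is the sense in which DVFS "borrows" resilience for the DRAM without a permanent performance sacrifice.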