860 results for Large-scale Structure


Relevance:

100.00%

Publisher:

Abstract:

To tackle challenges in circuit-level and system-level VLSI and embedded system design, this dissertation proposes several novel algorithms. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, along with the first transceiver insertion algorithmic framework for optical interconnects. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart home area to improve health, wellbeing and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is therefore extended to handle energy consumption scheduling for smart homes. The extended scheme schedules household appliances so as to minimize the customer's monetary expense under a time-varying pricing model.
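
The appliance-scheduling idea in the final sentence can be sketched in a few lines. This is a toy illustration under assumed hourly prices and appliance run lengths (all names and numbers are hypothetical), not the dissertation's algorithm:

```python
# Illustrative sketch (not the dissertation's scheme): place each shiftable
# appliance's contiguous run in the cheapest window of a time-varying tariff.

def cheapest_start(prices, duration):
    """Return (start_hour, cost) of the cheapest contiguous window."""
    best = min(range(len(prices) - duration + 1),
               key=lambda s: sum(prices[s:s + duration]))
    return best, sum(prices[best:best + duration])

# Hypothetical time-varying prices (per kWh) and appliance run lengths (hours).
hourly_price = [0.30, 0.28, 0.12, 0.10, 0.10, 0.15,
                0.25, 0.35, 0.40, 0.38, 0.30, 0.22]
appliances = {"washer": 2, "dishwasher": 3}

for name, hours in appliances.items():
    start, cost = cheapest_start(hourly_price, hours)
    print(f"{name}: start at hour {start}, price sum {cost:.2f}")
```

A real scheduler would add appliance deadlines and power limits, but the core cost-minimization step is this windowed search.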


This study presents a computational parametric analysis of DME steam reforming in a large-scale Circulating Fluidized Bed (CFB) reactor. The Computational Fluid Dynamics (CFD) model used, which is based on an Eulerian-Eulerian dispersed flow approach, was developed and validated in Part I of this study [1]. The effects of the reactor inlet configuration, gas residence time, inlet temperature and steam to DME ratio on the overall reactor performance and products have all been investigated. The results show that the use of a double-sided solid feeding system remarkably improves the flow uniformity, but has a limited effect on the reactions and products. Temperature was found to play the dominant role in increasing the DME conversion and the hydrogen yield. Based on the parametric analysis, it is recommended to run the CFB reactor at around 300 °C inlet temperature, a steam to DME molar ratio of 5.5, a gas residence time of 4 s and a space velocity of 37,104 ml gcat⁻¹ h⁻¹. At these conditions, the DME conversion and the hydrogen molar concentration in the product gas were both found to be around 80%.
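
As a quick consistency exercise, the recommended space velocity fixes the required feed flow for a given catalyst loading (the catalyst mass below is a hypothetical example; only the space velocity comes from the abstract):

```python
# Feed flow implied by the quoted space velocity of 37,104 ml per gram of
# catalyst per hour. The catalyst loading is a made-up illustrative value.

SPACE_VELOCITY = 37_104.0   # ml gcat^-1 h^-1 (from the abstract)
catalyst_mass_g = 250.0     # hypothetical catalyst loading

feed_flow_ml_per_h = SPACE_VELOCITY * catalyst_mass_g
print(f"required feed flow: {feed_flow_ml_per_h / 1e6:.2f} m^3/h")
```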


Some color centers in diamond can serve as quantum bits which can be manipulated with microwave pulses and read out with a laser, even at room temperature. However, the photon collection efficiency of bulk diamond is greatly reduced by refraction at the diamond/air interface. To address this issue, we fabricated arrays of diamond nanostructures, differing in both diameter and top end shape, with HSQ and Cr as the etching mask materials, aiming toward large-scale fabrication of single-photon sources with enhanced collection efficiency made of nitrogen-vacancy (NV) embedded diamond. With a mixture of O2 and CHF3 gas plasma, diamond pillars with diameters down to 45 nm were obtained. The evolution of the top end shape has been captured with a simple model. Tests of size-dependent single-photon properties confirmed a collection efficiency enhancement of more than tenfold, and a mild decrease of the decoherence time with decreasing pillar diameter was observed, as expected. These results provide useful information for future applications of nanostructured diamond as a single-photon source.
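
The collection-efficiency problem from refraction can be quantified with a quick escape-cone estimate, assuming an isotropic emitter below a flat diamond/air interface and using the standard refractive index of diamond, n ≈ 2.42:

```python
# Back-of-the-envelope estimate of the refraction loss noted above: only
# photons emitted within the total-internal-reflection escape cone leave
# through a flat top surface (isotropic emitter assumed for simplicity).
import math

N_DIAMOND = 2.42                        # refractive index of diamond
theta_c = math.asin(1.0 / N_DIAMOND)    # critical angle at diamond/air interface
# Fraction of the upward hemisphere's solid angle inside the escape cone.
escape_fraction = 0.5 * (1.0 - math.cos(theta_c))

print(f"critical angle: {math.degrees(theta_c):.1f} deg")
print(f"escape-cone fraction: {escape_fraction:.1%}")
```

The few-percent result is why nanostructuring (pillars, solid immersion lenses) is needed to boost collection.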


The spread of antibiotic resistance among bacteria responsible for nosocomial and community-acquired infections creates an urgent need for novel therapeutic or prophylactic targets and for innovative pathogen-specific antibacterial compounds. Major challenges are posed by opportunistic pathogens belonging to the low-GC% gram-positive bacteria. Among these, Enterococcus faecalis is a leading cause of hospital-acquired infections, associated with life-threatening complications and increased hospital costs. To better understand the molecular properties of enterococci that may be required for virulence, and that may explain the emergence of these bacteria in nosocomial infections, we performed the first large-scale functional analysis of E. faecalis V583, the first vancomycin-resistant isolate from a human bloodstream infection. E. faecalis V583 belongs to the high-risk clonal complex 2 group, which comprises mostly isolates derived from hospital infections worldwide. We conducted broad-range screenings of candidate genes likely involved in host adaptation (e.g., colonization and/or virulence). For this purpose, a library was constructed of targeted insertion mutations in 177 genes encoding putative surface or stress-response factors. Individual mutants were subsequently tested for their (i) resistance to oxidative stress, (ii) antibiotic resistance, (iii) resistance to opsonophagocytosis, (iv) adherence to human colon carcinoma Caco-2 epithelial cells and (v) virulence in a surrogate insect model. Our results identified a number of factors that are involved in the interaction between enterococci and their host environments. Their predicted functions highlight the importance of cell envelope glycopolymers in E. faecalis host adaptation. This study provides a valuable genetic database for understanding the steps leading E. faecalis to opportunistic virulence.


Strong convective events can produce extreme precipitation, hail, lightning or gusts, potentially inducing severe socio-economic impacts. These events have a relatively small spatial extent and, in most cases, a short lifetime. In this study, a model is developed for estimating convective extreme events based on large-scale conditions. It is shown that strong convective events can be characterized by a Weibull distribution of radar-based rainfall with a low shape and a high scale parameter value. A radius of 90 km around a station reporting a convective situation turned out to be suitable. A methodology is developed to estimate the Weibull parameters, and thus the occurrence probability of convective events, from large-scale atmospheric instability and enhanced near-surface humidity, which are usually found on a larger scale than the convective event itself. Here, the probability of occurrence of extreme convective events is estimated from the KO index, indicating stability, and the relative humidity at 1000 hPa; both variables are computed from ERA-Interim reanalysis. In a first version of the methodology, these two variables are applied to estimate the spatial rainfall distribution and the occurrence of a convective event. The developed method shows significant skill in estimating the occurrence of convective events as observed at synoptic stations, in lightning measurements, and in severe weather reports. In order to take frontal influences into account, a scheme for the detection of atmospheric fronts is implemented. While generally higher instability is found in the vicinity of fronts, the skill of this approach is largely unchanged. Additional improvements were achieved by a bias correction and the use of ERA-Interim precipitation. The resulting estimation method is applied to the ERA-Interim period (1979-2014) to establish a ranking of estimated convective extreme events.
Two strong estimated events that reveal a frontal influence are analysed in detail. As a second application, the method is applied to GCM-based decadal predictions in the period 1979-2014, which were initialized every year. Decadal predictive skill for convective event frequencies over Germany is found for the first 3-4 years after initialization.
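
The role of the Weibull parameters can be illustrated numerically: with a low shape and a high scale parameter, the exceedance probability P(X > x) = exp(−(x/λ)^k) of heavy rainfall is far larger. The thresholds and parameter values below are illustrative only, not fitted values from the study:

```python
# Toy illustration of why a low Weibull shape (k) and high scale (lambda)
# characterize convective rainfall: they put more mass in the extreme tail.
import math

def weibull_exceedance(x, shape, scale):
    """P(X > x) for a Weibull(shape, scale) distribution."""
    return math.exp(-((x / scale) ** shape))

threshold = 20.0  # mm/h, hypothetical "extreme" rainfall threshold
convective = weibull_exceedance(threshold, shape=0.7, scale=6.0)   # low k, high lambda
stratiform = weibull_exceedance(threshold, shape=1.3, scale=3.0)   # high k, low lambda

print(f"P(rain > {threshold} mm/h): convective-like {convective:.3f}, "
      f"stratiform-like {stratiform:.6f}")
```

Estimating (shape, scale) from large-scale instability and humidity, as the study does, then directly yields such occurrence probabilities.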


Aim: Positive regional correlations between biodiversity and human population have been detected for several taxonomic groups and geographical regions. Such correlations could have important conservation implications and have mainly been attributed to ecological factors, with little testing for an artefactual explanation: more populated regions may show higher biodiversity simply because they are more thoroughly surveyed. We tested the hypothesis that the correlation between human population and herptile diversity in Europe is influenced by survey effort.


The increasing integration of renewable energies into the electricity grid contributes considerably to achieving the European Union's goals on energy and greenhouse gas (GHG) emissions reduction. However, it also brings problems for grid management. Large-scale energy storage can provide the means to better integrate renewable energy sources, balance supply and demand, increase energy security, improve management of the grid and support convergence towards a low-carbon economy. Geological formations have the potential to store large volumes of fluids with minimal impact on the environment and society. One way to achieve large-scale energy storage is to use the storage capacity of geological reservoirs. In fact, there are several viable technologies for underground energy storage, as well as several types of underground reservoirs that can be considered. The geological energy storage technologies considered in this research were: Underground Gas Storage (UGS), Hydrogen Storage (HS), Compressed Air Energy Storage (CAES), Underground Pumped Hydro Storage (UPHS) and Thermal Energy Storage (TES). For these technologies, several types of geological reservoir may be suitable, namely: depleted hydrocarbon reservoirs, aquifers, salt formations and caverns, engineered rock caverns and abandoned mines. Specific site-screening criteria apply to each of these reservoir types and technologies, determining the viability of the reservoir itself, and of the technology, for any particular site. This paper presents a review of the criteria applied in the scope of the Portuguese contribution to the EU-funded project ESTMAP – Energy Storage Mapping and Planning.
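
A screening step like the one described can be represented as a simple lookup from technology to admissible reservoir types. The pairings below are generic illustrations assumed for the sketch, not the ESTMAP criteria:

```python
# Toy screening table: which of the technologies named above are commonly
# considered for a given reservoir type. Pairings are illustrative examples
# only, not the site-screening criteria reviewed in the paper.

SUITABLE = {
    "UGS":  {"depleted hydrocarbon reservoir", "aquifer", "salt cavern"},
    "HS":   {"salt cavern", "depleted hydrocarbon reservoir"},
    "CAES": {"salt cavern", "engineered rock cavern", "aquifer"},
    "UPHS": {"abandoned mine", "engineered rock cavern"},
    "TES":  {"aquifer", "abandoned mine"},
}

def candidate_technologies(reservoir_type):
    """Technologies for which a reservoir type passes this toy screen."""
    return sorted(t for t, reservoirs in SUITABLE.items()
                  if reservoir_type in reservoirs)

print(candidate_technologies("salt cavern"))
```

Real screening adds quantitative criteria (depth, volume, tightness, proximity to grid) on top of such a compatibility matrix.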


Fire incidents in buildings are common, so the fire safety design of framed structures is imperative, especially for unprotected or partly protected bare steel frames. However, software for structural fire analysis is not widely available. Performance-based structural fire design is therefore needed on the basis of user-friendly, conventional nonlinear computer analysis programs, so that engineers do not need to acquire new structural analysis software for structural fire analysis and design. The tool should be able to simulate different fire scenarios and the associated detrimental effects efficiently, including second-order P-Δ and P-δ effects and material yielding. Moreover, the nonlinear behaviour of a large-scale structure becomes complicated under fire, and its simulation relies on an efficient and effective numerical analysis to cope with the intricate nonlinear effects due to fire. To this end, the present fire study utilizes the second-order elastic/plastic analysis software NIDA to predict the structural behaviour of bare steel framed structures at elevated temperatures. The study considers thermal expansion and material degradation due to heating. Degradation of material strength with increasing temperature is included through a set of temperature-stress-strain curves, mainly according to BS5950 Part 8, which implicitly allows for creep deformation. The finite element stiffness formulation of beam-column elements is derived from the fifth-order PEP element, which facilitates computer modelling with one member per element. The Newton-Raphson method is used in the nonlinear solution procedure to trace the nonlinear equilibrium path at specified elevated temperatures. Several numerical and experimental verifications of framed structures are presented and compared against solutions in the literature.
The proposed method permits engineers to adopt performance-based structural fire analysis and design using typical second-order nonlinear structural analysis software.
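
The Newton-Raphson equilibrium iteration mentioned above can be sketched on a one-degree-of-freedom example: find the displacement at which a softening, temperature-degraded spring balances an applied load. This is a minimal illustration of the solution strategy, not NIDA's implementation; all numbers and the toy material law are assumptions:

```python
# Minimal Newton-Raphson sketch: solve R(u) = P for a nonlinear "structure"
# whose stiffness is reduced by a temperature retention factor.

def newton_raphson(residual, tangent, u0=0.0, tol=1e-10, max_iter=50):
    """Iterate u <- u - r / K_t until the residual vanishes."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            return u
        u -= r / tangent(u)
    raise RuntimeError("Newton-Raphson did not converge")

k0, retention = 100.0, 0.6   # cold stiffness; strength retention at temperature
P = 30.0                     # applied load
k = k0 * retention
# Toy softening law: internal resistance R(u) = k*u - 0.5*u**3
u_eq = newton_raphson(lambda u: k * u - 0.5 * u**3 - P,
                      lambda u: k - 1.5 * u**2)
print(f"equilibrium displacement: {u_eq:.4f}")
```

In a frame analysis the scalars become the global residual vector and tangent stiffness matrix, assembled from the PEP beam-column elements at each temperature step.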


Inflation is a period of accelerated expansion in the very early universe, which has the appealing aspect that it can create primordial perturbations via quantum fluctuations. These primordial perturbations have been observed in the cosmic microwave background, and they also function as the seeds of all large-scale structure in the universe. Curvaton models are simple modifications of the standard inflationary paradigm: inflation is driven by the energy density of the inflaton, but another field, the curvaton, is responsible for producing the primordial perturbations. The curvaton decays after inflation has ended, whereby the isocurvature perturbations of the curvaton are converted into adiabatic perturbations. Since the curvaton must decay, it must have some interactions; additionally, realistic curvaton models typically have some self-interactions. In this work we consider self-interacting curvaton models where the self-interaction is a monomial in the potential, suppressed by the Planck scale, so that the self-interaction is very weak. Nevertheless, since the self-interaction makes the equations of motion non-linear, it can modify the behaviour of the model drastically. The most intriguing aspect of this behaviour is that the final properties of the perturbations become highly dependent on the initial values. Departures from a Gaussian distribution are important observables of the primordial perturbations. Due to the non-linearity of the self-interacting curvaton model and its sensitivity to initial conditions, it can produce significant non-Gaussianity of the primordial perturbations. In this work we investigate the non-Gaussianity produced by the self-interacting curvaton and demonstrate that the non-Gaussianity parameters do not obey the analytically derived approximate relations often cited in the literature. Furthermore, we consider a self-interacting curvaton with a mass at the TeV scale.
Motivated by realistic particle physics models such as the Minimal Supersymmetric Standard Model, we demonstrate that a curvaton in this mass range can be responsible for the observed perturbations if it decays late enough.
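
For context, the analytic relation "often cited in the literature" is, in the sudden-decay approximation for a quadratic (non-self-interacting) curvaton with energy-density fraction $r_{\rm dec}$ at decay:

```latex
% Standard sudden-decay result for a quadratic curvaton; the self-interacting
% models studied in this work are shown not to obey it.
f_{NL} = \frac{5}{4\,r_{\rm dec}} - \frac{5}{3} - \frac{5\,r_{\rm dec}}{6},
\qquad
r_{\rm dec} = \left.\frac{3\rho_\sigma}{3\rho_\sigma + 4\rho_\gamma}\right|_{\rm decay}
```

For a subdominant curvaton ($r_{\rm dec} \ll 1$) this predicts large positive $f_{NL} \approx 5/(4 r_{\rm dec})$, which is the baseline the self-interacting models deviate from.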


The tropical easterly jet (TEJ) is a prominent atmospheric circulation feature observed during the Asian summer monsoon. It is generally assumed that sensible heating over the Tibetan Plateau directly influences the location of the TEJ; other studies, however, have suggested the importance of latent heating in determining the jet location. In this paper, the relative importance of latent heating in the maintenance of the TEJ is explored through simulations with a general circulation model. The simulation of the TEJ by the Community Atmosphere Model, version 3.1, is discussed in detail. These simulations showed that the location of the TEJ is well correlated with the location of the precipitation: significant zonal shifts in the location of the precipitation resulted in similar shifts in the zonal location of the TEJ, while having minimal effect on the large-scale structure of the jet. Further, provided that precipitation patterns were relatively unchanged, orography did not directly impact the location of the TEJ. These results were robust to changes in the cumulus parameterization, suggesting a potentially important role of latent heating in determining the location and structure of the TEJ. The results were used to explain the significant differences in the zonal location of the TEJ between the years 1988 and 2002. To understand the contribution of the latitudinal location of latent heating to the strength of the TEJ, aqua-planet simulations were carried out. For similar amounts of net latent heating, the jet is stronger when the heating is at higher tropical latitudes, which may partly explain why the jet is very strong during the JJA monsoon season.


This thesis is divided into two parts: interacting dark matter and fluctuations in cosmology. There is an incongruence between the properties that dark matter is expected to possess in the early universe and in the late universe. Weakly interacting dark matter yields the observed dark matter relic density and is consistent with large-scale structure formation; however, there is strong astrophysical evidence in favor of the idea that dark matter has large self-interactions. The first part of this thesis presents two models in which the nature of dark matter fundamentally changes as the universe evolves. In the first model, the dark matter mass and couplings depend on the value of a chameleonic scalar field that changes as the universe expands. In the second model, dark matter is charged under a hidden SU(N) gauge group and eventually undergoes confinement. These models introduce very different mechanisms to explain the separation between the physics relevant for freeze-out and for small-scale dynamics.

As the universe continues to evolve, it will asymptote to a de Sitter vacuum phase. Since there is a finite temperature associated with de Sitter space, the universe is typically treated as a thermal system, subject to rare thermal fluctuations, such as Boltzmann brains. The second part of this thesis begins by attempting to escape this unacceptable situation within the context of known physics: vacuum instability induced by the Higgs field. The vacuum decay rate competes with the production rate of Boltzmann brains, and the cosmological measures that have a sufficiently low occurrence of Boltzmann brains are given more credence. Upon further investigation, however, there are certain situations in which de Sitter space settles into a quiescent vacuum with no fluctuations. This reasoning not only provides an escape from the Boltzmann brain problem, but it also implies that vacuum states do not uptunnel to higher-energy vacua and that perturbations do not decohere during slow-roll inflation, suggesting that eternal inflation is much less common than often supposed. Instead, decoherence occurs during reheating, so this analysis does not alter the conventional understanding of the origin of density fluctuations from primordial inflation.


We know from the CMB and observations of large-scale structure that the universe is extremely flat, homogeneous, and isotropic. The currently favored mechanism for generating these characteristics is inflation, a theorized period of exponential expansion of the universe that occurred shortly after the Big Bang. Most theories of inflation generically predict a background of stochastic gravitational waves. These gravitational waves should leave their unique imprint on the polarization of the CMB via Thomson scattering. Scalar perturbations of the metric will cause a pattern of polarization with no curl (E-mode). Tensor perturbations (gravitational waves) will cause a unique pattern of polarization on the CMB that includes a curl component (B-mode). A measurement of the ratio of the tensor to scalar perturbations (r) tells us the energy scale of inflation. Recent measurements by the BICEP2 team detect the B-mode spectrum with a tensor-to-scalar ratio of r = 0.20 (+0.07, −0.05). An independent confirmation of this result is the next step towards understanding the inflationary universe.
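
The link between r and the energy scale of inflation can be made concrete with the standard single-field slow-roll relation V^(1/4) ≈ 1.06×10^16 GeV × (r/0.01)^(1/4), evaluated here at the quoted central value:

```python
# Energy scale of inflation implied by a tensor-to-scalar ratio r, using
# the standard single-field slow-roll normalization.

def inflation_energy_scale_gev(r):
    """Potential energy scale V^(1/4) in GeV for tensor-to-scalar ratio r."""
    return 1.06e16 * (r / 0.01) ** 0.25

print(f"V^(1/4) for r = 0.2: {inflation_energy_scale_gev(0.2):.2e} GeV")
```

The quarter-power dependence means even an order-of-magnitude change in r moves the inferred scale by less than a factor of two.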

This thesis describes my work on a balloon-borne polarimeter called SPIDER, which is designed to illuminate the physics of the early universe through measurements of the cosmic microwave background polarization. SPIDER consists of six single-frequency, on-axis refracting telescopes contained in a shared-vacuum liquid-helium cryostat. Its large format arrays of millimeter-wave detectors and tight control of systematics will give it unprecedented sensitivity. This thesis describes how the SPIDER detectors are characterized and calibrated for flight, as well as how the systematics requirements for the SPIDER system are simulated and measured.


Precision polarimetry of the cosmic microwave background (CMB) has become a mainstay of observational cosmology. The ΛCDM model predicts a polarization of the CMB at the level of a few μK, with a characteristic E-mode pattern. On small angular scales, a B-mode pattern arises from the gravitational lensing of E-mode power by the large-scale structure of the universe. Inflationary gravitational waves (IGW) may be a source of B-mode power on large angular scales, and their relative contribution to primordial fluctuations is parameterized by the tensor-to-scalar ratio r. BICEP2 and Keck Array are a pair of CMB polarimeters at the South Pole designed and built for optimal sensitivity to the primordial B-mode peak around multipole l ~ 100. The BICEP2/Keck Array program aims to achieve sensitivity to r ≥ 0.02. Auxiliary science goals include the study of the gravitational lensing of E-mode into B-mode signal at medium angular scales and a high-precision survey of Galactic polarization. These goals require low noise and tight control of systematics. We describe the design and calibration of the instrument, as well as the analysis of the first three years of science data. BICEP2 observes a significant B-mode signal at 150 GHz in excess of the level predicted by the lensed-ΛCDM model, and Keck Array confirms the excess signal at > 5σ. We combine the maps from the two experiments to produce 150 GHz Q and U maps with a depth of 57 nK deg (3.4 μK arcmin) over an effective area of 400 deg² for an equivalent survey weight of 248,000 μK⁻². We also show preliminary Keck Array 95 GHz maps. A joint analysis with the Planck collaboration reveals that much of BICEP2/Keck Array's observed 150 GHz signal at low l is more likely a Galactic dust foreground than a measurement of r. Marginalizing over dust and r, lensing B-modes are detected at 7.0σ significance.
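
The two map depths quoted above are the same number in different units; the conversion is a pure unit change (1 deg = 60 arcmin, 1 μK = 1000 nK):

```python
# Consistency check of the quoted map depth: 57 nK·deg vs 3.4 μK·arcmin.

def nk_deg_to_uk_arcmin(depth_nk_deg):
    """Convert a map noise level from nK·deg to μK·arcmin."""
    return depth_nk_deg * 1e-3 * 60.0   # nK -> μK, then deg -> arcmin

print(f"57 nK·deg = {nk_deg_to_uk_arcmin(57.0):.2f} μK·arcmin")
```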


In this paper, we attempt to detect the SZ effect in the 2MASS DWT clusters and in less bound objects, in order to constrain the distribution of the warm-hot intergalactic medium on large scales through cross-correlation analysis. The results from both the observed WMAP map and a mock SZ effect map indicate that the hot gas is distributed both inside and outside the high-density regions of galaxy clusters, consistent with the results of both observation and hydrodynamic simulation. The DWT measurement of the cross-correlation would therefore be a powerful tool for probing the missing baryons in the universe.


We study the non-Gaussianity induced by the Sunyaev-Zel'dovich (SZ) effect in cosmic microwave background (CMB) fluctuation maps. If a CMB map is contaminated by the SZ effect of galaxies or galaxy clusters, the CMB map should have non-Gaussian features similar to those of the galaxy and cluster fields. Using the WMAP data and the 2MASS galaxy catalogue, we show that the non-Gaussianity of the 2MASS galaxies is imprinted on the WMAP maps. The signature of non-Gaussianity can be seen in the fourth-order cross-correlation between the wavelet variables of the WMAP maps and 2MASS clusters. The intensity of the fourth-order non-Gaussian features is found to be consistent with contamination by the SZ effect of 2MASS galaxies. We also show that this non-Gaussianity cannot be seen in the high-order autocorrelations of the WMAP data. This is because the SZ signals in the autocorrelations of the WMAP data are generally weaker than the WMAP-2MASS cross-correlations by a factor f², which is the ratio between the power of the SZ-effect map and that of the CMB fluctuations on the scale considered. Therefore, the ratio of high-order autocorrelations of CMB maps to cross-correlations of the CMB maps and a galaxy field would be effective in constraining the power of the SZ effect on various scales.
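
The fourth-order wavelet cross-statistic described above can be illustrated with a toy 1-D example: a Gaussian "CMB" realization, a non-Gaussian "galaxy" field, and an assumed SZ-like decrement tracing the galaxies. Everything here is synthetic and the statistic is a simplified stand-in, not the paper's estimator:

```python
# Toy 1-D demo: contamination by a non-Gaussian template shows up in the
# fourth-order cross-moment of wavelet coefficients, while a clean Gaussian
# map does not. All fields and amplitudes are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 16384
galaxy = rng.normal(size=n) ** 2          # non-Gaussian (chi-square) field
cmb_clean = rng.normal(size=n)            # Gaussian CMB realization
cmb_contam = cmb_clean - 0.3 * galaxy     # SZ-like decrement traces galaxies

def haar_detail(x):
    """Finest-scale Haar wavelet (difference) coefficients."""
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def fourth_order_cross(a, b):
    """Normalized excess of <wa^2 wb^2> over the independent-field value."""
    wa, wb = haar_detail(a), haar_detail(b)
    return np.mean(wa**2 * wb**2) / (np.mean(wa**2) * np.mean(wb**2)) - 1.0

print("clean CMB x galaxies:     ", fourth_order_cross(cmb_clean, galaxy))
print("contaminated CMB x galaxies:", fourth_order_cross(cmb_contam, galaxy))
```

The clean map gives a statistic near zero, while the contaminated map shows a clear fourth-order excess, mirroring the WMAP-2MASS detection logic.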