154 results for TUNING RANGE
Abstract:
The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as the optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Exploiting such diverse target architectures typically requires developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes, ranging from short/medium length (e.g., 8,000 bit) to long-length (e.g., 64,800 bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors over conventional multicore CPUs.
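The retargeting idea lends itself to a short illustration. The Python sketch below (a hedged, assumption-laden illustration, not the paper's SOpenCL flow or its LDPC decoder) builds one unmodified OpenCL kernel source and runs it on whichever CPU and GPU devices the installed drivers expose; the kernel body is only a placeholder for a single decoding step (clamping log-likelihood ratios, a stand-in for the bitwidth exploration mentioned above), and the pyopencl usage is purely illustrative.

```python
# A hedged sketch (not the paper's SOpenCL flow or LDPC decoder): the same OpenCL
# kernel source is built and executed, unmodified, on whichever CPU and GPU devices
# the installed drivers expose. Requires pyopencl and numpy.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
// Placeholder for a single decoding step: clamp log-likelihood ratios to a
// configurable range, a stand-in for the bitwidth exploration discussed above.
__kernel void clamp_llr(__global const float *llr_in,
                        __global float *llr_out,
                        const float limit)
{
    int i = get_global_id(0);
    llr_out[i] = fmin(fmax(llr_in[i], -limit), limit);
}
"""

llr = (8.0 * np.random.randn(1024)).astype(np.float32)

for wanted in (cl.device_type.CPU, cl.device_type.GPU):
    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            if not (dev.type & wanted):
                continue
            ctx = cl.Context(devices=[dev])
            queue = cl.CommandQueue(ctx)
            prg = cl.Program(ctx, KERNEL_SRC).build()   # identical source on every device
            mf = cl.mem_flags
            buf_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=llr)
            buf_out = cl.Buffer(ctx, mf.WRITE_ONLY, llr.nbytes)
            prg.clamp_llr(queue, llr.shape, None, buf_in, buf_out, np.float32(4.0))
            out = np.empty_like(llr)
            cl.enqueue_copy(queue, out, buf_out)
            print(f"{dev.name}: clamped LLRs in [{out.min():.2f}, {out.max():.2f}]")
```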
Abstract:
Transportation accounts for 22% of greenhouse gas emissions in the UK, a figure that rises to 25% in Northern Ireland. Surface transport carbon dioxide emissions, consisting of road and rail, are dominated by cars. Demand for mobility is rising rapidly and vehicle numbers are expected to more than double by 2050. Car manufacturers are working towards reducing their carbon footprint through improving fuel efficiency and controlling exhaust emissions. Fuel efficiency is now a key consideration of consumers purchasing a new vehicle. While measures have been taken to help to reduce pollutants, in the future, alternative technologies will have to be used in the transportation industry to achieve sustainability. There are currently many alternatives to the market leader, the internal combustion engine. These alternatives include hydrogen fuel cell vehicles and electric vehicles, a term which is widely used to cover battery electric vehicles, plug-in hybrid electric vehicles and extended-range electric vehicles. This study draws direct comparisons measuring the differing performance in terms of fuel consumption, carbon emissions and range of a typical family saloon car using different fuel types. These comparisons are then analysed to see what effect switching from a conventionally fuelled vehicle to a range extended electric vehicle would have, not only on the end user, but also on the UK government.
Abstract:
Ceria (CeO2) and ceria-based composite materials, especially Ce1-xZrxO2 solid solutions, possess a wide range of applications in many important catalytic processes, such as three-way catalysts, owing to their excellent oxygen storage capacity (OSC) through oxygen vacancy formation and refilling. Much of this activity has focused on understanding the electronic and structural properties of defective CeO2 with and without doping, and comprehending the determining factor for oxygen vacancy formation and the rules for tuning the formation energy by doping has constituted a central issue in materials chemistry related to ceria. However, the calculation of electronic structures and the corresponding relaxation patterns in defective CeO2-x oxides remains a challenge within the DFT framework. A pragmatic approach based on density functional theory with the inclusion of an on-site Coulomb correction, i.e. the so-called DFT + U technique, has been extensively applied in the majority of recent theoretical investigations. Firstly, we briefly review the latest electronic structure calculations of defective CeO2(111), focusing on the phenomenon of multiple configurations of the localized 4f electrons, as well as discussions of its formation mechanism and catalytic role in activating the O2 molecule. Secondly, aiming to shed light on the doping effect on tuning oxygen vacancy formation in ceria-based solid solutions, we summarize recent theoretical results for Ce1-xZrxO2 solid solutions in terms of the effect of dopant concentrations and crystal phases. A general model of O vacancy formation is also discussed; it consists of electrostatic and structural relaxation terms, and the vital role of the latter is emphasized. In particular, we discuss the crucial role of the localized structural relaxation patterns in determining the superb oxygen storage capacity of kappa-phase Ce1-xZrxO2. Thirdly, we briefly discuss some interesting findings for oxygen vacancy formation in pure ceria nanoparticles (NPs) uncovered by DFT calculations and compare them with the bulk or extended surfaces of ceria as well as different particle sizes, emphasizing the role of the electrostatic field in determining O vacancy formation.
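For readers unfamiliar with the quantity being tuned, the LaTeX snippet below states the conventional definition of the oxygen vacancy formation energy together with the two-term split mentioned above; the notation is generic and may differ from the conventions of the individual studies reviewed.

```latex
% Conventional oxygen-vacancy formation energy (generic definition, not
% necessarily the exact convention of the reviewed works):
\begin{equation}
  E_{\mathrm{f}}[\mathrm{V_O}] =
  E\left(\mathrm{CeO}_{2-\delta}\right)
  + \frac{1}{2}\,E\left(\mathrm{O}_2\right)
  - E\left(\mathrm{CeO}_2\right)
\end{equation}
% The two-term model discussed in the text splits this into an electrostatic
% contribution and a structural-relaxation contribution:
\begin{equation}
  E_{\mathrm{f}}[\mathrm{V_O}] \approx E_{\mathrm{elst}} + \Delta E_{\mathrm{relax}}
\end{equation}
```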
Abstract:
Using low-energy electron-diffraction (LEED) formalism, we demonstrate theoretically that LEED I-V spectra are characterized mainly by short-range order. We also show experimentally that diffuse LEED (DLEED) I-V spectra can be accurately measured from a disordered system using a video-LEED system even at very low coverage. These spectra demonstrate that experimental DLEED I-V spectra from disordered systems may be used to determine local structures. As an example, it is shown that experimental DLEED I-V spectra from K/Co{101̄0} at potassium coverages of 0.07, 0.1, and 0.13 monolayer closely resemble calculated and experimental LEED I-V spectra for a well-ordered Co{101̄0}-c(2×2)-K superstructure, leading to the conclusion that at low coverages, potassium atoms are located in the fourfold-hollow sites and that there is no large bond-length change with coverage.
Abstract:
Three photocatalyst inks based on the redox dyes Resazurin (Rz), Basic Blue 66 (BB66) and Acid Violet 7 (AV7) are used to assess the photocatalytic activities of a variety of different materials, such as commercial paint, tiles and glass, and laboratory-made samples of sol–gel coated glass and paint, which collectively exhibit a wide range of activities that cannot currently be probed by any one of the existing ISO tests. Unlike the ISO tests, the ink tests are fast (typically <10 min), simple to employ and inexpensive. Previous work indicates that the Rz ink test at least correlates linearly with other photocatalytic tests such as the photomineralisation of stearic acid. The average time to bleach 90% of the key RGB colour component of the ink (red for the Rz and BB66 inks and green for the AV7 ink), ttb(90), is determined for eight samples of each of the different materials tested. Five laboratories conducted the tests and the results revealed an average repeatability and reproducibility of ca. 11% and ca. 21%, respectively, which compare well with those reported for the current ISO tests. Additional work on commercial self-cleaning glass using an Rz ink showed that the change in the red component of the RGB image of the ink correlated linearly with the change in absorbance at 608 nm, as measured using UV/vis spectroscopy, and with the change in the a* component of the L*a*b* colour analysis of the ink, as measured using diffuse reflectance spectroscopy. As a consequence, all three methods generate the same ttb(90). The advantages of the RGB digital image analysis method are discussed briefly.
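As a rough illustration of how a ttb(90) value could be extracted from timed photographs, the Python sketch below interpolates the time at which the monitored RGB channel has completed 90% of its total change; the function name, sampling interval and example values are hypothetical and do not reproduce the protocol used by the five laboratories.

```python
# A hedged sketch (not the ink-test protocol itself): estimate ttb(90) from a series
# of timed photographs by interpolating when the monitored RGB channel has completed
# 90% of its total change. Names, times and values are illustrative assumptions.
import numpy as np

def ttb(times_min, channel_values, fraction=0.90):
    """Time at which `fraction` of the total channel change has occurred."""
    t = np.asarray(times_min, dtype=float)
    c = np.asarray(channel_values, dtype=float)
    total_change = c[-1] - c[0]
    if total_change == 0:
        raise ValueError("channel did not change; ink appears inactive")
    progress = (c - c[0]) / total_change             # 0 at start, 1 at end
    return float(np.interp(fraction, progress, t))   # assumes monotonic bleaching

# Example: mean red value of an Rz ink spot photographed every 2 minutes
times = [0, 2, 4, 6, 8, 10]
red   = [90, 120, 160, 195, 215, 220]
print(f"ttb(90) ≈ {ttb(times, red):.1f} min")
```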
Abstract:
New independent dating evidence is presented for a lacustrine record for which an age-depth model had already been derived through interpretation of the pollen signal. Quartz OSL ages support radiocarbon ages that were previously considered to suffer from underestimation due to contamination, and imply a younger chronology for the core. The successful identification of the Campanian Ignimbrite as a cryptotephra within the core also validates this younger chronology, as well as extending the known geographical range of this tephra layer within Italy. These new results suggest that care should always be taken when building chronologies from proxy records that are correlated to the tuned records from which the global signal is often derived (i.e. double tuning). We do not offer this as the definitive chronology for Lake Fimon, but multiple lines of dating evidence show that there is sufficient reason to consider it seriously. The Quaternary dating community should always have all age information available, even when significant temporal offsets are apparent between the various lines of evidence, in order to: 1) be better informed when facing similar dilemmas in the future and 2) allow multiple working hypotheses to be considered.
Abstract:
The objective of this study is to provide an alternative model approach, i.e., an artificial neural network (ANN) model, to predict the compositional viscosity of binary mixtures of room-temperature ionic liquids (ILs) [Cn-mim][NTf2] with n = 4, 6, 8, 10 in methanol and ethanol over the entire range of molar fraction and over a broad range of temperatures from T = 293.0 to 328.0 K. The results show that the proposed ANN model provides an alternative way to predict compositional viscosity successfully with highly improved accuracy, and also show its potential to be utilized extensively to predict compositional viscosity over a wide range of temperatures and for more complex compositions, i.e., systems with more complex intermolecular interactions between components for which it would be hard or impossible to establish an analytical model. © 2010 IEEE.
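A minimal sketch of the kind of model described, under stated assumptions: a small feed-forward network mapping (alkyl chain length n, temperature, IL molar fraction) to mixture viscosity, built with scikit-learn. The architecture, input encoding and training values below are placeholders, not the authors' network or data.

```python
# Illustrative only: a small feed-forward ANN for compositional viscosity.
# Inputs: alkyl chain length n, temperature T (K), IL molar fraction x_IL.
# The training rows and target values are placeholders, not measured data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.array([[4, 298.0, 0.2], [4, 318.0, 0.8], [6, 303.0, 0.5],
              [8, 293.0, 1.0], [10, 328.0, 0.1], [10, 308.0, 0.6]], dtype=float)
y = np.array([5.1, 18.0, 14.2, 120.0, 6.3, 35.0])   # placeholder viscosities, mPa·s

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[6, 313.0, 0.4]]))   # predicted viscosity for an unseen point
```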
Abstract:
To determine the effect of microbial metabolites on the release of root exudates from perennial ryegrass, seedlings were pulse-labelled with [14C]-CO2 in the presence of a range of soil micro-organisms. Microbial inoculants were spatially separated from roots by Millipore membranes so that root infection did not occur. Using this technique, only microbial metabolites affected root exudation. The effect of microbial metabolites on carbon assimilation and distribution and on root exudation was determined for 15 microbial species. Assimilation of the pulse label varied more than 3.5-fold, depending on the inoculant. Distribution of the label between roots and shoots also varied with inoculant, but the carbon pool that was most sensitive to inoculation was root exudation. In the absence of a microbial inoculant only 1% of the assimilated label was exuded. Inoculation of the microcosms always caused an increase in exudation, but the percentage exuded varied greatly, within the range of 3-34%. © 1995 Kluwer Academic Publishers.
Abstract:
Quasi-phase matching (QPM) can be used to increase the conversion efficiency of the high harmonic generation (HHG) process. We observed QPM with an improved dual-gas foil target using a 1 kHz, 10 mJ, 30 fs laser system. Phase tuning and enhancement were possible within a spectral range from 17 nm to 30 nm. Furthermore, analytical calculations and numerical simulations were carried out to distinguish QPM from other effects, such as the influence of adjacent jets on each other or the laser–gas interaction. The simulations were performed with a three-dimensional code to investigate the phase matching of the short and long trajectories individually over a large spectral range.
Abstract:
Wilms' tumor gene 1 (WT1) is overexpressed in the majority (70-90%) of acute leukemias and has been identified as an independent adverse prognostic factor, a convenient minimal residual disease (MRD) marker and a potential therapeutic target in acute leukemia. We examined WT1 expression patterns in childhood acute lymphoblastic leukemia (ALL), where its clinical implication remains unclear. Using a real-time quantitative PCR designed according to Europe Against Cancer Program recommendations, we evaluated WT1 expression in 125 consecutively enrolled patients with childhood ALL (106 BCP-ALL, 19 T-ALL) and compared it with physiologic WT1 expression in normal and regenerating bone marrow (BM). In childhood B-cell precursor (BCP)-ALL, we detected a wide range of WT1 levels (5 logs) with a median WT1 expression close to that of normal BM. WT1 expression in childhood T-ALL was significantly higher than in BCP-ALL (P<0.001). Patients with the MLL-AF4 translocation showed high WT1 overexpression (P<0.01) compared to patients with other or no chromosomal aberrations. Older children (≥10 years) expressed higher WT1 levels than children under 10 years of age (P<0.001), while there was no difference in WT1 expression between patients with a peripheral blood leukocyte count (WBC) ≥50 × 10⁹/l and those with lower counts. Analysis of relapsed cases (14/125) indicated that an abnormal increase or decrease in WT1 expression was associated with a significantly increased risk of relapse (P=0.0006), and this prognostic impact of WT1 was independent of other main risk factors (P=0.0012). In summary, our study suggests that WT1 expression in childhood ALL is very variable and much lower than in AML or adult ALL. WT1 will thus not be a useful marker for MRD detection in childhood ALL; however, it does represent a potential independent risk factor in childhood ALL. Interestingly, a proportion of childhood ALL patients express WT1 at levels below the normal physiological BM WT1 expression, and this reduced WT1 expression appears to be associated with a higher risk of relapse.
Abstract:
Transport accounts for 22% of greenhouse gas emissions in the United Kingdom and car numbers are expected to more than double by 2050. Car manufacturers are continually aiming for a substantially reduced carbon footprint through improved fuel efficiency and better powertrain performance due to the strict European Union emissions standards. However, road tax, not just fuel efficiency, is a key consideration of consumers when purchasing a car. While measures have been taken to reduce emissions through stricter standards, in future, alternative technologies will be used. Electric vehicles, hybrid vehicles and range extended electric vehicles have been identified as some of these future technologies. In this research a virtual test bed of a conventional internal combustion engine and a range extended electric vehicle family saloon car were built in AVL’s vehicle and powertrain system level simulation tool, CRUISE, to simulate the New European Drive Cycle, and the results were then soft-linked to a techno-economic model to compare the effectiveness of current support mechanisms over the full life cycle of both cars. The key finding indicates that although carbon emissions are substantially reduced, switching is still not financially the best option for either the consumer or the government in the long run.
Abstract:
An organism’s home range dictates the spatial scale on which important processes occur (e.g. competition and predation) and directly affects the relationship between individual fitness and local habitat quality. Many reef fish species have very restricted home ranges after settlement and, here, we quantify home-range size in juveniles of a widespread and abundant reef fish in New Zealand, the common triplefin (Forsterygion lapillum). We conducted visual observations on 49 juveniles (mean size = 35-mm total length) within the Wellington harbour, New Zealand. Home ranges were extremely small, 0.053 m2 ± 0.029 (mean ± s.d.), and were unaffected by adult density, body size or substrate composition. A regression tree indicated that home-range size decreased sharply at ~4.5 juveniles m–2, and a linear mixed model confirmed that home-range sizes in high-density areas (>4.5 juveniles m–2) were significantly smaller (by 34%) than those in low-density areas (after accounting for a significant effect of fish movement on our home-range estimates). Our results suggest that conspecific density may have negative and non-linear effects on home-range size, which could shape the spatial distribution of juveniles within a population, as well as influence individual fitness across local density gradients.
Abstract:
In the reinsurance market, the risks natural catastrophes pose to portfolios of properties must be quantified, so that they can be priced, and insurance offered. The analysis of such risks at a portfolio level requires a simulation of up to 800 000 trials with an average of 1000 catastrophic events per trial. This is sufficient to capture risk for a global multi-peril reinsurance portfolio covering a range of perils including earthquake, hurricane, tornado, hail, severe thunderstorm, wind storm, storm surge and riverine flooding, and wildfire. Such simulations are both computation and data intensive, making the application of high-performance computing techniques desirable.
In this paper, we explore the design and implementation of portfolio risk analysis on both multi-core and many-core computing platforms. Given a portfolio of property catastrophe insurance treaties, key risk measures, such as probable maximum loss, are computed by taking both primary and secondary uncertainties into account. Primary uncertainty is associated with whether or not an event occurs in a simulated year, while secondary uncertainty captures the uncertainty in the level of loss due to the use of simplified physical models and limitations in the available data. A combination of fast lookup structures, multi-threading and careful hand tuning of numerical operations is required to achieve good performance. Experimental results are reported for multi-core processors and for systems using NVIDIA graphics processing units and Intel Xeon Phi many-core accelerators.
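A much-simplified sketch of the simulation structure described above, with assumed distributions and rates: each simulated year draws an event count (primary uncertainty) and a loss per event (secondary uncertainty), and the probable maximum loss is read off as a high quantile of the resulting annual-loss distribution.

```python
# A hedged sketch (far simpler than the paper's engine): Monte Carlo year-loss
# simulation. Primary uncertainty = whether/how often events occur in a year;
# secondary uncertainty = the loss level given an event. The Poisson rate,
# lognormal parameters and the 99.5% return level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000            # simulated years (the paper uses up to 800 000)
event_rate = 3.0              # assumed mean number of catastrophic events per year

annual_loss = np.zeros(n_trials)
n_events = rng.poisson(event_rate, size=n_trials)        # primary uncertainty
for i, k in enumerate(n_events):
    if k:
        # secondary uncertainty: lognormal loss per event, arbitrary units
        annual_loss[i] = rng.lognormal(mean=2.0, sigma=1.0, size=k).sum()

pml_995 = np.quantile(annual_loss, 0.995)   # probable maximum loss, 1-in-200-year level
aal = annual_loss.mean()                    # average annual loss
print(f"AAL = {aal:.2f}, PML(99.5%) = {pml_995:.2f}")
```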
Abstract:
Context. The ESA Rosetta spacecraft, currently orbiting around comet 67P/Churyumov-Gerasimenko, has already provided in situ measurements of the dust grain properties from several instruments, particularly OSIRIS and GIADA. We propose adding value to those measurements by combining them with ground-based observations of the dust tail to monitor the overall, time-dependent dust-production rate and size distribution.
Aims. To constrain the dust grain properties, we take Rosetta OSIRIS and GIADA results into account, and combine OSIRIS data during the approach phase (from late April to early June 2014) with a large data set of ground-based images that were acquired with the ESO Very Large Telescope (VLT) from February to November 2014.
Methods. A Monte Carlo dust tail code, which has already been used to characterise the dust environments of several comets and active asteroids, has been applied to retrieve the dust parameters. Key properties of the grains (density, velocity, and size distribution) were obtained from Rosetta observations: these parameters were used as input of the code to considerably reduce the number of free parameters. In this way, the overall dust mass-loss rate and its dependence on the heliocentric distance could be obtained accurately.
Results. The dust parameters derived from the inner coma measurements by OSIRIS and GIADA and from distant imaging using VLT data are consistent, except for the power index of the size-distribution function, which is α = −3, instead of α = −2, for grains smaller than 1 mm. This is possibly linked to the presence of fluffy aggregates in the coma. The onset of cometary activity occurs at approximately 4.3 AU, with a dust production rate of 0.5 kg/s, increasing up to 15 kg/s at 2.9 AU. This implies a dust-to-gas mass ratio varying between 3.8 and 6.5 for the best-fit model when combined with water-production rates from the MIRO experiment.
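To illustrate one ingredient of such Monte Carlo dust-tail modelling, the sketch below draws grain radii from a power-law size distribution n(r) ∝ r^α with α = −3 by inverse-transform sampling; the size limits, grain bulk density and sample count are assumptions, and the snippet is not the authors' code.

```python
# A minimal sketch (not the authors' Monte Carlo dust-tail code): inverse-transform
# sampling of grain radii from n(r) ∝ r**alpha on [r_min, r_max], as used to weight
# grains in tail models. Size limits and grain density below are assumed values.
import numpy as np

def sample_powerlaw_radii(alpha, r_min, r_max, n, rng):
    """Draw n radii from n(r) ∝ r**alpha on [r_min, r_max] (alpha != -1)."""
    u = rng.random(n)
    p = alpha + 1.0
    return (r_min**p + u * (r_max**p - r_min**p)) ** (1.0 / p)

rng = np.random.default_rng(0)
radii = sample_powerlaw_radii(alpha=-3.0, r_min=1e-6, r_max=1e-3, n=100_000, rng=rng)

rho = 800.0                                   # assumed grain bulk density, kg m^-3
mass = (4.0 / 3.0) * np.pi * rho * radii**3   # mass of each sampled grain
print(f"median radius = {np.median(radii):.2e} m, total sampled mass = {mass.sum():.3e} kg")
```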