950 results for Tuning.
Abstract:
The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Exploiting diverse target architectures typically means developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes that range from short/medium length (e.g., 8,000-bit) to long length (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
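As a minimal illustration of the single-source retargeting idea, the PyOpenCL sketch below runs one unchanged kernel on every OpenCL device the host exposes (multicore CPU, GPU, ...); the FPGA path in the paper goes through SOpenCL's offline kernel-to-RTL conversion instead. The trivial `scale` kernel and all host-side names here are ours for illustration; the actual LDPC decoding kernel is not reproduced.

```python
# Minimal PyOpenCL sketch: one unchanged kernel retargeted across devices.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void scale(__global const float *x, __global float *y, const float s)
{
    int i = get_global_id(0);
    y[i] = s * x[i];   // identical source for every OpenCL target
}
"""

x = np.random.rand(1 << 20).astype(np.float32)
y = np.empty_like(x)

# Enumerate every available OpenCL device and run the identical kernel on
# each, without modification or per-target tuning.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        ctx = cl.Context([dev])
        queue = cl.CommandQueue(ctx)
        prg = cl.Program(ctx, KERNEL_SRC).build()
        mf = cl.mem_flags
        x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
        y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, y.nbytes)
        prg.scale(queue, x.shape, None, x_buf, y_buf, np.float32(2.0))
        cl.enqueue_copy(queue, y, y_buf)
        print(dev.name, np.allclose(y, 2.0 * x))
```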
Abstract:
Highly efficient In2O3-Co3O4 catalysts were prepared for ultralow-temperature CO oxidation by simultaneously tuning the CO adsorption strength and oxygen activation over the Co3O4 surface; these catalysts completely convert CO to CO2 at temperatures as low as -105 °C, compared to -40 °C over pure Co3O4, with enhanced stability.
Abstract:
Ceria (CeO2) and ceria-based composite materials, especially Ce1-xZrxO2 solid solutions, possess a wide range of applications in many important catalytic processes, such as three-way catalysts, owing to their excellent oxygen storage capacity (OSC) through oxygen vacancy formation and refilling. Much of this activity has focused on understanding the electronic and structural properties of defective CeO2 with and without doping, and identifying the factors that determine oxygen vacancy formation, as well as the rules for tuning the formation energy by doping, has constituted a central issue in the materials chemistry of ceria. However, the calculation of electronic structures and the corresponding relaxation patterns in defective CeO2-x oxides remains at present a challenge within the DFT framework. A pragmatic approach based on density functional theory with the inclusion of an on-site Coulomb correction, i.e. the so-called DFT + U technique, has been extensively applied in the majority of recent theoretical investigations. Firstly, we briefly review the latest electronic structure calculations of defective CeO2(111), focusing on the phenomenon of multiple configurations of the localized 4f electrons, as well as discussions of its formation mechanism and its catalytic role in activating the O2 molecule. Secondly, aiming to shed light on the doping effect on tuning oxygen vacancy formation in ceria-based solid solutions, we summarize recent theoretical results on Ce1-xZrxO2 solid solutions in terms of the effect of dopant concentrations and crystal phases. A general model of O vacancy formation is also discussed; it consists of electrostatic and structural relaxation terms, and the vital role of the latter is emphasized. In particular, we discuss the crucial role of the localized structural relaxation patterns in determining the superb oxygen storage capacity of kappa-phase Ce1-xZrxO2. Thirdly, we briefly discuss some interesting findings for oxygen vacancy formation in pure ceria nanoparticles (NPs) uncovered by DFT calculations and compare them with the bulk or extended surfaces of ceria as well as different particle sizes, emphasizing the role of the electrostatic field in determining O vacancy formation.
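As a schematic of the general vacancy-formation model mentioned above, the formation energy can be written as the standard DFT total-energy difference and then decomposed into electrostatic and relaxation contributions. The notation below is ours, for illustration; it is not the authors' own.

```latex
% Standard DFT definition of the O-vacancy formation energy, followed by the
% schematic electrostatic-plus-relaxation decomposition discussed in the text.
\begin{equation}
  E_{\mathrm{f}}[\mathrm{V_O}]
    = E_{\mathrm{defect}} + \tfrac{1}{2}\,E_{\mathrm{O_2}} - E_{\mathrm{stoich}}
    \approx E_{\mathrm{elec}} - \Delta E_{\mathrm{relax}}
\end{equation}
```

Here E_elec collects the electrostatic cost of removing the oxygen ion and ΔE_relax is the energy recovered by local structural relaxation, the term whose localized patterns are argued to dominate in the kappa phase.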
Abstract:
New independent dating evidence is presented for a lacustrine record for which an age-depth model had already been derived through interpretation of the pollen signal. Quartz OSL ages support radiocarbon ages that were previously considered to suffer from underestimation due to contamination, and imply a younger chronology for the core. The successful identification of the Campanian Ignimbrite as a cryptotephra within the core also validates this younger chronology, as well as extending the known geographical range of this tephra layer within Italy. These new results suggest that care should always be taken when building chronologies from proxy records that are correlated to the tuned records from which the global signal is often derived (i.e. double tuning). We do not offer this as the definitive chronology for Lake Fimon, but multiple lines of dating evidence show that there is sufficient reason to consider it seriously. The Quaternary dating community should always have all age information available, even when significant temporal offsets are apparent between the various lines of evidence, in order to: 1) be better informed when facing similar dilemmas in the future and 2) allow multiple working hypotheses to be considered.
Abstract:
Modern control methods like optimal control and model predictive control (MPC) provide a framework for simultaneously regulating tracking performance and limiting control energy, and have thus been widely deployed in industrial applications. Yet, owing to their simplicity and robustness, conventional P (Proportional) and PI (Proportional–Integral) controllers are still the most common methods used in many engineering systems, such as electric power systems, automotive systems, and Heating, Ventilation and Air Conditioning (HVAC) for buildings, where energy efficiency and energy saving are critical issues. However, little has been done so far to explore the effect of their parameter tuning on both system performance and control energy consumption, and how these two objectives are correlated within the P and PI control framework. In this paper, P and PI controllers are designed with simultaneous consideration of these two aspects. Two case studies are investigated in detail: the control of Voltage Source Converters (VSCs) for transmitting offshore wind power to the onshore AC grid through High Voltage DC links, and the control of HVAC systems. Results reveal that a better trade-off between tracking performance and control energy can be achieved through a proper choice of the P and PI controller parameters.
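The tracking-versus-energy trade-off can be illustrated with a minimal sketch: a PI controller on a first-order plant, sweeping the proportional gain and recording both the integrated squared error and the control energy. The plant, gain values, and cost metrics below are our assumptions for illustration; the paper's VSC/HVDC and HVAC case studies are far more detailed.

```python
# Sketch of the tracking-vs-control-energy trade-off in PI tuning.
def simulate(kp, ki, tau=1.0, dt=1e-3, T=10.0, r=1.0):
    """Unit-step tracking of a first-order plant under PI control."""
    x, integ = 0.0, 0.0
    ise, energy = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = r - x
        integ += e * dt
        u = kp * e + ki * integ        # PI control law
        x += dt * (-x + u) / tau       # plant: tau * dx/dt = -x + u
        ise += e * e * dt              # tracking cost (integral squared error)
        energy += u * u * dt           # control-energy cost
    return ise, energy

# Sweeping the proportional gain exposes the trade-off: aggressive gains
# track better but spend more control energy on the initial transient.
for kp in (0.5, 2.0, 8.0):
    ise, en = simulate(kp, ki=1.0)
    print(f"kp={kp:4.1f}  ISE={ise:6.3f}  energy={en:6.3f}")
```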
Abstract:
The two families of fluorescent PET (photoinduced electron transfer) sensors (1-9) show that the effective proton density near the surface of several micelle membranes changes over 2-3 orders of magnitude as the microlocation of the sensor (with respect to the membrane) is altered via hydrophobic tuning.
Abstract:
Background
Therapist responses to initial shame disclosure in therapy have received little empirical attention.
Aim
This study explored different therapeutic responses to shame disclosures in terms of their perceived helpfulness. Responses ranged from complete withdrawal from the feeling (withdrawal) to completely tuning into it (non-withdrawal). Given the tendency of shame to evoke avoidance, participants higher on shame-proneness (as measured by The Experience of Shame Scale) were expected to perceive withdrawal responses to shame as more helpful than non-withdrawal responses.
Methodology
Fifty-five non-clinical participants were assessed for shame-proneness before viewing videos of mock therapy sessions showing clients either disclosing shame (two videos) or shock (control condition). Participants then rated the helpfulness of different therapist responses. The responses differed in the degree they allowed the client to withdraw from their emotions.
Results
High shame-proneness was associated with rating withdrawal responses to shame as least helpful. Overall, neither the withdrawal response nor the non-withdrawal response was rated as particularly helpful. The therapeutic response which addressed management strategies when shame is initially experienced in therapy was deemed most helpful.
Conclusion
Despite the tendency to withdraw from shame feelings, this response is not deemed helpful in therapy.
Abstract:
Quasi-phase matching (QPM) can be used to increase the conversion efficiency of the high harmonic generation (HHG) process. We observed QPM with an improved dual-gas foil target using a 1 kHz, 10 mJ, 30 fs laser system. Phase tuning and enhancement were possible within a spectral range from 17 nm to 30 nm. Furthermore, analytical calculations and numerical simulations were carried out to distinguish QPM from other effects, such as the influence of adjacent jets on each other or the laser-gas interaction. The simulations were performed with a three-dimensional code to investigate the phase matching of the short and long trajectories individually over a large spectral range.
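For context, the textbook phase-matching relations behind QPM in HHG are the following (standard definitions, not taken from this paper):

```latex
% Phase mismatch and coherence length for the q-th harmonic: standard
% quasi-phase-matching relations, not the paper's derivation.
\begin{equation}
  \Delta k_q = q\,k_{\mathrm{laser}} - k_q,
  \qquad
  L_{\mathrm{coh}} = \frac{\pi}{\lvert \Delta k_q \rvert}
\end{equation}
```

Emission from a given trajectory grows only over one coherence length before interfering destructively; modulating the source, here the alternating jets of the dual-gas foil target, with a period of roughly 2 L_coh restores net growth.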
Abstract:
It has been found that the catalytic activity and selectivity of a metal film deposited on a solid electrolyte can be enhanced dramatically and reversibly by applying an electrical current or potential between the metal catalyst and a counter electrode (also deposited on the electrolyte). This phenomenon is known as NEMCA [S. Bebelis, C.G. Vayenas, Journal of Catalysis, 118 (1989) 125-146] or electrochemical promotion (EP) of catalysis [J. Pritchard, Nature, 343 (1990) 592]. Yttria-doped barium zirconate, BaZr0.9Y0.1O3-α (BZY), a known proton conductor, has been used in this study. It has been reported that proton-conducting perovskites can, under the appropriate conditions, also act as oxide ion conductors. In mixed conducting systems the mechanism of conduction depends upon the gas atmosphere to which the material is exposed. Therefore, the use of a mixed ionic (oxide ion and proton) conducting membrane as a support for a platinum catalyst may facilitate the tuning of the promotional behaviour of the catalyst by allowing control of the conduction mechanism of the electrolyte. The conductivity of BZY under different atmospheres was measured and the presence of oxide ion conduction under the appropriate conditions was confirmed. Moreover, kinetic experiments on ethylene oxidation corroborated the findings from the conductivity measurements, showing that the use of a mixed ionic conductor allows for tuning of the reaction rate.
Abstract:
The North Atlantic has played a key role in abrupt climate changes due to the sensitivity of the Atlantic Meridional Overturning Circulation (AMOC) to the location and strength of deep water formation. Understanding the role of the AMOC in the rapid warming and gradual cooling cycles known as Dansgaard-Oeschger (DO) events, which are recorded in the Greenland ice cores, is crucial for modelling future climate change. However, palaeoceanographic research into DO events has been hampered by uncertainty in timing, due largely to the lack of a precise chronological time frame for marine records. While tephrochronology provides links to the Greenland ice core records at a few points, radiocarbon remains the primary dating method for most marine cores. Due to variations in the atmospheric and oceanic 14C concentration, radiocarbon ages must be calibrated to provide calendric ages. The IntCal Working Group provides a global estimate of ocean 14C ages for calibration of marine radiocarbon dates, but the variability of the surface marine reservoir age in the North Atlantic, particularly during Heinrich or DO events, makes calibration uncertain. In addition, the current Marine09 radiocarbon calibration beyond around 15 ka BP is largely based on 'tuning' to the Hulu Cave isotope record, so the timing of events may not be entirely synchronous with the Greenland ice cores. The use of event stratigraphy and independent chronological markers such as tephra provides scope to improve marine radiocarbon reservoir age estimates, particularly in the North Atlantic, where a number of tephra horizons have been identified in both marine sediments and the Greenland ice cores. Quantification of timescale uncertainties is critical, but statistical techniques which can take into account the differential dating between events can improve precision. Such techniques should make it possible to develop specific marine calibration curves for selected regions.
Abstract:
The spectral sensitivity of visual pigments in vertebrate eyes is optimized for specific light conditions. One such pigment, rhodopsin (RH1), mediates dim-light vision. Amino acid replacements at tuning sites may alter spectral sensitivity, providing a mechanism for fish to adapt to ambient light conditions and depth of habitat. Here we present a first investigation of RH1 gene polymorphism in two ecotypes of Atlantic cod in Icelandic waters, which experience divergent light environments throughout the year due to alternative foraging behaviour. We identified one synonymous single nucleotide polymorphism (SNP) in the RH1 protein-coding region and one in the 3' untranslated region (3'-UTR) that are strongly divergent between these two ecotypes. Moreover, these polymorphisms coincided with the well-known pantophysin (Pan I) polymorphism that differentiates coastal and frontal (migratory) populations of Atlantic cod. While the RH1 SNPs do not provide direct inference of a specific molecular mechanism, their association with this dim-light-sensitive pigment indicates the involvement of the visual system in the local adaptation of Atlantic cod.
Abstract:
In the reinsurance market, the risks natural catastrophes pose to portfolios of properties must be quantified, so that they can be priced, and insurance offered. The analysis of such risks at a portfolio level requires a simulation of up to 800 000 trials with an average of 1000 catastrophic events per trial. This is sufficient to capture risk for a global multi-peril reinsurance portfolio covering a range of perils including earthquake, hurricane, tornado, hail, severe thunderstorm, wind storm, storm surge and riverine flooding, and wildfire. Such simulations are both computation and data intensive, making the application of high-performance computing techniques desirable.
In this paper, we explore the design and implementation of portfolio risk analysis on both multi-core and many-core computing platforms. Given a portfolio of property catastrophe insurance treaties, key risk measures, such as probable maximum loss, are computed by taking both primary and secondary uncertainties into account. Primary uncertainty is associated with whether or not an event occurs in a simulated year, while secondary uncertainty captures the uncertainty in the level of loss due to the use of simplified physical models and limitations in the available data. A combination of fast lookup structures, multi-threading and careful hand tuning of numerical operations is required to achieve good performance. Experimental results are reported for multi-core processors and for systems using NVIDIA graphics processing units and Intel Xeon Phi many-core accelerators.
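A minimal Monte Carlo sketch of this two-level uncertainty structure follows. The event-count and loss distributions, their parameters, and the 250-year return period are our assumptions for illustration; the paper's engine uses real event catalogues and treaty terms.

```python
# Monte Carlo sketch: primary uncertainty (does an event occur in a
# simulated year?) and secondary uncertainty (how large is the loss,
# given occurrence?), with PML read off the simulated loss distribution.
import numpy as np

rng = np.random.default_rng(seed=0)
N_TRIALS = 10_000        # simulated years (the paper runs up to 800 000)
MEAN_EVENTS = 1000       # average catastrophic events per trial

def simulate_year():
    n_events = rng.poisson(MEAN_EVENTS)                          # primary
    losses = rng.lognormal(mean=10.0, sigma=1.5, size=n_events)  # secondary
    return losses.sum()

annual_losses = np.fromiter(
    (simulate_year() for _ in range(N_TRIALS)), dtype=np.float64, count=N_TRIALS
)

# Probable maximum loss (PML) at a 250-year return period is the
# (1 - 1/250) quantile of the simulated annual-loss distribution.
pml_250 = np.quantile(annual_losses, 1.0 - 1.0 / 250.0)
print(f"PML at 250-year return period: {pml_250:,.0f}")
```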
Abstract:
Generating timetables for an institution is a challenging and time-consuming task due to different demands on the overall structure of the timetable. In this paper, a new hybrid method combining a great deluge and artificial bee colony algorithm (INMGD-ABC) is proposed to address the university timetabling problem. The artificial bee colony algorithm (ABC) is a population-based method that has been introduced in recent years and has proven successful in solving various optimization problems effectively. However, as with many search-based approaches, it has weaknesses in its exploration and exploitation abilities, which tend to induce slow convergence of the overall search process. Therefore, hybridization is proposed to compensate for the identified weaknesses of the ABC. Also, inspired by imperialist competitive algorithms, an assimilation policy is implemented in order to improve the global exploration ability of the ABC algorithm. In addition, the Nelder–Mead simplex search method is incorporated within the great deluge algorithm (NMGD) with the aim of enhancing the exploitation ability of the hybrid method in fine-tuning the problem search region. The proposed method is tested on two different benchmark datasets, i.e. examination and course timetabling datasets. A statistical t-test shows that the performance of the proposed approach is significantly better than that of the basic ABC algorithm. Finally, the experimental results are compared against state-of-the-art methods in the literature; the results obtained are competitive and, in certain cases, achieve some of the current best results in the literature.
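A minimal sketch of the great-deluge acceptance rule at the heart of such hybrids follows (minimisation form). The toy objective, the random-perturbation move, and the multiplicative decay schedule are our simplifications; the Nelder–Mead and ABC couplings of the paper are not shown.

```python
# Great-deluge local search: accept improving moves, or worsening moves
# that stay below a slowly falling "water level".
import random

def great_deluge(initial, cost, neighbour, decay=0.999, iters=50_000):
    current, c_current = initial, cost(initial)
    best, c_best = current, c_current
    level = c_current                      # the initial water level
    for _ in range(iters):
        cand = neighbour(current)
        c_cand = cost(cand)
        if c_cand <= c_current or c_cand <= level:
            current, c_current = cand, c_cand
            if c_current < c_best:
                best, c_best = current, c_current
        level *= decay                     # the water level drops over time
    return best, c_best

# Toy usage: minimise a 1-D quadratic with random perturbation moves.
f = lambda x: (x - 3.0) ** 2 + 1.0
move = lambda x: x + random.uniform(-0.5, 0.5)
print(great_deluge(0.0, f, move))          # converges near x = 3, cost = 1
```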
Abstract:
Trends and foci of interest in atomic modelling and data are identified in connection with recent observations and experiments in fusion and astrophysics. In the fusion domain, spectral observations of core, beam-penetrated and divertor plasmas are included. The helium beam experiments at JET and the studies with very heavy species at ASDEX and JET are noted. In the astrophysics domain, illustrations are given from the SOHO and CHANDRA spacecraft, spanning from the solar upper atmosphere, through soft x-rays from comets, to supernova remnants. It is shown that non-Maxwellian, dynamic and possibly optically thick regimes must be considered. The generalized collisional-radiative model properly describes the collisional regime of most astrophysical and laboratory fusion plasmas and yields self-consistent derived data for spectral emission, power balance and ionization state studies. The tuning of this method to routine analysis of spectral observations is described. A forward look is taken at how such atomic modelling, and the atomic data which underpin it, ought to evolve to deal with the extended conditions and novel environments of the illustrations. It is noted that atomic physics influences most aspects of fusion and astrophysical plasma behaviour, but the effectiveness of analysis depends on the quality of the bi-directional pathway from fundamental data production through atomic/plasma model development to confrontation with experiment. The principal atomic data capability at JET, and at other fusion and astrophysical laboratories, is supplied via the Atomic Data and Analysis Structure (ADAS) Project. The close ties between the various experiments and ADAS have helped in this path of communication.
Abstract:
Energy consumption is an important concern in modern multicore processors. The energy consumed by a multicore processor during the execution of an application can be minimized by tuning the hardware state using knobs such as frequency and voltage. The existing theoretical work on energy minimization using global DVFS (Dynamic Voltage and Frequency Scaling), despite being thorough, ignores the time and energy consumed by the CPU on memory accesses and the dynamic energy consumed by the idle cores. This article presents an analytical energy-performance model for parallel workloads that accounts for the time and energy consumed by the CPU chip on memory accesses in addition to the time and energy consumed by the CPU on CPU instructions. The model also accounts for the dynamic energy consumed by the idle cores. The existing work on global DVFS for parallel workloads shows that using a single frequency for the entire duration of a parallel application is not energy optimal, and that varying the frequency according to changes in the parallelism of the workload can save energy. We present an analytical framework around our energy-performance model to predict the operating frequencies (which depend upon the amount of parallelism) for global DVFS that minimize the overall CPU energy consumption. We show how the optimal frequencies in our model differ from those in a model that does not account for memory accesses, and how the memory intensity of an application affects the optimal frequencies.
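A minimal numerical sketch of the idea follows: runtime splits into a frequency-scaled CPU part and a frequency-independent memory part, and the chip draws dynamic plus static power for the whole runtime, so the memory term shifts the energy-optimal frequency. All constants and functional forms below are our assumptions for illustration, not the authors' exact model.

```python
# Illustrative energy-performance model: spinning at high frequency during
# memory stalls wastes dynamic energy, which lowers the optimal frequency.
import numpy as np

W = 1e9         # CPU cycles of work
M = 0.4         # seconds spent on memory accesses (does not scale with f)
A = 1e-27       # dynamic-power coefficient: P_dyn ~ A * f**3
P_STATIC = 5.0  # static plus idle-core power, in watts

def energy(f, mem=M):
    runtime = W / f + mem                    # seconds at frequency f (Hz)
    return (A * f**3 + P_STATIC) * runtime   # joules over the whole run

freqs = np.linspace(0.5e9, 4.0e9, 2000)
f_mem = freqs[np.argmin(energy(freqs))]
f_nomem = freqs[np.argmin(energy(freqs, mem=0.0))]
print(f"optimal frequency with memory term: {f_mem / 1e9:.2f} GHz")
print(f"optimal frequency ignoring memory:  {f_nomem / 1e9:.2f} GHz")
```

In this toy model the memory-aware optimum sits below the memory-oblivious one, mirroring the article's point that ignoring memory accesses biases the predicted optimal frequencies.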