975 results for Tuning.


Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates the hydrodynamics of a small, seabed-mounted, bottom-hinged wave energy converter in shallow water. The Oscillating Wave Surge Converter (OWSC) is a pitching, flap-type device located in 10–15 m of water to take advantage of the amplification of horizontal water particle motion in shallow water. A conceptual model of the hydrodynamics of the device has been formulated and shows that, as the motion of the flap is highly constrained, the magnitude of the force applied to the flap by the wave is strongly linked to the power absorption.

An extensive set of experiments has been carried out in the wave tank at Queen's University at both 1:40 and 1:20 scale. The experiments have included testing in realistic sea states to estimate device performance, as well as fundamental tests using small-amplitude monochromatic waves to determine the force applied to the flap by the waves. The results from the physical modelling programme have been used in conjunction with numerical data from WAMIT to validate the conceptual model.

The work finds that tuning the OWSC to the incident wave periods is problematic and results in only a marginal increase in power capture. It is also found that the addition of larger-diameter rounds to the edges of the flap reduces viscous losses and has a greater effect on the performance of the device than tuning. As wave force is the primary driver of device performance, it is shown that the flap should fill the water column and should pierce the water surface to reduce losses due to wave overtopping.

With the water depth fixed at approximately 10 m, it is shown that the width of the flap has the greatest impact on the magnitude of the wave force, and thus on device performance. An 18 m wide flap is shown to have twice the absorption efficiency of a 6 m wide flap and to capture six times the power. However, the increase in power capture with device width is not limitless: a 24 m wide flap is found to be affected by two-dimensional hydrodynamics, which reduce its performance per unit width, especially in sea states with short periods. It is also shown that as the width increases, the performance gains associated with the addition of the end effectors reduce. Furthermore, as the flap width increases, the natural pitching period of the flap increases, detuning the flap further from the wave periods of interest for wave energy conversion.

The effect of waves approaching the flap from an oblique angle is also investigated, and the power capture is found to decrease with the cosine squared of the encounter angle. The characteristic of the damping applied by the power take-off system is found to have a significant effect on the power capture of the device, with constant damping producing between 20% and 30% less power than quadratic damping. Furthermore, it is found that applying a higher level of damping, or a damping bias, to the flap as it pitches towards the beach increases the power capture by 10%.
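The oblique-incidence result admits a compact statement; a minimal formulation in LaTeX, with P_0 denoting the power captured in normally incident waves (a symbol introduced here for illustration only):

    P(\theta) \approx P_0 \cos^2\theta

where \theta is the wave encounter angle measured from normal incidence.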

A further set of experiments has been undertaken as a case study to predict the power capture of a prototype of the OWSC concept. The device, called the Oyster Demonstrator, has been developed by Aquamarine Power Ltd. and is to be installed at the European Marine Energy Centre, Scotland, in 2009.

The work concludes that the OWSC is a viable wave energy converter, with absorption efficiencies of up to 75% having been measured. It is found that, to maximise power absorption, the flap should be approximately 20 m wide with large-diameter rounded edges, having its pivot close to the seabed and its top edge piercing the water surface.

Relevance:

10.00%

Publisher:

Abstract:

We introduce a scheme to reconstruct arbitrary states of networks composed of quantum oscillators, e.g., the motional state of trapped ions or the radiation state of coupled cavities. The scheme involves minimal resources and minimal access, in the sense that it (i) requires only the interaction between a one-qubit probe and a single node of the network; (ii) provides the Weyl characteristic function of the network directly from the data, avoiding any tomographic transformation; and (iii) involves the tuning of only one coupling parameter. In addition, we show that a number of quantum properties can be extracted without full reconstruction of the state. The scheme can be used for probing quantum simulations of anharmonic many-body systems and quantum computations with continuous variables. Experimental implementation with trapped ions is also discussed and shown to be within reach of current technology.
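For context, the Weyl characteristic function that the scheme reads out is the standard one (this definition is added here for reference, not quoted from the abstract):

    \chi_W(\lambda) = \mathrm{Tr}[\rho \, \hat{D}(\lambda)], \qquad \hat{D}(\lambda) = \exp(\lambda \hat{a}^\dagger - \lambda^* \hat{a})

Sampling \chi_W over the complex plane fully characterizes an oscillator state, which is why obtaining it directly from the probe data avoids any tomographic inversion.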

Relevance:

10.00%

Publisher:

Abstract:

A framework supporting fast prototyping as well as tuning of distributed applications is presented. The approach is based on the adoption of a formal model that is used to describe the orchestration of distributed applications. The formal model (Orc, by Misra and Cook) can be used to support semi-formal reasoning about the applications at hand. The paper describes how the framework can be used to derive and evaluate alternative orchestrations of a well-known parallel/distributed computation pattern, and shows how the same formal model can be used to support the generation of prototypes of distributed application skeletons directly from the application description.
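As an illustration of the kind of prototype such a description can be mapped to, here is a minimal Python sketch of a task-farm orchestration (the paper's orchestrations are written in Orc; the task-farm pattern and the worker function here are assumptions made for the example):

    # Minimal task-farm prototype: an emitter feeds independent tasks to a pool
    # of workers and a collector gathers the results in order.
    from concurrent.futures import ThreadPoolExecutor

    def worker(task):
        # placeholder computation standing in for the real per-task work
        return task * task

    def farm(tasks, n_workers=4):
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            return list(pool.map(worker, tasks))

    print(farm(range(8)))  # -> [0, 1, 4, 9, 16, 25, 36, 49]

Alternative orchestrations (different worker counts, chunking policies, and so on) can be compared simply by swapping the body of farm, mirroring how the framework evaluates alternative Orc orchestrations of the same pattern.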

Relevance:

10.00%

Publisher:

Abstract:

Mathematical modelling has become an essential tool in the design of modern catalytic systems. Emissions legislation is becoming increasingly stringent, and so mathematical models of aftertreatment systems must become more accurate in order to provide confidence that a catalyst will convert pollutants over the required range of conditions. 
Automotive catalytic converter models contain several sub-models that represent processes such as mass and heat transfer, and the rates at which the reactions proceed on the surface of the precious metal. Of these sub-models, the prediction of the surface reaction rates is by far the most challenging, due to the complexity of the reaction system and the large number of gas species involved. The reaction rate sub-model uses global reaction kinetics to describe the surface reaction rate of the gas species and is based on the Langmuir–Hinshelwood equation as further developed by Voltz et al. [1] The reactions are modelled using the pre-exponential factors and activation energies of the Arrhenius equations, together with the inhibition terms.
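For reference, the rate expressions referred to take the standard form (written generically here; the paper's specific species set and inhibition function are not reproduced): each rate constant obeys an Arrhenius law, and the Langmuir–Hinshelwood rate divides a mass-action term by an inhibition function G,

    k_i(T) = A_i \exp\!\left(-\frac{E_i}{R T}\right), \qquad r_{\mathrm{CO}} = \frac{k_1 \, c_{\mathrm{CO}} \, c_{\mathrm{O_2}}}{G(T, c)}

where A_i and E_i are the pre-exponential factor and activation energy to be calibrated, and G collects the inhibition terms of the Voltz et al. formulation.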
The reaction kinetic parameters of aftertreatment models are found from experimental data, where a measured light-off curve is compared against a predicted curve produced by a mathematical model. The kinetic parameters are usually tuned manually to minimize the error between the measured and predicted data. This process is typically long, laborious and prone to misinterpretation, owing to the large number of parameters and the risk of multiple sets of parameters giving acceptable fits. Moreover, the number of coefficients grows greatly with the number of reactions, so the task of manually tuning the coefficients is becoming increasingly challenging.
In the presented work, the authors have developed and implemented a multi-objective genetic algorithm to automatically optimize reaction parameters in AxiSuite® [2], a commercial aftertreatment model. The genetic algorithm was developed and expanded from the code presented by Michalewicz et al. [3] and was linked to AxiSuite using the Simulink add-on for Matlab.
The default kinetic values stored within the AxiSuite model were used to generate a series of light-off curves under rich conditions for a number of gas species, including CO, NO, C3H8 and C3H6. These light-off curves were used to construct an objective function providing a measure of fit for the kinetic parameters. The multi-objective genetic algorithm was subsequently used to search between specified limits to match the objective function. In total, the pre-exponential factors and activation energies of ten reactions were simultaneously optimized.
The results reported here demonstrate that, given accurate experimental data, the optimization algorithm is successful and robust in defining the correct kinetic parameters of a global kinetic model describing aftertreatment processes.
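To make the calibration loop concrete, the following is a minimal, self-contained Python sketch of evolutionary fitting of Arrhenius parameters to a light-off curve. It is single-objective and uses a toy first-order light-off model; the toy model, parameter bounds and mutation scheme are assumptions for illustration, not the paper's AxiSuite/Simulink implementation:

    import numpy as np

    R = 8.314                              # gas constant, J/(mol K)
    T = np.linspace(400.0, 800.0, 50)      # temperature ramp, K

    def conversion(A, Ea, tau=0.1):
        # toy first-order light-off model: X(T) = 1 - exp(-k(T) * tau)
        k = A * np.exp(-Ea / (R * T))
        return 1.0 - np.exp(-k * tau)

    # "measured" target curve, synthesised here from known parameters
    target = conversion(1e8, 9.0e4)

    def error(p):
        # sum-of-squares mismatch between predicted and target curves
        return np.sum((conversion(p[0], p[1]) - target) ** 2)

    rng = np.random.default_rng(0)
    lo, hi = np.array([1e6, 5e4]), np.array([1e10, 1.5e5])
    pop = rng.uniform(lo, hi, size=(40, 2))        # initial random population

    for _ in range(200):
        order = np.argsort([error(p) for p in pop])
        parents = pop[order[:20]]                  # truncation selection
        children = parents[rng.integers(0, 20, 20)].copy()
        children += rng.normal(0.0, 0.02, children.shape) * (hi - lo)  # mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)

    best = min(pop, key=error)
    print("recovered A = %.3g, Ea = %.3g" % (best[0], best[1]))

A multi-objective version, as used in the paper, would evaluate one such error term per gas species and rank candidates by Pareto dominance rather than by a single scalar error.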

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach lies in the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS) and run-time system, should identify the critical tasks and ensure correct operation of such tasks by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation of the overall system. The run-time system identifies the critical/less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly accurate or approximate operation by tuning their voltage/frequency. Units that execute less significant operations can, if required, operate at voltages below what is needed for fully correct operation and so consume less power, since such tasks, unlike the critical ones, do not need to be always exact. Such a scheme can lead to energy-efficient and reliable operation, while reducing the design cost and overheads of conventional circuit/micro-architecture level techniques.
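A minimal Python sketch of the dispatch policy described above; the task-tagging API and unit model are hypothetical (the paper specifies directives and OS support, not this interface):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Task:
        fn: Callable
        critical: bool          # set via programmer directives in the paper's model

    @dataclass
    class Unit:
        name: str
        reliable: bool          # reliable: nominal V/f; relaxed: scaled-down V/f

        def run(self, task):
            # a real run-time would also adjust voltage/frequency here
            print(f"{task.fn.__name__} -> {self.name}")
            return task.fn()

    def schedule(tasks, units):
        reliable = [u for u in units if u.reliable]
        relaxed = [u for u in units if not u.reliable] or reliable
        for i, t in enumerate(tasks):
            pool = reliable if t.critical else relaxed
            pool[i % len(pool)].run(t)   # round-robin within the chosen pool

    def control(): return "exact"        # e.g., loop bounds: must always be exact
    def pixel():   return "approx-ok"    # e.g., media kernels tolerate small errors

    schedule([Task(control, True), Task(pixel, False)],
             [Unit("core0", True), Unit("core1", False)])

The key point is the separation of concerns: criticality is declared once at the programming-model level, and the run-time maps it onto whichever units the OS currently deems reliable under the sensed error rates.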

Relevance:

10.00%

Publisher:

Abstract:

Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special-purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture to use can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, for mapping the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code has been used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
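The single-source retargeting idea can be illustrated in a few lines with pyopencl (the host binding is our choice for this sketch; the paper uses native OpenCL hosts, with SOpenCL handling the FPGA path). The same kernel string builds unmodified for whichever device the chosen platform exposes:

    import numpy as np
    import pyopencl as cl

    SRC = """
    __kernel void saxpy(const float a,
                        __global const float *x,
                        __global float *y) {
        int i = get_global_id(0);
        y[i] += a * x[i];
    }
    """

    ctx = cl.create_some_context()       # CPU, GPU or FPGA device, as available
    queue = cl.CommandQueue(ctx)
    prg = cl.Program(ctx, SRC).build()   # same source, rebuilt per target

    x = np.arange(16, dtype=np.float32)
    y = np.zeros_like(x)
    mf = cl.mem_flags
    xb = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
    yb = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)

    prg.saxpy(queue, x.shape, None, np.float32(2.0), xb, yb)
    cl.enqueue_copy(queue, y, yb)
    print(y)                             # 2 * x on any of the three platforms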

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new programming methodology for introducing and tuning parallelism in Erlang programs, using source-level code refactoring from sequential source programs to parallel programs written using our skeleton library, Skel. High-level cost models allow us to predict with reasonable accuracy the parallel performance of the refactored program, enabling programmers to make informed decisions about which refactorings to apply. Using our approach, we demonstrate easily obtainable, significant and scalable speedups of up to 21 on a 24-core machine over the sequential code.
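The cost models mentioned above can be illustrated with a generic task-farm estimate in Python (this is not Skel's actual cost model; the workload numbers are assumed):

    import math

    def farm_time(m, t, n, o):
        # m tasks of mean cost t on n workers, with per-task overhead o
        return math.ceil(m / n) * t + m * o

    m, t, o = 960, 0.05, 0.001          # assumed workload parameters (seconds)
    t_seq = m * t
    for n in (4, 8, 16, 24):
        print(n, round(t_seq / farm_time(m, t, n, o), 1))   # predicted speedup

Comparing such predictions across candidate refactorings is what allows the programmer to decide which refactoring to apply before committing to it.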

Relevance:

10.00%

Publisher:

Abstract:

In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that the evaluation is usually computationally intensive and scales badly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We prove how the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
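The flavour of the eigendecomposition trick can be shown in numpy for the special case where the hyperparameters are a signal scale s and a noise variance sigma2, so that K = s*K0 + sigma2*I (the kernel, data and hyperparameter grid below are assumptions for the sketch; the paper's identities are more general and also cover the Jacobian and Hessian):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 500
    X = rng.uniform(-3.0, 3.0, N)
    K0 = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)   # fixed-lengthscale RBF
    y = np.sin(X) + 0.1 * rng.standard_normal(N)

    # one-off O(N^3) overhead: eigendecompose the kernel matrix once
    e, Q = np.linalg.eigh(K0)
    yt = Q.T @ y                                          # rotated targets

    def nlml(s, sigma2):
        # O(N) per evaluation: K = s*K0 + sigma2*I shares K0's eigenvectors, so
        # both log|K| and y^T K^-1 y reduce to sums over the eigenvalues
        d = s * e + sigma2
        return 0.5 * (np.sum(yt**2 / d) + np.sum(np.log(d)) + N * np.log(2 * np.pi))

    # each candidate now costs O(N) instead of O(N^3)
    grid = [(s, n) for s in (0.5, 1.0, 2.0) for n in (0.005, 0.01, 0.05)]
    print(min(grid, key=lambda p: nlml(*p)))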

Relevance:

10.00%

Publisher:

Abstract:

Thin, oxidised Al films grown on one face of fused silica prisms are exposed, under ambient conditions, to single shots from an excimer laser operating at a wavelength of 248 nm. Preliminary characterisation of the films using attenuated total reflection yields optical and thickness data for the Al and Al oxide layers; this step facilitates the subsequent, accurate tuning of the excimer laser pulse to the surface plasmon resonance at the Al/(oxide)/air interface and the calculation of the fluence actually absorbed by the thin film system. Ablation damage is characterised using scanning electron and atomic force microscopy. When the laser pulse is incident through the prism on the sample at less than the critical angle, the damage features are molten in nature, with small islands of sub-micrometre dimension much in evidence; a mechanism of film melt-through and subsequent blow-off due to the build-up of vapour pressure at the substrate/film interface is appropriate. By contrast, when the optical input is surface plasmon mediated, predominantly mechanical damage results, with the film fragmenting into large flakes of dimensions on the order of 10 μm. It is suggested that the ability of surface plasmons to transport energy leads to enhanced, preferential absorption of energy at defect sites, causing stress throughout the film which exceeds the ultimate tensile stress for the film; this in turn leads to film break-up before melting can set in.

Relevance:

10.00%

Publisher:

Abstract:

An improved dual-gas quasi-phase-matching (QPM) foil target for high harmonic generation (HHG) is presented. The target can be set up with 12 individual gas inlets, each feeding multiple nozzles separated by a minimum distance of 10 μm. Three-dimensional gas density profiles of these jets were measured using a Mach–Zehnder interferometer. These measurements reveal how the jets influence the density of gas in adjacent jets and how this leads to increased local gas densities. The analysis shows that the gas profiles of the jets are well defined up to a distance of about 300 μm from the orifice. This target design offers experimental flexibility, not only for HHG/QPM investigations but also for a wide range of experiments, due to the large number of possible jet configurations. We demonstrate its application to controlled phase tuning in the extreme ultraviolet using a 1 kHz, 10 mJ, 30 fs laser system, where interference between two jets was observed in the spectral range from 17 to 30 nm.

Relevance:

10.00%

Publisher:

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The exploitation of diverse target architectures is typically associated with developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform to efficiently execute simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes that range from short/medium (e.g., 8,000 bit) to long length (e.g., 64,800 bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
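The GPU-versus-FPGA choice mentioned at the end reduces to a simple break-even calculation once the one-off synthesis overhead is accounted for (the timings below are invented for illustration; the abstracts do not report these specific figures):

    # break-even point for amortising FPGA circuit synthesis over many runs
    t_gpu, t_fpga = 1.0, 1.0 / 3.0   # hours per simulation batch (FPGA ~3x faster)
    synth = 8.0                      # one-off synthesis time in hours (assumed)
    n = synth / (t_gpu - t_fpga)     # batches needed before the FPGA wins
    print(f"FPGA pays off after {n:.0f} batches")   # -> 12

Early, rapidly changing design phases therefore tend to favour the GPU, while long simulation sweeps over a frozen design amortise the synthesis cost and favour the FPGA.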

Relevance:

10.00%

Publisher:

Abstract:

Highly efficient In2O3-Co3O4 catalysts were prepared for ultralow-temperature CO oxidation by simultaneously tuning the CO adsorption strength and oxygen activation over a Co3O4 surface; these catalysts could completely convert CO to CO2 at temperatures as low as −105 °C, compared to −40 °C over pure Co3O4, with enhanced stability.

Relevance:

10.00%

Publisher:

Abstract:

Ceria (CeO2) and ceria-based composite materials, especially Ce1-xZrxO2 solid solutions, possess a wide range of applications in many important catalytic processes, such as three-way catalysts, owing to their excellent oxygen storage capacity (OSC) through oxygen vacancy formation and refilling. Much activity has focused on understanding the electronic and structural properties of defective CeO2 with and without doping, and comprehending the determining factor for oxygen vacancy formation and the rules for tuning the formation energy by doping has constituted a central issue in materials chemistry related to ceria. However, the calculation of electronic structures and the corresponding relaxation patterns in defective CeO2-x oxides remains at present a challenge within the DFT framework. A pragmatic approach based on density functional theory with the inclusion of an on-site Coulomb correction, i.e. the so-called DFT + U technique, has been extensively applied in the majority of recent theoretical investigations. Firstly, we briefly review the latest electronic structure calculations of defective CeO2(111), focusing on the phenomenon of multiple configurations of the localized 4f electrons, as well as discussions of its formation mechanism and its catalytic role in activating the O2 molecule. Secondly, aiming to shed light on the doping effect on tuning oxygen vacancy formation in ceria-based solid solutions, we summarize recent theoretical results for Ce1-xZrxO2 solid solutions in terms of the effects of dopant concentration and crystal phase. A general model of O vacancy formation is also discussed; it consists of electrostatic and structural relaxation terms, and the vital role of the latter is emphasized. In particular, we discuss the crucial role of the localized structural relaxation patterns in determining the superb oxygen storage capacity of kappa-phase Ce1-xZrxO2. Thirdly, we briefly discuss some interesting findings for oxygen vacancy formation in pure ceria nanoparticles (NPs) uncovered by DFT calculations and compare them with the bulk and extended surfaces of ceria, as well as across different particle sizes, emphasizing the role of the electrostatic field in determining O vacancy formation.
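The two-term vacancy-formation model referred to above can be written schematically (this compact form is our paraphrase of the decomposition the review describes):

    E_{\mathrm{f}}(\mathrm{O_{vac}}) \approx E_{\mathrm{electrostatic}} + \Delta E_{\mathrm{relax}}

with the structural relaxation term identified as the dominant contribution in the kappa phase, consistent with its superior oxygen storage capacity.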

Relevance:

10.00%

Publisher:

Abstract:

New independent dating evidence is presented for a lacustrine record for which an age-depth model had already been derived through interpretation of the pollen signal. Quartz OSL ages support radiocarbon ages that were previously considered to suffer from underestimation due to contamination, and imply a younger chronology for the core. The successful identification of the Campanian Ignimbrite as a cryptotephra within the core also validates this younger chronology, as well as extending the known geographical range of this tephra layer within Italy. These new results suggest that care should always be taken when building chronologies from proxy records that are correlated to the tuned records from which the global signal is often derived (i.e. double tuning). We do not offer this as the definitive chronology for Lake Fimon, but multiple lines of dating evidence show that there is sufficient reason to take it seriously. The Quaternary dating community should always have all age information available, even when significant temporal offsets are apparent between the various lines of evidence, so as to be: 1) better informed when facing similar dilemmas in the future; and 2) able to consider multiple working hypotheses.

Relevance:

10.00%

Publisher:

Abstract:

Modern control methods such as optimal control and model predictive control (MPC) provide a framework for simultaneously regulating the tracking performance and limiting the control energy, and have thus been widely deployed in industrial applications. Yet, owing to their simplicity and robustness, conventional P (Proportional) and PI (Proportional–Integral) controllers are still the most common methods used in many engineering systems, such as electric power systems, automotive applications, and Heating, Ventilation and Air Conditioning (HVAC) for buildings, where energy efficiency and energy saving are critical issues. However, little has been done so far to explore the effect of their parameter tuning on both the system performance and the control energy consumption, and how these two objectives are correlated within the P and PI control framework. In this paper, the P and PI controllers are designed with simultaneous consideration of these two aspects. Two case studies are investigated in detail: the control of Voltage Source Converters (VSCs) for transmitting offshore wind power to an onshore AC grid through High Voltage DC links, and the control of HVAC systems. Results reveal that a better trade-off between the tracking performance and the control energy can be achieved through a proper choice of the P and PI controller parameters.
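The performance-versus-energy trade-off can be reproduced in miniature with a simulated PI loop in Python (the first-order plant, gains and horizon below are assumptions for illustration, not the paper's VSC or HVAC models):

    def simulate(kp, ki, T=10.0, dt=1e-3):
        # plant: dy/dt = -y + u  (unit-gain first-order lag, time constant 1 s)
        y = integ = ise = energy = 0.0
        r = 1.0                          # unit step reference
        for _ in range(int(T / dt)):
            e = r - y
            integ += e * dt
            u = kp * e + ki * integ      # PI control law
            y += (-y + u) * dt           # explicit Euler step of the plant
            ise += e * e * dt            # integral squared tracking error
            energy += u * u * dt         # control energy proxy
        return ise, energy

    for kp, ki in [(1.0, 0.5), (5.0, 2.0)]:
        ise, en = simulate(kp, ki)
        print(f"kp={kp}, ki={ki}: ISE={ise:.3f}, energy={en:.2f}")

Sweeping (kp, ki) over a grid and plotting ISE against control energy traces out the trade-off curve from which a preferred gain pair can be chosen.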