15 results for Computational power

in Cambridge University Engineering Department Publications Database


Relevance:

60.00%

Abstract:

The integration of multiple functionalities into individual nanoelectronic components is increasingly explored as a means to increase computational power or to enable advanced signal processing. Here, we report the fabrication of a coupled nanowire transistor, a device in which two superimposed high-performance nanowire field-effect transistors capable of mutual interaction form a thyristor-like circuit. The structure embeds an internal level of signal processing, showing promise for applications in analogue computation. The device is naturally derived from a single nanowire via a self-aligned fabrication process.

Relevance:

60.00%

Abstract:

Computer Aided Control Engineering involves three parallel streams: simulation and modelling, control system design (off-line), and controller implementation. In industry the bottleneck problem has always been modelling, and this remains the case - that is where control (and other) engineers put most of their technical effort. Although great advances in software tools have been made, the cost of modelling remains very high - too high for some sectors. Object-oriented modelling, enabling truly re-usable models, seems to be the key enabling technology here. Software tools to support control systems design have two aspects: aiding and managing the work-flow in particular projects (whether of a single engineer or of a team), and provision of numerical algorithms to support control-theoretic and systems-theoretic analysis and design. The numerical problems associated with linear systems have been largely overcome, so that most problems can be tackled routinely without difficulty - though problems remain with some systems of extremely large dimensions. Recent emphasis on control of hybrid and/or constrained systems is leading to the emerging importance of geometric algorithms (ellipsoidal approximation, polytope projection, etc.). Constantly increasing computational power is leading to renewed interest in design by optimisation, an example of which is model predictive control (MPC). The explosion of embedded control systems has highlighted the importance of autocode generation, directly from modelling/simulation products to target processors. This is the 'new kid on the block', and again much of the focus of commercial tools is on this part of the control engineer's job. Here the control engineer can no longer ignore computer science (at least, for the time being). © 2006 IEEE.
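As a concrete illustration of "design by optimisation" with MPC as the example, the following is a minimal receding-horizon sketch for a hypothetical discrete-time plant. The plant matrices, horizon, weights, and input bound are assumptions chosen for illustration, not values from the paper.

```python
# Minimal receding-horizon (MPC) sketch: design by optimisation for a
# hypothetical discrete-time plant x[k+1] = A x[k] + B u[k].
# All numbers below are illustrative assumptions, not from the paper.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed double-integrator-like plant
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])                  # state weighting
R = np.array([[0.01]])                   # input weighting
N = 20                                   # prediction horizon
u_max = 1.0                              # input constraint |u| <= u_max

def cost(u_seq, x0):
    """Finite-horizon quadratic cost along the predicted trajectory."""
    u_seq = u_seq.reshape(N, 1)
    x, J = x0.copy(), 0.0
    for k in range(N):
        J += x @ Q @ x + u_seq[k] @ R @ u_seq[k]
        x = A @ x + B @ u_seq[k]
    return J + x @ Q @ x                 # terminal cost

def mpc_step(x0):
    """Solve the horizon problem, return only the first input (receding horizon)."""
    res = minimize(cost, np.zeros(N), args=(x0,),
                   bounds=[(-u_max, u_max)] * N, method="L-BFGS-B")
    return res.x[0]

x = np.array([1.0, 0.0])
for k in range(50):                       # closed-loop simulation
    u = mpc_step(x)
    x = A @ x + B @ np.array([u])
print("final state:", x)
```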

Relevance:

60.00%

Abstract:

The application of Bayes' theorem to signal processing provides a consistent framework for proceeding from prior knowledge to a posterior inference conditioned on both the prior knowledge and the observed signal data. The first part of the lecture will illustrate how the Bayesian methodology can be applied to a variety of signal processing problems. The second part of the lecture will introduce Markov chain Monte Carlo (MCMC) methods, an effective approach to overcoming many of the analytical and computational problems inherent in statistical inference. Such techniques are at the centre of the rapidly developing area of Bayesian signal processing which, with the continual increase in available computational power, is likely to provide the underlying framework for most signal processing applications.
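As a concrete illustration of the MCMC approach described above, the sketch below uses a random-walk Metropolis sampler to infer the amplitude of a sinusoid buried in Gaussian noise. The signal model, prior, and tuning constants are illustrative assumptions, not taken from the lecture.

```python
# Random-walk Metropolis sketch: Bayesian inference of a sinusoid amplitude
# from noisy samples. Model, prior and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
true_amp, sigma = 2.0, 0.5
y = true_amp * np.sin(2 * np.pi * 5 * t) + sigma * rng.normal(size=t.size)

def log_posterior(amp):
    """Gaussian likelihood for the observed signal plus a broad Gaussian prior."""
    residual = y - amp * np.sin(2 * np.pi * 5 * t)
    log_lik = -0.5 * np.sum(residual**2) / sigma**2
    log_prior = -0.5 * amp**2 / 10.0**2          # N(0, 10^2) prior on the amplitude
    return log_lik + log_prior

samples, amp = [], 0.0
for _ in range(20000):
    proposal = amp + 0.1 * rng.normal()           # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(amp):
        amp = proposal                            # accept; otherwise keep current value
    samples.append(amp)

samples = np.array(samples[5000:])                # discard burn-in
print("posterior mean %.3f, std %.3f" % (samples.mean(), samples.std()))
```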

Relevance:

60.00%

Abstract:

We present the results of a computational study of the post-processed Galerkin methods put forward by Garcia-Archilla et al. applied to the non-linear von Karman equations governing the dynamic response of a thin cylindrical panel periodically forced by a transverse point load. We spatially discretize the shell using finite differences to produce a large system of ordinary differential equations (ODEs). By analogy with spectral non-linear Galerkin methods we split this large system into a 'slowly' contracting subsystem and a 'quickly' contracting subsystem. We then compare the accuracy and efficiency of (i) ignoring the dynamics of the 'quick' system (analogous to a traditional spectral Galerkin truncation and sometimes referred to as 'subspace dynamics' in the finite element community when applied to numerical eigenvectors), (ii) slaving the dynamics of the quick system to the slow system during numerical integration (analogous to a non-linear Galerkin method), and (iii) ignoring the influence of the dynamics of the quick system on the evolution of the slow system until we require some output, when we 'lift' the variables from the slow system to the quick using the same slaving rule as in (ii). This corresponds to the post-processing of Garcia-Archilla et al. We find that method (iii) produces essentially the same accuracy as method (ii) but requires only the computational power of method (i) and is thus more efficient than either. In contrast with spectral methods, this type of finite-difference technique can be applied to irregularly shaped domains. We feel that post-processing of this form is a valuable method that can be implemented in computational schemes for a wide variety of partial differential equations (PDEs) of practical importance.
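The mechanics of the three strategies compared above can be sketched on a toy two-variable fast/slow ODE rather than the von Karman panel equations. The model, the slaving rule z ≈ x², and the time stepping below are illustrative assumptions; the sketch shows the structure of (i) truncation, (ii) slaving during integration, and (iii) post-processing (lifting only at output time), not their relative accuracy.

```python
# Toy fast/slow ODE sketch of (i) truncation, (ii) slaving during integration
# (non-linear Galerkin), and (iii) post-processing (lift only at output time).
# The model and the slaving rule z ~= x**2 are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-3   # small parameter: z relaxes quickly onto the manifold z ~= x**2

def full(t, y):                       # reference: slow x coupled to quick z
    x, z = y
    return [-x + z, -(z - x**2) / eps]

def truncated(t, x):                  # (i) ignore the quick variable entirely
    return -x

def slaved(t, x):                     # (ii) slave z = x**2 at every time step
    return -x + x**2

T, x0, z0 = 2.0, 0.5, 0.0
ref = solve_ivp(full, (0, T), [x0, z0], method="Radau", rtol=1e-10, atol=1e-12)
sol_i = solve_ivp(truncated, (0, T), [x0], rtol=1e-10, atol=1e-12)
sol_ii = solve_ivp(slaved, (0, T), [x0], rtol=1e-10, atol=1e-12)

x_ref, z_ref = ref.y[0, -1], ref.y[1, -1]
x_i, x_ii = sol_i.y[0, -1], sol_ii.y[0, -1]
z_iii = x_i**2                        # (iii) lift the truncated result at output only

print("reference       x=%.6f z=%.6f" % (x_ref, z_ref))
print("(i) truncated   x=%.6f z=0" % x_i)
print("(ii) slaved     x=%.6f z=%.6f" % (x_ii, x_ii**2))
print("(iii) post-proc x=%.6f z=%.6f" % (x_i, z_iii))
```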

Relevance:

30.00%

Abstract:

This article describes a computational study of viscous effects on lobed mixer flowfields. The computations, which were carried out using a compressible, three-dimensional, unstructured-mesh Navier-Stokes solver, were aimed at assessing the impacts on mixer performance of inlet boundary-layer thickness and boundary-layer separation within the lobe. The geometries analyzed represent a class of lobed mixer configurations used in turbofan engines. Parameters investigated included lobe penetration angles from 22 to 45 deg, stream-to-stream velocity ratios from 0.5 to 1.0, and two inlet boundary-layer displacement thicknesses. The results show quantitatively the increasing influence of viscous effects as lobe penetration angle is increased. It is shown that the simple estimate of shed circulation given by Skebe et al. (Experimental Investigation of Three-Dimensional Forced Mixer Lobe Flow Field, AIAA Paper 88-3785, July 1988) can be extended even to situations in which the flow is separated, provided an effective mixer exit angle and height are defined. An examination of different loss sources is also carried out to illustrate the relative contributions of mixing loss and of boundary-layer viscous effects in cases of practical interest.

Relevance:

30.00%

Abstract:

Two new maximum power point tracking (MPPT) algorithms are presented: the input-voltage-sensor and duty-ratio MPPT algorithm (ViSD algorithm), and the output-voltage-sensor and duty-ratio MPPT algorithm (VoSD algorithm). The ViSD and VoSD algorithms share the features, characteristics, and advantages of the incremental conductance (INC) algorithm; however, unlike the INC algorithm, which requires two sensors (a voltage sensor and a current sensor), the two algorithms are more desirable because they require only one sensor: a voltage sensor. Moreover, the VoSD technique is less complex and hence requires less computational processing. Both the ViSD and VoSD techniques operate by maximising power at the converter output rather than at the input. The ViSD algorithm uses a voltage sensor placed at the input of a boost converter, while the VoSD algorithm uses a voltage sensor placed at the output of a boost converter. © 2011 IEEE.
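For context, the sketch below implements the standard incremental conductance (INC) algorithm against which ViSD and VoSD are compared; it is not the ViSD or VoSD algorithm itself (those are described only qualitatively above), and the toy panel model and step size are assumptions for illustration.

```python
# Standard incremental conductance (INC) MPPT sketch, for context only:
# the ViSD/VoSD algorithms above dispense with the current sensor used here.
# Panel model and step size are illustrative assumptions.
import numpy as np

def panel_current(v, i_ph=5.0, i_0=1e-9, vt=1.2):
    """Toy single-diode PV model: I = Iph - I0*(exp(V/Vt) - 1)."""
    return i_ph - i_0 * (np.exp(v / vt) - 1.0)

def inc_mppt_step(v, i, v_prev, i_prev, v_ref, dv_step=0.05):
    """One INC update: at the MPP, dI/dV = -I/V (equivalently dP/dV = 0)."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0.0:
        if di > 0.0:
            v_ref += dv_step
        elif di < 0.0:
            v_ref -= dv_step
    else:
        if di / dv > -i / v:          # left of the MPP: raise the voltage
            v_ref += dv_step
        elif di / dv < -i / v:        # right of the MPP: lower the voltage
            v_ref -= dv_step
    return v_ref

v_ref = 10.0
v_prev, i_prev = 9.9, panel_current(9.9)
for _ in range(500):                  # assume the converter tracks v_ref exactly
    v = v_ref
    i = panel_current(v)
    v_ref = inc_mppt_step(v, i, v_prev, i_prev, v_ref)
    v_prev, i_prev = v, i
print("operating point: V=%.2f V, P=%.2f W" % (v, v * i))
```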

Relevance:

30.00%

Abstract:

The pressure oscillation within combustion chambers of aeroengines and industrial gas turbines is a major technical challenge to the development of high-performance and low-emission propulsion systems. In this paper, an approach integrating computational fluid dynamics and one-dimensional linear stability analysis is developed to predict the modes of oscillation in a combustor and their frequencies and growth rates. Linear acoustic theory was used to describe the acoustic waves propagating upstream and downstream of the combustion zone, which enables the computational fluid dynamics calculation to be efficiently concentrated on the combustion zone. A combustion oscillation was found to occur with its predicted frequency in agreement with experimental measurements. Furthermore, results from the computational fluid dynamics calculation provide the flame transfer function to describe unsteady heat release rate. Departures from ideal one-dimensional flows are described by shape factors. Combined with this information, low-order models can work out the possible oscillation modes and their initial growth rates. The approach developed here can be used in more general situations for the analysis of combustion oscillations. Copyright © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
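To illustrate the final step described above (a low-order model returning oscillation modes and growth rates once a flame transfer function is known), the sketch below finds the complex eigenfrequency of a single acoustic mode with time-delayed heat-release feedback. The n-tau style feedback form and all numerical values are assumptions for illustration, not the paper's combustor model.

```python
# Low-order thermoacoustic sketch: complex eigenfrequency of one acoustic
# mode with time-delayed (n-tau style) heat-release feedback.
# The feedback form and all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import fsolve

omega0 = 2 * np.pi * 200.0   # assumed natural frequency of the duct mode, rad/s
zeta = 0.01                  # assumed acoustic damping ratio
beta = 40.0                  # assumed flame gain (coupling strength)
tau = 2.0e-3                 # assumed flame time delay, s

def char_eq(sv):
    """D(s) = s^2 + 2*zeta*omega0*s + omega0^2 - beta*s*exp(-s*tau) = 0."""
    s = sv[0] + 1j * sv[1]
    d = s**2 + 2 * zeta * omega0 * s + omega0**2 - beta * s * np.exp(-s * tau)
    return [d.real, d.imag]

# start the root search near the uncoupled acoustic mode s = -zeta*omega0 + i*omega0
growth_rate, omega = fsolve(char_eq, [-zeta * omega0, omega0])
print("frequency  : %.1f Hz" % (omega / (2 * np.pi)))
print("growth rate: %.2f 1/s (%s)" % (growth_rate,
      "unstable" if growth_rate > 0 else "stable"))
```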

Relevance:

30.00%

Abstract:

In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. Our algorithm appears to have the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed. © 2010 Michel Journée, Yurii Nesterov, Peter Richtárik and Rodolphe Sepulchre.
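The sketch below gives a minimal single-unit iteration in the spirit of the l1-penalised formulation described above, maximising a convex function over the unit sphere by a power/gradient-type step. It is an illustration of the idea rather than the authors' exact algorithm, and the threshold gamma and the test data are assumptions.

```python
# Single-unit sparse PCA sketch: a power/gradient-type iteration for an
# l1-penalised formulation, maximising a convex function over the unit sphere.
# Illustrative only; gamma and the test data are assumptions.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 500))        # data matrix: 50 samples, 500 variables
gamma = 2.0                           # sparsity-inducing threshold

def sparse_pc(A, gamma, iters=200):
    """Return a sparse, unit-norm loading vector z for the leading component."""
    x = A @ A.T @ rng.normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        s = A.T @ x                               # scores a_i^T x for each variable
        w = np.maximum(np.abs(s) - gamma, 0.0)    # soft-threshold: inactive variables drop out
        x = A @ (w * np.sign(s))                  # power/gradient-type ascent step
        nx = np.linalg.norm(x)
        if nx == 0.0:                             # gamma too large: everything pruned
            break
        x /= nx
    s = A.T @ x
    z = np.maximum(np.abs(s) - gamma, 0.0) * np.sign(s)
    n = np.linalg.norm(z)
    return z / n if n > 0 else z

z = sparse_pc(A, gamma)
print("non-zero loadings: %d of %d" % (np.count_nonzero(z), z.size))
print("explained variance (z' A'A z): %.2f" % (z @ A.T @ A @ z))
```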

Relevance:

30.00%

Abstract:

Two-phase computational fluid dynamics modelling is used to investigate the magnitude of different contributions to the wet steam losses in a three-stage model low pressure steam turbine. The thermodynamic losses (due to irreversible heat transfer across a finite temperature difference) and the kinematic relaxation losses (due to the frictional drag of the drops) are evaluated directly from the computational fluid dynamics simulation using a concept based on entropy production rates. The braking losses (due to the impact of large drops on the rotor) are investigated by a separate numerical prediction. The simulations show that in the present case, the dominant effect is the thermodynamic loss that accounts for over 90% of the wetness losses and that both the thermodynamic and the kinematic relaxation losses depend on the droplet diameter. The numerical results are brought into context with the well-known Baumann correlation, and a comparison with available measurement data in the literature is given. The ability of the numerical approach to predict the main wetness losses is confirmed, which permits the use of computational fluid dynamics for further studies on wetness loss correlations. © IMechE 2013.
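For reference, the Baumann correlation mentioned above relates the stage efficiency drop to the mean wetness fraction through a Baumann factor of order one. The sketch below evaluates it with an assumed factor and illustrative numbers, not values from the paper.

```python
# Baumann correlation sketch: wetness loss estimate for a wet steam stage.
# eta_wet = eta_dry * (1 - alpha * y_mean); numbers below are illustrative.
def baumann_efficiency(eta_dry, y_in, y_out, alpha=1.0):
    """Wet stage efficiency from the Baumann rule with Baumann factor alpha."""
    y_mean = 0.5 * (y_in + y_out)        # mean wetness fraction over the stage
    return eta_dry * (1.0 - alpha * y_mean)

eta_dry = 0.90                           # assumed dry stage efficiency
y_in, y_out = 0.04, 0.10                 # assumed inlet/outlet wetness fractions
eta_wet = baumann_efficiency(eta_dry, y_in, y_out)
print("mean wetness %.1f%%, wet efficiency %.3f (drop %.3f)"
      % (50 * (y_in + y_out), eta_wet, eta_dry - eta_wet))
```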