15 results for Expense caloric
in Cambridge University Engineering Department Publications Database
Abstract:
Displacement estimation is a key step in the evaluation of tissue elasticity by quasistatic strain imaging. An efficient approach may incorporate a tracking strategy whereby each estimate is initially obtained from its neighbours' displacements and then refined through a localized search. This increases the accuracy and reduces the computational expense compared with exhaustive search. However, simple tracking strategies fail when the target displacement map exhibits complex structure. For example, there may be discontinuities and regions of indeterminate displacement caused by decorrelation between the pre- and post-deformation radio frequency (RF) echo signals. This paper introduces a novel displacement tracking algorithm, with a search strategy guided by a data quality indicator. Comparisons with existing methods show that the proposed algorithm is more robust when the displacement distribution is challenging.
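The guided-tracking idea can be illustrated with a toy 1-D sketch: each estimate is seeded from an already-tracked neighbour, refined by a localized search, and the growth order is driven by a data-quality (normalised-correlation) score via a priority queue. Everything below (window sizes, the synthetic signals, the quality measure) is an assumption for illustration, not the paper's algorithm:

```python
import heapq
import numpy as np

rng = np.random.default_rng(0)

def ncc(a, b):
    """Normalised cross-correlation of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / d) if d > 0 else 0.0

def guided_tracking(pre, post, win=16, step=16, search=2, seed_radius=10):
    """Seed-and-grow tracking: the seed gets a wide search, every other
    window gets a +/-`search` sample search around a neighbour's
    estimate, and growth order follows a max-heap on correlation."""
    n = (len(pre) - win) // step
    disp = np.full(n, np.nan)

    def refine(i, guess, radius):
        a = pre[i * step : i * step + win]
        best = max(range(guess - radius, guess + radius + 1),
                   key=lambda d: ncc(a, post[i * step + d : i * step + d + win]))
        return best, ncc(a, post[i * step + best : i * step + best + win])

    d0, q0 = refine(0, seed_radius, seed_radius)   # exhaustive seed search
    disp[0] = d0
    heap = [(-q0, 0)]                              # best-quality-first queue
    while heap:
        _, i = heapq.heappop(heap)
        for j in (i - 1, i + 1):                   # grow to untracked neighbours
            if 0 <= j < n and np.isnan(disp[j]):
                dj, qj = refine(j, int(disp[i]), search)
                disp[j] = dj
                heapq.heappush(heap, (-qj, j))
    return disp

pre = rng.standard_normal(400)
post = np.roll(pre, 3)          # a uniform 3-sample shift to recover
est = guided_tracking(pre, post)
```

With the constant 3-sample shift, every window is recovered by a ±2-sample localized search around its neighbour's estimate; only the seed needs the wider exhaustive search, which is the computational saving the abstract describes.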
Abstract:
Approximate Bayesian computation (ABC) has become a popular technique to facilitate Bayesian inference from complex models. In this article we present an ABC approximation designed to perform biased filtering for a hidden Markov model when the likelihood function is intractable. We use a sequential Monte Carlo (SMC) algorithm both to fit and to sample from our ABC approximation of the target probability density. This approach is shown empirically to be more accurate with respect to the original filter than competing methods. The theoretical bias of our method is investigated; it is shown that the bias goes to zero at the expense of increased computational effort. Our approach is illustrated on a constrained sequential lasso for portfolio allocation to 15 constituents of the FTSE 100 share index.
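The ABC filtering idea, replacing an intractable likelihood with a kernel on the distance between simulated and observed data, can be sketched on a toy state-space model. Here the model is actually linear-Gaussian (so the true filter exists and accuracy is easy to check); the tolerance `eps`, model parameters, and particle count are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_filter(y, n_part=2000, eps=0.3, phi=0.9, sx=0.5, sy=0.2):
    """ABC particle filter for x_t = phi*x_{t-1} + sx*v_t,
    y_t = x_t + sy*w_t, treating the likelihood as a black box:
    each particle simulates a pseudo-observation and is weighted by
    a uniform kernel on |y_sim - y_t| <= eps."""
    x = rng.standard_normal(n_part)
    means = []
    for yt in y:
        x = phi * x + sx * rng.standard_normal(n_part)   # propagate
        y_sim = x + sy * rng.standard_normal(n_part)     # simulate data
        w = (np.abs(y_sim - yt) <= eps).astype(float)    # ABC kernel
        if w.sum() == 0.0:
            w[:] = 1.0                                   # degenerate fallback
        w /= w.sum()
        means.append(float(w @ x))                       # filtered mean
        x = x[rng.choice(n_part, n_part, p=w)]           # resample
    return np.array(means)

# Synthetic data from the same model, so accuracy can be checked.
T, phi, sx, sy = 50, 0.9, 0.5, 0.2
xs = np.zeros(T)
ys = np.zeros(T)
x = 0.0
for t in range(T):
    x = phi * x + sx * rng.standard_normal()
    xs[t] = x
    ys[t] = x + sy * rng.standard_normal()

est = abc_filter(ys)
rmse = float(np.sqrt(np.mean((est - xs) ** 2)))
```

Shrinking `eps` reduces the ABC bias but leaves fewer particles inside the kernel, so more particles are needed to keep the filter stable: the bias/computation trade-off the abstract describes.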
Abstract:
Wavelength conversion in the 1.55-μm regime was achieved for the first time in an integrated SOA/DFB laser by modulating the output power of the laser with a light beam of a different wavelength externally injected into the SOA section. In terms of speed, response times as low as 13 ps were observed, though at the expense of a reduced extinction ratio. Overall, these results indicate that operation at tens of Gbit/s should be possible.
Abstract:
The majority of computational studies of confined explosion hazards apply simple and inaccurate combustion models, requiring ad hoc corrections to obtain realistic flame shapes and often predicting overpressures in error by an order of magnitude. This work describes the application of a laminar flamelet model to a series of two-dimensional test cases. The model is computationally efficient, applying an algebraic expression to calculate the flame surface area, an empirical correlation for the laminar flame speed, and a novel unstructured, solution-adaptive numerical grid system which allows important features of the solution to be resolved close to the flame. Accurate flame shapes are predicted, the correct burning rate is predicted near the walls, and an improvement in the predicted overpressures is obtained. However, in these fully turbulent calculations the overpressures are still too high and the flame arrival times too low, indicating the need for a model of the early laminar burning phase. Owing to the computational expense, it is unrealistic to model a laminar flame in the complex geometries involved, and therefore a pragmatic approach is employed which constrains the flame to propagate at the laminar flame speed. Transition to turbulent burning occurs at a specified turbulent Reynolds number. With the laminar phase model included, the predicted flame arrival times increase significantly, but are still too low. However, this has no significant effect on the overpressures, which are predicted accurately for a baffled channel test case where rapid transition occurs once the flame reaches the first pair of baffles. In a channel with obstacles on the centreline, transition is more gradual and the accuracy of the predicted overpressures is reduced. Nevertheless, although the accuracy is still less than desirable in some cases, it is much better than the order-of-magnitude error previously expected.
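The pragmatic laminar-phase treatment described above (propagate at the laminar flame speed until a critical turbulent Reynolds number is exceeded, then switch to turbulent burning) can be caricatured in one dimension. All speeds and the Reynolds-number profile below are invented for illustration:

```python
def flame_arrival_times(x_points, s_lam=0.4, s_turb=8.0,
                        re_turb_of_x=lambda x: 200.0 * x, re_crit=500.0):
    """Return the arrival time of the flame front at each position,
    advancing at the laminar speed s_lam while the local turbulent
    Reynolds number is below re_crit, and at the (much faster)
    turbulent speed s_turb afterwards (1-D, piecewise-constant)."""
    times, t, x_prev = [], 0.0, 0.0
    for x in x_points:
        speed = s_lam if re_turb_of_x(x) < re_crit else s_turb
        t += (x - x_prev) / speed      # time to cross this segment
        times.append(t)
        x_prev = x
    return times

# Transition occurs where 200*x >= 500, i.e. at x = 2.5 in these units.
times = flame_arrival_times([0.5, 1.0, 2.0, 3.0, 4.0])
```

Including the slow laminar phase increases every arrival time, but positions reached after transition are separated by only short intervals: the qualitative behaviour the abstract reports.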
Abstract:
Current design codes for floating offshore structures are based on measures of short-term reliability. That is, a design storm is selected via an extreme value analysis of the environmental conditions and the reliability of the vessel in that design storm is computed. Although this approach yields valuable information on the vessel motions, it does not produce a statistically rigorous assessment of the lifetime probability of failure. An alternative approach is to perform a long-term reliability analysis in which consideration is taken of all sea states potentially encountered by the vessel during the design life. Although permitted as a design approach in current design codes, the associated computational expense generally prevents its use in practice. A new efficient approach to long-term reliability analysis is presented here, the results of which are compared with a traditional short-term analysis for the surge motion of a representative moored FPSO in head seas. This serves to illustrate the failure probabilities actually embedded within current design code methods, and the way in which design methods might be adapted to achieve a specified target safety level.
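The distinction drawn above can be sketched numerically: the long-term failure probability weights the short-term (per-sea-state) failure probability by the probability of occurrence of each sea state, whereas the design-storm approach evaluates a single severe state. The Weibull wave-height distribution and Rayleigh peak-response model below are generic textbook assumptions, not the paper's formulation:

```python
import math

def weibull_pdf(h, k=1.6, lam=2.5):
    """Assumed long-term density of significant wave height Hs (m)."""
    return (k / lam) * (h / lam) ** (k - 1) * math.exp(-(h / lam) ** k)

def p_fail_short_term(h, threshold=25.0):
    """Conditional failure probability in one sea state, assuming a
    Rayleigh-distributed peak surge response with scale ~ Hs."""
    sigma = 2.0 * h
    return math.exp(-threshold ** 2 / (2.0 * sigma ** 2))

# Long-term: integrate over the discretised sea-state distribution.
dh = 0.5
hs_grid = [0.5 + dh * i for i in range(30)]          # 0.5 .. 15.0 m
p_long = sum(p_fail_short_term(h) * weibull_pdf(h) * dh for h in hs_grid)

# Short-term ("design storm"): evaluate one severe state, e.g. Hs = 10 m.
p_short = p_fail_short_term(10.0)
```

The conditional probability in the design storm is far larger than the lifetime-weighted value, which is precisely why the two approaches embed different safety levels; the expense of the long-term approach in practice comes from `p_fail_short_term` being a full vessel-motion analysis rather than a one-line formula.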
Abstract:
The electromechanical coupling behaviour of a novel, highly coiled piezoelectric strip structure is developed in full in order to expound its performance and efficiency. The strip is doubly coiled for compactness and, compared with a standard straight actuator of the same cross-section, the actuator here is shown to offer better generative forces and energy conversion, and substantial actuated displacements, albeit at the expense of a much lower stiffness. The device is therefore proposed for high-displacement, quasi-static applications. © 2006 Elsevier B.V. All rights reserved.
Abstract:
Model-based compensation schemes are a powerful approach to noise-robust speech recognition. Recently there have been a number of investigations into adaptive training and into estimating the noise models used for model adaptation. This paper examines the use of EM-based schemes for both canonical-model and noise estimation, including discriminative adaptive training. One issue that arises when estimating the noise model is a mismatch between the noise estimation approximation and the final model compensation scheme. This paper proposes FA-style compensation, in which this mismatch is eliminated, though at the expense of sensitivity to the initial noise estimates. EM-based discriminative adaptive training is evaluated on in-car and Aurora4 tasks. FA-style compensation is then evaluated in an incremental mode on the in-car task. © 2011 IEEE.
Abstract:
This paper develops a path-following steering control strategy for an articulated heavy goods vehicle. The controller steers the axles of the semi-trailer so that its rear end follows the path of the fifth wheel coupling: for all paths and all speeds. This substantially improves low-speed manoeuvrability, off-tracking, and tyre scrubbing (wear). It also increases high-speed stability, reduces 'rearward amplification', and reduces the propensity to roll over in high-speed transient manoeuvres. The design of a novel experimental heavy goods vehicle with three independent hydraulically actuated steering axles is presented. The path-following controller is tested on the experimental vehicle, at low and high speeds. The field test results are compared with vehicle simulations and found to agree well. The benefits of this steering control approach are quantified. In a low-speed 'roundabout' manoeuvre, low-speed off-tracking was reduced by 73 per cent, from 4.25 m for a conventional vehicle to 1.15 m for the experimental vehicle; swept-path width was reduced by 2 m (28 per cent); peak scrubbing tyre forces were reduced by 83 per cent; and entry tail-swing was eliminated. In an 80 km/h lane-change manoeuvre, peak path error for the experimental vehicle was 33 per cent less than for the conventional vehicle, and rearward amplification of the trailer was 35 per cent less. Increasing the bandwidth of the steering actuators improved the high-speed dynamic performance of the vehicle, but at the expense of increased oil flow.
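The low-speed benefit has a simple kinematic explanation: in steady cornering, the rear of a conventional rigid semi-trailer tracks a circle of radius sqrt(R² − L²) when the fifth wheel moves on radius R (L being the effective trailer wheelbase), so it cuts the corner by R − sqrt(R² − L²); a controller that steers the trailer axles so the rear end retraces the fifth wheel's path removes this off-tracking by construction. A back-of-envelope sketch with illustrative dimensions (not the test vehicle's):

```python
import math

def conventional_offtracking(R, L):
    """Steady-state low-speed off-tracking of a rigid (unsteered)
    semi-trailer: fifth wheel on radius R, trailer wheelbase L."""
    return R - math.sqrt(R * R - L * L)

# Illustrative numbers: a 10 m trailer on a 12 m radius roundabout.
d = conventional_offtracking(12.0, 10.0)   # ~5.37 m of corner-cutting
# An ideal path-following trailer retraces the fifth wheel's path,
# so its steady off-tracking is zero.
```

The experimental 73 per cent reduction reported above (4.25 m to 1.15 m) reflects the same mechanism, limited in practice by actuator bandwidth and path-storage accuracy.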
Abstract:
We describe a method for verifying seismic modelling parameters. It is equivalent to performing several iterations of unconstrained least-squares migration (LSM). The approach allows the comparison of modelling/imaging parameter configurations with greater confidence than simply viewing the migrated images. The method is best suited to determining discrete parameters but can be used for continuous parameters albeit with greater computational expense.
Abstract:
A severe shortage of good quality donor cornea is now an international crisis in public health. Alternatives for donor tissue need to be urgently developed to meet the increasing demand for corneal transplantation. Hydrogels have been widely used as scaffolds for corneal tissue regeneration due to their large water content, similar to that of native tissue. However, these hydrogel scaffolds lack the fibrous structure that functions as a load-bearing component in the native tissue, resulting in poor mechanical performance. This work shows that mechanical properties of compliant hydrogels can be substantially enhanced with electrospun nanofiber reinforcement. Electrospun gelatin nanofibers were infiltrated with alginate hydrogels, yielding transparent fiber-reinforced hydrogels. Without prior crosslinking, electrospun gelatin nanofibers improved the tensile elastic modulus of the hydrogels from 78±19 kPa to 450±100 kPa. Stiffer hydrogels, with elastic modulus of 820±210 kPa, were obtained by crosslinking the gelatin fibers with carbodiimide hydrochloride in ethanol before the infiltration process, but at the expense of transparency. The developed fiber-reinforced hydrogels show great promise as mechanically robust scaffolds for corneal tissue engineering applications.
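As a sanity check on the scale of the reported stiffening, the Voigt rule-of-mixtures upper bound for a fibre-reinforced composite gives the same order of magnitude. The fibre modulus and volume fraction below are assumptions for illustration, not values measured in this work:

```python
def voigt_modulus(vf, ef, em):
    """Voigt (rule-of-mixtures) upper bound on composite modulus:
    E_c = Vf*Ef + (1 - Vf)*Em, fibres loaded in parallel with matrix."""
    return vf * ef + (1.0 - vf) * em

em = 78e3                          # matrix (alginate hydrogel) modulus, Pa
ef = 5e6                           # assumed gelatin nanofibre modulus, Pa
ec = voigt_modulus(0.08, ef, em)   # assumed 8% fibre volume fraction
```

Under these assumed inputs `ec` comes out around 470 kPa, the same order as the measured 450±100 kPa for the uncrosslinked fibre composite, which is consistent with the fibres acting as the load-bearing phase.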
Abstract:
Simple air-path models for modern (VGT/EGR equipped) diesel engines are in common use, and have been reported in the literature. This paper addresses some of the shortcomings of control-oriented models to allow better prediction of the cylinder charge properties. A fast response CO2 analyzer is used to validate the model by comparing the recorded and predicted CO2 concentrations in both the intake port and exhaust manifold of one of the cylinders. Data showing the recorded NOx emissions and exhaust gas opacity during a step change in engine load illustrate the spikes in both NOx and smoke seen during transient conditions. The predicted cylinder charge properties from the model are examined and compared with the measured NOx and opacity. Together, the emissions data and charge properties paint a consistent picture of the phenomena occurring during the transient. Alternative strategies for the fueling and cylinder charge during these load transients are investigated and discussed. Experimental results are presented showing that spikes in both NOx and smoke can be avoided at the expense of some loss in torque response. Even if the torque response must be maintained, it is demonstrated that it is still possible to eliminate spikes in NOx emissions for the transient situation being examined. Copyright © 2006 SAE International.
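The kind of control-oriented air-path model being validated above can be caricatured by a perfectly mixed filling-and-emptying intake manifold, which shows why the intake CO2 concentration (and hence cylinder charge dilution) lags an EGR step during a transient. Every number below is an illustrative assumption:

```python
def simulate_intake_co2(steps=400, dt=1e-3, m_man=0.02,
                        mdot=0.05, egr_frac=0.25, co2_exh=0.08):
    """Intake-manifold CO2 mole fraction under perfect mixing:
        m_man * dC/dt = mdot * (C_in - C),
    where the inflow is a blend of fresh air (~0.0004 CO2) and
    recirculated exhaust gas at fraction egr_frac (forward Euler)."""
    c = 0.0004                                       # start on fresh air
    c_in = (1 - egr_frac) * 0.0004 + egr_frac * co2_exh
    hist = []
    for _ in range(steps):
        c += dt * mdot * (c_in - c) / m_man          # first-order lag
        hist.append(c)
    return hist

hist = simulate_intake_co2()   # 0.4 s of response to an EGR step
```

The manifold time constant here is m_man/mdot = 0.4 s, so the dilution seen by the cylinder lags the EGR valve; it is this lag, mismatched against fuelling, that produces the NOx and smoke spikes discussed above.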
Abstract:
Design optimisation of compressor systems is a computationally expensive problem due to the large number of variables, the complicated design space and the expense of the analysis tools. One approach to reduce the expense of the process and make it achievable in industrial timescales is to employ multi-fidelity techniques, which utilise more rapid tools in conjunction with the highest-fidelity analyses. The complexity of the compressor design landscape is such that the starting point for these optimisations can influence the achievable results; these starting points are often existing (optimised) compressor designs, which form a limited set in terms of both quantity and diversity of design. To facilitate the multi-fidelity optimisation procedure, a compressor synthesis code was developed which allowed the performance attributes (e.g. stage loadings, inlet conditions) to be stipulated, enabling the generation of a variety of compressors covering a range of both design topology and quality to act as seeding geometries for the optimisation procedures. Analysis of the performance of the multi-fidelity optimisation system when restricting its exploration space to topologically different areas of the design space indicated little advantage over allowing the system to search the design space itself. However, comparing results from optimisations started from seed designs with different aerodynamic qualities indicated that improved performance could be achieved by starting an optimisation from a higher-quality point, and thus that the choice of starting point did affect the final outcome of the optimisations. Both investigations indicated that the performance gains through the optimisation were largely achieved in the early exploration of the design space, where the multi-fidelity speedup could be exploited; extending this region is therefore likely to have the greatest effect on the performance of the optimisation system.
© 2013 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
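The multi-fidelity strategy above can be sketched generically: screen many candidate designs with a cheap, biased low-fidelity model and spend the expensive high-fidelity budget only on the promoted few. The toy objective functions below stand in for the compressor analyses; nothing here is from the paper's system:

```python
import random

random.seed(0)

def lo_fi(x):
    """Cheap, deliberately biased surrogate (optimum shifted to 0.6)."""
    return (x - 0.6) ** 2 + 0.05

def hi_fi(x):
    """Expensive 'truth' model (optimum at 0.5); calls are budgeted."""
    return (x - 0.5) ** 2

def multi_fidelity_search(n_candidates=200, n_promote=5):
    cands = [random.random() for _ in range(n_candidates)]
    promoted = sorted(cands, key=lo_fi)[:n_promote]   # low-fi screening
    n_hifi = len(promoted)                            # high-fi budget spent
    best = min(promoted, key=hi_fi)                   # high-fi selection
    return best, n_hifi

best, n_hifi = multi_fidelity_search()
```

Only 5 high-fidelity evaluations are spent instead of 200, and the result lands near the true optimum despite the surrogate's bias; the seeding-geometry question studied in the paper corresponds to how `cands` is generated in this sketch.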
Abstract:
Multi-impact of projectiles on thin 304 stainless steel plates is investigated to assess the degradation of ballistic performance and to characterise the inherent mechanisms. Ballistic degradation is assessed by means of a double impact of rigid spheres at the same site on a circular clamped plate; the limiting velocity of the second impact is altered by the velocity of the antecedent impact. Finite element analyses were used to elucidate the experimental results and understand the underlying mechanisms that give rise to the performance degradation. The effect of strength and ductility on the single- and multi-impact performance was also considered. The model captured the experimental results with excellent agreement; moreover, the material parameters used within the model were obtained exclusively from published works, with no fitting or calibration required. An attempt is made to quantify the elevation of the ballistic limit of thin plates by the dynamic mechanism of travelling hinges. Key conclusions: the multi-hit performance scales linearly with the single-hit performance, and strength has a significantly greater effect on the ballistic limit than ductility, even at the expense of toughness. © 2014 Elsevier Ltd.