34 results for Penalty-based function
Abstract:
Spherical indentation creep testing was used to examine the effect of hydration state on bone mechanical properties. Analysis of creep data was based on the elastic-viscoelastic correspondence principle and utilized a direct solution for the finite loading-rate experimental conditions. The zero-time shear modulus was computed from the creep compliance function and compared to the indentation modulus obtained via conventional indentation analysis, based on an elastic unloading response. The method was validated using a well-known polymer material under three different loading conditions. The method was applied to bone samples prepared with different water content by partial exchange with ethanol, where 70% ethanol was considered as the baseline condition. A hydration increase was associated with a 43% decrease in stiffness, while a hydration decrease resulted in a 20% increase in bone tissue stiffness.
Abstract:
Bone is an anisotropic material, and its mechanical properties are determined by its microstructure as well as its composition. Mechanical properties of bone are a consequence of the proportions of, and the interactions between, mineral, collagen and water. Water plays an important role in maintaining the mechanical integrity of the composite, but the manner in which water interacts within the ultrastructure is unclear. Dentine, being an isotropic two-dimensional structure, presents a homogeneous composite with which to examine dehydration effects. Nanoindentation methods for determining viscoelastic properties have recently been developed and are a subject of great interest. Here, one method based on elastic-viscoelastic correspondence for 'ramp and hold' creep testing (Oyen, J. Mater. Res., 2005) has been used to analyze the viscoelastic behavior of polymeric and biological materials. The 'ramp and hold' method allows the shear modulus at time zero to be determined by fitting the displacement during the maximum-load hold. Changes in the viscoelastic properties of bone and dentine were examined as the material was systematically dehydrated in a series of water:solvent mixes. Samples of equine dentine were sectioned and cryo-polished. Shear modulus was obtained by nanoindentation using spherical indenters with a maximum-load hold of 120 s. Samples were tested sequentially in different solvent concentrations: 70% ethanol to 50% ethanol, 70% ethanol to 100% ethanol, 70% ethanol to 70% methanol to 100% methanol, and 70% ethanol to 100% acetone, after storage in each condition for 24 h. By selectively removing and then replacing water from the composite, insights into the ultrastructure of the tissue can be gained from the corresponding changes in the experimentally determined moduli, as well as an understanding of the complete reversibility of the dehydration process. © 2006 Materials Research Society.
Abstract:
Model-based approaches to handle additive and convolutional noise have been extensively investigated and used. However, the application of these schemes to handling reverberant noise has received less attention. This paper examines the extension of two standard additive/convolutional noise approaches to handling reverberant noise. The first is an extension of vector Taylor series (VTS) compensation, reverberant VTS, where a mismatch function including reverberant noise is used. The second scheme modifies constrained MLLR to allow a wide span of frames to be taken into account and projected into the required dimensionality. To allow additive noise to be handled, both these schemes are combined with standard VTS. The approaches are evaluated and compared on two tasks, MC-WSJ-AV, and a reverberant simulated version of AURORA-4. © 2011 IEEE.
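For reference, the standard additive/convolutional mismatch function that conventional VTS linearises, and that reverberant VTS generalises to span multiple frames, can be written in the cepstral domain as follows (this is the textbook form, not a formula quoted from the paper):

```latex
% x = clean-speech cepstrum, h = convolutional channel,
% n = additive noise, C = (pseudo-invertible) DCT matrix.
% Reverberant VTS replaces this with a mismatch function over
% a window of preceding frames.
y = x + h + C \log\!\left(1 + \exp\!\left(C^{-1}\,(n - x - h)\right)\right)
```

VTS compensation proceeds by taking a first-order Taylor expansion of this function around the current clean-speech and noise estimates.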
Abstract:
This paper develops an algorithm for finding sparse signals from limited observations of a linear system. We assume an adaptive Gaussian model for sparse signals. This model results in a least-squares problem with an iteratively reweighted L2 penalty that approximates the L0-norm. We propose a fast algorithm to solve the problem within a continuation framework. In our examples, we show that the correct sparsity map and sparsity level are gradually learnt during the iterations, even when the number of observations is reduced or when observation noise is present. In addition, with the help of sophisticated interscale signal models, the algorithm is able to recover signals to a better accuracy, and with a reduced number of observations, than typical L1-norm and reweighted L1-norm methods. ©2010 IEEE.
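The core idea, a reweighted L2 penalty with continuation that approximates the L0-norm, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the problem sizes, hyper-parameters and continuation schedule are assumptions, and the adaptive Gaussian and interscale models are omitted.

```python
import numpy as np

def irls_sparse(A, y, lam=1e-2, iters=40):
    """Sparse recovery via an iteratively reweighted L2 penalty.

    Each iteration solves a ridge-like least-squares problem whose
    per-coefficient weights lam / (x_i^2 + eps) make the penalty
    sum_i lam * x_i^2 / (x_i^2 + eps) approach lam * ||x||_0 as
    eps -> 0; eps is shrunk gradually (continuation).
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # least-squares start
    eps = 1.0
    for _ in range(iters):
        W = np.diag(lam / (x**2 + eps))        # reweighted L2 penalty
        x = np.linalg.solve(A.T @ A + W, A.T @ y)
        eps = max(eps * 0.7, 1e-6)             # continuation step
    return x

# Toy example: recover a 2-sparse signal from 20 of 50 samples.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[5, 30]] = [1.5, -2.0]
x_hat = irls_sparse(A, A @ x_true)
print(sorted(np.argsort(np.abs(x_hat))[-2:].tolist()))  # estimated support
```

The continuation on `eps` is what lets the iterations start from a smooth, well-conditioned problem and sharpen toward the L0-like penalty without getting trapped early.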
Abstract:
This paper is in two parts and addresses two ways of getting more information out of the RF signal from three-dimensional (3D) mechanically-swept medical ultrasound. The first topic is the use of non-blind deconvolution to improve the clarity of the data, particularly in the direction normal to the individual B-scans. The second topic is strain imaging: we present a robust and efficient approach to the estimation and display of axial strain information. For deconvolution, we calculate an estimate of the point-spread function at each depth in the image using Field II. This is used as part of an Expectation Maximisation (EM) framework in which the ultrasound scatterer field is modelled as the product of (a) a smooth function and (b) a fine-grain varying function. In the E step, a Wiener filter is used to estimate the scatterer field based on an assumed piecewise smooth component. In the M step, wavelet de-noising is used to estimate the piecewise smooth component from the scatterer field. For strain imaging, we use a quasi-static approach with efficient algorithms. Our contributions lie in robust 3D displacement tracking, point-wise quality-weighted strain estimation, and a stable display that shows not only strain but also an indication of the quality of the data at each point in the image. This enables clinicians to see where the strain estimate is reliable and where it is mostly noise. For deconvolution, we present in-vivo images and simulations with quantitative performance measures. With the blurred 3D data taken as 0 dB, we get an improvement in signal-to-noise ratio of 4.6 dB with a Wiener filter alone, 4.36 dB with the ForWaRD method and 5.18 dB with our EM algorithm. For strain imaging we show images based on 2D and 3D data and describe how a full 3D analysis can be performed in about 20 seconds on a typical machine. We will also present initial results of our clinical study to explore the applications of our system in our local hospital. © 2008 IEEE.
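The Wiener-filter building block used in the E step above can be sketched in one dimension. This is only a toy stand-in: the Gaussian point-spread function (PSF) and noise level below are illustrative assumptions, not Field II estimates, and real RF data would be 2-D/3-D.

```python
import numpy as np

def wiener_deconvolve(blurred, psf_full, noise_power=1e-3):
    """Frequency-domain Wiener deconvolution of a 1-D signal."""
    H = np.fft.fft(psf_full)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)   # Wiener filter
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

n = 256
x = np.zeros(n)
x[[60, 128, 190]] = [1.0, -0.5, 0.8]          # sparse scatterer field

t = np.arange(-8, 9)
psf = np.exp(-t**2 / 8.0)                     # assumed Gaussian PSF
psf /= psf.sum()
psf_full = np.zeros(n)
psf_full[:17] = psf
psf_full = np.roll(psf_full, -8)              # centre the PSF at index 0

# Circular blur, then deconvolve.
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf_full)))
x_hat = wiener_deconvolve(y, psf_full)
print(int(np.argmax(x_hat)))                  # strongest recovered scatterer
```

The `noise_power` term regularises the inversion: frequencies where the PSF response is weak relative to the noise are attenuated rather than amplified.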
Abstract:
The optimization of dialogue policies using reinforcement learning (RL) is now an accepted part of the state of the art in spoken dialogue systems (SDS). Yet, it is still the case that the commonly used training algorithms for SDS require a large number of dialogues and hence most systems still rely on artificial data generated by a user simulator. Optimization is therefore performed off-line before releasing the system to real users. Gaussian Processes (GPs) for RL have recently been applied to dialogue systems. One advantage of GPs is that they compute an explicit measure of uncertainty in the value function estimates computed during learning. In this paper, a class of novel learning strategies is described which use uncertainty to control exploration on-line. Comparisons between several exploration schemes show that significant improvements to learning speed can be obtained and that rapid and safe online optimisation is possible, even on a complex task. Copyright © 2011 ISCA.
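The uncertainty-driven exploration idea can be illustrated with a deliberately simplified sketch: a Gaussian bandit, not the paper's GP-based dialogue setting, in which each action's value estimate carries an explicit posterior variance and exploration is directed at the most uncertain action. All parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_values = np.array([1.0, 1.5, 0.5])   # unknown action values
noise = 0.5                               # known reward noise std

mean = np.zeros(3)                        # posterior mean per action
var = np.full(3, 10.0)                    # posterior variance (broad prior)

for step in range(500):
    if rng.uniform() < 0.2:
        a = int(np.argmax(var))           # explore: most uncertain action
    else:
        a = int(np.argmax(mean))          # exploit: best current estimate
    r = true_values[a] + noise * rng.standard_normal()
    # Conjugate Gaussian update with known observation noise.
    precision = 1.0 / var[a] + 1.0 / noise**2
    mean[a] = (mean[a] / var[a] + r / noise**2) / precision
    var[a] = 1.0 / precision

print(int(np.argmax(mean)))               # index of the learnt best action
```

Because exploration is targeted at whichever estimate is least certain, uncertainty shrinks where it matters while exploitation remains safe, which is the intuition behind uncertainty-controlled on-line exploration.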
Abstract:
We study the role of connectivity on the linear and nonlinear elastic behavior of amorphous systems using a two-dimensional random network of harmonic springs as a model system. A natural characterization of these systems arises in terms of the network coordination relative to that of an isostatic network $\delta z$; a floppy network has $\delta z<0$, while a stiff network has $\delta z>0$. Under the influence of an externally applied load, we observe that the responses of both floppy and stiff networks are controlled by the same critical point, corresponding to the onset of rigidity. We use numerical simulations to compute the exponents which characterize the shear modulus, the amplitude of non-affine displacements, and the network stiffening as a function of $\delta z$; we derive these exponents theoretically and make predictions for the mechanical response of glasses and fibrous networks.
Abstract:
Chemical looping combustion (CLC) uses a metal oxide (the oxygen carrier) to provide oxygen for the combustion of a fuel and gives an inherent separation of pure CO2 with minimal energy penalty. In solid-fuel CLC, volatile matter will interact with oxygen carriers. Here, the interaction between iron-based oxygen carriers and a volatile hydrocarbon (n-heptane) was investigated in both a laboratory-scale fluidised bed and a thermogravimetric analyser (TGA). Experiments were undertaken to characterise the thermal decomposition of the n-heptane occurring in the presence and in the absence of the oxygen carrier. In a bed of inert particles, carbon deposition increased with temperature and acetylene appeared as a possible precursor. For a bed of carrier consisting of pure Fe2O3, carbon deposition occurred once the Fe2O3 was fully reduced to Fe. When the Fe2O3 was doped with 10 mol % Al2O3 (Fe90Al), deposition started when the carrier was reduced to a mixture of Fe and FeAl2O4, the latter being very unreactive. Furthermore, when pure Fe2O3 was fully reduced to Fe, agglomeration of the fluidised bed occurred. However, Fe90Al did not give agglomeration even after extended reduction. The results suggest that Fe90Al is promising for the CLC of solid fuels. © 2012 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
Abstract:
Accurate and efficient computation of the distance function d for a given domain is important for many areas of numerical modeling. Partial differential equation based distance function algorithms (e.g. of Hamilton-Jacobi type) have desirable computational efficiency and accuracy. In this study, as an alternative, a Poisson equation based level set (distance function) is considered and solved using the meshless boundary element method (BEM). The application of this approach to shape topology analysis, including the medial axis for domain decomposition, geometric de-featuring and other aspects of numerical modeling, is assessed. © 2011 Elsevier Ltd. All rights reserved.
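A Poisson-based distance function of this general family can be sketched in one dimension, where it is exact: solve the Poisson problem phi'' = -1 with phi = 0 on the boundary, then recover d from phi and its gradient. Plain finite differences are substituted below for the paper's meshless BEM solver, purely for illustration.

```python
import numpy as np

# Poisson-based wall distance on [0, 1]:
#   solve phi'' = -1,  phi(0) = phi(1) = 0,
#   then d = -|phi'| + sqrt(phi'^2 + 2*phi),
# which reproduces the exact distance min(x, 1 - x) here.

n = 101                                   # grid points incl. boundaries
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Interior tridiagonal Laplacian for the Poisson solve.
m = n - 2
A = (np.diag(-2.0 * np.ones(m)) +
     np.diag(np.ones(m - 1), 1) +
     np.diag(np.ones(m - 1), -1)) / h**2
phi = np.zeros(n)
phi[1:-1] = np.linalg.solve(A, -np.ones(m))

g = np.gradient(phi, x)                   # phi'
d = -np.abs(g) + np.sqrt(g**2 + 2.0 * phi)

print(float(np.max(np.abs(d - np.minimum(x, 1.0 - x)))))  # ~0
```

In higher dimensions the same construction gives an approximate (smoothed) distance field, which is one reason Poisson-based level sets are attractive for medial-axis and de-featuring analyses: they avoid the upwinding machinery of Hamilton-Jacobi solvers.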
Abstract:
We investigate the electrical properties of Silicon-on-Insulator photonic crystals as a function of doping level and air filling factor. A very interesting trade-off between conductivity and optical losses in L3 cavities is also found. © 2011 IEEE.
Abstract:
We apply adjoint-based sensitivity analysis to a time-delayed thermo-acoustic system: a Rijke tube containing a hot wire. We calculate how the growth rate and frequency of small oscillations about a base state are affected either by a generic passive control element in the system (the structural sensitivity analysis) or by a generic change to its base state (the base-state sensitivity analysis). We illustrate the structural sensitivity by calculating the effect of a second hot wire with a small heat-release parameter. In a single calculation, this shows how the second hot wire changes the growth rate and frequency of the small oscillations, as a function of its position in the tube. We then examine the components of the structural sensitivity in order to determine the passive control mechanism that has the strongest influence on the growth rate. We find that a force applied to the acoustic momentum equation in the opposite direction to the instantaneous velocity is the most stabilizing feedback mechanism. We also find that its effect is maximized when it is placed at the downstream end of the tube. This feedback mechanism could be supplied, for example, by an adiabatic mesh. We illustrate the base-state sensitivity by calculating the effects of small variations in the damping factor, the heat-release time-delay coefficient, the heat-release parameter, and the hot-wire location. The successful application of sensitivity analysis to thermo-acoustics opens up new possibilities for the passive control of thermo-acoustic oscillations by providing gradient information that can be combined with constrained optimization algorithms in order to reduce linear growth rates. © Cambridge University Press 2013.
Abstract:
In any thermoacoustic analysis, it is important to predict not only linear frequencies and growth rates, but also the amplitudes and frequencies of any limit cycles. The Flame Describing Function (FDF) approach is a quasi-linear analysis which allows the prediction of both the linear and nonlinear behaviour of a thermoacoustic system. This means that one can predict linear growth rates and frequencies, and also the amplitudes and frequencies of any limit cycles. The FDF achieves this by assuming that the acoustics are linear and that the flame, which is the only nonlinear element in the thermoacoustic system, can be adequately described by considering only its response at the frequency at which it is forced. Therefore any harmonics generated by the flame's nonlinear response are not considered. This implies that these nonlinear harmonics are small or that they are sufficiently filtered out by the linear dynamics of the system (the low-pass filter assumption). In this paper, a flame model with a simple saturation nonlinearity is coupled to simple duct acoustics, and the success of the FDF in predicting limit cycles is studied over a range of flame positions and acoustic damping parameters. Although these two parameters affect only the linear acoustics and not the nonlinear flame dynamics, they determine the validity of the low-pass filter assumption made in applying the flame describing function approach. Their importance is highlighted by studying the level of success of an FDF-based analysis as they are varied. This is achieved by comparing the FDF's prediction of limit-cycle amplitudes to the amplitudes seen in time domain simulations.
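The describing function of a simple saturation nonlinearity, of the kind coupled to the duct acoustics above, can be computed numerically by forcing it with a sinusoid and keeping only the first harmonic, exactly the truncation the FDF approach makes. The amplitude grid and saturation level below are illustrative assumptions.

```python
import numpy as np

def describing_function(amplitude, sat=1.0, n=4096):
    """Quasi-linear gain of a saturation nonlinearity at forcing amplitude A.

    Project the saturated response to A*sin(t) onto its first harmonic;
    all higher harmonics are discarded (the FDF truncation).
    """
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    u = amplitude * np.sin(t)            # sinusoidal forcing
    y = np.clip(u, -sat, sat)            # saturation nonlinearity
    b1 = 2.0 / n * np.sum(y * np.sin(t)) # first-harmonic amplitude
    return b1 / amplitude                # gain relative to the input

print(describing_function(0.5))          # below saturation: gain 1
print(describing_function(4.0))          # deep saturation: gain < 1
```

The gain is unity below the saturation threshold and rolls off as the amplitude grows; a limit cycle is predicted where this amplitude-dependent gain closes the loop with the linear acoustics at unity total gain.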
Abstract:
This paper presents a Bayesian probabilistic framework to assess soil properties and model uncertainty to better predict excavation-induced deformations using field deformation data. The potential correlations between deformations at different depths are accounted for in the likelihood function needed in the Bayesian approach. The proposed approach also accounts for inclinometer measurement errors. The posterior statistics of the unknown soil properties and the model parameters are computed using the Delayed Rejection (DR) method and the Adaptive Metropolis (AM) method. As an application, the proposed framework is used to assess the unknown soil properties of multiple soil layers using deformation data at different locations and for incremental excavation stages. The developed approach can be used for the design of optimal revisions for supported excavation systems. © 2010 ASCE.
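The Adaptive Metropolis (AM) step used for the posterior computation above can be sketched as follows. This is a minimal illustration in the style of Haario et al.: the soil-property posterior is replaced by a toy 2-D Gaussian, the Delayed Rejection stage is omitted, and the adaptation schedule is an assumption.

```python
import numpy as np

def log_post(theta):
    """Toy log-posterior (standard 2-D Gaussian) standing in for the
    soil-property posterior."""
    return -0.5 * np.sum(theta**2)

rng = np.random.default_rng(1)
n_iter, dim = 20000, 2
chain = np.empty((n_iter, dim))
theta = np.zeros(dim)
cov = 0.1 * np.eye(dim)          # initial proposal covariance
sd = 2.4**2 / dim                # standard AM scaling factor

for i in range(n_iter):
    prop = rng.multivariate_normal(theta, cov)
    # Metropolis accept/reject (symmetric proposal).
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain[i] = theta
    # Adapt the proposal covariance from the chain history so far.
    if i >= 1000 and i % 500 == 0:
        cov = sd * np.cov(chain[:i].T) + 1e-6 * np.eye(dim)

print(chain[5000:].mean(axis=0))  # posterior means, close to zero
```

Adapting the proposal from the accumulated samples lets the sampler pick up the posterior's scale and correlations automatically, which is why AM (often paired with DR) is a common choice for correlated geotechnical parameters.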
Abstract:
The problem of robust stabilization of nonlinear systems in the presence of input uncertainties is of great importance in practical implementation. Stabilizing control laws may not be robust to this type of uncertainty, especially if cancellation of nonlinearities is used in the design. By exploiting a connection between robustness and optimality, "domination redesign" of Sontag's formula, based on a control Lyapunov function (CLF), has been shown to possess robustness to static and dynamic input uncertainties. In this paper we provide a sufficient condition for the domination redesign to apply. This condition relies on properties of local homogeneous approximations of the system and of the CLF. We show that an inverse optimal control law may not exist when these conditions are violated and illustrate how these conditions may guide the choice of a CLF which is suitable for domination redesign. © 1999 Elsevier Science B.V. All rights reserved.
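For reference, Sontag's universal formula referred to above can be stated, for the single-input case, as follows (this is the standard textbook form, not a formula quoted from the paper):

```latex
% For \dot{x} = f(x) + g(x)u with CLF V, write
% a(x) = L_f V(x) and b(x) = L_g V(x). Sontag's formula is
u(x) =
\begin{cases}
-\dfrac{a(x) + \sqrt{a(x)^2 + b(x)^4}}{b(x)}, & b(x) \neq 0,\\[6pt]
0, & b(x) = 0,
\end{cases}
```

which renders $\dot{V} = a + b\,u < 0$ wherever the CLF condition holds; the domination redesign scales this control so that the resulting law inherits the robustness margins of an inverse optimal design.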
Abstract:
The double heterogeneity characterising pebble-bed high-temperature reactors (HTRs) makes Monte Carlo based calculation tools the most suitable for detailed core analyses. These codes can successfully be used to predict the isotopic evolution of the fuel of such cores during irradiation. At the moment, many computational systems based on MCNP are available for performing depletion calculations. All of these systems use MCNP to supply problem-dependent fluxes and/or microscopic cross sections to the depletion module, which then calculates the isotopic evolution of the fuel by solving Bateman's equations. In this paper, a comparative analysis of three different MCNP-based depletion codes is performed: Monteburns2.0, MCNPX2.6.0 and BGCore. The Monteburns code can be considered the reference code for HTR calculations, since it has already been verified during the HTR-N and HTR-N1 EU projects. All calculations have been performed on a reference model representing an infinite lattice of thorium-plutonium fuelled pebbles. The evolution of k-inf as a function of burnup has been compared, as well as the inventory of the important actinides. The k-inf comparison among the codes shows good agreement over the entire burnup history, with a maximum difference below 1%. The actinide inventory predictions also agree well; however, a significant discrepancy in the Am and Cm concentrations calculated by MCNPX, as compared to those of Monteburns and BGCore, has been observed. This is mainly due to the different Am-241 (n,γ) branching ratios utilized by the codes. An important advantage of BGCore is the significantly lower execution time required for the considered depletion calculations: while providing reasonably accurate results, BGCore runs the depletion problem about two times faster than Monteburns and two to five times faster than MCNPX. © 2009 Elsevier B.V. All rights reserved.