954 results for INVERSE PROBLEM


Relevance: 60.00%

Abstract:

A novel methodology based on instrumented indentation is developed to determine the mechanical properties of amorphous materials which present cohesive-frictional behaviour. The approach is based on the concept of a universal hardness equation, which results from the assumption of a characteristic indentation pressure proportional to the hardness. The actual universal hardness equation is obtained from a detailed finite element analysis of the process of sharp indentation for a very wide range of material properties, and the inverse problem (i.e. how to extract the elastic modulus, the compressive yield strength and the friction angle from instrumented indentation) is solved. The applicability and limitations of the novel approach are highlighted. Finally, the model is validated against experimental data in metallic and ceramic glasses as well as polymers, covering a wide range of amorphous materials in terms of elastic modulus, yield strength and friction angle.
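A minimal sketch of the kind of inversion step described above, under purely hypothetical assumptions: the forward relations, property ranges and "measured" values below are placeholders, not the paper's finite-element-derived universal hardness equation, and several independent indentation observables are used so that the three properties are identifiable.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(E, sigma_y, phi):
    """Hypothetical forward model mapping elastic modulus E (GPa), compressive
    yield strength sigma_y (GPa) and friction angle phi (rad) to indentation
    observables: hardness H, loading curvature C and a residual-depth ratio w.
    Placeholder relations only -- the paper derives its equation from FE analysis."""
    H = sigma_y * (1.0 + 0.5 * np.log(E / sigma_y)) * (1.0 + np.tan(phi))
    C = 1.5 * H * (1.0 - 0.2 * sigma_y / E)
    w = 1.0 - 2.0 * H / E
    return np.array([H, C, w])

def residuals(params, observed):
    return forward_model(*params) - observed

# Illustrative "measurements", generated from E=100, sigma_y=1.5, phi=0.25.
observed = np.array([5.84, 8.73, 0.883])

fit = least_squares(residuals, x0=[70.0, 2.0, 0.2], args=(observed,),
                    bounds=([1.0, 0.1, 0.0], [500.0, 10.0, 0.7]))
E_est, sy_est, phi_est = fit.x          # extracted elastic modulus, yield strength, friction angle
print(E_est, sy_est, phi_est)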

Relevance: 60.00%

Abstract:

The possibility of applying structural reliability theory to the computation of the safety margins of excavated tunnels is presented. After a brief description of the existing procedures and the limitations of the safety coefficients as they are usually defined, the proposed limit states are specified, together with the random variables and the applied methodology. Simple examples are also presented, some of them based on actual cases, and finally some conclusions are drawn, the most important one being the possibility of using the method to solve the inverse problem of identification.
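For readers unfamiliar with the reliability computation the abstract refers to, the sketch below shows the generic recipe: define a limit state function g(X) over the random variables and estimate the failure probability P[g(X) <= 0] by Monte Carlo simulation. The limit state, the distributions and all numbers are illustrative inventions, not the paper's.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000

# Illustrative random variables for an excavated tunnel (not the paper's):
c      = rng.lognormal(mean=np.log(150.0), sigma=0.2, size=n)   # cohesion, kPa
tanphi = rng.normal(loc=0.6, scale=0.05, size=n)                # friction coefficient
sigma0 = rng.normal(loc=400.0, scale=40.0, size=n)              # in-situ stress, kPa

# Illustrative limit state: available shear capacity minus acting shear stress.
# g > 0 is the safe domain; g <= 0 means the limit state is exceeded.
capacity = c + 0.5 * sigma0 * tanphi
demand   = 0.5 * sigma0
g = capacity - demand

pf = np.mean(g <= 0.0)      # Monte Carlo estimate of the failure probability
beta = -norm.ppf(pf)        # equivalent (generalized) reliability index
print(f"P_f ~ {pf:.4f}, beta ~ {beta:.2f}")
```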

Relevance: 60.00%

Abstract:

In tunnel construction, as in every engineering work, decisions must usually be made with incomplete data. Nevertheless, consciously or not, the builder weighs the risks (even if only subjectively) so that a cost can be offered. The objective of this paper is to recall the existence of a methodology for treating the uncertainties in the data, so that their effect on the output of the computational model used can be seen and the failure probability or the safety margin of a structure can be estimated. In this scheme it is possible to include subjective knowledge of the statistical properties of the random variables and, using a numerical model consistent with the degree of complexity appropriate to the problem at hand, to make rationally based decisions. As will be shown, the method makes it possible to quantify the relative importance of the random variables and, under certain conditions, it can also be used to solve the inverse problem. It is therefore a method very well suited both to the design and to the control phases of tunnel construction.
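One common way to quantify the relative importance of the random variables, as mentioned above, is to propagate a Monte Carlo sample through the model and rank the inputs by the strength of their correlation with the output. The toy convergence formula and the input distributions below are illustrative only and stand in for whatever numerical model the problem at hand requires.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Illustrative inputs for a tunnel response model (not the paper's):
E  = rng.normal(2.0e3, 3.0e2, n)   # rock mass modulus (MPa)
K0 = rng.normal(0.8, 0.1, n)       # lateral stress ratio
H  = rng.normal(50.0, 5.0, n)      # overburden depth (m)

# Toy response: radial convergence of the opening in mm (purely illustrative formula).
u = (1.0 + K0) * 0.025 * H * 1.0e3 / E

# Rank variables by the magnitude of their correlation with the output --
# a crude but common global importance measure for roughly monotone models.
for name, x in [("E", E), ("K0", K0), ("H", H)]:
    r = np.corrcoef(x, u)[0, 1]
    print(f"{name}: correlation with convergence = {r:+.2f}")
```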

Relevance: 60.00%

Abstract:

Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of pilot points as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), use of pilot points can add a great deal of flexibility to the calibration process at the same time as it makes this process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which recognize the stochastic characterization of an area at the same time that they satisfy the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
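A minimal sketch of the pilot-point parameterization described above: hydraulic property values are defined only at a few points, interpolated onto the model grid, and it is those few values that the estimation software (e.g. PEST) adjusts during calibration. Inverse-distance weighting is used here only to keep the sketch short (kriging is the usual choice), and all coordinates and values are illustrative.

```python
import numpy as np

def interpolate_pilot_points(pilot_xy, pilot_logk, grid_xy, power=2.0):
    """Spread log-K values defined at pilot points onto model cells by
    inverse-distance weighting (kriging is more usual in PEST workflows)."""
    d = np.linalg.norm(grid_xy[:, None, :] - pilot_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                     # avoid division by zero at a pilot point
    w = d ** (-power)
    w /= w.sum(axis=1, keepdims=True)
    return w @ pilot_logk

# Five pilot points in a 1 km x 1 km domain; the log10-K values are what calibration estimates.
pilot_xy   = np.array([[200., 200.], [800., 200.], [500., 500.],
                       [200., 800.], [800., 800.]])
pilot_logk = np.array([-4.0, -3.5, -4.5, -3.8, -4.2])

xs, ys = np.meshgrid(np.linspace(0., 1000., 50), np.linspace(0., 1000., 50))
grid_xy = np.column_stack([xs.ravel(), ys.ravel()])

logk_field = interpolate_pilot_points(pilot_xy, pilot_logk, grid_xy)
print(logk_field.reshape(50, 50).shape)          # (50, 50) field handed to the flow model
```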

Relevance: 60.00%

Abstract:

A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion and better enforcement of regularization constraints than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
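In its simplest linear form, the constrained-minimization formulation the abstract refers to reduces to penalized least squares. The sketch below shows that simplest form with a single fixed regularization weight and a preferred-value penalty; in the scheme described above the relative weights are themselves estimated during the inversion, and preferred-difference relationships between parameters would replace the plain identity penalty used here.

```python
import numpy as np

def tikhonov_solve(J, d, reg_weight, x_pref):
    """Solve min ||J x - d||^2 + mu^2 ||x - x_pref||^2: fit the data while
    penalising departure from preferred parameter values x_pref. In PEST-style
    regularized inversion mu is adjusted so the misfit meets a target level."""
    n = J.shape[1]
    A = J.T @ J + reg_weight**2 * np.eye(n)
    b = J.T @ d + reg_weight**2 * x_pref
    return np.linalg.solve(A, b)

rng = np.random.default_rng(2)
J = rng.normal(size=(30, 80))            # many more parameters than observations
x_true = rng.normal(size=80)
d = J @ x_true + 0.05 * rng.normal(size=30)

x_hat = tikhonov_solve(J, d, reg_weight=1.0, x_pref=np.zeros(80))
print(np.linalg.norm(J @ x_hat - d), np.linalg.norm(x_hat))   # data misfit vs. solution size
```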

Relevance: 60.00%

Abstract:

Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
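The statement that each estimated value is a weighted average of the true property field has a compact linear-algebra expression: for a linearized, regularized inversion the estimate relates to the truth through a resolution matrix. The sketch below illustrates this for a generic Tikhonov-regularized problem with made-up dimensions; it is not the paper's specific formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
J  = rng.normal(size=(25, 100))        # sensitivities: few observations, many parameters
mu = 1.0                               # regularization weight

# For the regularized normal equations, the estimate is a linear map of the data and,
# ignoring noise, a linear map of the true parameters:
#   x_hat = G d,  d = J x_true  =>  x_hat = R x_true,  with R = G J  (resolution matrix)
G = np.linalg.solve(J.T @ J + mu**2 * np.eye(100), J.T)
R = G @ J

# Each row of R gives the averaging weights that produce one estimated parameter
# from *all* true parameters; rows far from the identity mean lost detail.
row = R[50]
print("weight on itself:", row[50], " weight spread over the others:", row.sum() - row[50])
```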

Relevance: 60.00%

Abstract:

The estimation of a concentration-dependent diffusion coefficient in a drying process is known as an inverse coefficient problem. The solution is sought wherein the space-average concentration is known as a function of time (mass loss monitoring). The problem is stated as the minimization of a functional, and gradient-based algorithms are used to solve it. Many numerical and experimental examples that demonstrate the effectiveness of the proposed approach are presented. Thin slab drying was carried out in an isothermal drying chamber built in our laboratory. The diffusion coefficients of fructose obtained with the present method are compared with existing literature results.
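A minimal sketch of the inverse coefficient problem as stated: choose a parameterization of D(c), simulate the drying of a thin slab, and minimize the misfit between the simulated and measured space-averaged concentration history. The explicit finite-difference model, the exponential form of D(c), all numbers, and the derivative-free Nelder-Mead optimizer below are illustrative stand-ins; the paper uses gradient-based algorithms.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_avg_concentration(params, nt=7500, dt=0.08, nx=41, L=1.0e-3):
    """Explicit FD solution of dc/dt = d/dx( D(c) dc/dx ) on a slab of half-
    thickness L (m), with a dry surface (c = 0) and a symmetry plane.
    D(c) = d0 * exp(d1 * c) is an illustrative parameterization only."""
    d0, d1 = 10.0 ** params[0], params[1]
    dx = L / (nx - 1)
    c = np.ones(nx)                        # normalized initial concentration
    avg = np.empty(nt)
    for k in range(nt):
        D = d0 * np.exp(d1 * c)
        flux = 0.5 * (D[1:] + D[:-1]) * (c[1:] - c[:-1]) / dx
        c[1:-1] += dt * (flux[1:] - flux[:-1]) / dx
        c[0]  = c[1]                       # zero-flux symmetry plane
        c[-1] = 0.0                        # drying surface
        avg[k] = c.mean()                  # what mass-loss monitoring observes
    return avg

# Synthetic "measurements" from known coefficients, then try to recover them.
true_params = np.array([-10.0, 1.5])       # log10(d0 [m^2/s]), d1
data = simulate_avg_concentration(true_params)

def objective(p):
    return np.sum((simulate_avg_concentration(p) - data) ** 2)

fit = minimize(objective, x0=np.array([-10.3, 1.0]), method="Nelder-Mead",
               bounds=[(-10.7, -9.7), (0.5, 2.5)])
print(fit.x)                               # should move toward (-10.0, 1.5)
```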

Relevance: 60.00%

Abstract:

This report gives an overview of the work being carried out, as part of the NEUROSAT project, in the Neural Computing Research Group at Aston University. The aim is to give a general review of the work and methods, with reference to other documents which provide the detail. The document is ongoing and will be updated as parts of the project are completed. Thus some of the references are not yet present. In the broadest sense, the Aston part of NEUROSAT is about using neural networks (and other advanced statistical techniques) to extract wind vectors from satellite measurements of ocean surface radar backscatter. The work involves several phases, which are outlined below. A brief summary of the theory and application of satellite scatterometers forms the first section. The next section deals with the forward modelling of the scatterometer data, after which the inverse problem is addressed. Dealiasing (or disambiguation) is discussed, together with proposed solutions. Finally a holistic framework is presented in which the problem can be solved.

Relevance: 60.00%

Abstract:

The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of radar backscatter generated by small ripples on the ocean surface induced by instantaneous local winds. Operational methods that extract wind vectors from satellite scatterometer data are based on the local inversion of a forward model, mapping scatterometer observations to wind vectors, by the minimisation of a cost function in the scatterometer measurement space.

This report uses mixture density networks, a principled method for modelling conditional probability density functions, to model the joint probability distribution of the wind vectors given the satellite scatterometer measurements in a single cell (the 'inverse' problem). The complexity of the mapping and the structure of the conditional probability density function are investigated by varying the number of units in the hidden layer of the multi-layer perceptron and the number of kernels in the Gaussian mixture model of the mixture density network respectively. The optimal model for networks trained per trace has twenty hidden units and four kernels. Further investigation shows that models trained with incidence angle as an input have results comparable to those models trained by trace. A hybrid mixture density network that incorporates geophysical knowledge of the problem confirms other results that the conditional probability distribution is dominantly bimodal.

The wind retrieval results improve on previous work at Aston, but do not match other neural network techniques that use spatial information in the inputs, which is to be expected given the ambiguity of the inverse problem. Current work uses the local inverse model for autonomous ambiguity removal in a principled Bayesian framework. Future directions in which these models may be improved are given.
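The core of the mixture density network approach described above is a network whose outputs are not wind vectors themselves but the parameters (mixing coefficients, widths and centres) of a Gaussian mixture over wind vectors, giving the full conditional density p(wind | backscatter). The sketch below shows only this forward evaluation, with randomly initialized weights and made-up input dimensions; training adjusts the weights to maximize this density over observed input-target pairs.

```python
import numpy as np

rng = np.random.default_rng(4)

n_in, n_hidden, n_kernels, n_out = 4, 20, 4, 2   # backscatter inputs -> 2-D wind vector
# Randomly initialised weights stand in for a trained network.
W1 = rng.normal(scale=0.3, size=(n_hidden, n_in));  b1 = np.zeros(n_hidden)
n_mdn = n_kernels * (1 + 1 + n_out)               # mixing coeff, width and mean per kernel
W2 = rng.normal(scale=0.3, size=(n_mdn, n_hidden)); b2 = np.zeros(n_mdn)

def mdn_density(x, y):
    """p(y | x) for a spherical-Gaussian mixture density network."""
    h = np.tanh(W1 @ x + b1)
    z = W2 @ h + b2
    z_pi, z_sig, z_mu = np.split(z, [n_kernels, 2 * n_kernels])
    pi = np.exp(z_pi - z_pi.max()); pi /= pi.sum()       # softmax mixing coefficients
    sigma = np.exp(z_sig)                                # positive kernel widths
    mu = z_mu.reshape(n_kernels, n_out)                  # kernel centres (candidate wind vectors)
    norm = (2 * np.pi * sigma**2) ** (n_out / 2)
    kernels = np.exp(-np.sum((y - mu)**2, axis=1) / (2 * sigma**2)) / norm
    return float(pi @ kernels)

x = np.array([0.12, 0.08, 0.10, 0.45])   # illustrative normalised scatterometer inputs
print(mdn_density(x, y=np.array([5.0, -3.0])))
```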

Relevance: 60.00%

Abstract:

We consider the direct adaptive inverse control of nonlinear multivariable systems with different delays between every input-output pair. In direct adaptive inverse control, the inverse mapping is learned from examples of input-output pairs. This makes the obtained controller suboptimal, since the network may have to learn the response of the plant over a larger operational range than necessary. Moreover, in certain applications, the control problem can be redundant, implying that the inverse problem is ill-posed. In this paper we propose a new algorithm which allows estimating and exploiting uncertainty in nonlinear multivariable control systems. This approach allows us to model strongly non-Gaussian distributions of control signals as well as processes with hysteresis. The proposed algorithm circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider.
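A bare-bones sketch of the direct-inverse-control idea the abstract starts from: excite the plant, record input-output pairs, fit a model from desired output back to input, and use it as the controller. A toy static single-input plant and a polynomial ridge regression stand in for the neural network, and none of the uncertainty modelling proposed in the paper appears here.

```python
import numpy as np

rng = np.random.default_rng(5)

def plant(u):
    """Toy static nonlinear plant y = f(u); purely illustrative."""
    return np.tanh(1.5 * u) + 0.1 * u

# 1. Excite the plant and record input-output pairs.
u_train = rng.uniform(-2.0, 2.0, size=500)
y_train = plant(u_train) + 0.01 * rng.normal(size=500)

# 2. Fit the inverse mapping y -> u (polynomial ridge regression stands in
#    for the neural network inverse model).
Phi = np.vander(y_train, N=6, increasing=True)
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(6), Phi.T @ u_train)

def controller(y_desired):
    return np.vander(np.atleast_1d(y_desired), N=6, increasing=True) @ w

# 3. Use the learned inverse as an open-loop controller.
y_ref = 0.8
u_cmd = controller(y_ref)[0]
print("commanded input:", u_cmd, " achieved output:", plant(u_cmd))
```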

Relevance: 60.00%

Abstract:

High-level cognitive factors, including self-awareness, are believed to play an important role in human visual perception. The principal aim of this study was to determine whether oscillatory brain rhythms play a role in the neural processes involved in self-monitoring attentional status. To do so we measured cortical activity using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) while participants were asked to self-monitor their internal status, only initiating the presentation of a stimulus when they perceived their attentional focus to be maximal. We employed a hierarchical Bayesian method that uses fMRI results as soft-constrained spatial information to solve the MEG inverse problem, allowing us to estimate cortical currents in the order of millimeters and milliseconds. Our results show that, during self-monitoring of internal status, there was a sustained decrease in power within the 7-13 Hz (alpha) range in the rostral cingulate motor area (rCMA) on the human medial wall, beginning approximately 430 msec after the trial start (p < 0.05, FDR corrected). We also show that gamma-band power (41-47 Hz) within this area was positively correlated with task performance from 40-640 msec after the trial start (r = 0.71, p < 0.05). We conclude: (1) the rCMA is involved in processes governing self-monitoring of internal status; and (2) the qualitative differences between alpha and gamma activity are reflective of their different roles in self-monitoring internal states. We suggest that alpha suppression may reflect a strengthening of top-down interareal connections, while a positive correlation between gamma activity and task performance indicates that gamma may play an important role in guiding visuomotor behavior. © 2013 Yamagishi et al.

Relevance: 60.00%

Abstract:

Visual evoked magnetic responses were recorded to full-field and left and right half-field stimulation with three check sizes (70′, 34′ and 22′) in five normal subjects. Recordings were made sequentially on a 20-position grid (4 × 5) based on the inion, by means of a single-channel direct-current SQUID (Superconducting Quantum Interference Device) second-order gradiometer. The topographic maps were consistent for the same subjects recorded 2 months apart. The half-field responses produced the strongest signals in the contralateral hemisphere and were consistent with the cruciform model of the calcarine fissure. Right half fields produced upper-left-quadrant outgoing fields and lower-left-quadrant ingoing fields, while the left half field produced the opposite response. The topographic maps also varied with check size, with the larger checks producing the positive or negative field maxima more anteriorly than the small checks. In addition, with large checks the full-field responses could be explained as the summation of the two half fields, whereas full-field responses to smaller checks were more unpredictable and may be due to sources located at the occipital pole or lateral surface. Dipole sources were located, where appropriate, by means of inverse-problem solutions. Topographic data will be vital to the clinical use of the visual evoked field and, in addition, provide complementary information to visual evoked potentials, allowing detailed studies of the visual cortex. © 1992 Kluwer Academic Publishers.

Relevance: 60.00%

Abstract:

The objective of this work was to explore the performance of a recently introduced source extraction method, FSS (Functional Source Separation), in recovering induced oscillatory change responses from extra-cephalic magnetoencephalographic (MEG) signals. Unlike algorithms used to solve the inverse problem, FSS does not make any assumption about the underlying biophysical source model; instead, it makes use of task-related features (functional constraints) to estimate the source(s) of interest. FSS was compared with blind source separation (BSS) approaches such as Principal and Independent Component Analysis (PCA and ICA), which are not subject to any explicit forward solution or functional constraint, but require source uncorrelatedness (PCA) or independence (ICA). A visual MEG experiment with signals recorded from six subjects viewing a set of static horizontal black/white square-wave grating patterns at different spatial frequencies was analyzed. The beamforming technique Synthetic Aperture Magnetometry (SAM) was applied to localize task-related sources; the obtained spatial filters were used to automatically select BSS and FSS components in the spatial area of interest. Source spectral properties were investigated by using Morlet-wavelet time-frequency representations, and significant task-induced changes were evaluated by means of a resampling technique; the resulting spectral behaviours in the gamma frequency band of interest (20-70 Hz), as well as the spatial frequency-dependent gamma reactivity, were quantified and compared among methods. Among the tested approaches, only FSS was able to estimate the expected sustained gamma activity enhancement in primary visual cortex throughout the whole duration of the stimulus presentation for all subjects, and to obtain sources comparable to invasively recorded data.
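For readers unfamiliar with the time-frequency representation used above to quantify induced gamma changes, a minimal complex-Morlet-wavelet power computation is sketched below on a synthetic trace; the frequencies, sampling rate and signal are invented for illustration, and a production analysis would use a tuned implementation from an MEG toolbox rather than this bare convolution.

```python
import numpy as np

def morlet_power(signal, sfreq, freqs, n_cycles=7):
    """Time-frequency power via convolution with complex Morlet wavelets."""
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)                 # temporal width of the wavelet
        tw = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / sfreq)
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))     # unit-energy normalisation
        analytic = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2
    return power

# Synthetic "source" trace: ongoing alpha plus a gamma burst after an assumed stimulus onset.
sfreq, dur = 600.0, 2.0
t = np.arange(0, dur, 1 / sfreq)
sig = np.sin(2 * np.pi * 10 * t) + (t > 1.0) * 0.5 * np.sin(2 * np.pi * 45 * t)
sig += 0.2 * np.random.default_rng(6).normal(size=t.size)

tf = morlet_power(sig, sfreq, freqs=np.arange(20, 71, 5))
print(tf.shape)   # (frequencies, time samples)
```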

Relevance: 60.00%

Abstract:

This work sets out to evaluate the potential benefits and pitfalls of using a priori information to help solve the Magnetoencephalographic (MEG) inverse problem. In chapter one the forward problem in MEG is introduced, together with a scheme that demonstrates how a priori information can be incorporated into the inverse problem. Chapter two contains a literature review of techniques currently used to solve the inverse problem. Emphasis is put on the kind of a priori information that is used by each of these techniques and the ease with which additional constraints can be applied. The formalism of the FOCUSS algorithm is shown to allow for the incorporation of a priori information in an insightful and straightforward manner. In chapter three it is described how anatomical constraints, in the form of a realistically shaped source space, can be extracted from a subject's Magnetic Resonance Image (MRI). The use of such constraints relies on accurate co-registration of the MEG and MRI co-ordinate systems. Variations of the two main co-registration approaches, based on fiducial markers or on surface matching, are described, and the accuracy and robustness of a surface matching algorithm is evaluated. Figures of merit introduced in chapter four are shown to give insight into the limitations of a typical measurement set-up and the potential value of a priori information. It is shown in chapter five that constrained dipole fitting and FOCUSS outperform unconstrained dipole fitting when data with low SNR are used. However, the effect of errors in the constraints can reduce this advantage. Finally, it is demonstrated in chapter six that the results of different localisation techniques give corroborative evidence about the location and activation sequence of the human visual cortical areas underlying the first 125 ms of the visual magnetic evoked response recorded with a whole-head neuromagnetometer.
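Of the techniques reviewed above, FOCUSS has a particularly compact core: a recursively re-weighted minimum-norm solution that progressively focuses source power onto a few locations. The sketch below shows that bare recursion for a random lead field with two noiseless point sources; regularization for noisy data, depth weighting, and the anatomical constraints discussed in the thesis are all omitted, and the dimensions are invented.

```python
import numpy as np

def focuss(L, b, n_iter=20, eps=1e-12):
    """Basic FOCUSS recursion for the underdetermined system b = L x:
    x_{k+1} = W_k W_k^T L^T (L W_k W_k^T L^T)^{-1} b,  with W_k = diag(x_k)."""
    m, n = L.shape
    x = np.ones(n)                       # uninformative start; a priori weights could enter here
    for _ in range(n_iter):
        W2 = np.diag(x ** 2)             # W_k W_k^T
        A = L @ W2 @ L.T
        x = W2 @ L.T @ np.linalg.solve(A + eps * np.eye(m), b)
    return x

rng = np.random.default_rng(7)
L = rng.normal(size=(20, 200))           # lead field: 20 sensors, 200 candidate source locations
x_true = np.zeros(200); x_true[[40, 150]] = [1.0, -0.7]
b = L @ x_true

x_hat = focuss(L, b)
print(np.argsort(np.abs(x_hat))[-2:])    # typically the two planted sources, 40 and 150
```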