982 results for Inverse approach


Relevance:

30.00%

Publisher:

Abstract:

Estimation of design quantiles of hydrometeorological variables at critical locations in river basins is necessary for hydrological applications. To arrive at reliable estimates for locations (sites) where no or limited records are available, various regional frequency analysis (RFA) procedures have been developed over the past five decades. The most widely used procedure is based on the index-flood approach and L-moments. It assumes that the values of the scale and shape parameters of the frequency distribution are identical across all sites in a homogeneous region. In real-world scenarios, this assumption may not be valid even if a region is statistically homogeneous. To address this issue, a novel mathematical approach is proposed. It involves (i) identification of an appropriate frequency distribution to fit the random variable being analyzed for the homogeneous region, (ii) use of a proposed transformation mechanism to map observations of the variable from the original space to a dimensionless space where the form of the distribution does not change and the variation in its parameter values across sites is minimal, (iii) construction of a growth curve in the dimensionless space, and (iv) mapping the curve back to the original space for the target site by applying the inverse transformation to arrive at the required quantile(s) for the site. The effectiveness of the proposed approach (PA) in predicting quantiles for ungauged sites is demonstrated through Monte Carlo simulation experiments considering five frequency distributions widely used in RFA, and by a case study on watersheds in the conterminous United States. Results indicate that the PA outperforms methods based on the index-flood approach.
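
The index-flood baseline that the PA is compared against can be sketched as follows: records at gauged sites are scaled by their site means, pooled, and fitted with a single dimensionless growth curve; a quantile at an ungauged site is the site's index flood times the growth-curve value. The Gumbel fit, the method-of-moments estimates and all numbers below are illustrative assumptions, not the paper's data or its proposed approach.

```python
import math
from statistics import mean, stdev

def gumbel_growth_curve(site_records):
    # scale each site's record by its mean (the "index flood"), pool the
    # dimensionless values, and fit a Gumbel distribution by moments:
    #   scale alpha = sqrt(6)*s/pi, location xi = m - 0.5772*alpha
    pooled = [x / mean(rec) for rec in site_records for x in rec]
    m, s = mean(pooled), stdev(pooled)
    alpha = math.sqrt(6.0) * s / math.pi
    xi = m - 0.5772 * alpha
    return lambda F: xi - alpha * math.log(-math.log(F))  # Gumbel quantile

def quantile_at_ungauged(index_flood, growth, F):
    # regional scaling assumption: Q_site(F) = mu_site * g(F)
    return index_flood * growth(F)

# toy annual-maximum records at three gauged sites in one homogeneous region
sites = [[120, 95, 140, 110, 160],
         [80, 60, 90, 75, 100],
         [200, 180, 260, 210, 240]]
g = gumbel_growth_curve(sites)
q100 = quantile_at_ungauged(150.0, g, 0.99)  # 100-year quantile, index = 150
```

The PA replaces the mean-scaling step with a distribution-specific transformation, but the growth-curve-then-rescale structure is the same.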

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we study the inverse mode shape problem for an Euler-Bernoulli beam using an analytical approach. The mass and stiffness variations are determined for a beam, having various boundary conditions, that has a prescribed polynomial second mode shape with an internal node. It is found that physically feasible rectangular cross-section beams satisfying the inverse problem exist for a variety of boundary conditions. The effect of the location of the internal node on the mass and stiffness variations and on the deflection of the beam is studied. The derived functions are used to verify a p-version finite element code for the cantilever boundary condition. The paper also presents bounds on the location of the internal node for a valid mass and stiffness variation, for any given boundary condition. The derived property variations, corresponding to a given mode shape and boundary condition, also provide a simple closed-form solution for a class of non-uniform Euler-Bernoulli beams. These closed-form solutions can also be used to check optimization algorithms proposed for modal tailoring.
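
The setting behind such inverse problems can be made concrete; the following is the standard Euler-Bernoulli setup, not the paper's specific derivation. For a prescribed mode shape \(\varphi(x)\) with natural frequency \(\omega\), the free-vibration equation

```latex
\frac{d^{2}}{dx^{2}}\!\left( EI(x)\,\frac{d^{2}\varphi}{dx^{2}} \right)
  \;=\; \omega^{2}\, m(x)\, \varphi(x)
```

is read as an equation for the unknown stiffness \(EI(x)\) and mass \(m(x)\) rather than for \(\varphi\). For a rectangular cross-section of constant width \(b\) and varying depth \(h(x)\), \(EI(x) = E b\,h(x)^{3}/12\) and \(m(x) = \rho b\,h(x)\), so the two unknowns are coupled through \(h(x)\) and the inverse problem reduces to a differential equation for the depth variation.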

Relevance:

30.00%

Publisher:

Abstract:

In this paper, an explicit guidance law for the powered descent phase of a soft lunar landing is presented. The descent trajectory, expressed in polynomial form, is fixed by the boundary conditions imposed by the precise soft-landing mission. Adopting an inverse-model-based approach, the guidance command is computed from the known spacecraft trajectory. The guidance formulation ensures the vertical orientation of the spacecraft during touchdown. A closed-form relation for the final flight time is also proposed. The final time is expressed as a function of the initial position and velocity of the spacecraft (at the start of descent) and also depends on the desired landing site. To ensure a fuel-minimal descent, the proposed explicit method is extended to an optimal guidance formulation. The effectiveness of the proposed guidance laws is demonstrated with simulation results.
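
The inverse-model idea can be sketched with a cubic altitude profile fixed by the soft-landing boundary conditions; differentiating the known trajectory twice gives the acceleration the thrust must supply. The cubic form, the one-dimensional (vertical-only) simplification and the numbers below are illustrative assumptions, not the paper's exact polynomial or guidance law.

```python
LUNAR_G = 1.62  # lunar gravitational acceleration, m/s^2

def descent_trajectory(h0, v0, tf):
    # cubic altitude profile h(t) meeting the soft-landing boundary
    # conditions h(0) = h0, h'(0) = v0, h(tf) = 0, h'(tf) = 0
    A = h0 + v0 * tf
    c2 = (v0 * tf - 3.0 * A) / tf ** 2
    c3 = (2.0 * A - v0 * tf) / tf ** 3
    h = lambda t: h0 + v0 * t + c2 * t ** 2 + c3 * t ** 3
    v = lambda t: v0 + 2.0 * c2 * t + 3.0 * c3 * t ** 2
    hdd = lambda t: 2.0 * c2 + 6.0 * c3 * t
    return h, v, hdd

def guidance_command(hdd, t):
    # inverse-model guidance: the commanded thrust acceleration follows from
    # the known trajectory, cancelling gravity and producing hddot(t)
    return hdd(t) + LUNAR_G

# illustrative initial state: 2 km altitude, descending at 50 m/s, 60 s to go
h, v, hdd = descent_trajectory(2000.0, -50.0, 60.0)
```

Because the trajectory is known in closed form, the command needs no on-line optimisation, which is what makes the guidance law explicit.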

Relevance:

30.00%

Publisher:

Abstract:

By the semi-inverse method, a variational principle is obtained for the Lane-Emden equation, which offers considerable numerical convenience when applying finite element methods or the Ritz method.
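
For concreteness, a functional of the kind the semi-inverse method yields for the Lane-Emden equation \(u'' + (2/x)u' + u^{n} = 0\) (a well-known form, given here as an illustration rather than as the paper's exact expression) is

```latex
J(u) \;=\; \int_{0}^{\infty}
  \left( -\tfrac{1}{2}\, x^{2} \left( u' \right)^{2}
         \;+\; \frac{x^{2}\, u^{\,n+1}}{n+1} \right) dx ,
```

whose Euler-Lagrange equation, \(x^{2}u^{n} + (x^{2}u')' = 0\), reproduces the Lane-Emden equation after division by \(x^{2}\). A Ritz trial function with free parameters can then be substituted into \(J\) and the parameters chosen to make \(J\) stationary.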

Relevance:

30.00%

Publisher:

Abstract:

By the semi-inverse method, a variational principle is obtained for the Thomas-Fermi equation; the Ritz method is then applied to obtain an approximate analytical solution, which is a much simpler and more efficient procedure.
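
A minimal sketch of the Ritz step, assuming the commonly quoted functional \(J[u]=\int_0^\infty (u'^2/2 + \tfrac{2}{5}u^{5/2}/\sqrt{x})\,dx\) for the Thomas-Fermi equation \(u''=u^{3/2}/\sqrt{x}\), and a single-exponential trial family that is a textbook illustration rather than the paper's actual trial function:

```python
import math

def J(a):
    # Ritz functional for the Thomas-Fermi equation u'' = u^(3/2)/sqrt(x),
    #   J[u] = int( u'^2 / 2 + (2/5) u^(5/2) / sqrt(x) ) dx,
    # evaluated in closed form for the trial function u(x) = exp(-a*x):
    #   int u'^2/2 dx = a/4,   int (2/5) u^(5/2)/sqrt(x) dx = (2/5) sqrt(2*pi/(5a))
    return a / 4.0 + 0.4 * math.sqrt(2.0 * math.pi / (5.0 * a))

# one-parameter Ritz step: make J stationary by a simple grid scan over a > 0;
# the stationarity condition a^(3/2) = (4/5) sqrt(2*pi/5) gives a ~ 0.930
a_opt = min((0.001 * k for k in range(1, 5000)), key=J)
```

A richer trial family would improve accuracy; the point here is only that the variational principle turns the boundary-value problem into an ordinary minimisation.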

Relevance:

30.00%

Publisher:

Abstract:

The refractive indices of particles and of the dispersion medium are important parameters in many colloidal experiments using optical techniques, such as turbidity and light-scattering measurements. These data are in general wavelength-dependent and may not be available at the wavelengths an experiment requires. In this study we present a novel approach to inversely determine the refractive indices of the particles and dispersion medium by examining the consistency of measured extinction cross sections of particles with their theoretical values computed for a series of trial values of the refractive indices. A colloidal suspension of polystyrene particles dispersed in water is used as an example to demonstrate how this approach works, and the data obtained by this method are compared with those reported in the literature, showing good agreement between the two. Furthermore, the factors that affect the accuracy of the measurements are discussed. We also present data on the refractive indices of polystyrene over a range of wavelengths below 400 nm that have not been reported in the available literature. (C) 2008 Elsevier Inc. All rights reserved.
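
The trial-and-consistency loop can be sketched as follows. Here the Rayleigh small-particle scattering cross-section stands in for the full Mie extinction calculation the paper would use, and all particle and medium values are hypothetical placeholders; synthetic "measured" data are generated from an assumed true index and then recovered by scanning trial indices.

```python
import math

def rayleigh_cext(d, lam, n_p, n_m):
    # Rayleigh-limit scattering cross-section of a small sphere of diameter d
    # at vacuum wavelength lam, particle index n_p, medium index n_m
    m2 = (n_p / n_m) ** 2
    lam_med = lam / n_m  # wavelength in the dispersion medium
    return (2.0 * math.pi ** 5 / 3.0) * d ** 6 / lam_med ** 4 \
        * ((m2 - 1.0) / (m2 + 2.0)) ** 2

# hypothetical setup: 100 nm polystyrene-like spheres in water (n = 1.33)
wavelengths = [450e-9, 500e-9, 550e-9]  # m
d, n_medium = 100e-9, 1.33
measured = [rayleigh_cext(d, lam, 1.59, n_medium) for lam in wavelengths]

def misfit(n_trial):
    # consistency of trial-index predictions with the "measured" cross sections
    return sum((rayleigh_cext(d, lam, n_trial, n_medium) - c) ** 2
               for lam, c in zip(wavelengths, measured))

# inverse step: scan trial particle indices and keep the most consistent one
n_best = min((1.40 + 0.001 * k for k in range(400)), key=misfit)
```

With real (noisy) data the minimum is shallow rather than exact, which is why the paper discusses the factors limiting measurement accuracy.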

Relevance:

30.00%

Publisher:

Abstract:

Nanomagnetic structures have the potential to surpass silicon's scaling limitations, both as elements in hybrid CMOS logic and as novel computational elements. Magnetic force microscopy (MFM) offers a convenient characterization technique for use in the design of such nanomagnetic structures. MFM measures the magnetic field, however, and not the sample's magnetization. The uniqueness of the relationship between an external magnetic field and a magnetization distribution is therefore a relevant question. To study this problem we present a simple algorithm that searches for magnetization distributions consistent with both an external magnetic field and the qualitative features of solutions to the micromagnetic equations. The algorithm is not computationally intensive and is found to be effective for our test cases. On the basis of our results we propose a systematic approach for interpreting MFM measurements.

Relevance:

30.00%

Publisher:

Abstract:

A new method of measuring the mean size of solvent clusters in a swollen polymer membrane is presented in this paper. The method is based on a combination of inverse gas chromatography (IGC) and equilibrium swelling. The underlying mechanism is that the weight-fraction activity coefficient of a solvent in a swollen polymer is influenced by its cluster size. The mean cluster size of a solvent in a swollen polymer can be calculated as the quotient of the weight-fraction activity coefficient of the clustering system divided by that of the non-clustering system. In this experiment, the weight-fraction activity coefficient of the non-clustering system was measured with IGC. Methanol-polyimide and ethanol-polyimide systems were tested with the new method at three temperatures: 20, 40, and 60 °C. The mean cluster size of methanol in polyimide was five, four, and three at each temperature, respectively. Ethanol did not form clusters (its mean cluster size was one). In contrast to the inherently narrow temperature range of DSC, XRD, and FTIR methods, the temperature range accessible to IGC and equilibrium swelling is broad. Compared with DSC, XRD, and FTIR, this new method can therefore detect solvent clusters in solvent-polymer systems at higher temperatures.
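
The central calculation is a single quotient. A minimal sketch, with illustrative numbers rather than the paper's measured activity coefficients:

```python
def mean_cluster_size(omega_clustering, omega_nonclustering):
    # mean solvent cluster size = weight-fraction activity coefficient of the
    # clustering system divided by that of the non-clustering system,
    # the latter measured by inverse gas chromatography (IGC)
    return omega_clustering / omega_nonclustering

# hypothetical values: if the clustering-system coefficient is 25.0 and the
# IGC-measured non-clustering coefficient is 5.0, the mean cluster size is 5
size = mean_cluster_size(25.0, 5.0)
```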

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we study the general problem of reconstructing a function defined on a finite lattice from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over existing schemes, both in the quality of the results (particularly for low signal-to-noise ratios) and in computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, with no free parameters, for the restoration of piecewise uniform images.
(5) We analyze the implementation of the algorithms that we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
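
Fast deterministic approximation of the optimal Bayesian estimators, as in contribution (3), is in the spirit of iterated conditional modes (ICM). The sketch below restores a 1-D piecewise-constant signal under a quadratic data term and a Potts smoothness prior; the signal, the binary label set and the weight beta are illustrative choices, not the thesis's actual models.

```python
def icm_denoise(y, beta=0.5, labels=(0, 1), iters=10):
    # Iterated Conditional Modes: deterministic coordinate-wise minimisation of
    #   E(x) = sum_i (y_i - x_i)^2 + beta * sum_i [x_i != x_{i+1}]
    # i.e. a quadratic data term plus a Potts smoothness prior on a 1-D lattice
    x = [min(labels, key=lambda l: (yi - l) ** 2) for yi in y]  # data-only init
    for _ in range(iters):
        changed = False
        for i in range(len(x)):
            def local_energy(l):
                e = (y[i] - l) ** 2
                if i > 0:
                    e += beta * (l != x[i - 1])
                if i < len(x) - 1:
                    e += beta * (l != x[i + 1])
                return e
            best = min(labels, key=local_energy)
            if best != x[i]:
                x[i], changed = best, True
        if not changed:
            break
    return x

# a noisy piecewise-constant signal: one outlier (0.6) inside the low segment
noisy = [0.1, 0.6, 0.2, 0.9, 1.1, 0.95]
restored = icm_denoise(noisy)
```

ICM converges to a local minimum of the posterior energy; the Monte Carlo estimators discussed in the thesis trade this speed for better approximation of the optimal estimate.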

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model, using an extended non-negative sparse coding (NNSC) algorithm that we have previously proposed. The algorithm converges to feature basis vectors that are localized and oriented in the spatial and frequency domains. We demonstrate that the NIG density provides a very good fit to the non-negative sparse data. In the denoising process, the noise is reduced successfully by applying a NIG-based maximum a posteriori (MAP) estimator to an image corrupted by additive Gaussian noise. This shrinkage technique, referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated by the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is indeed efficient and effective in denoising. We also compare the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter; the simulation results show that our method outperforms these three denoising approaches.

Relevance:

30.00%

Publisher:

Abstract:

We propose a new approach for the inversion of anisotropic P-wave data based on Monte Carlo methods combined with a multigrid approach. Simulated annealing facilitates objective minimization of the functional characterizing the misfit between observed and predicted traveltimes, as controlled by the Thomsen anisotropy parameters (epsilon, delta). Cycling between finer and coarser grids enhances the computational efficiency of the inversion process, accelerating the convergence of the solution while acting as a regularization technique for the inverse problem. Multigrid perturbation samples the probability density function without requiring the user to adjust tuning parameters, which increases the probability that the preferred global, rather than a poor local, minimum is attained. Undertaking multigrid refinement and Monte Carlo search in parallel produces more robust convergence than the initially more intuitive approach of completing them sequentially. We demonstrate the usefulness of the new multigrid Monte Carlo (MGMC) scheme by applying it to (a) synthetic, noise-contaminated data reflecting an isotropic subsurface of constant slowness, horizontally layered geologic media and discrete subsurface anomalies; and (b) a crosshole seismic data set acquired by previous authors at the Reskajeage test site in Cornwall, UK. Inverted distributions of slowness (s) and the Thomsen anisotropy parameters (epsilon, delta) compare favourably with those obtained previously using a popular matrix-based method. Reconstruction of the Thomsen epsilon parameter is particularly robust compared to that of slowness and the Thomsen delta parameter, even in the face of complex subsurface anomalies. The Thomsen epsilon and delta parameters have enhanced sensitivities to bulk-fabric and fracture-based anisotropies in the TI medium at Reskajeage. Because reconstruction of slowness (s) is intimately linked to that of epsilon and delta in the MGMC scheme, inverted images of phase velocity reflect the integrated effects of these two modes of anisotropy. The new MGMC technique thus promises to facilitate rapid inversion of crosshole P-wave data for seismic slownesses and the Thomsen anisotropy parameters, with minimal user input in the inversion process.
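
The grid-cycling idea can be sketched in a toy one-dimensional form: anneal on a coarse slowness grid, prolong the model onto a finer grid, and continue annealing. The straight-ray forward model, the isotropic slowness-only parameterisation and all constants below are illustrative simplifications of the authors' anisotropic MGMC scheme, not their implementation.

```python
import math
import random

def predicted(slowness, seg_len=100.0):
    # toy straight-ray forward model: one traveltime per fine-grid cell
    return [s * seg_len for s in slowness]

def expand(model, n_fine):
    # multigrid prolongation: piecewise-constant copy of a coarse model
    reps = n_fine // len(model)
    return [s for s in model for _ in range(reps)]

def misfit(model, observed):
    pred = predicted(expand(model, len(observed)))
    return sum((p - o) ** 2 for p, o in zip(pred, observed))

def mgmc_invert(observed, steps_per_level=300, seed=0):
    # coarse-to-fine simulated annealing with Metropolis acceptance
    rng = random.Random(seed)
    model, temp = [0.5, 0.5], 1.0
    best_cost = misfit(model, observed)
    while True:
        cost = misfit(model, observed)
        for _ in range(steps_per_level):
            trial = model[:]
            trial[rng.randrange(len(trial))] += rng.gauss(0.0, 0.05)
            trial_cost = misfit(trial, observed)
            if trial_cost < cost or rng.random() < math.exp(-(trial_cost - cost) / temp):
                model, cost = trial, trial_cost
                best_cost = min(best_cost, cost)
            temp *= 0.995
        if len(model) == len(observed):
            return model, best_cost
        model = [s for s in model for _ in (0, 1)]  # refine the grid

# synthetic example: recover an 8-cell slowness profile from noise-free data
true_model = [0.2, 0.3, 0.4, 0.5, 0.45, 0.35, 0.25, 0.2]
observed = predicted(true_model)
inverted, final_cost = mgmc_invert(observed)
```

Early coarse levels fix the long-wavelength structure cheaply, which is what regularizes the search and speeds convergence on the fine grid.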

Relevance:

30.00%

Publisher:

Abstract:

Groundwater flow in hard-rock aquifers is strongly controlled by the characteristics and distribution of structural heterogeneity. A methodology for catchment-scale characterisation is presented, based on the integration of complementary, multi-scale hydrogeological, geophysical and geological approaches. This was applied to three contrasting catchments underlain by metamorphic rocks in the northern parts of Ireland (Republic of Ireland and Northern Ireland, UK). Cross-validated surface and borehole geophysical investigations confirm the discontinuous overburden, lithological compartmentalisation of the bedrock and important spatial variations of the weathered bedrock profiles at macro-scale. Fracture analysis suggests that the recent (Alpine) tectonic fabric exerts strong control on the internal aquifer structure at meso-scale, which is likely to impact on the anisotropy of aquifer properties. The combination of the interpretation of depth-specific hydraulic-test data with the structural information provided by geophysical tests allows characterisation of the hydrodynamic properties of the identified aquifer units. Regionally, the distribution of hydraulic conductivities can be described by inverse power laws specific to the aquifer litho-type. Observed groundwater flow directions reflect this multi-scale structure. The proposed integrated approach applies widely available investigative tools to identify key dominant structures controlling groundwater flow, characterising the aquifer type for each catchment and resolving the spatial distribution of relevant aquifer units and associated hydrodynamic parameters.

Relevance:

30.00%

Publisher:

Abstract:

Analysis of the acoustical functioning of musical instruments invariably involves the estimation of model parameters. The broad aim of this paper is to develop methods for estimating clarinet reed parameters that are representative of actual playing conditions. This presents various challenges because of the difficulties of measuring the directly relevant variables without interfering with the control of the instrument. An inverse modelling approach is therefore proposed, in which the equations governing the sound generation mechanism of the clarinet are employed in an optimisation procedure to determine the reed parameters from the mouthpiece pressure and volume flow signals. The underlying physical model captures most of the reed dynamics and is simple enough to be used in an inversion process. The optimisation procedure is first tested by applying it to numerically synthesised signals, and then applied to mouthpiece signals acquired during notes blown by a human player. The proposed inverse modelling approach raises the possibility of revealing information about the way in which the embouchure-related reed parameters are controlled by the player, and also facilitates physics-based re-synthesis of clarinet sounds.
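
The synthetic-signal test can be sketched with the widely used quasi-static reed model (Bernoulli flow through a channel whose opening closes linearly with blowing pressure) rather than the paper's full dynamic model. All parameter values are hypothetical; "measured" samples are generated from assumed true parameters and then recovered by least-squares search.

```python
import math

RHO = 1.2            # air density, kg/m^3
REED_WIDTH = 1.2e-2  # effective reed-channel width, m (illustrative)

def flow(dp, h0, p_m):
    # quasi-static model: volume flow through a reed channel of rest opening
    # h0 that closes linearly and shuts at pressure difference p_m
    opening = max(0.0, h0 * (1.0 - dp / p_m))
    return REED_WIDTH * opening * math.sqrt(2.0 * dp / RHO)

# synthetic "measured" (pressure difference, flow) samples from assumed truth
TRUE_H0, TRUE_PM = 4e-4, 2500.0
samples = [(dp, flow(dp, TRUE_H0, TRUE_PM))
           for dp in (500.0, 1000.0, 1500.0, 2000.0)]

def residual(h0, p_m):
    # squared misfit between model prediction and the "measured" samples
    return sum((flow(dp, h0, p_m) - u) ** 2 for dp, u in samples)

# inverse step: brute-force least-squares search over the two reed parameters
h0_best, pm_best = min(
    ((2e-4 + 1e-5 * i, 1500.0 + 50.0 * j) for i in range(41) for j in range(41)),
    key=lambda p: residual(*p),
)
```

With player-acquired signals the residual surface is noisier, which is why the paper validates the optimisation on synthesised signals first.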

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Individuals carrying pathogenic mutations in the BRCA1 and BRCA2 genes have a high lifetime risk of breast cancer. BRCA1 and BRCA2 are involved in the repair of DNA double-strand breaks, DNA alterations that can be caused by exposure to reactive oxygen species, a major source of which is the mitochondrion. Mitochondrial genome variations affect electron transport chain efficiency and reactive oxygen species production. Individuals with different mitochondrial haplogroups differ in their metabolism and sensitivity to oxidative stress. Variability in mitochondrial genetic background can therefore alter reactive oxygen species production and, in turn, cancer risk. In the present study, we tested the hypothesis that mitochondrial haplogroups modify breast cancer risk in BRCA1/2 mutation carriers.

Methods: We genotyped 22,214 (11,421 affected, 10,793 unaffected) mutation carriers belonging to the Consortium of Investigators of Modifiers of BRCA1/2 for 129 mitochondrial polymorphisms using the iCOGS array. Haplogroup inference and association detection were performed using a phylogenetic approach. ALTree was applied to explore the reference mitochondrial evolutionary tree and detect subclades enriched in affected or unaffected individuals.

Results: We discovered that subclade T1a1 was depleted in affected BRCA2 mutation carriers compared with the rest of clade T (hazard ratio (HR) = 0.55; 95% confidence interval (CI), 0.34 to 0.88; P = 0.01). Compared with the most frequent haplogroups in the general population (that is, the H and T clades), the T1a1 haplogroup has an HR of 0.62 (95% CI, 0.40 to 0.95; P = 0.03). We also identified three potential susceptibility loci, including G13708A/rs28359178, which has demonstrated an inverse association with familial breast cancer risk.

Conclusions: This study illustrates how original approaches such as the phylogeny-based method we used can empower classical molecular epidemiological studies aimed at identifying association or risk modification effects.

Relevance:

30.00%

Publisher:

Abstract:

Attempts to record, understand and respond to variations in child welfare and protection reporting, service patterns and outcomes are international, numerous and longstanding. Reframing such variations as an issue of inequity between children and between families opens the way to a new approach to explaining the profound difference in intervention rates between and within countries and administrative districts. Recent accounts of variation have frequently been based on the idea that there is a binary division between bias and risk (or need). Here we propose seeing supply (bias) and demand (risk) factors as two aspects of a single system, both framed, in part, by social structures. A recent finding from a study of intervention rates in England, the 'inverse intervention law', is used to illustrate the complex ways in which a range of factors interact to produce intervention rates. In turn, this analysis raises profound moral, policy, practice and research questions about current child welfare and child protection services.