Abstract:
We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employs the inverse discrete Fourier transform, which is limited in resolution by the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank to separate the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype filter, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher than with the standard Fourier reconstruction. © 2012 Optical Society of America
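As a minimal illustration of the core idea (single band, no filter-bank front end; function names and parameters are ours):

```python
import numpy as np

def zc_interval_frequency(signal, fs):
    """Estimate the dominant frequency of a (near-)sinusoidal signal from
    the statistics of its zero-crossing (ZC) intervals. For a sinusoid of
    frequency f, successive ZC intervals are consistent and equal 1/(2f)."""
    s = np.signbit(signal)
    zc = np.where(s[1:] != s[:-1])[0]          # sample indices of sign changes
    intervals = np.diff(zc) / fs               # ZC intervals, in seconds
    return 1.0 / (2.0 * np.mean(intervals))    # mean half-period -> frequency

# Example: a 440 Hz tone sampled at 10 kHz
fs = 10_000.0
t = np.arange(0, 0.1, 1.0 / fs)
print(zc_interval_frequency(np.sin(2 * np.pi * 440.0 * t), fs))  # ~440
```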
Abstract:
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). DOI: 10.1117/1.JBO.17.10.106015
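For context, a sketch of Tikhonov inversion with a simple residual-based parameter choice; this is a generic discrepancy-principle stand-in, not the paper's MRM criterion, and all names are ours:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def choose_lambda(A, b, lambdas, noise_norm):
    """Pick the smallest lambda whose residual norm reaches the estimated
    noise level (discrepancy principle); returns (lambda, solution)."""
    for lam in sorted(lambdas):
        x = tikhonov_solve(A, b, lam)
        if np.linalg.norm(A @ x - b) >= noise_norm:
            return lam, x
    return lam, x  # fall back to the largest lambda tried
```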
Abstract:
Traditional image reconstruction methods in rapid dynamic diffuse optical tomography employ ℓ2-norm-based regularization, which is known to remove the high-frequency components in the reconstructed images and make them appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed, with nonlinear iterative techniques known to perform better than linear (noniterative) ones, with the caveat that nonlinear techniques are computationally complex. Assuming a linear dependence of the solution between successive frames results in a linear inverse problem. This new framework, combined with ℓ1-norm-based regularization, can provide better robustness to noise and better contrast recovery compared to conventional ℓ2-based techniques. Moreover, it is shown that the proposed ℓ1-based technique is computationally efficient compared to its ℓ2-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame; any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
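As a generic illustration of ℓ1-regularized linear inversion (the classical iterative soft-thresholding algorithm, not the paper's specific dynamic formulation):

```python
import numpy as np

def ista(A, b, lam, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                  # gradient of the data-fit term
        z = x - g / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```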
Abstract:
We consider an inverse elasticity problem in which forces and displacements are known on the boundary and the material property distribution inside the body is to be found. In other words, we need to estimate the distribution of constitutive properties from a finite number of boundary data sets. Uniqueness of the solution to this problem is proved in the literature only under certain assumptions, for a given complete Dirichlet-to-Neumann map. Another complication in the numerical solution of this problem is that the number of boundary data sets needed to establish uniqueness is not known, even in the restricted cases where uniqueness is proved theoretically. In this paper, we present a numerical technique that can assess the sufficiency of given boundary data sets by computing the rank of a sensitivity matrix that arises in the Gauss-Newton method used to solve the problem. Numerical experiments are presented to illustrate the method.
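A sketch of the rank test, assuming the sensitivity (Jacobian) matrix J from the Gauss-Newton iteration has been assembled (the tolerance is an illustrative choice):

```python
import numpy as np

def sensitivity_rank(J, rel_tol=1e-8):
    """Numerical rank of the sensitivity matrix via SVD: singular values
    below rel_tol times the largest one are treated as zero. A deficient
    rank indicates the available boundary data sets do not constrain all
    material parameters."""
    s = np.linalg.svd(J, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))
```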
Abstract:
The discrepancy between the X-ray and NMR structures of Mycobacterium tuberculosis peptidyl-tRNA hydrolase, in relation to the functionally important plasticity of the molecule, led to molecular dynamics simulations. The X-ray and NMR studies, along with the simulations, indicated an inverse correlation between crowding and molecular volume. A detailed comparison of proteins for which both X-ray and NMR structures are available appears to confirm this correlation. In consonance with the reported results of investigations in cellular compartments and aqueous solution, the comparison indicates that crowding results in compaction of the molecule as well as a change in its shape, which could specifically involve regions of the molecule important in function. Crowding could thus influence the action of proteins through modulation of the functionally important plasticity of the molecule. [Selvaraj M, Ahmad R, Varshney U and Vijayan M 2012 Crowding, molecular volume and plasticity: An assessment involving crystallography, NMR and simulations. J. Biosci. 37 953-963; DOI 10.1007/s12038-012-9276-5]
Abstract:
We propose an iterative data reconstruction technique specifically designed for multi-dimensional multi-color fluorescence imaging. A Markov random field is employed (for modeling the multi-color image field) in conjunction with the classical maximum likelihood method. The ill-posed nature of the inverse problem associated with multi-color fluorescence imaging necessitates iterative data reconstruction. Reconstructions of three-dimensional (3D) two-color images (obtained from nanobeads and cultured cell samples) show a significant reduction in background noise (improved signal-to-noise ratio) with an impressive overall improvement in the spatial resolution (approximately 250 nm) of the imaging system. The proposed data reconstruction technique may find immediate application in 3D in vivo and in vitro multi-color fluorescence imaging of biological specimens. © 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4769058
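A heavily simplified sketch of likelihood-based reconstruction with an MRF prior, assuming a single channel, a known point-spread function and Gaussian noise (the paper's multi-color formulation is richer; names are ours):

```python
import numpy as np
from scipy.ndimage import convolve, laplace

def map_mrf_deconvolve(observed, psf, beta=0.1, step=0.5, n_iter=100):
    """Gradient-descent MAP deconvolution with a quadratic MRF smoothness
    prior: min_x 0.5*||psf * x - observed||^2 + (beta/2)*||grad x||^2."""
    x = observed.copy()
    psf_flip = psf[::-1, ::-1]                      # adjoint of the blur
    for _ in range(n_iter):
        resid = convolve(x, psf, mode="reflect") - observed
        grad = convolve(resid, psf_flip, mode="reflect") - beta * laplace(x)
        x = np.clip(x - step * grad, 0.0, None)     # keep intensities >= 0
    return x
```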
Abstract:
Electrical failure of insulation is known to be an extremal random process wherein nominally identical pro-rated specimens of equipment insulation at constant stress fail at inordinately different times, even under laboratory test conditions. In order to estimate the life of power equipment, it is necessary to run long-duration ageing experiments under accelerated stresses to acquire and analyze insulation-specific failure data. In the present work, Resin Impregnated Paper (RIP), a relatively new insulation system of choice used in transformer bushings, is taken as an example. The failure data have been processed using proven statistical methods, both graphical and analytical. The physical model governing insulation failure at constant accelerated stress has been assumed to be based on the temperature-dependent inverse power law model.
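Failure times of this kind are commonly fitted to a Weibull model by the graphical method; a minimal sketch using Bernard's median-rank approximation (function names are ours):

```python
import numpy as np

def weibull_fit(failure_times):
    """Graphical Weibull fit: regress ln(-ln(1-F)) on ln(t), where F uses
    Bernard's median ranks. The slope is the shape parameter beta and the
    intercept yields the scale (characteristic life) eta."""
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median ranks
    slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - F)), 1)
    shape = slope
    scale = np.exp(-intercept / slope)
    return shape, scale
```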
Abstract:
The study extends the first-order reliability method (FORM) and inverse FORM to update reliability models for existing, statically loaded structures based on measured responses. Solutions based on Bayes' theorem, Markov chain Monte Carlo simulations, and inverse reliability analysis are developed. The case of linear systems with Gaussian uncertainties and linear performance functions is shown to be exactly solvable. FORM and inverse-reliability-based methods are subsequently developed to deal with more general problems. The proposed procedures are implemented by combining Matlab-based reliability modules with finite element models residing in the Abaqus software. Numerical illustrations on linear and nonlinear frames are presented. © 2012 Elsevier Ltd. All rights reserved.
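The exactly solvable linear-Gaussian case mentioned above admits a one-line computation; a sketch (failure taken as g(X) < 0, names ours):

```python
import numpy as np
from scipy.stats import norm

def linear_gaussian_reliability(a, b, mu, cov):
    """Reliability index and failure probability for a linear limit state
    g(X) = a @ X + b with X ~ N(mu, cov): beta = E[g]/std(g), Pf = Phi(-beta)."""
    beta = (a @ mu + b) / np.sqrt(a @ cov @ a)
    return beta, norm.cdf(-beta)
```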
Abstract:
Most existing WCET estimation methods directly estimate execution time, ET, in cycles. We propose to study ET as a product of two factors, ET = IC * CPI, where IC is the instruction count and CPI is cycles per instruction. Directly estimating ET may lead to a highly pessimistic estimate, since implicitly these methods may be using the worst-case IC and the worst-case CPI. We hypothesize that there exists a functional relationship between CPI and IC such that CPI = f(IC). This is ascertained by computing the covariance matrix and studying the scatter plots of CPI versus IC. IC and CPI values are obtained by running benchmarks with a large number of inputs using the cycle-accurate architectural simulator SimpleScalar on two different architectures. It is shown that the benchmarks can be grouped into different classes based on the CPI-versus-IC relationship. For some benchmarks, such as FFT and FIR, both IC and CPI are almost constant irrespective of the input. Other benchmarks exhibit a direct or an inverse relationship between CPI and IC; in such cases, one can predict CPI for a given IC as CPI = f(IC). We derive the theoretical worst-case IC for a program, denoted SWIC, using integer linear programming (ILP) and estimate WCET as SWIC * f(SWIC). However, if CPI decreases sharply with IC, then the measured maximum cycles is observed to be a better estimate. For certain other benchmarks, the CPI-versus-IC relationship is either random or CPI remains constant with varying IC; in such cases, WCET is estimated as the product of SWIC and the measured maximum CPI. The proposed method is observed to yield tighter WCET estimates than Chronos, a static WCET analyzer, for most benchmarks on the two architectures considered in this paper.
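A sketch of the CPI-versus-IC classification step, assuming per-input (IC, CPI) measurements are available (the threshold and names are illustrative, not the paper's):

```python
import numpy as np

def classify_cpi_ic(ic, cpi, corr_thresh=0.8):
    """Classify a benchmark by its CPI-versus-IC relationship using the
    correlation coefficient, and fit CPI = f(IC) as a line when the
    relationship is strong enough to support WCET = SWIC * f(SWIC)."""
    r = np.corrcoef(ic, cpi)[0, 1]
    if abs(r) >= corr_thresh:
        slope, intercept = np.polyfit(ic, cpi, 1)
        kind = "direct" if r > 0 else "inverse"
        return kind, (lambda n: slope * n + intercept)
    return "random/constant", None   # fall back to measured maximum CPI
```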
Abstract:
The SUSY Les Houches Accord (SLHA) 2 extended the first SLHA to include various generalisations of the Minimal Supersymmetric Standard Model (MSSM) as well as its simplest next-to-minimal version. Here, we propose further extensions to it, to include the most general and well-established see-saw descriptions (types I/II/III, inverse, and linear) in both an effective and a simple gauged extension of the MSSM framework. In addition, we generalise the PDG numbering scheme to reflect the properties of the particles. © 2012 Elsevier B.V. All rights reserved.
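For readers unfamiliar with the accord's file format, a minimal reader for SLHA-style blocks; this is illustrative only, and real work should use a dedicated SLHA library:

```python
def parse_slha_blocks(text):
    """Minimal reader for SLHA-style text: 'BLOCK <NAME>' headers followed
    by whitespace-separated entry lines, with '#' starting a comment."""
    blocks, current = {}, None
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments
        if not line:
            continue
        tokens = line.split()
        if tokens[0].upper() == "BLOCK":
            current = tokens[1].upper()
            blocks[current] = []
        elif current is not None:
            blocks[current].append(tokens)
    return blocks

demo = """BLOCK MASS   # pole masses
   25   1.2500e+02   # h0
"""
print(parse_slha_blocks(demo))   # {'MASS': [['25', '1.2500e+02']]}
```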
Abstract:
The inverse problem in photoacoustic tomography (PAT) seeks to obtain the absorbed energy map from boundary pressure measurements, for which computationally intensive iterative algorithms exist. The computational challenge is heightened when the reconstruction is done using boundary data split into its frequency spectrum to improve source localization and the conditioning of the inverse problem. The key idea of this work is to modify the update equation so that the Jacobian and the perturbation in data are summed over all wave numbers, k, and inverted only once to recover the absorbed energy map. This leads to a considerable reduction in the overall computation time. The results obtained using simulated data demonstrate the efficiency of the proposed scheme without compromising the accuracy of reconstruction.
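The modified update described above amounts to accumulating the normal equations over wave numbers before a single inversion; a sketch (the Tikhonov term lam is an illustrative addition, names ours):

```python
import numpy as np

def summed_update(jacobians, residuals, lam):
    """Single regularized Gauss-Newton-type update with the normal
    equations accumulated over all wave numbers k, so the system is
    inverted only once:
        (sum_k J_k^T J_k + lam*I) dx = sum_k J_k^T dr_k"""
    n = jacobians[0].shape[1]
    H, g = np.zeros((n, n)), np.zeros(n)
    for J, dr in zip(jacobians, residuals):
        H += J.T @ J
        g += J.T @ dr
    return np.linalg.solve(H + lam * np.eye(n), g)
```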
Abstract:
A number of spectral analysis of surface waves (SASW) tests were performed on asphaltic and cement concrete pavements by freely dropping a 6.5 kg spherical mass, having a radius of 5.82 cm, from a height (h) of 0.5–1.5 m. The maximum wavelength (λmax), up to which the shear wave velocity profile can be detected with the use of surface wave measurements, increases continuously with an increase in h. As compared to the asphaltic pavement, the values of λmax and λmin become greater for the chosen cement concrete pavement, where λmin refers to the minimum wavelength. With h = 0.5 m, a good assessment of the top layers of both the chosen asphaltic and cement concrete pavements, including the soil subgrade, can be made. For a given h, as compared to the selected asphaltic pavement, the first receiver in the case of the chosen cement concrete pavement needs to be placed at a greater distance from the source. Inverse analysis has also been performed to characterise the shear wave velocity profile of the different layers of the pavements.
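A sketch of the basic dispersion computation behind such tests, assuming two receiver records at spacing d (phase unwrapping is more delicate in practice; names ours):

```python
import numpy as np

def phase_velocity(x1, x2, fs, d):
    """Rayleigh-wave phase velocity from a two-receiver surface-wave test:
    the cross-power spectrum phase gives the inter-receiver delay at each
    frequency, so V(f) = 2*pi*f*d / phase (d = receiver spacing in m)."""
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    f = np.fft.rfftfreq(len(x1), 1.0 / fs)
    phase = np.unwrap(np.angle(X2 * np.conj(X1)))   # unwrapped phase lag
    with np.errstate(divide="ignore", invalid="ignore"):
        v = 2.0 * np.pi * f * d / phase
    return f, v
```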
Abstract:
Using a Girsanov change of measures, we propose novel variations within a particle-filtering algorithm, as applied to the inverse problem of state and parameter estimation for nonlinear dynamical systems of engineering interest, toward weakly correcting for the linearization or integration errors that almost invariably occur whilst numerically propagating the process dynamics, typically governed by nonlinear stochastic differential equations (SDEs). Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated within the evolving flow in two steps. Once the likelihood, an exponential martingale, is split into a product of two factors, the correction owing to the first factor is implemented via rejection sampling in the first step. The second factor, which is directly computable, is accounted for via two different schemes: one employing resampling, and the other using a gain-weighted innovation term added to the drift field of the process dynamics, thereby overcoming the problem of sample dispersion posed by resampling. The proposed strategies, employed as add-ons to existing particle filters (the bootstrap and auxiliary SIR filters in this work), are found to non-trivially improve the convergence and accuracy of the estimates and to yield reduced mean square errors vis-a-vis those obtained through the parent filtering schemes.
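For orientation, the baseline bootstrap (SIR) particle filter to which such corrections are added; the Girsanov-based steps themselves are not reproduced here, and all names are ours:

```python
import numpy as np

def bootstrap_pf(y, x0, propagate, likelihood, rng=np.random.default_rng(0)):
    """Plain bootstrap (SIR) particle filter for a scalar state.
    `propagate` advances the particle ensemble through the (discretized)
    SDE; `likelihood` evaluates p(y_t | x_t) for each particle."""
    x = x0.copy()
    n = len(x)
    means = []
    for yt in y:
        x = propagate(x, rng)                  # predict through the dynamics
        w = likelihood(yt, x)                  # weight by the observation
        w /= w.sum()
        means.append(w @ x)                    # filtered state estimate
        x = x[rng.choice(n, size=n, p=w)]      # multinomial resampling
    return np.array(means)
```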
Abstract:
In this paper, the free vibration of a non-uniform free-free Euler-Bernoulli beam is studied using an inverse problem approach. It is found that the fourth-order governing differential equation for such beams possesses a fundamental closed-form solution for certain polynomial variations of the mass and stiffness. An infinite number of non-uniform free-free beams exist, with different mass and stiffness variations, but sharing the same fundamental frequency. A detailed study is conducted for linear, quadratic and cubic variations of mass, and on how to pre-select the internal nodes such that the closed-form solutions exist for the three cases. A special case is also considered where external elastic constraints are present at the internal nodes. The derived results are provided as benchmark solutions for the validation of numerical codes for non-uniform free-free beams. © 2013 Elsevier Ltd. All rights reserved.
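For reference, the governing equation in question is the standard non-uniform Euler-Bernoulli free-vibration equation, with flexural rigidity EI(x), mass per unit length m(x), mode shape w(x) and natural frequency ω:

```latex
\frac{\mathrm{d}^2}{\mathrm{d}x^2}\!\left[ EI(x)\,\frac{\mathrm{d}^2 w}{\mathrm{d}x^2} \right] - \omega^2\, m(x)\, w(x) = 0
```

Closed-form fundamental solutions exist when EI(x) and m(x) are chosen as compatible polynomials, which is the property the paper exploits.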
Abstract:
We introduce the class $\Sigma_k(d)$ of $k$-stellated (combinatorial) spheres of dimension $d$ ($0 \le k \le d+1$) and compare and contrast it with the class $\mathcal{S}_k(d)$ ($0 \le k \le d$) of $k$-stacked homology $d$-spheres. We have $\Sigma_1(d) = \mathcal{S}_1(d)$, and $\Sigma_k(d) \subseteq \mathcal{S}_k(d)$ for $d \ge 2k - 1$. However, for each $k \ge 2$ there are $k$-stacked spheres which are not $k$-stellated. For $d \le 2k - 2$, the existence of $k$-stellated spheres which are not $k$-stacked remains an open question. We also consider the class $W_k(d)$ (and $K_k(d)$) of simplicial complexes all of whose vertex-links belong to $\Sigma_k(d-1)$ (respectively, $\mathcal{S}_k(d-1)$). Thus, $W_k(d) \subseteq K_k(d)$ for $d \ge 2k$, while $W_1(d) = K_1(d)$. Let $\bar{K}_k(d)$ denote the class of $d$-dimensional complexes all of whose vertex-links are $k$-stacked balls. We show that for $d \ge 2k + 2$, there is a natural bijection $M \mapsto \bar{M}$ from $K_k(d)$ onto $\bar{K}_k(d+1)$ which is the inverse of the boundary map $\partial \colon \bar{K}_k(d+1) \to K_k(d)$. Finally, we complement the tightness results of our recent paper [Bagchi and Datta (2013)] by showing that, for any field $F$, an $F$-orientable $(k+1)$-neighbourly member of $W_k(2k+1)$ is $F$-tight if and only if it is $k$-stacked.