862 results for Error correction model
Abstract:
The near-infrared diffuse optical tomography (DOT) technique is capable of providing good quantitative reconstruction of tissue absorption and scattering properties, given additional inputs such as the input and output modulation depths and a correction for photon leakage. We have calculated the two-dimensional (2D) input modulation depth from three-dimensional (3D) diffusion to model the 2D diffusion of photons. The photon leakage when light traverses from the phantom to the fiber tip is estimated using a solid angle model. The experiments are carried out for single (5 and 6 mm) as well as multiple (6 and 8 mm) inhomogeneities with a higher absorption coefficient embedded in a homogeneous phantom. The diffusion equation for photon transport is solved using the finite element method, and the Jacobian is modeled for reconstructing the optical parameters. We study the development and performance of a DOT system using a modulated single light source and multiple detectors. Dual-source methods are reported to have better reconstruction capabilities for resolving and localizing single as well as multiple inhomogeneities because of their superior noise rejection. However, an experimental setup with dual sources is much more difficult to implement, because two out-of-phase, identical light probes must be adjusted symmetrically on either side of the detector during scanning. Our work shows that with a relatively simpler single-source system, the results are better in terms of resolution and localization. The experiments are carried out with 5 and 6 mm inhomogeneities separately, and with 6 and 8 mm inhomogeneities together, with an absorption coefficient almost three times that of the background. The results show that our experimental single-source system with additional inputs, such as the 2D input/output modulation depth and the air-fiber interface correction, is capable of detecting the 5 and 6 mm inhomogeneities separately and can identify the size difference of multiple inhomogeneities such as 6 and 8 mm. The localization error is zero, and the recovered absorption coefficient is 93% of that of the inhomogeneity embedded in the experimental phantom.
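As an illustration of the Jacobian-based reconstruction step described above, the sketch below implements a damped Gauss-Newton update; `forward_model` and `jacobian` are hypothetical stand-ins for the paper's finite-element diffusion solver and its sensitivity computation, and the damping and iteration values are placeholders.

```python
import numpy as np

def reconstruct_mua(mua0, measured, forward_model, jacobian, n_iter=10, lam=1e-3):
    """Iteratively update the absorption map `mua` from boundary measurements."""
    mua = np.array(mua0, dtype=float)
    for _ in range(n_iter):
        predicted = forward_model(mua)       # e.g. FEM solve of the diffusion eq.
        residual = measured - predicted      # data-model misfit
        J = jacobian(mua)                    # sensitivity of the data to mua
        # Damped normal equations: (J^T J + lam I) dmua = J^T residual
        dmua = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ residual)
        mua += dmua
    return mua
```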
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain the outputs. The absolutely error-free quantities, as well as the completely errorless computations done in a natural process, can never be captured by any means at our disposal. While computations in nature/natural processes, including their input real quantities, are exact, all computations that we do using a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error-bounds. The fact that the exact error is never known, under any circumstances and in any context, implies that the term error is nothing but error-bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) that are supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to any or all of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computations (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
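As a small worked illustration (not from the talk) of how relative error-bounds behave, the snippet below propagates first-order relative bounds through a product and a same-sign sum, starting from the 0.005 per cent instrument bound mentioned above.

```python
def mul_rel_bound(r1, r2):
    """First-order relative error-bound of a product x1 * x2."""
    return r1 + r2

def add_rel_bound(x1, r1, x2, r2):
    """Relative error-bound of a sum x1 + x2 (x1, x2 of the same sign)."""
    return (abs(x1) * r1 + abs(x2) * r2) / abs(x1 + x2)

r = 5e-5  # the 0.005 per cent instrument bound cited in the abstract
print(mul_rel_bound(r, r))            # product of two measurements: 1e-4
print(add_rel_bound(3.0, r, 4.0, r))  # same-sign sum: still 5e-5 here
```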
Abstract:
This paper proposes a current-error space-vector-based hysteresis controller with online computation of the boundary for two-level inverter-fed induction motor (IM) drives. The proposed hysteresis controller has all the advantages of conventional current-error space-vector-based hysteresis controllers, such as quick transient response, simplicity, and adjacent voltage vector switching. A major advantage of a voltage-source-inverter-fed drive based on the proposed controller is that the phase voltage frequency spectrum produced matches that of a constant-switching-frequency space-vector pulsewidth modulated (SVPWM) inverter. In the proposed hysteresis controller, the stator voltages along the alpha- and beta-axes are estimated during the zero and active voltage vector periods using the current errors along the alpha- and beta-axes and a steady-state model of the IM. Online computation of the hysteresis boundary is carried out using these estimated stator voltages. The proposed scheme is simple and capable of taking the inverter up to six-step-mode operation, if demanded by the drive system. The proposed hysteresis-controller-based inverter-fed drive scheme is experimentally verified, and its steady-state and transient performance is extensively tested. The experimental results show a constant frequency spectrum for the phase voltage, similar to that of a constant-frequency SVPWM inverter-fed drive.
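The following is a highly simplified sketch of the vector-selection logic in a current-error space-vector hysteresis controller; the paper's actual contribution, the online boundary computed from stator voltages estimated during zero and active vector periods, is abstracted into the `boundary` argument, and all names are illustrative.

```python
import numpy as np

def select_vector(i_err_ab, sector, boundary, active_vectors, v_zero):
    """Choose the inverter voltage vector for the present sampling instant."""
    if np.linalg.norm(i_err_ab) <= boundary:
        return v_zero                        # error within boundary: zero vector
    # Error outside the boundary: of the two adjacent active vectors of the
    # present sector, pick the one best opposing the current-error direction.
    candidates = active_vectors[sector]
    scores = [float(np.dot(v, -i_err_ab)) for v in candidates]
    return candidates[int(np.argmax(scores))]
```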
Abstract:
Diffuse optical tomography (DOT) is one way to probe highly scattering media such as tissue using low-energy near-infrared (NIR) light, in order to reconstruct a map of the optical property distribution. The interaction of photons with biological tissue is a non-linear process, and photon transport through the tissue is modelled using diffusion theory. The inversion problem is often solved through iterative methods based on nonlinear optimization for the minimization of a data-model misfit function. The solution of the non-linear problem can be improved by modeling and optimizing the cost functional. The cost functional is f(x) = x^T A x - b^T x + c, and after minimization it reduces to the system Ax = b. The spatial distribution of the optical parameter can be obtained by solving this equation iteratively for x. As the problem is non-linear, ill-posed and ill-conditioned, there will be an error, or correction term, for x at each iteration. A linearization strategy is proposed for the solution of the nonlinear ill-posed inverse problem through a linear combination of the system matrix and the error in the solution. By propagating the error information e (obtained from the previous iteration) to the minimization function f(x), we can rewrite the minimization function as f(x; e) = (x + e)^T A (x + e) - b^T (x + e) + c. The revised cost functional is f(x; e) = f(x) + e^T A e. The self-guided spatially weighted prior e^T A e (e being the error in estimating x) along the principal nodes facilitates a well-resolved dominant solution over the region of interest. The local minimization reduces the spreading of the inclusion and removes the side lobes, thereby improving the contrast, localization and resolution of the reconstructed image, which has not been possible with conventional linear and regularization algorithms.
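The algebra behind the revised functional can be checked numerically: assuming A is symmetric, the cross term between x and e vanishes at the stationary point of f, leaving exactly the quadratic error prior e^T A e. A minimal verification:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)              # symmetric positive definite
b, c = rng.standard_normal(5), 1.0
f = lambda x: x @ A @ x - b @ x + c      # cost functional from the abstract

x_star = np.linalg.solve(2 * A, b)       # stationary point: gradient 2Ax - b = 0
e = 0.1 * rng.standard_normal(5)         # error propagated from a prior iteration
print(np.isclose(f(x_star + e), f(x_star) + e @ A @ e))  # True: f(x;e) = f(x) + e^T A e
```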
Abstract:
Two models for amplify-and-forward (AF) relaying, namely, fixed gain and fixed power relaying, have been extensively studied in the literature given their ability to harness spatial diversity. In fixed gain relaying, the relay gain is fixed, but its transmit power varies as a function of the source-relay channel gain. In fixed power relaying, the relay transmit power is fixed, but its gain varies. We revisit and generalize the fundamental two-hop AF relaying model. We present an optimal scheme in which an average-power-constrained AF relay adapts its gain and transmit power to minimize the symbol error probability (SEP) at the destination. Also derived are insightful and practically amenable closed-form bounds for the optimal relay gain. We then analyze the SEP of MPSK, derive tight bounds for it, and characterize the diversity order for Rayleigh fading. Also derived is an SEP approximation that is accurate to within 0.1 dB. Extensive results show that the scheme yields significant energy savings of 2.0-7.7 dB at the source and relay. Optimal relay placement for the proposed scheme is also characterized, and differs from that of fixed gain or fixed power relaying. Generalizations to MQAM and other fading distributions are also discussed.
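For concreteness, a Monte Carlo sketch of the two-hop AF model contrasting fixed-gain and fixed-power relaying over Rayleigh fading is given below; it uses BPSK rather than general MPSK, and the gain, power and noise values are illustrative, not the paper's optimized scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
N, Ps, Pr, sigma2 = 200_000, 1.0, 1.0, 0.1

x = rng.choice([-1.0, 1.0], N) * np.sqrt(Ps)          # BPSK symbols
h1 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h2 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
n1 = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
n2 = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

for name, g in [("fixed gain ", np.full(N, 1.2)),
                ("fixed power", np.sqrt(Pr / (np.abs(h1) ** 2 * Ps + sigma2)))]:
    y_r = h1 * x + n1                                  # source -> relay hop
    y_d = g * h2 * y_r + n2                            # relay -> destination hop
    x_hat = np.sign(np.real(np.conj(g * h1 * h2) * y_d))  # coherent detection
    print(name, "SEP ~", np.mean(x_hat != np.sign(x)))
```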
Abstract:
Gene expression in living systems is inherently stochastic, and tends to produce varying numbers of proteins over repeated cycles of transcription and translation. In this paper, an expression is derived for the steady-state protein number distribution, starting from a two-stage kinetic model of the gene expression process involving p proteins and r mRNAs. The derivation is based on an exact path integral evaluation of the joint distribution, P(p, r, t), of p and r at time t, which can be expressed in terms of the coupled Langevin equations for p and r that represent the two-stage model in continuum form. The steady-state distribution of p alone, P(p), is obtained from P(p, r, t) (a bivariate Gaussian) by integrating out the r degrees of freedom and taking the limit t -> infinity. P(p) is found to be proportional to the product of a Gaussian and a complementary error function. It provides a generally satisfactory fit to simulation data on the same two-stage process when the translational efficiency (a measure of intrinsic noise levels in the system) is relatively low; it is less successful as a model of the data when the translational efficiency (and hence the noise level) is high.
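The steady-state form quoted above (a Gaussian multiplied by a complementary error function) is easy to evaluate numerically; in the sketch below the parameters mu, s, a and b are placeholders that would be fitted to simulation data.

```python
import numpy as np
from scipy.special import erfc

def p_steady(p, mu, s, a, b):
    """Gaussian x erfc ansatz for the steady-state protein number distribution."""
    return np.exp(-((p - mu) ** 2) / (2 * s ** 2)) * erfc(a * (p - b))

p = np.linspace(0.0, 200.0, 400)
pdf = p_steady(p, mu=80.0, s=25.0, a=-0.05, b=60.0)
pdf /= pdf.sum() * (p[1] - p[0])          # normalize on the grid
```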
Abstract:
Propranolol, a beta-adrenergic receptor blocker, is presently considered a potential therapeutic intervention under investigation for its role in the prevention and treatment of osteoporosis. However, no studies have compared the osteoprotective properties of propranolol with well-accepted therapeutic interventions for the treatment of osteoporosis. To address this question, this study was designed to evaluate the bone-protective effects of zoledronic acid, alfacalcidol and propranolol in an animal model of postmenopausal osteoporosis. Five days after ovariectomy, 36 ovariectomized (OVX) rats were divided into 6 equal groups, randomized to treatment with zoledronic acid (100 μg/kg, single intravenous dose), alfacalcidol (0.5 μg/kg, oral gavage daily), or propranolol (0.1 mg/kg, subcutaneously 5 days per week) for 12 weeks. Untreated OVX and sham-OVX rats were used as controls. At the end of the study, rats were killed under anesthesia. For bone porosity evaluation, the whole fourth lumbar vertebrae (LV4) were removed. LV4 were also used to measure bone mechanical properties, and left femurs were used for bone histology. Propranolol showed a significant decrease in bone porosity in comparison to the OVX control. Moreover, propranolol significantly improved bone mechanical properties and bone quality when compared with the OVX control. The osteoprotective effect of propranolol was comparable with that of zoledronic acid and alfacalcidol. Based on this comparative study, the results strongly suggest that propranolol might be a new therapeutic intervention for the management of postmenopausal osteoporosis in humans.
Abstract:
This paper presents a novel, soft-computing-based solution to a complex optimal control or dynamic optimization problem that requires the solution to be available in real time. The complexities in this problem of optimal guidance of interceptors launched with high initial heading errors include the more involved physics of a three-dimensional missile-target engagement, and those posed by the assumption of a realistic dynamic model with time-varying missile speed, thrust, drag and mass, besides gravity, and an upper bound on the lateral acceleration. The classic pure proportional navigation law is augmented with a polynomial function of the heading error, and the values of the coefficients of the polynomial are determined using differential evolution (DE). The performance of the proposed DE-enhanced guidance law is compared against existing conventional laws in the literature, on the criteria of time and energy optimality, peak lateral acceleration demanded, terminal speed, and robustness to unanticipated target maneuvers, to illustrate the superiority of the proposed law. (C) 2013 Elsevier B.V. All rights reserved.
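An illustrative form of such an augmented law is sketched below: the pure proportional navigation term plus a polynomial in the heading error, with the command saturated at the lateral acceleration bound. The coefficient values are placeholders; in the paper they are tuned offline by differential evolution.

```python
import numpy as np

def augmented_ppn_accel(N, Vc, los_rate, heading_err, coeffs, a_max):
    """Lateral acceleration command: pure PN term + polynomial heading-error term."""
    a_ppn = N * Vc * los_rate                      # pure proportional navigation
    a_poly = np.polyval(coeffs, heading_err)       # DE-tuned augmentation
    return np.clip(a_ppn + a_poly, -a_max, a_max)  # enforce lateral-accel bound

# Illustrative call with placeholder values (SI units, heading error in rad).
a_cmd = augmented_ppn_accel(N=3.0, Vc=900.0, los_rate=0.02,
                            heading_err=0.4, coeffs=[2.0, -1.0, 0.5], a_max=100.0)
```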
Abstract:
A novel Projection Error Propagation-based Regularization (PEPR) method is proposed to improve image quality in Electrical Impedance Tomography (EIT). The PEPR method defines the regularization parameter as a function of the projection error, i.e., the difference between the experimental measurements and the calculated data. The regularization parameter in the reconstruction algorithm is thus modified automatically according to the noise level in the measured data and the ill-posedness of the Hessian matrix. Resistivity imaging of practical phantoms is carried out with PEPR both in a Model Based Iterative Image Reconstruction (MoBIIR) algorithm and with the Electrical Impedance Diffuse Optical Reconstruction Software (EIDORS). The effect of the PEPR method is also studied with phantoms of different configurations and with different current injection methods. All the resistivity images reconstructed with the PEPR method are compared with single-step regularization (STR) and Modified Levenberg Regularization (LMR) techniques. The results show that the PEPR technique reduces the projection error and the solution error in each iteration, for both simulated and experimental data and in both algorithms, and improves the reconstructed images in terms of contrast-to-noise ratio (CNR), percentage of contrast recovery (PCR), coefficient of contrast (COC) and diametric resistivity profile (DRP). (C) 2013 Elsevier Ltd. All rights reserved.
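A minimal sketch of one PEPR-style iteration is given below; the mapping g(.) from projection error to regularization parameter is a placeholder for the paper's choice, and `forward` and `jacobian` stand in for the EIT forward solver and its sensitivity matrix.

```python
import numpy as np

def pepr_step(rho, V_meas, forward, jacobian, g=lambda err: err ** 2):
    """One regularized Gauss-Newton update with a projection-error-driven lambda."""
    residual = V_meas - forward(rho)          # projection error
    lam = g(np.linalg.norm(residual))         # regularization from misfit level
    J = jacobian(rho)
    H = J.T @ J + lam * np.eye(J.shape[1])    # regularized Hessian
    return rho + np.linalg.solve(H, J.T @ residual), lam
```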
Abstract:
In this work, we consider two-dimensional (2-D) binary channels in which the 2-D error patterns are constrained so that errors cannot occur in adjacent horizontal or vertical positions. We consider probabilistic and combinatorial models for such channels. A probabilistic model is obtained from a 2-D random field defined by Roth, Siegel and Wolf (2001). Based on the conjectured ergodicity of this random field, we obtain an expression for the capacity of the 2-D non-adjacent-errors channel. We also derive an upper bound for the asymptotic coding rate in the combinatorial model.
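For intuition about the combinatorial model, the standard transfer-matrix recursion below counts the m x n binary error patterns satisfying the non-adjacency constraint and reports the normalized exponent log2(count)/(mn); this is the classical hard-square computation, not the paper's upper bound.

```python
from math import log2

def rate_estimate(m, n):
    """log2(#valid m x n patterns)/(m*n) under the non-adjacency constraint."""
    rows = [r for r in range(1 << m) if (r & (r >> 1)) == 0]   # no horizontal pair
    counts = {r: 1 for r in rows}
    for _ in range(n - 1):                                     # add rows one by one
        counts = {a: sum(c for b, c in counts.items() if (a & b) == 0)
                  for a in rows}                               # no vertical pair
    return log2(sum(counts.values())) / (m * n)

print(rate_estimate(8, 8))   # finite-size value; tends to ~0.5879 as m, n grow
```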
Abstract:
Evaporative fraction (EF) is a measure of the amount of available energy at the earth's surface that is partitioned into latent heat flux. Currently operational thermal sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on satellite platforms, provide data only at 1000 m resolution, which constrains the spatial resolution of EF estimates. A simple model (disaggregation of evaporative fraction, DEFrac), based on the observed relationship between EF and the normalized difference vegetation index, is proposed to spatially disaggregate EF. The DEFrac model was tested with EF estimated from the triangle method using 113 clear-sky data sets from the MODIS sensors aboard the Terra and Aqua satellites. Validation was done using data from four micrometeorological tower sites across varied agro-climatic zones with different land cover conditions in India, using the Bowen ratio energy balance method. The root-mean-square error (RMSE) of EF estimated at 1000 m resolution using the triangle method was 0.09 for all four sites put together, and the RMSE of DEFrac-disaggregated EF was 0.09 at 250 m resolution. Two input-disaggregation models were also tried, with thermal data sharpened using the two thermal sharpening models DisTrad and TsHARP; the RMSE of the disaggregated EF was 0.14 for both input-disaggregation models at 250 m resolution. Moreover, spatial analysis of the disaggregation was performed using Landsat-7 Enhanced Thematic Mapper Plus (ETM+) data over four grids in India for contrasting seasons. It was observed that the DEFrac model performed better than the input-disaggregation models under cropped conditions, while they were marginally similar under non-cropped conditions.
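A sketch of the DEFrac idea, under the assumption of a simple polynomial EF-NDVI regression (the paper fits its own observed relationship), is given below: fit at the coarse resolution, then apply the fitted relation to the fine-resolution NDVI field.

```python
import numpy as np

def defrac(ef_coarse, ndvi_coarse, ndvi_fine, deg=2):
    """Disaggregate EF: fit EF vs NDVI at 1000 m, apply to 250 m NDVI."""
    coeffs = np.polyfit(ndvi_coarse.ravel(), ef_coarse.ravel(), deg)
    ef_fine = np.polyval(coeffs, ndvi_fine)
    return np.clip(ef_fine, 0.0, 1.0)      # EF is a fraction of available energy
```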
Abstract:
Tetracene is an important conjugated molecule for device applications. We have used the diagrammatic valence bond method to obtain the desired states, in a Hilbert space of about 450 million singlets and 902 million triplets. We have also studied donor/acceptor (D/A)-substituted tetracenes with the D and A groups placed symmetrically about the long axis of the molecule. In these cases, by exploiting a new symmetry, which is a combination of C-2 symmetry and electron-hole symmetry, we are able to obtain their low-lying states. In the case of substituted tetracene, we find that the optically allowed one-photon excitation gaps reduce with increasing D/A strength, while the lowest singlet-triplet gap is only weakly affected. In all the systems we have studied, the excited singlet state, S-1, is at more than twice the energy of the lowest triplet state, and the second triplet is very close to the S-1 state. Thus, donor-acceptor-substituted tetracene could be a good candidate for photovoltaic device applications, as it satisfies the energy criteria for singlet fission. We have also obtained the model-exact second harmonic generation (SHG) coefficients using the correction vector method, and we find that the SHG responses increase with increasing D/A strength.
Abstract:
The Onsager model for the secondary flow field in a high-speed rotating cylinder is extended to incorporate the difference in mass of the two species in a binary gas mixture. The base flow is an isothermal solid-body rotation in which there is a balance between the radial pressure gradient and the centrifugal force density for each species. Explicit expressions for the radial variation of the pressure and mass/mole fractions, and from these the radial variation of the viscosity, thermal conductivity and diffusion coefficient, are derived, and these are used in the computation of the secondary flow. For the secondary flow, the mass, momentum and energy equations in axisymmetric coordinates are expanded in an asymptotic series in a parameter ε = Δm/m_av, where Δm is the difference in the molecular masses of the two species, and the average molecular mass is defined as m_av = (ρ_w1 m_1 + ρ_w2 m_2)/ρ_w, where ρ_w1 and ρ_w2 are the mass densities of the two species at the wall, and ρ_w = ρ_w1 + ρ_w2. The equation for the master potential and the boundary conditions are derived correct to O(ε²). The leading-order equation for the master potential contains a self-adjoint sixth-order operator in the radial direction, which is different from that of the generalized Onsager model (Pradhan & Kumaran, J. Fluid Mech., vol. 686, 2011, pp. 109-159), since the species mass difference is included in the computation of the density, viscosity and thermal conductivity in the base state. This is solved, subject to boundary conditions, to obtain the leading approximation for the secondary flow, followed by a solution of the diffusion equation for the leading correction to the species mole fractions. The O(ε) and O(ε²) equations contain inhomogeneous terms that depend on the lower-order solutions, and these are solved in a hierarchical manner to obtain the O(ε) and O(ε²) corrections to the master potential. A similar hierarchical procedure is used for the Carrier-Maslen model for the end-cap secondary flow. The results of the Onsager hierarchy, up to O(ε²), are compared with the results of direct simulation Monte Carlo (DSMC) simulations for a binary hard-sphere gas mixture, for secondary flow due to a wall temperature gradient, inflow/outflow of gas along the axis, and mass and momentum sources in the flow. There is excellent agreement between the solutions for the secondary flow correct to O(ε²) and the simulations, to within 15%, even at a Reynolds number as low as 100 and a length/diameter ratio as low as 2, for a low stratification parameter A of 0.707, when the secondary flow velocity is as high as 0.2 times the maximum base flow velocity and the ratio 2Δm/(m_1 + m_2) is as high as 0.5. Here, the Reynolds number is Re = ρ_w Ω R²/μ and the stratification parameter is A = √(m Ω² R²/(2 k_B T)), where R and Ω are the cylinder radius and angular velocity, m is the molecular mass, ρ_w is the wall density, μ is the viscosity and T is the temperature. The leading-order solutions do capture the qualitative trends, but are not in quantitative agreement.
Abstract:
The Grating Compression Transform (GCT) is a two-dimensional analysis of the speech signal which has been shown to be effective for multi-pitch tracking in speech mixtures. Multi-pitch tracking methods using GCT apply a Kalman filter framework to obtain pitch tracks, which requires training the filter parameters on true pitch tracks. We propose an unsupervised method for obtaining multiple pitch tracks. In the proposed method, multiple pitch tracks are modeled using the time-varying means of a Gaussian mixture model (GMM), referred to as TVGMM. The TVGMM parameters are estimated using the multiple pitch values at each frame of a given utterance, obtained from different patches of the spectrogram using GCT. We evaluate the performance of the proposed method on all-voiced speech mixtures as well as random speech mixtures having well-separated and close pitch tracks. TVGMM achieves multi-pitch tracking with 51% and 53% of multi-pitch estimates having error <= 20% for random mixtures and all-voiced mixtures, respectively. TVGMM also yields lower root-mean-squared error in pitch track estimation than Kalman filtering.
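A simplified sketch of the TVGMM idea is given below: per frame, GCT-derived pitch candidates are softly assigned to K Gaussian components, and the component means, which form the pitch tracks, are updated with temporal smoothing. This illustrates the modeling idea only; the parameter values and update rule are placeholders, not the paper's estimator.

```python
import numpy as np

def tvgmm_tracks(candidates, K=2, sigma=10.0, alpha=0.3):
    """candidates: list over frames of 1-D arrays of pitch candidates (Hz)."""
    means = np.linspace(100.0, 250.0, K)                    # crude initialization
    tracks = []
    for cand in candidates:
        if cand.size:
            d = cand[:, None] - means[None, :]              # (n_cand, K)
            resp = np.exp(-0.5 * (d / sigma) ** 2)
            resp /= resp.sum(axis=1, keepdims=True) + 1e-12 # soft assignment
            new_means = (resp * cand[:, None]).sum(0) / (resp.sum(0) + 1e-12)
            means = (1 - alpha) * means + alpha * new_means # time-varying means
        tracks.append(means.copy())
    return np.array(tracks)                                 # (n_frames, K)
```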
Abstract:
We address the problem of designing an optimal pointwise shrinkage estimator in the transform domain, based on the minimum probability of error (MPE) criterion. We assume an additive model for the noise corrupting the clean signal. The proposed formulation is general in the sense that it can handle various noise distributions. We consider several noise distributions (Gaussian, Student's-t, and Laplacian) and compare the denoising performance of the resulting estimator with that of mean-squared error (MSE)-based estimators. The MSE optimization is carried out using an unbiased estimator of the MSE, namely Stein's Unbiased Risk Estimate (SURE). Experimental results show that the MPE estimator outperforms the SURE estimator in terms of the SNR of the denoised output, for low (0-10 dB) and medium (10-20 dB) values of the input SNR.
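For reference, the SURE baseline against which the MPE estimator is compared can be made concrete for Gaussian noise: for a pointwise shrinkage x̂_i = a_i y_i, with the a_i treated as free per-coefficient constants, SURE = Σ(1 − a_i)² y_i² + 2σ² Σ a_i − Nσ², which is minimized at a_i = max(0, 1 − σ²/y_i²). A minimal sketch, assuming this setup:

```python
import numpy as np

def sure_shrink(y, sigma):
    """Pointwise linear shrinkage minimizing SURE under Gaussian noise."""
    a = np.clip(1.0 - sigma ** 2 / np.maximum(y ** 2, 1e-12), 0.0, 1.0)
    return a * y

def sure_value(y, a, sigma):
    """SURE for x_hat = a * y: an unbiased estimate of the MSE."""
    return (np.sum((1 - a) ** 2 * y ** 2)
            + 2 * sigma ** 2 * np.sum(a) - y.size * sigma ** 2)
```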