197 results for spatial error
Abstract:
Three-dimensional (3D) resolution improvement in multi-photon, multiple-excitation-spot optical microscopy is proposed. A specially designed spatial filter is employed to improve the overall 3D resolution of the imaging system. An improvement of up to a factor of 14.5 and sub-femtoliter excitation volumes are achieved. The system shows substantial sidelobe reduction (<4%) due to the non-linear intensity dependence of the multiphoton process. The polarization effect on x-oriented and freely rotating dipoles shows a dramatic change in the field distribution at the focal plane. The resulting point-spread function can produce several strongly localized, polarization-dependent field patterns, which may find applications in optical engineering and bioimaging.
Abstract:
Diffuse optical tomography (DOT) using near-infrared (NIR) light is a promising tool for noninvasive imaging of deep tissue. The technique is capable of quantitative reconstruction of absorption-coefficient inhomogeneities in tissue. The motivation for reconstructing the optical property variation, and in particular the absorption coefficient variation, is that it can be used to diagnose different metabolic and disease states of tissue. In DOT, as in any other medical imaging modality, the aim is to produce a reconstruction with good spatial resolution and accuracy from noisy measurements. We study the performance of a phased array system for detecting optical inhomogeneities in tissue. Light transport through tissue is diffusive in nature and can be modeled using the diffusion equation when the optical parameters of the inhomogeneity are close to those of the background. The amplitude cancellation method, which uses dual out-of-phase sources (a phased array), can detect and locate small objects in a turbid medium. The inverse problem is solved using model-based iterative image reconstruction. The diffusion equation is solved using the finite element method to provide the forward model for photon transport. The solution of the forward problem is used to compute the Jacobian, and the resulting system of equations is solved using a conjugate gradient search. Simulation studies show that a phased array system can resolve inhomogeneities as small as 5 mm when the absorption coefficient of the inhomogeneity is twice that of the background tissue. To validate this result, a prototype dual-source system has been developed. Experiments are carried out by inserting an inhomogeneity of high optical absorption coefficient into an otherwise homogeneous phantom while keeping the scattering coefficient the same. High-frequency (100 MHz) modulated, dual out-of-phase laser source light is propagated through the phantom. For a homogeneous object, the interference of these sources creates an amplitude null and a 180° phase shift along the plane midway between the two sources. A solid resin phantom with inhomogeneities simulating a tumor is used in our experiment. The amplitude and phase are disturbed by the presence of the inhomogeneity in the object. The experimental data (amplitude and phase measured at the detector) are used for reconstruction. The results show that the method is able to detect multiple inhomogeneities with sizes of 4 mm. The localization error for a 5 mm inhomogeneity is approximately 1 mm.
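As a rough, hypothetical sketch of the reconstruction step this abstract describes (the function names, shapes, and the damping term lam are assumptions, not the authors' code), one linearized update can be formed from the FEM-derived Jacobian and solved with a conjugate gradient search:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=200):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def update_absorption(J, misfit, lam=1e-3):
    """One Gauss-Newton-style step: J is the Jacobian from the FEM forward
    model, misfit the data-model residual; lam is a hypothetical damping
    term added so the normal equations stay well-conditioned."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    b = J.T @ misfit
    return conjugate_gradient(A, b)
```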
Abstract:
Evaluating the probability of error in decision feedback equalizers is difficult due to the presence of a hard limiter in the feedback path. This paper derives upper and lower bounds on the probability of a single error and of multiple error patterns. The bounds are fairly tight and can also be used to select proper tap gains for the equalizer.
Abstract:
Upper bounds on the probability of error due to co-channel interference are proposed in this correspondence. The bounds are easy to compute and can be fairly tight.
Abstract:
Land cover (LC) and land use (LU) dynamics induced by human and natural processes play a major role in global as well as regional landscape patterns, influencing biodiversity, hydrology, ecology and climate. Changes in LC features resulting in forest fragmentation have posed direct threats to biodiversity, endangering the sustainability of ecological goods and services. Habitat fragmentation is of added concern because the residual spatial patterns mitigate or exacerbate edge effects. LU dynamics are obtained by classifying temporal remotely sensed satellite imagery of different spatial and spectral resolutions. This paper reviews five image classification algorithms using spatio-temporal data of a temperate watershed in Himachal Pradesh, India. Based on accuracy assessment through error matrices and ROC (receiver operating characteristic) curves, the Gaussian maximum likelihood classifier was found to be apt for analysing spatial patterns at the regional scale. The LU information thus derived was then used to assess spatial changes from temporal data using principal component analysis and correspondence-analysis-based image differencing. Forest area dynamics were further studied by analysing the different types of fragmentation through forest fragmentation models. The computed forest fragmentation and landscape metrics show a decline of interior intact forests with a substantial increase in patch forest during 1972-2007.
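As an illustrative sketch (not the paper's implementation) of how a Gaussian maximum likelihood classifier assigns pixels, assuming training pixels X of per-band values with labels y:

```python
import numpy as np

def gml_train(X, y):
    """Estimate a per-class mean and covariance from labelled pixels.
    X has shape (n_pixels, n_bands); y holds the class labels."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def gml_classify(X, stats):
    """Assign each pixel to the class with the highest Gaussian
    log-likelihood (equal priors assumed)."""
    classes = sorted(stats)
    scores = []
    for c in classes:
        mu, cov = stats[c]
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = X - mu
        maha = np.einsum('ij,jk,ik->i', d, inv, d)  # Mahalanobis distances
        scores.append(-0.5 * (logdet + maha))
    return np.array(classes)[np.argmax(scores, axis=0)]
```

The predicted map can then be compared against ground truth to build the error matrix used for the accuracy assessment mentioned above.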
Abstract:
One of the long-standing problems in quantum chemistry has been the inability to exploit the full spatial and spin symmetry of an electronic Hamiltonian belonging to a non-Abelian point group. Here, we present a general technique that can utilize all the symmetries of an electronic (magnetic) Hamiltonian to obtain its full eigenvalue spectrum. This is a hybrid method based on the valence bond basis and the basis of constant z-component of the total spin. The technique is applicable to systems with any point group symmetry and is easy to implement on a computer. We illustrate the power of the method by applying it to a model icosahedral half-filled electronic system. This model spans a huge Hilbert space (dimension 1,778,966) and belongs to the largest non-Abelian point group. The C60 molecule has this symmetry, and hence our calculations throw light on the higher-energy excited states of the buckyball. The method can also be utilized to study finite-temperature properties of strongly correlated systems within an exact diagonalization approach.
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain the outputs. Absolutely error-free quantities, like the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While computations in nature/natural processes, including their real input quantities, are exact, all computations that we perform on a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error - as a matter of hypothesis, not assumption - is not less than 0.005 per cent. By error we mean relative error bounds. The fact that the exact error is never known, under any circumstances or in any context, implies that the term error denotes nothing but error bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error bounds (not the absolute error) that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems arising from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency through human error, through the inherent non-removable error associated with any measuring device, or through assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency, or possibly near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computations (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
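As a short worked statement of the central notion (the notation here is assumed, not taken from the talk): the relative error bound of a computed value against the true value is

```latex
\[
  \frac{|\hat{x} - x|}{|x|} \;\le\; \varepsilon ,
\]
% where x is the true quantity and \hat{x} the computed one; the stated
% instrument floor of 0.005 per cent corresponds to
% \varepsilon = 5 \times 10^{-5}.
```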
Abstract:
This paper proposes a current-error space-vector-based hysteresis controller with online computation of the boundary for two-level inverter-fed induction motor (IM) drives. The proposed hysteresis controller retains all the advantages of conventional current-error space-vector-based hysteresis controllers, such as quick transient response, simplicity, and adjacent voltage vector switching. The major advantage of a voltage-source-inverter-fed drive based on the proposed controller is that the phase voltage frequency spectrum produced is exactly similar to that of a constant-switching-frequency space-vector pulsewidth-modulated (SVPWM) inverter. In the proposed hysteresis controller, the stator voltages along the alpha- and beta-axes are estimated during zero and active voltage vector periods using the current errors along the alpha- and beta-axes and a steady-state model of the IM. The hysteresis boundary is then computed online from the estimated stator voltages. The proposed scheme is simple and capable of taking the inverter up to six-step-mode operation if demanded by the drive system. The proposed hysteresis-controller-based inverter-fed drive scheme is experimentally verified, and its steady-state and transient performance is extensively tested. The experimental results give a constant frequency spectrum for the phase voltage, similar to that of a constant-switching-frequency SVPWM inverter-fed drive.
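A minimal sketch of the controller's core decision, assuming a Clarke transform and fixed placeholder boundary values where the paper computes the boundary online from estimated stator voltages (all names here are hypothetical):

```python
import numpy as np

def clarke(i_a, i_b, i_c):
    """Clarke transform: three-phase currents to the alpha-beta plane."""
    i_alpha = (2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
    i_beta = (i_b - i_c) / np.sqrt(3.0)
    return np.array([i_alpha, i_beta])

def boundary_crossed(i_ref_abc, i_meas_abc, delta_alpha, delta_beta):
    """True when the current-error space vector leaves the hysteresis
    boundary along either axis, which triggers switching to the next
    adjacent voltage vector."""
    err = clarke(*i_ref_abc) - clarke(*i_meas_abc)
    return abs(err[0]) > delta_alpha or abs(err[1]) > delta_beta
```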
Abstract:
We present a heterogeneous finite element method for the solution of a high-dimensional population balance equation, which depends on both the physical and the internal property coordinates. The proposed scheme tackles the two main difficulties in the finite element solution of population balance equations: (i) spatial discretization with standard finite elements when the dimension of the equation is more than three, and (ii) spurious oscillations in the solution induced by the standard Galerkin approximation due to pure advection in the internal property coordinates. The key idea is to split the high-dimensional population balance equation into two low-dimensional equations and to discretize the low-dimensional equations separately. In the proposed splitting scheme, the shape of the physical domain can be arbitrary, and different discretizations can be applied to the low-dimensional equations. In particular, we discretize the physical and internal spaces with standard Galerkin and Streamline Upwind Petrov-Galerkin (SUPG) finite elements, respectively. Stability and error estimates for the Galerkin/SUPG finite element discretization of the population balance equation are derived. It is shown that slightly more regularity, i.e., boundedness of the mixed partial derivatives of the solution, is necessary for the optimal order of convergence. Numerical results are presented to support the analysis.
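Schematically, and in notation assumed here rather than taken from the paper, the splitting separates the operators acting on the physical coordinates x and the internal property coordinate \ell:

```latex
\[
  \partial_t f + \mathcal{A}_x f + \mathcal{A}_\ell f = 0 ,
\]
% one time step solves the two low-dimensional subproblems in sequence:
\[
  \partial_t f^{*} + \mathcal{A}_x f^{*} = 0 \quad \text{(standard Galerkin)},
  \qquad
  \partial_t f^{**} + \mathcal{A}_\ell f^{**} = 0 \quad \text{(SUPG)} .
\]
```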
Abstract:
Diffuse optical tomography (DOT) is one way to probe highly scattering media such as tissue using low-energy near-infrared (NIR) light to reconstruct a map of the optical property distribution. The interaction of photons with biological tissue is a non-linear process, and photon transport through the tissue is modelled using diffusion theory. The inversion problem is often solved through iterative methods based on nonlinear optimization, minimizing a data-model misfit function. The solution of the non-linear problem can be improved by modeling and optimizing the cost functional. The cost functional is f(x) = x^T A x - b^T x + c, and its minimization reduces the problem to the linear system Ax = b. The spatial distribution of the optical parameter is then obtained by solving this equation iteratively for x. As the problem is non-linear, ill-posed and ill-conditioned, there is an error or correction term for x at each iteration. A linearization strategy is proposed for the solution of the nonlinear ill-posed inverse problem through a linear combination of the system matrix and the error in the solution. By propagating the error information e (obtained from the previous iteration) into the minimization function f(x), we can rewrite the minimization function as f(x, e) = (x + e)^T A (x + e) - b^T (x + e) + c. The revised cost functional is f(x, e) = f(x) + e^T A e. The self-guided, spatially weighted prior e^T A e (where e is the error in estimating x) along the principal nodes facilitates a well-resolved dominant solution over the region of interest. The local minimization reduces the spreading of the inclusion and removes the side lobes, thereby improving the contrast, localization and resolution of the reconstructed image, which has not been possible with conventional linear and regularization algorithms.
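One consistency note, shown as a short derivation: for the minimization to land exactly on Ax = b, the quadratic is conventionally written with a factor of one half; under that convention (an assumption made here so the arithmetic closes),

```latex
\[
  f(x) = \tfrac{1}{2}\, x^{T} A x - b^{T} x + c ,
  \qquad
  \nabla f(x) = A x - b = 0 \;\Longrightarrow\; A x = b ,
\]
% and propagating the error e from the previous iterate gives
\[
  f(x + e) = f(x) + e^{T} (A x - b) + \tfrac{1}{2}\, e^{T} A e ,
\]
% which reduces to f(x) plus a quadratic penalty in e once Ax = b is
% (approximately) satisfied, matching the revised cost functional above.
```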
Abstract:
Urbanisation is a dynamic, complex phenomenon involving large-scale changes in land uses at local levels. Analyses of changes in land uses in urban environments provide a historical perspective of land use and an opportunity to assess the spatial patterns, correlations, trends, rates and impacts of the change, which helps in better regional planning and good governance of the region. The main objective of this research is to quantify urban dynamics using temporal remote sensing data with the help of well-established landscape metrics. Bangalore, one of the most rapidly urbanising landscapes in India, has been chosen for this investigation. The complex process of urban sprawl was modelled using spatio-temporal analysis. Land use analyses show 584% growth in built-up area during the last four decades, with a decline of vegetation by 66% and of water bodies by 74%. Analyses of the temporal data reveal increases in urban built-up area of 342.83% (1973-1992), 129.56% (1992-1999), 106.7% (1999-2002), 114.51% (2002-2006) and 126.19% (2006-2010). The study area was divided into four zones, and each zone was further divided into 17 concentric circles of incrementally increasing 1 km radius, to understand the patterns and extent of urbanisation at local levels. The urban density gradient illustrates a radial pattern of urbanisation for the period 1973-2010: Bangalore grew radially from 1973 to 2010, indicating that urbanisation is intensifying from the central core and has reached the periphery of Greater Bangalore. Shannon's entropy and alpha and beta population densities were computed to understand the level of urbanisation at local levels. Recent Shannon's entropy values confirm dispersed, haphazard urban growth in the city, particularly in its outskirts, and also illustrate the extent of influence of the drivers of urbanisation in various directions. Landscape metrics provided in-depth knowledge about the sprawl, and principal component analysis helped in prioritizing the metrics for detailed analyses. The results clearly indicate that the whole landscape is aggregating into a single large patch in 2010, compared with earlier years dominated by several small patches; the large-scale conversion of small patches into a single large patch is seen from 2006 to 2010. In 2010 the patches are maximally aggregated, indicating that the city has become more compact and more urbanised in recent years. Bangalore has been a much sought-after destination for its climatic conditions and the availability of various facilities (land availability, economy, political factors) compared to other cities. The growth into a single urban patch can be attributed to rapid urbanisation coupled with industrialisation. Monitoring growth through landscape metrics helps to maintain and manage natural resources.
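A toy sketch of the entropy computation over the concentric rings described above (illustrative numbers only; the study's exact normalization is not reproduced here):

```python
import numpy as np

def shannon_entropy(builtup_share):
    """Shannon's entropy over n zones: builtup_share[i] is the share of
    built-up area in zone i. Values near log(n) indicate dispersed
    (sprawling) growth; values near 0 indicate compact growth."""
    p = np.asarray(builtup_share, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# 17 concentric 1 km rings, as in the study design (toy numbers):
rings = np.random.default_rng(0).random(17)
print(shannon_entropy(rings), "vs. maximum", np.log(17))
```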
Abstract:
Monitoring and visualizing specimens at large penetration depths is a challenge. At depths of hundreds of microns, several physical effects (such as scattering, PSF distortion and noise) deteriorate the image quality and prohibit detailed study of key biological phenomena. In this study, we use a Bessel-like beam in conjunction with an orthogonal detection system to achieve depth imaging. A Bessel-like, penetrating, diffraction-free beam is generated by engineering the back aperture of the excitation objective. The proposed excitation scheme allows continuous scanning by simply translating the detection PSF. This type of imaging system is beneficial for obtaining depth information from any desired specimen layer, including nano-particle tracking in thick tissue. As demonstrated by imaging fluorescent-polymer-tagged CaCO3 particles and yeast cells in a tissue-like gel matrix, the system offers a penetration depth of up to 650 μm. This achievement will advance the field of fluorescence imaging and deep nano-particle tracking.
Abstract:
Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to atmospheric particle strikes poses a big threat to attaining the reliability required for various mission-critical applications. Various soft error mitigation methodologies exist to address this reliability challenge. A general solution is to arrive at a soft error mitigation methodology with an acceptable implementation overhead and error tolerance level. This implementation overhead can then be reduced by taking advantage of various derating effects, such as logical derating, electrical derating and timing window derating, and/or by making use of application redundancy, e.g., redundancy in the firmware/software executing on the robust hardware so designed. In this paper, we analyze the impact of various derating factors and show how they can be profitably employed to reduce the hardware overhead needed to implement a given level of soft error robustness. This analysis is performed on a set of benchmark circuits using the delayed capture methodology. Experimental results show up to 23% reduction in hardware overhead when considering individual and combined derating factors.
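A back-of-the-envelope illustration of how combined derating shrinks the error rate that mitigation must cover (the numbers below are invented for illustration, not the paper's measurements):

```python
raw_ser = 1.0    # normalized raw soft error rate
logical = 0.4    # fraction of faults that propagate logically
electrical = 0.7 # fraction surviving electrical attenuation
timing = 0.5     # fraction latched within the timing window

# Treating the factors as independent, the effective rate is their product,
# so hardening only needs to target the residual 14% of raw faults.
effective_ser = raw_ser * logical * electrical * timing
print(f"effective SER = {effective_ser:.2f} of raw")
```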
Abstract:
The use of mutagenic drugs to drive HIV-1 past its error threshold presents a novel intervention strategy, as suggested by quasispecies theory, that may be less susceptible to failure via viral mutation-induced emergence of drug resistance than current strategies. The error threshold of HIV-1, μ_c, however, is not known. Applying quasispecies theory to determine μ_c poses significant challenges: whereas quasispecies theory considers the asexual reproduction of an infinitely large population of haploid individuals, HIV-1 is diploid, undergoes recombination, and is estimated to have a small effective population size in vivo. We performed population-genetics-based stochastic simulations of the within-host evolution of HIV-1 and estimated the structure of the HIV-1 quasispecies and μ_c. We found that with small mutation rates, the quasispecies was dominated by genomes with few mutations. Upon increasing the mutation rate, a sharp error catastrophe occurred, where the quasispecies became delocalized in sequence space. Using parameter values that quantitatively captured data on viral diversification in HIV-1 patients, we estimated μ_c to be 7 × 10^-5 to 1 × 10^-4 substitutions/site/replication, roughly 2-6 fold higher than the natural mutation rate of HIV-1, suggesting that HIV-1 survives close to its error threshold and may be readily susceptible to mutagenic drugs. The latter estimate was weakly dependent on the within-host effective population size of HIV-1. With large population sizes and in the absence of recombination, our simulations converged to quasispecies theory, bridging the gap between quasispecies theory and population-genetics-based approaches to describing HIV-1 evolution. Further, μ_c increased with the recombination rate, rendering HIV-1 less susceptible to error catastrophe and thus elucidating an added benefit of recombination to HIV-1. Our estimate of μ_c may serve as a quantitative guideline for the use of mutagenic drugs against HIV-1.
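A deliberately simplified, haploid, recombination-free caricature of the error-threshold experiment (a Wright-Fisher toy model with assumed parameter names, not the authors' diploid recombining simulation):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_load(mu, L=100, N=1000, s=0.1, generations=500):
    """Track the mutation load k per genome; fitness is (1 - s)**k and
    each replication adds Binomial(L, mu) new mutations (back mutations
    ignored). Past a critical mu the load blows up: the error catastrophe."""
    k = np.zeros(N, dtype=int)
    for _ in range(generations):
        w = (1.0 - s) ** k                             # multiplicative fitness
        parents = rng.choice(N, size=N, p=w / w.sum()) # selection + resampling
        k = k[parents] + rng.binomial(L, mu, size=N)   # new mutations
    return k.mean()

for mu in (1e-5, 1e-4, 1e-3, 1e-2):
    print(f"mu = {mu:.0e}: mean load = {mean_load(mu):.1f}")
```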