Abstract:
The availability of a reliable bound on an integral involving the square of the modulus of a form factor on the unitarity cut allows one to constrain the form factor at points inside the analyticity domain and its shape parameters, and also to isolate domains on the real axis and in the complex energy plane where zeros are excluded. In this lecture note, we review the mathematical techniques of this formalism in its standard form, known as the method of unitarity bounds, and recent developments which allow us to include information on the phase and modulus along a part of the unitarity cut. We also provide a brief summary of some results that we have obtained in the recent past, which demonstrate the usefulness of the method for precision predictions on the form factors.
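Schematically, the input to the standard method is a weighted L^2 bound on the unitarity cut together with the conformal map that takes the cut plane to the unit disk; the weight rho(t), threshold t_+, and bound I are process-dependent, and the expressions below are the generic textbook form rather than formulas quoted from the note:

```latex
\frac{1}{\pi}\int_{t_+}^{\infty} \rho(t)\,\lvert F(t)\rvert^{2}\,dt \;\le\; I,
\qquad
z(t) \;=\; \frac{\sqrt{t_+ - t} - \sqrt{t_+}}{\sqrt{t_+ - t} + \sqrt{t_+}} .
```

Under the map z(t) the cut t > t_+ goes to the boundary of the unit disk, so the inequality becomes a norm constraint on an analytic function on the disk, and values of F at interior points, its shape parameters, and zero-free regions follow from standard analytic interpolation theory.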
Abstract:
The cis/trans isomer ratios of the Xaa-Pyr (Pyr = pyrrolidine) tertiary (3°) amide bonds are significantly high (~90% cis) in the novel peptidomimetics where Pyr contains 1,3-oxazine (Oxa) or 1,3-thiazine (Thi) at its 2-position. We find that an unusual n → π*(i-1) interaction selectively stabilizes the cis conformer and the n X n repulsion destabilizes the trans conformer of these molecules. Both these electronic effects oppose the steric effects in the tertiary amide bond. The structural requirements for the manifestation of these electronic effects are determined. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Background: There has been growing interest in integrative taxonomy, which uses data from multiple disciplines for species delimitation. Typically, in such studies, monophyly is taken as a proxy for taxonomic distinctiveness and monophyletic units are treated as potential species. However, monophyly can also arise through stochastic processes. We therefore employed a recently developed tool based on the coalescent approach to ascertain the taxonomic distinctiveness of various monophyletic units. Subsequently, the species status of these taxonomic units was further tested using corroborative evidence from morphology and ecology. This interdisciplinary approach was applied to endemic centipedes of the genus Digitipes (Attems, 1930) from the Western Ghats (WG) biodiversity hotspot of India. The species of the genus Digitipes are morphologically conserved, despite their ancient late Cretaceous origin. Principal Findings: Our coalescent analysis based on the mitochondrial dataset indicated the presence of nine putative species. The integrative approach, which includes nuclear, morphological, and climate datasets, supported the distinctiveness of eight putative species, of which three represent described species and five are new species. Among the five new species, three were morphologically cryptic, emphasizing the effectiveness of this approach in discovering cryptic diversity in less explored areas of the tropics like the WG. In addition, species pairs showed variable divergence along the molecular, morphological, and climate axes. Conclusions: The multidisciplinary approach illustrated here succeeds in discovering cryptic diversity, and indicates that current estimates of invertebrate species richness for the WG may be underestimates. Additionally, the variable divergence of each species pair across disciplines highlights the importance of measuring multiple secondary properties of species when defining species boundaries.
Abstract:
We propose a Riesz transform approach to the demodulation of digital holograms. The Riesz transform is a higher-dimensional extension of the Hilbert transform and is steerable to a desired orientation. Accurate demodulation of the hologram requires a reliable methodology by which quadrature-phase functions (or simply, quadratures) can be constructed. The Riesz transform, by itself, does not yield quadratures. However, one can start with the Riesz transform and construct the so-called vortex operator by employing the notion of quasi-eigenfunctions, and this approach results in accurate quadratures. The key advantage of using the vortex operator is that it effectively handles nonplanar fringes (interference patterns) and has the ability to compensate for the local orientation. Therefore, this method results in aberration-free holographic imaging even in the case when the wavefronts are not planar. We calibrate the method by estimating the orientation from a reference hologram, measured with an empty field of view. Demodulation results on synthesized planar as well as nonplanar fringe patterns show that the accuracy of demodulation is high. We also perform validation on real experimental measurements of Caenorhabditis elegans acquired with a digital holographic microscope. (c) 2012 Optical Society of America
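The pipeline above can be sketched in a few lines: the Riesz components are computed in the FFT domain with frequency response -i*omega/|omega|, and a quadrature is obtained by steering them with the fringe orientation. This is a minimal illustration, not the authors' implementation; the single global orientation `theta` stands in for the locally estimated orientation obtained from the reference hologram, and all function names are ours.

```python
import numpy as np

def riesz_transform(img):
    """First-order Riesz transform of a 2-D image, computed in the FFT
    domain with frequency response -i * omega / |omega|; returns the
    horizontal and vertical components."""
    h, w = img.shape
    wy = np.fft.fftfreq(h).reshape(-1, 1)
    wx = np.fft.fftfreq(w).reshape(1, -1)
    mag = np.sqrt(wx**2 + wy**2)
    mag[0, 0] = 1.0  # avoid division by zero at DC (F[0,0] is untouched anyway)
    F = np.fft.fft2(img)
    rx = np.real(np.fft.ifft2(-1j * wx / mag * F))
    ry = np.real(np.fft.ifft2(-1j * wy / mag * F))
    return rx, ry

def vortex_quadrature(img, theta):
    """Steer the two Riesz components with the fringe orientation theta,
    in the spirit of the vortex (spiral-phase) operator, to obtain a
    quadrature signal of the fringe pattern."""
    rx, ry = riesz_transform(img)
    return np.cos(theta) * rx + np.sin(theta) * ry
```

For a fringe oriented along x (theta = 0) the steered transform reduces to the 1-D Hilbert transform, so a cosine fringe yields a sine quadrature, which is the sanity check one would run first on synthesized planar fringes.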
Abstract:
A novel method is proposed for fracture toughness determination of graded microstructurally complex (Pt,Ni)Al bond coats using edge-notched doubly clamped beams subjected to bending. Micron-scale beams are machined using the focused ion beam and loaded in bending under a nanoindenter. Failure loads gathered from the pop-ins in the load-displacement curves, combined with XFEM analysis, are used to calculate K_c at individual zones, free from substrate effects. The testing technique and sources of errors in measurement are described, and possible micromechanisms of fracture in such heterogeneous coatings are discussed.
Abstract:
A novel approach that can more effectively use the structural information provided by traditional imaging modalities in multimodal diffuse optical tomographic imaging is introduced. This approach is based on a prior-image-constrained l1-minimization scheme and has been motivated by recent progress in sparse image reconstruction techniques. It is shown that the proposed framework is more effective in terms of localizing the tumor region and recovering the optical property values, in both numerical and gelatin phantom cases, compared to traditional methods that use structural information. (C) 2012 Optical Society of America
Abstract:
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.JBO.17.10.106015]
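For concreteness, here is a minimal sketch of a Tikhonov-regularized solve for a linearized problem with Jacobian J, together with the classical generalized cross-validation score that such parameter-selection methods are compared against. This is the generic textbook criterion, not the authors' MRM implementation, and all names are ours.

```python
import numpy as np

def tikhonov_solve(J, y, lam):
    """Tikhonov-regularized least squares: x = (J^T J + lam*I)^{-1} J^T y."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ y)

def gcv_score(J, y, lam):
    """Classical GCV score:
    GCV(lam) = ||y - A(lam) y||^2 / (m - trace A(lam))^2,
    where A(lam) = J (J^T J + lam*I)^{-1} J^T is the influence matrix.
    The optimal lam is the minimizer of this score over a grid."""
    n = J.shape[1]
    A = J @ np.linalg.solve(J.T @ J + lam * np.eye(n), J.T)
    r = y - A @ y
    return float(r @ r) / (len(y) - np.trace(A)) ** 2
```

In practice one would evaluate `gcv_score` (or the MRM analogue) over a logarithmic grid of `lam` values and take the minimizer.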
Abstract:
Traditional image reconstruction methods in rapid dynamic diffuse optical tomography employ l2-norm-based regularization, which is known to remove the high-frequency components in the reconstructed images and make them appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed; nonlinear iterative techniques are known to perform better than linear (noniterative) ones, with the caveat that nonlinear techniques are computationally complex. Assuming a linear dependence of the solution between successive frames results in a linear inverse problem. This new framework, combined with l1-norm-based regularization, provides better robustness to noise and better contrast recovery than conventional l2-based techniques. Moreover, the proposed l1-based technique is shown to be computationally efficient compared to its l2-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame; any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
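A generic way to realize such l1-norm-based reconstruction on a linearized problem is iterative soft-thresholding (ISTA), sketched below. This is an illustrative stand-in for l1 regularization in general, not the authors' frame-to-frame dynamic reconstruction scheme, and the function and parameter names are ours.

```python
import numpy as np

def ista_l1(J, y, lam, step=None, n_iter=500, x0=None):
    """Iterative soft-thresholding for min_x 0.5*||Jx - y||^2 + lam*||x||_1.
    Each iteration takes a gradient step on the data-fit term and then
    applies the soft-threshold (proximal) operator of the l1 penalty."""
    m, n = J.shape
    if step is None:
        step = 1.0 / np.linalg.norm(J, 2) ** 2  # 1 / Lipschitz constant of grad
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(n_iter):
        g = J.T @ (J @ x - y)                   # gradient of 0.5*||Jx - y||^2
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x
```

The `x0` argument mirrors the framework's reliance on a close estimate for the initial frame: a warm start from the previous frame's solution makes the per-frame iterations cheap, while a poor `x0` degrades the subsequent reconstructions.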
Abstract:
We develop a quadratic C^0 interior penalty method for linear fourth order boundary value problems with essential and natural boundary conditions of the Cahn-Hilliard type. Both a priori and a posteriori error estimates are derived. The performance of the method is illustrated by numerical experiments.
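For orientation, a C^0 interior penalty method for the model biharmonic operator uses continuous piecewise polynomials with a mesh-dependent bilinear form of the standard shape below, where {.} denotes the average and [.] the jump of a quantity across an edge e of the triangulation T_h, and sigma is a penalty parameter. This is the schematic textbook form; the Cahn-Hilliard-type boundary conditions of the paper modify which boundary-edge terms appear.

```latex
a_h(w,v) \;=\; \sum_{T\in\mathcal{T}_h}\int_T D^2 w : D^2 v \,dx
 \;+\; \sum_{e\in\mathcal{E}_h}\int_e
   \left\{\frac{\partial^2 w}{\partial n^2}\right\}
   \left[\frac{\partial v}{\partial n}\right] ds
 \;+\; \sum_{e\in\mathcal{E}_h}\int_e
   \left\{\frac{\partial^2 v}{\partial n^2}\right\}
   \left[\frac{\partial w}{\partial n}\right] ds
 \;+\; \sigma \sum_{e\in\mathcal{E}_h}\frac{1}{|e|}\int_e
   \left[\frac{\partial w}{\partial n}\right]
   \left[\frac{\partial v}{\partial n}\right] ds
```

Because the trial functions are only C^0, the normal derivative jumps across element edges; the symmetrizing and penalty terms restore consistency and stability for sigma large enough.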
Abstract:
The Turkevich-Frens synthesis starting conditions are expanded, with gold salt concentrations ranging up to 2 mM and citrate/gold(III) molar ratios up to 18:1. For each concentration of the initial gold salt solution, the citrate/gold(III) molar ratio is systematically varied from 2:1 to 18:1, and both the size and size distribution of the resulting gold nanoparticles are compared. This study reveals a different nanoparticle size evolution for gold salt solutions below 0.8 mM compared to gold salt solutions above 0.8 mM. For [Au3+] < 0.8 mM, both the size and size distribution vary substantially with the citrate/gold(III) ratio, both displaying plateaux that evolve inversely with [Au3+] at larger ratios. Conversely, for [Au3+] >= 0.8 mM, the size and size distribution of the synthesized gold nanoparticles rise continuously as the citrate/gold(III) ratio is increased. A starting gold salt concentration of 0.6 mM leads to the formation of the most monodisperse gold nanoparticles (polydispersity index < 0.1) over a wide range of citrate/gold(III) molar ratios (from 4:1 to 18:1). Via a model for the formation of gold nanoparticles by the citrate method, the experimental trends in size could be qualitatively predicted: the simulations showed that the destabilizing effect of increased electrolyte concentration at high initial [Au3+] is compensated by a slight increase in the zeta potential of the gold nanoparticles, producing concentrated dispersions of small gold nanoparticles.
Abstract:
Error analysis for a stable C^0 interior penalty method is derived for general fourth order problems on polygonal domains under minimal regularity assumptions on the exact solution. We prove that this method exhibits quasi-optimal order of convergence in the discrete H^2, H^1, and L^2 norms. L^∞ norm error estimates are also discussed. Theoretical results are demonstrated by numerical experiments.
Abstract:
The solvated metal atom dispersion (SMAD) method has been used for the synthesis of colloids of metal nanoparticles. It is a top-down approach involving condensation of metal atoms in low-temperature solvent matrices in a SMAD reactor maintained at 77 K. Warming of the matrix results in a slurry of metal atoms that interact with one another to form particles that grow in size. The organic solvent solvates the particles and acts as a weak capping agent to halt or slow the growth process to a certain extent. The as-prepared colloid consists of metal nanoparticles that are quite polydisperse. In a process termed digestive ripening, addition of a capping agent to the polydisperse as-prepared colloid renders it highly monodisperse under either ambient or thermal conditions. In this as-yet poorly understood process, smaller particles grow and larger ones diminish in size until the system attains uniformity in size and a dynamic equilibrium is established. Using the SMAD method in combination with the digestive ripening process, highly monodisperse metal, core-shell, alloy, and composite nanoparticles have been synthesized. This article reviews our contributions, together with some literature reports, on this methodology for realizing various nanostructured materials.
Abstract:
Groundwater management problems are typically solved by the simulation-optimization approach, where complex numerical models are used to simulate groundwater flow and/or contaminant transport. These numerical models take a long time to solve management problems and hence become computationally expensive. In this study, Artificial Neural Network (ANN) and Particle Swarm Optimization (PSO) models were developed and coupled for the management of groundwater in the Dore river basin in France. An Analytic Element Method (AEM) based flow model was developed and used to generate the dataset for training and testing the ANN model. The developed ANN-PSO model was applied to minimize the pumping cost of the wells, including the cost of the pipeline. The discharge and location of the pumping wells were taken as the decision variables, and the ANN-PSO model was applied to find the optimal locations of the wells. The results of the ANN-PSO model are similar to those obtained by the AEM-PSO model. The results show that the ANN model can reduce the computational burden significantly, as it is able to analyze different scenarios, and the ANN-PSO model is capable of identifying the optimal locations of wells efficiently.
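As a rough illustration of the optimization half of such a coupled model, here is a minimal particle swarm sketch over a generic cost function. In the study the cost would come from the trained ANN surrogate (pumping plus pipeline cost, as functions of well discharge and location); here the cost `f`, the bounds, and all parameter values are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization of f over box bounds (lo, hi).
    Each particle keeps a personal best; the swarm keeps a global best;
    velocities mix inertia, cognitive, and social terms."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)              # keep particles in the box
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())
```

The surrogate-assisted speedup in the paper comes from `f` being a cheap ANN evaluation rather than a full AEM flow simulation at each of the many swarm evaluations.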
Abstract:
A combination of chemical and thermal annealing techniques has been employed to synthesize a rarely reported nanocup structure of Mn-doped ZnO in good yield. Nanocup structures are obtained by isochronally annealing, in a furnace in the 200-500 degrees C range for 2 h, powder samples consisting of nanosheets synthesized chemically at room temperature. Strong excitonic absorption in the UV and photoluminescence (PL) emission in the UV-visible region are observed in all the samples at room temperature. The sample annealed at 300 degrees C exhibits strong PL emission in the UV due to near-band-edge emission, along with very weak defect-related emissions in the visible region. The synthesized samples have been found to exhibit stable optical properties for 10 months, demonstrating a unique feature of the presented technique for the synthesis of nanocup structures. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
In many real-world prediction problems the output is a structured object such as a sequence, a tree, or a graph. Such problems range from natural language processing to computational biology and computer vision, and have been tackled using algorithms referred to as structured output learning algorithms. We consider the problem of structured classification. In the last few years, large margin classifiers like support vector machines (SVMs) have shown much promise for structured output learning. The related optimization problem is a convex quadratic program (QP) with a large number of constraints, which makes the problem intractable for large data sets. This paper proposes a fast sequential dual method (SDM) for structural SVMs. The method makes repeated passes over the training set and optimizes the dual variables associated with one example at a time. The use of additional heuristics makes the proposed method more efficient. We present an extensive empirical evaluation of the proposed method on several sequence learning problems. Our experiments on large data sets demonstrate that the proposed method is an order of magnitude faster than state-of-the-art methods like the cutting-plane method and the stochastic gradient descent (SGD) method. Further, SDM reaches steady-state generalization performance faster than the SGD method. The proposed SDM is thus a useful alternative for large-scale structured output learning.
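The one-example-at-a-time dual update is easiest to see on the plain binary linear SVM, where sequential (dual coordinate) optimization has a closed-form step. The sketch below conveys the flavor of such a method but is deliberately simplified: it is not the structural-SVM algorithm of the paper, which must additionally cope with the exponentially many constraints attached to each structured example.

```python
import numpy as np

def dcd_svm(X, y, C=1.0, n_epochs=50, seed=0):
    """Dual coordinate descent for the binary linear SVM (hinge loss):
    repeatedly sweep the training set, and for one example at a time
    update its dual variable alpha_i in closed form, clipped to [0, C],
    while maintaining the primal weight vector w = sum_i alpha_i y_i x_i."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    q = (X ** 2).sum(axis=1)                  # per-example curvature x_i . x_i
    for _ in range(n_epochs):
        for i in rng.permutation(n):          # one randomized pass over the data
            g = y[i] * (w @ X[i]) - 1.0       # gradient of the dual w.r.t. alpha_i
            new = min(max(alpha[i] - g / q[i], 0.0), C)
            w += (new - alpha[i]) * y[i] * X[i]
            alpha[i] = new
    return w
```

Because each update touches a single example and maintains `w` incrementally, the per-pass cost is linear in the data size, which is the property that makes sequential dual methods attractive at scale.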