182 results for Classical methods
Abstract:
Background: Cotton leaf curl Kokhran Virus-Dabawali (CLCuKV-Dab) is a monopartite begomovirus encoding two proteins, V1 and V2, in the virion sense and four proteins, C1, C2, C3 and C4, in the complementary sense. The C4 protein of monopartite begomoviruses has been implicated in symptom determination and virus movement. The present work aims at the biochemical characterization of this protein. Methods: The C4 protein of CLCuKV-Dab was purified as a GST fusion and tested for the ability to hydrolyze ATP and other phosphate-containing compounds. ATPase activity was assayed using radiolabeled [gamma-32P]-ATP and separating the reaction product by thin-layer chromatography. The hydrolysis of other compounds was monitored by the formation of a blue phosphomolybdate complex, estimated by measuring the absorbance at 655 nm. Results: The purified GST-C4 protein exhibited metal-ion-dependent ATPase and inorganic pyrophosphatase activities. Deletion of a sequence resembling the catalytic motif of phosphotyrosine phosphatases resulted in a 70% reduction in both activities. Mutational analysis suggested that arginine 13 is catalytically important for the ATPase activity and cysteine 8 for the pyrophosphatase activity of GST-C4. Interaction of V2 with GST-C4 increased both enzymatic activities of GST-C4. Conclusions: The residues important for the enzymatic activities of GST-C4 lie in a motif different from the classical Walker motifs and the non-classical ATP-binding motifs reported so far. General significance: The C4 protein of CLCuKV-Dab, a putative natively unfolded protein, exhibits enzymatic activities.
Abstract:
The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have also been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G_f) in order to obtain a size-independent value (G_F). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G_f measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G_F is very nearly the same and independent of the size of the specimen. In this paper, we will provide further evidence in support of this important conclusion using extensive independent test results of three different concrete mixes ranging in compressive strength from 57 to 122 MPa. (c) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Fast curve-fitting procedures are proposed for vertical and radial consolidation under rapid loading methods. In vertical consolidation, the next load increment can be applied at 50-60% consolidation (or even earlier if the compression index is known). In radial consolidation, the next load increment can be applied at just 10-15% consolidation. The effects of secondary consolidation on the coefficient of consolidation and the ultimate settlement are minimized in both cases. A quick procedure is proposed for vertical consolidation that determines how far the calculated coefficient of consolidation, c_v, deviates from its true value. In radial consolidation no such procedure is required, because at 10-15% consolidation the effects of secondary consolidation are already small in most inorganic soils. The proposed rapid loading methods can be used when the settlement or the duration of a load increment is not known. The characteristic features of vertical, radial, three-dimensional, and secondary consolidation are given in terms of the rate of settlement. A relationship is proposed between the coefficient of vertical consolidation, the load increment ratio, and the compression index. (C) 2013 American Society of Civil Engineers.
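As a rough numerical illustration of why stopping each increment at 50% consolidation saves time, the standard Terzaghi one-dimensional solution can be evaluated directly (a generic sketch, not the paper's own procedure; the c_v and drainage-path values below are hypothetical):

```python
import numpy as np

def degree_of_consolidation(T, n_terms=100):
    """Average degree of consolidation U for time factor T (Terzaghi 1-D series)."""
    m = np.arange(n_terms)
    M = np.pi * (2 * m + 1) / 2
    return 1.0 - np.sum((2.0 / M**2) * np.exp(-(M**2) * T))

# Find the time factor at which U reaches 50% (bisection; U(T) is monotone)
lo, hi = 1e-6, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if degree_of_consolidation(mid) < 0.5:
        lo = mid
    else:
        hi = mid
T50 = 0.5 * (lo + hi)

# Real time to 50% consolidation for hypothetical lab-specimen values
cv = 2.0e-7   # coefficient of consolidation, m^2/s (hypothetical)
H = 0.01      # drainage path, m (hypothetical)
t50 = T50 * H**2 / cv
print(T50, t50)
```

The bisection recovers the textbook value T50 = 0.197; since the time factor grows steeply beyond 60% consolidation, terminating each increment near 50% shortens the test considerably.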
Abstract:
In many systems, nucleation of a stable solid may occur in the presence of other (often more than one) metastable phases. These may be polymorphic solids or even liquid phases. Sometimes, the metastable phase might have a free energy minimum lower than the liquid but higher than the stable-solid minimum, and have characteristics in between the parent liquid and the globally stable solid phase. In such cases, nucleation of the solid phase from the melt may be facilitated by the metastable phase, because the latter can "wet" the interface between the parent and the daughter phases, even though there may be no signature of the existence of the metastable phase in the thermodynamic properties of the parent liquid and the stable solid phase. Straightforward application of classical nucleation theory (CNT) is flawed here, as it overestimates the nucleation barrier: surface tension is overestimated (by neglecting the metastable phases of intermediate order) while the thermodynamic free energy gap between daughter and parent phases remains unchanged. In this work, we discuss a density functional theory (DFT)-based statistical mechanical approach to explore and quantify such facilitation. We construct a simple order-parameter-dependent free energy surface that we then use in DFT to calculate (i) the order parameter profile, (ii) the overall nucleation free energy barrier, and (iii) the surface tension between the parent liquid and the metastable solid, and also between the parent liquid and the stable solid phase. The theory indeed finds that the nucleation free energy barrier can decrease significantly in the presence of wetting. This approach can provide a microscopic explanation of the Ostwald step rule and the well-known phenomenon of "disappearing polymorphs" that depends on temperature and other thermodynamic conditions. The theory reveals a diverse scenario for phase transformation kinetics, some of which may be explored via modern nanoscopic synthetic methods.
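For reference, the CNT barrier that the abstract says is overestimated takes the standard form (generic symbols, not the paper's notation):

```latex
\Delta G^{*} \;=\; \frac{16\pi\,\gamma^{3}}{3\,(\Delta G_v)^{2}}
```

Because the barrier scales as the cube of the parent-daughter surface tension gamma while the bulk driving force per unit volume, Delta G_v, is unchanged by wetting, neglecting the reduction of gamma due to an intermediate metastable layer inflates the predicted barrier sharply.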
Abstract:
A review of high operating temperature (HOT) infrared (IR) photon detector technology, covering material requirements, device design and the state of the art achieved, is presented in this article. The HOT photon detector concept offers the promise of operation at temperatures from above 120 K to near room temperature. The advantages are reductions in system size, weight and cost, and an increase in system reliability. A theoretical study of the thermal generation-recombination (g-r) processes, such as Auger and defect-related Shockley-Read-Hall (SRH) recombination, responsible for the increased dark current in HgCdTe detectors is presented. The results of the theoretical analysis are used to evaluate the performance of long wavelength (LW) and mid wavelength (MW) IR detectors at high operating temperatures. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
In this article, we derive an a posteriori error estimator for various discontinuous Galerkin (DG) methods that are proposed in (Wang, Han and Cheng, SIAM J. Numer. Anal., 48: 708-733, 2010) for an elliptic obstacle problem. Using a key property of DG methods, we perform the analysis in a general framework. The error estimator we have obtained for DG methods is comparable with the estimator for the conforming Galerkin (CG) finite element method. In the analysis, we construct a non-linear smoothing function mapping DG finite element space to CG finite element space and use it as a key tool. The error estimator consists of a discrete Lagrange multiplier associated with the obstacle constraint. It is shown for non-over-penalized DG methods that the discrete Lagrange multiplier is uniformly stable on non-uniform meshes. Finally, numerical results demonstrating the performance of the error estimator are presented.
Abstract:
Energy research is to a large extent materials research, encompassing the physics and chemistry of materials: their synthesis, their processing into components and design into architectures that enable their function as energy devices, their operating parameters and environment, and also their degradation, limited life, ultimate failure and potential recycling. At all these stages, X-ray and electron spectroscopy are helpful methods of analysis, characterization and diagnostics for the engineer and for the researcher working in basic science. This paper gives a short overview of experiments with X-ray and electron spectroscopy on solar energy and water splitting materials, and also addresses solar fuel, a relatively new topic in energy research. The featured systems are iron oxide and tungsten oxide as photoanodes, and hydrogenases as molecular systems. We present surface and subsurface studies with ambient-pressure XPS and hard X-ray XPS, resonant photoemission, light-induced effects in resonant photoemission experiments, a photo-electrochemical in situ/operando NEXAFS experiment in a liquid cell, and nuclear resonant vibrational spectroscopy (NRVS). (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Structural dynamics of dendritic spines is one of the key correlative measures of synaptic plasticity for encoding short-term and long-term memory. Optical studies of structural changes in brain tissue using confocal microscopy face difficulties with scattering. This results in a low signal-to-noise ratio and limits the imaging depth to a few tens of microns. Multiphoton microscopy (MPM) overcomes this limitation by using low-energy photons to cause localized excitation and achieve high resolution in all three dimensions. Multiple low-energy photons at longer wavelengths minimize scattering and allow access to deeper brain regions, several hundred microns below the surface. In this article, we provide a basic understanding of the physical phenomena that give MPM an edge over conventional microscopy. Further, we highlight a few key studies in the field of learning and memory that would not have been possible without the advent of MPM.
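The physical advantage summarized here comes from the nonlinearity of two-photon excitation: the excitation rate scales as the square of the instantaneous intensity, so on axis it falls off as the fourth power of the distance from the focal plane (a textbook summary, not specific to this article):

```latex
R_{2p} \;\propto\; \langle I(t)^{2} \rangle \;\sim\; \frac{1}{z^{4}} \quad \text{(on axis, beyond the Rayleigh range)}
```

Fluorescence is therefore generated essentially only in the focal volume, giving intrinsic optical sectioning without a confocal pinhole.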
Abstract:
Sparse recovery methods utilize l_p-norm-based regularization in the estimation problem, with 0 <= p <= 1. These methods are particularly useful when the number of independent measurements is limited, which is the typical case for the diffuse optical tomographic image reconstruction problem. These sparse recovery methods, along with an approximation to the l_0-norm, have been deployed for the reconstruction of diffuse optical images. Their performance was compared systematically using both numerical and gelatin phantom cases, showing that these methods hold promise for improving reconstructed image quality.
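For the p = 1 case, a minimal iterative soft-thresholding (ISTA) sketch shows how l_1 regularization recovers sparse solutions from limited measurements (a generic toy problem, not the diffuse optical forward model):

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, with L the gradient's Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))        # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy demo: recover a 3-sparse signal from 30 underdetermined measurements of a 60-dim vector
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.5, -2.0, 1.0]
x_hat = ista(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))
```

With half as many measurements as unknowns, the l_1 penalty still pins down the three active coefficients, which is the regime the abstract describes.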
Abstract:
We propose to employ bilateral filters to solve the problem of edge detection. The proposed methodology presents an efficient and noise-robust method for detecting edges. Classical bilateral filters smooth images without distorting edges. In this paper, we modify the bilateral filter to perform edge detection, the opposite of bilateral smoothing. The Gaussian domain kernel of the bilateral filter is replaced with an edge detection mask, and the Gaussian range kernel is replaced with an inverted Gaussian kernel. The modified range kernel serves to emphasize dissimilar regions. The resulting approach effectively adapts the detection mask according to the pixel intensity differences. The results of the proposed algorithm are compared with those of standard edge detection masks. Comparisons of the bilateral edge detector with the Canny edge detection algorithm, both after non-maximal suppression, are also provided. The results of our technique are observed to be better and more noise-robust than those offered by methods employing masks alone, and are also comparable to the results from the Canny edge detector, outperforming it in certain cases.
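A minimal sketch of the modified filter described above, assuming a Laplacian-style detection mask as the domain kernel and an inverted Gaussian range kernel (the kernel choices and parameters are illustrative, not the paper's exact ones):

```python
import numpy as np

def bilateral_edge(img, radius=1, sigma_r=30.0):
    """Edge map from a bilateral-style filter: the Gaussian domain kernel is
    replaced by a derivative-style mask, and the range kernel by an inverted
    Gaussian, so dissimilar neighbours are emphasised instead of smoothed."""
    H, W = img.shape
    pad = np.pad(img.astype(float), radius, mode="edge")
    # Domain kernel: a Laplacian-like detection mask (hypothetical choice)
    k = np.ones((2 * radius + 1, 2 * radius + 1))
    k[radius, radius] = -(k.size - 1)
    out = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + H, radius + dx: radius + dx + W]
            diff = shifted - img
            # Inverted Gaussian range kernel: large weight for dissimilar pixels
            w = 1.0 - np.exp(-(diff ** 2) / (2.0 * sigma_r ** 2))
            out += k[radius + dy, radius + dx] * w * shifted
    return np.abs(out)

# Toy image: a vertical step edge at column 4
img = np.zeros((8, 8))
img[:, 4:] = 100.0
e = bilateral_edge(img)
print(e[:, 3].mean() > e[:, 0].mean())  # True: response concentrates at the step
```

In flat regions all intensity differences vanish, so the inverted range kernel suppresses the mask entirely; a strong response survives only where neighbours are dissimilar, i.e. at edges.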
Abstract:
Structural Support Vector Machines (SSVMs) and Conditional Random Fields (CRFs) are popular discriminative methods used for classifying structured and complex objects like parse trees, image segments and part-of-speech tags. The datasets involved are very high-dimensional, and the models designed using typical training algorithms for SSVMs and CRFs are non-sparse. This non-sparsity results in slow inference, so there is a need for new algorithms for sparse SSVM and CRF classifier design. The elastic net and the L1-regularizer have already been explored for solving the primal CRF and SSVM problems, respectively, to design sparse classifiers. In this work, we focus on the dual elastic net regularized SSVM and CRF. By exploiting the weakly coupled structure of these convex programming problems, we propose a new sequential alternating proximal (SAP) algorithm to solve the dual problems. The algorithm works by sequentially visiting each training set example and solving a simple subproblem restricted to the small subset of variables associated with that example. Numerical experiments on various benchmark sequence labeling datasets demonstrate that the proposed algorithm scales well. Further, the classifiers designed are sparser than those designed by solving the respective primal problems, with comparable generalization performance. Thus, the proposed SAP algorithm is a useful alternative for sparse SSVM and CRF classifier design.
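The elastic-net proximal operator that sequential proximal methods apply blockwise has a simple closed form: soft-threshold for the l_1 part, then shrink for the l_2 part (a generic building block; the paper's SAP subproblems over dual variables are not reproduced here):

```python
import numpy as np

def prox_elastic_net(v, lam1=0.5, lam2=0.1):
    """Proximal operator of lam1*||x||_1 + (lam2/2)*||x||_2^2 at point v:
    argmin_x 0.5*||x - v||^2 + lam1*||x||_1 + (lam2/2)*||x||^2
    = soft_threshold(v, lam1) / (1 + lam2)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0) / (1.0 + lam2)

v = np.array([1.2, -0.3, 0.8, -2.0, 0.1])
x = prox_elastic_net(v)
print(x)  # small entries are zeroed (sparsity), large ones are shrunk
```

The l_1 term zeroes coordinates below the threshold, which is why the resulting models are sparser than their primal L2-heavy counterparts.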
Abstract:
The electronic structure of Nd1-xYxMnO3 (x = 0-0.5) is studied using x-ray absorption near-edge structure (XANES) spectroscopy at the Mn K-edge, along with DFT-based LSDA+U and real-space cluster calculations. The main edge of the spectra does not show any variation with doping. The pre-edge shows two distinct features which become well separated with doping. The intensity of the pre-edge decreases with doping. The theoretical XANES spectra were calculated using real-space multiple scattering methods, which reproduce the entire experimental spectra at the main edge as well as the pre-edge. Density functional theory calculations are used to obtain the Mn 4p, Mn 3d and O 2p densities of states. For x = 0, the site-projected density of states at 1.7 eV above the Fermi energy shows a singular peak of unoccupied e_g (spin-up) states hybridized with Mn 4p and O 2p states. For x = 0.5, this feature develops at a higher energy, is highly delocalized and overlaps with the 3d spin-down states, which changes the pre-edge intensity. The Mn 4p DOS for both compositions shows considerable differences between the individual p_x, p_y and p_z states. For x = 0.5, there is a considerable change in the 4p orbital polarization, suggesting changes in the Jahn-Teller effect with doping. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
We analytically evaluate the large deviation function in a simple model of classical particle transfer between two reservoirs. We illustrate how the asymptotic long-time regime is reached starting from a special propagating initial condition. We show that the steady-state fluctuation theorem holds provided that the distribution of the particle number decays faster than an exponential, implying analyticity of the generating function and a discrete spectrum for its evolution operator.
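In generic Gallavotti-Cohen form (symbols are illustrative, not the paper's notation), the steady-state fluctuation theorem for the time-averaged particle current j reads:

```latex
\lim_{t\to\infty}\frac{1}{t}\,\ln\frac{P_t(j)}{P_t(-j)} \;=\; A\,j
\qquad\Longleftrightarrow\qquad
\lambda(s) \;=\; \lambda(A - s)
```

Here A is the affinity driving transfer between the two reservoirs and lambda(s) is the scaled cumulant generating function; the abstract's requirement that the particle-number distribution decay faster than an exponential is what keeps lambda analytic, so the symmetry survives the long-time limit.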
Abstract:
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods misclassified different subsets of calls, and we achieved a maximum accuracy of 95% only when we combined the results of both methods. This study is the first to use Mel-frequency cepstral coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that, in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential.
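A minimal numpy sketch of the MFCC pipeline used for such spectral comparisons: power spectrum, triangular mel filterbank, log, then DCT (parameters are illustrative, and the RASTA-filtered LPC features are not reproduced here):

```python
import numpy as np

def mfcc(signal, sr=22050, n_fft=512, n_mels=20, n_ceps=12):
    """Minimal single-frame MFCC: power spectrum -> mel filterbank -> log -> DCT."""
    spec = np.abs(np.fft.rfft(signal * np.hamming(len(signal)), n_fft)) ** 2
    # Triangular mel filterbank between 0 Hz and the Nyquist frequency
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(0.0, mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    logE = np.log(fb @ spec + 1e-10)
    # DCT-II decorrelates the log filterbank energies into cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_mels)))
    return dct @ logE

# Distance between "calls": identical tones are closer than spectrally different ones
t = np.linspace(0.0, 0.05, 1102, endpoint=False)
a = np.sin(2 * np.pi * 800 * t)
b = np.sin(2 * np.pi * 800 * t)
c = np.sin(2 * np.pi * 3000 * t)
d_same = np.linalg.norm(mfcc(a) - mfcc(b))
d_diff = np.linalg.norm(mfcc(a) - mfcc(c))
print(d_same < d_diff)
```

Matching a mimicked call to a model call then reduces to nearest-neighbour search under such a cepstral distance.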
Abstract:
In this paper, the task of assigning cooperative Uninhabited Aerial Vehicles (UAVs) to perform multiple tasks on multiple targets is posed as a combinatorial optimization problem. The multiple tasks, namely classification, attack and verification of targets by UAVs, are assigned using nature-inspired techniques: Artificial Immune System (AIS), Particle Swarm Optimization (PSO) and the Virtual Bee Algorithm (VBA). The nature-inspired techniques avoid the prohibitive computational complexity that classical combinatorial optimization methods face on this NP-hard problem. Using these algorithms, we find the best sequence in which to attack and destroy the targets while minimizing the total distance traveled, or the maximum distance traveled by any UAV. The performance of the UAVs in classifying, attacking and verifying the targets is evaluated using AIS, PSO and VBA.
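A toy random-key PSO for the sequencing part of such a problem: each particle is a real vector whose argsort decodes to a visiting order of the targets, and fitness is the total path length (an illustrative sketch, not the paper's AIS/PSO/VBA formulations; the coordinates and swarm parameters are hypothetical):

```python
import numpy as np

def pso_sequence(dists, n_particles=30, n_iter=200, seed=0):
    """Random-key PSO: particles live in R^n; argsort(position) gives the
    visiting order of the n targets; fitness is the open-path length."""
    rng = np.random.default_rng(seed)
    n = dists.shape[0]

    def tour_length(keys):
        order = np.argsort(keys)
        return sum(dists[order[i], order[i + 1]] for i in range(n - 1))

    X = rng.uniform(0.0, 1.0, (n_particles, n))   # positions (random keys)
    V = np.zeros_like(X)                          # velocities
    P = X.copy()                                  # personal-best positions
    pf = np.array([tour_length(x) for x in X])    # personal-best fitness
    g = P[np.argmin(pf)].copy()                   # global best
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=(2, n_particles, n))
        V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
        X = X + V
        f = np.array([tour_length(x) for x in X])
        better = f < pf
        P[better] = X[better]
        pf[better] = f[better]
        g = P[np.argmin(pf)].copy()
    return np.argsort(g), pf.min()

# Five hypothetical collinear targets: the shortest open path visits them in order
pts = np.array([[0.0, 0.0], [1, 0], [2, 0], [3, 0], [4, 0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
order, best = pso_sequence(D)
print(order, best)
```

The random-key encoding sidesteps the combinatorial search space directly: the continuous swarm dynamics stay unchanged, and only the fitness evaluation decodes each position into a permutation.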