15 results for "Integration of methods"

at Indian Institute of Science - Bangalore - India


Relevance:

100.00%

Publisher:

Abstract:

Foliage density and leaf area index are important vegetation structure variables. They can be measured by several methods, but few have been tested in tropical forests, which have high structural heterogeneity. In this study, foliage density estimates from two indirect methods, the point quadrat and photographic methods, were compared with those obtained by direct leaf counts in the understorey of a wet evergreen forest in southern India. The point quadrat method has a tendency to overestimate, whereas the photographic method consistently and significantly underestimates foliage density. There was stratification within the understorey, with areas close to the ground having higher foliage densities.

Relevance:

90.00%

Publisher:

Abstract:

Subsurface geophysical surveys were carried out using a wide range of methods in an unconfined sandstone aquifer in semiarid south-western Niger, to improve both the conceptual model of water flow through the unsaturated zone and the parameterization of a numerical groundwater model of the aquifer. Methods included electromagnetic mapping, electrical resistivity tomography (ERT), resistivity logging, time domain electromagnetic sounding (TDEM), and magnetic resonance sounding (MRS). Analyses of electrical conductivities, complemented by geochemical measurements, allowed us to identify preferential pathways for infiltration and drainage beneath gullies and alluvial fans. The mean water content estimated by MRS (13%) was used for computing the regional groundwater recharge from the long-term change in the water table. The ranges in permeability and water content obtained with MRS allowed a reduction in the degrees of freedom of the aquifer parameters used in groundwater modelling.
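The recharge computation described above is the standard water-table fluctuation estimate, with the MRS mean water content playing the role of drainable porosity. A minimal sketch; the function name and units are illustrative, not from the paper:

```python
def recharge_from_water_table(delta_h_m, delta_t_years, drainable_porosity=0.13):
    """Water-table fluctuation estimate of recharge: R = theta * dh / dt.

    Uses the MRS mean water content (13%) as the drainable porosity,
    as the abstract describes. Returns recharge in m/yr for a water
    table rise of delta_h_m metres over delta_t_years years.
    """
    return drainable_porosity * delta_h_m / delta_t_years
```

For example, a 2 m long-term rise over a decade would correspond to a mean recharge of 0.026 m/yr under this assumption.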

Relevance:

90.00%

Publisher:

Abstract:

Unintentionally doped homoepitaxial InSb films have been grown by liquid phase epitaxy employing ramp cooling and step cooling growth modes. The effects of growth temperature, degree of supercooling and growth duration on the surface morphology and crystallinity were investigated. The major surface features of the grown films, such as terracing, inclusions and meniscus lines, are presented step by step, and a variety of methods devised to overcome such undesirable features are described in sufficient detail. The optimization of growth parameters has led to the growth of smooth and continuous films. From the detailed morphological, X-ray diffraction, scanning electron microscopy and Raman studies, a correlation between the surface morphology and crystallinity has been established.

Relevance:

90.00%

Publisher:

Abstract:

Nanoparticles probably constitute the largest class of nanomaterials. Nanoparticles of several inorganic materials have been prepared by employing a variety of synthetic strategies. Besides synthesizing nanoparticles, there has been considerable effort to selectively prepare nanoparticles of different shapes. In view of the great interest in inorganic nanoparticles evinced in the last few years, we have prepared this perspective on the present status of the synthesis of inorganic nanoparticles. This article includes a brief discussion of methods, followed by reports on the synthesis of nanoparticles of various classes of inorganic materials such as metals, alloys, oxides, chalcogenides and pnictides. A brief section on core-shell nanoparticles is also included.

Relevance:

90.00%

Publisher:

Abstract:

Scenic word images undergo degradations due to motion blur, uneven illumination, shadows and defocusing, which lead to difficulty in segmentation. As a result, the recognition results reported on the scenic word image datasets of ICDAR have been low. We introduce a novel technique, where we choose the middle row of the image as a sub-image and segment it first. The labels from this segmented sub-image are then used to propagate labels to the other pixels in the image. This approach, which is unique and distinct from the existing methods, results in improved segmentation. Bayesian classification and max-flow methods have been independently used for label propagation. This midline-based approach limits the impact of degradations on the image. The segmented text image is recognized using the trial version of Omnipage OCR. We have tested our method on the ICDAR 2003 and ICDAR 2011 datasets. Our word recognition results of 64.5% and 71.6% are better than those of methods in the literature, including methods that competed in the Robust Reading competition. Our method makes an implicit assumption that degradation is not present in the middle row.
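The seed-and-propagate idea can be sketched in toy form: threshold the middle row to get seed labels, then push labels outward row by row, flipping a label wherever the intensity changes sharply. This is only an illustrative stand-in, with an arbitrary global threshold, for the Bayesian and max-flow propagation actually used:

```python
def propagate_from_midline(image, threshold=128):
    """Label the middle row as text (1) / background (0) by
    thresholding, then propagate each pixel's label from its vertical
    neighbour on the side nearer the midline, flipping the label when
    the intensity jump between the two pixels is large.
    `image` is a list of rows of grey-level values.
    """
    h = len(image)
    w = len(image[0])
    mid = h // 2
    labels = [[None] * w for _ in range(h)]
    labels[mid] = [1 if image[mid][c] < threshold else 0 for c in range(w)]
    # Sweep upward from the midline, then downward.
    for r in list(range(mid - 1, -1, -1)) + list(range(mid + 1, h)):
        src = r + 1 if r < mid else r - 1
        for c in range(w):
            same = abs(image[r][c] - image[src][c]) < 64
            labels[r][c] = labels[src][c] if same else 1 - labels[src][c]
    return labels
```

On an image with a dark vertical stroke on a light background, the stroke column keeps label 1 in every row, mirroring how the midline seeds drive the rest of the segmentation.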

Relevance:

90.00%

Publisher:

Abstract:

The recent focus of flood frequency analysis (FFA) studies has been on the development of methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion process based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data driven, flexible and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared to the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modelling with the D-kernel.
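For contrast with the D-kernel, the conventional Gaussian product-kernel baseline it is compared against can be sketched in a few lines. The bandwidth below follows a Silverman-type normal reference rule, precisely the rule the D-kernel is designed to avoid; variable names are illustrative:

```python
import math

def gaussian_kde_2d(sample, x, y):
    """Conventional bivariate Gaussian product-kernel density estimate.

    `sample` is a list of (peak_flow, volume) pairs. Bandwidths use a
    Silverman-type normal reference rule for 2-D data, h = sigma * n^(-1/6).
    """
    n = len(sample)
    def bandwidth(vals):
        mean = sum(vals) / n
        sd = math.sqrt(sum((v - mean) ** 2 for v in vals) / (n - 1))
        return sd * n ** (-1 / 6)
    hx = bandwidth([p for p, _ in sample])
    hy = bandwidth([v for _, v in sample])
    total = 0.0
    for px, py in sample:
        total += (math.exp(-0.5 * ((x - px) / hx) ** 2) *
                  math.exp(-0.5 * ((y - py) / hy) ** 2))
    return total / (n * 2 * math.pi * hx * hy)
```

A normal-reference bandwidth like this is tuned to Gaussian data, which is why it misbehaves on the skewed, bounded samples typical of flood records.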

Relevance:

90.00%

Publisher:

Abstract:

A variety of methods are available to estimate future solar radiation (SR) scenarios at spatial scales appropriate for local climate change impact assessment. However, no clear guidelines are available in the literature for deciding which methodologies are most suitable for different applications. Three situations guiding the estimation of SR are discussed in this study: Case 1, SR is measured; Case 2, SR is measured but sparse; and Case 3, SR is not measured. In Case 1, future SR scenarios are derived using several downscaling methodologies that transfer the simulated large-scale information of global climate models to the local scale. In Case 2, SR is first estimated at the local scale for a longer time period using the sparse measured records, and future scenarios are then derived using several downscaling methodologies. In Case 3, SR is first estimated at a regional scale for a longer time period using complete or sparse measured records of SR, from which SR at the local scale is estimated; future scenarios are then derived using several downscaling methodologies. The lack of observed SR data, especially in developing countries, has hindered various climate change impact studies. Hence, the Case 3 methodology was further elaborated by applying it to the semi-arid Malaprabha reservoir catchment in southern India. A support vector machine was used in downscaling SR. Future monthly scenarios of SR were estimated from simulations of the third-generation Canadian General Circulation Model (CGCM3) for various SRES emission scenarios (A1B, A2, B1, and COMMIT). Results indicated a projected decrease of 0.4 to 12.2 W m(-2) yr(-1) in SR during the period 2001-2100 across the 4 scenarios. SR was calculated using the modified Hargreaves method. The decreasing trends for the future were in agreement with the simulations of SR obtained directly from the CGCM3 model for the 4 scenarios.
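The modified Hargreaves method mentioned above builds on the Hargreaves-Samani relation, which estimates solar radiation from the diurnal temperature range alone. A minimal sketch, assuming the standard (unmodified) form and the usual krs coefficients; the abstract's "modified" variant adjusts this basic formula:

```python
import math

def hargreaves_solar_radiation(ra, t_max, t_min, krs=0.16):
    """Hargreaves-Samani estimate of solar radiation:
    Rs = krs * sqrt(Tmax - Tmin) * Ra.

    ra is extraterrestrial radiation (result takes its units);
    krs is roughly 0.16 for interior sites and 0.19 for coastal sites.
    """
    return krs * math.sqrt(t_max - t_min) * ra
```

The appeal of this form is that it needs only daily temperature extremes, which is exactly why it suits data-sparse catchments like the one studied here.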

Relevance:

90.00%

Publisher:

Abstract:

Tradeoffs are examined between mitigating black carbon (BC) and carbon dioxide (CO2) for limiting peak global mean warming, using the following set of methods. A two-box climate model is used to simulate temperatures of the atmosphere and ocean for different rates of mitigation. Mitigation rates for BC and CO2 are characterized by respective timescales for e-folding reduction in the emissions intensity of gross global product. Separate emissions models for BC and CO2 force the box model. Lastly, a simple economics model is used, with the cost of mitigation varying inversely with emission intensity. A constant mitigation timescale corresponds to mitigation at a constant annual rate; for example, an e-folding timescale of 40 years corresponds to a 2.5% reduction each year. The discounted present cost depends only on the respective mitigation timescale and the respective mitigation cost at present levels of emission intensity. Least-cost mitigation is posed as choosing the respective e-folding timescales to minimize total mitigation cost under a temperature constraint (e.g. within 2 degrees C above preindustrial). Peak warming is more sensitive to the mitigation timescale for CO2 than for BC. Therefore rapid mitigation of CO2 emission intensity is essential to limiting peak warming, but simultaneous mitigation of BC can reduce total mitigation expenditure. (c) 2015 Elsevier B.V. All rights reserved.
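The correspondence between an e-folding timescale and an annual percentage cut quoted above can be checked directly. A minimal sketch; the function name is illustrative, not from the paper:

```python
import math

def emission_intensity(tau_years, t_years, initial=1.0):
    """Emission intensity of gross global product declining with
    e-folding timescale tau: I(t) = I0 * exp(-t / tau)."""
    return initial * math.exp(-t_years / tau_years)

# A 40-year e-folding timescale gives close to a 2.5% cut in emission
# intensity each year, as the abstract states (1 - e^(-1/40) ~ 0.0247).
annual_cut = 1.0 - emission_intensity(40, 1)
```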

Relevance:

80.00%

Publisher:

Abstract:

A new form of a multi-step transversal linearization (MTL) method is developed and numerically explored in this study for a numeric-analytical integration of non-linear dynamical systems under deterministic excitations. As with other transversal linearization methods, the present version also requires that the linearized solution manifold transversally intersects the non-linear solution manifold at a chosen set of points or cross-section in the state space. However, a major point of departure of the present method is that it has the flexibility of treating non-linear damping and stiffness terms of the original system as damping and stiffness terms in the transversally linearized system, even though these linearized terms become explicit functions of time. From this perspective, the present development is closely related to the popular practice of tangent-space linearization adopted in finite element (FE) based solutions of non-linear problems in structural dynamics. The only difference is that the MTL method would require construction of transversal system matrices in lieu of the tangent system matrices needed within an FE framework. The resulting time-varying linearized system matrix is then treated as a Lie element using Magnus’ characterization [W. Magnus, On the exponential solution of differential equations for a linear operator, Commun. Pure Appl. Math., VII (1954) 649–673] and the associated fundamental solution matrix (FSM) is obtained through repeated Lie-bracket operations (or nested commutators). An advantage of this approach is that the underlying exponential transformation could preserve certain intrinsic structural properties of the solution of the non-linear problem. 
Yet another advantage of the transversal linearization lies in the non-unique representation of the linearized vector field – an aspect that has been specifically exploited in this study to enhance the spectral stability of the proposed family of methods and thus contain the temporal propagation of local errors. A simple analysis of the formal orders of accuracy is provided within a finite dimensional framework. Only a limited numerical exploration of the method is presently provided for a couple of popularly known non-linear oscillators, viz. a hardening Duffing oscillator, which has a non-linear stiffness term, and the van der Pol oscillator, which is self-excited and has a non-linear damping term.
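The Magnus characterization invoked above is the standard exponential representation of the fundamental solution matrix (FSM) of a linear time-varying system x'(t) = A(t) x(t); its first two terms, from the cited Magnus (1954) paper, are:

```latex
\Phi(t) = \exp\big(\Omega(t)\big), \qquad
\Omega(t) = \int_0^t A(t_1)\,dt_1
  + \tfrac{1}{2}\int_0^t\!\!\int_0^{t_1} \big[A(t_1),\,A(t_2)\big]\,dt_2\,dt_1
  + \cdots
```

Here [.,.] is the matrix commutator, and the higher-order terms involve nested commutators, which is what the abstract's "repeated Lie-bracket operations" refers to. The exponential form is also what lets the method preserve structural properties of the solution.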

Relevance:

80.00%

Publisher:

Abstract:

A number of methods exist that use different approaches to assess geometric properties such as surface complementarity and atom packing at the protein-protein interface. We have developed two new and conceptually different measures using Delaunay tessellation and interface slice selection to compute the surface complementarity and atom packing at the protein-protein interface in a straightforward manner. Our measures show a strong correlation among themselves and with other existing measures, and can be calculated in a highly time-efficient manner. The measures are discriminative for evaluating biological as well as non-biological protein-protein contacts, especially from large protein complexes and large-scale structural studies (http://pallab.serc.iisc.ernet.in/nip_nsc). (C) 201 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

In general, the objective of accurately encoding the input data and the objective of extracting good features to facilitate classification are not consistent with each other. As a result, good encoding methods may not be effective mechanisms for classification. In this paper, an earlier proposed unsupervised feature extraction mechanism for pattern classification is extended to obtain an invertible map. The method of bimodal projection-based features was inspired by the general class of methods called projection pursuit. The principle of projection pursuit concentrates on projections that discriminate between clusters rather than on faithful representations. The basic feature map obtained by the method of bimodal projections has been extended to overcome this. The extended feature map is an embedding of the input space in the feature space. As a result, the inverse map exists and hence the representation of the input space in the feature space is exact. This map can be naturally expressed as a feedforward neural network.

Relevance:

80.00%

Publisher:

Abstract:

Traditional image reconstruction methods in rapid dynamic diffuse optical tomography employ l(2)-norm-based regularization, which is known to remove the high-frequency components in the reconstructed images and make them appear smooth. The contrast recovery in these types of methods typically depends on the iterative nature of the method employed, where nonlinear iterative techniques are known to perform better than linear (noniterative) techniques, with the caveat that nonlinear techniques are computationally complex. Assuming a linear dependency of the solution between successive frames results in a linear inverse problem. This new framework, combined with l(1)-norm based regularization, can provide better robustness to noise and better contrast recovery compared to conventional l(2)-based techniques. Moreover, it is shown that the proposed l(1)-based technique is computationally efficient compared to its l(2)-based counterpart. The proposed framework requires a reasonably close estimate of the actual solution for the initial frame, and any suboptimal estimate leads to erroneous reconstruction results for the subsequent frames.
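The qualitative difference between the two penalties can be seen from their elementwise update steps; a generic sketch of the proximal operators, not the tomography solver itself:

```python
def l2_shrink(v, lam):
    """l2 (Tikhonov) proximal step: uniform shrinkage of every
    coefficient, which smooths everything and so loses the
    high-frequency components the abstract mentions."""
    return [x / (1.0 + lam) for x in v]

def l1_soft_threshold(v, lam):
    """l1 proximal step (soft thresholding): small coefficients go
    exactly to zero while large ones survive, preserving sparse,
    high-contrast structure."""
    out = []
    for x in v:
        if x > lam:
            out.append(x - lam)
        elif x < -lam:
            out.append(x + lam)
        else:
            out.append(0.0)
    return out
```

Soft thresholding is also cheap per iteration, one comparison per coefficient, which is consistent with the computational efficiency claimed for the l(1)-based technique.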

Relevance:

80.00%

Publisher:

Abstract:

The significant contribution of naturally occurring disulfide bonds to protein stability has encouraged the development of methods to engineer non-native disulfides into proteins. These have yielded mixed results. We summarize applications of the program MODIP for disulfide engineering. The program predicts sites in proteins where disulfides can be stably introduced. The program has also been used as an aid in the conformational analysis of naturally occurring disulfides in alpha-helices and in antiparallel and parallel beta-strands. Disulfides in alpha-helices occur only at N-termini, where the first cysteine residue is the N-cap residue of the helix. The disulfide occurs as a CXXC motif and can possess redox activity. In antiparallel beta-strands, disulfides occur exclusively at non-hydrogen bonded (NHB) registered pairs of antiparallel beta-sheets, with only one known natural example occurring at a hydrogen bonded (HB) registered pair. Conformational analysis suggests that disulfides between HB residue pairs are under torsional strain. A similar analysis to characterize disulfides in parallel beta-strands was carried out. We observed that only 9 instances of cross-strand disulfides exist in a non-redundant dataset. Stereochemical analysis shows that while the chi(ss) angles are similar to those of other disulfides, the chi(1) and chi(2) angles show more variation, and that one of the strands is generally an edge strand.

Relevance:

80.00%

Publisher:

Abstract:

In today's API-rich world, programmer productivity depends heavily on the programmer's ability to discover the required APIs. In this paper, we present a technique and tool, called MATHFINDER, to discover APIs for mathematical computations by mining unit tests of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code to compute the expression by mapping its subexpressions to API method calls. For each subexpression, MATHFINDER searches for a method such that there is a mapping between method inputs and variables of the subexpression. The subexpression, when evaluated on the test inputs of the method under this mapping, should produce results that match the method output on a large number of tests. We implemented MATHFINDER as an Eclipse plugin for discovery of third-party Java APIs and performed a user study to evaluate its effectiveness. In the study, the use of MATHFINDER resulted in a 2x improvement in programmer productivity. In 96% of the subexpressions queried for in the study, MATHFINDER retrieved the desired API methods as the top-most result. The top-most pseudo-code snippet to implement the entire expression was correct in 93% of the cases. Since the number of methods and unit tests to mine could be large in practice, we also implement MATHFINDER in a MapReduce framework and evaluate its scalability and response time.
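The core matching idea, checking a candidate method's mined unit-test input/output pairs against the queried subexpression, can be sketched as follows. This is an illustrative reconstruction, not the MATHFINDER implementation; the names, the tolerance, and the 90% threshold are assumptions:

```python
def matches_subexpression(subexpr, tests, threshold=0.9):
    """Decide whether an API method behaves like a math subexpression.

    `subexpr` is a callable for the queried subexpression; `tests` is a
    list of (inputs, output) pairs mined from the method's unit tests.
    The method is a candidate if the subexpression, evaluated on the
    test inputs under the chosen input-to-variable mapping, reproduces
    the recorded outputs on a large enough fraction of the tests.
    """
    hits = sum(1 for inputs, output in tests
               if abs(subexpr(*inputs) - output) < 1e-9)
    return hits / len(tests) >= threshold

# Example: do the mined (input, output) pairs of some method look like x*x?
mined_tests = [((2.0,), 4.0), ((3.0,), 9.0), ((1.5,), 2.25)]
is_square = matches_subexpression(lambda x: x * x, mined_tests)
```

Ranking candidates by this match fraction is what would put the desired method at the top of the results, as the user study reports.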

Relevance:

80.00%

Publisher:

Abstract:

Subtle concurrency errors in multithreaded libraries that arise because of incorrect or inadequate synchronization are often difficult to pinpoint precisely using only static techniques. On the other hand, the effectiveness of dynamic race detectors is critically dependent on multithreaded test suites whose execution can be used to identify and trigger races. Usually, such multithreaded tests need to invoke a specific combination of methods with objects involved in the invocations being shared appropriately to expose a race. Without a priori knowledge of the race, construction of such tests can be challenging. In this paper, we present a lightweight and scalable technique for synthesizing precisely these kinds of tests. Given a multithreaded library and a sequential test suite, we describe a fully automated analysis that examines sequential execution traces, and produces as its output a concurrent client program that drives shared objects via library method calls to states conducive for triggering a race. Experimental results on a variety of well-tested Java libraries yield 101 synthesized multithreaded tests in less than four minutes. Analyzing the execution of these tests using an off-the-shelf race detector reveals 187 harmful races, including several previously unreported ones.