63 results for Classification of Documentary Credit


Relevance: 100.00%

Publisher:

Abstract:

Reduction of carbon emissions is of paramount importance in the context of global warming. Countries and global companies are now engaged in understanding systematic ways of achieving well-defined emission targets. In fact, carbon credits have become significant and strategic instruments of finance for countries and global companies. In this paper, we formulate and suggest a solution to the carbon allocation problem, which involves determining a cost-minimizing allocation of carbon credits among different emitting agents. We address this challenge in the context of a global company that must determine an allocation of carbon credit caps among its divisions in a cost-effective way. The problem is formulated as a reverse auction in which the company plays the role of a buyer or carbon planning authority and the divisions within the company are the emitting agents, each specifying a cost curve for carbon credit reductions. Two natural variants of the problem are considered: (a) with an unlimited budget and (b) with a limited budget. Under suitable assumptions on the cost curves, we show in each case that the resulting formulation is a knapsack problem that can be solved optimally using a greedy algorithm. The solution of the allocation problem provides critical decision support to global companies seriously engaged in green programs.
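A minimal sketch of the greedy idea for the budgeted variant, assuming each division quotes a constant marginal cost per unit of reduction (the divisions, numbers and the linear cost curves are hypothetical; the paper's cost-curve assumptions are richer):

```python
# Greedy allocation for a budgeted reverse auction: buy reduction units from
# the cheapest divisions first until the target or the budget is exhausted.
# All offers here are hypothetical, with constant per-unit costs.

def allocate(offers, target, budget):
    """offers: list of (division, units_available, cost_per_unit)."""
    allocation = {}
    spent = 0.0
    remaining = target
    for division, units, cost in sorted(offers, key=lambda o: o[2]):
        if remaining <= 0 or spent >= budget:
            break
        affordable = int((budget - spent) // cost)
        take = min(units, remaining, affordable)
        if take > 0:
            allocation[division] = take
            spent += take * cost
            remaining -= take
    return allocation, spent

offers = [("A", 100, 2.0), ("B", 50, 1.0), ("C", 80, 3.0)]
alloc, cost = allocate(offers, target=120, budget=500)
```

With constant marginal costs, cheapest-first is exactly the greedy rule that is optimal for the continuous knapsack.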

Relevance: 100.00%

Publisher:

Abstract:

Part classification and coding is still considered a laborious and time-consuming exercise. Keeping in view the crucial role it plays in developing automated CAPP systems, an attempt has been made in this article to automate a few elements of this exercise using a shape analysis model. In this study, a 24-vector directional template is used to represent the feature elements of the parts (candidate and prototype). Various transformation processes such as deformation, straightening, bypassing, insertion and deletion are embedded in the proposed simulated annealing (SA)-like hybrid algorithm to match a candidate part with its prototype. For a candidate part, searching for its matching prototype in the information database is computationally expensive and requires a large search space. The proposed SA-like hybrid algorithm for solving the part classification problem considerably reduces the search space and ensures early convergence of the solution. The application of the proposed approach is illustrated with an example part, and the approach is then applied to the classification of 100 candidate parts and their prototypes to demonstrate the effectiveness of the algorithm. (C) 2003 Elsevier Science Ltd. All rights reserved.
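A simplified sketch of the matching step: parts are encoded as strings of direction codes (a stand-in for the 24-vector directional template), and plain edit distance stands in for the paper's transformation cost; the prototypes and codes below are hypothetical:

```python
# Match a candidate direction-code string to the closest prototype using edit
# distance, whose insertion/deletion/substitution operations loosely mirror the
# paper's insertion, deletion and deformation transformations.

def edit_distance(a, b):
    """Minimum insertions, deletions and substitutions to turn a into b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # deformation / match
    return d[m][n]

def best_prototype(candidate, prototypes):
    return min(prototypes, key=lambda p: edit_distance(candidate, prototypes[p]))

prototypes = {"bracket": "NNEESSWW", "shaft": "EEEEEEWW"}
match = best_prototype("NNEESSW", prototypes)
```

The SA-like hybrid of the paper avoids evaluating every prototype exhaustively; this exhaustive version only illustrates the cost comparison itself.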

Relevance: 100.00%

Publisher:

Abstract:

A method of testing for parametric faults of analog circuits, based on a polynomial representation of the fault-free function of the circuit, is presented. The response of the circuit under test (CUT) is estimated as a polynomial in the applied input voltage at relevant frequencies, in addition to DC. Classification of the CUT is based on a comparison of the estimated polynomial coefficients with those of the fault-free circuit. The method needs very little augmentation of the circuit to make it testable, as only output parameters are used for classification. This procedure is shown to uncover several parametric faults causing deviations smaller than 5% from the nominal values. Fault diagnosis based on the sensitivity of the polynomial coefficients at relevant frequencies is also proposed.
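The coefficient-comparison idea can be sketched as follows; the response function, polynomial degree and tolerance are hypothetical, not values from the paper:

```python
# Fit a polynomial to the measured input-output response of the CUT and flag
# the circuit if any coefficient drifts beyond a tolerance from the fault-free
# ("golden") coefficients.

import numpy as np

def classify(v_in, v_out, golden_coeffs, tol=0.05):
    """Fit a polynomial of the golden model's degree and compare coefficients."""
    degree = len(golden_coeffs) - 1
    coeffs = np.polyfit(v_in, v_out, degree)
    # Relative deviation of each estimated coefficient from its golden value.
    deviation = np.abs(coeffs - golden_coeffs) / np.abs(golden_coeffs)
    return "faulty" if np.any(deviation > tol) else "fault-free"

v_in = np.linspace(0.0, 1.0, 50)
golden = np.array([2.0, 0.5])            # fault-free response: 2.0*v + 0.5
verdict_ok = classify(v_in, 2.0 * v_in + 0.5, golden)
verdict_bad = classify(v_in, 2.3 * v_in + 0.5, golden)  # ~15% gain fault
```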

Relevance: 100.00%

Publisher:

Abstract:

The problem of on-line recognition and retrieval of relatively weak industrial signals, such as partial discharges (PD), buried in excessive noise is addressed in this paper. The major bottleneck is the recognition and suppression of stochastic pulsive interference (PI), owing to the overlapping broadband frequency spectra of PI and PD pulses; as a result, on-line, on-site PD measurement is hardly possible with conventional frequency-based DSP techniques. The observed PD signal is modeled as a linear combination of systematic and random components employing probabilistic principal component analysis (PPCA), and the pdf of the underlying stochastic process is obtained. The PD/PI pulses are taken as the mean of the process and modeled using non-parametric methods based on smooth FIR filters, with a maximum a posteriori probability (MAP) procedure employed to estimate the filter coefficients. The classification of the pulses is undertaken using a simple PCA classifier. The methods proposed by the authors were found to be effective in automatic retrieval of PD pulses while completely rejecting PI.
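A minimal sketch of a "simple PCA classifier" of the kind mentioned above: project labeled pulse waveforms onto their leading principal components and assign a new pulse to the nearest class centroid in that subspace. The synthetic decaying and oscillatory pulse shapes are hypothetical stand-ins for PD and PI waveforms:

```python
# Nearest-centroid classification in a PCA subspace, using SVD of the
# mean-centered training pulses to obtain the principal directions.

import numpy as np

def fit_pca(X, k):
    """Return the mean and top-k principal directions of the rows of X."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def classify(x, mu, components, centroids):
    z = components @ (x - mu)
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))

t = np.linspace(0, 1, 64)
pd_pulses = np.array([np.exp(-20 * t) * (1 + 0.01 * i) for i in range(10)])
pi_pulses = np.array([np.sin(8 * np.pi * t) * (1 + 0.01 * i) for i in range(10)])
X = np.vstack([pd_pulses, pi_pulses])
mu, comps = fit_pca(X, k=2)
centroids = {
    "PD": (comps @ (pd_pulses - mu).T).mean(axis=1),
    "PI": (comps @ (pi_pulses - mu).T).mean(axis=1),
}
label = classify(np.exp(-20 * t) * 1.005, mu, comps, centroids)
```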

Relevance: 100.00%

Publisher:

Abstract:

This paper presents a new algorithm for extracting Free-Form Surface Features (FFSFs) from a surface model. The extraction algorithm is based on a taxonomy of FFSFs modified from that proposed in the literature, and a new classification scheme is proposed for FFSFs to enable their representation and extraction. The paper proposes a separating curve as the signature of an FFSF in a surface model. FFSFs are classified based on the characteristics of the separating curve (number and type) and the influence region (the region enclosed by the separating curve), and a method to extract these entities is presented. The algorithm has been implemented and tested on various free-form surface features on different types of free-form (base) surfaces and is found to correctly identify and represent the features irrespective of the type of underlying surface. The representation and the extraction algorithm are both based on topology and geometry; the algorithm is data-driven and does not use any pre-defined templates. The definition presented for a feature is unambiguous and application independent. The proposed classification of FFSFs can be used to develop an ontology to determine semantic equivalences, enabling features to be exchanged, mapped and used across PLM applications. (C) 2011 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Publisher:

Abstract:

There exists a huge range of fish species, besides other aquatic organisms like squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of the body that undergoes such undulatory motions. In the anguilliform mode, or the eel type, the entire body undergoes undulatory motions in the form of a travelling wave that goes from head to tail, while in the other extreme case, the thunniform mode, only the rear tail (caudal fin) undergoes lateral oscillations. The thunniform mode of swimming is essentially based on the lift force generated by the airfoil-like cross-section of the fish tail as it moves laterally through the water, while the anguilliform mode may be understood using the "reactive theory" of Lighthill. In pulsed jet propulsion, adopted by squids and salps, there are two components to the thrust: the first due to the familiar ejection of momentum, and the other due to an over-pressure at the exit plane caused by the unsteadiness of the jet. The flow immediately downstream of the body in all three modes consists of vortex rings, the differentiating point being the vastly different orientations of the vortex rings. However, since all the bodies are self-propelling, the thrust force must be equal to the drag force (at steady speed), implying no net force on the body; hence the wake or flow downstream must be momentumless. For such bodies, where there is no net force, it is difficult to directly define a propulsion efficiency, although it is possible to use other, very different measures like "cost of transportation" to broadly judge performance.
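The two-component thrust decomposition for pulsed jets mentioned above can be written in a standard form (the symbols here are generic, not the review's notation):

```latex
% Thrust of a pulsed jet: momentum ejection plus exit-plane over-pressure.
% \dot{m} = mass flux, u_j = jet exit velocity, p_e = exit-plane pressure,
% p_\infty = ambient pressure, A_e = exit area.
T = \dot{m}\, u_j + \left( p_e - p_\infty \right) A_e
```

The first term is the familiar momentum flux; the second, pressure-driven term vanishes for a steady fully expanded jet but can be significant when the jet is unsteady.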

Relevance: 100.00%

Publisher:

Abstract:

Monodisperse polyhedral In₂O₃ nanoparticles were synthesized by differential mobility classification of a polydisperse aerosol formed by evaporation of indium at atmospheric pressure. When free molten indium particles oxidize, oxygen is adsorbed preferentially on certain planes, leading to the formation of polyhedral In₂O₃ nanoparticles. It is shown that the position of oxygen addition, its concentration, the annealing temperature and the type of carrier gas are crucial for the resulting particle shape and crystalline quality. Semiconducting nanopolyhedra, especially nanocubes used for sensors, are expected to offer enhanced sensitivity and improved response time due to their higher surface area as compared to spherical particles.

Relevance: 100.00%

Publisher:

Abstract:

Subsurface lithology and seismic site classification of the Lucknow urban center, located in the central part of the Indo-Gangetic Basin (IGB), are presented based on detailed shallow subsurface investigations and borehole analysis. These were carried out through 47 seismic surface wave tests using multichannel analysis of surface waves (MASW) and 23 boreholes drilled up to 30 m with standard penetration test (SPT) N values. Subsurface lithology profiles drawn from the drilled boreholes show low- to medium-compressibility clay and silty to poorly graded sand down to a depth of 30 m. In addition, deeper borehole logs (in this paper, boreholes with depths over 150 m) were collected from the Lucknow Jal Nigam (Water Corporation), Government of Uttar Pradesh, to understand the deeper subsoil stratification. These reports show the presence of clay mixed with sand and kankar at some locations down to a depth of 150 m, followed by layers of sand, clay and kankar up to 400 m. Based on the available details, shallow and deep cross-sections through Lucknow are presented. Shear wave velocity (SWV) and N-SPT values were measured for the study area using MASW and SPT testing; values measured at the same locations were found to be comparable. These values were used to estimate 30 m average values of N-SPT (N30) and SWV (Vs30) for seismic site classification of the study area as per the National Earthquake Hazards Reduction Program (NEHRP) soil classification system. Based on the NEHRP classification, the study area falls into site classes C and D based on Vs30, and into site classes D and E based on N30. The issue of larger amplification during future seismic events is highlighted for the major part of the study area that comes under site classes D and E. Moreover, the mismatch between the site classes based on N30 and Vs30 raises the question of the suitability of the NEHRP classification system for the study region.
Further, 17 sets of SPT and SWV data are used to develop a correlation between N-SPT and SWV. This represents a first attempt at seismic site classification, and at correlating N-SPT with SWV, in the Indo-Gangetic Basin.
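The 30 m average shear wave velocity used for NEHRP site classification is a travel-time average over the top 30 m. A minimal sketch (the layer profile is hypothetical; the C/D/E boundaries follow the standard 360 m/s and 180 m/s limits):

```python
# Vs30 = 30 / sum(d_i / v_i) over the layers in the top 30 m, followed by a
# lookup against the standard NEHRP velocity class boundaries.

def vs30(layers):
    """layers: list of (thickness_m, velocity_m_per_s) summing to 30 m."""
    total_time = sum(d / v for d, v in layers)
    return 30.0 / total_time

def nehrp_class(v):
    if v > 360:
        return "C"
    if v >= 180:
        return "D"
    return "E"

layers = [(5, 150), (10, 220), (15, 400)]  # hypothetical soil profile
v = vs30(layers)
site = nehrp_class(v)
```

Note that Vs30 is a harmonic-style average: the slow upper layers dominate the travel time, which is why a site with a fast layer at depth can still classify as D.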

Relevance: 100.00%

Publisher:

Abstract:

Diffusion of pentane isomers in zeolite NaX has been investigated using pulsed field gradient nuclear magnetic resonance (PFG-NMR) and molecular dynamics (MD) techniques. The temperature and concentration dependence of the diffusivities has been studied. The diffusivities obtained from NMR are roughly an order of magnitude smaller than those obtained from MD. The dependence of diffusivity on loading at high temperatures exhibits type I behavior according to the classification of Karger and Pfeifer [1]. NMR diffusivities of the isomers follow the order D(n-pentane) > D(isopentane) > D(neopentane), whereas the MD results suggest the order D(n-pentane) < D(isopentane) < D(neopentane). The activation energies from NMR show E_a(n-pentane) < E_a(isopentane) < E_a(neopentane), whereas those from MD suggest the order E_a(n-pentane) > E_a(isopentane) > E_a(neopentane). The latter follows the predictions of the levitation effect, whereas the NMR ordering appears to be due to the presence of defects in the zeolite crystals. The differences between the diffusivities estimated by NMR and MD are attributed to the longer time and length scales sampled by the NMR technique as compared to MD. (C) 2012 Elsevier Inc. All rights reserved.
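The activation energies compared above come from the Arrhenius form D = D₀ exp(−E_a/RT). A minimal sketch of extracting E_a from diffusivities at two temperatures (the numbers are hypothetical, not data from this study):

```python
# Two-point Arrhenius estimate of the activation energy from D(T1) and D(T2):
# ln(D1/D2) = -(Ea/R) * (1/T1 - 1/T2)  =>  Ea = R * ln(D1/D2) / (1/T2 - 1/T1)

import math

R = 8.314  # gas constant, J/(mol K)

def activation_energy(d1, t1, d2, t2):
    """Ea (J/mol) from diffusivities d1 at t1 and d2 at t2 (temperatures in K)."""
    return R * math.log(d1 / d2) / (1.0 / t2 - 1.0 / t1)

# Hypothetical diffusivities (m^2/s) at 300 K and 400 K.
ea = activation_energy(1.0e-10, 300.0, 5.0e-10, 400.0)
```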

Relevance: 100.00%

Publisher:

Abstract:

In this paper we discuss a novel procedure for constructing clusters of bound particles in the case of a quantum integrable derivative delta-function Bose gas in one dimension. It is shown that clusters of bound particles can be constructed for this Bose gas for some special values of the coupling constant by taking the quasi-momenta associated with the corresponding Bethe state to be equidistant points on a single circle in the complex momentum plane. We also establish a connection between these special values of the coupling constant and certain fractions belonging to the Farey sequences in number theory. This connection leads to a classification of the clusters of bound particles associated with the derivative delta-function Bose gas and allows us to study various properties of these clusters, such as their size and their stability under variation of the coupling constant. (C) 2013 Elsevier B.V. All rights reserved.
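The Farey sequence F_n referred to above is the ordered list of fully reduced fractions in [0, 1] with denominators at most n. A standard next-term recurrence generates it without any gcd computations (this illustrates only the number-theoretic object, not the paper's construction):

```python
# Generate the Farey sequence F_n using the classic recurrence: given two
# consecutive terms a/b < c/d, the next term is (kc - a)/(kd - b) with
# k = floor((n + b) / d).

def farey(n):
    """Return F_n as a list of (numerator, denominator) pairs."""
    a, b, c, d = 0, 1, 1, n
    seq = [(a, b)]
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append((a, b))
    return seq
```

For example, F_3 is 0/1, 1/3, 1/2, 2/3, 1/1; consecutive terms a/b, c/d always satisfy bc − ad = 1.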

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we describe a method for feature extraction and classification of characters manually isolated from scene or natural images. Characters in a scene image may be affected by low resolution, uneven illumination or occlusion. We propose a novel method to perform binarization of gray-scale images by minimizing an energy functional. The Discrete Cosine Transform and the Angular Radial Transform are used to extract features from the characters after normalization for scale and translation. We have evaluated our method on the complete test set of the Chars74k dataset for the English and Kannada scripts, consisting of handwritten and synthesized characters as well as characters extracted from camera-captured images. We use only the synthesized and handwritten characters from this dataset as the training set. Nearest neighbor classification is used in our experiments.
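A minimal sketch of DCT-based features of the kind described above: take a 2-D DCT of the normalized character image and keep the low-frequency coefficients as the feature vector (the plain unnormalized DCT-II and the k × k feature choice here are assumptions, not the paper's exact recipe):

```python
# 2-D DCT-II of a square image, followed by truncation to the top-left
# (low-frequency) block of coefficients as a compact feature vector.

import numpy as np

def dct2(x):
    """Unnormalized 2-D DCT-II of a square array."""
    n = x.shape[0]
    grid = np.arange(n)
    # basis[k, m] = cos(pi * (m + 0.5) * k / n)
    basis = np.cos(np.pi * np.outer(grid, grid + 0.5) / n)
    return basis @ x @ basis.T

def dct_features(image, k=4):
    """Top-left k x k block of DCT coefficients, flattened."""
    return dct2(image)[:k, :k].flatten()

img = np.ones((8, 8))
feats = dct_features(img)
```

The DC coefficient (index 0, 0) is simply the sum of all pixels, so for a constant 8 × 8 image it equals 64.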

Relevance: 100.00%

Publisher:

Abstract:

Classification of a large document collection involves dealing with a huge feature space in which each distinct word is a feature. In such an environment, classification is a costly task, both in terms of running time and computing resources, and it does not guarantee optimal results, because using every feature is likely to cause overfitting. In such a context, feature selection is inevitable. This work analyses feature selection methods, explores the relations among them, and attempts to find a minimal subset of features that is discriminative for document classification.
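A toy sketch of the underlying idea: score each word by how unevenly it occurs across classes and keep only the top-scoring words. The corpus and the simple document-frequency-gap score are hypothetical stand-ins for the feature selection methods the work analyses:

```python
# Score each word feature by the gap between its per-class document
# frequencies; discriminative words occur mostly in one class.

from collections import Counter

def feature_scores(docs, labels):
    classes = set(labels)
    df = {c: Counter() for c in classes}
    n = {c: 0 for c in classes}
    for doc, label in zip(docs, labels):
        n[label] += 1
        for word in set(doc.split()):
            df[label][word] += 1
    vocab = set(w for c in classes for w in df[c])
    return {w: max(df[c][w] / n[c] for c in classes)
               - min(df[c][w] / n[c] for c in classes)
            for w in vocab}

docs = ["ball goal match", "goal striker ball",
        "loan bank credit", "bank credit rate"]
labels = ["sport", "sport", "finance", "finance"]
scores = feature_scores(docs, labels)
```

Here "ball" (only in sport documents) scores 1.0, while a word appearing equally often in both classes would score 0 and could be dropped.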

Relevance: 100.00%

Publisher:

Abstract:

There are many popular models available for the classification of documents, such as the Naïve Bayes classifier, k-Nearest Neighbors and the Support Vector Machine. In all these cases, the representation is based on the "Bag of Words" model. This model does not capture the actual semantic meaning of a word in a particular document; semantics are better captured by the proximity of words and their occurrence in the document. We propose a new "Bag of Phrases" model to capture this discriminative power of phrases for text classification. We present a novel algorithm to extract phrases from the corpus using the well-known topic model, Latent Dirichlet Allocation (LDA), and to integrate them into the vector space model for classification. Experiments show better performance of classifiers with the new Bag of Phrases model than with related representation models.
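A minimal sketch of a bag-of-phrases representation: here adjacent-word bigrams stand in for the LDA-derived phrases (the phrase extraction itself is the paper's contribution and is not reproduced), and each document becomes a count vector over the phrase vocabulary:

```python
# Build a phrase vocabulary from a toy corpus and represent each document as
# a vector of phrase counts, analogous to a bag-of-words vector space model.

from collections import Counter

def phrases(doc):
    """Adjacent-word bigrams as a crude stand-in for extracted phrases."""
    words = doc.lower().split()
    return [" ".join(pair) for pair in zip(words, words[1:])]

def vectorize(doc, vocabulary):
    counts = Counter(phrases(doc))
    return [counts[p] for p in vocabulary]

corpus = ["machine learning improves text classification",
          "text classification with machine learning"]
vocabulary = sorted(set(p for d in corpus for p in phrases(d)))
vectors = [vectorize(d, vocabulary) for d in corpus]
```

Unlike single words, the phrase "machine learning" keeps the two words bound together, which is exactly the proximity information a bag of words discards.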

Relevance: 100.00%

Publisher:

Abstract:

Crop type classification using remote sensing data plays a vital role in planning cultivation activities and in the optimal usage of the available fertile land; a reliable and precise classification of agricultural crops can thus help improve agricultural productivity. In this paper, a gene expression programming (GEP) based fuzzy logic approach for multiclass crop classification using multispectral satellite images is proposed. The purpose of this work is to utilize the optimization capabilities of GEP for tuning the fuzzy membership functions; the capabilities of GEP as a classifier are also studied. The proposed method is compared with the Bayesian and maximum likelihood classifiers in terms of performance. From the results we conclude that the proposed method is effective for classification.
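A minimal sketch of the kind of fuzzy membership function whose parameters such an approach would tune: a triangular membership over a spectral band value (the class name and parameters are hypothetical, not tuned values from the paper):

```python
# Triangular fuzzy membership: rises linearly from a to the peak at b, falls
# to c, and is zero outside [a, c]. GEP would search over (a, b, c) per class.

def triangular(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a reflectance value 0.45 belongs to a hypothetical "wheat" class.
mu = triangular(0.45, 0.3, 0.5, 0.7)
```

A pixel is then assigned to the crop class whose membership value is highest across all bands and classes.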

Relevance: 100.00%

Publisher:

Abstract:

Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high-density regions, or clusters, from the individual class-conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation depends heavily on the second-order statistics, and such relaxations are in general susceptible to moment estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors; in particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of a spherical normal distribution of clusters, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with the true moments, even when the moment estimates are erroneous. The results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
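The Chebyshev-inequality-based relaxation referred to above is commonly written as a second-order cone constraint: requiring a cluster with mean μ and covariance Σ to lie on the correct side of the hyperplane wᵀx + b = 0 with probability at least η yields (the notation is the standard form, not necessarily the paper's):

```latex
% Multivariate Chebyshev bound: P(w^{\top} x + b \ge 0) \ge \eta for any x
% with mean \mu and covariance \Sigma becomes a deterministic SOC constraint.
w^{\top}\mu + b \;\ge\; \kappa(\eta)\,\sqrt{w^{\top}\Sigma\, w},
\qquad \kappa(\eta) = \sqrt{\frac{\eta}{1-\eta}}
```

The right-hand side is a second-order cone term in w, which is why the resulting learning problems, including the robust variants with spherical normal clusters, can be posed as second-order cone programs.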