59 results for conditional


Relevance:

10.00%

Publisher:

Abstract:

The major contribution of this paper is to introduce load compatibility constraints into the mathematical model for the capacitated vehicle routing problem with pickups and deliveries. The employee transportation problem in Indian call centers and the transportation of hazardous materials provided the motivation for this variation. We develop an integer programming model for the vehicle routing problem with load compatibility constraints. Specifically, two types of load compatibility constraints are introduced: mutual exclusion and conditional exclusion. The model is demonstrated with an application from the employee transportation problem in Indian call centers.
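
A minimal sketch of how the two constraint classes could be written, assuming binary variables x_{iv} that equal 1 when request i is assigned to vehicle v; both the notation and the reading of "conditional exclusion" below are illustrative, not necessarily the paper's exact definitions:

```latex
% Illustrative notation: x_{iv} = 1 if request i is assigned to vehicle v.
\begin{align}
  & x_{iv} + x_{jv} \le 1
      && \forall v,\ \forall (i,j) \in \mathcal{M}
      && \text{(mutual exclusion)} \\
  & x_{iv} + x_{jv} - 1 \le x_{kv}
      && \forall v,\ \forall (i,j,k) \in \mathcal{C}
      && \text{(conditional exclusion, one plausible reading)}
\end{align}
```

In the second family, requests i and j may share a vehicle only if an escorting request k (for example, a security guard in the call-center setting) is also assigned to it.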

Relevance:

10.00%

Publisher:

Abstract:

Mining and blending operations in the high grade iron ore deposit under study are performed to optimize recovery with minimal alumina content while maintaining required levels of other chemical components and a proper mix of ore types. In the present work the regionalisation of alumina in the ores has been studied independently, and its effects on global and local recoverable tonnage, as well as on alternative mining operations, have been evaluated. The global tonnage recovery curves for blocks (20m x 20m x 12m) obtained by simulation closely approximated the curves obtained theoretically using a change of support under the discretised Gaussian model. Variations in block size up to 80m x 20m x 12m did not affect the recovery, as the horizontal dimensions of the blocks are small in relation to the range of the variogram. A comparison of the local tonnage recovery curves obtained through multiple conditional simulations with those obtained by uniform conditioning of block grades on an estimate of the 100m x 100m x 12m panel grade reveals comparable results only in panels that are well conditioned and possess an ensemble simulation mean close to the ordinary kriged value for the panel. A study of simple alternative mining sequences on the conditionally simulated deposit shows that concentrating mining operations on a single bench enhances the fluctuation in the alumina values of the ore mined.
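
As a rough illustration of how global recovery curves of this kind are computed from simulated block grades (hypothetical variable names, grades and density; not the study's actual data or workflow), one can sweep an alumina cut-off over the block model and record the tonnage and mean grade of the retained blocks. Here alumina is the contaminant, so blocks at or below the cut-off are kept:

```python
import numpy as np

# Hypothetical inputs: simulated alumina grades (%) for each 20m x 20m x 12m block
# and a constant tonnage per block (uniform ore density assumed).
rng = np.random.default_rng(0)
al2o3 = rng.lognormal(mean=0.8, sigma=0.4, size=5000)   # stand-in for simulated grades
block_tonnes = 20 * 20 * 12 * 3.5                        # block volume x assumed density (t)

def recovery_curve(grades, tonnes_per_block, cutoffs):
    """Tonnage recovered and mean grade of blocks at or below each alumina cut-off."""
    rows = []
    for c in cutoffs:
        keep = grades <= c                      # low alumina is desirable here
        tonnage = keep.sum() * tonnes_per_block
        mean_grade = grades[keep].mean() if keep.any() else np.nan
        rows.append((c, tonnage, mean_grade))
    return rows

for c, t, g in recovery_curve(al2o3, block_tonnes, cutoffs=np.arange(1.5, 4.1, 0.5)):
    print(f"cutoff {c:.1f}% Al2O3: {t/1e6:.2f} Mt recovered, mean grade {g:.2f}%")
```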

Relevance:

10.00%

Publisher:

Abstract:

The increasing variability in device leakage has made the design of keepers for wide OR structures a challenging task. Conventional feedback keepers (CONV) can no longer improve the performance of wide dynamic gates for future technologies. In this paper, we propose an adaptive keeper technique called the rate sensing keeper (RSK) that enables faster switching and tracks the variation across different process corners. It can switch up to 1.9x faster (for 20 legs) than CONV and can scale up to 32 legs as against 20 legs for CONV in a 130-nm 1.2-V process. The delay tracking is within 8% across the different process corners. We demonstrate the circuit operation of RSK using a 32 x 8 register file implemented in an industrial 130-nm 1.2-V CMOS process. The performance of individual dynamic logic gates is also evaluated on chip for various keeper techniques. We show that the RSK technique gives superior performance compared to other alternatives such as the conditional keeper (CKP) and the current mirror-based keeper (LCR).

Relevance:

10.00%

Publisher:

Abstract:

We study the nature of excited states of long polyacene oligomers within a Pariser-Parr-Pople (PPP) Hamiltonian using the Symmetrized Density Matrix Renormalization Group (SDMRG) technique. We find a crossover between the two-photon state and the lowest dipole allowed excited state as the system size is increased from tetracene to pentacene. The spin-gap is the smallest gap. We also study the equilibrium geometries in the ground and excited states from bond orders and bond-bond correlation functions. We find that the Peierls instability in the ground state of polyacene is conditional, both from energetics and from structure factors computed from correlation functions.

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the problem of maximum margin classification given the moments of the class conditional densities and the false positive and false negative error rates. Using Chebyshev inequalities, the problem can be posed as a second order cone programming problem. The dual of the formulation leads to a geometric optimization problem, that of computing the distance between two ellipsoids, which is solved by an iterative algorithm. The formulation is extended to non-linear classifiers using kernel methods. The resulting classifiers are applied to the classification of unbalanced datasets with asymmetric misclassification costs. Experimental results on benchmark datasets show the efficacy of the proposed method.
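
The key step, in the spirit of the multivariate Chebyshev (Marshall-Olkin) bound used by such moment-based formulations, is that a lower bound on the probability of classifying a class with mean mu and covariance Sigma correctly turns into a second order cone constraint on the hyperplane (w, b). A sketch in generic notation, not necessarily the paper's exact formulation:

```latex
\[
\inf_{x \sim (\mu,\,\Sigma)} \Pr\bigl( w^{\top} x + b \ge 0 \bigr) \ge \eta
\;\Longleftrightarrow\;
w^{\top} \mu + b \;\ge\; \kappa(\eta)\,\bigl\lVert \Sigma^{1/2} w \bigr\rVert_2 ,
\qquad
\kappa(\eta) = \sqrt{\tfrac{\eta}{1-\eta}} ,
\]
```

where the infimum runs over all distributions with the stated first two moments; the right-hand side is a norm of a linear function of w, hence a second order cone constraint.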

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a novel Second Order Cone Programming (SOCP) formulation for large scale binary classification tasks. Assuming that the class conditional densities are mixture distributions, where each component of the mixture has a spherical covariance, the second order statistics of the components can be estimated efficiently using clustering algorithms like BIRCH. For each cluster, the second order moments are used to derive a second order cone constraint via a Chebyshev-Cantelli inequality. This constraint ensures that any data point in the cluster is classified correctly with a high probability. This leads to a large margin SOCP formulation whose size depends on the number of clusters rather than the number of training data points. Hence, the proposed formulation scales well for large datasets compared to state-of-the-art classifiers such as Support Vector Machines (SVMs). Experiments on real-world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in terms of training time and achieves similar accuracies.
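
A minimal end-to-end sketch of this pipeline under simplifying assumptions (well-separated synthetic blobs, a crude spherical covariance estimate per cluster, cvxpy as a stand-in SOCP solver, and a fixed probability level; none of these specifics come from the paper):

```python
import numpy as np
import cvxpy as cp
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

# Well-separated two-class data so the hard cone constraints stay feasible.
X, y = make_blobs(n_samples=4000, centers=[[-4.0] * 10, [4.0] * 10],
                  cluster_std=1.0, random_state=0)
y = 2 * y - 1                               # labels in {-1, +1}

eta = 0.9
kappa = np.sqrt(eta / (1.0 - eta))          # Chebyshev-Cantelli factor

w, b = cp.Variable(X.shape[1]), cp.Variable()
constraints = []
for label in (-1, +1):
    Xc = X[y == label]
    clusters = Birch(n_clusters=20).fit_predict(Xc)   # cluster each class separately
    for k in np.unique(clusters):
        pts = Xc[clusters == k]
        mu = pts.mean(axis=0)
        sigma = pts.std()                   # crude spherical covariance estimate
        # Cone constraint: this cluster lies on the correct side of the margin
        # with probability at least eta.
        constraints.append(label * (mu @ w + b) >= 1 + kappa * sigma * cp.norm(w, 2))

# One constraint per cluster (40 here) instead of one per training point (4000).
prob = cp.Problem(cp.Minimize(cp.norm(w, 2)), constraints)
prob.solve()
print("status:", prob.status, " margin 2/||w|| =", 2 / np.linalg.norm(w.value))
```

The number of cone constraints grows with the number of clusters rather than the number of training points, which is the source of the scaling advantage described in the abstract.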

Relevance:

10.00%

Publisher:

Abstract:

Many downscaling techniques have been developed in the past few years for projection of station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs) to assess the hydrological impacts of climate change. This article compares the performances of three downscaling methods, viz. conditional random field (CRF), K-nearest neighbour (KNN) and support vector machine (SVM) methods, in downscaling precipitation in the Punjab region of India, belonging to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool useful in terms of its ability to generalize and capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied for downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performances with respect to reproduction of various statistics such as dry and wet spell length distributions, daily rainfall distribution, and intersite correlations are examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian Global Climate Model (CGCM3) for three scenarios, viz. A1B, A2 and B1, is used for projection of future precipitation. The projections show a change in probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
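
An illustrative sketch of the analogue-type KNN resampling step described above, using made-up predictor and precipitation arrays (the actual predictor set, distance measure, weighting and K are whatever the paper used, not this toy's choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: large-scale predictors for each training day and the
# observed daily precipitation (mm) at a station on those days.
train_predictors = rng.normal(size=(3000, 8))
train_precip = rng.gamma(shape=0.6, scale=8.0, size=3000) * (rng.random(3000) < 0.4)

def knn_downscale(feature_vector, K=10):
    """Sample a precipitation value from the K nearest training days,
    weighting closer analogues more heavily (1/rank weights)."""
    dists = np.linalg.norm(train_predictors - feature_vector, axis=1)
    nearest = np.argsort(dists)[:K]
    weights = 1.0 / np.arange(1, K + 1)
    weights /= weights.sum()
    return float(rng.choice(train_precip[nearest], p=weights))

future_day = rng.normal(size=8)          # predictor vector taken from a GCM scenario run
print("downscaled precipitation:", knn_downscale(future_day), "mm")
```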

Relevance:

10.00%

Publisher:

Abstract:

We consider evolving exponential RGGs in one dimension and characterize the time dependent behavior of some of their topological properties. We consider two evolution models, studying one of them in detail while providing a summary of the results for the other. In the first model, the inter-nodal gaps evolve according to an exponential AR(1) process that makes the stationary distribution of the node locations exponential. For this model we obtain the one-step conditional connectivity probabilities and extend them to the k-step case. Both finite and asymptotic analyses are given. We then obtain the k-step connectivity probability conditioned on the network being disconnected. We also derive the pmf of the first passage time for a connected network to become disconnected. We then describe a random birth-death model where at each instant the node locations evolve according to an AR(1) process. In addition, a random node is allowed to die while giving birth to a node at another location. We derive properties similar to those above.
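
A Monte Carlo sketch of the first model, under one standard construction of an AR(1) process with exponential stationary marginals (the Gaver-Lewis EAR(1) process); all parameter values here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n_gaps, lam, rho, r = 20, 1.0, 0.6, 3.0   # gaps, exponential rate, AR(1) coefficient, range
trials = 20000

def ear1_step(gaps):
    """One step of the Gaver-Lewis EAR(1) process, which keeps Exp(lam) marginals:
    X_t = rho * X_{t-1} + eps_t, with eps_t ~ Exp(lam) w.p. 1 - rho and 0 otherwise."""
    innov = rng.exponential(1.0 / lam, size=gaps.shape)
    add = rng.random(gaps.shape) < 1.0 - rho
    return rho * gaps + np.where(add, innov, 0.0)

# The 1-D network is connected when every inter-nodal gap is at most the range r.
# Estimate the one-step conditional connectivity probability P(conn at t+1 | conn at t).
num = den = 0
for _ in range(trials):
    gaps = rng.exponential(1.0 / lam, size=n_gaps)   # start from the stationary law
    if gaps.max() <= r:
        den += 1
        num += ear1_step(gaps).max() <= r
print("P(connected at t+1 | connected at t) ~", num / den)
```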

Relevance:

10.00%

Publisher:

Abstract:

The Generalized Distributive Law (GDL) is a message passing algorithm which can efficiently solve a certain class of computational problems, and includes as special cases Viterbi's algorithm, the BCJR algorithm, the fast Fourier transform, and turbo and LDPC decoding algorithms. In this paper, GDL-based maximum-likelihood (ML) decoding of Space-Time Block Codes (STBCs) is introduced and a sufficient condition for an STBC to admit low GDL decoding complexity is given. Fast decoding and multigroup decoding are the two algorithms used in the literature to ML decode STBCs with low complexity; an algorithm that exploits the advantages of both is called Conditional ML (CML) decoding. It is shown in this paper that the GDL decoding complexity of any STBC is upper bounded by its CML decoding complexity, and that there exist codes for which the GDL complexity is strictly less than the CML complexity. Explicit examples of two such families of STBCs are given in this paper. Thus CML is in general suboptimal in reducing the ML decoding complexity of a code, and one should design codes with low GDL complexity rather than low CML complexity.
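
The distributive-law idea underlying GDL can be illustrated in a few lines: minimising a sum of local metrics by message passing over a factor graph instead of by exhaustive enumeration (a generic min-sum toy, not an STBC decoder):

```python
import numpy as np

rng = np.random.default_rng(3)
M = 8                                   # alphabet size per variable
f1 = rng.random((M, M))                 # local metric on (x1, x2)
f2 = rng.random((M, M))                 # local metric on (x2, x3)

# Brute force: O(M^3) evaluations of f1(x1, x2) + f2(x2, x3).
brute = min(f1[a, b] + f2[b, c] for a in range(M) for b in range(M) for c in range(M))

# GDL / min-sum message passing: push the minimisations inside, O(M^2).
msg_from_f1 = f1.min(axis=0)            # minimise over x1 for each value of x2
msg_from_f2 = f2.min(axis=1)            # minimise over x3 for each value of x2
gdl = (msg_from_f1 + msg_from_f2).min() # final minimisation over x2

assert np.isclose(brute, gdl)
print("minimum total metric:", gdl)
```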

Relevance:

10.00%

Publisher:

Abstract:

The present work involves a computational study of soot formation and transport in a laminar acetylene diffusion flame perturbed by a convecting line vortex. The topology of the soot contours (as in an earlier experimental work [4]) has been investigated. More soot was produced when the vortex was introduced from the air side than with a fuel side vortex, and the soot topography was more diffuse in the case of the air side vortex. The computational model was found to be in good agreement with the experimental work [4]. The computational simulation enabled a study of the various parameters affecting soot transport. Temperatures were found to be higher for the air side vortex than for the fuel side vortex. In the case of the fuel side vortex, the abundance of fuel in the vortex core resulted in stoichiometrically rich combustion in the core, a more discrete soot topography, and lower overall soot production. In the case of the air side vortex, the abundance of air in the core resulted in higher temperatures and a higher soot yield. Statistical techniques such as the probability density function, correlation coefficient, and conditional probability function were introduced to explain the transient dependence of soot yield and transport on parameters such as temperature and acetylene concentration.

Relevance:

10.00%

Publisher:

Abstract:

The present work involves a computational study of soot formation and transport in a laminar acetylene diffusion flame perturbed by a convecting line vortex. The topology of the soot contours (as in an earlier experimental work [4]) has been investigated. More soot was produced when the vortex was introduced from the air side than with a fuel side vortex, and the soot topography was more diffuse in the case of the air side vortex. The computational model was found to be in good agreement with the experimental work [4]. The computational simulation enabled a study of the various parameters affecting soot transport. Temperatures were found to be higher for the air side vortex than for the fuel side vortex. In the case of the fuel side vortex, the abundance of fuel in the vortex core resulted in stoichiometrically rich combustion in the core, a more discrete soot topography, and lower overall soot production. In the case of the air side vortex, the abundance of air in the core resulted in higher temperatures and a higher soot yield. Statistical techniques such as the probability density function, correlation coefficient, and conditional probability function were introduced to explain the transient dependence of soot yield and transport on parameters such as temperature and acetylene concentration.
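
As an illustration of the kind of statistics mentioned in the last sentence, with made-up arrays standing in for the simulated temperature and soot fields, one can compute the correlation coefficient and the mean soot level conditioned on temperature bins:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-ins for field values sampled over the computational domain at one instant.
temperature = rng.uniform(800.0, 2200.0, size=50000)                              # K
soot_fv = 1e-7 * np.exp((temperature - 1400.0) / 300.0) * rng.lognormal(0.0, 0.3, 50000)

# Correlation coefficient between temperature and soot volume fraction.
print("correlation coefficient:", np.corrcoef(temperature, soot_fv)[0, 1])

# Conditional mean of soot volume fraction given temperature (binned).
bins = np.arange(800.0, 2300.0, 100.0)
which = np.digitize(temperature, bins)
for i in range(1, len(bins)):
    in_bin = which == i
    if in_bin.any():
        print(f"T in [{bins[i-1]:.0f}, {bins[i]:.0f}) K: "
              f"mean soot fv = {soot_fv[in_bin].mean():.2e}")
```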

Relevance:

10.00%

Publisher:

Abstract:

The problem of designing good space-time block codes (STBCs) with low maximum-likelihood (ML) decoding complexity has gathered much attention in the literature. All the known low ML decoding complexity techniques utilize the same approach of exploiting either the multigroup decodable or the fast-decodable (conditionally multigroup decodable) structure of a code. We refer to this well-known technique of decoding STBCs as conditional ML (CML) decoding. In this paper, we introduce a new framework to construct ML decoders for STBCs based on the generalized distributive law (GDL) and the factor-graph-based sum-product algorithm. We say that an STBC is fast GDL decodable if the order of GDL decoding complexity of the code, with respect to the constellation size M, is strictly less than M^lambda, where lambda is the number of independent symbols in the STBC. We give sufficient conditions for an STBC to admit fast GDL decoding, and show that both multigroup and conditionally multigroup decodable codes are fast GDL decodable. For any STBC, whether fast GDL decodable or not, we show that the GDL decoding complexity is strictly less than the CML decoding complexity. For instance, for any STBC obtained from cyclic division algebras which is not multigroup or conditionally multigroup decodable, the GDL decoder provides about 12 times reduction in complexity compared to the CML decoder. Similarly, for the Golden code, which is conditionally multigroup decodable, the GDL decoder is only half as complex as the CML decoder.

Relevance:

10.00%

Publisher:

Abstract:

Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high density regions or clusters from individual class conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation is heavily dependent on the second-order statistics. However, this formulation, and in general such relaxations that depend on second-order moments, are susceptible to moment estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of a spherical normal distribution of clusters, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to that with true moments, even when moment estimates are erroneous. Results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.

Relevance:

10.00%

Publisher:

Abstract:

Transaction processing is a key constituent of the IT workload of commercial enterprises (e.g., banks, insurance companies). Even today, in many large enterprises, transaction processing is done by legacy "batch" applications, which run offline and process accumulated transactions. Developers acknowledge the presence of multiple loosely coupled pieces of functionality within individual applications. Identifying such pieces of functionality (which we call "services") is desirable for the maintenance and evolution of these legacy applications. This is a hard problem, which enterprises grapple with, and one without satisfactory automated solutions. In this paper, we propose a novel static-analysis-based solution to the problem of identifying services within transaction-processing programs. We provide a formal characterization of services in terms of control-flow and data-flow properties, which is well-suited to the idioms commonly exhibited by business applications. Our technique combines program slicing with the detection of conditional code regions to identify services in accordance with our characterization. A preliminary evaluation, based on a manual analysis of three real business programs, indicates that our approach can be effective in identifying useful services from batch applications.
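
A toy sketch of the intuition behind grouping conditional code regions into candidate services (purely illustrative; the paper's technique operates on real control-flow and data-flow representations together with program slicing, not on a tagged statement list like this):

```python
from collections import defaultdict

# Toy batch program: statements tagged with the top-level transaction-type
# predicate that guards them. Statements sharing a guard form one conditional
# code region, which we report as a candidate service.
statements = [
    ("read_record", None),            # shared prologue, outside any conditional region
    ("validate_header", None),
    ("debit_account", "TXN == 'DEBIT'"),
    ("write_debit_log", "TXN == 'DEBIT'"),
    ("credit_account", "TXN == 'CREDIT'"),
    ("write_credit_log", "TXN == 'CREDIT'"),
    ("write_summary", None),          # shared epilogue
]

def candidate_services(stmts):
    """Group guarded statements by their guarding predicate."""
    regions = defaultdict(list)
    for name, guard in stmts:
        if guard is not None:
            regions[guard].append(name)
    return dict(regions)

for guard, body in candidate_services(statements).items():
    print(f"candidate service guarded by {guard}: {body}")
```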

Relevance:

10.00%

Publisher:

Abstract:

The problem of designing good Space-Time Block Codes (STBCs) with low maximum-likelihood (ML) decoding complexity has gathered much attention in the literature. All the known low ML decoding complexity techniques utilize the same approach of exploiting either the multigroup decodable or the fast-decodable (conditionally multigroup decodable) structure of a code. We refer to this well-known technique of decoding STBCs as Conditional ML (CML) decoding. In [1], we introduced a framework to construct ML decoders for STBCs based on the Generalized Distributive Law (GDL) and the factor-graph based Sum-Product Algorithm, and showed that for two specific families of STBCs, the Toeplitz codes and the Overlapped Alamouti Codes (OACs), the GDL based ML decoders have strictly less complexity than the CML decoders. In this paper, we introduce a `traceback' step to the GDL decoding algorithm of STBCs, which enables a roughly 4 times reduction in the complexity of the GDL decoders proposed in [1]. Utilizing this complexity reduction from `traceback', we then show that for any STBC (not just the Toeplitz and Overlapped Alamouti Codes), the GDL decoding complexity is strictly less than the CML decoding complexity. For instance, for any STBC obtained from Cyclic Division Algebras that is not multigroup or conditionally multigroup decodable, the GDL decoder provides approximately 12 times reduction in complexity compared to the CML decoder. Similarly, for the Golden code, which is conditionally multigroup decodable, the GDL decoder is only about half as complex as the CML decoder.