895 results for Machine shops -- Automation


Relevance: 20.00%

Abstract:

Although many sparse recovery algorithms have been proposed recently in compressed sensing (CS), it is well known that the performance of any sparse recovery algorithm depends on several parameters, such as the dimension of the sparse signal, the level of sparsity, and the measurement noise power. It has been observed that satisfactory performance of sparse recovery algorithms requires a minimum number of measurements, and this minimum differs from algorithm to algorithm. In many applications the number of measurements is unlikely to meet this requirement, so any scheme that improves performance with fewer measurements is of significant interest in CS. Empirically, the performance of sparse recovery algorithms has also been observed to depend on the underlying statistical distribution of the nonzero elements of the signal, which may not be known a priori in practice. Interestingly, the performance degradation of sparse recovery algorithms in these cases does not always imply a complete failure. In this paper, we study this scenario and show that by fusing the estimates of multiple sparse recovery algorithms, which work on different principles, we can improve sparse signal recovery. We present a theoretical analysis and derive sufficient conditions under which the proposed schemes improve performance. We demonstrate the advantage of the proposed methods through numerical simulations on both synthetic and real signals.
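
One simple way to realize such a fusion is sketched below (a minimal sketch on synthetic data; OMP and LASSO from scikit-learn stand in for two recovery algorithms that work on different principles): take the union of the supports returned by the individual algorithms and re-estimate the signal by least squares restricted to that fused support. The specific fusion rule shown is only an illustration of the general idea, not necessarily the exact scheme analysed in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 200, 60, 8                        # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x + 0.01 * rng.standard_normal(m)   # noisy compressed measurements

# Estimates from two algorithms that work on different principles.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, y)
lasso = Lasso(alpha=0.01, max_iter=10000).fit(A, y)

# Fuse: union of the two supports, then least squares on the fused support.
support = np.union1d(np.flatnonzero(omp.coef_), np.flatnonzero(lasso.coef_))
coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
x_fused = np.zeros(n)
x_fused[support] = coef

for name, est in [("OMP", omp.coef_), ("Lasso", lasso.coef_), ("Fused", x_fused)]:
    print(name, np.linalg.norm(est - x) / np.linalg.norm(x))
```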

Relevance: 20.00%

Abstract:

The maximum entropy approach to classification is well studied in applied statistics and machine learning, and almost all the methods in the literature are discriminative in nature. In this paper, we introduce a generative maximum entropy classification method with feature selection for large-dimensional data such as text datasets. To tackle the curse of dimensionality of large datasets, we employ the conditional independence assumption (Naive Bayes) and perform feature selection simultaneously, by enforcing `maximum discrimination' between the estimated class-conditional densities. For two-class problems, the proposed method uses the Jeffreys (J) divergence to discriminate between the class-conditional densities. To extend our method to the multi-class case, we propose a completely new approach based on a multi-distribution divergence: we replace the Jeffreys divergence by the Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. To reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS(GM)) based on the AM-GM inequality. We show that the resulting divergence is a natural generalization of the Jeffreys divergence to multiple distributions. On the theoretical side, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using the J-divergence emerges naturally in binary classification. The performance of the proposed algorithms, and comparative studies, are demonstrated on large-dimensional text and gene expression datasets, showing that our methods scale very well with large-dimensional data.
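
A minimal sketch of the two-class feature-selection step follows (hypothetical helper names; binary bag-of-words features assumed): under the Naive Bayes assumption each feature's class-conditional density is a Bernoulli distribution, each feature is scored by the Jeffreys divergence between its two class-conditional Bernoullis, and the top-scoring features are retained. The multi-class JS variant would only change this per-feature score.

```python
import numpy as np

def jeffreys_bernoulli(p, q, eps=1e-9):
    """Jeffreys (symmetrised KL) divergence between two Bernoulli distributions."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    kl_pq = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    kl_qp = q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))
    return kl_pq + kl_qp

def select_features(X, y, n_keep):
    """Score features by the J-divergence between the two class-conditional
    Bernoulli densities (Naive Bayes assumption) and keep the best ones."""
    X0, X1 = X[y == 0], X[y == 1]
    p0 = (X0.sum(axis=0) + 1.0) / (len(X0) + 2.0)   # Laplace-smoothed estimates
    p1 = (X1.sum(axis=0) + 1.0) / (len(X1) + 2.0)
    scores = jeffreys_bernoulli(p0, p1)
    return np.argsort(scores)[::-1][:n_keep]

# Toy usage: 100 documents, 500 binary word features; class 1 uses the first
# 20 words more often, so those should dominate the selected set.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)
X = (rng.random((100, 500)) < 0.1).astype(float)
X[y == 1, :20] = (rng.random((int((y == 1).sum()), 20)) < 0.5).astype(float)
print(select_features(X, y, n_keep=10))
```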

Relevance: 20.00%

Abstract:

Multi-GPU machines are being increasingly used in high-performance computing. Each GPU in such a machine has its own memory and does not share the address space either with the host CPU or with other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. Existing works that propose to automate data allocation for GPUs have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs, and scalability. We propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, called the Bounding-Box-based Memory Manager (BBMM). At runtime, BBMM performs standard set operations such as union, intersection, and difference, and finds subset and superset relations, on hyperrectangular regions of array data (bounding boxes). It uses these operations, along with some compiler assistance, to identify, allocate, and manage the data required by applications in terms of disjoint bounding boxes. This allows it to (1) allocate exactly or nearly as much data as is required by the computations running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, maximize utilization of the combined memory on multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling scheme, whether static or dynamic. Experiments on a four-GPU machine with various scientific programs showed that BBMM reduces data allocations on each GPU by up to 75% compared to current allocation schemes, yields performance of at least 88% of manually written code, and allows excellent weak scaling.
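
A minimal sketch of the bounding-box operations such a manager relies on (hypothetical class and method names; this is not the BBMM implementation itself): each box is a tuple of per-dimension inclusive intervals, and the "union" used here is the smallest enclosing box, as is usual for bounding-box approximations.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class BoundingBox:
    """Hyperrectangular region of an array: one (low, high) interval per
    dimension, inclusive on both ends."""
    extents: Tuple[Tuple[int, int], ...]

    def contains(self, other: "BoundingBox") -> bool:
        return all(lo <= olo and ohi <= hi
                   for (lo, hi), (olo, ohi) in zip(self.extents, other.extents))

    def intersect(self, other: "BoundingBox") -> Optional["BoundingBox"]:
        out = []
        for (lo, hi), (olo, ohi) in zip(self.extents, other.extents):
            lo2, hi2 = max(lo, olo), min(hi, ohi)
            if lo2 > hi2:
                return None                     # disjoint in this dimension
            out.append((lo2, hi2))
        return BoundingBox(tuple(out))

    def hull(self, other: "BoundingBox") -> "BoundingBox":
        """Smallest bounding box covering both operands (conservative union)."""
        return BoundingBox(tuple((min(lo, olo), max(hi, ohi))
                                 for (lo, hi), (olo, ohi)
                                 in zip(self.extents, other.extents)))

    def size(self) -> int:
        n = 1
        for lo, hi in self.extents:
            n *= hi - lo + 1
        return n

# Data needed by two tiles of a 2-D array: reuse the overlap, allocate the hull once.
a = BoundingBox(((0, 511), (0, 255)))
b = BoundingBox(((256, 767), (0, 255)))
print(a.intersect(b), a.hull(b).size(), a.contains(b))
```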

Relevance: 20.00%

Abstract:

In this paper we establish that the Lovász theta function on a graph can be restated as a kernel learning problem. We introduce the notion of SVM-theta graphs, on which the Lovász theta function can be approximated well by a support vector machine (SVM). We show that Erdős-Rényi random G(n, p) graphs are SVM-theta graphs for log^4(n)/n <= p < 1. Even if we embed a large clique of size Theta(sqrt(np/(1-p))) in a G(n, p) graph, the resulting graph still remains an SVM-theta graph. This immediately suggests an SVM-based algorithm for recovering a large planted clique in random graphs. Associated with the theta function is the notion of orthogonal labellings. We introduce common orthogonal labellings, which extend the idea of orthogonal labellings to multiple graphs. This allows us to propose a multiple kernel learning (MKL) based solution that is capable of identifying a large common dense subgraph in multiple graphs. In both the planted clique case and the common subgraph detection problem, the proposed solutions beat the state of the art by an order of magnitude.
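
A minimal sketch of the SVM side of this connection is given below, under stated assumptions: the labelling kernel is taken as K = I + A/rho with rho >= |lambda_min(A)|, and the theta-like quantity is the one-class SVM value omega(K) = max_{alpha >= 0} 2*sum(alpha) - alpha'K alpha, solved here by plain projected gradient ascent. The exact definitions and constants in the paper may differ; this is only an illustration of computing a theta-like value via an SVM-style quadratic program.

```python
import numpy as np

def svm_theta(adj, n_iter=5000):
    """Approximate a theta-like quantity for the graph via a one-class SVM:
    omega(K) = max_{alpha >= 0} 2*sum(alpha) - alpha' K alpha,
    with the labelling kernel K = I + A/rho, rho >= |lambda_min(A)|
    (illustrative projected-gradient solver, not the paper's algorithm)."""
    A = adj.astype(float)
    rho = abs(np.linalg.eigvalsh(A)[0]) + 1e-9     # |most negative eigenvalue|
    K = np.eye(len(A)) + A / rho                   # positive semidefinite labelling
    step = 1.0 / (2.0 * np.linalg.eigvalsh(K)[-1])
    alpha = np.zeros(len(A))
    for _ in range(n_iter):
        alpha = np.maximum(0.0, alpha + step * (2.0 - 2.0 * K @ alpha))
    return 2.0 * alpha.sum() - alpha @ K @ alpha

# Erdos-Renyi G(n, p) example; for an empty graph the value equals n and for a
# complete graph it approaches 1, mirroring the Lovasz theta function.
rng = np.random.default_rng(0)
n, p = 100, 0.5
A = np.triu((rng.random((n, n)) < p).astype(float), 1)
A = A + A.T
print(svm_theta(A))
```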

Relevance: 20.00%

Abstract:

Climate change impact assessment studies involve downscaling large-scale atmospheric predictor variables (LSAPVs) simulated by general circulation models (GCMs) to site-scale meteorological variables. This article presents a least-squares support vector machine (LS-SVM) based methodology for multi-site downscaling of maximum and minimum daily temperature series. The methodology involves (1) delineating the sites in the study area into clusters based on the correlation structure of the predictands, (2) downscaling the LSAPVs to monthly time series of the predictands at a representative site identified in each cluster, (3) translating the downscaled information in each cluster from the representative site to the other sites using LS-SVM inter-site regression relationships, and (4) disaggregating the information at each site from the monthly to the daily time scale using a k-nearest-neighbour disaggregation methodology. The effectiveness of the methodology is demonstrated by application to data pertaining to four sites in the catchment of the Beas river basin, India. Simulations of the Canadian coupled global climate model (CGCM3.1/T63) for four IPCC SRES scenarios, namely A1B, A2, B1 and COMMIT, were downscaled to future projections of the predictands in the study area. Comparison of the results with those based on the recently proposed multivariate multiple linear regression (MMLR) downscaling method and the multi-site multivariate statistical downscaling (MMSD) method indicates that the proposed method is promising and can be considered a feasible choice in statistical downscaling studies. The performance of the method in downscaling daily minimum temperature was found to be better than that in downscaling daily maximum temperature. Results indicate an increase in the annual average maximum and minimum temperatures at all the sites for the A1B, A2 and B1 scenarios. The projected increment is highest for the A2 scenario, followed by the A1B, B1 and COMMIT scenarios. Projections, in general, indicate an increase in mean monthly maximum and minimum temperatures during January to February and October to December.
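
A minimal sketch of steps (1)-(3) follows (synthetic data; scikit-learn's kernel ridge regression stands in for LS-SVM regression, to which it is closely related; step (4), the k-nearest-neighbour daily disaggregation, is omitted; the choice of the first site of each cluster as its representative is purely illustrative).

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_months, n_sites, n_predictors = 360, 4, 10
X = rng.standard_normal((n_months, n_predictors))        # large-scale predictors (GCM)
T = rng.standard_normal((n_months, n_sites)) + X[:, :1]  # monthly temperatures at the sites

# (1) Delineate sites into clusters from the correlation structure of the predictand.
dist = 1.0 - np.corrcoef(T.T)
link = linkage(dist[np.triu_indices(n_sites, 1)], method="average")
clusters = fcluster(link, t=2, criterion="maxclust")

# (2) Downscale predictors to a representative site of each cluster and
# (3) translate to the other sites with inter-site regressions.
models = {}
for c in np.unique(clusters):
    sites = np.flatnonzero(clusters == c)
    rep = sites[0]                                        # representative site (illustrative)
    to_rep = KernelRidge(kernel="rbf", alpha=1.0).fit(X, T[:, rep])
    inter = {s: KernelRidge(kernel="rbf", alpha=1.0).fit(T[:, [rep]], T[:, s])
             for s in sites[1:]}
    models[c] = (rep, to_rep, inter)

# Apply to future GCM output; step (4), daily disaggregation, is omitted here.
X_future = rng.standard_normal((120, n_predictors))
for c, (rep, to_rep, inter) in models.items():
    t_rep = to_rep.predict(X_future)
    projections = {rep: t_rep, **{s: m.predict(t_rep[:, None]) for s, m in inter.items()}}
    print("cluster", c, {int(s): round(float(v.mean()), 2) for s, v in projections.items()})
```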

Relevance: 20.00%

Abstract:

Today's programming languages are supported by powerful third-party APIs. For a given application domain, it is common to have many competing APIs that provide similar functionality. Programmer productivity therefore depends heavily on the programmer's ability to discover suitable APIs both during an initial coding phase, as well as during software maintenance. The aim of this work is to support the discovery and migration of math APIs. Math APIs are at the heart of many application domains ranging from machine learning to scientific computations. Our approach, called MATHFINDER, combines executable specifications of mathematical computations with unit tests (operational specifications) of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code comprised of API methods to compute the expression by mining unit tests of the API methods. We present a sequential version of our unit test mining algorithm and also design a more scalable data-parallel version. We perform extensive evaluation of MATHFINDER (1) for API discovery, where math algorithms are to be implemented from scratch and (2) for API migration, where client programs utilizing a math API are to be migrated to another API. We evaluated the precision and recall of MATHFINDER on a diverse collection of math expressions, culled from algorithms used in a wide range of application areas such as control systems and structural dynamics. In a user study to evaluate the productivity gains obtained by using MATHFINDER for API discovery, the programmers who used MATHFINDER finished their programming tasks twice as fast as their counterparts who used the usual techniques like web and code search, IDE code completion, and manual inspection of library documentation. For the problem of API migration, as a case study, we used MATHFINDER to migrate Weka, a popular machine learning library. Overall, our evaluation shows that MATHFINDER is easy to use, provides highly precise results across several math APIs and application domains even with a small number of unit tests per method, and scales to large collections of unit tests.
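
A minimal sketch of the operational matching at the core of such a tool (hypothetical names; NumPy functions stand in for an arbitrary third-party math API, and random inputs stand in for mined unit tests): evaluate the executable specification of the expression and each candidate API method on the same inputs and report the methods whose outputs agree.

```python
import numpy as np

# Target: a math expression given as an executable specification.
target = lambda a, b: np.sqrt(np.sum((a - b) ** 2))

# Candidate API methods (NumPy stands in for an arbitrary third-party math API).
candidates = {
    "numpy.linalg.norm(a - b)": lambda a, b: np.linalg.norm(a - b),
    "numpy.dot(a, b)":          lambda a, b: np.dot(a, b),
    "numpy.hypot(a, b).sum()":  lambda a, b: np.hypot(a, b).sum(),
}

def matches(spec, method, trials=20, tol=1e-8):
    """Operational matching: do the spec and the method agree on random inputs?"""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        a, b = rng.standard_normal(5), rng.standard_normal(5)
        if not np.allclose(spec(a, b), method(a, b), atol=tol):
            return False
    return True

print([name for name, m in candidates.items() if matches(target, m)])
# -> ['numpy.linalg.norm(a - b)']  (the Euclidean distance)
```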

Relevance: 20.00%

Abstract:

The problem of bipartite ranking, where instances are labeled positive or negative and the goal is to learn a scoring function that minimizes the probability of mis-ranking a pair of positive and negative instances (or equivalently, that maximizes the area under the ROC curve), has been widely studied in recent years. A dominant theoretical and algorithmic framework for the problem has been to reduce bipartite ranking to pairwise classification; in particular, it is well known that the bipartite ranking regret can be formulated as a pairwise classification regret, which in turn can be upper bounded using usual regret bounds for classification problems. Recently, Kotlowski et al. (2011) showed regret bounds for bipartite ranking in terms of the regret associated with balanced versions of the standard (non-pairwise) logistic and exponential losses. In this paper, we show that such (non-pairwise) surrogate regret bounds for bipartite ranking can be obtained in terms of a broad class of proper (composite) losses that we term as strongly proper. Our proof technique is much simpler than that of Kotlowski et al. (2011), and relies on properties of proper (composite) losses as elucidated recently by Reid and Williamson (2010, 2011) and others. Our result yields explicit surrogate bounds (with no hidden balancing terms) in terms of a variety of strongly proper losses, including for example logistic, exponential, squared and squared hinge losses as special cases. An important consequence is that standard algorithms minimizing a (non-pairwise) strongly proper loss, such as logistic regression and boosting algorithms (assuming a universal function class and appropriate regularization), are in fact consistent for bipartite ranking; moreover, our results allow us to quantify the bipartite ranking regret in terms of the corresponding surrogate regret. We also obtain tighter surrogate bounds under certain low-noise conditions via a recent result of Clemencon and Robbiano (2011).
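
The practical consequence is easy to illustrate (a minimal sketch on synthetic data): fit ordinary logistic regression, which minimizes a non-pairwise strongly proper loss, and rank by its real-valued scores; the resulting AUC is close to that of the Bayes-optimal scorer, which is what consistency for bipartite ranking delivers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 5000, 10
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
p = 1.0 / (1.0 + np.exp(-X @ w_true))          # true class-probability function
y = (rng.random(n) < p).astype(int)

# The logistic loss is (non-pairwise) strongly proper: minimizing it and then
# ranking by the learned scores is consistent for bipartite ranking / AUC.
clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.decision_function(X)

print("AUC of learned scorer      :", roc_auc_score(y, scores))
print("AUC of Bayes-optimal scorer:", roc_auc_score(y, X @ w_true))
```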

Relevance: 20.00%

Abstract:

Multilevel inverters with a dodecagonal (12-sided polygon) voltage space vector structure have advantages such as complete elimination of the fifth and seventh harmonics, reduced electromagnetic interference, lower device voltage ratings, lower switching frequency, and an extended linear modulation range, making them a viable option for high-power medium-voltage drives. This paper proposes two power circuit topologies capable of generating a multilevel dodecagonal voltage space vector structure with symmetric triangles (for the first time) with a minimum number of dc-link power supplies and floating-capacitor H-bridges. The first topology is composed of two hybrid cascaded five-level inverters connected to either side of an open-end winding induction machine; each inverter consists of a three-level neutral-point-clamped inverter cascaded with an isolated H-bridge, making it a five-level inverter. The second topology is for a normal induction motor. Both circuit topologies have inherent capacitor balancing for the floating H-bridges at all modulation indices, including transient operation. The proposed topologies do not require any precharging circuitry for startup. A simple pulsewidth modulation timing calculation method for space vector modulation is also presented. Owing to the symmetric arrangement of congruent triangles within the voltage space vector structure, the timing computation requires only the sampled reference values and does not require any offline computation, lookup tables, or angle computation. Experimental results for steady-state and transient operation are presented to validate the proposed concept.
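
For context, a generic dwell-time computation for space vector modulation over a triangular region is sketched below (illustrative only; the triangle vertices are made up and this is not the specific timing method of the paper): for a sampled reference vector inside a triangle with vertices v1, v2, v3, the dwell times follow from v_ref*Ts = t1*v1 + t2*v2 + t3*v3 together with t1 + t2 + t3 = Ts.

```python
import numpy as np

def dwell_times(v_ref: complex, v1: complex, v2: complex, v3: complex, Ts: float):
    """Dwell times t1, t2, t3 for the three vertices of the triangle enclosing
    the sampled reference vector, from
        v_ref*Ts = t1*v1 + t2*v2 + t3*v3,   t1 + t2 + t3 = Ts.
    Real and imaginary parts give two equations; the time constraint gives the third."""
    A = np.array([[v1.real, v2.real, v3.real],
                  [v1.imag, v2.imag, v3.imag],
                  [1.0,     1.0,     1.0]])
    b = np.array([v_ref.real * Ts, v_ref.imag * Ts, Ts])
    return np.linalg.solve(A, b)

# Example: a triangle of a dodecagonal structure, outer vertices 30 degrees apart
# and one illustrative inner-layer vertex.
v1, v2 = np.exp(1j * 0.0), np.exp(1j * np.pi / 6)
v3 = 0.5 * np.exp(1j * np.pi / 12)
t = dwell_times(0.7 * np.exp(1j * np.pi / 12), v1, v2, v3, Ts=100e-6)
print(t, t.sum())   # all non-negative when v_ref lies inside the triangle
```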

Relevance: 20.00%

Abstract:

Models of river flow time series are essential for efficient management of a river basin; they help policy makers develop efficient water utilization strategies that maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data, and the use of machine learning techniques such as support vector regression and neural network models is gaining popularity. In this paper we compare the performance of these techniques by applying them to a long-term time series of the inflows into the Krishnaraja Sagar reservoir (KRS) from three tributaries of the river Cauvery. Flow data over a period of 30 years from three observation points established in the upper Cauvery river sub-basin are analyzed to estimate their contribution to KRS. Specifically, the ANN model uses a multi-layer feed-forward network trained with a back-propagation algorithm, and support vector regression is used with an epsilon-insensitive loss function. Auto-regressive moving average models are also applied to the same data. The performance of the different techniques is compared using metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE) and Nash-Sutcliffe efficiency (NSE).
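
A minimal sketch of such a comparison (synthetic seasonal inflow series; an autoregressive formulation with three lagged flows as predictors; scikit-learn's SVR and MLPRegressor stand in for the SVR and ANN models, and the ARMA baseline is omitted):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def metrics(obs, sim):
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return {"RMSE": round(rmse, 2),
            "NRMSE": round(rmse / (obs.max() - obs.min()), 3),
            "r": round(np.corrcoef(obs, sim)[0, 1], 3),
            "NSE": round(1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2), 3)}

# Synthetic monthly inflow series with a seasonal cycle; flow is predicted from
# the previous three months (an autoregressive formulation).
rng = np.random.default_rng(0)
t = np.arange(360)
flow = 100 + 60 * np.sin(2 * np.pi * t / 12) + 10 * rng.standard_normal(360)
X = np.column_stack([flow[i:i - 3] for i in range(3)])   # lags t-3, t-2, t-1
y = flow[3:]
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

models = {
    "SVR (epsilon-insensitive loss)":
        make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1)),
    "ANN (feed-forward MLP, backpropagation)":
        make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(20,),
                                                     max_iter=5000, random_state=0)),
}
for name, model in models.items():
    print(name, metrics(y_te, model.fit(X_tr, y_tr).predict(X_te)))
```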

Relevance: 20.00%

Abstract:

Predicting clinical response to anticancer drugs remains a major challenge in cancer treatment. Emerging reports indicate that the tumour microenvironment and heterogeneity can limit the predictive power of current biomarker-guided strategies for chemotherapy. Here we report the engineering of personalized tumour ecosystems that contextually conserve the tumour heterogeneity, and phenocopy the tumour microenvironment using tumour explants maintained in defined tumour grade-matched matrix support and autologous patient serum. The functional response of tumour ecosystems, engineered from 109 patients, to anticancer drugs, together with the corresponding clinical outcomes, is used to train a machine learning algorithm; the learned model is then applied to predict the clinical response in an independent validation group of 55 patients, where we achieve 100% sensitivity in predictions while keeping specificity in a desired high range. The tumour ecosystem and algorithm, together termed the CANScript technology, can emerge as a powerful platform for enabling personalized medicine.
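
A schematic sketch of the prediction-and-evaluation logic described above (entirely synthetic data and a generic classifier; not the CANScript model): train on the functional-response cohort, choose a probability threshold that preserves full sensitivity, and report sensitivity and specificity on the independent validation group.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((164, 12))            # hypothetical assay-derived features
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(164) > 0).astype(int)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=109, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

def sens_spec(y_true, y_score, threshold):
    pred = (y_score >= threshold).astype(int)
    return (np.mean(pred[y_true == 1] == 1),  # sensitivity
            np.mean(pred[y_true == 0] == 0))  # specificity

# Choose the largest threshold that keeps sensitivity at 100% on the training
# cohort, then report both figures on the independent validation group.
train_prob = clf.predict_proba(X_tr)[:, 1]
candidates = [t for t in np.linspace(0, 1, 101)
              if sens_spec(y_tr, train_prob, t)[0] == 1.0]
threshold = max(candidates)
print("validation (sensitivity, specificity):",
      sens_spec(y_va, clf.predict_proba(X_va)[:, 1], threshold))
```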

Relevance: 20.00%

Abstract:

This letter presents an accurate steady-state phasor model for a doubly fed induction machine. The drawback of the existing steady-state phasor model is discussed; in particular, the inconsistency of the existing equivalent model with respect to reactive power flows when operating at supersynchronous speeds is highlighted. The relevant mathematical basis for the proposed model is presented, and its validity is illustrated on a 2-MW doubly fed induction machine.
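
For reference, the standard per-phase steady-state equations that such an equivalent circuit encodes are shown below (rotor quantities referred to the stator, slip s negative at supersynchronous speeds; sign conventions vary between texts, and the letter's specific correction to the model is not reproduced here):

$$
\bar V_s = (R_s + jX_{ls})\,\bar I_s + jX_m(\bar I_s + \bar I_r), \qquad
\frac{\bar V_r}{s} = \left(\frac{R_r}{s} + jX_{lr}\right)\bar I_r + jX_m(\bar I_s + \bar I_r), \qquad
s = \frac{\omega_s - \omega_r}{\omega_s}.
$$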

Relevance: 20.00%

Abstract:

In practical situations, it is difficult to model exact contact conditions due to the challenges involved in estimating the contact forces and the relative displacements between the contacting bodies. Sliding and seizure conditions were simulated on a first-of-its-kind displacement-controlled system. Self-mated stainless steels have been investigated in detail. Categorization of the contact conditions prevailing at the contact interface has been carried out based on the variation of the coefficient of friction with the number of cycles, and on three-dimensional fretting loops. Surface and subsurface micro-cracks have been observed, and their characteristics show a strong dependence on the loading conditions. The existence of shear bands in the subsurface region has been observed for high-strain, low-strain-rate loading conditions. The study also includes the influence of initial surface roughness on the damage under two extreme contact conditions. (C) 2013 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Abstract:

Purpose: The composition of coronary artery plaque is known to play a critical role in heart attack. While calcified plaque can easily be diagnosed by conventional CT, conventional CT fails to distinguish between fibrous and lipid-rich plaques. In the present paper, the authors discuss the experimental techniques and obtain a numerical algorithm by which the electron density (ρ_e) and the effective atomic number (Z_eff) can be obtained from dual-energy computed tomography (DECT) data. The idea is to use this inversion method to characterize and distinguish between lipid and fibrous coronary artery plaques. Methods: For the purpose of calibrating the CT machine, the authors prepare aqueous samples whose calculated values of (ρ_e, Z_eff) lie in the range 2.65 x 10^23 <= ρ_e <= 3.64 x 10^23 cm^-3 and 6.80 <= Z_eff <= 8.90. The authors fill the phantom with these known samples and experimentally determine HU(V1) and HU(V2), with V1, V2 = 100 and 140 kVp, for the same pixels, and thus determine the coefficients of inversion that allow (ρ_e, Z_eff) to be determined from the DECT data. The HU(100) and HU(140) for the coronary artery plaque are obtained by filling the channel of the coronary artery with a viscous solution of methyl cellulose in water containing 2% contrast. These (ρ_e, Z_eff) values of the coronary artery plaque are used for its characterization on the basis of theoretical models of the atomic composition of the plaque materials. The results are compared with the histopathological report. Results: The authors find that the calibration gives ρ_e with an accuracy of 3.5%, while Z_eff is found within 1% of the actual value, with 95% confidence. The HU(100) and HU(140) are found to be considerably different for the same plaque at the same position, and there is a linear trend between these two HU values. It is noted that pure lipid-type plaques are practically nonexistent, and microcalcification, as observed in histopathology, has to be taken into account to explain the nature of the observed (ρ_e, Z_eff) data. This also enables the composition of the plaque to be judged in terms of a basic model that considers the plaque to be composed of fibres, lipids, and microcalcification. Conclusions: This simple and reliable method has the potential to serve as an effective modality to investigate the composition of noncalcified coronary artery plaques and thus help in their characterization. In this inversion method, (ρ_e, Z_eff) of the scanned sample can be found by eliminating the effects of the CT machine and by ensuring that the determination of the two unknowns (ρ_e, Z_eff) does not interfere with each other, and the nature of the plaque can be identified in terms of a three-component model. (C) 2015 American Association of Physicists in Medicine.
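
A minimal sketch of the calibration-and-inversion idea (all numbers are made up; an affine inversion model in the two HU values is assumed purely for illustration and is not necessarily the parametrization used in the paper): fit the coefficients of inversion by least squares on the calibration samples, then apply them to the dual-energy HU pair of an unknown voxel.

```python
import numpy as np

# Calibration samples with known (rho_e [1e23/cm^3], Z_eff) and their measured
# HU at 100 kVp and 140 kVp (values here are invented for illustration).
rho_e  = np.array([2.65, 2.90, 3.10, 3.35, 3.64, 3.00])
z_eff  = np.array([6.80, 7.20, 7.60, 8.10, 8.90, 7.40])
hu_100 = np.array([-60.0, -10.0, 35.0, 90.0, 160.0, 20.0])
hu_140 = np.array([-50.0,  -5.0, 30.0, 75.0, 130.0, 15.0])

# Assumed affine inversion model: each unknown is an affine function of the two
# HU values; the coefficients of inversion are obtained by least squares.
H = np.column_stack([hu_100, hu_140, np.ones_like(hu_100)])
coef_rho, *_ = np.linalg.lstsq(H, rho_e, rcond=None)
coef_z, *_   = np.linalg.lstsq(H, z_eff, rcond=None)

def invert(hu100, hu140):
    h = np.array([hu100, hu140, 1.0])
    return h @ coef_rho, h @ coef_z

# Apply the inversion to a plaque voxel's dual-energy HU pair.
print("(rho_e, Z_eff) =", invert(40.0, 33.0))
```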

Relevance: 20.00%

Abstract:

This paper presents a simple hysteretic method to obtain the energy required to operate the gate drive, sensors, and other circuits within non-neutral ac switches intended for use in load-automated buildings. The proposed method features a switch-mode, low-part-count, self-powered MOSFET ac switch that achieves efficiency and load-current THD figures comparable to those of an externally gate-driven switch built using similar MOSFETs. The fundamental operation of the method is explained in detail, followed by the modifications required for a practical implementation. Design rules that allow the method to accommodate a wide range of single-phase loads from 10 VA to 1 kVA are discussed, along with an efficiency enhancement feature based on inherent MOSFET characteristics. The limitations and side effects of the method are also discussed according to their levels of severity. Finally, experimental results obtained using a prototype sensor switch are presented, along with a performance comparison of the prototype with an externally gate-driven MOSFET switch.

Relevance: 20.00%

Abstract:

This letter presents an alternate proof for the steady-state equivalent circuit of a doubly fed induction machine operating at supersynchronous speeds. The spatial orientation of rotating magnetic fields is used to validate the conjugation of rotor side quantities arising in supersynchronous mode. The equivalent circuit is further validated using dynamic simulations of a stand-alone machine.