Abstract:
Recently, Ebrahimi and Fragouli proposed an algorithm to construct scalar network codes over small fields (and vector network codes of small lengths) satisfying multicast constraints in a given single-source, acyclic network. The contribution of this paper is twofold. Primarily, we extend the scalar network coding algorithm of Ebrahimi and Fragouli (henceforth referred to as the EF algorithm) to block network-error correction. Existing construction algorithms for block network-error correcting codes require a rather large field size, which grows with the size of the network and the number of sinks, and can therefore be prohibitive in large networks. We give an algorithm which, starting from a given network-error correcting code, obtains another network code over a small field with the same error-correcting capability as the original code. Our secondary contribution is an improvement to the EF algorithm itself. The major step in the EF algorithm is to find a least-degree irreducible polynomial that is coprime to another polynomial of large degree. We suggest an alternate method to compute this coprime polynomial, which is faster than the brute-force method in the work of Ebrahimi and Fragouli.
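The coprime-polynomial step can be illustrated with a small sketch of the brute-force baseline (the slow method the abstract says is improved upon, not the faster alternative proposed): represent GF(2) polynomials as integer bitmasks and search, in order of degree, for an irreducible polynomial whose gcd with a given f is 1.

```python
def deg(p):
    """Degree of a GF(2) polynomial stored as an integer bitmask (-1 for 0)."""
    return p.bit_length() - 1

def gf2_mod(a, b):
    """Remainder of a modulo b, both GF(2) polynomials (b != 0)."""
    while deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def gf2_gcd(a, b):
    """Euclidean algorithm over GF(2)[x]."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_irreducible(p):
    """Trial division by every polynomial of degree 1..deg(p)//2."""
    return all(gf2_mod(p, d) != 0
               for d in range(2, 1 << (deg(p) // 2 + 1)))

def least_coprime_irreducible(f):
    """Smallest irreducible polynomial (by degree, then value) coprime to f."""
    p = 2  # start from x
    while True:
        if is_irreducible(p) and gf2_gcd(f, p) == 1:
            return p
        p += 1

# f = x(x + 1)(x^2 + x + 1) = x^4 + x; the result must avoid all three factors
p = least_coprime_irreducible(0b10010)
```

For this f the search skips x, x + 1 and x^2 + x + 1 (each divides f) and returns the degree-3 irreducible x^3 + x + 1.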
Abstract:
This paper extends some geometric properties of a one-parameter family of relative entropies. These arise as redundancies when cumulants of compressed lengths are considered instead of expected compressed lengths. These parametric relative entropies are a generalization of the Kullback-Leibler divergence. They satisfy the Pythagorean property and behave like squared distances. This property, which was known for finite alphabet spaces, is now extended for general measure spaces. Existence of projections onto convex and certain closed sets is also established. Our results may have applications in the Rényi entropy maximization rule of statistical physics.
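For context, a well-known one-parameter family of relative entropies that generalizes the Kullback-Leibler divergence and connects to Rényi entropy maximization is the Rényi divergence (whether this is precisely the family studied here is not stated in the abstract):

```latex
D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}
  \log \int p^{\alpha}\, q^{1-\alpha} \, d\mu ,
  \qquad \alpha \in (0,1)\cup(1,\infty),
```

which recovers the Kullback-Leibler divergence $D(P \,\|\, Q) = \int p \log (p/q)\, d\mu$ in the limit $\alpha \to 1$.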
Abstract:
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. However, erasure codes perform poorly when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a single failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that enable traditional erasure codes to be used while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple yet powerful framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
Abstract:
When an electron is injected into liquid helium, it forces open a cavity that is free of helium atoms (an electron bubble). If the electron is in the ground 1S state, this bubble is spherical. By optical pumping it is possible to excite a significant fraction of the electron bubbles to the 1P state; the bubbles then lose spherical symmetry. We present calculations of the energies of photons that are needed to excite these 1P bubbles to higher energy states (1D and 2S) and the matrix elements for these transitions. Measurement of these transition energies would provide detailed information about the shape of the 1P bubbles.
Abstract:
In this paper, we present a fast learning neural network classifier for human action recognition. The proposed classifier is a fully complex-valued neural network with a single hidden layer. The neurons in the hidden layer employ the fully complex-valued hyperbolic secant as an activation function. The parameters of the hidden layer are chosen randomly and the output weights are estimated analytically as a minimum norm least squares solution to a set of linear equations. The fast learning fully complex-valued neural classifier is used for recognizing human actions accurately. Optical flow-based features extracted from the video sequences are utilized to recognize 10 different human actions. The feature vectors are computationally simple first-order statistics of the optical flow vectors, obtained from coarse-to-fine rectangular patches centered around the object. The results indicate the superior performance of the complex-valued neural classifier for action recognition. This superior performance stems from the fact that motion, by nature, consists of two components, one along each of the image axes.
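The training scheme described (random complex hidden layer, sech activation, analytic minimum-norm least-squares output weights) can be sketched in a few lines of NumPy. The toy two-class data, network size, and real-part decision rule below are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sech(z):
    """Fully complex-valued hyperbolic secant activation."""
    return 1.0 / np.cosh(z)

def train(X, T, n_hidden=20):
    """Choose hidden parameters randomly; solve for the output weights
    analytically as a minimum-norm least-squares solution (pseudoinverse)."""
    n_features = X.shape[1]
    W = (rng.standard_normal((n_features, n_hidden))
         + 1j * rng.standard_normal((n_features, n_hidden)))
    b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)
    H = sech(X @ W + b)            # hidden-layer responses
    beta = np.linalg.pinv(H) @ T   # output weights, one column per class
    return W, b, beta

def predict(X, W, b, beta):
    scores = sech(X @ W + b) @ beta
    return np.argmax(scores.real, axis=1)  # class with the largest real part

# toy two-class data standing in for optical-flow feature vectors
X = np.vstack([rng.normal(0.0, 0.3, (40, 2)),
               rng.normal(2.0, 0.3, (40, 2))]).astype(complex)
y = np.array([0] * 40 + [1] * 40)
W, b, beta = train(X, np.eye(2)[y])
acc = np.mean(predict(X, W, b, beta) == y)
```

Because the hidden layer is fixed after random initialization, training reduces to one pseudoinverse computation, which is what makes the classifier "fast learning".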
Abstract:
The objective of this paper is to empirically evaluate a framework for designing (GEMS of SAPPhIRE as req-sol) to check whether it supports design for variety and novelty. A set of observational studies is designed in which three teams of two designers each solve three different design problems in the following order: without any support, using the framework, and using a combination of the framework and a catalogue. Results from the studies reveal that both the variety and the novelty of the concept space increase with the use of the framework, or of the framework and the catalogue. However, the number of concepts and the time taken by the designers decrease with the use of the framework, or of the framework and the catalogue. Based on the results and interview sessions with the designers, an interactive, computer-supported framework for designing is proposed as future work.
Abstract:
Space vector based PWM strategies for three-level inverters have a broader choice of switching sequences to generate the required reference vector than triangle comparison based PWM techniques. However, space vector based PWM involves numerous steps which are computationally intensive. A simplified algorithm is proposed here, which is shown to reduce the computation time significantly. The developed algorithm is used to implement synchronous and asynchronous conventional space vector PWM, synchronized modified space vector PWM and an asynchronous advanced bus-clamping PWM technique on a low-cost dsPIC digital controller. Experimental results are presented for a comparative evaluation of the performance of different PWM methods.
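As a point of reference for the computations being simplified, the dwell-time arithmetic of conventional space vector PWM can be sketched for the simpler two-level case (the three-level algorithm in the paper involves additional steps such as subhexagon identification; the normalization below is an assumption of this sketch).

```python
import math

SQRT3_2 = math.sin(math.pi / 3)

def svpwm_dwell_times(m, theta, Ts=1.0):
    """Sector index and dwell times (two active vectors + zero vector) for one
    switching period Ts of conventional two-level space vector PWM.
    m is the reference magnitude relative to the active-vector length
    (linear modulation requires m <= sqrt(3)/2); theta is in [0, 2*pi)."""
    sector = int(theta // (math.pi / 3)) % 6
    alpha = theta - sector * (math.pi / 3)           # angle inside the sector
    t1 = Ts * m * math.sin(math.pi / 3 - alpha) / SQRT3_2
    t2 = Ts * m * math.sin(alpha) / SQRT3_2
    t0 = Ts - t1 - t2                                # shared by zero vectors
    return sector, t1, t2, t0
```

Each PWM period thus requires sector identification plus several trigonometric evaluations; a three-level inverter multiplies this work, which motivates the simplified algorithm proposed in the paper.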
Abstract:
A regenerative or circulating-power method is presented in this paper for the heat run test on the legs of a three-level neutral point clamped (NPC) inverter. This test ensures that only the losses are drawn from the dc supply, while rated power is circulated between the two legs, thus minimising wastage of energy. A proportional-resonant (PR) controller based current control scheme is proposed here for the circulating-power test setup in the NPC inverter. Simulation and experimental results are presented to validate the controller design at various operating conditions. Results of the thermal test on the inverter legs are presented at two different operating conditions.
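The PR controller named here has a standard ideal form whose defining property is unbounded gain at the resonant frequency, which is what lets a current loop track a sinusoidal reference with zero steady-state error. The gains and the 50 Hz fundamental below are arbitrary illustration values, not the paper's design.

```python
import math

W0 = 2 * math.pi * 50          # resonant (fundamental) frequency, assumed 50 Hz

def pr_gain(w, kp=1.0, kr=100.0, w0=W0):
    """Magnitude of the ideal PR controller
    Gc(jw) = Kp + Kr * jw / ((jw)^2 + w0^2).
    The gain grows without bound as w approaches w0 and falls back to
    roughly Kp away from resonance."""
    s = 1j * w
    return abs(kp + kr * s / (s ** 2 + w0 ** 2))
```

Evaluating near and far from w0 shows the sharp resonant peak: the gain is very large just off w0 but close to Kp at dc and at harmonics.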
Abstract:
Wind power, as an alternative to fossil fuels, is plentiful, renewable, widely distributed and clean; it produces no greenhouse gas emissions during operation and uses little land. In operation, the overall cost per unit of energy produced is similar to that of new coal and natural gas installations. However, the stochastic behaviour of wind speeds leads to significant mismatch between wind energy production and electricity demand. Wind generation suffers from an intermittent characteristic owing to the diurnal and seasonal patterns of wind behaviour. Both reactive power and voltage control are important under the varying operating conditions of a wind farm. To optimize reactive power flow and keep voltages within limits, an optimization method is proposed in this paper. The proposed objective is minimization of the deviations of the load-bus voltages from their desired values (Vdesired). The approach considers the reactive power limits of the wind generators and coordinates the transformer taps. The algorithm has been tested under practically varying conditions simulated on a test system: a 50-bus equivalent of a real-life power network. The results show the efficiency of the proposed method.
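The shape of such an optimization can be sketched with a heavily simplified, linearized toy model. Everything below (the sensitivity coefficients, bus voltages, limits, and the exhaustive search in place of the paper's method) is made up for illustration; a real study would use a full power-flow model.

```python
import numpy as np

# Illustrative linearized model: all numbers below are invented for the sketch.
V_DESIRED = 1.0                             # desired load-bus voltage (p.u.)
V0  = np.array([0.96, 0.97, 1.05])          # base-case load-bus voltages (p.u.)
S_Q = np.array([0.04, 0.03, 0.01])          # voltage sensitivity to generator Q
S_T = np.array([0.50, 0.40, 0.20])          # voltage sensitivity to tap setting
Q_MIN, Q_MAX = -0.5, 0.5                    # wind-generator reactive limits
TAPS = np.arange(-0.05, 0.0501, 0.0125)     # discrete transformer tap positions

def objective(q, tap):
    """Sum of squared deviations of the load-bus voltages from V_DESIRED."""
    v = V0 + S_Q * q + S_T * tap
    return float(np.sum((v - V_DESIRED) ** 2))

# exhaustive search over the bounded reactive setpoint and the tap positions
best = min(((objective(q, tap), q, tap)
            for q in np.linspace(Q_MIN, Q_MAX, 41)
            for tap in TAPS),
           key=lambda t: t[0])
```

The search respects the generator reactive limits and coordinates the tap, mirroring the two control handles named in the abstract.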
Abstract:
This work proposes a boosting-based transfer learning approach for head-pose classification from multiple low-resolution views. Head-pose classification performance is adversely affected when the source (training) and target (test) data arise from different distributions (due to changes in face appearance, lighting, etc.). Under such conditions, we employ Xferboost, a Logitboost-based transfer learning framework that integrates knowledge from a few labeled target samples with the source model to effectively minimize misclassifications on the target data. Experiments confirm that the Xferboost framework can improve classification performance by up to 6% when knowledge is transferred between the CLEAR and FBK four-view head-pose datasets.
Abstract:
The accuracy of pairing between the anticodon of the initiator tRNA (tRNA(fMet)) and the initiation codon of an mRNA in the ribosomal P-site is crucial for determining the translational reading frame. However, a direct role of any ribosomal element(s) in scrutinizing this pairing is unknown. The P-site elements m(2)G966 (methylated by RsmD), m(5)C967 (methylated by RsmB) and the C-terminal tail of the protein S9 lie in the vicinity of tRNA(fMet). We investigated the role of these elements in initiation from various codons, namely AUG, GUG, UUG, CUG, AUA, AUU, AUC and ACG with tRNA(CAU)(fMet) (tRNA(fMet) with CAU anticodon); CAC and CAU with tRNA(GUG)(fMet); and UAG with tRNA(GAU)(fMet), using in vivo and computational methods. Although RsmB deficiency did not impact initiation from most codons, RsmD deficiency increased initiation from AUA, CAC and CAU (2- to 3.6-fold). Deletion of the S9 C-terminal tail resulted in poorer initiation from UUG, GUG and CUG, but in increased initiation from CAC, CAU and UAC codons (up to 4-fold). Also, the S9 tail suppressed initiation with tRNA(CAU)(fMet) lacking the 3GC base pairs in the anticodon stem. These observations suggest distinctive roles of the 966/967 methylations and the S9 tail in initiation.
Abstract:
In recent years, business practitioners have been seen valuing patents on the basis of the market price that a patent can attract. Researchers have also looked into various patent latent variables and firm variables that influence the price of a patent. Forward citations of a patent are shown to play a role in determining its price. Using patent auction price data (of Ocean Tomo, now ICAP patent brokerage), we delve deeper into the role of forward citations. The 167 successfully sold singleton patents form the sample of our study. We found that it is mainly the right tail of the citation distribution that explains the high prices of the patents falling on the right tail of the price distribution. The literature is consistent on the positive correlation between patent prices and forward citations. In this paper, we go deeper to understand this linear relationship through case studies. Case studies of patents with high and low citations are described to understand why some patents attracted high prices. We also look into the role of additional patent latent variables, such as the age, technology discipline, class and breadth of the patent, in influencing the citations that a patent receives.
Abstract:
Empirical research available on technology transfer initiatives is either North American or European. Literature over the last two decades shows various research objectives, such as identifying the variables to be measured and the statistical methods to be used in the context of studying university-based technology transfer initiatives. AUTM survey data from the years 1996 to 2008 provide insightful patterns about North American technology transfer initiatives; we use these data in our paper. This paper has three sections, namely a comparison of North American universities with (n=1129) and without Medical Schools (n=786), an analysis of the top 75th percentile of these samples, and a DEA analysis of these samples. We use 20 variables. Researchers have attempted to classify university-based technology transfer initiative variables into multiple stages, namely disclosures, patents and license agreements. Using the same approach, with minor variations, three stages are defined in this paper. The first stage takes R&D expenditure as input and invention disclosures as output. The second stage takes invention disclosures as input and patents issued as output. The third stage takes patents issued as input and technology transfers as outcomes.
Abstract:
Crop type classification using remote sensing data plays a vital role in planning cultivation activities and in optimal usage of the available fertile land. A reliable and precise classification of agricultural crops can thus help improve agricultural productivity. Hence, in this paper, a gene expression programming (GEP) based fuzzy logic approach for multiclass crop classification using multispectral satellite images is proposed. The purpose of this work is to utilize the optimization capabilities of GEP for tuning the fuzzy membership functions. The capabilities of GEP as a classifier are also studied. The proposed method is compared with Bayesian and maximum likelihood classifiers in terms of performance. The results show that the proposed method is effective for classification.
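The idea of tuning fuzzy membership functions with an evolutionary optimizer can be sketched on toy data. Here a plain random search stands in for the GEP optimizer, and one-dimensional Gaussian membership functions stand in for the crop classes; the data and parameter ranges are invented for illustration.

```python
import math
import random

def gaussian_mf(x, c, s):
    """Gaussian fuzzy membership function with center c and width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

rng = random.Random(0)
# toy 1-D, two-class data standing in for per-pixel spectral values
data = ([(rng.gauss(0.2, 0.05), 0) for _ in range(50)]
        + [(rng.gauss(0.7, 0.05), 1) for _ in range(50)])

def accuracy(params):
    """Assign each sample to the class whose membership value is larger."""
    c0, s0, c1, s1 = params
    hits = sum((0 if gaussian_mf(x, c0, s0) >= gaussian_mf(x, c1, s1) else 1) == y
               for x, y in data)
    return hits / len(data)

# random search stands in for the GEP optimizer tuning (center, width) per class
best_params, best_acc = None, 0.0
for _ in range(500):
    cand = (rng.uniform(0, 1), rng.uniform(0.01, 0.3),
            rng.uniform(0, 1), rng.uniform(0.01, 0.3))
    a = accuracy(cand)
    if a > best_acc:
        best_params, best_acc = cand, a
```

The fitness function (classification accuracy) is the same regardless of whether the optimizer is random search or GEP; only the search strategy over membership-function parameters differs.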
Abstract:
Particle Swarm Optimization (PSO) is a parallel algorithm that spawns particles across a search space in search of an optimized solution. Though inherently parallel, it has distinct synchronization points that hamper attempts to create completely distributed versions of it. In this paper, we attempt to create a completely distributed, peer-to-peer particle swarm optimization on a cluster of heterogeneous nodes. Since the original algorithm requires explicit synchronization points, we modify the algorithm in multiple ways to support a peer-to-peer system of nodes. We also modify certain aspects of the basic PSO algorithm and show how certain numerical problems can take advantage of this, thereby yielding fast convergence.
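A minimal sketch of the baseline (non-distributed) PSO makes the synchronization point concrete: every particle reads a shared global best, so its update is the coupling a peer-to-peer variant must relax or remove. The coefficients and the sphere test function are conventional illustrative choices, not the paper's settings.

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, bounds=(-5.0, 5.0), seed=1):
    """Standard PSO minimizing f. The shared global best (gbest) is the
    coupling between particles; maintaining it consistently is the
    synchronization point a fully distributed version has to eliminate."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:    # global-best update: the sync point
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda x: sum(t * t for t in x)   # classic convex test function
best, val = pso(sphere)
```

In a peer-to-peer setting each node would hold only a local approximation of gbest, gossiped between neighbours, instead of the single shared value updated here.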