990 results for Multi-GPU Rendering
Abstract:
This paper deals with an optimization-based method for the synthesis of adjustable planar four-bar, crank-rocker mechanisms. For multiple distinct desired paths to be traced by a point on the coupler, a two-stage method first determines the parameters of the possible driving dyads. The remaining mechanism parameters are then determined in the second stage using a least-squares circle-fitting procedure. Compared to existing formulations, the optimization method uses fewer design variables. Two numerical examples demonstrate the effectiveness of the proposed synthesis method.
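The second-stage circle fit can be illustrated with the classical Kåsa least-squares formulation, sketched below in Python. This is a generic algebraic circle fit, offered under the assumption that the paper's procedure belongs to this least-squares family; the authors' exact formulation may differ.

```python
import numpy as np

def fit_circle_kasa(points):
    """Least-squares circle fit (Kasa method): minimize the algebraic error
    of x^2 + y^2 + a*x + b*y + c = 0 over the sampled points, then recover
    the centre (-a/2, -b/2) and radius sqrt(a^2/4 + b^2/4 - c).
    `points` is an (N, 2) array of coupler-point positions."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (a, bb, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -a / 2.0, -bb / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return (cx, cy), r

# Example: noisy samples of a circle centred at (1, 2) with radius 3.
theta = np.linspace(0.0, 2 * np.pi, 50)
pts = np.column_stack([1 + 3 * np.cos(theta), 2 + 3 * np.sin(theta)])
pts += 0.01 * np.random.default_rng(0).standard_normal(pts.shape)
print(fit_circle_kasa(pts))
```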
Abstract:
Transductive SVM (TSVM) is a well-known semi-supervised large-margin learning method for binary text classification. In this paper we extend this method to multi-class and hierarchical classification problems. We point out that determining the labels of unlabeled examples with fixed classifier weights is a linear programming problem, and we devise an efficient technique for solving it. The method is applicable to general loss functions. We demonstrate the value of the new method using the large-margin loss on a number of multi-class and hierarchical classification datasets. For the maxent loss we show empirically that our method is better than expectation regularization/constraint and posterior regularization methods, and competitive with the version of the entropy regularization method that uses label constraints.
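To make the linear-programming observation concrete, here is a minimal sketch assuming a transportation-style relaxation: with classifier weights fixed, each unlabeled example incurs a known loss for each candidate label, and labels are chosen to minimize total loss subject to class-count constraints. The cost matrix, the use of scipy.optimize.linprog, and the class-balance constraints are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def assign_labels(cost, class_counts):
    """cost[i, c] = loss of assigning label c to unlabeled example i under
    the current (fixed) classifier; class_counts[c] = desired number of
    examples per class. Relaxing 0/1 labels to [0, 1] gives a transportation
    LP whose optimum is integral, so the argmax rounding below is exact."""
    n, k = cost.shape
    c = cost.ravel()  # variable (i, c) lives at flat index i*k + c
    # Each example receives exactly one (fractional) label.
    A_row = np.zeros((n, n * k))
    for i in range(n):
        A_row[i, i * k:(i + 1) * k] = 1.0
    # Each class receives its prescribed share of examples.
    A_col = np.zeros((k, n * k))
    for j in range(k):
        A_col[j, j::k] = 1.0
    A_eq = np.vstack([A_row, A_col])
    b_eq = np.concatenate([np.ones(n), class_counts])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0))
    return res.x.reshape(n, k).argmax(axis=1)

costs = np.array([[0.1, 0.9], [0.8, 0.2], [0.4, 0.5], [0.6, 0.3]])
print(assign_labels(costs, class_counts=np.array([2, 2])))  # -> [0 1 0 1]
```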
Abstract:
Multi-task learning solves multiple related learning problems simultaneously by sharing some common structure to improve the generalization performance of each task. We propose a novel approach to multi-task learning that captures task similarity through a shared basis vector set. The variability across tasks is captured through task-specific basis vector sets. We use a sparse support vector machine (SVM) algorithm to select the basis vector sets for the tasks. The approach results in a sparse model in which prediction uses very few examples. The effectiveness of our approach is demonstrated through experiments on synthetic and real multi-task datasets.
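The prediction form suggested by the abstract, a kernel expansion over a small shared basis set plus a task-specific one, can be sketched as follows. The RBF kernel, the basis vectors and the coefficients are placeholders; the paper's contribution, selecting these sets with a sparse SVM, is not reproduced here.

```python
import numpy as np

def rbf(X, B, gamma=1.0):
    """RBF kernel matrix between the rows of X and the basis vectors B."""
    d2 = ((X[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def predict_task(X, shared_basis, shared_coef, task_basis, task_coef):
    """Sketch of a per-task predictor: a kernel expansion over the shared
    basis vectors plus one over the task-specific basis vectors. Both sets
    are assumed to be small, which is what makes the model sparse."""
    return rbf(X, shared_basis) @ shared_coef + rbf(X, task_basis) @ task_coef

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
shared_B, task_B = rng.standard_normal((4, 3)), rng.standard_normal((2, 3))
print(predict_task(X, shared_B, rng.standard_normal(4),
                   task_B, rng.standard_normal(2)))
```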
Abstract:
We study the problem of analyzing the influence of various factors affecting individual messages posted in social media. The problem is challenging because various types of influence propagate through the social media network and act simultaneously on any user. Additionally, the topic composition of the influencing factors and the susceptibility of users to these influences evolve over time. This problem has not been studied before, and off-the-shelf models are unsuitable for this purpose. To capture the complex interplay of these various factors, we propose a new non-parametric model called the Dynamic Multi-Relational Chinese Restaurant Process. The model accounts for the user network during data generation and also allows the parameters to evolve over time. Designing inference algorithms for this model that are suited to large-scale social-media data is another challenge. To this end, we propose a scalable, multi-threaded inference algorithm based on online Gibbs sampling. Extensive evaluations on large-scale Twitter and Facebook data show that the extracted topics, when applied to authorship and commenting prediction, outperform state-of-the-art baselines. More importantly, our model produces valuable insights on topic trends and user personality trends beyond the capability of existing approaches.
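As background, the classical Chinese Restaurant Process underlying the proposed model can be sketched in a few lines; the dynamic, multi-relational extension and the multi-threaded online Gibbs sampler are well beyond this illustration.

```python
import numpy as np

def crp_partition(n_customers, alpha, rng=None):
    """Sample a partition from the classical Chinese Restaurant Process:
    customer i joins existing table t with probability count[t] / (i + alpha)
    and opens a new table with probability alpha / (i + alpha). The dynamic,
    multi-relational model in the paper extends this basic prior."""
    rng = np.random.default_rng(rng)
    counts = []          # customers seated at each table
    assignment = []      # table index chosen by each customer
    for i in range(n_customers):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()          # normalizer is i + alpha
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)          # open a new table (topic)
        else:
            counts[table] += 1
        assignment.append(table)
    return assignment

print(crp_partition(20, alpha=1.0, rng=0))
```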
Abstract:
Rapid advancements in multi-core processor architectures, coupled with low-cost, low-latency, high-bandwidth interconnects, have made clusters of multi-core machines a common computing resource. Unfortunately, writing good parallel programs that efficiently utilize all the resources in such a cluster is still a major challenge. Various programming languages have been proposed as a solution to this problem, but they are yet to be adopted widely for performance-critical code, mainly due to relatively immature software frameworks and the effort involved in rewriting existing code in a new language. In this paper, we motivate and describe our initial study exploring CUDA as a programming language for a cluster of multi-cores. We develop CUDA-For-Clusters (CFC), a framework that transparently orchestrates execution of CUDA kernels on a cluster of multi-core machines. The well-structured nature of a CUDA kernel and the growing popularity, support and stability of the CUDA software stack collectively make CUDA a good candidate programming language for a cluster. CFC uses a mixture of source-to-source compiler transformations, a work-distribution runtime and a lightweight software distributed shared memory to manage parallel execution. Initial results on several standard CUDA benchmark programs show speedups of up to 7.5X on a cluster with 8 nodes, opening up an interesting direction for further research.
Abstract:
Inter-domain linkers (IDLs) bridge flanking domains and support inter-domain communication in multi-domain proteins. Their sequence and conformational preferences enable them to carry out varied functions. They also provide sufficient flexibility to facilitate domain motions and, in conjunction with the interacting interfaces, they regulate the inter-domain geometry (IDG). In spite of a basic intuitive understanding of inter-domain orientations with respect to linker conformations and interfaces, we still do not entirely understand the precise relationship among the three. We show that IDG is evolutionarily well conserved and is constrained by the domain-domain interface interactions. The IDLs modulate the interactions by varying their lengths, conformations and local structure, thereby affecting the overall IDG. The results of our analysis provide guidelines for modelling multi-domain proteins from the tertiary structures of their constituent domains.
Abstract:
Aerosol absorption is poorly quantified because of the lack of adequate measurements. It has been shown that the Ozone Monitoring Instrument (OMI) aboard EOS-Aura and the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard EOS-Aqua, which fly in formation as part of the A-train, provide an excellent opportunity to improve the accuracy of aerosol retrievals. Here, we follow a multi-satellite approach to estimate the regional distribution of aerosol absorption over continental India for the first time. Annually and regionally averaged aerosol single-scattering albedo over the Indian landmass is estimated as 0.94 +/- 0.03. Our study demonstrates the potential of multi-satellite data analysis to improve the accuracy of retrieval of aerosol absorption over land.
Abstract:
A square-ring microstrip antenna can be modified for dual-band operation by appropriately attaching an open-ended stub. The input impedance of this antenna is analyzed here using the multi-port network modeling (MNM) approach. The coupled feed is included by defining additional terms in the model. A prototype antenna is fabricated and tested to validate the computations.
Abstract:
This study investigates the application of support vector clustering (SVC) for the direct identification of coherent synchronous generators in large interconnected multi-machine power systems. The clustering is based on a coherency measure, which indicates the degree of coherency between any pair of generators. The proposed SVC algorithm processes the coherency measure matrix, formulated from generator rotor measurements, to cluster the coherent generators. The proposed approach is demonstrated on the IEEE 10-generator, 39-bus system and on an equivalent 35-generator, 246-bus system of the Indian southern grid. The effects of the number of data samples and of fault locations on the accuracy of the proposed approach are also examined. An extended comparison with other clustering techniques is included to show the effectiveness of the proposed approach in grouping the data into coherent groups of generators. The effectiveness of the coherent clusters obtained with the proposed approach is assessed in terms of a set of clustering validity indicators and a statistical assessment based on the coherency degree of a generator pair.
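A minimal sketch of the pipeline's front end: a correlation-based coherency measure computed from rotor-angle swings, followed by clustering. Since support vector clustering is not available off the shelf, average-linkage hierarchical clustering is used here as a plainly labeled stand-in; the correlation-based coherency measure is also a common textbook choice, not necessarily the paper's.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def coherent_groups(rotor_angles, n_groups):
    """rotor_angles: (n_generators, n_samples) post-fault rotor-angle swings.
    Correlation between swing curves serves as the coherency measure, and
    1 - correlation as a clustering distance. The paper applies support
    vector clustering to such a measure; hierarchical clustering is only a
    readily available substitute here."""
    corr = np.corrcoef(rotor_angles)  # pairwise coherency measure matrix
    dist = np.clip(1.0 - corr[np.triu_indices_from(corr, k=1)], 0.0, None)
    tree = linkage(dist, method="average")
    return fcluster(tree, t=n_groups, criterion="maxclust")

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 200)
group_a = np.sin(3 * t) + 0.05 * rng.standard_normal((3, t.size))
group_b = np.cos(5 * t) + 0.05 * rng.standard_normal((2, t.size))
print(coherent_groups(np.vstack([group_a, group_b]), n_groups=2))
```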
Abstract:
We use information-theoretic achievable rate formulas for the multi-relay channel to study the problem of optimal placement of relay nodes along the straight line joining a source node and a destination node. The achievable rate formulas that we utilize are for full-duplex radios at the relays and decode-and-forward relaying. For the single-relay case, with individual power constraints at the source node and the relay node, we provide explicit formulas for the optimal relay location and the optimal power allocation to the source-relay channel, for the exponential and the power-law path-loss channel models. For the multiple-relay case, we consider exponential path loss and a total power constraint over the source and the relays, and derive an optimization problem whose solution provides the optimal relay locations. Numerical results suggest that at low attenuation the relays are mostly clustered close to the source in order to cooperate among themselves, whereas at high attenuation they are uniformly placed and work as repeaters. We also prove that a constant rate, independent of the attenuation in the network, can be achieved by placing a large enough number of relay nodes uniformly between the source and the destination, under the exponential path-loss model with a total power constraint.
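For the single-relay case, the flavor of the placement problem can be sketched numerically, assuming the standard full-duplex decode-and-forward bound and exponential path loss; the paper's formulas (and its joint power allocation) are more detailed.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def df_rate(x, Ps, Pr, gamma, d=1.0):
    """Textbook full-duplex decode-and-forward bound for a relay at distance
    x from the source on a source-destination line of length d, with
    exponential path loss g(r) = exp(-gamma * r):
        R = min( C(g_sr * Ps), C(g_sd * Ps + g_rd * Pr) ),  C(s) = log2(1+s).
    This is a simplified stand-in for the paper's achievable-rate formulas."""
    g = lambda r: np.exp(-gamma * r)
    c = lambda s: np.log2(1.0 + s)
    return min(c(g(x) * Ps), c(g(d) * Ps + g(d - x) * Pr))

# Place the relay to maximize the DF rate for a given attenuation gamma.
for gamma in (0.5, 5.0):
    res = minimize_scalar(lambda x: -df_rate(x, Ps=1.0, Pr=1.0, gamma=gamma),
                          bounds=(0.0, 1.0), method="bounded")
    print(f"gamma={gamma}: relay at x={res.x:.3f}, rate={-res.fun:.3f}")
```

In this toy model the optimizer puts the relay essentially at the source for small gamma and near the midpoint for large gamma, mirroring the clustering-versus-repeater behaviour reported in the abstract.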
Abstract:
We study the problem of optimal sequential ("as-you-go") deployment of wireless relay nodes as a person walks along a line of random length (with a known distribution). The objective is to create an impromptu multi-hop wireless network, operating in the light-traffic regime, that connects a packet source placed at the end of the line to a sink node located at the starting point. While walking from the sink towards the source, at every step, measurements yield the transmit powers required to establish links to one or more previously placed nodes. Based on these measurements, at every step a decision is made whether to place a relay node; the overall objective is to minimize a linear combination of the expected sum power (or the expected maximum power) required to deliver a packet from the source to the sink and the expected number of relay nodes deployed. For each of these two objectives, two relay selection strategies are considered: (i) each relay communicates with the sink via its immediate predecessor, and (ii) the communication path can skip some of the deployed relays. With appropriate modeling assumptions, we formulate each of these problems as a Markov decision process (MDP). We provide the optimal policy structures for all these cases, and illustrate the policies and their performance, via numerical results, for some typical parameters.
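The following is a purely illustrative as-you-go rule, not the optimal MDP policy derived in the paper: place a relay whenever the measured power needed to reach the previously placed node crosses a threshold. The exponential path-loss form, the lognormal shadowing and the threshold value are all assumptions made for the sketch.

```python
import numpy as np

def deploy_as_you_go(line_length, threshold, gamma=1.0, sigma_db=4.0, rng=None):
    """Illustrative as-you-go rule (not the paper's derived optimal policy):
    walk one step at a time, measure the power needed to reach the previously
    placed node (exponential path loss times lognormal shadowing), and place
    a relay whenever that measured power exceeds a threshold."""
    rng = np.random.default_rng(rng)
    placements, last = [], 0  # sink at position 0
    for step in range(1, line_length + 1):
        shadow = 10 ** (sigma_db * rng.standard_normal() / 10.0)
        required_power = np.exp(gamma * (step - last)) * shadow
        if required_power > threshold:
            placements.append(step)
            last = step
    return placements  # the source goes at position line_length

print(deploy_as_you_go(line_length=20, threshold=20.0, rng=0))
```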
Abstract:
In the present study, high-strength bulk ultrafine-grained titanium alloy Ti-6Al-4V bars were successfully processed using multi-pass warm rolling. Ti-6Al-4V bars 12 mm in diameter and several metres long were processed by multi-pass warm rolling at 650 °C, 700 °C and 750 °C. The best mechanical properties for Ti-6Al-4V in the as-rolled condition, obtained at a rolling temperature of 650 °C, were a yield strength of 1191 MPa and an ultimate tensile strength of 1299 MPa with an elongation of 10%. The concurrent evolution of microstructure and texture has been studied using optical microscopy, electron back-scattered diffraction and X-ray diffraction. The significant improvement in mechanical properties is attributed to the ultrafine-grained microstructure as well as the morphology of the alpha and beta phases in the warm-rolled specimens. Warm rolling of Ti-6Al-4V leads to the formation of a ⟨101̄0⟩ alpha//RD fibre texture. This study shows that multi-pass warm rolling has the potential to eliminate costly and time-consuming heat treatment steps for small-diameter bar products, as solution-treated and aged (STA) properties are achievable in the as-rolled condition itself.
Abstract:
We develop an approximate analytical technique for evaluating the performance of multi-hop networks based on beacon-less CSMA/CA as standardised in IEEE 802.15.4, a popular standard for wireless sensor networks. The network comprises sensor nodes, which generate measurement packets, and relay nodes which only forward packets. We consider a detailed stochastic process at each node, and analyse this process taking into account the interaction with neighbouring nodes via certain unknown variables (e.g., channel sensing rates, collision probabilities, etc.). By coupling these analyses of the various nodes, we obtain fixed point equations that can be solved numerically to obtain the unknown variables, thereby yielding approximations of time average performance measures, such as packet discard probabilities and average queueing delays. Different analyses arise for networks with no hidden nodes and networks with hidden nodes. We apply this approach to the performance analysis of tree networks rooted at a data sink. Finally, we provide a validation of our analysis technique against simulations.
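The fixed-point flavour of such an analysis can be shown with the classical Bianchi-style coupling for saturated CSMA/CA, iterated below. The paper's equations for beacon-less IEEE 802.15.4 with queueing and hidden nodes are richer, but are solved by the same kind of numerical iteration.

```python
def bianchi_fixed_point(n, W=16, m=5, tol=1e-10, damping=0.5):
    """Classic Bianchi-style fixed point for saturated CSMA/CA: each of n
    nodes attempts in a slot with probability tau(p), and a given attempt
    collides with probability p = 1 - (1 - tau)^(n-1). Here
        tau(p) = 2(1-2p) / ((1-2p)(W+1) + p*W*(1-(2p)^m)),
    with W the minimum contention window and m the maximum backoff stage.
    Damped iteration keeps the sequence from oscillating."""
    def tau(p):
        return (2 * (1 - 2 * p)
                / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m)))
    p = 0.1
    for _ in range(10000):
        t = tau(p)
        p_new = 1.0 - (1.0 - t) ** (n - 1)
        if abs(p_new - p) < tol:
            break
        p = damping * p_new + (1 - damping) * p
    return t, p

t, p = bianchi_fixed_point(n=10)
print(f"attempt prob tau={t:.4f}, collision prob p={p:.4f}")
```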
Abstract:
Background: The set of indispensable genes that an organism requires to grow and sustain life are termed essential genes. There is strong interest in identifying the set of essential genes, particularly in pathogens, not only for a better understanding of pathogen biology but also for identifying drug targets and the minimal gene set for the organism. Essentiality is inherently a systems property, and its identification requires considering the system as a whole. The available experimental approaches capture some aspects, but each method comes with its own limitations; moreover, in most cases they do not explain the basis for essentiality. A powerful prediction method that recognizes this gene pool and rationalizes the known essential genes in a given organism would therefore be very useful. Here we describe a multi-level, multi-scale approach to identify the essential gene pool in a deadly pathogen, Mycobacterium tuberculosis. Results: The multi-level workflow analyses the bacterial cell by studying (a) genome-wide gene expression profiles, to identify the set of genes that show consistent and significant levels of expression in multiple samples of the same condition; (b) indispensability for growth, using gene-expression-integrated flux balance analysis of a genome-scale metabolic model; (c) importance for maintaining the integrity and flow of a protein-protein interaction network; and (d) evolutionary conservation in a set of genomes of the same ecological niche. For the identified gene pool, the functional basis for essentiality has been addressed by studying residue-level conservation and the sub-structure at the ligand-binding pockets, from which essential amino acid residues in those pockets have also been identified. 283 genes were identified as essential with high confidence. An agreement of about 73.5% is observed with results obtained from the experimental transposon mutagenesis technique. A large proportion of the identified genes belong to the class of intermediary metabolism and respiration. Conclusions: The multi-scale, multi-level approach described can be generally applied to other pathogens as well. The essential gene pool identified forms a basis for designing experiments to probe finer functional roles and also serves as a ready shortlist for identifying drug targets.
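A toy sketch of the integration step, assuming each of the four levels yields a boolean verdict per gene: a gene enters the high-confidence pool only when all levels agree. The paper's actual multi-level integration is more nuanced than a simple conjunction.

```python
import numpy as np

def consensus_essential(expression_ok, fba_essential, network_central, conserved):
    """Toy consensus over the four evidence levels named in the abstract:
    consistent expression, flux-balance indispensability, interaction-network
    centrality, and evolutionary conservation. A gene is called essential
    here only when every level agrees."""
    evidence = np.stack([expression_ok, fba_essential, network_central, conserved])
    return np.flatnonzero(evidence.all(axis=0))

rng = np.random.default_rng(0)
levels = rng.random((4, 4000)) < 0.5   # 4000 genes, 4 boolean evidence tracks
print(len(consensus_essential(*levels)), "genes pass all four levels")
```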
Abstract:
We address the problem of multi-instrument recognition in polyphonic music signals. Individual instruments are modeled within a stochastic framework using Student's-t Mixture Models (tMMs). We impose a mixture of these instrument models on the polyphonic signal model. No a priori knowledge is assumed about the number of instruments in the polyphony. The mixture weights are estimated in a latent variable framework from the polyphonic data using an Expectation Maximization (EM) algorithm derived for the proposed approach. The weights are shown to indicate instrument activity. The output of the algorithm is an Instrument Activity Graph (IAG), from which it is possible to find the instruments that are active at a given time. An average F-ratio of 0.75 is obtained for polyphonies containing 2-5 instruments, on an experimental test set of 8 instruments: clarinet, flute, guitar, harp, mandolin, piano, trombone and violin.
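The weight-only EM update has a standard form when the per-instrument models are held fixed, sketched here; the tMM densities are abstracted into a precomputed log-likelihood matrix, and the paper's exact derivation is not reproduced.

```python
import numpy as np

def em_mixture_weights(log_lik, n_iters=100):
    """EM for the mixture weights only, with the per-instrument models fixed:
    log_lik[t, k] is the log-likelihood of frame t under instrument model k.
    E-step computes responsibilities; M-step re-estimates the weights as
    their average. Plotting the weights over time is the kind of information
    an Instrument Activity Graph conveys."""
    T, K = log_lik.shape
    w = np.full(K, 1.0 / K)
    for _ in range(n_iters):
        # E-step: responsibility of model k for frame t (log-sum-exp safe).
        log_r = np.log(w) + log_lik
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weights are the mean responsibilities over frames.
        w = r.mean(axis=0)
    return w

rng = np.random.default_rng(0)
fake_log_lik = rng.standard_normal((200, 8))  # 200 frames, 8 instrument models
print(em_mixture_weights(fake_log_lik).round(3))
```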