918 results for Open Source Software


Relevance: 80.00%

Abstract:

FinnWordNet is a WordNet for Finnish that conforms to the framework given in Fellbaum (1998) and Vossen (ed.) (1998). FinnWordNet is open source and currently contains 117,000 synsets. A classic WordNet consists of synsets, i.e. sets of partial synonyms whose shared meaning is described and exemplified by a gloss, a common part of speech and a hyperonym. Synsets in a WordNet are arranged in hierarchical partial orderings according to semantic relations like hyponymy/hyperonymy. Together, the gloss, part of speech and hyperonym fix the meaning of a word in a given synset and constrain its possible translations. The Finnish group opted to have professional translators translate the Princeton WordNet 3.0 synsets wholesale into Finnish, because this process can be controlled with regard to quality, coverage, cost and speed. The project was financed by FIN-CLARIN at the University of Helsinki. According to our preliminary evaluation, the translation process was diligent and the quality is on a par with the original Princeton WordNet.
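
To make the synset structure concrete, here is a minimal Python sketch of the record described above; the field names and the toy Finnish entries are illustrative, not FinnWordNet's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Synset:
    """A simplified WordNet synset: lemmas that share one sense."""
    lemmas: list[str]          # partial synonyms, e.g. Finnish translations
    pos: str                   # shared part of speech, e.g. "n"
    gloss: str                 # definition that fixes the shared meaning
    hypernyms: list["Synset"] = field(default_factory=list)

    def ancestors(self):
        """Walk the hyponymy/hyperonymy partial order upwards."""
        for h in self.hypernyms:
            yield h
            yield from h.ancestors()

# Toy example: a Finnish synset linked to its hyperonym.
animal = Synset(["eläin"], "n", "a living organism that feeds on organic matter")
dog = Synset(["koira"], "n", "a domesticated canid", hypernyms=[animal])
print([s.lemmas for s in dog.ancestors()])  # [['eläin']]
```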

Relevance: 80.00%

Abstract:

Diffuse optical tomographic image reconstruction uses advanced numerical models that are too computationally costly to run in real time. Graphics processing units (GPUs) offer massive parallelization on the desktop that can accelerate these computations. An open-source GPU-accelerated linear algebra library package is used to compute the most intensive matrix-matrix calculations and matrix decompositions involved in solving the system of linear equations. These open-source functions were integrated into the existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPUs (NVIDIA Tesla C1060) with increasing reconstruction problem sizes. These studies indicate that single-precision computations are sufficient for diffuse optical tomographic image reconstruction. The acceleration per iteration can be up to a factor of 40 using GPUs compared to traditional CPUs in the case of three-dimensional reconstruction, where the reconstruction problem is more underdetermined, making GPUs attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts the reconstruction to at most 13,377 optical parameters. (C) 2010 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3506216]
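
The abstract does not name the GPU library used. As a rough sketch of the pattern (offloading the dense matrix-matrix products and solves of a regularized Gauss-Newton update to the GPU, in single precision as the study recommends), here is a minimal example using CuPy as an assumed stand-in:

```python
import numpy as np
import cupy as cp  # assumption: CuPy stands in for the GPU linear algebra library

def gauss_newton_update(J, residual, reg=1e-3):
    """One regularized update dx = (J^T J + reg*I)^{-1} J^T r on the GPU.

    J: Jacobian (measurements x optical parameters), float32.
    """
    Jg = cp.asarray(J, dtype=cp.float32)       # host -> device transfer
    rg = cp.asarray(residual, dtype=cp.float32)
    H = Jg.T @ Jg                              # the expensive matrix-matrix product
    H += reg * cp.eye(H.shape[0], dtype=cp.float32)
    dx = cp.linalg.solve(H, Jg.T @ rg)         # dense solve on the GPU
    return cp.asnumpy(dx)                      # device -> host

# Toy problem sizes; real 3-D reconstructions are far larger.
J = np.random.rand(512, 1024).astype(np.float32)
r = np.random.rand(512).astype(np.float32)
print(gauss_newton_update(J, r).shape)  # (1024,)
```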

Relevance: 80.00%

Abstract:

We propose a novel formulation of points-to analysis as a system of linear equations. With this, the efficiency of points-to analysis can be significantly improved by leveraging advances in solution procedures for systems of linear equations. However, such a formulation is non-trivial and becomes challenging due to several features of pointer languages, namely multiple levels of pointer indirection, address-of operators and multiple assignments to the same variable. Further, the problem is exacerbated by the need to keep the transformed equations linear. Despite this, we successfully model all the pointer operations, proposing a novel inclusion-based context-sensitive points-to analysis algorithm based on prime factorization. Experimental evaluation on SPEC 2000 benchmarks and two large open source programs reveals that our approach is competitive with the state-of-the-art algorithms. With an average memory requirement of a mere 21 MB, our context-sensitive points-to analysis algorithm analyzes each benchmark in 55 seconds on average.
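
The abstract does not spell out the encoding, so the toy below shows one way prime factorization can represent points-to sets (each abstract location gets a distinct prime, and a set is the product of its members' primes); the paper's actual formulation as linear equations is more involved:

```python
from math import gcd

# Toy illustration of the prime-factorization idea: union of points-to
# sets becomes lcm, and membership becomes divisibility.
code = {"a": 2, "b": 3, "c": 5}  # abstract location -> its prime

def union(s, t):
    return s * t // gcd(s, t)    # lcm keeps each prime factor exactly once

def may_point_to(s, loc):
    return s % code[loc] == 0

p = code["a"]              # p = &a
p = union(p, code["b"])    # p = &b on another path
print(may_point_to(p, "a"), may_point_to(p, "c"))  # True False
```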

Relevance: 80.00%

Abstract:

Just-in-Time (JIT) compilers for Java can be augmented to make use of runtime profile information to produce better quality code and hence achieve higher performance. In a JIT compilation environment, the profile information obtained can be readily exploited in the same run to aid recompilation and optimization of frequently executed (hot) methods. This paper discusses a low-overhead path profiling scheme for dynamically profiling JIT-produced native code. The profile information is used in recompilation during a subsequent invocation of the hot method. During recompilation, tree regions along the hot paths are enlarged and instruction scheduling at the superblock level is performed. We have used the open source LaTTe JIT compiler framework for our implementation. Our results on a SPARC platform for SPEC JVM98 benchmarks indicate that (i) there is a significant reduction in the number of tree regions along the hot paths, and (ii) profile-aided recompilation in LaTTe achieves performance comparable to that of adaptive LaTTe in spite of retranslation and profiling overheads.
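
As an illustration of low-overhead path profiling, the classic approach (Ball-Larus path numbering) folds each taken branch into a single path identifier and performs one counter update per method invocation. The sketch below is a hand-instrumented toy, not LaTTe's actual instrumentation:

```python
from collections import Counter

path_counts = Counter()

def hot_method(x, y):
    path_id = 0
    if x > 0:        # branch 1: taken adds 0, not-taken adds 1
        r = x * 2
    else:
        path_id += 1
        r = -x
    if y > 0:        # branch 2: taken adds 0, not-taken adds 2
        r += y
    else:
        path_id += 2
    path_counts[path_id] += 1   # one counter update per invocation
    return r

for args in [(1, 1), (1, -1), (-1, 1), (1, 1)]:
    hot_method(*args)
print(path_counts)  # Counter({0: 2, 1: 1, 2: 1}); path 0 is the hot path
```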

Relevance: 80.00%

Abstract:

Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, storage requirements for the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable flow-insensitive context-sensitive inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (both pointer-to-object and pointer-to-pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only leads to reduced precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open source programs reveals that with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision up to 99.7% for these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is not affected that often even when there is some loss of precision in the points-to representation. We find that the NoModRef percentage is within 2% of the exact analysis while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows precision to be traded off against the memory usage of the analysis.
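
A minimal single-dimension sketch of the idea follows. The paper's multi-dimensional variant adds dimensions for context, but the key property is the same: a Bloom filter may report false positives (lost precision) yet never false negatives, so a may-points-to analysis stays correct:

```python
import hashlib

class BloomFilter:
    """Approximate set of may-points-to facts with one bit vector."""
    def __init__(self, size=1 << 16, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, bytearray(size)

    def _slots(self, item):
        for seed in range(self.hashes):
            h = hashlib.sha256(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for s in self._slots(item):
            self.bits[s] = 1

    def __contains__(self, item):
        return all(self.bits[s] for s in self._slots(item))

pts = BloomFilter()
pts.add(("p", "obj1"))          # record the fact "p may point to obj1"
print(("p", "obj1") in pts)     # True (never a false negative)
print(("p", "obj2") in pts)     # False, with high probability
```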

Relevance: 80.00%

Abstract:

Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been exploited only trivially for efficient propagation of information, e.g., to identify cyclic components or to propagate information in topological order. We perform a careful study of its structure and propose a new inclusion-based flow-insensitive context-sensitive points-to analysis algorithm based on the notion of dominant pointers. We also propose a new kind of pointer equivalence based on dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new context-sensitive flow-insensitive points-to analysis algorithm that uses incremental dominator updates to compute points-to information efficiently. Using a large suite of programs consisting of SPEC 2000 benchmarks and five large open source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is a key to improving points-to analysis efficiency.
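
For reference, the baseline inclusion-based propagation that such techniques accelerate looks like the worklist sketch below; the paper's dominant-pointer and equivalence machinery is not reproduced here:

```python
from collections import defaultdict, deque

pts = defaultdict(set)
subset_edges = defaultdict(set)   # edge p -> q: pts(p) must flow into pts(q)

pts["p"] = {"obj1"}               # p = &obj1
pts["q"] = {"obj2"}               # q = &obj2
subset_edges["p"].add("r")        # r = p
subset_edges["q"].add("r")        # r = q
subset_edges["r"].add("s")        # s = r

# Propagate points-to sets along subset edges until a fixed point.
worklist = deque(["p", "q"])
while worklist:
    n = worklist.popleft()
    for succ in subset_edges[n]:
        if not pts[n] <= pts[succ]:
            pts[succ] |= pts[n]
            worklist.append(succ)

print(sorted(pts["s"]))  # ['obj1', 'obj2']
```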

Relevance: 80.00%

Abstract:

We address the task of mapping a given textual domain model (e.g., an industry-standard reference model) for a given domain (e.g., ERP) to the source code of an independently developed application in the same domain. This has applications in improving the understandability of an existing application, migrating it to a more flexible architecture, or integrating it with other related applications. We use the vector-space model to abstractly represent domain-model elements as well as source-code artifacts. The key novelty in our approach is to leverage the relationships between source-code artifacts in a principled way to improve the mapping process. We describe experiments wherein we apply our approach to the task of matching two real, open-source applications to corresponding industry-standard domain models. We demonstrate the overall usefulness of our approach, as well as the role of our propagation techniques in improving the precision and recall of the mapping task.
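
A simplified sketch of this scheme, using TF-IDF vectors and one hypothetical call relationship to propagate scores; the names and the propagation weight are illustrative, not the paper's exact formulation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

domain_elements = ["customer order processing", "inventory stock level"]
artifacts = {
    "OrderService": "process customer order create invoice",
    "StockChecker": "check stock level inventory warehouse",
    "OrderValidator": "validate fields",        # textually weak on its own
}
relations = [("OrderService", "OrderValidator")]  # e.g., a call dependency

names = list(artifacts)
tfidf = TfidfVectorizer().fit(list(artifacts.values()) + domain_elements)
sim = cosine_similarity(tfidf.transform(domain_elements),
                        tfidf.transform(artifacts.values()))

alpha = 0.5  # propagation weight along artifact relationships
for src, dst in relations:
    i, j = names.index(src), names.index(dst)
    sim[:, j] += alpha * sim[:, i]   # related code inherits part of the score

for k, elem in enumerate(domain_elements):
    print(elem, "->", names[sim[k].argmax()])
```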

Relevance: 80.00%

Abstract:

Thin-film transistors (TFTs) based on non-crystalline semiconductors are the building blocks of large-area electronic systems. These devices experience a threshold voltage shift over time due to prolonged gate-bias stress. In this paper we integrate a recursive model for threshold voltage shift with the open source BSIM4V4 model of AIM-Spice. This creates a circuit-simulation tool for TFTs. We demonstrate the integrity of the model using several test cases, including display driver circuits.
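
The recursive model itself is not reproduced in the abstract. As an illustration, the sketch below advances a commonly assumed stretched-exponential law for bias-stress threshold-voltage shift one timestep at a time, so the shifted Vth could be fed back into a SPICE-style transient loop; all parameter values are illustrative:

```python
import numpy as np

# Assumed model (not necessarily the paper's):
#   dVth(t) = (Vgs - Vth0) * (1 - exp(-(t / tau)**beta))
def simulate_vth(vgs, vth0=1.0, tau=1e4, beta=0.44, dt=10.0, steps=1000):
    """Return Vth(t) under constant gate-bias stress vgs."""
    t_eff, trace = 0.0, []
    for _ in range(steps):
        t_eff += dt                          # accumulated stress time
        dvth = (vgs - vth0) * (1.0 - np.exp(-((t_eff / tau) ** beta)))
        trace.append(vth0 + dvth)            # recursive per-step update of Vth
    return np.array(trace)

vth_t = simulate_vth(vgs=10.0)
print(f"Vth after {1000 * 10.0:.0f} s of stress: {vth_t[-1]:.2f} V")
```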

Relevance: 80.00%

Abstract:

Polyhedral techniques for program transformation are now used in several proprietary and open source compilers. However, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this paper, we attempt to bridge this gap by presenting techniques that can be used to extract polyhedral representation from dataflow programs and to synthesize them from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs which we built using our techniques and other popular research tools such as Clan and Pluto. For the purpose of experimental evaluation, we used our tools to compile programs written in LabVIEW, one of the most widely used dataflow programming languages. Results show that dataflow programs transformed using our framework are able to outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30x while running on an 8-core Intel system.
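
To make the representation concrete, the toy below encodes the iteration domain of a 2-deep loop nest as affine inequalities and applies a loop interchange by permuting the rows of an affine schedule. This is a generic polyhedral illustration, not PolyGLoT's actual extraction code:

```python
import numpy as np

N = 4
A = np.array([   # 0 <= i < N, 0 <= j < N encoded as rows [ci, cj, c0]
    [ 1,  0,  0],        # i >= 0
    [-1,  0,  N - 1],    # i <= N-1
    [ 0,  1,  0],        # j >= 0
    [ 0, -1,  N - 1],    # j <= N-1
])

# The iteration domain: integer points satisfying A @ [i, j, 1] >= 0.
domain = [(i, j) for i in range(N) for j in range(N)
          if all(A @ np.array([i, j, 1]) >= 0)]

# An affine schedule decides execution order; swapping its rows performs
# a loop interchange without touching the domain.
schedule = np.array([[0, 1], [1, 0]])   # run in (j, i) order
order = sorted(domain, key=lambda p: tuple(schedule @ np.array(p)))
print(order[:5])  # [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1)] - j outermost
```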

Relevance: 80.00%

Abstract:

Atomization is the process of disintegration of a liquid jet into ligaments and subsequently into smaller droplets. A liquid jet injected from a circular orifice into a cross flow of air undergoes atomization primarily due to the interaction of the two phases rather than intrinsic breakup. Direct numerical simulation of this process resolving the finest droplets is computationally very expensive and impractical. In the present study, we resort to multiscale modelling to reduce the computational cost. The primary breakup of the liquid jet is simulated using Gerris, an open source code that employs a Volume-of-Fluid (VOF) algorithm. The smallest droplets formed during primary atomization are modeled as Lagrangian particles. This one-way coupling approach is validated with the simple test case of tracking a particle in a Taylor-Green vortex. The temporal evolution of the liquid jet forming the spray is captured, and the flattening of the cylindrical liquid column prior to breakup is observed. The size distribution of the resultant droplets is presented at different distances downstream of the injection location, and their spatial evolution is analyzed.
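
The validation case is easy to reproduce in miniature: a massless tracer advected one-way through the steady two-dimensional Taylor-Green field u = sin(x)cos(y), v = -cos(x)sin(y) should conserve the stream function sin(x)sin(y). A sketch with classical RK4 integration (step sizes are illustrative):

```python
import numpy as np

def velocity(p):
    """Steady 2-D Taylor-Green vortex velocity at point p = (x, y)."""
    x, y = p
    return np.array([np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)])

def rk4_step(p, dt):
    """Classical fourth-order Runge-Kutta step for the tracer position."""
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * dt * k1)
    k3 = velocity(p + 0.5 * dt * k2)
    k4 = velocity(p + dt * k3)
    return p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.array([1.0, 0.5])                      # initial particle position
psi0 = np.sin(p[0]) * np.sin(p[1])            # stream function, conserved
for _ in range(10_000):
    p = rk4_step(p, dt=0.01)
psi = np.sin(p[0]) * np.sin(p[1])
print(f"stream-function drift: {abs(psi - psi0):.2e}")  # ~0 for a tracer
```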

Relevance: 80.00%

Abstract:

Here we extend the exploration of significantly super-Chandrasekhar magnetized white dwarfs by numerically computing axisymmetric stationary equilibria of differentially rotating magnetized polytropic compact stars in general relativity (GR), within the ideal magnetohydrodynamic regime. We use a general relativistic magnetohydrodynamic (GRMHD) framework that describes rotating and magnetized axisymmetric white dwarfs, choosing appropriate rotation laws and magnetic field profiles (toroidal and poloidal). The numerical procedure for finding solutions in this framework uses the 3+1 formalism of numerical relativity, implemented in the open source XNS code. We construct equilibrium sequences by varying different physical quantities in turn, and highlight the plausible existence of super-Chandrasekhar white dwarfs with masses in the range of 2-3 solar masses, central (deep interior) magnetic fields of the order of 10^14 G, and differential rotation with surface time periods of about 1-10 s. We note that such white dwarfs are candidates for the progenitors of peculiar, overluminous Type Ia supernovae, to which observational evidence ascribes masses in the range 2.1-2.8 solar masses. We also present some interesting results related to the structure of such white dwarfs, especially the existence of polar hollows in special cases.

Relevance: 80.00%

Abstract:

In this paper we introduce four scenario Cluster based Lagrangian Decomposition (CLD) procedures for obtaining strong lower bounds on the (optimal) solution value of two-stage stochastic mixed 0-1 problems. At each iteration of the Lagrangian based procedures, the traditional aim consists of obtaining the solution value of the corresponding Lagrangian dual by solving scenario submodels once the nonanticipativity constraints have been dualized. Instead of considering a splitting variable representation over the set of scenarios, we propose to decompose the model into a set of scenario clusters. We compare the computational performance of four Lagrange multiplier updating procedures, namely the Subgradient Method, the Volume Algorithm, the Progressive Hedging Algorithm and the Dynamic Constrained Cutting Plane scheme, for different numbers of scenario clusters and different dimensions of the original problem. Our computational experience shows that the CLD bound and its computational effort depend on the number of scenario clusters considered. In any case, our results show that the CLD procedures outperform the traditional LD scheme for single scenarios both in the quality of the bounds and in computational effort. All the procedures have been implemented in an experimental C++ code. A broad computational experience is reported on a testbed of randomly generated instances, using the MIP solvers COIN-OR and CPLEX for the auxiliary mixed 0-1 cluster submodels, the latter run within the open source engine COIN-OR. We also give computational evidence of the model-tightening effect that preprocessing techniques, cut generation and appending, and parallel computing tools have in stochastic integer optimization. Finally, we have observed that plain use of either solver does not provide the optimal solution of the testbed instances in affordable elapsed time, except for two toy instances. On the other hand, the proposed procedures provide strong lower bounds on (or the same solution value as) the quasi-optimal solution obtained by other means for the original stochastic problem, in considerably shorter elapsed time.
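
As an illustration of one of the four updating procedures, the sketch below runs the Subgradient Method on a tiny Lagrangian dual, with a single dualized constraint standing in for the nonanticipativity constraints that link scenario clusters; the problem data are invented, not from the paper's testbed:

```python
import numpy as np

# Toy problem: min c@x  s.t.  A@x >= b,  0 <= x <= 5 (optimum value 8.0).
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0]])      # dualized constraint: x1 + x2 >= 4
b = np.array([4.0])
lower, upper = 0.0, 5.0

lam = np.zeros(1)               # Lagrange multipliers (kept >= 0)
best = -np.inf
for k in range(1, 200):
    reduced = c - A.T @ lam                      # relaxed subproblem objective
    x = np.where(reduced >= 0, lower, upper)     # minimize over the box
    best = max(best, float(c @ x + lam @ (b - A @ x)))  # valid lower bound
    subgrad = b - A @ x                          # violation of dualized constraint
    lam = np.maximum(0.0, lam + (1.0 / k) * subgrad)    # diminishing step

print(f"best Lagrangian lower bound: {best:.3f}")  # approaches the optimum 8.0
```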

Relevance: 80.00%

Abstract:

This Final Degree Project (PFC) is part of a larger project whose ultimate goal is to study the efficiency of developing software solutions for organizations based on open source CMS tools built on Java and PHP. The present PFC was developed in the legal domain, with Liferay proposed as the CMS. Through the content manager, a way had to be found to share documents so that they were hosted on a server where any previously registered user could access them from a remote site. It also had to include a calendar section for creating events. As a document manager, Liferay meets expectations and can satisfy all the functionality the project requires. The only drawback found was the inability to customize certain details of some modules to fit the desired needs exactly. Language: Spanish.

Relevance: 80.00%

Abstract:

This PFC covers the construction of a system for accessing the catalogue of a home library through several user interfaces that consume the same REST web services exposed on a Java EE server. Its development involves integrating multiple open source technologies at several levels.
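
A rough sketch of the pattern described, several thin clients sharing one REST API. The endpoint paths, host and JSON fields below are hypothetical, since the abstract does not specify the actual API, and the client is shown in Python for brevity even though the PFC's clients may differ:

```python
import requests  # assumption: any HTTP client would do

BASE = "http://localhost:8080/library/api"   # hypothetical Java EE deployment

def list_books(query=None):
    """GET the catalogue, optionally filtered by a free-text query."""
    params = {"q": query} if query else {}
    resp = requests.get(f"{BASE}/books", params=params, timeout=5)
    resp.raise_for_status()
    return resp.json()

def add_book(title, author):
    """POST a new catalogue entry."""
    resp = requests.post(f"{BASE}/books",
                         json={"title": title, "author": author}, timeout=5)
    resp.raise_for_status()
    return resp.json()

# Any UI (web, desktop or mobile) would issue the same calls:
# add_book("El Quijote", "Cervantes")
# print(list_books(query="Cervantes"))
```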