995 results for software libraries


Relevance: 30.00%

Abstract:

In an information-driven society, where the volume and value of produced and consumed data assume growing importance, digital libraries play a particularly important role. This work analyzes the limitations of current digital library management systems and the opportunities offered by recent distributed computing models. The result of this work is the implementation of the University of Aveiro's integrated system for digital libraries and archives. It concludes by analyzing the system in production and proposing a new service-oriented digital library architecture supported by a peer-to-peer infrastructure.

Relevance: 30.00%

Abstract:

Newspapers cover a large amount of information every day on topics of varied interest. To a university, newspapers are essential components of communication, as they cover various happenings in the university. These items of information are often neither stored properly nor put into retrieval systems for future use. The news and views that appear in newspapers can effectively be organized in a digital library using open source software. The CUSAT digital library (http://dspace.cusat.ac.in/dspace/) has organized news items that appeared in local newspapers about the university under a special community named "CUSAT-News". This article describes the methods of collecting, selecting, organizing, providing access to, and preserving news items required by a university using the DSpace open source software.
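One concrete way to reuse such a collection is through OAI-PMH, the harvesting protocol that DSpace repositories expose. The sketch below pulls Dublin Core records from a DSpace OAI endpoint; the endpoint path and the setSpec for the CUSAT-News community are assumptions, since both depend on the local DSpace configuration.

```python
# Minimal sketch: harvesting Dublin Core records for news items from a
# DSpace repository's OAI-PMH endpoint. The endpoint path and setSpec are
# hypothetical; real values depend on the DSpace configuration.
import requests
import xml.etree.ElementTree as ET

OAI = "http://dspace.cusat.ac.in/dspace-oai/request"   # hypothetical path
OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
NS = {"dc": "http://purl.org/dc/elements/1.1/"}

resp = requests.get(OAI, params={
    "verb": "ListRecords",
    "metadataPrefix": "oai_dc",
    "set": "com_cusat_news",   # hypothetical setSpec for the CUSAT-News community
})
root = ET.fromstring(resp.content)

for record in root.iter(OAI_NS + "record"):
    title = record.find(".//dc:title", NS)
    date = record.find(".//dc:date", NS)
    if title is not None:
        print(date.text if date is not None else "?", "-", title.text)
```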

Relevance: 30.00%

Abstract:

The majority of existing application profiling techniques aggregate and report performance costs by method or calling context. Modern large-scale object-oriented applications consist of thousands of methods with complex calling patterns. Consequently, when profiled, their performance costs tend to be thinly distributed across many thousands of locations, with few easily identifiable optimisation opportunities. However, experienced performance engineers know that there are repeated patterns of method calls in the execution of an application, induced by the libraries, design patterns and coding idioms used in the software. Automatically identifying and aggregating costs over these patterns of method calls allows us to identify opportunities to improve performance by optimising these patterns. We have developed an analysis technique that is able to identify the entry point methods, which we call subsuming methods, of such patterns. Our offline analysis runs over previously collected runtime performance data structured in a calling context tree, such as that produced by a large number of existing commercial and open source profilers. We have evaluated our approach on the DaCapo benchmark suite, showing that our analysis significantly reduces the size and complexity of the runtime performance data set, facilitating its comprehension and interpretation. We also demonstrate, with a collection of case studies, that our analysis identifies new optimisation opportunities that can lead to significant performance improvements (from 20% to over 50% in our case studies).
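The key idea, attributing costs in a calling context tree to pattern entry points rather than to individual methods, can be illustrated with a toy aggregation. The heuristic below (a package boundary marks a subsuming method) is a deliberate simplification, not the paper's actual analysis.

```python
# A simplified sketch of aggregating calling-context-tree (CCT) costs at
# "entry point" methods. The real analysis is more involved; here we use one
# crude heuristic: a node subsumes its subtree when its method belongs to a
# different package than its caller's.
from collections import defaultdict

class Node:
    def __init__(self, method, cost, children=()):
        self.method = method            # e.g. "java.util.HashMap.put"
        self.cost = cost                # exclusive cost of this node
        self.children = list(children)

def package(method):
    return method.rsplit(".", 2)[0]     # strip class and method name

def aggregate(node, caller=None, owner=None, totals=None):
    """Attribute every node's cost to the nearest enclosing subsuming method."""
    totals = totals if totals is not None else defaultdict(int)
    if owner is None or (caller and package(node.method) != package(caller)):
        owner = node.method             # new subsuming entry point
    totals[owner] += node.cost
    for child in node.children:
        aggregate(child, node.method, owner, totals)
    return totals

cct = Node("app.Main.run", 5, [
    Node("java.util.HashMap.put", 10, [Node("java.util.HashMap.resize", 40)]),
    Node("app.Main.step", 3),
])
print(dict(aggregate(cct)))
# {'app.Main.run': 8, 'java.util.HashMap.put': 50}
```

Even this crude aggregation shows how thinly spread costs (10 and 40 units inside HashMap) collapse into one actionable entry point.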

Relevance: 30.00%

Abstract:

In the past few years, libraries have started to design public programs that educate patrons about tools and techniques for protecting personal privacy. But do end-user solutions provide adequate safeguards against surveillance by corporate and government actors? What does a comprehensive plan for privacy entail if libraries are to live up to their privacy values? In this paper, the authors discuss the complexity of the surveillance architecture that the library institution may confront when seeking to defend the privacy rights of patrons. This architecture consists of three main parts: the physical or material aspects, the logical characteristics, and the social factors of information and communication flows in the library setting. For each category, the authors present short case studies drawn from practitioner experience, research, and public discourse. The case studies probe the challenges faced by the library, not only in making hardware and software choices but also in choices related to staffing and program design. The paper shows that privacy choices intersect not only with free speech and chilling effects, but also with questions concerning intellectual property, organizational development, civic engagement, technological innovation, public infrastructure, and more. The paper ends with a discussion of what libraries will require in order to sustain and improve their efforts to serve as stewards of privacy in the 21st century.

Relevance: 30.00%

Abstract:

An efficient technique to cut polygonal meshes as a step in the geometric modeling of topographic and geological data has been developed. In boundary-represented models of outcropping strata and faulted horizons, polygonal meshes often intersect each other. TRICUT determines the line of intersection and re-triangulates the area of contact. Along this line the mesh is split into two or more parts, which can be selected for removal. The user interaction takes place in the 3D model space; the intersection, selection and removal are under graphic control. The visualization of outcropping geological structures in digital terrain models is improved by determining intersections against a slightly shifted terrain model. The outcrop line thus becomes a surface that overlaps the terrain in its initial position. The area of this overlapping surface changes with the strike and dip of the structure, the morphology and the offset. Some applications of TRICUT to different real datasets are shown. TRICUT is implemented in C++ using the Visualization Toolkit in conjunction with the RAPID and TRIANGLE libraries. The program runs under Linux and UNIX using the Mesa OpenGL library. This work gives an example of solving a complex 3D geometric problem by integrating available robust public domain software.
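Since the abstract names its toolchain, the core step can be sketched with VTK's Python bindings: vtkIntersectionPolyDataFilter computes the intersection line of two triangulated surfaces and can split both inputs along it. The two spheres below merely stand in for a terrain model and a geological horizon, and option names should be checked against the installed VTK version.

```python
# A minimal sketch of TRICUT's core step using VTK: intersect two
# triangulated surfaces, obtaining the intersection line and both meshes
# re-triangulated along it. The spheres are stand-ins for real meshes.
import vtk

def make_sphere(cx, radius):
    src = vtk.vtkSphereSource()
    src.SetCenter(cx, 0.0, 0.0)
    src.SetRadius(radius)
    src.Update()
    return src.GetOutput()

terrain = make_sphere(0.0, 1.0)     # stand-in for the terrain mesh
horizon = make_sphere(0.5, 1.0)     # stand-in for the horizon mesh

cut = vtk.vtkIntersectionPolyDataFilter()
cut.SetInputData(0, terrain)
cut.SetInputData(1, horizon)
cut.SplitFirstOutputOn()            # re-triangulate mesh 0 along the cut
cut.SplitSecondOutputOn()           # re-triangulate mesh 1 along the cut
cut.Update()

line = cut.GetOutput(0)             # the intersection line itself
print("intersection line has", line.GetNumberOfLines(), "segments")
```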

Relevance: 30.00%

Abstract:

In DNA microarray experiments, the gene fragments spotted on the slides are usually obtained by synthesizing specific oligonucleotides able to amplify genes through PCR. Shotgun library sequences are an alternative to primer synthesis for the study of each gene in a genome. The possibility of putting thousands of gene sequences on a single slide makes it possible to use shotgun clones for microarray analysis without a completely sequenced genome. We developed the OC Identifier tool (optimal clone identifier for genomic shotgun libraries) for the identification of unique genes in shotgun libraries based on a partially sequenced genome; this allows clones to be used simultaneously in projects such as transcriptome and phylogeny studies, comparative genomic hybridization and genome assembly. The OC Identifier tool supports comparative genome analysis, draws on biological databases and relational database queries, and provides bioinformatics tools to identify clones that contain unique genes, as an alternative to primer synthesis. It allows clones to be analyzed during the sequencing phase, making it possible to select genes of interest for the construction of a DNA microarray.
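The selection logic can be illustrated with a deliberately naive sketch: keep one representative clone per known gene and drop redundant or ambiguous ones. All sequences are invented, and exact substring matching stands in for the BLAST-style comparison a real pipeline would use; this is not the OC Identifier's actual algorithm.

```python
# Toy sketch of selecting clones that carry unique genes: given clone
# sequences and genes already known from a partially sequenced genome, keep
# one representative clone per gene and discard redundant ones.
genes = {
    "geneA": "ATGGCTAAGGTT",
    "geneB": "ATGCCCGGGTTT",
}
clones = {
    "clone1": "TTTATGGCTAAGGTTCCC",   # contains geneA
    "clone2": "GGGATGGCTAAGGTTAAA",   # also geneA -> redundant
    "clone3": "AAAATGCCCGGGTTTGGG",   # contains geneB
    "clone4": "CCCCCCCCCCCCCCCCCC",   # hits nothing known
}

selected, covered = {}, set()
for clone_id, seq in clones.items():
    hits = {g for g, gseq in genes.items() if gseq in seq}
    if len(hits) == 1 and not hits & covered:   # exactly one new, unique gene
        selected[clone_id] = hits.pop()
        covered.add(selected[clone_id])

print(selected)   # {'clone1': 'geneA', 'clone3': 'geneB'}
```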

Relevance: 30.00%

Abstract:

The results obtained through biological research usually need to be analyzed with computational tools, since manual analysis becomes unfeasible given the complexity and size of the data. For instance, the study of quasispecies frequently demands the analysis of many very lengthy nucleotide and amino acid sequences. Bioinformatics tools for the study of quasispecies are therefore constantly being developed in response to the problems biologists encounter. In the present study, we address the development of a software tool for evaluating population diversity in quasispecies. Special attention is paid to the localization of genome regions prone to change, as well as of possible hot spots.
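One standard way to localize such variable regions, offered here as an illustration rather than as the tool's actual method, is per-column Shannon entropy over an alignment: conserved columns score near zero, candidate hot spots score high.

```python
# Minimal sketch: compute per-column Shannon entropy over a toy quasispecies
# alignment and flag high-entropy columns as candidate hot spots.
# The 0.5-bit threshold is arbitrary.
import math
from collections import Counter

alignment = [
    "ATGGCTAAGT",
    "ATGGCAAAGT",
    "ATGGCGAAGT",
    "ATGACTAAGT",
]

def column_entropy(column):
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

hot_spots = []
for i, column in enumerate(zip(*alignment)):
    h = column_entropy(column)
    if h > 0.5:
        hot_spots.append((i, round(h, 2)))

print(hot_spots)   # [(3, 0.81), (5, 1.5)]
```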

Relevance: 30.00%

Abstract:

This paper describes the open source framework MARVIN for rapid application development in the field of biomedical and clinical research. MARVIN applications consist of modules that can be plugged together to provide the functionality required for a specific experimental scenario. Application modules work on a common patient database that is used to store and organize medical data as well as derived data. MARVIN provides a flexible input/output system with support for many file formats, including DICOM, various 2D image formats and surface mesh data. Furthermore, it implements an advanced visualization system and interfaces to a wide range of 3D tracking hardware. Because it uses only highly portable libraries, MARVIN applications run on Unix/Linux, Mac OS X and Microsoft Windows.
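The module-plus-shared-database pattern the abstract describes can be sketched in a few lines. The following Python skeleton is purely illustrative; all class and method names are invented and do not reflect MARVIN's actual API.

```python
# Structural sketch (not MARVIN's API) of the pattern described above:
# independent modules plugged into an application, all working against one
# shared patient data store that holds raw and derived data.
class PatientDB:
    def __init__(self):
        self.records = {}              # patient id -> {key: raw/derived data}
    def put(self, pid, key, value):
        self.records.setdefault(pid, {})[key] = value
    def get(self, pid, key):
        return self.records[pid][key]

class Module:
    def run(self, db): ...

class DicomImport(Module):
    def run(self, db):
        db.put("patient-1", "ct", "<DICOM volume>")      # placeholder payload

class SurfaceExtraction(Module):
    def run(self, db):
        volume = db.get("patient-1", "ct")
        db.put("patient-1", "mesh", f"mesh({volume})")   # derived data

class Application:
    def __init__(self, modules):
        self.db, self.modules = PatientDB(), modules
    def run(self):
        for m in self.modules:         # modules execute against the shared DB
            m.run(self.db)

app = Application([DicomImport(), SurfaceExtraction()])
app.run()
print(app.db.get("patient-1", "mesh"))   # mesh(<DICOM volume>)
```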

Relevance: 30.00%

Abstract:

This work addresses the construction of open source institutional repositories with the Greenstone software. It follows two paths, one theoretical and one model-based, the latter developing a practical application. The first path, which constitutes the theoretical framework, comprises a description of the open access and open source philosophies for the creation of institutional repositories. It also covers, in general terms, topics related to the OAI protocol, the legal framework concerning intellectual property, licensing, and an introduction to metadata. The same path addresses theoretical aspects of institutional repositories: definitions, benefits, types, the components involved, open source tools for creating repositories, a description of those tools and, finally, an extended description of the Greenstone software, chosen for the model development of the institutional repository presented in a digital demonstrator. The second path, corresponding to the model development, includes on the one hand the model of the repository itself built with Greenstone, detailing its constituent components one by one; this is the theoretical-practical input for the step-by-step design of the institutional repository. On the other hand, it includes the result of the modeling, that is, the repository that was created, which is exported as a web environment to digital media for visibility. The step-by-step design of the repository constitutes the core contribution of this thesis.

Relevance: 30.00%

Abstract:

Memory analysis techniques have become sophisticated enough to model, with a high degree of accuracy, the manipulation of simple memory structures (finite structures, singly/doubly linked lists and trees). However, modern programming languages provide extensive library support, including a wide range of generic collection objects that make use of complex internal data structures. While these data structures ensure that the collections are efficient, their representations often cannot be effectively modeled by existing methods (whether due to excessive analysis runtime or due to the inability to represent the required information). This paper presents a method to represent collections using an abstraction of their semantics. The abstract semantics for the collection objects are constructed in a manner that allows individual elements in the collections to be identified. Our construction also supports iterators over the collections and is able to model the position of the iterators with respect to the elements in the collection. By ordering the contents of the collection based on the iterator position, the model can represent a notion of progress when iteratively manipulating the contents of a collection. These features allow strong updates to the individual elements in the collection, as well as strong updates over the collections themselves.
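The flavor of the abstraction can be conveyed with a toy model: partition the collection's elements into those already visited, the one at the iterator, and those still pending, so that iteration progress, and hence strong updates, become expressible. This simplification is illustrative only, not the paper's formal construction.

```python
# Toy sketch: model a collection as three summary regions ordered by iterator
# position (visited / current / pending). Tracking the current element
# separately is what makes a strong update on it expressible.
class AbstractCollection:
    def __init__(self, element_fact):
        self.visited = set()            # facts about elements already iterated
        self.current = None             # fact about the element at the iterator
        self.pending = {element_fact}   # facts about elements still to come

    def advance(self, refined_fact):
        """Move the iterator: the current element is strongly updated to
        refined_fact and retired into the visited region."""
        if self.current is not None:
            self.visited.add(self.current)
        self.current = refined_fact

abs_list = AbstractCollection("maybe-null")
abs_list.advance("non-null")            # strong update: this element checked
abs_list.advance("non-null")
print(abs_list.visited, abs_list.current, abs_list.pending)
# {'non-null'} non-null {'maybe-null'}
```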

Relevance: 30.00%

Abstract:

In this work, a complete set of libraries for developing wireless sensor applications in a simple and intuitive way is presented, in contrast to the most widespread application abstraction mechanisms, which are based on operating systems. The main goal of this software platform, named CookieLibs, is to provide the highest level of abstraction for the management of WSNs in the simplest possible way for users who are not familiar with software design, in order to achieve a fast profiling mechanism for reliable prototyping based on the Cookies platform.
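To convey the intended abstraction level, here is an illustrative sketch, written in Python for readability even though the Cookies nodes themselves are embedded devices; every name is invented and none of this is CookieLibs' real API. The point is that the application author sees sensing and sending primitives, not operating-system services.

```python
# Illustrative sketch (invented names) of a high-abstraction WSN API:
# the whole application is "sample, then transmit", with scheduling, radio
# and ADC details hidden behind the library.
import random
import time

class SensorNode:
    def __init__(self, node_id):
        self.node_id = node_id
    def read_temperature(self):
        return 20.0 + random.random() * 5      # stand-in for an ADC read
    def send(self, payload):
        print(f"node {self.node_id} -> sink: {payload}")

def application(node, samples=3, period_s=0.1):
    """The entire 'application': no OS calls exposed to the user."""
    for _ in range(samples):
        node.send({"temp": round(node.read_temperature(), 1)})
        time.sleep(period_s)

application(SensorNode(node_id=7))
```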

Relevance: 30.00%

Abstract:

New digital artifacts are emerging in data-intensive science. For example, scientific workflows are executable descriptions of scientific procedures that define the sequence of computational steps in an automated data analysis, supporting reproducible research and the sharing and replication of best practice and know-how through reuse. Workflows are specified at design time and interpreted through their execution in a variety of situations, environments and domains. Hence, it is essential to preserve both their static and dynamic aspects, along with the research context in which they are used. To achieve this, we propose the use of multidimensional digital objects (Research Objects) that aggregate the resources used and/or produced in scientific investigations, including workflow models, the provenance of their executions, and links to the relevant associated resources, along with technological support for their preservation and for efficient retrieval and reuse. In this direction, we specified a software architecture for the design and implementation of a Research Object preservation system, and realized this architecture as a set of services and clients, drawing together practices in digital libraries, preservation systems, workflow management, social networking and Semantic Web technologies. In this paper, we describe the backbone of this realization: a digital library system built on top of dLibra.
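The aggregation idea is easy to picture as a manifest that bundles the workflow, the provenance of its runs, and links to associated resources. The sketch below loosely imitates the Research Object manifest style but is simplified for illustration; identifiers and file names are invented.

```python
# Simplified, invented example of a Research-Object-style aggregation:
# one manifest binding a workflow definition, provenance traces and
# related resources, plus annotations that record research context.
import json

research_object = {
    "id": "ro://example/investigation-42",   # hypothetical identifier
    "aggregates": [
        {"uri": "workflow.t2flow", "type": "workflow-definition"},
        {"uri": "runs/2013-05-01.prov.ttl", "type": "provenance-trace"},
        {"uri": "http://example.org/dataset/raw", "type": "input-data"},
    ],
    "annotations": [
        {"about": "workflow.t2flow",
         "content": "Sequence alignment pipeline, version 3"},
    ],
}

print(json.dumps(research_object, indent=2))
```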

Relevance: 30.00%

Abstract:

Background: Gray scale images make up the bulk of data in biomedical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy through high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks; specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to these high-level tools is to develop new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and, again, not well suited to a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, for example, visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools provide this kind of processing interface; they are usually quite task-specific, and they offer no clear path for turning a prototype shell script into a new command line tool.

Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design, based on atomic plug-ins and single-task command line tools, makes MIA easy to extend, usually without the need to touch or recompile existing code.

Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios: motion compensation in myocardial perfusion imaging, the processing of the high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms with shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
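The prototyping pattern described above, single-task tools chained through files on disk and configured with string-based filter descriptions, looks roughly like the sketch below. The tool names and filter syntax are illustrative placeholders modeled on the description, not MIA's actual command set.

```python
# Sketch of the disk-backed pipeline pattern: each step is a single-task
# command line tool; intermediate results live on disk; filters are selected
# by string descriptions. Tool names and syntax are hypothetical.
import subprocess

steps = [
    ["mia-filter", "-i", "input.png", "-o", "tmp1.png", "-f", "gauss:w=2"],
    ["mia-filter", "-i", "tmp1.png", "-o", "tmp2.png", "-f", "binarize:min=80"],
    ["mia-measure", "-i", "tmp2.png", "-o", "result.txt"],
]

for cmd in steps:
    print("$", " ".join(cmd))            # what a shell script would run
    # subprocess.run(cmd, check=True)    # uncomment with real tools installed
```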

Relevance: 30.00%

Abstract:

Mosaics are high-resolution images obtained aerially and employed in several scientific research areas, such as environmental monitoring and precision agriculture. Although many high-resolution maps are produced on commercial demand, they can also be acquired with commercial aerial vehicles, which provide more experimental autonomy and availability. As for mosaicing-based aerial mission planners, there is little, if any, free-of-charge software. Therefore, this paper presents a framework designed with open source tools and libraries as an alternative to commercial tools for carrying out mosaicing tasks.
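A core calculation in any such planner is laying out the survey path. The sketch below derives the image footprint from flight altitude and camera field of view, then generates a back-and-forth (boustrophedon) pattern with a requested overlap; all parameter values are arbitrary examples, and this is a generic illustration rather than the paper's implementation.

```python
# Minimal sketch of mosaicing mission planning: footprint from altitude and
# field of view, flight-line spacing from the requested overlap, then a
# zigzag list of waypoints covering the survey area.
import math

def survey_path(width_m, height_m, altitude_m, fov_deg=60.0, overlap=0.3):
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
    spacing = footprint * (1 - overlap)          # distance between flight lines
    n_lines = math.ceil(width_m / spacing) + 1
    waypoints = []
    for i in range(n_lines):
        x = min(i * spacing, width_m)
        ys = (0.0, height_m) if i % 2 == 0 else (height_m, 0.0)   # zigzag
        waypoints += [(x, ys[0]), (x, ys[1])]
    return waypoints

for wp in survey_path(width_m=200, height_m=100, altitude_m=50):
    print(f"({wp[0]:6.1f}, {wp[1]:6.1f})")
```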