994 results for Machine-readable Library Cataloguing
Abstract:
Fault tolerance has become a major issue for computer and software engineers because the occurrence of faults increases the cost of using a parallel computer. RADIC is a fault tolerance architecture for message-passing systems that is transparent, decentralized, flexible, and scalable. This master's thesis presents the methodology used to implement the RADIC architecture over Open MPI, a well-known and widely used message-passing library, while preserving the characteristics of the RADIC architecture. To validate the implementation we executed a synthetic ping program, and to evaluate its performance we used the NAS Parallel Benchmarks. The results show that the performance of the RADIC architecture depends on the communication pattern of the running parallel application. Furthermore, our implementation demonstrates that the RADIC architecture can be implemented over an existing message-passing library.
Abstract:
Study carried out during a stay at the Universität Karlsruhe between January and May 2007. Data structure libraries define interfaces and implement fundamental algorithms and data structures. One example is the Standard Template Library (STL), which is part of the C++ programming language. As part of a doctoral thesis, work is under way to obtain more efficient and/or more versatile implementations of some STL components, using techniques from algorithm engineering. In particular, knowledge from the algorithms community is integrated and the existing technology is taken into account. The work during the stay was framed within the development of the Multi-Core STL (MCSTL). The MCSTL is a parallel implementation of the STL for multi-core machines. Multi-core machines are currently the only kind of machine available on the market; therefore, even if the parallelism obtained is not optimal, it is preferable to leaving processors idle, since the trend is for the number of processors per computer to keep increasing.
Abstract:
PADICAT is the web archive created in 2005 in Catalonia (Spain) by the Library of Catalonia (BC), the national library of Catalonia, with the aim of collecting, processing and providing permanent access to the digital heritage of Catalonia. Its harvesting strategy is based on a hybrid model (massive harvesting of the SPA top-level domain; selective compilation of the web site output of Catalan organizations; focused harvesting of public events). The system provides open access to the whole collection on the Internet. We consider it necessary to complement the current search and visualization software with an open-source software tool, CAT (Curator Archiving Tool), composed of three modules aimed at effectively managing the processes of human cataloguing; publishing directories of the digital resources and special collections; and offering statistical information of added value to end users. Within the framework of the International Internet Preservation Consortium meeting (Vienna, 2010), the progress in the development of this new tool, and the philosophy that has motivated its design, are presented to the international community.
Abstract:
We have initiated a gene discovery program in Schistosoma mansoni based on the technique of Expressed Sequence Tags (ESTs), i.e. partial sequences of cDNAs obtained from single passes in automatic DNA sequencers. ESTs can be used to identify genes on the basis of their homology with sequences from other species deposited in DNA or protein databases. Transcripts with sequences without matches in the databases may represent novel parasite-specific genes. This approach has proved very efficient, and in less than two years a broad range of novel genes has already been ascertained, more than doubling the number of known S. mansoni genes.
Advanced mapping of environmental data: Geostatistics, Machine Learning and Bayesian Maximum Entropy
Abstract:
This book combines geostatistics and global mapping systems to present an up-to-the-minute study of environmental data. Featuring numerous case studies, the reference covers model-dependent (geostatistics) and data-driven (machine learning) analysis techniques such as risk mapping, conditional stochastic simulations, descriptions of spatial uncertainty and variability, artificial neural networks (ANN) for spatial data, Bayesian maximum entropy (BME), and more.
Abstract:
The paper presents an approach for mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of radar Doppler imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
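The abstract above pairs geostatistical predictors (ordinary kriging, kriging with external drift) with neural networks for precipitation mapping. As a rough illustration of the geostatistical side only, here is a minimal ordinary-kriging sketch in Python/NumPy with an assumed exponential covariance model; the function name, the parameters `sill` and `rng`, and the toy data are hypothetical and not taken from the paper.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=1.0):
    """Ordinary kriging with an exponential covariance model (a common,
    assumed choice; the paper does not specify its variogram).
    xy:  (n, 2) sample locations
    z:   (n,)   observed values
    xy0: (m, 2) prediction locations
    Returns the (m,) kriging predictions."""
    def cov(a, b):
        # Pairwise distances, then exponential covariance C(d) = sill * exp(-d / rng)
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-d / rng)

    n = len(z)
    # Ordinary kriging system: sample covariances bordered by the
    # unbiasedness constraint (weights must sum to 1).
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = cov(xy, xy)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    A[n, n] = 0.0
    b = np.empty((n + 1, len(xy0)))
    b[:n] = cov(xy, xy0)
    b[n] = 1.0
    w = np.linalg.solve(A, b)   # rows 0..n-1: weights; row n: Lagrange multiplier
    return w[:n].T @ z
```

Because this covariance model has no nugget term, the predictor interpolates the samples exactly, and far from all samples the prediction reverts toward the sample mean, two standard properties of ordinary kriging.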
Abstract:
Forensic examinations of ink have been performed since the beginning of the 20th century. Since the 1960s, the International Ink Library, maintained by the United States Secret Service, has supported those analyses. Until 2009, the search and identification of inks were essentially performed manually. This paper describes the results of a project designed to improve ink samples' analytical and search processes. The project focused on the development of improved standardization procedures to ensure the best possible reproducibility between analyses run on different HPTLC plates. The successful implementation of this new calibration method enabled the development of mathematical algorithms and of a software package to complement the existing ink library.
Abstract:
In this book, the author proposes a theoretical conceptualization of the co-presence of multiple worlds within a single film, addressing various parameters (heterogeneity of the image's texture, practices of cross-cutting, typology of embeddings, serial expansion, etc.) on the basis of a corpus of recent fiction films, most of which belong to the science-fiction genre (Matrix, Dark City, Avalon, Resident Evil, Avatar, ...). Stemming from filmology, the notion of "diegesis" is developed both in the potential for autonomization evidenced by the world-centered conception that seems to dominate today in the era of video games, in its links with narrative, and from an intermedial perspective. The films discussed have the particularity of staging machines that allow characters to pass from one world to another: the modes of figuration of these technologies are investigated in relation to the imaginaries of the cinematic apparatus and the potentialities of editing. The comparison between films (Tron and its recent sequel, Total Recall and its remake) and between filmic and literary works (in particular the short stories of Philip K. Dick and Galouye's Simulacron-3) constitutes an analytical tool for grasping the contemporaneity of this issue, considered on the aesthetic level in the context of digital imagery.