974 results for Software packages selection


Relevance: 80.00%

Abstract:

Dissertation for the degree of Master in Civil Engineering, specialisation area of Roads and Transport

Relevance: 80.00%

Abstract:

In recent years, 3D scanners have seen growing use in a wide range of fields. From Medicine to Archaeology, and across various industries, applications of these systems can be found. This growing adoption is due, among other factors, to the increase in computational resources, to the simplicity and diversity of the existing techniques, and to the advantages of 3D scanners over other systems. These advantages are evident in fields such as Forensic Medicine, where photography, traditionally used to document objects and evidence, reduces the acquired information to two dimensions. Despite the advantages of 3D scanners, one drawback is their high price. This work aimed to develop an inexpensive and effective structured-light 3D scanner, together with a set of algorithms to control the scanner, to reconstruct the surfaces of the analysed structures, and to validate the results obtained. The implemented 3D scanner consists of an off-the-shelf camera and video projector, and of a rotating platform developed in this work. The purpose of the rotating platform is to automate the scanner so as to reduce user interaction. The algorithms were developed using open-source software packages and free tools. The 3D scanner was used to acquire 3D information from a skull, and the surface-reconstruction algorithm produced virtual surfaces of the skull. Through the validation algorithm, these surfaces were compared with a surface of the same skull obtained by computed tomography (CT). The validation algorithm produced a map of distances between corresponding regions of the two surfaces, which made it possible to quantify the quality of the surfaces obtained.
Based on the work carried out and the results obtained, it can be stated that a functional basis for the 3D scanning of structures' surfaces was created, ready for future development, showing that it is possible to build alternatives to commercial methods with limited financial resources.
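The validation step described above, a distance map between corresponding regions of two surfaces, can be sketched as a nearest-neighbour distance computation between point clouds. This is an illustrative simplification, not the thesis's algorithm; the function name and toy points are hypothetical:

```python
import numpy as np

def distance_map(surface_a, surface_b):
    """For each 3D point on surface A, the distance to the nearest point on
    surface B. Brute-force nearest neighbour; fine for small point clouds."""
    a = np.asarray(surface_a, dtype=float)   # shape (n, 3)
    b = np.asarray(surface_b, dtype=float)   # shape (m, 3)
    # Pairwise distances, shape (n, m), then minimum over the points of B.
    diffs = a[:, None, :] - b[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=2))
    return dists.min(axis=1)

# Toy example: the reference surface is the scan shifted 1 unit along x.
scan = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
reference = scan + np.array([1.0, 0.0, 0.0])
dmap = distance_map(scan, reference)   # per-point distances to the reference
```

Summary statistics of such a map (mean, maximum) quantify how well a reconstructed surface matches the CT reference.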

Relevance: 80.00%

Abstract:

New technologies applied to image processing and pattern recognition have made great progress over the last decades. Their application cuts across several areas of science, including forensic ballistics. The study of evidence (cartridge cases and bullets) found at a crime scene using image processing and analysis techniques is pertinent because, upon firing, firearms imprint unique marks on the discharged cartridge cases and bullets, making it possible to link evidence fired by the same weapon. Manually comparing evidence found at a crime scene with evidence stored in a database, in terms of visual parameters, is a time-consuming approach. This work aimed to develop automatic techniques, based on computational algorithms, for processing and analysing images of evidence obtained with a comparison optical microscope. The algorithms were developed using open-source libraries and tools. Four modalities were defined for acquiring images of ballistic evidence: Planar, Multifocus, Microscan and Multiscan. Processing algorithms developed specifically for this purpose were applied to the acquired images. These algorithms perform image segmentation, feature extraction and image alignment. The latter correlates the pieces of evidence and yields a quantitative value (a metric) indicating how similar they are. Based on the work carried out and the results obtained, microscopy image acquisition protocols were defined that allow imaging of the regions amenable to study, as well as algorithms that automate the subsequent alignment of evidence images, an advantage over the manual comparison process.
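The quantitative similarity metric mentioned above can be illustrated with normalized cross-correlation, a standard image-comparison score; this is a generic sketch, not the algorithms developed in the work:

```python
import numpy as np

def ncc(image_a, image_b):
    """Normalized cross-correlation between two equally sized grayscale
    patches. Returns a score in [-1, 1]; 1 means a perfect linear match,
    so the score is insensitive to brightness and contrast differences."""
    a = np.asarray(image_a, dtype=float).ravel()
    b = np.asarray(image_b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
score_same = ncc(patch, patch)             # identical patches
score_scaled = ncc(patch, 2 * patch + 5)   # linearly related patches
```

Because the score ignores global brightness and contrast, it suits microscope images of the same toolmark captured under different illumination.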

Relevance: 80.00%

Abstract:

The purpose of this work is to present an algorithm for solving nonlinear constrained optimization problems, using the filter method with the inexact restoration (IR) approach. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The first directs the iterative process towards the feasible region, i.e. it finds a point with a smaller constraint violation. The optimality phase starts from this point, and its goal is to optimize the objective function within the space of satisfied constraints. To evaluate the solution approximations in each iteration, a scheme based on the filter method is used in both phases of the algorithm. The filter method replaces the merit functions based on penalty schemes, avoiding the related difficulties, such as estimating the penalty parameter and the non-differentiability of some merit functions. The filter method is implemented in the context of the line-search globalization technique. A set of more than two hundred AMPL test problems is solved, and the algorithm developed is compared with the LOQO and NPSOL software packages.
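The filter acceptance test at the heart of such a method can be sketched as follows; this is one common variant with a small margin gamma, not necessarily the exact rule used in the paper:

```python
def filter_accepts(filter_entries, h_new, f_new, gamma=1e-5):
    """Check whether a trial point with constraint violation h_new and
    objective value f_new is acceptable to the filter.

    The point is acceptable if, against every stored pair (h, f), it
    sufficiently improves either feasibility or optimality, with a small
    margin gamma; otherwise it is dominated and rejected.
    """
    return all(
        h_new <= (1 - gamma) * h or f_new <= f - gamma * h
        for (h, f) in filter_entries
    )

# Toy filter with two entries (violation, objective).
flt = [(1.0, 10.0), (0.5, 12.0)]
ok = filter_accepts(flt, h_new=0.3, f_new=11.0)        # improves feasibility
rejected = filter_accepts(flt, h_new=0.9, f_new=12.5)  # dominated by (0.5, 12)
```

Unlike a penalty merit function, no weighting parameter between objective and violation has to be estimated; dominance alone decides.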

Relevance: 80.00%

Abstract:

Since the invention of photography, humans have used images to capture, store and analyse the events they are interested in. With the developments in this field, assisted by better computers, it is possible to use image processing technology as an accurate method of analysis and measurement. Image processing's principal qualities are flexibility, adaptability and the ability to process a large amount of information easily and quickly. Successful examples of applications can be seen in several areas of human life, such as biomedicine, industry, surveillance, the military and mapping; indeed, several Nobel prizes are related to imaging. The accurate measurement of deformations, displacements, strain fields and surface defects is challenging in many material tests in Civil Engineering, because traditionally these measurements require complex and expensive equipment, as well as time-consuming calibration. Image processing can be an inexpensive and effective tool for load-displacement measurements. Using an adequate image acquisition system and taking advantage of the computational power of modern computers, it is possible to measure very small displacements with high precision. Several commercial software packages are already on the market, but at a high cost. In this work, block-matching algorithms are used to compare the results of image processing with the data obtained from physical transducers during laboratory load tests. In order to test the proposed solutions, several load tests were carried out in partnership with researchers from the Civil Engineering Department at Universidade Nova de Lisboa (UNL).
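The block-matching idea can be sketched as an exhaustive search minimising the sum of absolute differences (SAD); a minimal illustration, not the software used in the work:

```python
import numpy as np

def best_match(reference_block, search_image):
    """Exhaustive block matching: slide the block over the image and return
    the (row, col) offset minimising the sum of absolute differences (SAD).
    Tracking this offset between frames gives the displacement of the block."""
    blk = np.asarray(reference_block, dtype=float)
    img = np.asarray(search_image, dtype=float)
    bh, bw = blk.shape
    ih, iw = img.shape
    best, best_pos = None, None
    for r in range(ih - bh + 1):
        for c in range(iw - bw + 1):
            sad = np.abs(img[r:r + bh, c:c + bw] - blk).sum()
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

# A 2x2 block placed at position (1, 2) inside a 4x5 image.
img = np.zeros((4, 5))
img[1:3, 2:4] = [[9, 8], [7, 6]]
offset = best_match(np.array([[9.0, 8.0], [7.0, 6.0]]), img)
```

In a load test, the change of `offset` between successive frames of a marked region is the measured displacement, which can then be compared against transducer readings.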

Relevance: 80.00%

Abstract:

A management application for maintaining the software packages distributed to the users of the Ajuntament de Barcelona (Barcelona City Council). The work comprises the design and analysis of the database, together with the development of the application.

Relevance: 80.00%

Abstract:

SUMMARY: Large sets of data, such as expression profiles from many samples, require analytic tools to reduce their complexity. The Iterative Signature Algorithm (ISA) is a biclustering algorithm designed to decompose a large set of data into so-called 'modules'. In the context of gene expression data, these modules consist of subsets of genes that exhibit a coherent expression profile only over a subset of microarray experiments. Genes and arrays may be attributed to multiple modules, and the level of required coherence can be varied, resulting in different 'resolutions' of the modular mapping. In this short note, we introduce two BioConductor software packages written in GNU R: the isa2 package includes an optimized implementation of the ISA, and the eisa package provides a convenient interface to run the ISA, visualize its output and put the biclusters into biological context. Potential users of these packages are all R and BioConductor users dealing with tabular (e.g. gene expression) data. AVAILABILITY: http://www.unil.ch/cbg/ISA CONTACT: sven.bergmann@unil.ch
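The core ISA iteration can be sketched as an alternating score-and-threshold scheme; this is a simplified NumPy rendering of the idea, not the isa2 implementation, and the thresholds and toy matrix are illustrative:

```python
import numpy as np

def isa_module(E, genes0, t_g=0.5, t_c=0.5, max_iter=50):
    """One simplified ISA run on expression matrix E (genes x conditions).

    Starting from an indicator vector over genes, score each condition by the
    mean expression of the current gene set, keep conditions scoring more than
    t_c standard deviations above the mean, score genes over the selected
    conditions, threshold at t_g, and iterate until the gene set is stable.
    """
    E = np.asarray(E, dtype=float)
    g = np.asarray(genes0, dtype=float)
    c = np.zeros(E.shape[1])
    for _ in range(max_iter):
        c_score = E.T @ g / max(g.sum(), 1.0)           # mean over gene set
        c = (c_score > c_score.mean() + t_c * c_score.std()).astype(float)
        if c.sum() == 0:
            break
        g_score = E @ c / c.sum()                       # mean over condition set
        g_new = (g_score > g_score.mean() + t_g * g_score.std()).astype(float)
        if np.array_equal(g_new, g):                    # fixed point reached
            break
        g = g_new
    return g, c

# Planted 3x3 module in a 6x6 matrix, recovered from a noisy seed.
E = np.zeros((6, 6))
E[:3, :3] = 5.0
seed_genes = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 1.0])
genes, conds = isa_module(E, seed_genes)
```

Running many such iterations from random seeds and collecting the distinct fixed points yields the set of modules; varying `t_g` and `t_c` changes the 'resolution' mentioned above.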

Relevance: 80.00%

Abstract:

Acute cases of schistosomiasis have been found in the coastal area of Pernambuco, Brazil, due to environmental disturbances and the disorderly occupation of urban areas. This study identifies and spatially marks the main foci of the snail host species, Biomphalaria glabrata, on Itamaracá Island. The chaotic occupation of the beach resorts has favoured the emergence of transmission foci, exposing residents and tourists to the risk of infection. A database covering five years of epidemiological investigation of snails infected by Schistosoma mansoni on the island was produced, with information on the geographic position of the foci, the number of snails collected, the number of snails testing positive, and their infection rate. The spatial positions of the foci were recorded with the Global Positioning System (GPS), and the geographical coordinates were imported into AutoCAD. The software packages ArcView and Spring were used for data processing and spatial analysis, and AutoCAD 2000 was used to plot the coordinate pairs obtained from the GPS. Between 1998 and 2002, 5009 snails were collected at Forte Beach, of which 12.2% were positive for S. mansoni. A total of 27 foci and areas of environmental risk were identified and spatially analyzed, allowing the identification of the areas exposed to varying degrees of risk.
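The infection-rate field of such a database is simply the percentage of collected snails that test positive; a minimal sketch with hypothetical focus records (the IDs and counts below are invented for illustration):

```python
# Hypothetical focus records: (focus id, snails collected, snails positive).
foci = [
    ("F01", 320, 44),
    ("F02", 150, 9),
    ("F03", 500, 61),
]

def infection_rate(collected, positive):
    """Infection rate of a focus: percentage of collected snails positive."""
    return 100.0 * positive / collected

# Per-focus rates, rounded to one decimal, and the overall rate.
rates = {fid: round(infection_rate(n, pos), 1) for fid, n, pos in foci}
overall = infection_rate(sum(n for _, n, _ in foci),
                         sum(p for _, _, p in foci))
```

Joining each focus's rate to its GPS coordinates is what allows the GIS packages to map areas by degree of risk.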

Relevance: 80.00%

Abstract:

Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represent the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. 
The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
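The matrix decomposition underlying any biplot can be sketched with the singular-value decomposition; this shows the classical form biplot (rows in principal coordinates, columns in standard coordinates), a simplified stand-in for the standard biplot proposed above:

```python
import numpy as np

def biplot_coordinates(X):
    """Biplot coordinates from the SVD of a column-centred data matrix.

    With X = U S V', rows get U S (principal coordinates) and columns get V
    (standard coordinates), so the scalar product of a row point with a
    column point reproduces the corresponding centred data value.
    """
    Xc = X - X.mean(axis=0)                 # centre each column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rows = U * s                            # row (case) coordinates
    cols = Vt.T                             # column (variable) coordinates
    return rows, cols

X = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 1.0],
              [2.0, 4.0, 2.0],
              [4.0, 3.0, 3.0]])
rows, cols = biplot_coordinates(X)
recon = rows @ cols.T   # scalar products recover the centred matrix
```

The non-uniqueness discussed in the abstract amounts to how the singular values `s` are split between `rows` and `cols`; the standard biplot is one particular, contribution-based choice of that split.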

Relevance: 80.00%

Abstract:

The Microbe browser is a web server providing comparative microbial genomics data. It offers comprehensive, integrated data from GenBank, RefSeq, UniProt, InterPro, Gene Ontology and the Orthologs Matrix Project (OMA) database, displayed along with gene predictions from five software packages. The Microbe browser is updated daily from the source databases and includes all completely sequenced bacterial and archaeal genomes. The data are displayed in an easy-to-use, interactive website based on Ensembl software. The Microbe browser is available at http://microbe.vital-it.ch/. Programmatic access is available through the OMA application programming interface (API) at http://microbe.vital-it.ch/api.

Relevance: 80.00%

Abstract:

Although correspondence analysis is now widely available in statistical software packages and applied in a variety of contexts, notably the social and environmental sciences, there are still some misconceptions about this method, as well as unresolved issues that remain controversial to this day. In this paper we hope to settle these matters, namely (i) the way CA measures variance in a two-way table and how to compare variances between tables of different sizes, (ii) the influence, or rather lack of influence, of outliers in the usual CA maps, (iii) the scaling issue and the biplot interpretation of maps, (iv) whether or not to rotate a solution, and (v) the statistical significance of results.
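Point (i), how CA measures variance, refers to the total inertia of the table: the chi-square statistic divided by the grand total, which is what makes tables with different totals comparable. A minimal sketch:

```python
import numpy as np

def total_inertia(table):
    """Total inertia of a two-way contingency table: the chi-square statistic
    of the table divided by its grand total. This is the variance measure
    used in correspondence analysis; dividing by the total removes the
    dependence on sample size."""
    N = np.asarray(table, dtype=float)
    n = N.sum()
    P = N / n                                   # correspondence matrix
    r = P.sum(axis=1, keepdims=True)            # row masses
    c = P.sum(axis=0, keepdims=True)            # column masses
    expected = r @ c                            # independence model
    return float((((P - expected) ** 2) / expected).sum())

tab = np.array([[10, 20], [30, 40]])
inertia = total_inertia(tab)   # equals chi-square(tab) / 100
```

The inertia also equals the sum of squared singular values of the standardized residual matrix, which ties this measure to the CA map itself.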

Relevance: 80.00%

Abstract:

BACKGROUND: Finding genes that are differentially expressed between conditions is an integral part of understanding the molecular basis of phenotypic variation. In the past decades, DNA microarrays have been used extensively to quantify the abundance of mRNA corresponding to different genes, and more recently high-throughput sequencing of cDNA (RNA-seq) has emerged as a powerful competitor. As the cost of sequencing decreases, it is conceivable that the use of RNA-seq for differential expression analysis will increase rapidly. To exploit the possibilities and address the challenges posed by this relatively new type of data, a number of software packages have been developed especially for differential expression analysis of RNA-seq data. RESULTS: We conducted an extensive comparison of eleven methods for differential expression analysis of RNA-seq data. All methods are freely available within the R framework and take as input a matrix of counts, i.e. the number of reads mapping to each genomic feature of interest in each of a number of samples. We evaluate the methods based on both simulated data and real RNA-seq data. CONCLUSIONS: Very small sample sizes, which are still common in RNA-seq experiments, impose problems for all evaluated methods and any results obtained under such conditions should be interpreted with caution. For larger sample sizes, the methods combining a variance-stabilizing transformation with the 'limma' method for differential expression analysis perform well under many different conditions, as does the nonparametric SAMseq method.
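A variance-stabilizing transformation of a count matrix can be sketched with a log-CPM computation in the spirit of limma's input preparation; the formula here is illustrative, not the package's implementation:

```python
import numpy as np

def log_cpm(counts, prior=0.5):
    """Log2 counts-per-million with a small prior count.

    A simple variance-stabilizing transformation for an RNA-seq count matrix
    (genes x samples): counts are scaled by the per-sample library size and
    log-transformed, with the prior avoiding log of zero and damping the
    high variance of low counts."""
    counts = np.asarray(counts, dtype=float)
    lib_sizes = counts.sum(axis=0, keepdims=True)       # per-sample totals
    cpm = (counts + prior) / (lib_sizes + 2 * prior) * 1e6
    return np.log2(cpm)

# Two genes x two samples; the second sample has a 10x larger library.
counts = np.array([[10, 200],
                   [90, 800]])
logcpm = log_cpm(counts)
```

After such a transformation, the counts behave enough like microarray intensities that linear-model machinery such as limma can be applied.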

Relevance: 80.00%

Abstract:

Introduction. This paper studies the state of research on Catalan literature between 1976 and 2003 by carrying out a bibliometric and social network analysis of PhD theses defended in Spain. It has a dual aim: to present results of interest for the discipline and to demonstrate the methodological efficacy of scientometric tools in the humanities, a field in which they are often neglected owing to the difficulty of gathering data. Method. The analysis was performed on 151 records obtained from the TESEO database of PhD theses. The quantitative analyses included the use of the UCINET and Pajek software packages. Authority control was performed on the records. Analysis. Descriptive statistics were used to describe the sample and the distribution of responses to each question. Sex differences on key questions were analysed using the Chi-squared test. Results. The value of the figures obtained is demonstrated. The information obtained on the topics and periods studied in the theses, and on the actors involved (doctoral students, thesis supervisors and members of defence committees), provides important insights into the mechanisms of humanities disciplines. The main research tendencies in Catalan literature are identified. It is observed that the composition of the thesis defence committees follows Lotka's Law. Conclusions. Bibliometric analysis and social network analysis may be especially useful in the humanities and in other fields which lack scientometric data in comparison with the experimental sciences.

Relevance: 80.00%

Abstract:

BACKGROUND: Today, recognition and classification of sequence motifs and protein folds is a mature field, thanks to the availability of numerous comprehensive and easy-to-use software packages and web-based services. Recognition of structural motifs, by comparison, is less well developed and much less frequently used, possibly owing to a lack of easily accessible and easy-to-use software. RESULTS: In this paper, we describe an extension of DeepView/Swiss-PdbViewer through which structural motifs may be defined and searched for in large protein structure databases. We show that common structural motifs involved in stabilizing protein folds are present in evolutionarily and structurally unrelated proteins, including in deeply buried locations that are not obviously related to protein function. CONCLUSIONS: The possibility of defining custom motifs and searching for their occurrence in other proteins permits the identification of recurrent arrangements of residues that could have structural implications. The possibility of doing so without having to maintain a complex software/hardware installation on site brings this technology to experts and non-experts alike.

Relevance: 80.00%

Abstract:

PURPOSE: Mutations in IDH3B, which encodes an enzyme participating in the Krebs cycle, have recently been found to cause autosomal recessive retinitis pigmentosa (arRP). The MDH1 gene maps within the RP28 arRP linkage interval and encodes cytoplasmic malate dehydrogenase, an enzyme functionally related to IDH3B. As a proof of concept for candidate-gene screening routinely performed by ultra-high-throughput sequencing (UHTS), we analyzed MDH1 in a patient from each of the two families described so far to show linkage between arRP and RP28. METHODS: Using genomic long-range PCR, we amplified all introns and exons of the MDH1 gene (23.4 kb). The PCR products were then sequenced by short-read UHTS with no further processing. Computer-based mapping of the reads and mutation detection were performed with three independent software packages. RESULTS: Despite the intrinsic complexity of human genome sequences, the reads were easily mapped and analyzed, and all the algorithms used provided the same results. The two patients were homozygous for all DNA variants identified in the region, which confirms previous linkage and homozygosity mapping results, but they had different haplotypes, indicating genetic or allelic heterogeneity. None of the DNA changes detected could be associated with the disease. CONCLUSIONS: The MDH1 gene is not the cause of RP28-linked arRP. Our experimental strategy shows that long-range genomic PCR followed by UHTS provides an excellent system for a thorough screening of candidate genes for hereditary retinal degeneration.