941 results for open source seismic data processing packages
Abstract:
An infographic on open source software licensing. It can serve both as a simple introduction to the topic and as a reference for future use.
Abstract:
A seminar on the advantages of using open source licenses as a complementary strategy to the academic publishing process.
Abstract:
An infographic providing a timeline of important events in the history of open source software since the 1950s. It also includes statistics on OSS licenses, usage in business, and reasons for participating in an OSS community.
Abstract:
A presentation showing a proprietary GIS application from 15 years ago, together with its new face thanks to free software. It describes a novel architecture that uses the MapFish Server API to launch geoprocesses, on top of a software stack of PostGIS, GeoWebCache and OpenLayers.
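The abstract names a concrete open source stack: PostGIS for storage, GeoWebCache for tiles, OpenLayers for the client, and the MapFish Server API for geoprocesses. As a minimal sketch of the storage end, the Python snippet below queries a PostGIS table for features inside a bounding box. The connection string, table name "parcels" and geometry column "geom" are illustrative assumptions, not details taken from the presentation.

```python
# Hypothetical sketch of the PostGIS side of such a stack: fetch the
# geometries intersecting a map view as GeoJSON. Table/column names and
# the DSN are invented for illustration.
import psycopg2

def fetch_features_as_geojson(dsn: str, bbox: tuple) -> list:
    """Return GeoJSON geometry strings for features intersecting bbox
    (xmin, ymin, xmax, ymax) in EPSG:4326."""
    query = """
        SELECT ST_AsGeoJSON(geom)
        FROM parcels
        WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 4326);
    """
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(query, bbox)
            return [row[0] for row in cur.fetchall()]

if __name__ == "__main__":
    features = fetch_features_as_geojson(
        "dbname=gis user=gis", (-9.5, 38.6, -9.0, 38.9))
    print(f"{len(features)} features in view")
```

In an architecture like the one described, a client such as OpenLayers would normally not hit the database directly; results like these would be served through the geoprocess layer and cached as tiles by GeoWebCache.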
Abstract:
Geography research centres are generally producers of a large volume of Geographic Information (GI), generated both by funded projects and by individual research initiatives. The Centro de Estudos de Geografia e Planeamento Regional (e-GEO) has been involved in several projects at local, regional, national and international scales. Recently, two issues were the subject of debate. The first was the fact that the spatial information obtained from these research projects has not had the visibility that was expected; most of the time, the GI from these projects was not in a suitable format for researchers, or even the general public or interest groups, to search easily. The second was how to make these results accessible to everyone, everywhere, easily and at minimal cost to the Centre, bearing in mind the current Portuguese economic context and the interests of e-GEO. Both issues are resolved with a single answer: the deployment of a WebGIS on an Open Source platform. This paper illustrates the production of a tool for disseminating geographic information on the World Wide Web using only free software and freeware. The tool allows all of the Centre's researchers to publish their GI, which becomes fully accessible to any end user. Potentially, making this kind of information fully accessible should have a great impact, shortening the distance between the work done by academics and the end user. We believe it is an excellent way for the public to access and interpret spatial information. In conclusion, this platform should serve to close the gap between producers and users of geographic information, allowing interaction among all parties as well as the uploading of new data under a set of rules intended for quality control.
Abstract:
The smooth running of a company depends on the coordination of its various elements, the fluidity of its daily operations, the performance of its resources, both human and material, and the interaction of the various systems that compose it. Enterprise technologies have developed continuously since their appearance: from basic processing, to business process management (BPM), to modern enterprise resource planning (ERP) platforms such as the proprietary SAP or Oracle systems, to more general concepts such as SOA and the cloud, based on open standards. The new technologies offer faster and more efficient channels for moving information, ways to automate and monitor business processes, and various kinds of infrastructure that can be used to make a company more productive and flexible. Existing commercial solutions can achieve these goals, but their acquisition costs may prove too high for some companies or organisations, which then risk failing to adapt to changes in the business. At the same time, free software is gaining popularity, but prejudices remain about the quality and maturity of this kind of software. The goal of this work is to present SOA and the main commercial and open source SOA products, and to compare the two categories in order to assess the maturity of open source SOA relative to proprietary SOA solutions.
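Since the abstract compares SOA platforms rather than showing one, here is a minimal, hypothetical sketch of the unit SOA is built from: a service exposing a well-defined operation over a standard protocol. The "/orders/status" endpoint and the port are invented for illustration; real SOA suites, commercial or open source, layer contracts, discovery and an enterprise service bus on top of this idea.

```python
# Minimal sketch of a SOA-style service endpoint, using nothing beyond
# the Python standard library. Endpoint path and payload are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/orders/status":
            body = json.dumps({"service": "orders", "status": "up"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Any HTTP client can now consume the service at localhost:8080.
    HTTPServer(("localhost", 8080), OrderServiceHandler).serve_forever()
```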
Abstract:
This article analyses the results of an empirical study of the 200 most popular UK-based websites in various sectors of e-commerce services. The study provides empirical evidence of unlawful processing of personal data. It comprises a survey of the methods used to seek and obtain consent to process personal data for direct marketing and advertising, and a test of the frequency of unsolicited commercial email (UCE) received by customers as a consequence of registering and submitting personal information to a website. Part One of the article presents a conceptual and normative account of data protection, with a discussion of the ethical values on which EU data protection law is grounded and an outline of the elements that must be in place to seek and obtain valid consent to process personal data. Part Two discusses the outcomes of the empirical study, which reveals a significant gap between EU legal theory and practice in data protection. Although a wide majority of the websites in the sample (69%) have a system in place to ask for separate consent to engage in marketing activities, only 16.2% of them obtain consent that is valid under the standards set by EU law. The UCE test shows that only one out of three websites (30.5%) respects the will of the data subject not to receive commercial communications. It also shows that, when submitting personal data in online transactions, there is a high probability (50%) of encountering a website that will ignore the refusal of consent and send UCE. The article concludes that there is a severe lack of compliance by UK online service providers with essential requirements of data protection law. In this respect, it suggests that the standard of implementation, information and supervision by the UK authorities is inadequate, especially in light of the clarifications provided at EU level.
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intraarray variability is small (only around 2% of the mean log signal), while interarray variability from replicate array measurements has a standard deviation (SD) of around 0.5 log(2) units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identifies an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and interarray variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
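The pipeline itself is implemented as a new R function, which is not reproduced here. As a language-neutral illustration of one common normalisation step of the kind such pipelines apply, the Python/NumPy sketch below performs a log2 transform followed by quantile normalisation across replicate arrays; the authors' actual steps and parameters are not known from the abstract, so this is an assumption, not their code.

```python
# Illustrative sketch (not the authors' R function): log2 transform plus
# quantile normalisation, one standard way to make signal distributions
# comparable across arrays before estimating intra-/interarray SD.
import numpy as np

def quantile_normalise(signal: np.ndarray) -> np.ndarray:
    """signal: genes x arrays matrix of raw intensities."""
    log_sig = np.log2(signal + 1.0)        # stabilise low intensities
    order = np.argsort(log_sig, axis=0)    # rank genes within each array
    ranked = np.sort(log_sig, axis=0)
    mean_quantiles = ranked.mean(axis=1)   # reference distribution
    out = np.empty_like(log_sig)
    for j in range(log_sig.shape[1]):      # map ranks back per array
        out[order[:, j], j] = mean_quantiles
    return out

rng = np.random.default_rng(0)
arrays = rng.lognormal(mean=6.0, sigma=1.5, size=(5000, 4))
norm = quantile_normalise(arrays)
# mean per-gene SD across replicate arrays, the quantity analogous to
# the ~0.5 log2-unit interarray estimate quoted above
print(norm.std(axis=1).mean())
```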
Abstract:
A student from the Data Processing program at the New York Trade School is shown working. Black-and-white photograph with some edge damage due to writing in black along the top.
Abstract:
Felice Gigante, a graduate of the New York Trade School Electronics program, works on a machine in his job as a Data Processing Customer Engineer for the International Business Machines Corp. The original caption reads, "Felice Gigante - Electronices [sic], International Business Machines Corp." Black-and-white photograph with the caption glued to the reverse.
Abstract:
Includes bibliography
Abstract:
The CMS Collaboration conducted a month-long data-taking exercise, the Cosmic Run At Four Tesla, during October-November 2008, with the goal of commissioning the experiment for extended operation. With all installed detector systems participating, CMS recorded 270 million cosmic-ray events with the solenoid at a magnetic field strength of 3.8 T. This paper describes the data flow from the detector through the various online and offline computing systems, as well as the workflows used for recording the data, for aligning and calibrating the detector, and for analysing the data. © 2010 IOP Publishing Ltd and SISSA.
Abstract:
The applications of the Finite Element Method (FEM) to three-dimensional domains are already well documented in the framework of Computational Electromagnetics. However, despite the power and reliability of this technique for solving partial differential equations, only a few open source codes are available that are dedicated to solid modeling and automatic constrained tetrahedralization, which are the most time-consuming steps in a typical three-dimensional FEM simulation. Moreover, these open source codes are usually developed separately by distinct software teams, and even under conflicting specifications. In this paper, we describe an experiment in open source code integration for solid modeling and automatic mesh generation. The integration strategy and techniques are discussed, and examples and performance results are given, especially for complicated and irregular volumes which are not simply connected. © 2011 IEEE.
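The abstract does not name the codes that were integrated, so as a minimal sketch of the tetrahedralization step such a pipeline automates, the Python snippet below meshes a 3D point cloud with SciPy. Note the hedge: scipy.spatial.Delaunay is unconstrained, whereas the paper concerns constrained tetrahedralization of solid models, so this only illustrates the basic operation plus a simple element-quality check of the sort FEM preprocessing needs.

```python
# Hedged sketch: unconstrained Delaunay tetrahedralisation of a 3D point
# cloud. Stands in for, but is not, the constrained meshing the paper
# integrates.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)
points = rng.random((200, 3))        # sample points inside a unit cube

tet = Delaunay(points)               # Delaunay in 3D yields tetrahedra
print(tet.simplices.shape)           # (n_tetrahedra, 4) vertex indices

# Rough quality check: flag nearly degenerate (flat) tetrahedra, since
# FEM accuracy degrades on badly shaped elements.
a, b, c, d = (points[tet.simplices[:, i]] for i in range(4))
volumes = np.abs(np.einsum("ij,ij->i",
                           np.cross(b - a, c - a), d - a)) / 6.0
print((volumes > 1e-6).sum(), "well-shaped tetrahedra")
```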