955 results for Open source information retrieval
Abstract:
Despite the advancement of phylogenetic methods to estimate speciation and extinction rates, their power can be limited under variable rates, in particular for clades with high extinction rates and a small number of extant species. Fossil data provide a powerful alternative source of information for investigating diversification processes. Here, we present PyRate, a computer program to estimate speciation and extinction rates and their temporal dynamics from fossil occurrence data. The rates are inferred in a Bayesian framework and are comparable to those estimated from phylogenetic trees. We describe how PyRate can be used to explore different models of diversification. In addition to the diversification rates, it provides estimates of the parameters of the preservation process (fossilization and sampling) and of the times of speciation and extinction of each species in the data set. Moreover, we develop a new birth-death model to correlate the variation of speciation/extinction rates with changes in a continuous trait. Finally, we demonstrate the use of Bayes factors for model selection and show how the posterior estimates of a PyRate analysis can be used to generate calibration densities for Bayesian molecular clock analysis. PyRate is an open-source command-line Python program available at http://sourceforge.net/projects/pyrate/.
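PyRate's full model is Bayesian and includes a preservation process, but the constant-rate birth-death likelihood it builds on can be sketched in a few lines of plain Python. The lifespans, rates and closed-form estimates below are invented for illustration and are not PyRate's actual implementation:

```python
import math

def bd_log_likelihood(lam, mu, ts, te):
    """Log-likelihood of constant speciation (lam) and extinction (mu)
    rates given each species' time of speciation (ts) and extinction (te),
    as ages before the present (extant species have te == 0)."""
    s = len(ts) - 1                         # speciation events (origin is conditioned on)
    e = sum(1 for t in te if t > 0)         # extinction events
    S = sum(a - b for a, b in zip(ts, te))  # summed lifetimes (total branch length)
    return s * math.log(lam) - lam * S + e * math.log(mu) - mu * S

# Invented toy data: ages (Ma) of speciation and extinction for 5 lineages.
ts = [20.0, 15.0, 12.0, 8.0, 5.0]
te = [10.0, 0.0, 3.0, 0.0, 1.0]

# Under this simple model the maximum-likelihood rates have closed forms:
S = sum(a - b for a, b in zip(ts, te))
lam_mle = (len(ts) - 1) / S              # events per lineage-million-years
mu_mle = sum(1 for t in te if t > 0) / S
```

PyRate treats the per-species speciation and extinction times as latent variables sampled by MCMC rather than as observed quantities, which is where the preservation model enters.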
Abstract:
In recent years, the popularity of wireless (WiFi) networks has grown at a relentless pace: from small devices installed in homes with this technology as a complement to the Internet-access routers installed by various companies, to businesses making small deployments to interconnect their offices. Alongside these scenarios, a worldwide social phenomenon has grown up around this technology, in the form of what we know as community networks / free networks / social networks. These networks have been made possible by several factors that have put both the equipment and the necessary know-how within reach of groups of people. Within this context, in the Bages region, specifically in Manresa, one of these networks began to be developed. The network's decision to use exclusively open-source hardware and software, together with certain technical aspects of the network, made it incompatible with some of the existing network-management applications developed by communities such as guifi.net in Osona. For this reason, to guarantee the growth, survival and long-term success of this network, it is essential to have a management tool suited to the characteristics of GuifiBages. The main objective of this work is to provide the GuifiBages network with the tools needed to manage all the information about its network structure, both to ease access for new users without much technical knowledge and to facilitate new deployments, repairs and modifications of the network in an automated way. As a conclusion of this work, we can state that the advantages provided by technologies such as Plone greatly ease the creation of web-based content-management applications.
At the same time, the use of new programming techniques such as AJAX, and of resources such as those offered by Google, makes it possible to develop web applications that have nothing to envy in traditional software. We would also like to highlight the exclusive use of free software, both in the packages needed for development and in the operating system and programs of the computers on which the work was carried out, demonstrating that quality systems can be developed without depending on proprietary software.
Abstract:
Extensible Markup Language (XML) is a generic computing language that provides an outstanding case study of the commodification of service standards. The development of this language in the late 1990s marked a shift in computer science, as its extensibility allowed any kind of data to be stored and shared. Many office software suites rely on it. The chapter highlights how the largest multinational firms pay special attention to gaining a recognised international standard for such a major technological innovation. It argues that standardisation processes affect market structures and can lead to market capture. By examining how a strategic use of standardisation arenas can generate profits, it shows that Microsoft succeeded in making its own technical solution a recognised ISO standard in 2008, even though the same arena had already adopted, two years earlier, the open source standard set by IBM and Sun Microsystems. Yet XML standardisation also helped to establish a distinct model of information technology services at the expense of Microsoft's monopoly on proprietary software.
Abstract:
Report on the 4th International LIS-EPI Meeting, held in Valencia in November 2009 under the theme "Information in 2015". The topics addressed were the future of the information and library sector, rich internet applications (RIAs), free software in libraries, open access, and mobile devices.
Abstract:
A Spanish-language version is available at http://hdl.handle.net/2445/8959
Abstract:
A Catalan-language version is available at http://hdl.handle.net/2445/8958
Abstract:
In this paper we propose a novel unsupervised approach to learning domain-specific ontologies from large open-domain text collections. The method is based on the joint exploitation of Semantic Domains and Super Sense Tagging for Information Retrieval tasks. Our approach retrieves domain-specific terms and concepts while associating them with a set of high-level ontological types, named supersenses, providing flat ontologies characterized by very high accuracy and pertinence to the domain.
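The term-retrieval step, pulling domain-specific vocabulary out of an open-domain collection, can be sketched with a simple frequency-contrast score. This is only an illustration of the general idea, not the paper's Semantic Domains / Super Sense Tagging method, and the corpora below are invented toy examples:

```python
import math
from collections import Counter

def domain_terms(domain_docs, background_docs, top_k=5):
    """Rank words by a smoothed log-odds of occurring in the domain
    corpus versus an open-domain background corpus."""
    dom = Counter(w for d in domain_docs for w in d.lower().split())
    bg = Counter(w for d in background_docs for w in d.lower().split())
    n_dom, n_bg = sum(dom.values()), sum(bg.values())
    vocab = set(dom) | set(bg)
    def score(w):
        p = (dom[w] + 1) / (n_dom + len(vocab))   # add-one smoothing
        q = (bg[w] + 1) / (n_bg + len(vocab))
        return math.log(p / q)
    return sorted(dom, key=score, reverse=True)[:top_k]

# Invented toy corpora: two "medical domain" snippets vs. two open-domain ones.
medical = ["the patient showed acute renal failure",
           "renal biopsy confirmed the diagnosis"]
general = ["the weather was fine and the game went on",
           "the committee approved the annual budget"]
terms = domain_terms(medical, general)
```

Words frequent in the domain but rare in the background ("renal") score high, while function words common to both ("the") drop out; the paper's contribution is then to attach supersense labels to such terms.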
Abstract:
Introduction: The field of connectomic research is growing rapidly as a result of methodological advances in structural neuroimaging at many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through Connectome Mapping Pipelines (Hagmann et al, 2008), yielding so-called connectomes (Hagmann 2005, Sporns et al, 2005). These exhibit both spatial and topological information that constrain functional imaging studies and are relevant to their interpretation. The need has grown for a special-purpose software tool that supports both clinical researchers and neuroscientists in investigating such connectome data. Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. Using Python as the programming language allows it to be cross-platform and gives it access to a multitude of scientific libraries. Results: Thanks to a flexible plugin architecture, functionality can easily be enhanced for specific purposes. The following features are already implemented:
* Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch; other layouts are possible.
* Picking functionality to select nodes and edges, retrieve more node information (ConnectomeWiki) and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Storage of arbitrary metadata for networks, allowing e.g. group-based analysis or meta-analysis.
* A Python shell for scripting: application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines composed of filters and modules using Mayavi (Ramachandran et al, 2008).
* An interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering.
The Connectome Mapping Pipeline (Hagmann et al, 2008) processed 20 healthy subjects into an average connectome dataset. The figures show the ConnectomeViewer user interface using this dataset; connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org). Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates relevant datatypes and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
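The kind of complex-network measure the viewer delegates to NetworkX can be illustrated without the library; the sketch below computes a local clustering coefficient over a toy connectome whose ROI names and edges are invented for the example:

```python
def clustering_coefficient(adj, node):
    """Local clustering coefficient: the fraction of a node's neighbour
    pairs that are themselves connected by an edge."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, u in enumerate(nbrs) for v in nbrs[i + 1:]
                if v in adj[u])
    return 2.0 * links / (k * (k - 1))

# Invented toy connectome: ROIs as nodes, fibre connections as undirected edges.
adj = {
    "lh-precentral":      ["lh-postcentral", "lh-superiorfrontal"],
    "lh-postcentral":     ["lh-precentral", "lh-superiorfrontal", "lh-precuneus"],
    "lh-superiorfrontal": ["lh-precentral", "lh-postcentral"],
    "lh-precuneus":       ["lh-postcentral"],
}
cc = {n: clustering_coefficient(adj, n) for n in adj}
```

In the actual tool the same quantity comes from NetworkX operating on the GraphML network loaded from a Connectome File, with edge weights derived from the fibre tracking.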
Abstract:
This research work deals with the problem of modeling and designing a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational and research tool. On one hand, the interest in using the open mobile platform PRIM lies in integrating several subjects highly related to automatic control theory in an educational context, embracing communications, signal processing, sensor fusion and hardware design, amongst others. On the other hand, the idea is to implement useful navigation strategies so that the robot can serve as a mobile multimedia information point. It is in this context, when navigation strategies are oriented to goal achievement, that a local model predictive control is attained. Hence, such studies are presented as a very interesting control strategy for developing the future capabilities of the system. The research developed in this context includes visual information as a meaningful source that allows detecting obstacle position coordinates as well as planning the obstacle-free trajectory that the robot should follow.
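A local model predictive speed controller of the kind described can be sketched as a receding-horizon search over a few candidate accelerations. This minimal formulation (first-order speed model, enumerated inputs, quadratic tracking cost, invented reference value) is an illustration of the principle, not PRIM's actual controller:

```python
def mpc_speed_step(v, v_ref, horizon=5, dt=0.1,
                   candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """One receding-horizon step: pick the constant acceleration whose
    predicted speed trajectory minimises the squared tracking error."""
    def cost(a):
        c, vp = 0.0, v
        for _ in range(horizon):
            vp += a * dt             # first-order speed prediction model
            c += (vp - v_ref) ** 2   # penalise deviation from the reference
        return c
    return min(candidates, key=cost)

# Closed loop: drive the robot's speed from rest towards 0.8 m/s,
# re-planning at every step (only the first input is applied).
v, dt = 0.0, 0.1
for _ in range(50):
    a = mpc_speed_step(v, 0.8)
    v += a * dt
```

A real implementation would replace the enumeration with a proper optimisation over the input sequence and use an identified dynamic model of the robot, but the re-planning structure is the same.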
Abstract:
Gene set enrichment (GSE) analysis is a popular framework for condensing information from gene expression profiles into a pathway or signature summary. The strengths of this approach over single-gene analysis include noise and dimension reduction, as well as greater biological interpretability. As molecular profiling experiments move beyond simple case-control studies, robust and flexible GSE methodologies are needed that can model pathway activity within highly heterogeneous data sets. To address this challenge, we introduce Gene Set Variation Analysis (GSVA), a GSE method that estimates variation of pathway activity over a sample population in an unsupervised manner. We demonstrate the robustness of GSVA in a comparison with current state-of-the-art sample-wise enrichment methods. Further, we provide examples of its utility in differential pathway activity and survival analysis. Lastly, we show how GSVA works analogously with data from both microarray and RNA-seq experiments. GSVA provides increased power to detect subtle pathway activity changes over a sample population in comparison to corresponding methods. While GSE methods are generally regarded as end points of a bioinformatic analysis, GSVA constitutes a starting point for building pathway-centric models of biology. Moreover, GSVA addresses the current need for GSE methods applicable to RNA-seq data. GSVA is an open source software package for R which forms part of the Bioconductor project and can be downloaded at http://www.bioconductor.org.
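GSVA itself uses a kernel-based, rank-statistic score, but the general idea of a sample-wise gene-set summary can be illustrated with the simpler combined z-score scheme that sample-wise methods are often compared against. Gene names and expression values below are invented, and this is not GSVA's algorithm:

```python
import math

def zscore_set_scores(expr, gene_set):
    """Per-sample set score: z-score each gene across samples, then
    combine the set's z-scores per sample (combined-z scheme)."""
    samples = len(next(iter(expr.values())))
    z = {}
    for g, vals in expr.items():
        m = sum(vals) / samples
        sd = math.sqrt(sum((v - m) ** 2 for v in vals) / (samples - 1))
        z[g] = [(v - m) / sd for v in vals]
    k = len(gene_set)
    return [sum(z[g][j] for g in gene_set) / math.sqrt(k)
            for j in range(samples)]

# Invented toy expression matrix: 3 genes x 4 samples.
expr = {
    "TP53":   [5.0, 5.2, 8.1, 8.3],
    "CDKN1A": [4.1, 4.0, 7.2, 7.5],
    "MDM2":   [6.0, 6.1, 6.2, 5.9],
}
scores = zscore_set_scores(expr, ["TP53", "CDKN1A"])
```

Each sample gets one number per gene set, so downstream analyses (differential activity, survival models) operate on a pathway-by-sample matrix instead of a gene-by-sample one, which is the work flow the abstract describes.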
Abstract:
The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications because they cost significantly less than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike in the CISC world, the RISC processor architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which give customers more choice thanks to hardware-independent, real-time-capable software applications.
An architecture disruption emerged, and the smartphone and tablet markets formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact through the disruptions based on SoC and related software platforms in the ICT industries. Industrial automation incumbents continue to supply systems based on vertically integrated architectures consisting of proprietary software and proprietary, mainly microprocessor-based, hardware.
They enjoy admirable profitability on a very narrow customer base thanks to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to competition among the incumbents, firstly through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and secondly through research on process re-engineering in the case of complex-system global software support. Thirdly, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining its own proprietary solutions.
The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and the related control point and business model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
This paper describes the results of research into diverse areas of the information technology world applied to cartography. Its final result is a complete, custom geographic web information system, designed and implemented to manage archaeological information about the city of Tarragona. The goal of the platform is to display geographical and alphanumerical data in a web-focused application and to provide concrete queries for exploring them. Various tools, among others, have been used: the PostgreSQL database management system together with its geographic extension PostGIS, the GeoServer geographic server, the GeoWebCache tile cache, the map viewer plus map and satellite imagery from Google Maps, location imagery from Google Street View, and other open source libraries. The technology was chosen after an investigation of the project's requirements, and that choice shaped a great part of the development. Apart from the Google Maps tools, which are not open source but are free, the whole design has been implemented with open source and free tools.
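The concrete queries such a platform exposes typically reduce to spatial filters that PostGIS evaluates server-side (for instance a bounding-box intersection against the current map view). The stand-alone sketch below mimics that filter in plain Python; the site names are real Tarragona monuments but their coordinates are approximate, invented values for illustration:

```python
def in_bbox(lon, lat, bbox):
    """True if point (lon, lat) falls inside bbox = (min_lon, min_lat,
    max_lon, max_lat), mirroring a server-side bounding-box query."""
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

# Hypothetical archaeological records (name, lon, lat) around Tarragona.
sites = [
    ("Amfiteatre roma", 1.2590, 41.1147),
    ("Pretori roma",    1.2566, 41.1153),
    ("Pont del Diable", 1.2434, 41.1463),
]
viewport = (1.25, 41.10, 1.27, 41.12)   # current map view extent
visible = [name for name, lon, lat in sites if in_bbox(lon, lat, viewport)]
```

In the deployed stack the viewer sends the viewport to GeoServer, which issues the equivalent indexed geometry query to PostGIS rather than scanning records in application code.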
Abstract:
Nowadays, when planning a tourist route, it is very difficult for a user to find out which are the best places to visit. Given the great quantity of information available on the web and the limited time available for a trip, the user has to make a selection according to his or her preferences. In the Itiner@ project, we aim to combine Semantic Web technology with Geographic Information Systems in order to offer personalized tourist routes around a region based on user preferences and the time available. Using ontologies, it is possible to link, structure and share data and to obtain results better suited to the user's preferences and actual situation, faster and more precisely than without ontologies. To achieve these objectives we propose a web page combining a GIS server and a tourist ontology. As a step further, we also study how to extend this technology to mobile devices, given the rising interest in and technological progress of these devices and of location-based services, which allow users to have all the route information at hand during a tourist trip. We designed a small application in order to apply the combination of GIS and the Semantic Web on a mobile device.
Abstract:
L-2-Hydroxyglutaric aciduria (L2HGA) is a rare neurometabolic disorder with an autosomal recessive mode of inheritance. Affected individuals have only neurological manifestations, including psychomotor retardation, cerebellar ataxia and, more variably, macrocephaly or epilepsy. The diagnosis of L2HGA can be made based on magnetic resonance imaging (MRI), biochemical analysis, and mutational analysis of L2HGDH. About 200 patients with elevated concentrations of 2-hydroxyglutarate (2HG) in the urine were referred for chiral determination of 2HG and L2HGDH mutational analysis. All patients with increased L2HG (n=106; 83 families) were included. Clinical information on 61 patients was obtained via questionnaires. In 82 families the mutations were detected by direct sequence analysis and/or multiplex ligation dependent probe amplification (MLPA), including one case in which MLPA was essential to detect the second allele. In another case, RT-PCR followed by deep intronic sequencing was needed to detect the mutation. Thirty-five novel mutations, 35 previously reported mutations and 14 nondisease-related variants are reviewed and included in a novel Leiden Open source Variation Database (LOVD) for L2HGDH variants (http://www.LOVD.nl/L2HGDH). Every user can access the database and submit variants/patients. Furthermore, we report on the phenotype, including neurological manifestations and urinary levels of L2HG, and we evaluate the phenotype-genotype relationship.