934 results for World Wide Web (information retrieval systems)


Relevance: 100.00%

Abstract:

Its very name is meant to be meaningful: "World Wide Web", a spider's web spanning the globe. The term tries to depict that growing network of computers we call the Internet. Comparing the WWW to a spider's web, however, seems overly optimistic: after all, a spider's web has an order, a pattern that makes it grow in a more or less organized way. A spider's web is the product of the work of a creature pursuing a goal. There is order behind the apparent tangle of a spider's web. In other words, the structural complexity of a spider's web is not as high as it may appear at first sight.

The REpresentational State Transfer (REST) architectural style describes the design principles that made the World Wide Web scalable, and the same principles can be applied in an enterprise context to achieve loosely coupled and scalable application integration. In recent years, RESTful services have been gaining traction in industry and are commonly used as a simpler alternative to SOAP Web Services. However, one of the main drawbacks of RESTful services is the lack of standard mechanisms to support advanced quality-of-service requirements that are common in enterprises. Transaction processing is one of the essential features of enterprise information systems, and several transaction models have been proposed in recent years to fill this gap for RESTful services. The goal of this paper is to analyze state-of-the-art RESTful transaction models and identify the current challenges.
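One recurring family of RESTful transaction models is Try-Confirm/Cancel (TCC), in which each participant exposes a reservation resource that is later confirmed or cancelled. The sketch below is illustrative only: the class names, the in-memory stand-ins for HTTP calls, and the coordinator logic are assumptions, not code from the paper.

```python
# Minimal sketch of the TCC pattern. Each Participant models a RESTful
# service; the comments show the HTTP interaction the method stands in for.

class Participant:
    def __init__(self, name):
        self.name = name
        self.state = "initial"

    def try_(self):            # POST /reservations  -> 201 Created
        self.state = "reserved"
        return True

    def confirm(self):         # PUT /reservations/{id}  (state=confirmed)
        if self.state != "reserved":
            raise RuntimeError("cannot confirm: not reserved")
        self.state = "confirmed"

    def cancel(self):          # DELETE /reservations/{id}
        self.state = "cancelled"

def tcc_transaction(participants):
    """Reserve on every participant; confirm all, or cancel what was reserved."""
    reserved = []
    for p in participants:
        if p.try_():
            reserved.append(p)
        else:
            for r in reserved:   # compensate: undo earlier reservations
                r.cancel()
            return False
    for p in reserved:
        p.confirm()
    return True
```

The pattern trades the two-phase-commit lock for compensation: a failed reservation triggers cancellation of the ones already made.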

R2RML is used to specify transformations of data available in relational databases into materialised or virtual RDF datasets. SPARQL queries evaluated against virtual datasets are translated into SQL queries according to the R2RML mappings, so that they can be evaluated over the underlying relational database engines. In this paper we describe an extension of a well-known algorithm for SPARQL-to-SQL translation, originally formalised for RDBMS-backed triple stores, that takes R2RML mappings into account. We present the results of our implementation using queries from a synthetic benchmark and from three real use cases, and show that SPARQL queries can in general be evaluated as fast as the SQL queries that would have been generated by SQL experts if no R2RML mappings had been used.
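The general idea of mapping-driven translation can be illustrated (this is not the paper's algorithm) with a drastically simplified R2RML-style mapping that ties a class and its predicates to a table and columns; the mapping dictionary and all names below are invented.

```python
# Hypothetical, simplified stand-in for an R2RML triples map: a class maps
# to a table, its subject to a key column, and each predicate to a column.
MAPPING = {
    "ex:Person": {"table": "person", "subject_col": "id",
                  "predicates": {"ex:name": "name", "ex:age": "age"}},
}

def translate(cls, predicate):
    """Translate the triple pattern  ?s <predicate> ?o  (with ?s of type cls)
    into a SQL projection over the mapped table."""
    m = MAPPING[cls]
    col = m["predicates"][predicate]
    return f'SELECT "{m["subject_col"]}", "{col}" FROM "{m["table"]}"'
```

A real translator must additionally handle joins between triple patterns, IRI template terms, and datatype conversions, which is where the paper's extension does its work.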

This Final Degree Project explains the procedure followed in studying, designing and developing Ackuaria, a portal for monitoring and analyzing statistics of real-time communications. The results obtained and the graphical interface developed for a better user experience are then presented. Ackuaria relies on Licode, an open-source project developed at the Universidad Politécnica de Madrid, specifically in the Grupo de Internet de Nueva Generación of the Escuela Técnica Superior de Ingenieros de Telecomunicación. Licode makes it possible to run a streaming and videoconferencing service on the user's own infrastructure. It is designed to be fully scalable and is aimed mainly at the Cloud, although it can equally run on physical infrastructure. Licode is in turn based on WebRTC, a technology standardized by the W3C (World Wide Web Consortium) and the IETF (Internet Engineering Task Force) for transmitting and receiving audio, video and data streams through the browser. No additional installation is required, so establishing a Peer-to-Peer videoconferencing session is straightforward. Licode uses an MCU (Multipoint Control Unit) to avoid making every connection between users Peer-to-Peer: the MCU acts as one more WebRTC client through which all streams pass, multiplexing them and redirecting them where needed. This saves bandwidth and device resources very significantly. There is a growing need among users of Licode, and of any videoconferencing service in general, to manage their infrastructure on the basis of reliable data and statistics. Their goals vary widely, from studying the behavior of WebRTC in different scenarios to monitoring usage in order to later account for the time each user has published.
In every case there was a common need for a tool that shows at any moment what is happening in the Licode service and that stores all the information for later analysis. To develop Ackuaria, a study of real-time communications was carried out to determine which parameters were essential and useful to monitor. Based on this study, the Licode architecture was updated so that it collects all the necessary data and sends it in a form Ackuaria can ingest. The monitoring portal then processes this information and displays it in a clear, organized way, besides providing a REST API to the user.
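The kind of processing such a portal performs can be sketched as a small aggregation over per-stream events reported by the MCU. The event fields and the summary shape below are hypothetical illustrations, not Ackuaria's actual data model or API.

```python
# Toy aggregation of per-stream events into per-room statistics, the sort
# of summary a monitoring portal could serve through its REST API.
from collections import defaultdict

def summarize(events):
    """Group stream events by room; count streams and total published seconds."""
    per_room = defaultdict(lambda: {"streams": 0, "published_s": 0})
    for e in events:
        room = per_room[e["room"]]
        room["streams"] += 1
        room["published_s"] += e["duration_s"]
    return dict(per_room)
```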

Linked Data is the key paradigm of the Semantic Web, a new generation of the World Wide Web that promises to bring meaning (semantics) to data. A large number of both public and private organizations have published their own data following the Linked Data principles, or have published data from other organizations in this way. In this context, since the generation and publication of Linked Data are intensive engineering processes that require great care in order to achieve high quality, and since experience has shown that existing general guidelines are not always sufficient for every domain, this paper presents a set of guidelines for generating and publishing Linked Data in the context of energy consumption in buildings (one aspect of Building Information Models). These guidelines offer a comprehensive description of the tasks to perform, including a list of steps, tools that help in achieving the task, various alternatives for performing the task, and best practices and recommendations. Furthermore, this paper presents a complete example of the generation and publication of Linked Data about energy consumption in buildings, following the presented guidelines, in which the energy consumption data of council sites (e.g., buildings and lights) belonging to the Leeds City Council jurisdiction have been generated and published as Linked Data.
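The generation step described by such guidelines can be illustrated by emitting N-Triples from a tabular energy-consumption record. The namespace, property names and record fields below are invented for the sketch and are not the vocabulary used by the paper.

```python
# Minimal sketch: one tabular record -> RDF triples in N-Triples syntax.
# BASE and the property IRIs are hypothetical placeholders.
BASE = "http://example.org/building/"

def to_ntriples(record):
    """Serialize a site's monthly energy use as two N-Triples lines."""
    s = f"<{BASE}{record['site_id']}>"
    xsd_decimal = "http://www.w3.org/2001/XMLSchema#decimal"
    return [
        f'{s} <{BASE}energyUse> "{record["kwh"]}"^^<{xsd_decimal}> .',
        f'{s} <{BASE}period> "{record["month"]}" .',
    ]
```

Real pipelines typically layer a vetted vocabulary, IRI design and interlinking on top of this raw serialization step.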

A computational system for the prediction of polymorphic loci directly and efficiently from human genomic sequence was developed and verified. A suite of programs, collectively called pompous (polymorphic marker prediction of ubiquitous simple sequences), detects tandem repeats ranging from dinucleotides up to 250-mers, scores them according to the predicted level of polymorphism, and designs appropriate flanking primers for PCR amplification. This approach was validated on an approximately 750-kilobase region of human chromosome 3p21.3, involved in lung and breast carcinoma homozygous deletions. Target DNA from 36 paired B lymphoblastoid and lung cancer lines was amplified and allelotyped for 33 loci predicted by pompous to be variable in repeat size. We found that among those 36 predominantly Caucasian individuals, 22 of the 33 (67%) predicted loci were polymorphic, with an average heterozygosity of 0.42. Allele loss in this region was found in 27/36 (75%) of the tumor lines using these markers. pompous provides the genetic researcher with an additional tool for the rapid and efficient identification of polymorphic markers, and through a World Wide Web site, investigators can use pompous to identify polymorphic markers for their research. A catalog of 13,261 potential polymorphic markers and associated primer sets has been created from the analysis of 141,779,504 base pairs of human genomic sequence in GenBank. These data are available on our Web site (pompous.swmed.edu) and will be updated periodically as GenBank is expanded and algorithm accuracy is improved.
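The core detection step can be illustrated with a toy scanner, here restricted to dinucleotide repeats (pompous itself handles repeat units up to 250-mers and additionally scores candidates and designs primers). This function is an illustration, not the published code.

```python
# Toy tandem-repeat scanner: report runs of a repeated dinucleotide motif.
def find_dinucleotide_repeats(seq, min_copies=4):
    """Return (start, motif, copies) for each run of >= min_copies tandem
    copies of a dinucleotide motif (homopolymer motifs like 'AA' skipped)."""
    hits, i = [], 0
    while i < len(seq) - 1:
        motif = seq[i:i + 2]
        copies = 1
        while seq[i + 2 * copies:i + 2 * copies + 2] == motif:
            copies += 1
        if copies >= min_copies and motif[0] != motif[1]:
            hits.append((i, motif, copies))
            i += 2 * copies          # jump past the run
        else:
            i += 1
    return hits
```

A polymorphism-oriented tool would then rank hits, since longer and purer repeats tend to be more variable in the population.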

Operon structure is an important organization feature of bacterial genomes. Many sets of genes occur in the same order on multiple genomes; these conserved gene groupings represent candidate operons. This study describes a computational method to estimate the likelihood that such conserved gene sets form operons. The method was used to analyze 34 bacterial and archaeal genomes, and yielded more than 7600 pairs of genes that are highly likely (P ≥ 0.98) to belong to the same operon. The sensitivity of our method is 30–50% for the Escherichia coli genome. The predicted gene pairs are available from our World Wide Web site http://www.tigr.org/tigr-scripts/operons/operons.cgi.
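The comparative signal behind the method, conservation of gene adjacency across genomes, can be sketched in a few lines; the likelihood estimation itself is not reproduced here, and the input representation is a simplification.

```python
# Toy version of the comparative signal: gene pairs that are adjacent
# (in the same order) in every input genome are candidate operon pairs.
def conserved_adjacent_pairs(genomes):
    """genomes: list of gene-order lists. Return ordered pairs adjacent
    in all genomes."""
    def adjacent_pairs(order):
        return {(a, b) for a, b in zip(order, order[1:])}
    common = adjacent_pairs(genomes[0])
    for g in genomes[1:]:
        common &= adjacent_pairs(g)
    return common
```

The published method goes further, turning counts of conservation into a probability that the pair shares an operon, which is what yields the P >= 0.98 threshold quoted above.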

The EMBL Nucleotide Sequence Database (http://www.ebi.ac.uk/embl/) is maintained at the European Bioinformatics Institute (EBI) in an international collaboration with the DNA Data Bank of Japan (DDBJ) and GenBank at the NCBI (USA). Data is exchanged amongst the collaborating databases on a daily basis. The major contributors to the EMBL database are individual authors and genome project groups. Webin is the preferred web-based submission system for individual submitters, whilst automatic procedures allow incorporation of sequence data from large-scale genome sequencing centres and from the European Patent Office (EPO). Database releases are produced quarterly. Network services allow free access to the most up-to-date data collection via ftp, email and World Wide Web interfaces. EBI’s Sequence Retrieval System (SRS), a network browser for databanks in molecular biology, integrates and links the main nucleotide and protein databases plus many specialized databases. For sequence similarity searching a variety of tools (e.g. Blitz, Fasta, BLAST) are available which allow external users to compare their own sequences against the latest data in the EMBL Nucleotide Sequence Database and SWISS-PROT.

Ligand-Gated Ion Channels (LGIC) are polymeric transmembrane proteins involved in the fast response to numerous neurotransmitters. All these receptors are formed by homologous subunits and the last two decades revealed an unexpected wealth of genes coding for these subunits. The Ligand-Gated Ion Channel database (LGICdb) has been developed to handle this increasing amount of data. The database aims to provide only one entry for each gene, containing annotated nucleic acid and protein sequences. The repository is carefully structured and the entries can be retrieved by various criteria. In addition to the sequences, the LGICdb provides multiple sequence alignments, phylogenetic analyses and atomic coordinates when available. The database is accessible via the World Wide Web (http://www.pasteur.fr/recherche/banques/LGIC/LGIC.html), where it is continuously updated. The version 16 (September 2000) available for download contained 333 entries covering 34 species.

The tmRNA database (tmRDB) is maintained at the University of Texas Health Science Center at Tyler, Texas, and accessible on the World Wide Web at the URL http://psyche.uthct.edu/dbs/tmRDB/tmRDB.html. Mirror sites are located at Auburn University, Auburn, Alabama (http://www.ag.auburn.edu/mirror/tmRDB/) and the Institute of Biological Sciences, Aarhus, Denmark (http://www.bioinf.au.dk/tmRDB/). The tmRDB provides information and citation links about tmRNA, a molecule that combines functions of tRNA and mRNA in trans-translation. tmRNA is likely to be present in all bacteria and has been found in algae chloroplasts, the cyanelle of Cyanophora paradoxa and the mitochondrion of the flagellate Reclinomonas americana. This release adds 26 new sequences and corresponding predicted tmRNA-encoded tag peptides, for a total of 86 tmRNAs, ordered alphabetically and phylogenetically. Secondary structures and three-dimensional models in PDB format for representative molecules are being made available. tmRNA alignments support individual base pairs and are generated manually, assisted by computational tools. The alignments with their corresponding structural annotation can be obtained in various formats, including a new column format designed to improve and simplify computational usability of the data.

The ARKdb genome databases provide comprehensive public repositories for genome mapping data from farmed species and other animals (http://www.thearkdb.org) providing a resource similar in function to that offered by GDB or MGD for human or mouse genome mapping data, respectively. Because we have attempted to build a generic mapping database, the system has wide utility, particularly for those species for which development of a specific resource would be prohibitive. The ARKdb genome database model has been implemented for 10 species to date. These are pig, chicken, sheep, cattle, horse, deer, tilapia, cat, turkey and salmon. Access to the ARKdb databases is effected via the World Wide Web using the ARKdb browser and Anubis map viewer. The information stored includes details of loci, maps, experimental methods and the source references. Links to other information sources such as PubMed and EMBL/GenBank are provided. Responsibility for data entry and curation is shared amongst scientists active in genome research in the species of interest. Mirror sites in the United States are maintained in addition to the central genome server at Roslin.

GOBASE (http://megasun.bch.umontreal.ca/gobase/) is a network-accessible biological database, which is unique in bringing together diverse biological data on organelles with taxonomically broad coverage, and in furnishing data that have been exhaustively verified and completed by experts. So far, we have focused on mitochondrial data: GOBASE contains all published nucleotide and protein sequences encoded by mitochondrial genomes, selected RNA secondary structures of mitochondria-encoded molecules, genetic maps of completely sequenced genomes, taxonomic information for all species whose sequences are present in the database and organismal descriptions of key protistan eukaryotes. All of these data have been integrated and organized in a formal database structure to allow sophisticated biological queries using terms that are inherent in biological concepts. Most importantly, data have been validated, completed, corrected and standardized, a prerequisite of meaningful analysis. In addition, where critical data are lacking, such as genetic maps and RNA secondary structures, they are generated by the GOBASE team and collaborators, and added to the database. The database is implemented in a relational database management system, but features an object-oriented view of the biological data through a Web/Genera-generated World Wide Web interface. Finally, we have developed software for database curation (i.e. data updates, validation and correction), which will be described in some detail in this paper.

The Homeodomain Resource is an annotated collection of non-redundant protein sequences, three-dimensional structures and genomic information for the homeodomain protein family. Release 3.0 contains 795 full-length homeodomain-containing sequences, 32 experimentally derived structures and 143 homeobox loci implicated in human genetic disorders. Entries are fully hyperlinked to facilitate easy retrieval of the original records from source databases. A simple search engine with a graphical user interface is provided to query the component databases and assemble customized data sets. A new feature for this release is the addition of DNA recognition sites for all human homeodomain proteins described in the literature. The Homeodomain Resource is freely available through the World Wide Web at http://genome.nhgri.nih.gov/homeodomain.

The energy demand for operating Information and Communication Technology (ICT) systems has been growing, leading to high operational costs and a consequent increase in carbon emissions. In both datacenters and telecom infrastructures, networks account for a significant share of energy spending. As a result, there is increased demand for energy efficiency solutions, and several capabilities to save energy have been proposed. However, it is very difficult to orchestrate such energy efficiency capabilities, i.e., to coordinate or combine them in the same network while ensuring conflict-free operation and choosing the best one for a given scenario, so that a capability unsuited to the current bandwidth utilization is not applied and does not lead to congestion or packet loss. Moreover, the literature offers no way to do this while taking business directives into account. In this regard, a method able to orchestrate different energy efficiency capabilities is proposed, considering the possible combinations and conflicts among them, as well as the best option for a given bandwidth utilization and network characteristics. In the proposed method, the business policies specified in a high-level interface are refined down to the network level in order to bring high-level directives into the operation, and a Utility Function is used to combine energy efficiency and performance requirements. A Decision Tree able to determine what to do in each scenario is deployed in a Software-Defined Network environment. The proposed method was validated with different experiments, testing the Utility Function, the extra savings obtained when combining several capabilities, the decision tree interpolation, and dynamicity aspects. The orchestration proved valid for finding the best combination for a given scenario, achieving additional savings from the combination while ensuring conflict-free operation.
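A weighted utility function of the kind described might look as follows. The linear form, the weights and the capability fields are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: score an energy-saving capability by trading estimated savings
# against the performance headroom left at the current utilization.
def utility(savings, utilization, capacity_after, w_energy=0.6, w_perf=0.4):
    """Return a score; 0 if applying the capability would cause congestion.
    savings and utilization/capacity_after are fractions in [0, 1]."""
    if utilization > capacity_after:      # would congest: rule it out
        return 0.0
    headroom = 1.0 - utilization / capacity_after
    return w_energy * savings + w_perf * headroom

def best_capability(capabilities, utilization):
    """Pick the highest-utility capability for the current utilization."""
    return max(capabilities,
               key=lambda c: utility(c["savings"], utilization,
                                     c["capacity_after"]))
```

This captures the selection behavior described above: aggressive capabilities (e.g. putting links to sleep) win only when utilization is low enough that the reduced capacity still leaves headroom.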

We present a library of Penn State Fiber Optic Echelle (FOE) observations of a sample of field stars with spectral types F to M and luminosity classes V to I. The spectral coverage is from 3800 to 10000 Å with a nominal resolving power of 12,000. These spectra include many of the spectral lines most widely used as optical and near-infrared indicators of chromospheric activity, such as the Balmer lines (Hα to Hε), Ca II H & K, the Mg I b triplet, Na I D_1, D_2, He I D_3, and the Ca II IRT lines. There are also a large number of photospheric lines, which can also be affected by chromospheric activity, and temperature-sensitive photospheric features such as TiO bands. The spectra have been compiled with the goal of providing a set of standards observed at medium resolution. We have extensively used such data for the study of active chromosphere stars by applying a spectral subtraction technique. However, the data set presented here can also be utilized in a wide variety of ways, ranging from radial velocity templates to the study of variable stars and stellar population synthesis. This library can also be used for spectral classification purposes and the determination of atmospheric parameters (T_eff, log g, [Fe/H]). A digital version of all the fully reduced spectra is available via ftp and the World Wide Web (WWW) in FITS format.
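At its core, the spectral subtraction technique mentioned reduces to a point-by-point difference between an active star's spectrum and an inactive template on the same wavelength grid, leaving the chromospheric excess emission in lines such as Hα or Ca II H & K. This minimal sketch omits the rotational broadening and radial-velocity alignment a real application requires.

```python
# Minimal sketch of spectral subtraction: residual = active - template,
# assuming both flux arrays are already on the same wavelength grid.
def subtract_template(active_flux, template_flux):
    """Return the residual spectrum, point by point."""
    if len(active_flux) != len(template_flux):
        raise ValueError("spectra must share the same wavelength grid")
    return [a - t for a, t in zip(active_flux, template_flux)]
```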