917 results for World Wide Web (Information Retrieval System)
Abstract:
Malware is a serious threat to system security. With the widespread use of the World Wide Web there has been an enormous increase in virus attacks, making computer security essential for every computer and expanding the research areas devoted to the new incidents that arise, one of them being malware classification. "Malware developers" use new techniques to generate polymorphic malware by reusing existing malware, so it is necessary to group samples into families in order to study their characteristics and to detect new variants. This work, besides presenting a detailed state of the art of the classification of PE executable malware, presents an approach that improves the classification rate of the MALICIA malware database using the static executable-file features Imphash and Pehash. With these features a clustering is performed with an aggressive clustering algorithm, which is combined with the current classification through a majority voting algorithm and the icon_label feature, obtaining a Precision of 99.15% and a Recall of 99.32% and improving the MALICIA classification with an F-measure of 99.23%.---ABSTRACT---Malware is a serious threat to the security of systems. With the widespread use of the World Wide Web, there has been a huge increase in virus attacks, making computer security essential for all computers. New areas of research have appeared in this field, including classifying malware into families. Malware developers use polymorphism to generate new variants of existing malware, so it is crucial to group variants of the same family in order to study their characteristics and to detect new variants. This work, in addition to presenting a detailed analysis of the problem of classifying malware PE executable files, presents an approach in which the classification of the MALICIA malware database is improved by using static characteristics of executable files, namely Imphash and Pehash. Both features are evaluated by clustering real malware with family labels using an aggressive clustering algorithm and combining the result with the current classification through a majority voting algorithm, obtaining a Precision of 99.15% and a Recall of 99.32% and improving the classification of MALICIA with an F-measure of 99.23%.
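To make the pipeline concrete, the following minimal Python sketch groups PE samples by Imphash (via pefile's get_imphash(), which hashes the import table) and relabels each cluster by majority vote over the existing family labels. The `samples` dictionary and the omission of Pehash are simplifications for illustration; this is not the thesis code.

```python
# Minimal sketch (not the thesis code): group PE samples by Imphash and
# relabel each cluster with the majority of the existing MALICIA family labels.
# Pehash is omitted here and would be computed with a separate implementation.
from collections import Counter, defaultdict

import pefile  # pefile.PE(...).get_imphash() hashes the PE import table


def cluster_by_imphash(samples):
    """samples: dict of {pe_path: current_family_label}."""
    clusters = defaultdict(list)
    for path, label in samples.items():
        imphash = pefile.PE(path).get_imphash()
        clusters[imphash].append((path, label))
    return clusters


def relabel_by_majority_vote(clusters):
    """Assign every member of a cluster the most frequent label in that cluster."""
    relabelled = {}
    for members in clusters.values():
        majority_label = Counter(label for _, label in members).most_common(1)[0][0]
        for path, _ in members:
            relabelled[path] = majority_label
    return relabelled
```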
Abstract:
R2RML is used to specify transformations of data available in relational databases into materialised or virtual RDF datasets. SPARQL queries evaluated against virtual datasets are translated into SQL queries according to the R2RML mappings, so that they can be evaluated over the underlying relational database engines. In this paper we describe an extension of a well-known algorithm for SPARQL-to-SQL translation, originally formalised for RDBMS-backed triple stores, that takes R2RML mappings into account. We present the results of our implementation using queries from a synthetic benchmark and from three real use cases, and show that SPARQL queries can in general be evaluated as fast as the SQL queries that would have been generated by SQL experts if no R2RML mappings had been used.
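As a purely illustrative sketch of the general idea (not the algorithm described in the paper), the Python fragment below translates a single SPARQL triple pattern into SQL using a toy R2RML-style mapping; the table, subject template and property names are invented for the example.

```python
# Illustrative sketch only: one triple pattern -> SQL under a toy R2RML-style
# mapping. All names below (EMPLOYEE, ex:name, the template) are made up.
MAPPING = {
    # rr:logicalTable
    "table": "EMPLOYEE",
    # rr:subjectMap with rr:template
    "subject_template": "http://example.com/emp/{ID}",
    # rr:predicateObjectMap entries: predicate IRI -> source column
    "predicates": {"http://example.com/ns#name": "NAME"},
}


def triple_pattern_to_sql(predicate_iri, mapping=MAPPING):
    """Return SQL producing (subject, object) bindings for the pattern ?s <p> ?o."""
    column = mapping["predicates"][predicate_iri]
    # Rebuild the subject IRI from the template; the object comes from the column.
    template = mapping["subject_template"].replace("{ID}", "', ID, '")
    return (
        f"SELECT CONCAT('{template}') AS s, {column} AS o "
        f"FROM {mapping['table']} WHERE {column} IS NOT NULL"
    )


print(triple_pattern_to_sql("http://example.com/ns#name"))
```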
Abstract:
This Final Degree Project explains the procedure followed to study, design and develop Ackuaria, a portal for monitoring and analysing statistics of real-time communications. The results obtained and the graphical interface developed for a better user experience are then presented. Ackuaria relies on Licode, an open-source project developed at the Universidad Politécnica de Madrid, specifically in the Internet de Nueva Generación group of the Escuela Técnica Superior de Ingenieros de Telecomunicación. Licode makes it possible to run a streaming and videoconferencing service on the user's own infrastructure. It is designed to be fully scalable and is mainly oriented towards the Cloud, although it can equally be deployed on physical infrastructure. Licode is in turn based on WebRTC, a technology developed by the W3C (World Wide Web Consortium) and the IETF (Internet Engineering Task Force) for transmitting and receiving audio, video and data streams through the browser. It requires no additional installation, so establishing a Peer-to-Peer videoconferencing session is very simple. Licode uses an MCU (Multipoint Control Unit) to avoid making every connection between users Peer-to-Peer. The MCU acts as just another WebRTC client through which all streams pass, multiplexing them and redirecting them wherever necessary, which saves bandwidth and device resources very significantly. There is a growing need among users of Licode, and of any videoconferencing service in general, to manage their infrastructure on the basis of reliable data and statistics. Their goals are very varied: from studying the behaviour of WebRTC in different scenarios to monitoring usage so that the time published by each user can later be accounted for. Common to all these cases is the need for a tool that makes it possible to know at any moment what is happening in the Licode service and to store all the information so that it can be analysed later. To develop Ackuaria, a study of real-time communications was carried out in order to determine which parameters were essential and useful to monitor. Based on this study, the Licode architecture was updated so that it obtains all the necessary data and sends it in a form that Ackuaria can collect. The monitoring portal then processes that information and displays it in a clear and ordered way, in addition to providing a REST API to the user.
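As a hypothetical illustration of the kind of processing such a portal performs, the sketch below polls an assumed monitoring REST endpoint for publish/unpublish events and totals the time each user has spent publishing; the URL and JSON fields are assumptions for the example, not the real Ackuaria or Licode API.

```python
# Hypothetical collector sketch: poll an assumed stats endpoint for stream
# events and accumulate seconds published per user. Endpoint and field names
# are invented; they do not describe the actual Ackuaria REST API.
import time
from collections import defaultdict

import requests

STATS_URL = "http://localhost:3001/stats/events"  # assumed endpoint


def accumulate_publish_time(poll_seconds=10, rounds=6):
    """Poll the stats endpoint and sum the seconds each user has published."""
    published_since = {}          # user -> timestamp of their last 'publish' event
    totals = defaultdict(float)   # user -> accumulated seconds published
    for _ in range(rounds):
        for event in requests.get(STATS_URL, timeout=5).json():
            user, kind, ts = event["user"], event["type"], event["timestamp"]
            if kind == "publish":
                published_since[user] = ts
            elif kind == "unpublish" and user in published_since:
                totals[user] += ts - published_since.pop(user)
        time.sleep(poll_seconds)
    return dict(totals)
```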
Abstract:
Directed hypergraphs have been employed in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs have also been used as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot easily be modelled using binary relations alone. In this context, this type of representation is known as a hyper-network. A directed hypergraph is a generalization of a directed graph especially well suited to representing many-to-many relationships. While an edge in a directed graph defines a relation between two of its nodes, a hyperedge in a directed hypergraph defines a relation between two sets of its nodes. Strong connectivity is an equivalence relation that partitions the set of nodes of a directed hypergraph, and each partition defines an equivalence class known as a strongly-connected component. Studying the strongly-connected components of a directed hypergraph can help to better understand the structure of this type of hypergraph when its size is considerable. In the case of directed graphs, there are very efficient algorithms for computing the strongly-connected components of large graphs. Thanks to these algorithms, it has been shown that the structure of the WWW has a "bow-tie" shape, in which more than 70% of the nodes are distributed among three large sets, one of them a strongly-connected component. This type of structure has also been observed in complex networks in other areas such as biology. Studies of a similar nature have not been possible for directed hypergraphs because no algorithms exist that can compute the strongly-connected components of this type of hypergraph. In this doctoral thesis we have investigated how to compute the strongly-connected components of a directed hypergraph. Specifically, we have developed two algorithms for this problem, proved them correct and determined their computational complexity. Both algorithms have been evaluated empirically to compare their running times. For the evaluation we produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős-Rényi, Newman-Watts-Strogatz and Barabási-Albert. Several optimizations for both algorithms have been implemented and analysed in the thesis. In particular, collapsing the strongly-connected components of the directed graph that can be built by removing certain complex hyperedges from the original directed hypergraph notably improves the running times of the algorithms on several of the hypergraphs used in the evaluation. Besides the application examples mentioned above, directed hypergraphs have also been employed in the area of knowledge representation. In particular, this type of hypergraph has been used to compute ontology modules. An ontology can be defined as a set of axioms that formally specifies a set of symbols and their relationships, while a module can be understood as a subset of the ontology's axioms that captures all the knowledge the ontology holds about a specific set of symbols and their relationships.
In the thesis we focus only on modules computed using the syntactic locality technique. Because ontologies can be very large, computing modules can facilitate the reuse and maintenance of these ontologies. However, analysing all the possible modules of an ontology is in general very costly, because the number of modules grows exponentially with the number of symbols and axioms of the ontology. Fortunately, the axioms of an ontology can be divided into partitions known as atoms. Each atom represents a maximal set of axioms that always appear together in a module. The atomic decomposition of an ontology is defined as a directed graph in which each node corresponds to an atom and each edge defines a dependency between a pair of atoms. In this thesis we introduce the concept of an "axiom dependency hypergraph", which generalizes the concept of the atomic decomposition of an ontology. A module in an ontology corresponds to a connected component in this type of hypergraph, and an atom of an ontology to a strongly-connected component. We have adapted the implementation of our algorithms so that they also work on axiom dependency hypergraphs and can thus compute the atoms of an ontology. To demonstrate the viability of this idea, we have incorporated our algorithms into an application we have developed for module extraction and atomic decomposition of ontologies. We have called this application HyS and have studied its running times using a selection of well-known ontologies from the biomedical domain, most of them available in the NCBO portal. The evaluation results show that HyS runs much faster than the fastest known applications. ABSTRACT Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong-connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach might contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram where more than 70% of the nodes are distributed into three large sets and one of these sets is a large strongly-connected component.
This particular structure has also been observed in complex networks in other fields such as biology. Similar studies cannot be conducted on directed hypergraphs because no algorithm exists for computing the strongly-connected components of such hypergraphs. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs. The second algorithm follows a simple approach to compute the strongly-connected components. This approach is based on the fact that two nodes of a graph that are strongly-connected can also reach the same nodes; in other words, the set of nodes reachable from each of them is the same. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models like Erdős-Rényi and Newman-Watts-Strogatz. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can succinctly be represented using the Atomic Decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The Atomic Decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology. A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and in this manner compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO Bioportal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
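The reachability-based idea behind the second algorithm can be sketched as follows in Python, assuming B-connectivity semantics (a hyperedge fires once all of its tail nodes have been reached); nodes are grouped into one strongly-connected component when they mutually reach each other. This is a high-level illustration, not the thesis implementation.

```python
# Minimal sketch under assumed B-connectivity: a hyperedge (tail, head) fires
# once every tail node is reached. Nodes that mutually reach each other form
# one strongly-connected component.
def b_reachable(start, hyperedges):
    """Forward reachable set of `start` under B-reachability."""
    reached = {start}
    changed = True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if set(tail) <= reached and not set(head) <= reached:
                reached |= set(head)
                changed = True
    return reached


def strongly_connected_components(nodes, hyperedges):
    """Group nodes into components by mutual B-reachability."""
    reach = {v: b_reachable(v, hyperedges) for v in nodes}
    components = {}
    for v in nodes:
        # u and v are strongly connected iff each reaches the other.
        key = frozenset(u for u in reach[v] if v in reach[u])
        components.setdefault(key, []).append(v)
    return list(components.values())


# Toy example: hyperedge {a, b} -> {c}, plus c -> a and a -> b closing a cycle;
# the result is [['a', 'c'], ['b']].
print(strongly_connected_components(
    ["a", "b", "c"],
    [(("a", "b"), ("c",)), (("c",), ("a",)), (("a",), ("b",))],
))
```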
Abstract:
Operon structure is an important organization feature of bacterial genomes. Many sets of genes occur in the same order on multiple genomes; these conserved gene groupings represent candidate operons. This study describes a computational method to estimate the likelihood that such conserved gene sets form operons. The method was used to analyze 34 bacterial and archaeal genomes, and yielded more than 7600 pairs of genes that are highly likely (P ≥ 0.98) to belong to the same operon. The sensitivity of our method is 30–50% for the Escherichia coli genome. The predicted gene pairs are available from our World Wide Web site http://www.tigr.org/tigr-scripts/operons/operons.cgi.
Abstract:
Ligand-Gated Ion Channels (LGIC) are polymeric transmembrane proteins involved in the fast response to numerous neurotransmitters. All these receptors are formed by homologous subunits, and the last two decades revealed an unexpected wealth of genes coding for these subunits. The Ligand-Gated Ion Channel database (LGICdb) has been developed to handle this increasing amount of data. The database aims to provide a single entry for each gene, containing annotated nucleic acid and protein sequences. The repository is carefully structured and the entries can be retrieved by various criteria. In addition to the sequences, the LGICdb provides multiple sequence alignments, phylogenetic analyses and atomic coordinates when available. The database is accessible via the World Wide Web (http://www.pasteur.fr/recherche/banques/LGIC/LGIC.html), where it is continuously updated. Version 16 (September 2000), available for download, contained 333 entries covering 34 species.
Abstract:
We present a library of Penn State Fiber Optic Echelle (FOE) observations of a sample of field stars with spectral types F to M and luminosity classes V to I. The spectral coverage is from 3800 to 10000 Å with a nominal resolving power of 12,000. These spectra include many of the spectral lines most widely used as optical and near-infrared indicators of chromospheric activity, such as the Balmer lines (Hα to Hε), Ca II H & K, the Mg I b triplet, Na I D_1, D_2, He I D_3, and the Ca II IRT lines. There are also a large number of photospheric lines, which can also be affected by chromospheric activity, and temperature-sensitive photospheric features such as the TiO bands. The spectra have been compiled with the goal of providing a set of standards observed at medium resolution. We have extensively used such data for the study of active-chromosphere stars by applying a spectral subtraction technique. However, the data set presented here can also be utilized in a wide variety of ways, ranging from radial velocity templates to the study of variable stars and stellar population synthesis. This library can also be used for spectral classification purposes and the determination of atmospheric parameters (T_eff, log g, [Fe/H]). A digital version of all the fully reduced spectra is available via ftp and the World Wide Web (WWW) in FITS format.
Abstract:
We present a library of Utrecht echelle spectrograph (UES) observations of a sample of F, G, K and M field dwarf stars covering the spectral range from 4800 Å to 10600 Å at a resolution of 55,000. These spectra include some of the spectral lines most widely used as optical and near-infrared indicators of chromospheric activity, such as Hβ, the Mg I b triplet, Na I D_1, D_2, He I D_3, Hα, and the Ca II IRT lines, as well as a large number of photospheric lines which can also be affected by chromospheric activity. The spectra have been compiled with the aim of providing a set of standards observed at high resolution, to be used when applying the spectral subtraction technique to obtain the active-chromosphere contribution to these lines in chromospherically active single and binary stars. This library can also be used for spectral classification purposes. A digital version with all the spectra is available via ftp and the World Wide Web (WWW) in both ASCII and FITS formats.
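For illustration, the spectral subtraction idea these standards support can be sketched in a few lines of Python: an inactive reference spectrum is interpolated onto the active star's wavelength grid and subtracted, leaving the chromospheric excess emission. Rotational broadening, radial velocity shifts and template weighting are ignored in this minimal sketch, and the synthetic arrays merely stand in for real spectra.

```python
# Minimal sketch of spectral subtraction: active spectrum minus an inactive
# reference interpolated onto the same wavelength grid. Both spectra are
# assumed continuum-normalised over overlapping wavelengths.
import numpy as np


def subtract_reference(wave_active, flux_active, wave_ref, flux_ref):
    """Return the residual spectrum: active star minus interpolated reference."""
    flux_ref_interp = np.interp(wave_active, wave_ref, flux_ref)
    return flux_active - flux_ref_interp


# Toy usage with synthetic arrays standing in for real spectra around Halpha.
wave = np.linspace(6550.0, 6575.0, 500)                              # Angstroms
active = 1.0 - 0.4 * np.exp(-((wave - 6562.8) / 0.8) ** 2)           # filled-in line
reference = 1.0 - 0.6 * np.exp(-((wave - 6562.8) / 0.8) ** 2)        # deeper, inactive line
excess = subtract_reference(wave, active, wave, reference)           # chromospheric excess
```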
Abstract:
Non-traditional means of recruitment need to accompany traditional ones because the twenty-first-century knowledge worker makes increasing use of technology. In this capstone project, the author examined the recruiting efficacy of social networks. Non-traditional recruitment through social networks on the World Wide Web can help organizations compete for potential applicants and assist job seekers in securing employment. These means are also cost effective for the employer. The examples of organizational usage presented in this investigation illustrate that social networking can improve the efficacy of recruitment and address generational needs.
Abstract:
The English language and the Internet, both separately and taken together, are nowadays well acknowledged as powerful forces which influence and affect the lexico-grammatical characteristics of other languages world-wide. In fact, many authors like Crystal (2004) have pointed out the emergence of the so-called Netspeak, that is, the language used on the Net or World Wide Web; as Crystal himself (2004: 19) puts it, ‘a type of language displaying features that are unique to the Internet […] arising out of its character as a medium which is electronic, global and interactive’. This ‘language’, however, may be understood in different ways: either as an adaptation of the English language proper to Internet requirements and purposes, or as a new, rapidly changing and developing language resulting from the rapid evolution or adaptation to Internet requirements of almost all world languages, for which English is a trendsetter. If the second and probably most plausible interpretation is adopted, there are three salient features of ‘Netspeak’: (a) the rapid expansion of all its new linguistic developments thanks to the Internet itself, which may lead to the generalization and widespread acceptance of new words, coinages, or meanings, hundreds of times faster than was the case with the printed media; (b) as said above, the visible influence of English, the most prevalent language on the Internet; and consequently (c) the tendency of this new language to reduce the ‘distance’ between English and other languages, as well as the ignorance of the former by speakers of other languages, since the ‘Netspeak’ version of the latter adopts grammatical, syntactic and lexical features of English. Thus, linguistic differences may even disappear when code-switching and/or borrowing occurs, as whole fragments of English appear in other language contexts. As a consequence of this new situation, an ideal context appears for interlanguage or multilingual word formation to thrive: puns, blends, compounds and word creativity in general find in the web the ideal place to gain rapid acceptance world-wide, as a result of fashion, coincidence, or sheer merit of the new linguistic proposals.
Abstract:
The Prophet's Village examines the problem of maintaining enough cattle to supply milk and meat versus selling off cattle to raise money for maize, antibiotics and pesticides; cash is also needed to pay for legal fees for Rerenko, the Laibon's son.
Abstract:
Set in Togo, West Africa, Ashakara is a modern African tale. An African doctor finds a cure to a deadly virus and decides to mass produce the drug at low cost in Africa. However, a pharmaceutical multinational does not want the doctor to succeed and sends an agent to Africa first to buy the drug then to destroy it ...
Abstract:
Title from caption
Abstract:
Includes indexes.
Abstract:
Editors: 1853-Feb. 1867, J.D.B. De Bow -- Apr. 1867-, R.G. Barnwell, E.Q. Bell -- Mar. 1868-Mar. 1869, W.M. Burwell.