956 results for Data compression (Computer science)
Study of the impact of the maximum Ethernet frame payload size on the profile of IPv6 traffic on the Internet
Abstract:
The transition from version 4 to version 6 of the Internet Protocol (IP) has been taking place across the Internet community. However, the internal structure of the IPv4 and IPv6 protocols, in particular the size of their headers, can cause changes in the network traffic profile. This work studies the changes in network traffic characteristics, evaluating what would change if the generated traffic were IPv6 only instead of IPv4. This paper extends earlier research, addressing new questions while using publicly available real data traces. A reverse-engineering methodology is applied to the captured IPv4 packets, making it possible to infer the original payload at the originating host and then re-encapsulate that payload into new packets under IPv6 encapsulation constraints. It is concluded that, in the transition from IPv4 to IPv6, there will be an increase in the number of packets transmitted on the Internet.
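A minimal sketch of the packet-count effect described in this abstract, assuming a standard 1500-byte Ethernet MTU, a 20-byte IPv4 header without options and the fixed 40-byte IPv6 base header; the function and constants are illustrative and ignore transport-layer headers.

```python
import math

ETHERNET_MTU = 1500   # maximum IP packet size on a standard Ethernet link
IPV4_HEADER = 20      # IPv4 header without options
IPV6_HEADER = 40      # fixed IPv6 base header, no extension headers

def packets_needed(payload_bytes: int, ip_header: int, mtu: int = ETHERNET_MTU) -> int:
    """Number of IP packets required to carry a given application payload
    (transport-layer headers are ignored to keep the sketch simple)."""
    room_per_packet = mtu - ip_header
    return max(1, math.ceil(payload_bytes / room_per_packet))

# Example: a 14,800-byte payload inferred from captured IPv4 traffic
payload = 14_800
print("IPv4:", packets_needed(payload, IPV4_HEADER), "packets")   # 10 packets
print("IPv6:", packets_needed(payload, IPV6_HEADER), "packets")   # 11 packets
```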
Abstract:
Customer Relationship Management (CRM) software solutions and information support systems, known as Business Intelligence (BI), enable the collection of data and its transformation into information and knowledge, which is vital for differentiating organizations in a globalized and constantly changing world. Building a corporate Data Warehouse is fundamental for organizations that use several operational systems, so that their information can be aggregated. Fundação INATEL, a private foundation of public interest that is 100% state-owned, is an example of this type of organization: it has a customer database of more than 250,000 records, operates in areas as diverse as Tourism, Culture and Sport, and is supported by more than 25 autonomous information systems. This work seeks to identify the benefits of implementing an Analytical CRM at Fundação INATEL. A methodology for its implementation is presented, together with a suggested data model for obtaining a single view of the customer, accessible to the whole organization, in order to ensure full satisfaction and consequent loyalty to the INATEL brand. Making this information available will give Fundação INATEL a privileged position and will play a fundamental role in its economic sustainability.
Abstract:
This paper reports the work of Karrer and Wirth in identifying, respectively, percentage results and the Depth to Mate (DTM) and Depth to Conversion (DTC) data in all 2-5-man chess endgames.
Abstract:
The Self-Organizing Map (SOM) algorithm has been extensively used for analysis and classification problems. For this kind of problem, datasets are becoming larger and larger, and it is necessary to speed up SOM learning. In this paper we present an application of the Simulated Annealing (SA) procedure to the SOM learning algorithm. The goal of the algorithm is to obtain fast learning and better performance in terms of matching of input data and regularity of the obtained map. An advantage of the proposed technique is that it preserves the simplicity of the basic algorithm. Several tests, carried out on different large datasets, demonstrate the effectiveness of the proposed algorithm in comparison with the original SOM and with some of its modifications introduced to speed up learning.
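A compact sketch of the general idea of coupling SOM learning with an annealing-style temperature that shrinks the learning rate and neighbourhood over time; the cooling schedule and all parameters are assumptions for illustration, not the algorithm evaluated in the paper.

```python
import numpy as np

def som_sa(data, grid=(10, 10), epochs=20, t0=1.0, cooling=0.9, seed=0):
    """Train a SOM whose learning rate and neighbourhood shrink with an
    annealing-style temperature (illustrative sketch, not the paper's code)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    # grid coordinates of every unit, used by the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)

    temperature = t0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # best-matching unit for this input vector
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # neighbourhood radius and learning rate both scale with temperature
            sigma = max(1.0, temperature * max(h, w) / 2)
            lr = 0.5 * temperature
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
        temperature *= cooling  # geometric cooling schedule

    return weights

# usage: map 1000 random 3-D points onto a 10x10 grid
codebook = som_sa(np.random.default_rng(1).random((1000, 3)))
```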
Abstract:
In this work, a new method for clustering and building a topographic representation of a bacteria taxonomy is presented. The method is based on the analysis of stable parts of the genome, the so-called “housekeeping genes”. The proposed method generates topographic maps of the bacteria taxonomy, where relations among different type strains can be visually inspected and verified. Two well-known DNA alignment algorithms are applied to the genomic sequences. Topographic maps are optimized to represent the similarity among the sequences according to their evolutionary distances. The experimental analysis is carried out on 147 type strains of the Gammaproteobacteria class by means of the 16S rRNA housekeeping gene. Complete sequences of the gene have been retrieved from the NCBI public database. In the experimental tests, the maps show clusters of homologous type strains and reveal some singular cases, potentially due to incorrect classification or erroneous annotations in the database.
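As an illustration of the distance computation such a map is built on, the sketch below computes a pairwise dissimilarity matrix for a few toy sequences; a plain edit distance stands in for the two DNA alignment algorithms used in the paper (which this abstract does not name), and the sequences are made up.

```python
import numpy as np

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance, used here as a stand-in for a real DNA alignment score."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def distance_matrix(sequences):
    """Symmetric matrix of pairwise dissimilarities between sequences."""
    n = len(sequences)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = edit_distance(sequences[i], sequences[j])
    return d

# toy 16S-like fragments; real sequences would come from the NCBI database
seqs = ["ACGTACGTGG", "ACGTACGAGG", "TTGTACGTGC"]
print(distance_matrix(seqs))
```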
Abstract:
Structured data represented in the form of graphs arises in several fields of science, and the growing amount of available data makes distributed graph mining techniques particularly relevant. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load-balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute’s HIV-screening dataset, where the approach attains close-to-linear speedup in a network of workstations.
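The sketch below illustrates, in schematic form, how dynamic partitioning and receiver-initiated balancing can be combined: idle workers ask their peers to donate part of their pending search-tree nodes. Threads and a toy tree stand in for the peer-to-peer framework and the molecular search space; all names and the splitting policy are illustrative assumptions, not the paper's implementation.

```python
import queue
import threading

class Worker(threading.Thread):
    """Expands search-tree nodes from a local queue; when it runs dry it asks
    the other workers (receiver-initiated) to donate part of their pending work."""
    def __init__(self, wid, peers, results):
        super().__init__()
        self.wid, self.peers, self.results = wid, peers, results
        self.local = queue.Queue()

    def donate(self):
        """Give away about half of the locally pending nodes (may be empty)."""
        donated = []
        for _ in range(self.local.qsize() // 2):
            try:
                donated.append(self.local.get_nowait())
            except queue.Empty:
                break
        return donated

    def run(self):
        while True:
            try:
                node = self.local.get(timeout=0.1)
            except queue.Empty:
                # idle: request work from peers (receiver-initiated balancing)
                stolen = [n for p in self.peers if p is not self for n in p.donate()]
                if not stolen:
                    return  # nothing left anywhere: terminate
                for n in stolen:
                    self.local.put(n)
                continue
            self.results.append((self.wid, node))
            # dynamic partitioning: children stay local until stolen by an idle peer
            for child in expand(node):
                self.local.put(child)

def expand(node):
    """Toy search tree: each node is a tuple of ints, depth-limited to 3."""
    return [node + (i,) for i in range(3)] if len(node) < 3 else []

results = []
workers = []
workers.extend(Worker(i, workers, results) for i in range(4))
workers[0].local.put(())          # root of the search tree goes to one worker
for w in workers: w.start()
for w in workers: w.join()
print(len(results), "nodes expanded")   # 1 + 3 + 9 + 27 = 40
```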
Abstract:
This paper presents recent research into the functions and value of sketch outputs during computer-supported collaborative design. Sketches made primarily exploiting whiteboard technology are shown to support subjects engaged in remote collaborative design, particularly when constructed in 'near-synchronous' communication. The authors define near-synchronous communication and speculate that it is compatible with the reflective and iterative nature of design activity. There appear to be significant similarities between the making of sketches in near-synchronous remote collaborative design and those made on paper in more traditional face-to-face settings. With the current increase in the use of computer-supported collaborative working (CSCW) in undergraduate and postgraduate design education, it is proposed that sketches and sketching can make important contributions to design learning in this context.
Abstract:
This paper represents the first step in ongoing work towards designing an unsupervised method based on a genetic algorithm for intrusion detection. Its main role in a broader system is to give notice of unusual traffic and in that way provide the possibility of detecting unknown attacks. Most of the machine-learning techniques deployed for intrusion detection are supervised, as these techniques are generally more accurate, but this implies the need to label the data for training and testing, which is time-consuming and error-prone. Hence, our goal is to devise an anomaly detector that is unsupervised but at the same time robust and accurate. Genetic algorithms are robust and able to avoid getting stuck in local optima, unlike many other clustering techniques. The model is verified on the KDD99 benchmark dataset, generating a solution competitive with state-of-the-art solutions, which demonstrates the potential of the proposed method.
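A rough sketch of one way a genetic algorithm can drive unsupervised anomaly detection: cluster centroids are evolved to fit the bulk of the traffic, and points far from every centroid are flagged. The fitness function, operators and thresholds here are assumptions for illustration, not the method evaluated on KDD99.

```python
import numpy as np

def ga_cluster(points, k=3, pop_size=30, generations=50, mutation=0.1, seed=0):
    """Evolve k cluster centroids with a simple genetic algorithm; illustrative
    sketch of an unsupervised GA-based detector, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    dim = points.shape[1]
    # each individual is a flat vector of k centroids
    pop = rng.uniform(points.min(), points.max(), size=(pop_size, k * dim))

    def fitness(ind):
        centroids = ind.reshape(k, dim)
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        return -d.min(axis=1).mean()          # lower mean distance = fitter

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        # uniform crossover between random parent pairs, then Gaussian mutation
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, k * dim)) < 0.5
        pop = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        pop = pop + rng.normal(0, mutation, pop.shape)

    return max(pop, key=fitness).reshape(k, dim)

# toy "traffic features": two dense clusters plus a few outliers to be flagged
rng = np.random.default_rng(1)
normal = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])
data = np.vstack([normal, rng.uniform(-5, 8, (5, 2))])

centroids = ga_cluster(data, k=2)
dist = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2).min(axis=1)
print("flagged as anomalous:", np.sum(dist > dist.mean() + 3 * dist.std()))
```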
Abstract:
This paper investigates the use of Really Simple Syndication (RSS) to dynamically change virtual environments. The case study presented here uses meteorological data downloaded from the Internet in the form of an RSS feed; this data is used to simulate current weather patterns in a virtual environment. The downloaded data is aggregated and interpreted in conjunction with a configuration file, used to associate relevant weather information with the rendering engine. The engine is able to animate a wide range of basic weather patterns. Virtual reality is a way of immersing a user in a different environment, and the amount of immersion the user experiences is important. Collaborative virtual reality will benefit from this work by gaining a simple way to incorporate up-to-date RSS feed data into any environment scenario. Instead of simulating weather conditions in training scenarios, actual weather conditions can be incorporated, improving the scenario and immersion.
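A small sketch of the aggregate-and-interpret step: an RSS item is parsed and matched against a configuration mapping that associates keywords with rendering-engine presets. The feed content, tag names and preset names are hypothetical, not the feed or configuration format used in the paper.

```python
import xml.etree.ElementTree as ET

# Stand-in for a downloaded meteorological RSS item; the tags and the mapping
# below are illustrative assumptions, not the feed used in the paper.
RSS = """<rss version="2.0"><channel><item>
  <title>Current conditions</title>
  <description>Heavy rain, wind 35 km/h, visibility 2 km</description>
</item></channel></rss>"""

# "Configuration file": maps keywords found in the feed to rendering-engine presets.
WEATHER_CONFIG = {
    "rain":  {"effect": "rain_particles", "density": 0.8},
    "snow":  {"effect": "snow_particles", "density": 0.6},
    "fog":   {"effect": "volumetric_fog", "density": 0.4},
    "clear": {"effect": "none", "density": 0.0},
}

def interpret(feed_xml: str) -> dict:
    """Aggregate the feed item and pick the first preset whose keyword appears."""
    description = ET.fromstring(feed_xml).findtext(".//item/description", default="")
    text = description.lower()
    for keyword, preset in WEATHER_CONFIG.items():
        if keyword in text:
            return {"source_text": description, **preset}
    return {"source_text": description, **WEATHER_CONFIG["clear"]}

print(interpret(RSS))   # -> rain_particles preset driven by the feed text
```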
Abstract:
The paper presents how workflow-oriented, single-user Grid portals can be extended to meet the requirements of users with collaborative needs. Through collaborative Grid portals, different research and engineering teams would be able to share knowledge and resources. At the same time, the workflow concept ensures that the shared knowledge and computational capacity are aggregated to achieve the high-level goals of the group. The paper discusses the different issues that collaborative support raises for Grid portal environments during the different phases of workflow-oriented development work. While in the design period the most important task of the portal is to provide consistent and fault-tolerant data management, during workflow execution it must act upon the security framework its back-end Grids are built on.
Abstract:
This paper presents a parallel Linear Hashtable Motion Estimation Algorithm (LHMEA). Most parallel video compression algorithms focus on the Group of Pictures (GOP). Based on the LHMEA we proposed earlier [1][2], we developed a parallel motion estimation algorithm that focuses inside the frame. We divide each reference frame into equally sized regions, which are processed in parallel to increase the encoding speed significantly. The theoretical and practical speedups of the parallel LHMEA, as a function of the number of PCs in the cluster, are compared and discussed. Motion Vectors (MV) are generated by the first-pass LHMEA and used as predictors for the second-pass Hexagonal Search (HEXBS) motion estimation, which only searches a small number of Macroblocks (MBs). We evaluated the distributed parallel implementation of the LHMEA of the TPA for real-time video compression.
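A simplified sketch of the region-parallel idea: the frame is split into equally sized stripes of macroblocks and each stripe is matched against the reference frame by a separate worker. A plain full search stands in here for the LHMEA/HEXBS passes, and the block size, search range and worker count are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

BLOCK = 16    # macroblock size (illustrative)
SEARCH = 8    # search range in pixels (illustrative)

def motion_vectors(region_rows, ref, cur):
    """Full-search block matching inside one stripe of the current frame
    (a simple stand-in for the first-pass LHMEA described in the paper)."""
    vectors = {}
    h, w = cur.shape
    for by in region_rows:
        for bx in range(0, w - BLOCK + 1, BLOCK):
            block = cur[by:by + BLOCK, bx:bx + BLOCK]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-SEARCH, SEARCH + 1):
                for dx in range(-SEARCH, SEARCH + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - BLOCK and 0 <= x <= w - BLOCK:
                        sad = np.abs(block - ref[y:y + BLOCK, x:x + BLOCK]).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors

# two toy greyscale frames; the second is the first shifted down-right by 2 pixels
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(np.int32)
cur = np.roll(ref, (2, 2), axis=(0, 1))

# divide the frame into equally sized row-stripes and process them in parallel
rows = list(range(0, 64 - BLOCK + 1, BLOCK))
regions = [rows[i::4] for i in range(4)]                  # 4 stripes for 4 workers
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = pool.map(lambda r: motion_vectors(r, ref, cur), regions)
    mvs = {k: v for part in parts for k, v in part.items()}
print(mvs[(16, 16)])   # expected (-2, -2): the block moved by (+2, +2) between frames
```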
Abstract:
The VERA (Virtual Environment for Research in Archaeology) project is based on a research excavation of part of the large Roman town at Silchester, which aims to trace the site's development from its origins before the Roman conquest to its abandonment in the fifth century A.D. [1]. The VERA project aims to investigate how archaeologists use Information Technology (IT) in the context of a field excavation, and also for post-excavation analysis. VERA is a two-year project funded by the JISC VRE 2 programme that involves researchers from the University of Reading, University College London, and York Archaeological Trust. The overall aim of the project is to assess and introduce new tools and technologies that can aid the archaeological processes of gathering, recording and later analysis of data on the finds and artefacts discovered. The researchers involved in the project have a mix of skills, ranging from those related to archaeology and computer science through to ones involving usability and user assessment. This paper reports on the status of the research and development work undertaken in the project so far; this includes addressing various programming hurdles, on-site experiments and experiences, and the outcomes of usability and assessment studies.
Abstract:
When a computer program requires legitimate access to confidential data, the question arises whether such a program may illegally reveal sensitive information. This paper proposes a policy model to specify what information flow is permitted in a computational system. The security definition, which is based on a general notion of information lattices, allows various representations of information to be used in the enforcement of secure information flow in deterministic or nondeterministic systems. A flexible semantics-based analysis technique is presented, which uses the input-output relational model induced by an attacker's observational power to compute the information released by the computational system. An illustrative attacker model demonstrates the use of the technique to develop a termination-sensitive analysis. The technique allows the development of various information flow analyses, parametrised by the attacker's observational power, which can be used to enforce "what" declassification policies.
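A minimal sketch of the lattice-based policy idea using a two-point lattice (Low below High): the level of a computed value is the join of its sources, and a flow is permitted only if the sink's level dominates it. The labels and example program are made up for illustration and are far simpler than the general information lattices of the paper.

```python
# A minimal two-point security lattice: information may only flow upward.
LEVELS = {"Low": 0, "High": 1}

def join(a: str, b: str) -> str:
    """Least upper bound of two security levels."""
    return a if LEVELS[a] >= LEVELS[b] else b

def flow_allowed(source: str, sink: str) -> bool:
    """A flow is permitted only if the sink's level dominates the source's."""
    return LEVELS[source] <= LEVELS[sink]

# Hypothetical variables labelled with levels; the "program" is a list of
# assignments written as (target, [source variables]) pairs.
labels = {"password": "High", "log_entry": "Low", "session_key": "High"}
program = [
    ("session_key", ["password"]),   # High -> High: permitted
    ("log_entry", ["password"]),     # High -> Low: rejected (would declassify)
]

for target, sources in program:
    level = "Low"
    for s in sources:
        level = join(level, labels[s])          # level of the computed value
    verdict = "permitted" if flow_allowed(level, labels[target]) else "REJECTED"
    print(f"{', '.join(sources)} -> {target}: {verdict}")
```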
Abstract:
There are three key driving forces behind the development of Internet Content Management Systems (CMS): a desire to manage the explosion of content, a desire to provide structure and meaning to content in order to make it accessible, and a desire to work collaboratively to manipulate content in some meaningful way. Yet the traditional CMS has been unable to meet the last of these requirements, often failing to provide sufficient tools for collaboration in a distributed context. Peer-to-Peer (P2P) systems are networks in which every node is an equal participant (whether transmitting data, exchanging content, or invoking services) and there is an absence of any centralised administrative or coordinating authority. P2P systems are inherently more scalable than equivalent client-server implementations as they tend to use resources at the edge of the network much more effectively. This paper details the rationale and design of a P2P middleware for collaborative content management.