994 results for Scalable web servers


Relevance:

30.00%

Publisher:

Abstract:

One of the characteristics of current Web services is that many clients request the same or similar service from a group of replicated servers, e.g. music or movie downloading in peer-to-peer networks. Most of the time these servers are heterogeneous in terms of service rate. Much research has been done on the homogeneous environment, but little on the heterogeneous scenario. Models of heterogeneous server groups are therefore urgently needed for the design and analysis of current Internet applications. In this paper, we deploy an approximation method to transform heterogeneous systems into groups of homogeneous systems, so that previous results from homogeneous studies can be applied to heterogeneous cases. To test how closely the proposed model approximates real applications, we conducted simulations measuring the degree of similarity under two common strategies: random selection and First-Come-First-Served (FCFS). The simulations indicate that the approximation model works well.
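
As a rough illustration of the comparison described above (not the authors' model; the service rates, arrival rate, and the averaging rule used to homogenize the group are our assumptions), a small simulation of the random-selection strategy with per-server FCFS queues might look like this in Python:

    import random

    def simulate(rates, arrival_rate, n_jobs=50_000, seed=1):
        """Each arriving job picks a server uniformly at random
        (random-selection strategy) and is served FCFS at that server."""
        rng = random.Random(seed)
        free_at = [0.0] * len(rates)        # time each server next becomes idle
        t, total_resp = 0.0, 0.0
        for _ in range(n_jobs):
            t += rng.expovariate(arrival_rate)   # Poisson arrivals
            s = rng.randrange(len(rates))        # random server selection
            start = max(t, free_at[s])           # FCFS within the chosen queue
            free_at[s] = start + rng.expovariate(rates[s])
            total_resp += free_at[s] - t
        return total_resp / n_jobs               # mean response time

    hetero = [1.0, 2.0, 4.0, 8.0]                     # heterogeneous service rates
    homo = [sum(hetero) / len(hetero)] * len(hetero)  # homogenized stand-in
    for label, rates in (("heterogeneous", hetero), ("homogeneous", homo)):
        print(label, simulate(rates, arrival_rate=3.0))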

Relevance:

30.00%

Publisher:

Abstract:

Peer-to-Peer (P2P) Web caching has been a hot research topic in recent years, as it enables scalable and robust designs for decentralized Internet-scale applications. However, many P2P Web caching systems suffer from expensive overheads, such as lookup and publish messages, and lack locality awareness. In this paper we present the development of a locality-aware P2P cache system that overcomes these limitations by using routing table locality, aggregation, and soft state. The experiments show that our P2P cache system improves the performance of index operations by reducing the amount of information processed by nodes, reducing the number of index messages sent by nodes, and improving the locality of cache pointers.
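
A toy sketch of the locality idea (the data structures and names are ours, not the paper's system): each published cache pointer carries a locality tag, and lookups prefer pointers in the requester's locality before falling back to remote copies.

    from collections import defaultdict

    class LocalityAwareIndex:
        """Toy cache index: URL -> list of (node, locality) pointers.
        Lookups prefer pointers in the requester's locality, cutting
        cross-network fetches; structure is illustrative only."""
        def __init__(self):
            self.pointers = defaultdict(list)

        def publish(self, url, node, locality):
            self.pointers[url].append((node, locality))

        def lookup(self, url, requester_locality):
            candidates = self.pointers.get(url, [])
            local = [n for n, loc in candidates if loc == requester_locality]
            return local or [n for n, _ in candidates]   # fall back to any copy

    idx = LocalityAwareIndex()
    idx.publish("http://example.com/a", "node1", "AS100")
    idx.publish("http://example.com/a", "node2", "AS200")
    print(idx.lookup("http://example.com/a", "AS200"))   # -> ['node2']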

Relevance:

30.00%

Publisher:

Abstract:

With the rapid development of the Internet, the amount of information on the Web grows explosively, and people often feel overwhelmed when trying to find the information they really need. To overcome this problem, recommender systems help users find relevant information, products, or services by providing personalized recommendations based on their profiles. Singular value decomposition (SVD) is a powerful dimensionality-reduction technique for this task. However, due to its expensive computational requirements and weak performance on large sparse matrices, it has been considered inappropriate for practical applications involving massive data. To extract the information a user is interested in from a massive amount of data, we therefore propose a personalized recommendation algorithm, called ApproSVD, based on approximating the SVD. The trick behind our algorithm is to sample some rows of the user-item matrix, rescale each row by an appropriate factor to form a smaller matrix, and then reduce the dimensionality of that smaller matrix. Finally, we present an empirical study comparing the prediction accuracy of our algorithm with Drineas's LINEARTIMESVD algorithm and the standard SVD on the MovieLens and Flixster datasets, showing that our method has the best prediction quality. To further demonstrate the superiority of ApproSVD, we also compare its prediction accuracy and running time against an incremental SVD algorithm on the same datasets, and show that our method has better overall performance.
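
The row-sampling trick described above can be sketched as follows (a minimal illustration in the spirit of ApproSVD/LINEARTIMESVD, not the authors' code; the sample size c and rank k are free parameters): sample rows with probability proportional to their squared norms, rescale so expectations match, and take the SVD of the smaller matrix.

    import numpy as np

    def appro_svd(A, c, k, seed=0):
        """Row-sampling SVD sketch: sample c rows of A with probability
        proportional to squared row norms, rescale, then take a rank-k
        SVD of the smaller matrix. Parameter names are ours."""
        rng = np.random.default_rng(seed)
        p = (A ** 2).sum(axis=1)
        p = p / p.sum()                              # sampling distribution
        idx = rng.choice(A.shape[0], size=c, p=p)
        S = A[idx] / np.sqrt(c * p[idx])[:, None]    # rescaled sample matrix
        _, sigma, Vt = np.linalg.svd(S, full_matrices=False)
        return sigma[:k], Vt[:k]                     # approximate top-k factors

    A = np.random.default_rng(1).random((1000, 50))
    sigma_k, _ = appro_svd(A, c=200, k=10)
    print(sigma_k[:3], np.linalg.svd(A, compute_uv=False)[:3])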

Relevance:

30.00%

Publisher:

Abstract:

Background: Given the rising rates of obesity in children and adolescents, developing evidence-based weight loss or weight maintenance interventions that can be widely disseminated, well implemented, and highly scalable is a public health necessity. Such interventions should ensure that adolescents establish healthy weight regulation practices while also reducing eating disorder risk.

Objective:
This study describes an online program, StayingFit, which has two tracks for universal and targeted delivery and was designed to enhance healthy living skills, encourage healthy weight regulation, and improve weight/shape concerns among high school adolescents.

Methods:
Ninth grade students in two high schools in the San Francisco Bay area and in St Louis were invited to participate. Students who were overweight (body mass index [BMI] >85th percentile) were offered the weight management track of StayingFit; students who were normal weight were offered the healthy habits track. The 12-session program included a monitored discussion group and interactive self-monitoring logs. Measures completed pre- and post-intervention included self-reported height and weight, used to calculate BMI percentile for age and sex and standardized BMI (zBMI), Youth Risk Behavior Survey (YRBS) nutrition data, the Weight Concerns Scale, and the Center for Epidemiological Studies Depression Scale.
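
For reference, BMI is weight divided by height squared, and zBMI is conventionally computed with the LMS method against age- and sex-specific growth reference tables. The sketch below uses placeholder L, M, S values, not actual CDC reference data.

    def bmi(weight_kg, height_m):
        return weight_kg / height_m ** 2

    def zbmi(bmi_value, L, M, S):
        """LMS z-score: z = ((BMI/M)**L - 1) / (L*S) for L != 0.
        L, M, S come from age- and sex-specific reference tables;
        the values used below are placeholders, not CDC data."""
        return ((bmi_value / M) ** L - 1) / (L * S)

    b = bmi(70.0, 1.65)
    print(b, zbmi(b, L=-2.0, M=20.5, S=0.13))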

Results: A total of 336 students provided informed consent and were included in the analyses. The racial breakdown of the sample was as follows: 46.7% (157/336) multiracial/other, 31.0% (104/336) Caucasian, 16.7% (56/336) African American, and 5.7% (19/336) did not specify; 43.5% (146/336) of students identified as Hispanic/Latino. BMI percentile and zBMI significantly decreased among students in the weight management track. BMI percentile and zBMI did not significantly change among students in the healthy habits track, demonstrating that these students maintained their weight. Weight/shape concerns significantly decreased among participants in both tracks who had elevated weight/shape concerns at baseline. Fruit and vegetable consumption increased for both tracks. Physical activity increased among participants in the weight management track, while soda consumption and television time decreased.

Conclusions: Results suggest that an Internet-based, universally delivered, targeted intervention may support healthy weight regulation, improve weight/shape concerns among participants at risk of eating disorders, and increase physical activity in high school students. Tailored content and interactive features to encourage behavior change may lead to sustainable improvements in adolescent health.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Hazardous alcohol consumption is a leading modifiable cause of mortality and morbidity among young people. Screening and brief intervention (SBI) is a key strategy to reduce alcohol-related harm in the community, and web-based approaches (e-SBI) have advantages over practitioner-delivered approaches, being cheaper, more acceptable, administrable remotely and infinitely scalable. An efficacy trial in a university population showed a 10-minute intervention could reduce drinking by 11% for 6 months or more among 17-24-year-old undergraduate hazardous drinkers. The e-SBINZ study is designed to examine the effectiveness of e-SBI across a range of universities and among Māori and non-Māori students in New Zealand. METHODS/DESIGN: The e-SBINZ study comprises two parallel, double-blind, multi-site, individually randomised controlled trials. This paper outlines the background and design of the trial, which is recruiting 17-24-year-old students from seven of New Zealand's eight universities. Māori and non-Māori students are being sampled separately and are invited by e-mail to complete a web questionnaire including the AUDIT-C. Those who score >4 will be randomly allocated to no further contact until follow-up (control) or to assessment and personalised feedback (intervention) via computer. Follow-up assessment will occur 5 months later in second semester. Recruitment, consent, randomisation, intervention and follow-up are all online. Primary outcomes are (i) total alcohol consumption, (ii) frequency of drinking, (iii) amount consumed per typical drinking occasion, (iv) the proportions exceeding medical guidelines for acute and chronic harm, and (v) scores on an academic problems scale. DISCUSSION: The trial will provide information on the effectiveness of e-SBI in reducing hazardous alcohol consumption across diverse university student populations with separate effect estimates for Māori and non-Māori students. TRIAL REGISTRATION: Australian New Zealand Clinical Trials Registry (ANZCTR) ACTRN12610000279022.
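
The screening-and-allocation logic described above reduces to a few lines. This sketch assumes the standard AUDIT-C scoring (three items, 0-4 each) and the trial's >4 eligibility cut-off; the function names are ours.

    import random

    AUDIT_C_CUTOFF = 4   # the trial randomizes students scoring > 4

    def audit_c_score(answers):
        """AUDIT-C: three items, each scored 0-4 (total 0-12)."""
        assert len(answers) == 3 and all(0 <= a <= 4 for a in answers)
        return sum(answers)

    def allocate(answers, rng=random.Random(42)):
        """Toy allocation mirroring the design: screen, then randomize
        eligible drinkers to control or the e-SBI intervention."""
        if audit_c_score(answers) <= AUDIT_C_CUTOFF:
            return "not eligible"
        return rng.choice(["control (no further contact)",
                           "intervention (assessment + feedback)"])

    print(allocate([3, 2, 1]))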

Relevance:

30.00%

Publisher:

Abstract:

The emergence of new applications that use the HTTP protocol in their transactions, and the growing popularity of the World Wide Web (WWW), have driven research into improving Web server performance. One of the alternatives proposed in this work is to use a set of distributed Web servers that spread the request load across several computers, acting as a single server, combined with a content replication strategy. One of the central problems to be solved in distributed Web servers is how to keep the content replicas consistent across the machines involved. This dissertation presents fundamental concepts of content replication in distributed Web servers, covering the architecture of distributed Web servers, consistency maintenance in such environments, and the use and forms of replication. Related work on keeping replicas consistent in distributed Web server environments is also discussed. The goal of this work is to propose a model for maintaining content consistency in distributed Web servers with transparency and autonomy. The model, called One Replication Protocol for Internet Servers (ORPIS), adopts an optimistic propagation strategy, since updates are sent to the replicas without synchronization. This work presents the main technological components employed on the Web, along with the problems caused by the scalability and distribution inherent to that environment, and describes the main techniques currently used to improve Web server performance. The ORPIS model is described: its assumptions are presented, its components listed, and its operating algorithms detailed. The work gives an overview of the implementation and of the tests performed on some modules of the model's prototype, characterizing the development environment and the implementation details. The attributes and methods of the prototype's classes are enumerated, and the data structures used are defined. The results of the functional evaluation of the implemented prototype modules are also presented. One point worth highlighting is the compatibility of the ORPIS model with existing Web servers, without requiring changes to their configurations. The ORPIS model follows the open source philosophy: during the development of the prototype, the use of open source software provided fast access to the necessary tools (operating system, languages and database manager), with the possibility of modifying source code as a means of customization.
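
A minimal sketch of the optimistic propagation idea (not the ORPIS protocol itself; the in-memory "servers" and threading are stand-ins): a write is applied locally and returns immediately, while a background thread pushes the update to the replicas without synchronization, so readers may briefly see stale copies.

    import queue, threading, time

    class OptimisticReplicator:
        """Toy optimistic replication: local write returns at once,
        propagation to replicas happens asynchronously."""
        def __init__(self, replicas):
            self.replicas = replicas            # dicts standing in for servers
            self.log = queue.Queue()
            threading.Thread(target=self._propagate, daemon=True).start()

        def write(self, primary, key, value):
            primary[key] = value                # local write, no blocking
            self.log.put((key, value))          # propagated in the background

        def _propagate(self):
            while True:
                key, value = self.log.get()
                for r in self.replicas:
                    r[key] = value              # eventual consistency

    primary, r1, r2 = {}, {}, {}
    rep = OptimisticReplicator([r1, r2])
    rep.write(primary, "/index.html", "v2")
    time.sleep(0.1)
    print(primary, r1, r2)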

Relevance:

30.00%

Publisher:

Abstract:

In this work, we propose a Geographical Information System that can be used as a tool for the treatment and study of problems related to environmental and city management issues. It is based on the Scalable Vector Graphics (SVG) standard for Web graphics development. The project uses the concept of remote, real-time map creation via database access, through instructions executed by browsers on the Internet. As a way of proving the system's effectiveness, we present two case studies: the first on a region named the Maracajaú Coral Reefs, located on the Rio Grande do Norte coast, and the second in northeastern Switzerland, where we intended to replace MapServer with the system proposed here. We also show results demonstrating the greater geographical data capacity achieved through standardized formats and open source tools, such as the Extensible Markup Language (XML), the Document Object Model (DOM), the ECMAScript/JavaScript scripting languages, the Hypertext Preprocessor (PHP), and PostgreSQL with its PostGIS extension.
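
A small illustration of database-driven SVG map generation (the feature data and coordinates are invented; a real deployment would read geometries from PostGIS):

    # Toy generation of an SVG layer from geographic features, standing in
    # for the paper's database-driven map creation.
    features = [
        {"name": "reef-1", "points": [(10, 10), (40, 15), (30, 40)]},
        {"name": "reef-2", "points": [(60, 20), (90, 25), (75, 55)]},
    ]

    def to_svg(features, width=100, height=100):
        polys = "\n".join(
            '  <polygon id="{}" points="{}" fill="none" stroke="black"/>'.format(
                f["name"], " ".join(f"{x},{y}" for x, y in f["points"]))
            for f in features)
        return (f'<svg xmlns="http://www.w3.org/2000/svg" '
                f'width="{width}" height="{height}">\n{polys}\n</svg>')

    print(to_svg(features))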

Relevance:

30.00%

Publisher:

Abstract:

The Web service-based application is an architectural style in which a collection of Web services communicate with each other to execute processes. As developing Web service-based applications grows in popularity, and since Web services may change in terms of functional behaviour and non-functional Quality of Service (QoS), mechanisms are needed to monitor, diagnose, and repair the Web services within a Web application. This work describes a self-healing architecture that provides these mechanisms. Further contributions of this paper are the use of a proxy server to measure Web service QoS values and strategies to recover from the effects of misbehaving Web services. © 2008 IEEE.
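
A sketch of the proxy-based QoS measurement idea (the URL, threshold, and recovery hook are assumptions, not the paper's architecture): wrap each service call, time it, and flag misbehaving services for recovery.

    import time, urllib.request

    def measure(url, timeout=5.0):
        """Proxy-style QoS probe: record response time and availability,
        the kind of values a self-healing layer could act on."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ok = 200 <= resp.status < 300
        except OSError:
            ok = False
        return {"url": url, "available": ok,
                "response_time_s": time.monotonic() - start}

    qos = measure("http://example.com/")
    if not qos["available"] or qos["response_time_s"] > 2.0:   # assumed threshold
        print("misbehaving service, trigger recovery:", qos)
    else:
        print("healthy:", qos)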

Relevance:

30.00%

Publisher:

Abstract:

Over the last decades, communication within and between enterprises has changed, eased by technologies such as e-commerce, the Internet, ERP systems, and remote meetings, and rapid progress in network technology has changed the way business is done. Web services are a standardized way to offer services over the Internet: a kind of remote procedure call generally used to integrate systems, independently of the languages used by client and server. It is common to run several web services in sequence to perform a business process; this type of process is called a workflow, and web services are thus the primary components of workflows. A tool for visualizing the behavior of a workflow can therefore assist the administrator. The present work describes the development of a tool that allows the administrator to visually classify component services and evaluate their importance in the final performance of a workflow. As a proof of concept, we used several virtual servers and computers, each hosting a set of web services. A proxy was placed between workflow calls, collecting relevant information and storing it in a database for later analysis based on Quality of Service parameters.
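
Once the proxy has stored per-call response times, classifying components by their share of end-to-end workflow time is straightforward; the measurements below are invented for illustration.

    # Toy ranking of workflow components by their share of total time,
    # the kind of classification the visualization tool supports.
    calls = [   # (service, response_time_s) collected by the proxy
        ("auth",      0.05), ("auth",      0.07),
        ("inventory", 0.40), ("inventory", 0.55),
        ("billing",   0.20), ("billing",   0.25),
    ]

    totals = {}
    for service, rt in calls:
        totals[service] = totals.get(service, 0.0) + rt

    workflow_total = sum(totals.values())
    for service, t in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{service:10s} {t:6.2f}s  {100 * t / workflow_total:5.1f}% of workflow")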

Relevance:

30.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a flood of information when searching for appropriate services or resources to combine. To help overcome this problem, a framework is defined for the discovery of services and resources, in which discovery is performed at three levels: content, service, and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup used to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as screen scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities; by processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after analysing service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, resulting in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific web resources. Through the definition of plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each covering different levels of the framework. The Contenidos a la Carta project deals with mashing up news from electronic newspapers, and the framework was used for discovering and extracting pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories from the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified discovery framework, which allows configuring agents to perform automated tasks, and a scraping ontology defined for the construction of mappings for scraping web resources. A novel first-order logic rule induction algorithm is defined for the automated construction and maintenance of these mappings from the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
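
A minimal illustration of a content-level discovery rule (the rule format, pattern, and page are ours; the thesis uses a scraping ontology and richer selectors): a mapping from markup patterns to semantic entities, applied to a resource representation.

    import re

    # One discovery rule: a pattern in the HTML representation mapped
    # onto a semantic entity. Real rules would use CSS/XPath selectors.
    page = '<div class="headline"><a href="/n/42">Storm hits coast</a></div>'

    rules = [
        {"entity": "news:headline",
         "pattern": r'<div class="headline"><a href="([^"]+)">([^<]+)</a>'},
    ]

    for rule in rules:
        for url, title in re.findall(rule["pattern"], page):
            print(rule["entity"], {"url": url, "title": title})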

Relevance:

30.00%

Publisher:

Abstract:

One of the main challenges facing next generation Cloud platform services is the need to simultaneously achieve ease of programming, consistency, and high scalability. Big Data applications have so far focused on batch processing. The next step for Big Data is to move to the online world. This shift will raise the requirements for transactional guarantees. CumuloNimbo is a new EC-funded project led by Universidad Politécnica de Madrid (UPM) that addresses these issues via a highly scalable multi-tier transactional platform as a service (PaaS) that bridges the gap between OLTP and Big Data applications.

Relevance:

30.00%

Publisher:

Abstract:

Aiming to address requirements concerning integration of services in the context of 'big data', this paper presents an innovative approach that (i) ensures a flexible, adaptable and scalable information and computation infrastructure, and (ii) exploits the competences of stakeholders and information workers to meaningfully confront information management issues such as information characterization, classification and interpretation, thus incorporating the underlying collective intelligence. Our approach pays much attention to the issues of usability and ease-of-use, not requiring any particular programming expertise from the end users. We report on a series of technical issues concerning the desired flexibility of the proposed integration framework and we provide related recommendations to developers of such solutions. Evaluation results are also discussed.

Relevance:

30.00%

Publisher:

Abstract:

In professional video production, users must access huge multimedia files simultaneously in an error-free environment; this requirement forces the use of expensive disk architectures for video servers. Previous research proposed different RAID systems for each specific task (ingest, editing, filing, play-out, etc.), so video production companies have to acquire different servers with different RAID systems to support each task in the production workflow. This solution has multiple disadvantages: material duplicated across several RAIDs, material duplicated for different qualities, transfer and transcoding processes, and so on. In this work, an architecture for video servers based on spreading JPEG2000 data across different RAIDs is presented: each part of the data structure goes to a specific RAID type depending on its effect on overall image quality, so the method provides redundancy correlated with the rank of the data. The global storage can then be used for all the tasks of the production workflow, saving disk space, redundant files, and transfer procedures.
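
The allocation idea can be sketched as a mapping from parts of the JPEG2000 data structure to RAID tiers by quality impact; the part names and tier assignments below are illustrative, not the paper's exact scheme.

    # Sketch: the parts of a JPEG2000 codestream that matter most for
    # image quality go to the most redundant storage.
    placement = {
        "headers+base_layer": "RAID1",   # loss here ruins the image
        "mid_quality_layers": "RAID5",   # degrades quality if lost
        "top_quality_layers": "RAID0",   # least impact, cheapest storage
    }

    def raid_for(part):
        return placement.get(part, "RAID5")   # assumed default tier

    for part in placement:
        print(f"{part:20s} -> {raid_for(part)}")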

Relevance:

30.00%

Publisher:

Abstract:

SSR is the acronym for SoundScape Renderer (a tool for real-time spatial audio reproduction providing a variety of rendering algorithms), a program written mostly in C++. The program lets the user listen to both previously recorded and live sounds. From the listener's point of view, the sound or sounds are heard as if produced at the point the program decides; the interesting part of this project is that the sound can change place, move, and so on, all in real time. This is achieved without modifying the sound at recording time but rather at playback time: the program computes the variations needed so that the sound reaches the listener as if it were really generated at a point in space, or as close to that as possible. The sensation of movement is the same idea with the point changing location. The idea was to create a web application based on HTML5 Canvas that would communicate with this remote user interface. This would solve all compatibility problems, since any device able to display web pages could run an application based on web standards, for example a Windows system or a mobile phone with a browser. The protocol had to be WebSocket, because it is an HTML5 protocol and offers the latency "guarantees" that an application needing real-time information requires. It allows asynchronous full-duplex communication without much payload, which is exactly what was to be gained by avoiding ordinary HTML polling. The problem that arose was that the program's network user interface was not compatible with WebSocket, due to an initial mandatory handshake performed by the protocol, so another network interface was needed. It was then decided to switch to JSON as the message interchange format. In the end the project comprises not only the Canvas-based web application but also a working server and the definition of a new network user interface with its accompanying protocol. ABSTRACT. This project aims to become a part of the SSR tool and to extend its capabilities in the field of access. SSR is an acronym for SoundScape Renderer, a program mostly written in C++ that allows you to hear previously recorded or live sound, with a variety of sound equipment, as if the sound came from a desired place in space. As the SSR web page puts it: "The SoundScape Renderer (SSR) is a tool for real-time spatial audio reproduction providing a variety of rendering algorithms." The application can be used with a graphical interface written in Qt, but it also has a network interface for external applications; this network interface communicates using XML messages. A good example is the Android client, which is already working. To use the application, it should be run with an audio source and the desired environment loaded, so that the renderer knows what to do. At that moment the server binds and anyone can use the network interface. Since the network interface is documented, anyone can write an application to interact with it, so the application can have as many user interfaces as wanted. The part developed in this project has nothing to do with audio rendering or with the reproduction of spatial audio; it concerns the interfaces used in the SSR application. 
As can be deduced from the title, "Distributed Web Interface for Real-Time Spatial Audio Reproduction System", this work aims only to offer the interface via the web for the SSR ("Real-Time Spatial Audio Reproduction System"). The idea is not to make a new graphical interface for the SSR but to allow more types of interfaces and communication. To accomplish the objective of allowing more graphical interfaces, this project uses a new network interface. Until now the SSR application has used only XML for data interchange, but this new network interface supports JSON. The project comprises the server that launches the application, the user interface, and the new network interface. It is organized in these modules to allow creating new user interfaces that can communicate with the server, or new servers that can communicate with the user interface, by defining a complete network interface for data interchange.
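
A hypothetical example of a JSON message such a network interface might carry (the field names are assumptions for illustration; the real schema is defined by the SSR documentation):

    import json

    # Hypothetical message asking the renderer to move a sound source;
    # field names are assumed, not taken from the SSR protocol.
    move_source = {
        "type": "source.position",
        "source_id": 1,
        "position": {"x": 1.5, "y": -0.5},   # metres in the listener's plane
    }

    payload = json.dumps(move_source)
    print(payload)   # what the Canvas client would push over the WebSocket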

Relevance:

30.00%

Publisher:

Abstract:

The scalability of security event correlation has become a major concern for security analysts and IT administrators when considering complex IT infrastructures that need to handle gargantuan amounts of events or wide correlation window spans. The current correlation capabilities of Security Information and Event Management (SIEM) systems, based on a single node in centralized servers, have proved insufficient to process large event streams. This paper introduces a step forward in the current state of the art to address these problems. The proposed model takes into account the two main aspects of this field: distributed correlation and query parallelization. We present a case study of a multiple-step attack on the Olympic Games IT infrastructure to illustrate the applicability of our approach.
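
A toy sketch of distributed correlation with query parallelization (the event format, rule, and threshold are assumptions, not the paper's model): partitions of the event stream are correlated in parallel on separate workers and the partial results merged.

    from concurrent.futures import ProcessPoolExecutor
    from collections import Counter

    def correlate(partition):
        """Per-node correlation: count failed logins per source IP in
        one partition of the event stream; real SIEM rules are richer."""
        hits = Counter(e["src"] for e in partition if e["type"] == "login_failed")
        return {ip: n for ip, n in hits.items() if n >= 3}   # local alerts

    def merge(results):
        alerts = Counter()
        for r in results:
            alerts.update(r)
        return dict(alerts)

    if __name__ == "__main__":
        events = [{"type": "login_failed", "src": f"10.0.0.{i % 2}"}
                  for i in range(12)]
        partitions = [events[0::2], events[1::2]]    # distribute the stream
        with ProcessPoolExecutor() as pool:
            print(merge(pool.map(correlate, partitions)))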