932 results for computer system emulation, multiprocessors, educational computer systems
Abstract:
Cloud computing has evolved into an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host distributed enterprise information systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) on a real-world application scenario involving a large, variable number of users. Our results show that using autoregressive, predictive, SLA-driven scaling algorithms in cloud management systems is beneficial for guaranteeing the performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
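To make the distinction between the two families of algorithms concrete, the following minimal Python sketch contrasts a reactive rule with a predictive one; the SLA threshold, the naive autoregressive forecaster, and the scaling step are illustrative assumptions, not the algorithms evaluated in the paper.

def ar_forecast(history, order=3):
    """Naive autoregressive forecast: average of the last `order` deltas
    extrapolated one step ahead (stand-in for a fitted AR model)."""
    if len(history) < order + 1:
        return history[-1]
    deltas = [history[i] - history[i - 1] for i in range(-order, 0)]
    return history[-1] + sum(deltas) / order

def reactive_scaler(response_time, sla_limit, vms, step=1):
    """Scale out only after the SLA is already violated."""
    return vms + step if response_time > sla_limit else vms

def predictive_scaler(history, sla_limit, vms, step=1):
    """Scale out when the forecast response time would violate the SLA."""
    return vms + step if ar_forecast(history) > sla_limit else vms

if __name__ == "__main__":
    sla_limit = 200.0                        # ms, hypothetical SLA bound
    history = [120.0, 140.0, 165.0, 185.0]   # rising response times
    vms = 4
    print("reactive  :", reactive_scaler(history[-1], sla_limit, vms))  # stays 4
    print("predictive:", predictive_scaler(history, sla_limit, vms))    # scales to 5

On this trace the reactive rule waits for an actual violation, while the predictive rule acts one step earlier, which is the behaviour the paper's results favour.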
Abstract:
Infrastructure-as-a-Service clouds are a flexible and fast way to obtain (virtual) resources as demand varies. Grids, on the other hand, are middleware platforms able to combine resources from different administrative domains for task execution. Grids can use clouds as providers of devices such as virtual machines, so that they only use the resources they need, but this requires grids to be able to decide when to allocate and release those resources. Here we introduce and analyse through simulation an economic mechanism (a) to set resource prices and (b) to decide when to scale resources depending on user demand. The system places a strong emphasis on fairness, so that no user hinders the execution of other users' tasks by acquiring too many resources. Our simulator is based on the well-known GridSim software for grid simulation, which we extend to simulate infrastructure clouds. The results show how the proposed system successfully adapts the amount of allocated resources to the demand, while at the same time ensuring that resources are fairly shared among users.
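As a rough illustration of such a mechanism (not the paper's actual model), the Python sketch below raises the unit price with utilization and caps each user's grant at a fair share; all constants and names are hypothetical.

def update_price(base_price, used, capacity, elasticity=2.0):
    """Price grows with utilization; idle capacity makes resources cheaper."""
    utilization = used / capacity
    return base_price * (1.0 + elasticity * utilization)

def grant_vms(requests, capacity, fair_share_cap):
    """Grant each user's request up to a fairness cap, then first-come
    first-served over the remaining capacity."""
    grants, remaining = {}, capacity
    for user, wanted in requests.items():
        granted = min(wanted, fair_share_cap, remaining)
        grants[user] = granted
        remaining -= granted
    return grants

if __name__ == "__main__":
    requests = {"alice": 10, "bob": 3, "carol": 6}
    grants = grant_vms(requests, capacity=12, fair_share_cap=5)
    used = sum(grants.values())
    # alice is capped at 5 despite asking for 10, so bob and carol still run
    print(grants, "price:", round(update_price(1.0, used, 12), 2))

The cap is what encodes fairness here: a greedy request cannot starve the queue, and the rising price discourages over-allocation as the cloud fills up.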
Abstract:
Future high-quality consumer electronics will contain a number of applications running in a highly dynamic environment, and their execution will need to be efficiently arbitrated by the underlying platform software. The multimedia applications that currently execute in such contexts face frequent run-time variations in their resource demands, originating in the greedy nature of multimedia processing itself. Changes in resource demands are triggered by numerous causes (e.g. a switch in the input media compression format). Such situations require real-time adaptation mechanisms to adjust system operation to the new requirements, and this must be done seamlessly to preserve the user experience. One solution for efficiently managing application execution is to apply quality-of-service resource management techniques based on assigning resource contracts to applications and enforcing them. Most resource management solutions provide temporal isolation by enforcing resource assignments and avoiding any resource overruns. However, this clearly limits cost-effective resource usage. This paper presents a simple priority assignment scheme based on uniform priority bands that allows greedy multimedia tasks to incur safe overruns, which increase resource usage without threatening the timely execution of non-overrunning tasks. Experimental results show that the proposed priority assignment scheme, in combination with a resource accounting mechanism, preserves timely multimedia execution and delivery, achieves more cost-effective processor usage, and guarantees the execution isolation of non-overrunning tasks.
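A minimal sketch of how a band-based demotion policy of this kind could work, assuming a simple two-band scheme and invented budget figures (the paper's actual bands and accounting are more elaborate):

GUARANTEED_BAND = 0   # highest band: tasks within their resource contract
OVERRUN_BAND = 100    # lowest band: tasks that exhausted their budget

class Task:
    def __init__(self, name, budget_ms):
        self.name, self.budget_ms, self.used_ms = name, budget_ms, 0.0

    def account(self, executed_ms):
        """Resource accounting: charge executed time against the contract."""
        self.used_ms += executed_ms

    def priority(self):
        # Demotion instead of suspension: an overrunning task stays
        # schedulable but can never preempt tasks still inside their
        # contracts, which is what preserves their temporal isolation.
        return GUARANTEED_BAND if self.used_ms <= self.budget_ms else OVERRUN_BAND

decoder = Task("mpeg_decoder", budget_ms=8.0)
decoder.account(10.5)  # a greedy frame exceeds the budget
band = "overrun band" if decoder.priority() == OVERRUN_BAND else "guaranteed band"
print(decoder.name, "->", band)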
Abstract:
Data grid services have been used to deal with the increasing needs of applications in terms of data volume and throughput. The large scale, heterogeneity and dynamism of grid environments often make the management and tuning of these data services very complex. Furthermore, current high-performance I/O approaches are characterized by high complexity and specialized features that usually require expert administrator skills. Autonomic computing can help manage this complexity. This paper describes an autonomic subsystem intended to provide self-management features aimed at efficiently mitigating the I/O problem in a grid environment, thereby enhancing the quality of service (QoS) of data access and storage services in the grid. Our proposal takes into account that data produced in an I/O system is not usually required immediately. Performance improvements therefore concern not only current but also future I/O accesses, since the actual data access usually occurs later on. Nevertheless, the exact time of the next I/O operations is unknown. Thus, our approach proposes a long-term prediction designed to forecast the future workload of grid components. This enables the autonomic subsystem to determine the optimal data placement to improve both current and future I/O operations.
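The placement idea can be illustrated with a toy sketch; the moving-average forecaster and the node names below are assumptions standing in for the paper's long-term prediction model.

from statistics import mean

def forecast_load(samples, window=4):
    """Long-term prediction placeholder: moving average of recent load."""
    return mean(samples[-window:]) if samples else 0.0

def choose_placement(node_histories):
    """Pick the node with the lowest forecast load for the next dataset,
    so future accesses land on the component expected to be least busy."""
    return min(node_histories, key=lambda n: forecast_load(node_histories[n]))

histories = {
    "node-a": [0.7, 0.8, 0.9, 0.85],   # consistently busy node
    "node-b": [0.2, 0.3, 0.25, 0.2],   # lightly loaded node
}
print("place next dataset on:", choose_placement(histories))  # node-b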
Abstract:
The verified security methodology is an emerging approach to building high-assurance proofs about security properties of computer systems. Computer systems are modeled as probabilistic programs, and one relies on rigorous program-semantics techniques to prove that they comply with a given security goal. In particular, the methodology advocates the use of interactive or automated theorem provers to build fully formal, machine-checked versions of these security proofs. Verified security has proved successful in modeling and reasoning about several standard security notions in the area of cryptography. However, it has fallen short of covering an important class of approximate, quantitative security notions. The distinguishing characteristic of this class is that such notions are stated as a "similarity" condition between the output distributions of two probabilistic programs, and this similarity is quantified using some notion of distance between probability distributions. This class comprises prominent security notions from multiple areas such as private data analysis, information-flow analysis and cryptography. These include, for instance, indifferentiability, which enables securely replacing an idealized component of a system with a concrete implementation, and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in recent years. The lack of rigorous techniques for verifying these properties is thus an important problem that needs to be addressed. In this dissertation we introduce several quantitative program logics for reasoning about this class of security notions. Our main theoretical contribution is a quantitative variant of a full-fledged relational Hoare logic for probabilistic programs. The soundness of these logics is fully formalized in the Coq proof assistant, and tool support is available through an extension of CertiCrypt, a framework for verifying cryptographic proofs in Coq. We validate the applicability of our approach by building fully machine-checked proofs for several systems that were out of reach of the verified security methodology. These comprise, among others, a construction to build "safe" hash functions into elliptic curves and differentially private algorithms for several combinatorial optimization problems from the recent literature.
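As a concrete instance of the "similarity" conditions discussed above, the standard definition of (epsilon, delta)-differential privacy bounds the distance between the output distributions of a randomized mechanism M run on any two adjacent databases D and D':

\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(D') \in S] + \delta \quad \text{for every measurable set } S.

The pair (epsilon, delta) is exactly the kind of quantitative bound that the relational program logics in the dissertation are designed to track.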
Abstract:
The main goal of this project is to reduce the cost of some of the surveillance operations carried out by security agencies and institutions. The project consists of designing and developing a computer system for remotely piloting a quadcopter from a computer, controlling the aircraft with the keyboard and viewing the images received from its camera module. Each quadcopter is built from several modules, each with its own functionality. An aircraft management system will be developed so that new quadcopter units can be added, along with a user management system for administering the system's users. In addition, a quadcopter prototype will be built and its controller unit implemented in order to test the developed system with it.
Abstract:
This document provides an overview of the current set of tools available for analysing and exploiting vulnerabilities in computer systems, and more specifically in computer networks. On the one hand, we analytically describe the free software tools currently available for analysing and detecting vulnerabilities in computer systems. For each tool we describe its operation, its options and the motivation for using it, comparing it with others in some cases, describing the differences in others, and justifying its selection in all of them. On the other hand, we use the analysed tools to develop concrete usage examples with different parameter settings, observing their behaviour and trying to discern which data are useful for obtaining information about the vulnerabilities present in the system. In addition, we develop a practical case study that puts the presented theoretical knowledge into practice, so that the reader can consolidate what has been learned by verifying, through a real case, the usefulness of the tools described. The results show that vulnerability analysis and detection by a competent system administrator provides the organization with a set of techniques to improve its computer security and thus avoid problems with potential attackers.
Abstract:
The Bologna Declaration and the implementation of the European Higher Education Area are promoting the use of active learning methodologies. The aim of this study is to evaluate the effects of applying active learning methodologies on the achievement of generic competences as well as on academic performance. The study was carried out at the Universidad Politécnica de Madrid, where these methodologies were applied to the Operating Systems I course of the degree in Technical Engineering in Computer Systems. The fundamental hypothesis tested was whether the implementation of active learning methodologies (cooperative learning and problem-based learning) favours the achievement of certain generic competences ('teamwork' and 'planning and time management') and whether this also improves students' academic performance. The original aspect of this work is the use of psychometric tests to measure the degree to which students acquire generic competences, instead of the usual opinion surveys. Results indicated that active learning methodologies improve academic performance compared with the traditional lecture/discussion method, according to the success rates obtained. These methods also seem to have an effect on the teamwork competence (the perception of the behaviour of the other members of the group) but not on the perception of each student's own behaviour. Active learning did not produce any significant change in the generic competence 'planning and time management'.
Abstract:
A first-rate e-health system saves lives, provides better patient care, allows complex but useful epidemiological analyses and saves money. However, there may also be concerns about the costs and complexities associated with e-health implementation, and about the need to address the energy footprint of the highly demanding computing facilities involved. This paper proposes a novel, evolved computing paradigm that: (i) provides the required computing and sensing resources; (ii) allows population-wide diffusion; (iii) exploits the storage, communication and computing services provided by the cloud; and (iv) tackles energy optimization as a first-class requirement, taking it into account throughout the whole development cycle. The novel computing concept and the multi-layer, top-down energy-optimization methodology obtain promising results in a realistic scenario for cardiovascular tracking and analysis, making home-assisted living a reality.
Abstract:
Society depends on technology today more than ever, but investment in security remains scarce and computer systems are still far from secure. Cryptography is one of the cornerstones of security in this field, so a considerable amount of resources has recently been devoted to developing tools that help evaluate and improve cryptographic algorithms. EasyCrypt is one of these systems, developed recently at the IMDEA Software Institute in response to the growing need for reliable formal verification tools for cryptography. This work addresses the implementation of an improvement to EasyCrypt's term rewriting system, replacing it with a symbolic abstract machine. To this end, two well-known abstract machines, the Krivine machine and the ZAM, are first studied and implemented, introducing variations on them and studying their differences from a practical point of view.
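Since the Krivine machine is a standard construction, a compact sketch can make the text concrete; the Python below is a textbook call-by-name machine over de Bruijn terms, reducing to weak head normal form, and is not EasyCrypt's actual reducer.

from dataclasses import dataclass

@dataclass
class Var: idx: int               # de Bruijn index
@dataclass
class Lam: body: object           # abstraction
@dataclass
class App: fn: object; arg: object  # application

def krivine(term):
    env, stack = [], []           # env and stack hold closures (term, env)
    while True:
        if isinstance(term, App):              # push the argument closure
            stack.append((term.arg, env))
            term = term.fn
        elif isinstance(term, Lam) and stack:  # bind the top of the stack
            env = [stack.pop()] + env
            term = term.body
        elif isinstance(term, Var):            # enter the looked-up closure
            term, env = env[term.idx]
        else:                                  # Lam with empty stack: WHNF
            return term, env, stack

# (\x. x) (\y. y)  reduces to  \y. y
result, _, _ = krivine(App(Lam(Var(0)), Lam(Var(0))))
print(result)   # Lam(body=Var(idx=0))

The three transition rules (push, bind, lookup) are all the machine needs for call-by-name; the ZAM adds the machinery required for call-by-value.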
Abstract:
In structural analysis and design, engineering problems frequently arise that require the dynamic analysis of large finite element models, reaching millions of degrees of freedom and using very large data files. The complexity and size of the analyses grow sharply when parametric studies are required. This problem has traditionally been approached in several ways: first, by increasing the computing power and storage capacity of the computer systems involved in the analysis; second, by simplifying the parametric analyses, reducing their number or level of detail; and finally, by employing methods complementary to finite elements that reduce the number of variables and simplify execution while keeping the results close to the actual behaviour of the structure. Following this third option, we propose a reduction method based on a simplified analysis that supplies a solution for the dynamic response of the structure in the complex modal subspace using a very small volume of data. Parametric analyses can thus be carried out varying multiple parameters to obtain a solution very close to the desired objective. We propose not only variations of local mass, stiffness and damping properties, but also the addition of degrees of freedom to the original structure for computing both the steady-state and the transient response. Additionally, the simple implementation of the procedure allows exhaustive control over the problem variables and the incorporation of improvements such as different ways of obtaining the eigenvalues or the removal of damping limitations in the original structure. The aim of the method is similar to that of the Guyan method and other model order reduction techniques used in structural dynamics. However, the present procedure does not condense the system of equations in the traditional sense, although additional gains, not explored here, could be obtained by combining it with such techniques. Instead, it uses the information of the complete system of equations, studying only the response in the appropriate variables at the points of interest to the analyst. That interest may come from the need to obtain the response of a large structure at specific locations, or from the need to modify the structure at certain positions in order to change its behaviour (acceleration, velocity or displacement response) under dynamic loads. The procedure is therefore particularly suitable for selecting the optimal value of several parameters in large structures (on the order of hundreds of thousands of modes), such as the location of added elements, stiffnesses, masses or viscous damping values, in preliminary studies where several solutions are proposed and optimized and where, for large structures, reaching the optimal solution may require an extremely high number of simulations. After presenting the necessary tools and developing the procedure, a case study is proposed applying it to the finite element model (FEM) of the MILANO UAV developed by the Instituto Nacional de Técnica Aeroespacial. Due to the addition of an item of equipment, requirements are imposed on the accelerations at the left wing tip and the displacements at the right wing tip, in the presence of the lift produced by a continuous sinusoidal wind gust. The proposed modification consists of adding the equipment at the left wing tip, either through a rigid attachment or through a dynamic-response reduction system with variable mass, stiffness and damping properties. The study of the results allows the parameters of the attenuation system to be optimized by means of multiple dynamic analyses, so that the requirements imposed by the modification are met as closely as possible. The results are compared with those obtained using a commercial finite element analysis program, achieving very close agreement between the complete and the reduced models. The influence of several factors, such as the modal damping of the original structure, the number of modes retained in the truncation, and the precision provided by the frequency sweep, is analysed in detail. Finally, the efficiency of the proposed method in terms of computation time and data volume is highlighted in comparison with other approaches. It can therefore be concluded that the proposed method is a useful and efficient option for the parametric analysis of local modifications in large structures.
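For context, frequency sweeps of this kind typically evaluate the modal-superposition expression for the steady-state harmonic response. In the proportionally damped case, with m retained mass-normalized mode shapes \phi_i, natural frequencies \omega_i, modal damping ratios \zeta_i and load amplitude vector F, it reads

u(\omega) \approx \sum_{i=1}^{m} \frac{\phi_i \left( \phi_i^{T} F \right)}{\omega_i^{2} - \omega^{2} + 2\,\mathrm{j}\,\zeta_i\,\omega_i\,\omega}.

The thesis works more generally in the complex modal subspace; the expression above is shown only as the standard, proportionally damped special case that the modal truncation and frequency sweep build on.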
Abstract:
This work evaluates the influence of human emotions expressed through facial expressions on the decision-making of computer systems, with the goal of improving the user experience. To this end, three modules were developed. The first is an assistive computing system: a digital version of an augmentative and alternative communication board. The second module, here called the Affective Module, is an affective computing system that uses computer vision to capture the user's facial expressions and classify their emotional state. This second module was implemented in two stages, both inspired by the Facial Action Coding System (FACS), which identifies facial expressions based on the human cognitive system. In the first stage, the Affective Module infers the basic emotional states: happiness, surprise, anger, fear, sadness, disgust, and also the neutral state. According to most researchers in the field, the basic emotions are innate and universal, which makes the Affective Module generalizable to any population. Tests with the proposed model yielded results 10.9% above those of approaches using similar methodologies. Spontaneous emotions were also analysed, with computational results approaching human accuracy. In the second stage of the Affective Module's development, the goal was to identify facial expressions reflecting a person's dissatisfaction or difficulty while using computer systems, and the first model of the Affective Module was adjusted for this purpose. Finally, a Decision-Making Module was developed that receives information from the Affective Module and intervenes in the computer system. Parameters such as icon size, drag converted into click, and scanning speed are changed in real time by the Decision-Making Module in the assistive computer system, according to the information generated by the Affective Module. Since the Affective Module has no training stage for inferring the emotional state, a neutral-face algorithm was proposed to solve the problem of initialization with faces already expressing emotions. This work also proposes dividing rapid facial signals into baseline signals (tics and other facial-movement noise that are not emotional signals) and emotional signals. The results of the case studies carried out with students of the APAE of Presidente Prudente showed that it is possible to improve the user experience by configuring a computer system with emotional information expressed through facial expressions.
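A heavily simplified sketch of the neutral-face initialization idea; the per-feature minimum baseline below is an assumption for illustration, not the thesis's algorithm.

import numpy as np

class NeutralBaseline:
    """Accumulates a neutral-face estimate so classification can start
    even when the first frames already contain an emotion."""

    def __init__(self, n_features):
        self.baseline = np.full(n_features, np.inf)

    def update(self, features):
        # Features closest to rest (smallest activation) refine the baseline.
        self.baseline = np.minimum(self.baseline, features)

    def deviation(self, features):
        """Emotional signal = current features minus the neutral baseline."""
        return features - self.baseline

tracker = NeutralBaseline(n_features=3)
for frame in ([0.6, 0.2, 0.4], [0.1, 0.1, 0.1], [0.7, 0.3, 0.5]):
    tracker.update(np.asarray(frame))
print("emotional deviation:", tracker.deviation(np.asarray([0.7, 0.3, 0.5])))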
Abstract:
In the chemical textile domain, experts have to analyse chemical components and substances that might be harmful when used in clothing and textiles. Part of this analysis involves searching for opinions and reports that people have expressed about these products on the Social Web. However, this type of information is not as frequent on the Internet for this domain as for others, so its detection and classification are difficult and time-consuming. Consequently, problems associated with the use of chemical substances in textiles may not be detected early enough, and could lead to health problems such as allergies or burns. In this paper, we propose a framework able to detect, retrieve, and classify subjective sentences related to the chemical textile domain, which could be integrated into a wider health surveillance system. We also describe the creation of several datasets with opinions from this domain, the experiments performed using machine learning techniques and different lexical resources such as WordNet, and the evaluation focusing on sentiment classification and complaint detection (i.e., negativity). Despite the challenges involved in this domain, our approach obtains promising results, with an F-score of 65% for polarity classification and 82% for complaint detection.
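As an illustration of the kind of machine-learning setup described (not the authors' exact pipeline or data), a minimal polarity classifier could look like the following; the inline sentences are invented examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "this shirt caused a skin rash after one day",   # negative opinion
    "no allergic reaction, the fabric feels safe",   # positive opinion
    "the dye left burns on my arms",                 # negative opinion
    "comfortable textile, no irritation at all",     # positive opinion
]
labels = ["neg", "pos", "neg", "pos"]

# Bag-of-words features (unigrams and bigrams) feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)
print(model.predict(["this product gave me an allergy"]))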
Abstract:
This introduction provides an overview of the state-of-the-art technology in applications of natural language to information systems. Specifically, we analyse the need for such technologies to successfully address the new challenges of modern information systems, in which the exploitation of the Web as the main data source for business systems becomes a key requirement. We also discuss the reasons why human language technologies have shifted their focus onto new areas of interest directly linked to the development of technology for the treatment and understanding of Web 2.0, technologies that are expected to become the interfaces of the information systems to come. Moreover, we review current topics of interest to this research community, and present the selection of manuscripts chosen by the program committee of the NLDB 2011 conference as representative cornerstone research works, highlighting in particular their contribution to the advancement of such technologies.
Abstract:
The explosive growth of traffic in computer systems has made it clear that traditional control techniques are not adequate to give system users fast access to network resources and to prevent unfair use. In this paper, we present a reconfigurable digital hardware implementation of a specific neural model for intrusion detection. It uses a characterization vector for network packets (the intrusion vector), built from information obtained during the access attempt, which is then processed by the system. Our approach is adaptive and detects intrusions using a well-known artificial intelligence method, the multilayer perceptron. The implementation has been developed and tested on reconfigurable hardware (FPGA) for embedded systems. Finally, the intrusion detection system was tested in a real-world simulation to gauge its effectiveness and real-time response.
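A software-only sketch of the underlying classifier (the paper's contribution is the FPGA design itself): a multilayer perceptron over hypothetical intrusion vectors, with invented feature names and data.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row is an invented intrusion vector:
# [packet_rate, avg_packet_size, failed_logins, port_entropy]
X = np.array([
    [10, 500, 0, 0.20],    # normal traffic
    [12, 480, 1, 0.30],    # normal traffic
    [900, 60, 25, 0.90],   # port-scan-like burst
    [850, 64, 30, 0.95],   # port-scan-like burst
])
y = np.array([0, 0, 1, 1])  # 0 = normal, 1 = intrusion

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[870, 70, 20, 0.90]]))  # a burst resembling the intrusion rows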