881 results for computer systems


Relevance:

70.00%

Publisher:

Abstract:

This paper reviews objective assessments of Parkinson’s disease (PD) motor symptoms, both the cardinal symptoms and dyskinesia, using sensor systems. It surveys the manifestation of PD symptoms, the sensors used to detect them, the types of signals (measures) acquired, and the associated signal processing (data analysis) methods. The review’s findings are summarized in a table listing the devices (sensors), measures, and methods used in each reviewed motor symptom assessment study. Across the gathered studies, accelerometers and touch-screen devices are the sensors most widely used to detect PD symptoms, and bradykinesia and tremor are the symptoms most frequently evaluated. In general, machine learning methods appear promising for this kind of assessment. PD is a complex disease that requires continuous monitoring and multidimensional symptom analysis. Combining existing technologies into new sensor platforms may help assess the overall symptom profile more accurately and support the development of useful tools for better treatment.
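As a rough illustration of the kind of pipeline such studies describe, the sketch below extracts a simple spectral feature from accelerometer windows and feeds it to a machine learning classifier; the 4–7 Hz tremor band, the sampling rate, and all data are assumptions made for the example, not details from any reviewed study.

```python
# Hypothetical sketch: detecting rest tremor from wrist accelerometer windows.
# Assumes 100 Hz sampling and a labelled training set; not from any reviewed study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 100  # assumed sampling rate (Hz)

def tremor_band_power(window):
    """Fraction of signal power in the 4-7 Hz band, where parkinsonian rest tremor typically lies."""
    spectrum = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    band = (freqs >= 4) & (freqs <= 7)
    return spectrum[band].sum() / spectrum.sum()

def features(windows):
    # One row per window: tremor-band power plus overall signal variance.
    return np.array([[tremor_band_power(w), np.var(w)] for w in windows])

# Placeholder data shown only so the sketch runs end to end:
# X_train holds 5-second acceleration windows, y_train holds 0 = no tremor, 1 = tremor.
rng = np.random.default_rng(0)
X_train = [rng.normal(size=FS * 5) for _ in range(20)]
y_train = rng.integers(0, 2, size=20)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features(X_train), y_train)
print(clf.predict(features(X_train[:3])))
```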

Relevance:

70.00%

Publisher:

Abstract:

Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and Intranet. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model essentially captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. This tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
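A minimal sketch of how an n-gram model over logged user actions might score a session for deviation from normal behavior is given below; the action names, the bigram order, and the smoothing are illustrative assumptions, not the Intruder Detector implementation.

```python
# Minimal sketch: bigram model of a user's action sequence, used to score new
# sessions for deviation from "normal behaviour". Action names and smoothing
# are illustrative; this is not the Intruder Detector implementation.
from collections import Counter, defaultdict
import math

def train_bigram(sessions):
    counts = defaultdict(Counter)
    for actions in sessions:
        for prev, nxt in zip(actions, actions[1:]):
            counts[prev][nxt] += 1
    return counts

def session_log_prob(counts, actions, vocab_size):
    # Add-one smoothing so unseen transitions do not zero out the score.
    logp = 0.0
    for prev, nxt in zip(actions, actions[1:]):
        total = sum(counts[prev].values())
        logp += math.log((counts[prev][nxt] + 1) / (total + vocab_size))
    return logp / max(len(actions) - 1, 1)   # normalise by transition count

normal_sessions = [["login", "search", "view", "logout"],
                   ["login", "view", "edit", "logout"]]
vocab = {a for s in normal_sessions for a in s}
model = train_bigram(normal_sessions)

suspect = ["login", "export", "export", "export"]
score = session_log_prob(model, suspect, len(vocab))
print("deviation score:", score)   # large negative values suggest unusual behaviour
```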

Relevance:

70.00%

Publisher:

Abstract:

INTRODUCTION In recent years computer systems have become increasingly complex and consequently the challenge of protecting these systems has become increasingly difficult. Various techniques have been implemented to counteract the misuse of computer systems in the form of firewalls, antivirus software and intrusion detection systems. The complexity of networks and dynamic nature of computer systems leaves current methods with significant room for improvement. Computer scientists have recently drawn inspiration from mechanisms found in biological systems and, in the context of computer security, have focused on the human immune system (HIS). The human immune system provides an example of a robust, distributed system that provides a high level of protection from constant attacks. By examining the precise mechanisms of the human immune system, it is hoped the paradigm will improve the performance of real intrusion detection systems. This paper presents an introduction to recent developments in the field of immunology. It discusses the incorporation of a novel immunological paradigm, Danger Theory, and how this concept is inspiring artificial immune systems (AIS). Applications within the context of computer security are outlined drawing direct reference to the underlying principles of Danger Theory and finally, the current state of intrusion detection systems is discussed and improvements suggested.

Relevance:

70.00%

Publisher:

Abstract:

Intrusion Detection Systems (IDSs) provide an important layer of security for computer systems and networks, and are becoming more and more necessary as reliance on Internet services increases and systems with sensitive data are more commonly open to Internet access. An IDS’s responsibility is to detect suspicious or unacceptable system and network activity and to alert a systems administrator to this activity. The majority of IDSs use a set of signatures that define what suspicious traffic is, and Snort is one popular and actively developed open-source IDS that uses such a set of signatures, known as Snort rules. Our aim is to identify a way in which Snort could be developed further by generalising rules to identify novel attacks. In particular, we attempted to relax and vary the conditions and parameters of current Snort rules, using an approach similar to classic rule learning operators such as generalisation and specialisation. We demonstrate the effectiveness of our approach through experiments with standard datasets and show that we are able to detect previously undetected variants of various attacks. We conclude by discussing the general effectiveness and appropriateness of generalisation in Snort-based IDS rule processing. Keywords: anomaly detection, intrusion detection, Snort, Snort rules
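The sketch below illustrates the flavour of such rule generalisation on a made-up Snort-style rule, relaxing the destination port to "any"; the rule text and the chosen relaxation are assumptions for illustration, not the operators used in the paper.

```python
# Toy sketch of rule generalisation: take a Snort-style rule and relax one
# condition (here the destination port) to produce a more general variant.
# The rule text is made up; real Snort rules and operators are richer than this.
rule = ('alert tcp any any -> 192.168.1.0/24 8080 '
        '(msg:"example exploit attempt"; content:"/cgi-bin/"; sid:1000001;)')

def generalise_dest_port(rule_text):
    """Replace the destination port with 'any', one classic generalisation step."""
    header, options = rule_text.split("(", 1)
    fields = header.split()
    # header layout: action proto src_ip src_port -> dst_ip dst_port
    if len(fields) >= 7:
        fields[6] = "any"
    return " ".join(fields) + " (" + options

print(generalise_dest_port(rule))
```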

Relevance:

60.00%

Publisher:

Abstract:

These notes follow on from the material that you studied in CSSE1000 Introduction to Computer Systems. There you studied details of logic gates, binary numbers and instruction set architectures using the Atmel AVR microcontroller family as an example. In your present course (METR2800 Team Project I), you need to get on to designing and building an application which will include such a microcontroller. These notes focus on programming an AVR microcontroller in C and provide a number of example programs to illustrate the use of some of the AVR peripheral devices.

Relevance:

60.00%

Publisher:

Abstract:

Training-needs analysis is critical for defining and procuring effective training systems. However, traditional approaches to training-needs analysis are not suitable for capturing the demands of highly automated and computerized work domains. In this article, we propose that work domain analysis can identify the functional structure of a work domain that must be captured in a training system, so that workers can be trained to deal with unpredictable contingencies that cannot be handled by computer systems. To illustrate this argument, we outline a work domain analysis of a fighter aircraft that defines its functional structure in terms of its training objectives, measures of performance, basic training functions, physical functionality, and physical context. The functional structure or training needs identified by work domain analysis can then be used as a basis for developing functional specifications for training systems, specifically its design objectives, data collection capabilities, scenario generation capabilities, physical functionality, and physical attributes. Finally, work domain analysis also provides a useful framework for evaluating whether a tendered solution fulfills the training needs of a work domain.

Relevance:

60.00%

Publisher:

Abstract:

This work consists of the development of a Criminology Support System (Sistema de Apoio à Criminologia, SAC), intended to help detectives/analysts in the proactive prevention of crime and in the management of their material and human resources, as well as to foster studies on the high incidence of certain types of crime in a given region. Historically, solving crimes has been the prerogative of criminal justice and its specialists, and with the growing use of computer systems in the judicial system to record all data concerning crime occurrences, data on suspects and victims, individuals' criminal records, and other data flowing within the organization, the need grows to transform these data into information useful in fighting crime. SAC takes advantage of knowledge-extraction (data mining) techniques and applies them to a set of crime occurrence data for a given region and time span, as well as to a set of variables that influence crime, which were studied and identified in this work. The work comprises a knowledge-extraction model and an application that allows the user to supply a suitable dataset, ensuring the model's maximum effectiveness.
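A generic sketch of one knowledge-extraction step of the kind described, clustering geo-referenced crime occurrences to surface high-incidence areas, is shown below; the column names, coordinates, and parameters are assumptions for illustration and are not part of SAC.

```python
# Generic sketch of one knowledge-extraction step of the kind SAC applies:
# clustering geo-referenced crime occurrences to surface high-incidence areas.
# Column names, coordinates, and parameters are illustrative, not SAC's model.
import pandas as pd
from sklearn.cluster import KMeans

occurrences = pd.DataFrame({
    "lat":  [38.71, 38.72, 38.71, 41.15, 41.16],
    "lon":  [-9.14, -9.13, -9.15, -8.61, -8.62],
    "type": ["theft", "theft", "burglary", "theft", "assault"],
})

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
occurrences["cluster"] = kmeans.fit_predict(occurrences[["lat", "lon"]])

# Incidence of each crime type per spatial cluster
print(occurrences.groupby(["cluster", "type"]).size())
```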

Relevance:

60.00%

Publisher:

Abstract:

The emergence of new business models, namely the establishment of partnerships between organizations, and the possibility that companies have of adding existing data on the web, especially in the semantic web, to their own information, have brought to the fore problems existing in databases, particularly those related to data quality. Poor data can result in a loss of competitiveness for the organizations holding them, and may even lead to their disappearance, since many of their decision-making processes are based on these data. For this reason, data cleaning is essential. Current approaches to solving these problems are closely tied to database schemas and specific domains. For data cleaning to be usable across different repositories, computer systems must be able to understand these data, i.e., an associated semantics is needed. The solution presented in this paper includes the use of ontologies: (i) for the specification of data cleaning operations and (ii) as a way of solving the semantic heterogeneity problems of data stored in different sources. With data cleaning operations defined at a conceptual level, and given mappings between domain ontologies and an ontology derived from a database, the operations may be instantiated and proposed to the expert/specialist for execution over that database, thus enabling their interoperability.
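A minimal sketch of what it can look like to express a cleaning operation against ontology terms rather than a table schema, using rdflib, follows; the namespace, classes, and normalisation rule are invented for the example and are not the paper's ontology.

```python
# Minimal sketch: describing a data-cleaning operation against ontology terms
# rather than a concrete database schema. The namespace, classes, and the
# normalisation rule are invented for illustration; they are not the paper's ontology.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/cleaning#")
g = Graph()

# Declare a cleaning operation at the conceptual level:
# "normalise every value of the property ex:country to upper case".
op = EX.NormaliseCountry
g.add((op, RDF.type, EX.CleaningOperation))
g.add((op, EX.targetsProperty, EX.country))
g.add((op, EX.transformation, Literal("upper-case")))

# A record whose meaning is given by the same ontology, not by a table schema.
rec = EX.record42
g.add((rec, EX.country, Literal("portugal")))

# Instantiate the operation over whichever source maps onto ex:country.
for subject, value in list(g.subject_objects(EX.country)):
    g.set((subject, EX.country, Literal(str(value).upper())))

print(g.value(rec, EX.country))   # -> "PORTUGAL"
```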

Relevance:

60.00%

Publisher:

Abstract:

Master's degree in Electrical and Computer Engineering

Relevance:

60.00%

Publisher:

Abstract:

Master's degree in Accounting and Management of Financial Institutions

Relevance:

60.00%

Publisher:

Abstract:

One of the most debated topics in today's society is security. Security levels and the tools for attaining them stand in counterpoint to the methods used to break them. As in the past, the quality/service ratio holds today, and will hold in the future, assuring greater security to those who protect themselves best. Simple real-life problems such as theft or identity fraud take on, in the computing world, a fast and sometimes undetectable form of organized crime. This study investigates social methods and common software applications used to break the security of a generic computer system. In this way, and with an understanding of the modus operandi of malicious entities, the instability and insecurity of a computer system can be demonstrated, and the system can subsequently be acted upon so that it is placed in a security posture that, while perhaps not infallible, can be much improved. One of the central objectives of this work is to implement and configure a complete system through a study of market solutions, free or commercial, for deploying a networked system with all common services installed, i.e., a turnkey package with machine services, operating system, applications, and networked operation with e-mail services, business management, anti-virus, and firewall, among others. It will then be possible to demonstrate an instance of a functional, secure system with the services required of a current system, without resorting to third parties, and subjected to a set of tests that contribute to strengthening its security.

Relevance:

60.00%

Publisher:

Abstract:

The complexity of computer systems has been increasing, and the use of computer systems and online services is now part of our daily work tools. In this context, the Internet plays a prominent role in universities by allowing students and teachers to interact more easily. The Internet and Web-based education offer remote access to any information regardless of location or time. Consequently, anyone with an Internet connection gains significant advantages by being able to acquire information on a given subject from the foremost experts. Remote laboratories are a highly valued solution for linking technology and human resources in environments that may be separated in time or space. The creation of this kind of laboratory, and its real usefulness, is only possible because emerging communication technologies have contributed very significantly to improving their remote availability. Remote laboratories become indispensable for engineering research that involves the use of scarce or large-scale resources. Based on this concept, a remote laboratory was developed for engineering students who need to test digital circuits on a configurable-hardware development board, allowing this resource to be used more efficiently. The work consisted of creating a low-cost remote laboratory based on open-source programming languages, using as its processing unit an ASUS router running the OpenWrt firmware, a Linux distribution for embedded systems. This remote laboratory allows digital circuits to be tested on a configurable-hardware development board in real time, using the JTAG interface. A distinctive feature of the developed laboratory is that its processing unit is a router. Using a router as the server is a very unusual solution in remote laboratory implementations. Compared with an ordinary computer, this router has far less processing capacity and memory, although the tests performed showed that its performance was well suited to expectations.
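A hedged sketch of the kind of small HTTP service such a router-based laboratory might expose, accepting a bitstream upload and handing it to a JTAG programming tool, is shown below; the endpoint path, file locations, and the "jtag-program" command are placeholders, not the tools used in this work.

```python
# Hedged sketch (not the thesis implementation): a tiny HTTP service of the
# kind a router-based remote lab might run, accepting a bitstream upload and
# handing it to a JTAG programming tool. The URL path, file locations, and the
# "jtag-program" command with its arguments are placeholders, not real tools.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProgramHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/program":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        bitstream = self.rfile.read(length)
        with open("/tmp/design.bit", "wb") as f:
            f.write(bitstream)
        # Placeholder command standing in for whatever JTAG programmer the board uses.
        result = subprocess.run(["jtag-program", "/tmp/design.bit"],
                                capture_output=True, text=True)
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()
        self.wfile.write(result.stdout.encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProgramHandler).serve_forever()
```

In practice, the board-specific programmer binary and whatever authentication the laboratory requires would take the place of these placeholders.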

Relevance:

60.00%

Publisher:

Abstract:

Learning and teaching processes, like all human activities, can be mediated through the use of tools. Information and communication technologies are now widespread within education. Their use in the daily life of teachers and learners affords engagement with educational activities at any place and time and not necessarily linked to an institution or a certificate. In the absence of formal certification, learning under these circumstances is known as informal learning. Despite the lack of certification, learning with technology in this way presents opportunities to gather information about and present new ways of exploiting an individual’s learning. Cloud technologies provide ways to achieve this through new architectures, methodologies, and workflows that facilitate semantic tagging, recognition, and acknowledgment of informal learning activities. The transparency and accessibility of cloud services mean that institutions and learners can exploit existing knowledge to their mutual benefit. The TRAILER project facilitates this aim by providing a technological framework using cloud services, a workflow, and a methodology. The services facilitate the exchange of information and knowledge associated with informal learning activities ranging from the use of social software through widgets, computer gaming, and remote laboratory experiments. Data from these activities are shared among institutions, learners, and workers. The project demonstrates the possibility of gathering information related to informal learning activities independently of the context or tools used to carry them out.