930 results for SUMEX-AIM (Computer system)
Abstract:
We consider a Cooperative Intrusion Detection System (CIDS), a distributed AIS-based (Artificial Immune System) IDS in which nodes collaborate over a peer-to-peer overlay network. The AIS uses the negative selection algorithm to select detectors (e.g., vectors of features such as CPU utilization, memory usage, and network activity). For better detection performance, selecting all possible detectors for a node is desirable, but it may not be feasible due to storage and computational overheads. Limiting the number of detectors, on the other hand, carries the danger of missing attacks. We present a scheme for the controlled and decentralized division of detector sets in which each IDS is assigned to a region of the feature space, and we investigate the trade-off between the scalability and robustness of detector sets. We address the problem of self-organization in a CIDS so that each node generates a distinct set of detectors to maximize coverage of the feature space, while pairs of nodes exchange their detector sets to provide a controlled level of redundancy. Our contribution is twofold. First, we use deterministic techniques from combinatorial design theory and graph theory, based on Symmetric Balanced Incomplete Block Designs, Generalized Quadrangles, and Ramanujan Expander Graphs, to decide how many and which detectors are exchanged between which pairs of IDS nodes. Second, we use a classical epidemic model (the SIR model) to show how properties of these deterministic techniques can help us reduce the attack spread rate.
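As an illustration of the detector-selection step described above, the following minimal Python sketch (not the authors' code; the feature names, matching radius, and detector count are assumptions) generates negative-selection detectors that avoid a node's "self" profile in a normalized feature space:

```python
# Minimal sketch (not the authors' code): negative-selection detector
# generation over a normalized feature space such as
# (CPU utilization, memory usage, network activity).
import random

def matches(detector, sample, radius=0.1):
    """A detector 'matches' a sample if it lies within a Euclidean radius."""
    dist = sum((d - s) ** 2 for d, s in zip(detector, sample)) ** 0.5
    return dist < radius

def generate_detectors(self_samples, n_detectors=50, dim=3, radius=0.1, seed=0):
    """Keep only candidate detectors that do NOT match any 'self' sample."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [rng.random() for _ in range(dim)]
        if not any(matches(candidate, s, radius) for s in self_samples):
            detectors.append(candidate)
    return detectors

# Hypothetical normal-behavior profile of one node (normalized features).
self_profile = [[0.2, 0.3, 0.1], [0.25, 0.28, 0.12], [0.22, 0.35, 0.09]]
detectors = generate_detectors(self_profile)
print(len(detectors), "detectors covering the non-self region")
```

In the scheme above, each node would generate detectors for its assigned region only, and the combinatorial designs decide which of these sets are exchanged between which node pairs.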
Abstract:
There are different ways to authenticate humans, which is an essential prerequisite for access control. The authentication process can be subdivided into three categories that rely on something someone i) knows (e.g. a password), and/or ii) has (e.g. a smart card), and/or iii) is (biometric features). Besides classical attacks on password solutions and the risk that identity-related objects can be stolen, traditional biometric solutions have their own disadvantages, such as the requirement for expensive devices and the risk of stolen bio-templates. Moreover, existing approaches usually perform the authentication process only once, at the beginning of a session. Non-intrusive and continuous monitoring of user activities emerges as a promising way to harden the authentication process by adding a further category, iii-2): how someone behaves. In recent years, various keystroke-dynamics approaches have been published that are able to authenticate humans based on their typing behavior. The majority focus on so-called static text approaches, where users are requested to type a previously defined text. Relatively few techniques are based on free text approaches, which allow transparent monitoring of user activities and provide continuous verification. Unfortunately, only a few solutions are deployable in application environments under realistic conditions; unsolved problems include, for instance, scalability, high response times, and high error rates. The aim of this work is the development of behavior-based verification solutions. Our main requirement is to deploy these solutions under realistic conditions within existing environments in order to enable transparent, free-text-based continuous verification of active users with low error rates and response times.
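To make the free-text idea concrete, here is a minimal Python sketch (not the thesis implementation; the key events, digraph features, and acceptance threshold are illustrative assumptions) that builds a digraph-latency profile and verifies a monitored session against it:

```python
# Minimal sketch (assumptions, not the thesis implementation): free-text
# keystroke dynamics using digraph latencies and a simple distance test.
from collections import defaultdict

def digraph_latencies(key_events):
    """key_events: list of (key, press_time_ms). Returns mean latency per digraph."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (k1, t1), (k2, t2) in zip(key_events, key_events[1:]):
        sums[(k1, k2)] += t2 - t1
        counts[(k1, k2)] += 1
    return {dg: sums[dg] / counts[dg] for dg in sums}

def verify(profile, session, threshold=40.0):
    """Accept the session if shared digraphs deviate little from the enrolled profile."""
    shared = set(profile) & set(session)
    if not shared:
        return False
    mean_dev = sum(abs(profile[dg] - session[dg]) for dg in shared) / len(shared)
    return mean_dev < threshold

# Hypothetical enrollment and monitoring data (key, timestamp in ms).
enrolled = digraph_latencies([("t", 0), ("h", 95), ("e", 180), ("t", 400), ("h", 492)])
observed = digraph_latencies([("t", 0), ("h", 105), ("e", 197)])
print("accepted:", verify(enrolled, observed))
```

A deployable system would of course need far richer features and statistics, but the sketch shows why free-text verification can run transparently in the background: it only needs a stream of timestamped key events.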
Abstract:
Our daily lives are becoming more and more dependent upon smartphones due to their increased capabilities. Smartphones are used in various ways, e.g. for payment systems or for assisting the lives of elderly or disabled people. Security threats for these devices are becoming more and more dangerous, since there is still a lack of proper security tools for protection. Android emerges as an open smartphone platform that allows modification even at the operating system level, and where third-party developers have, for the first time, the opportunity to develop kernel-based low-level security tools. Android quickly gained popularity among smartphone developers and beyond, since it is based on Java on top of an "open" Linux, in contrast to former proprietary platforms with very restrictive SDKs and corresponding APIs. Symbian OS, holding the greatest market share among all smartphone OSs, even closed critical APIs to common developers and introduced application certification, because this OS was the main target of smartphone malware in the past. In fact, more than 290 malware samples designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications endangering critical smartphone applications and owners' privacy. Since signature-based approaches mainly detect known malware, anomaly-based approaches can be a valuable addition to these systems. They are based on mathematical algorithms processing data that describe the state of a certain device. To obtain this data, a monitoring client is needed that extracts usable information (features) from the monitored system. Our approach follows a dual system for analyzing these features. On the one hand, functionality for on-device, light-weight detection is provided. On the other hand, since most algorithms are resource-exhaustive, remote feature analysis is provided as well. This dual system enables event-based detection that can react to the current detection need. In our ongoing research we aim to investigate the feasibility of light-weight on-device detection for certain occasions. On other occasions, whenever significant changes are detected on the device, the system can trigger remote detection with heavy-weight algorithms for better detection results. In the absence of the server, or as a supplementary approach, we also consider a collaborative scenario in which mobile devices sharing a common objective are enabled by a collaboration module to share information such as intrusion detection data and results. This is based on an ad-hoc network mode that can be provided by the WiFi or Bluetooth adapter nearly every smartphone possesses.
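The dual on-device/remote scheme could look roughly like the following Python sketch; the feature names, baseline, threshold, and the stubbed remote_analysis() function are hypothetical and merely stand in for the monitoring client and server described above:

```python
# Minimal sketch of the dual (on-device / remote) idea under stated
# assumptions; feature names and thresholds are hypothetical, and the
# "remote" analysis is stubbed out rather than a real server call.
def extract_features():
    """Stand-in for a monitoring client reading device state."""
    return {"cpu": 0.91, "free_ram_mb": 48.0, "sms_sent_per_min": 7.0}

def lightweight_score(features, baseline):
    """On-device check: mean normalized deviation from a per-device baseline."""
    return sum(abs(features[k] - baseline[k]) / (abs(baseline[k]) + 1e-9)
               for k in baseline) / len(baseline)

def remote_analysis(features):
    """Placeholder for resource-intensive, server-side detection algorithms."""
    return {"verdict": "suspicious", "detail": features}

baseline = {"cpu": 0.20, "free_ram_mb": 210.0, "sms_sent_per_min": 0.2}
features = extract_features()
score = lightweight_score(features, baseline)
if score > 1.0:                          # significant change detected on the device
    print(remote_analysis(features))     # trigger heavy-weight remote detection
else:
    print("on-device check passed, score =", round(score, 2))
```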
Abstract:
The Session Initiation Protocol (SIP) was developed to provide advanced voice services over IP networks. SIP unites the telephony and data worlds, permitting telephone calls to be transmitted over intranets and the Internet. Increases in network performance and new mechanisms for guaranteed quality of service encourage this consolidation, which promises toll cost savings. Security comes up as one of the most important issues when voice communication and critical voice applications are considered. Not only the security methods provided by traditional telephony systems, but also additional methods, are required to overcome the security risks introduced by public IP networks. SIP considers the security problems of such a consolidation and provides a security framework. Several security methods are defined within the SIP specifications and extensions, but the suggested methods cannot solve all the security problems of SIP systems with various system requirements. In this thesis, a Kerberos-based solution is proposed for SIP security problems, including SIP authentication and privacy. The proposed solution aims to establish a flexible and scalable SIP system that provides the desired level of security for voice communications and critical telephony applications.
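The following Python sketch is only a conceptual illustration of the Kerberos-style idea (it is not the thesis protocol and uses a keyed hash instead of real Kerberos cryptography): a client attaches a service ticket to a SIP REGISTER request, and the registrar verifies the ticket before accepting the registration.

```python
# Conceptual sketch only (no real cryptography, not the thesis protocol):
# a client presents a Kerberos-style service ticket when registering with
# a SIP registrar, which verifies it before answering 200 OK.
import hashlib, time

SHARED_SERVICE_KEY = "registrar-secret"   # hypothetical key shared by KDC and registrar

def issue_ticket(user, service):
    """Stand-in for a KDC issuing a service ticket (here just a keyed hash)."""
    expiry = int(time.time()) + 3600
    proof = hashlib.sha256(f"{user}|{service}|{expiry}|{SHARED_SERVICE_KEY}".encode()).hexdigest()
    return {"user": user, "service": service, "expiry": expiry, "proof": proof}

def registrar_accepts(register_request):
    """Registrar re-derives the proof and checks the ticket before accepting REGISTER."""
    t = register_request["ticket"]
    expected = hashlib.sha256(
        f"{t['user']}|{t['service']}|{t['expiry']}|{SHARED_SERVICE_KEY}".encode()).hexdigest()
    return t["proof"] == expected and t["expiry"] > time.time()

ticket = issue_ticket("alice@example.com", "sip:registrar.example.com")
request = {"method": "REGISTER", "from": "alice@example.com", "ticket": ticket}
print("200 OK" if registrar_accepts(request) else "401 Unauthorized")
```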
Abstract:
There is no doubt that social engineering plays a vital role in compromising most security defenses and in attacks on people, organizations, companies, and even governments. It is the art of deceiving and tricking people into revealing critical information or performing an action that benefits the attacker in some way. Fraudulent and deceptive actors have been setting social engineering traps and deploying such tactics through information technology such as e-mails, social networks, web sites, and applications to trick victims into obeying them, accepting threats, and falling victim to various crimes and attacks such as phishing, sexual abuse, financial abuse, identity theft, impersonation, physical crime, and many other forms of attack. Although organizations, researchers, practitioners, and lawyers recognize the severe risk of social engineering-based threats, there is a severe lack of understanding and control of such threats. One side of the problem is perhaps the unclear concept of social engineering itself, as well as the complexity of understanding how humans behave toward, approach, accept, and fail to recognize threats or the deception behind them. The aim of this paper is to clarify the definition of social engineering based on theories from the many related disciplines, such as psychology, sociology, information technology, marketing, and behaviourism. We hope, through this work, to help researchers, practitioners, lawyers, and other decision makers to get a fuller picture of social engineering and, therefore, to open new directions of collaboration toward detecting and controlling it.
Abstract:
The modern stage of development of automatic control and navigation systems for small, reusable unmanned aerial vehicles (UAVs) imposes demanding requirements on the autonomy, accuracy, and miniaturization of these systems. The contradictory nature of these requirements dictates a functional and algorithmic fusion of several different onboard sources of navigation information within a single computational process based on optimal filtering methods. Strapdown inertial navigation systems (INS) that fuse data from micro-electro-mechanical inertial sensors and air-data sensors with signals from global navigation satellite system (GNSS) receivers have become widespread. However, under modern conditions this approach does not fully satisfy the requirements for jamming immunity, fault tolerance, autonomy, and accuracy of the resulting navigation information. At the same time, significant progress has recently been demonstrated by navigation systems that use the correlation-extremal principle applied to optical landmarks and digital terrain maps. This article proposes an architecture for an autonomous automatic navigation system (ANS) for reusable UAVs that combines the algorithms of a strapdown INS, a satellite navigation system, and an optical navigation system.
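As a toy illustration of the optimal-filtering fusion mentioned above, the following sketch (assumed one-dimensional motion with invented noise parameters, not the proposed ANS) fuses an INS-propagated position with noisy GNSS fixes in a scalar Kalman filter:

```python
# Minimal sketch (illustrative assumptions, not the proposed ANS): a 1-D
# Kalman filter fusing an INS-propagated position with GNSS position fixes.
def kalman_fuse(ins_velocities, gnss_positions, dt=1.0, q=0.5, r=4.0):
    x, p = 0.0, 1.0                     # state (position) and its variance
    track = []
    for v, z in zip(ins_velocities, gnss_positions):
        # Prediction: dead-reckon with the INS velocity, inflate uncertainty.
        x, p = x + v * dt, p + q
        # Update: correct with the (noisy) GNSS position measurement.
        k = p / (p + r)                 # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p
        track.append(round(x, 2))
    return track

ins_v  = [1.0, 1.0, 1.1, 0.9, 1.0]      # hypothetical INS velocity samples (m/s)
gnss_z = [1.3, 2.1, 2.8, 4.2, 5.1]      # hypothetical GNSS position fixes (m)
print(kalman_fuse(ins_v, gnss_z))
```

A real tightly coupled system would estimate a full state vector (attitude, velocity, sensor biases) and add the optical correlation-extremal fixes as further measurements, but the prediction/update structure is the same.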
Abstract:
We describe an investigation into how Massey University's Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image, and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University's pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real-world case study in which we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder's native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree, and random forest classifiers on these data, with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
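A rough sketch of such a classifier comparison, using scikit-learn with synthetic data in place of the (non-public) Classifynder feature set, might look like this:

```python
# Minimal sketch with synthetic data standing in for Classifynder image
# features (the real feature set is not reproduced here): comparing the
# classifier families evaluated in the article with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=30, n_informative=12,
                           n_classes=4, random_state=0)   # stand-in pollen data
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", C=1.0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```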
Abstract:
Generating discriminative input features is a key requirement for achieving highly accurate classifiers. The process of generating features from raw data is known as feature engineering, and it can take significant manual effort. In this paper we propose automated feature engineering to derive a suite of additional features from a given set of basic features, with the aim both of improving classifier accuracy through discriminative features and of assisting data scientists through automation. Our implementation is specific to HTTP computer network traffic. To measure the effectiveness of our proposal, we compare the performance of a supervised machine learning classifier built with automated feature engineering against one using human-guided features. The classifier addresses a problem in computer network security, namely the detection of HTTP tunnels. We use Bro to process network traffic into base features and then apply automated feature engineering to calculate a larger set of derived features. The derived features are calculated without favour to any base feature and include entropy, length, and N-grams for all string features, and counts and averages over time for all numeric features. Feature selection is then used to find the most relevant subset of these features. Testing showed that both classifiers achieved a detection rate above 99.93% at a false positive rate below 0.01%. For our datasets, we conclude that automated feature engineering can increase classifier development speed and reduce technical difficulty by removing manual feature engineering, while maintaining classification accuracy.
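A minimal sketch of the derived-feature computation (the field names are hypothetical, and Bro/Zeek parsing and the feature-selection step are omitted) could look like this in Python:

```python
# Minimal sketch of the derived-feature idea: entropy, length and character
# n-grams for string fields, plus counts and means for numeric fields.
import math
from collections import Counter

def entropy(s):
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values()) if s else 0.0

def ngrams(s, n=2):
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def derive_features(http_records):
    uris = [r["uri"] for r in http_records]
    lengths = [r["body_len"] for r in http_records]
    feats = {
        "uri_entropy_mean": sum(entropy(u) for u in uris) / len(uris),
        "uri_len_mean": sum(len(u) for u in uris) / len(uris),
        "request_count": len(http_records),
        "body_len_mean": sum(lengths) / len(lengths),
    }
    # Top bigram frequency as one representative n-gram feature.
    all_bigrams = Counter()
    for u in uris:
        all_bigrams += ngrams(u)
    feats["uri_top_bigram_freq"] = max(all_bigrams.values()) if all_bigrams else 0
    return feats

records = [{"uri": "/index.html", "body_len": 512},
           {"uri": "/aZ9x/q0T7=", "body_len": 8192}]   # hypothetical parsed records
print(derive_features(records))
```

In the paper's pipeline such derived features would be computed uniformly for every base feature and then pruned by feature selection before training the tunnel detector.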
Abstract:
In the near future, robots and CG (computer graphics) characters will be required to exhibit creative behaviors that reflect designers' abstract images and emotions. However, there are no effective methods for developing such abstract images and emotions or for supporting designers in designing creative behaviors that reflect them. Analogy and blending are two methods known to be very effective for designing creative behaviors. The aim of this study is to propose a method for developing designers' abstract behavioral images and emotions and giving shape to them by constructing a computer system that supports a designer in creating the desired behavior. This method focuses on deriving inspiration from the behavioral aspects of natural phenomena rather than simply mimicking them. We have proposed two new methods for developing abstract behavioral images and emotions by which a designer can draw analogies from natural things such as animals and plants, even when the number of joints differs between the natural object and the design target. The first method uses visual behavioral images; the second uses rhythmic behavioral images. We have demonstrated examples of designed behaviors to verify the effectiveness of the proposed methods.
Abstract:
This work deals with two related areas: processing of visual information in the central nervous system, and the application of computer systems to research in neurophysiology.
Certain classes of interneurons in the brain and optic lobes of the blowfly Calliphora phaenicia were previously shown to be sensitive to the direction of motion of visual stimuli. These units were identified by visual field, preferred direction of motion, and the anatomical location from which they were recorded. The present work addresses two questions: (1) is there interaction between pairs of these units, and (2) if such relationships can be found, what is their nature? To answer these questions it is essential to record from two or more units simultaneously, and to use more than a single recording electrode if the recording points are to be chosen independently. Accordingly, such techniques were developed and are described.
One must also have practical, convenient means for analyzing the large volumes of data so obtained. It is shown that use of an appropriately designed computer system is a profitable approach to this problem. Both hardware and software requirements for a suitable system are discussed, and an approach to computer-aided data analysis is developed. A description is given of a collection of application programs developed for the analysis of neurophysiological data and operated within, and with support from, an appropriate computer system. In particular, techniques developed for the classification of multiple units recorded on the same electrode are illustrated, as are methods for convenient graphical manipulation of data via a computer-driven display.
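As a modern, illustrative analogue of the multi-unit classification problem (not the thesis's programs), the following sketch separates two units recorded on one electrode by clustering simple spike-waveform features:

```python
# Illustrative analogue only: separating two units on one electrode by
# 2-means clustering of (peak, trough) amplitude features of each spike.
import random

def spike_features(waveform):
    """Reduce a spike waveform to (peak amplitude, trough amplitude)."""
    return (max(waveform), min(waveform))

def dist(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def mean(pts):
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

def kmeans_2(points, iters=20):
    """Tiny 2-means clustering of 2-D feature points."""
    c1, c2 = points[0], points[-1]
    for _ in range(iters):
        g1 = [p for p in points if dist(p, c1) <= dist(p, c2)]
        g2 = [p for p in points if dist(p, c1) > dist(p, c2)]
        c1 = mean(g1) if g1 else c1
        c2 = mean(g2) if g2 else c2
    return c1, c2

# Hypothetical spikes: unit A has large amplitudes, unit B small amplitudes.
rng = random.Random(1)
spikes = ([[0.0, rng.gauss(1.0, 0.05), rng.gauss(-0.4, 0.05), 0.0] for _ in range(20)] +
          [[0.0, rng.gauss(0.4, 0.05), rng.gauss(-0.1, 0.05), 0.0] for _ in range(20)])
features = [spike_features(w) for w in spikes]
print("estimated unit templates:", kmeans_2(features))
```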
By means of multiple electrode techniques and the computer-aided data acquisition and analysis system, the path followed by one of the motion detection units was traced from one optic lobe through the brain and into the opposite lobe. It is further shown that this unit and its mirror image in the opposite lobe have a mutually inhibitory relationship, and this relationship is investigated. The existence of interaction between other pairs of units is also shown. For pairs of units responding to motion in the same direction, the relationship is of an excitatory nature; for those responding to motion in opposed directions, it is inhibitory.
Experience gained from use of the computer system is discussed, and a critical review of the current system is given. The most useful features of the system were found to be its fast response, the ability to move from one analysis technique to another rapidly and conveniently, and the interactive nature of the display system. The shortcomings of the system were problems in real-time use and the programming barrier: the fact that building new analysis techniques requires a high degree of programming knowledge and skill. It is concluded that computer systems of the kind discussed will play an increasingly important role in studies of the central nervous system.
Abstract:
Geographic Information Systems (GIS) have been increasingly studied as tools that facilitate territorial analyses aimed at supporting environmental management. Ilha Grande, which belongs to the municipality of Angra dos Reis, is located in Ilha Grande Bay in the south of the state of Rio de Janeiro and constitutes the spatial unit of this analysis. It presents a complex environmental dynamic, in which uses for environmental protection and tourist activity overlap in a portion of territory where legal regulations are difficult to enforce, since they reflect interests expressed at three levels of government: municipal, state, and federal. The main objective of this research is to carry out digital image processing to support the territorial management of Ilha Grande. The focus is the Abraão - Dois Rios road, which connects Abraão (the tourists' landing point and the main settlement of the island) to Dois Rios (visited by students and researchers, the settlement that once housed the prison and today hosts the research center and museum of the Universidade do Estado do Rio de Janeiro), both protected by different categories of conservation units. The methodology is based on digital image processing through segmentation and supervised per-pixel and per-region classification. Processing started from segmentation (the division of a digital image into multiple regions or objects in order to simplify and/or change the representation of an image) and proceeded to image classification, using per-pixel classification and per-region classification (with the Bhattacharyya algorithm). The segmentations and classifications were processed in the SPRING software, version 5.1.7, and are intended to support land-use analysis and to project scenarios based on the identification of focal points of fragility along the Abraão - Dois Rios road, which are prone to mass movements and which intensify the forest edge effect and environmental impacts. The methodology also relied on field analysis and on comparisons of image classification technologies. This road, the axis linking the two settlements, has significant importance in the history of the island; heavy and light service vehicles, pedestrians, and tourists travel along it. As results of the present research, per-pixel classification maps, per-region classification maps, a fuzzy map intersecting the supervised per-region classification maps, and maps of the field-collected sites where mass movements are observed were generated from the ALOS (2000) and IKONOS (2003) images and the 2006 orthophotographs. These maps are intended to support decision-making by the responsible local agencies.
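As an illustration of per-region classification with the Bhattacharyya measure (a simplified sketch, not SPRING's implementation; the pixel values and land-cover classes are hypothetical), consider:

```python
# Minimal sketch: assign an image region to the training class with the
# smallest Bhattacharyya distance between grey-level histograms.
import math

def histogram(pixels, bins=8, max_val=255):
    h = [0.0] * bins
    for p in pixels:
        h[min(p * bins // (max_val + 1), bins - 1)] += 1
    total = sum(h)
    return [v / total for v in h]

def bhattacharyya(p, q):
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))   # Bhattacharyya coefficient
    return -math.log(bc) if bc > 0 else float("inf")

def classify_region(region_pixels, class_samples):
    hr = histogram(region_pixels)
    return min(class_samples,
               key=lambda c: bhattacharyya(hr, histogram(class_samples[c])))

class_samples = {                       # hypothetical training pixels per class
    "forest": [20, 25, 30, 22, 28, 26, 24, 27],
    "bare soil": [180, 190, 200, 175, 185, 195, 188, 192],
}
segment = [23, 29, 31, 26, 24, 28]      # pixels of one segmented region
print("region classified as:", classify_region(segment, class_samples))
```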
Abstract:
This report describes a computer system that creates simple computer animation in response to high-level, vague, and incomplete descriptions of films. It makes its films by collecting and evaluating suggestions from several different bodies of knowledge. The order in which it makes its choices is influenced by the focus of the film. Difficult choices are postponed and taken up again when more of the film has been determined. The system was implemented in an object-oriented language based upon computational entities called "actors". The goal behind the construction of the system is that, whenever faced with a choice, it should sensibly choose between alternatives based upon the description of the film and as much general knowledge as possible. The system is presented as a computational model of creativity and aesthetics.
Abstract:
SIR is a computer system, programmed in the LISP language, which accepts information and answers questions expressed in a restricted form of English. This system demonstrates what can reasonably be called an ability to "understand" semantic information. SIR's semantic and deductive ability is based on the construction of an internal model, which uses word associations and property lists, for the relational information normally conveyed in conversational statements. A format-matching procedure extracts semantic content from English sentences. If an input sentence is declarative, the system adds appropriate information to the model. If an input sentence is a question, the system searches the model until it either finds the answer or determines why it cannot find the answer. In all cases SIR reports its conclusions. The system has some capacity to recognize exceptions to general rules, resolve certain semantic ambiguities, and modify its model structure in order to save computer memory space. Judging from its conversational ability, SIR is a first step toward intelligent man-machine communication. The author proposes a next step by describing how to construct a more general system which is less complex and yet more powerful than SIR. This proposed system contains a generalized version of the SIR model, a formal logical system called SIR1, and a computer program for testing the truth of SIR1 statements with respect to the generalized model by using partial proof procedures in the predicate calculus. The thesis also describes the formal properties of SIR1 and how they relate to the logical structure of SIR.
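The flavor of SIR's property-list model can be suggested by a toy Python analogue (not Raphael's LISP code; the sentence patterns handled here are deliberately tiny and hypothetical):

```python
# Toy analogue in Python: a property-list style model that stores
# set-inclusion relations from statements and searches them transitively
# to answer questions, in the spirit of SIR.
model = {}   # word -> property list, e.g. {"boy": {"is-a": {"person"}}}

def tell(statement):
    """Handle declaratives of the hypothetical form 'Every X is a Y.'"""
    words = statement.lower().rstrip(".").split()
    if words[:1] == ["every"] and words[2:3] == ["is"]:
        x, y = words[1], words[-1]
        model.setdefault(x, {}).setdefault("is-a", set()).add(y)

def ask(question):
    """Handle questions of the hypothetical form 'Is a X a Y?'"""
    words = question.lower().rstrip("?").split()
    x, y = words[2], words[-1]
    frontier, seen = [x], set()
    while frontier:                      # search the model transitively
        w = frontier.pop()
        if w == y:
            return "Yes"
        seen.add(w)
        frontier += [p for p in model.get(w, {}).get("is-a", set()) if p not in seen]
    return "Insufficient information"

tell("Every boy is a person.")
tell("Every person is a mammal.")
print(ask("Is a boy a mammal?"))      # -> Yes
print(ask("Is a boy a reptile?"))     # -> Insufficient information
```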
Abstract:
Assessment of infant pain is a pressing concern, especially within the context of neonatal intensive care, where infants may be exposed to prolonged and repeated pain during lengthy hospitalization. In the present study the feasibility of carrying out the complete Neonatal Facial Coding System (NFCS) in real time at bedside, specifically its reliability, construct validity, and concurrent validity, was evaluated in a tertiary level Neonatal Intensive Care Unit (NICU). Heel lance was used as a model of procedural pain and observed with n = 40 infants at 32 weeks gestational age. Infant sleep/wake state, NFCS facial activity, and specific hand movements were coded during baseline, unwrap, swab, heel lance, squeezing, and recovery events. Heart rate was recorded continuously and digitally sampled using a custom-designed computer system. Repeated measures analysis of variance (ANOVA) showed statistically significant differences across events for facial activity (P < 0.0001) and heart rate (P < 0.0001). Planned comparisons showed facial activity unchanged during baseline, swab, and unwrap, then increased significantly during heel lance (P < 0.0001), increased further during squeezing (P < 0.003), then decreased during recovery (P < 0.0001). Systematic shifts in sleep/wake state were apparent. The rise in facial activity was consistent with increased heart rate, except that facial activity more closely paralleled initiation of the invasive event; thus facial display was more specific to tissue damage than heart rate. Inter-observer reliability was high. Construct validity of the NFCS at bedside was demonstrated, as invasive procedures were distinguished from tactile ones. While bedside coding of behavior does not permit raters to be blind to events, mechanical recording of heart rate allowed an independent source of concurrent validation for bedside application of the NFCS scale.
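A minimal sketch of the repeated-measures analysis, using statsmodels with synthetic numbers in place of the study's data, might look like this:

```python
# Minimal sketch (synthetic numbers, not the study's data): a repeated-
# measures ANOVA of a response across procedural events, analogous to
# comparing facial activity or heart rate across baseline, lance, etc.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
events = ["baseline", "swab", "lance", "squeeze", "recovery"]
shift = {"baseline": 0, "swab": 1, "lance": 8, "squeeze": 10, "recovery": 3}
rows = [{"infant": i, "event": e, "facial_activity": shift[e] + rng.normal(0, 1)}
        for i in range(1, 21) for e in events]          # 20 hypothetical infants
df = pd.DataFrame(rows)

# One within-subject factor (event); infants are the repeated unit.
print(AnovaRM(df, depvar="facial_activity", subject="infant", within=["event"]).fit())
```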
Abstract:
Traditionally, we've focussed on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only 1 million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model, and how new layers of abstraction could further enrich this model.