994 results for Internet security applications


Relevance:

80.00%

Publisher:

Abstract:

This paper presents a cooperative caching architecture suitable for continuous media (CM) proxy caching in MANET environments. The proposed scheme introduces an application manager component, interposed between traditional Internet CM applications and the network layer. The application manager transparently performs data location and service migration of active CM streaming sessions so as to exploit nearby data sources based on the dynamic topology of a MANET. We propose two data location schemes: Cache-State, a link-state based scheme, and Reactive, an on-demand scheme. Since service migration can occur frequently, the application manager uses soft-state signaling techniques to communicate with remote application managers, translating hard-state application signaling such as the Real Time Streaming Protocol (RTSP) into soft-state messages. The proposed schemes are evaluated through simulation studies using the NS simulator. Simulation studies show that both the Cache-State and Reactive schemes deliver significant QoS improvements and reduced bandwidth consumption.
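
The soft-state translation described above can be illustrated with a minimal sketch: hard-state RTSP transactions (SETUP/TEARDOWN) are replaced by periodic refresh messages, so that session state survives only while refreshes keep arriving. Class and constant names, and the timing values, are illustrative assumptions rather than details from the paper.

```python
import time

# Assumed, illustrative timing values.
REFRESH_INTERVAL = 2.0                  # seconds between refresh messages
STATE_LIFETIME = 3 * REFRESH_INTERVAL   # state expires after missed refreshes

class SoftStateTable:
    """Session state that stays alive only while refreshes keep arriving."""

    def __init__(self):
        self._sessions = {}  # session_id -> expiry timestamp

    def refresh(self, session_id):
        # Session creation and keep-alive collapse into the same message,
        # standing in for RTSP's hard-state SETUP.
        self._sessions[session_id] = time.time() + STATE_LIFETIME

    def expire_stale(self):
        # No explicit TEARDOWN: state simply vanishes when refreshes stop,
        # which tolerates the frequent disconnections of a MANET.
        now = time.time()
        for sid in [s for s, t in self._sessions.items() if t < now]:
            del self._sessions[sid]

    def active(self):
        self.expire_stale()
        return list(self._sessions)
```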

Relevance:

80.00%

Publisher:

Abstract:

Wireless sensor networks (WSNs) are used in health monitoring, tracking and security applications. Such networks transfer data from specific areas to a nominated destination. Within the network, each sensor node acts as a routing element for other sensor nodes during the transmission of data, which can increase the node's energy consumption. In this paper, we propose a routing protocol for improving network lifetime and performance. The proposed protocol uses type-2 fuzzy logic to minimize the effects of uncertainty produced by environmental noise. Simulation results show that the proposed protocol outperforms a recently developed routing protocol in terms of extending network lifetime, saving energy and reducing data packet loss.
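
A minimal sketch can make the interval type-2 fuzzy idea concrete: each noisy input maps to an interval of membership degrees bounded by a lower and an upper membership function, rather than a single degree. The membership shapes and names below are assumptions for illustration, not the paper's rule base.

```python
def t2_membership_high_energy(energy, noise=0.1):
    """Interval type-2 membership for 'high residual energy', inputs in [0, 1].

    Environmental noise widens the footprint of uncertainty between the
    lower and upper membership functions.
    """
    center = max(0.0, min(1.0, energy))   # underlying type-1 degree
    lower = max(0.0, center - noise)      # lower membership function
    upper = min(1.0, center + noise)      # upper membership function
    return lower, upper

def route_score(energy, noise=0.1):
    """Defuzzify by averaging the interval (a simple type reduction)."""
    lo, hi = t2_membership_high_energy(energy, noise)
    return (lo + hi) / 2.0

# A node prefers the next hop with the best score under uncertainty:
neighbours = {"n1": 0.82, "n2": 0.64}   # noisy residual-energy readings
best = max(neighbours, key=lambda n: route_score(neighbours[n]))
```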

Relevance:

80.00%

Publisher:

Abstract:

Email has become the critical communication medium for most organizations. Unfortunately, email-borne attacks on computer networks are causing considerable economic losses worldwide. Existing phishing email blocking appliances have little effect in weeding out the vast majority of phishing emails, while online criminals are becoming more dangerous and sophisticated. Phishing emails are more active than ever before and put the average computer user and organizations at risk of significant data, brand and financial loss. In this paper, we propose a hybrid feature selection approach based on a combination of content-based and behaviour-based features, where the behavioural features mine the attacker's behaviour from the email header. On a publicly available test corpus, our hybrid feature selection achieves a 94% accuracy rate.
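
To give a flavour of behaviour-based features mined from an email header, here is a hypothetical sketch using Python's standard email parser; the paper's actual feature set is not reproduced, and the three features below are common illustrative choices.

```python
from email import message_from_string

def header_features(raw_email: str) -> dict:
    """Extract a few behaviour-oriented features from an email header."""
    msg = message_from_string(raw_email)
    received = msg.get_all("Received") or []
    return {
        # A Reply-To that differs from From often signals spoofing.
        "reply_to_differs": (msg.get("Reply-To") or "")
                            not in (msg.get("From") or ""),
        # Unusually long relay chains can hide the true origin.
        "num_received_hops": len(received),
        # Legitimate mail servers almost always set a Message-ID.
        "missing_message_id": msg.get("Message-ID") is None,
    }
```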

Relevance:

80.00%

Publisher:

Abstract:

This paper is devoted to a case study of a new construction of classifiers, called automatically generated multi-level meta classifiers (AGMLMC). The construction combines diverse meta classifiers in a new way to create a unified system, and can be generated automatically to produce classifiers with a large number of levels: different meta classifiers are incorporated as low-level integral parts of another meta classifier at the top level. The construction is intended for distributed computing and networking, since AGMLMC classifiers are unified classifiers with many parts that can operate in parallel, which makes them easy to adopt in distributed applications. This paper introduces the new construction and undertakes an experimental study of its performance, using the detection and filtering of phishing emails as a case study, a potentially important application area for such large and distributed classification systems. Our experiments investigate the effectiveness of combining diverse meta classifiers into one AGMLMC classifier for this task. The results show that the new classifiers with large numbers of levels achieved better performance than the base classifiers and simple meta classifiers, demonstrating that the technique can increase performance when diverse meta classifiers are included in the system.
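
The layering idea can be sketched generically with scikit-learn's stacking, where meta classifiers become base estimators of a higher-level meta classifier; this is an illustration under assumed component choices, not the authors' implementation.

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def meta_level(estimators):
    """One meta classifier combining diverse lower-level classifiers."""
    return StackingClassifier(estimators=estimators,
                              final_estimator=LogisticRegression())

# Level 1: two diverse meta classifiers over different base learners.
meta_a = meta_level([("nb", GaussianNB()),
                     ("dt", DecisionTreeClassifier())])
meta_b = meta_level([("rf", RandomForestClassifier()),
                     ("lr", LogisticRegression())])

# Level 2: a meta classifier whose integral parts are themselves meta
# classifiers; repeating this step generates systems with more levels.
two_level = meta_level([("meta_a", meta_a), ("meta_b", meta_b)])
# two_level.fit(X_train, y_train); two_level.predict(X_test)
```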

Relevance:

80.00%

Publisher:

Abstract:

As the new millennium approaches, we are living in a society that is increasingly dependent upon information technology. However, whilst technology can deliver a number of benefits, it also introduces new vulnerabilities that can be exploited by persons with the necessary technical skills. Hackers represent a well-known threat in this respect and are responsible for a significant degree of disruption and damage to information systems. However, they are not the only criminal element that has to be taken into consideration. Evidence suggests that technology is increasingly seen as a potential tool for terrorist organizations. This is leading to the emergence of a new threat in the form of 'cyber terrorists', who attack technological infrastructures such as the Internet in order to help further their cause. The paper discusses the problems posed by these groups and considers the nature of the responses necessary to preserve the future security of our society.

Relevance:

80.00%

Publisher:

Abstract:

This thesis comprises three chapters. The first article studies the determinants of the labor force participation of elderly American males and investigates the factors that may account for the changes in retirement between 1950 and 2000. We develop a life-cycle general equilibrium model with endogenous retirement that embeds Social Security legislation and Medicare. Individuals are ex ante heterogeneous with respect to their preferences for leisure and face uncertainty about labor productivity, health status and out-of-pocket medical expenses. The model is calibrated to the U.S. economy in 2000 and reproduces very closely the retirement behavior of the American population: it reproduces the peaks in the distribution of Social Security applications at ages 62 and 65, the observed facts that low earners and unhealthy individuals retire earlier, and the increase in retirement from 1950 to 2000. Changes in Social Security policy, which became much more generous, and the introduction of Medicare account for most of the expansion of retirement. In contrast, the isolated impact of the increase in longevity was to delay retirement.

In the second article, I develop an overlapping generations model of criminal behavior, which extends prior research on crime by taking into account individuals' labor supply decisions and the stigma effect that affects convicted offenders, lowering their likelihood of employment. I use the model to guide a quantitative assessment of the determinants of crime and a counterfactual experiment in which an income redistribution policy is considered as an alternative to greater law enforcement. The model economy is populated by heterogeneous agents who live for a realistic number of periods, have preferences over consumption and leisure, and differ in terms of their age, their skills and their employment shocks. In addition, savings may be precautionary and allow partial insurance against labor income shocks. Because of the lack of full insurance, the model generates an endogenous distribution of wealth across consumers, enabling us to assess the welfare implications of the redistribution policy experiment. I calibrate the model using US data for 1980 and then use it to investigate the changes in criminality between 1980 and 1996. The main results of this study are: 1) law enforcement policy was the most important factor behind the fall in criminality in the period, while the increase in inequality was the most important single factor promoting crime; 2) stigmatization is not a cost-free crime control policy; 3) income redistribution can be a powerful alternative policy to fight crime.

Finally, the third article studies the impact of HIV/AIDS on per capita income and education. It explores two channels from HIV/AIDS to income that have not been sufficiently stressed in the literature: the reduction of the incentives to study due to shorter expected longevity, and the reduction of the productivity of experienced workers. In the model, individuals live for three periods, may become infected in the second period and, with some probability, die of AIDS before reaching the third period of their life. Parents care about the welfare of future generations, so they maximize the lifetime utility of their dynasty. The simulations predict that the most affected countries in Sub-Saharan Africa will in the future be, on average, thirty percent poorer than they would be without AIDS, and schooling will decline in some cases by forty percent. These figures are dramatically reduced with widespread medical treatment, as it increases the survival probability and productivity of infected individuals.
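
The dynastic objective in the third article can be written schematically as follows; the notation is illustrative rather than quoted from the thesis:

```latex
\[
V_t \;=\; u(c_{1,t}) \;+\; \beta\, u(c_{2,t+1})
      \;+\; \beta^{2}\, \pi\, u(c_{3,t+2}) \;+\; \alpha\, V_{t+1}
\]
```

where $\pi$ is the probability of surviving into the third period (lowered by AIDS mortality), $\beta$ is the discount factor, and $\alpha$ is the altruism weight on the next generation. A lower $\pi$ reduces the expected return to schooling, whose payoff accrues late in life, which is the first of the two channels highlighted above.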

Relevance:

80.00%

Publisher:

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance:

80.00%

Publisher:

Abstract:

Two of the main features of today's complex software systems, like pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running.

The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology provide the required level of uncoupling among system components. This is the main motivation behind current research trends in coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since such models intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely, scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in the tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because a different syntax was adopted). Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments, for example in experiments with tuple-based coordination within Semantic Web middleware. However, such approaches appear designed to tackle coordination for specific application contexts, like the Semantic Web and Semantic Web Services, and they result in rather involved extensions of the tuple space model.

The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps the simplicity of tuples and tuple matching as far as possible. By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem suitable as coordination media.
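
A minimal sketch of the tuple centre abstraction may help: a Linda-style space with out/rd/in primitives, extended with reactions that encapsulate coordination laws inside the medium. The semantic enrichment of the thesis would replace exact tuple matching with matching over a domain ontology; here a plain predicate stands in for it, and all names are illustrative.

```python
class TupleCentre:
    """A programmable tuple space: Linda primitives plus reactions."""

    def __init__(self):
        self.space = []
        self.reactions = []  # coordination laws, fired on each 'out'

    def out(self, tpl):
        self.space.append(tpl)
        for react in self.reactions:
            react(self, tpl)

    def rd(self, match):
        """Non-destructive read of the first tuple satisfying 'match'."""
        return next((t for t in self.space if match(t)), None)

    def in_(self, match):
        """Destructive read, as in Linda's 'in' primitive."""
        t = self.rd(match)
        if t is not None:
            self.space.remove(t)
        return t

tc = TupleCentre()
# A reaction deriving an alert tuple from raw sensor tuples (appending
# directly to the space so the reaction does not re-trigger itself).
tc.reactions.append(lambda c, t: c.space.append(("alert", t[1]))
                    if t[0] == "temperature" and t[1] > 30 else None)
tc.out(("temperature", 35))
print(tc.rd(lambda t: t[0] == "alert"))  # -> ('alert', 35)
```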

Relevance:

80.00%

Publisher:

Abstract:

Conventional inorganic materials for x-ray radiation sensors suffer from several drawbacks, including their inability to cover large curved areas, mechanical stiffness, lack of tissue-equivalence and toxicity. Semiconducting organic polymers represent an alternative and have been employed as direct photoconversion material in organic diodes. In contrast to inorganic detector materials, polymers allow low-cost and large-area fabrication by solvent-based methods. In addition, their processing is compliant with flexible low-temperature substrates. Flexible and large-area detectors are needed for dosimetry in medical radiotherapy and security applications. The objective of my thesis is to achieve optimized organic polymer diodes for flexible, direct x-ray detectors. To this end, polymer diodes based on two different semiconducting polymers, polyvinylcarbazole (PVK) and poly(9,9-dioctylfluorene) (PFO), have been fabricated. The diodes show state-of-the-art rectifying behaviour and hole transport mobilities comparable to reference materials. In order to improve the x-ray stopping power, high-Z nanoparticles (Bi2O3 or WO3) were added to realize a polymer-nanoparticle composite with optimized properties. X-ray detector characterization resulted in sensitivities of up to 14 µC/Gy/cm2 for PVK when the diodes were operated in reverse bias. The addition of nanoparticles further improved the performance, and a maximum sensitivity of 19 µC/Gy/cm2 was obtained for the PFO diodes. Compared to the pure PFO diode this corresponds to a five-fold increase and thus highlights the potential of nanoparticles for polymer detector design. Interestingly, the pure polymer diodes showed an order-of-magnitude increase in sensitivity when operated in the forward regime. This increase is attributed to a different detection mechanism based on the modulation of the diode's conductivity.
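
For reference, the quoted sensitivities follow the usual definition of dose sensitivity for an area detector (a standard relation, not an equation taken from the thesis):

```latex
\[
S \;=\; \frac{Q}{D \cdot A}
\qquad
\left[\frac{\mu\mathrm{C}}{\mathrm{Gy}\cdot\mathrm{cm}^{2}}\right]
\]
```

where $Q$ is the radiation-induced charge collected by the diode, $D$ the absorbed dose, and $A$ the active detector area.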

Relevance:

80.00%

Publisher:

Abstract:

Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for a minimum level of computational complexity. Speckle imaging methods have recently been proposed as well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed using a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the mean squared error (MSE) to measure reconstruction quality. In addition to the more general design parameters, the relative performance of the bispectrum and Knox-Thompson phase recovery methods is also compared. As an outcome of this work, it can be concluded that speckle imaging techniques are robust to the variation in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase recovery method is shown to produce images in half the time required by the bispectrum, while the quality of images reconstructed using the two methods is nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
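
A short sketch of the full-reference quality metric driving the evaluation; having simulated imagery means the pristine scene is available as a reference (variable names are illustrative):

```python
import numpy as np

def mse(reconstruction: np.ndarray, reference: np.ndarray) -> float:
    """Mean squared error between a reconstruction and the true scene."""
    diff = reconstruction.astype(float) - reference.astype(float)
    return float(np.mean(diff ** 2))
```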

Relevance:

80.00%

Publisher:

Abstract:

Biometrics applied to mobile devices are of great interest for security applications. Daily scenarios can benefit from a combination of the most secure systems and the most simple and widespread devices. This document presents a hand biometric system oriented to mobile devices, proposing a non-intrusive, contact-less acquisition process in which users take a picture of their hand in free space with a mobile device, without removing rings, bracelets or watches. The main contribution of this paper is threefold: first, a feature extraction method is proposed that provides hand measurements invariant to such changes; second, a template creation method based on hand geometric distances is provided, requiring information from only one individual, without considering data from the rest of the individuals in the database; finally, a template matching proposal is made, minimizing intra-class variation and maximizing inter-class separation. The proposed method is evaluated using three publicly available contact-less, platform-free databases. In addition, the results obtained with these databases are compared to the results provided by two competitive pattern recognition techniques often employed in the literature, namely Support Vector Machines (SVM) and k-Nearest Neighbour. This approach therefore provides an appropriate solution for adapting hand biometrics to mobile devices, with accurate results and a non-intrusive acquisition procedure that increases overall acceptance by the final user.
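
The template creation and matching steps can be sketched as follows under assumed statistics (per-user mean and spread of geometric distances); this illustrates the single-individual enrolment idea, not the paper's exact algorithm.

```python
import numpy as np

def make_template(measure_vectors):
    """Build a template from several measurement vectors (e.g. finger
    lengths and widths) taken from one individual only, with no data
    from other users in the database."""
    m = np.asarray(measure_vectors, dtype=float)
    return m.mean(axis=0), m.std(axis=0) + 1e-9  # avoid division by zero

def matches(sample, template, threshold=3.0):
    """Accept if every measurement lies within 'threshold' deviations."""
    mean, std = template
    z = np.abs((np.asarray(sample, dtype=float) - mean) / std)
    return bool(np.all(z < threshold))
```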

Relevance:

80.00%

Publisher:

Abstract:

Synthetic Aperture Radar (SAR) images a target region's reflectivity function in the multi-dimensional spatial domain of range and cross-range with a finer azimuth resolution than that provided by any on-board real antenna. Conventional SAR techniques assume a single reflection of transmitted waveforms from targets. Nevertheless, new uses of Unmanned Aerial Vehicles (UAVs) for civilian security applications force SAR systems to work in much more complex scenes, such as urban environments. Consequently, multiple-bounce returns are superposed on the direct-scatter echoes. These are known as ghost images, since they obscure the true target image and lead to poor resolution, which may pose a significant problem in applications related to surveillance and security. In this work, an innovative multipath mitigation technique is presented in which the Time Reversal (TR) concept is applied to SAR images when the target is concealed in clutter, leading to the TR-SAR technique. In this way, the effect of multipath is considerably reduced, or even removed, recovering the resolution lost due to multipath propagation. Furthermore, focusing indicators such as entropy (E), contrast (C) and Rényi entropy (RE) provide a good focusing criterion when using TR-SAR.
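
The three focusing indicators can be computed directly from the normalized image intensity; the sketch below follows their usual definitions, which may differ in detail from the paper's implementation.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the intensity distribution (lower = sharper)."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def image_contrast(img):
    """Intensity contrast, std over mean (higher = sharper)."""
    a = np.abs(img) ** 2
    return float(a.std() / a.mean())

def renyi_entropy(img, alpha=3.0):
    """Renyi entropy of order alpha (lower = sharper)."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    return float(np.log((p ** alpha).sum()) / (1.0 - alpha))
```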

Relevance:

80.00%

Publisher:

Abstract:

This work is focused on the problem of performing multi-robot patrolling for infrastructure security applications, in order to protect a known environment at critical facilities. Given a set of robots and a set of points of interest, the patrolling task consists of constantly visiting these points at irregular time intervals for security purposes. Currently existing solutions for these types of applications are predictable and inflexible; most previous solutions are centralized and deterministic, and only a few efforts have been made to integrate dynamic methods. This work therefore develops new dynamic and decentralized collaborative approaches that solve the aforementioned problem by implementing learning models from Game Theory. The model selected in this work, which includes belief-based and reinforcement models as special cases, is called Experience-Weighted Attraction, as sketched below. The problem has been defined using concepts of Graph Theory to represent the environment, in order to work with such Game Theory techniques. Finally, the proposed methods have been evaluated experimentally using a patrolling simulator, and the results obtained have been compared with previously available approaches.
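
For reference, the Experience-Weighted Attraction update in Camerer and Ho's standard formulation is sketched below; the patrolling-specific payoffs and adaptation of the thesis are not reproduced:

```latex
\[
N(t) = \rho\, N(t-1) + 1,
\qquad
A_i^j(t) = \frac{\phi\, N(t-1)\, A_i^j(t-1)
  + \bigl[\delta + (1-\delta)\,\mathbf{1}\{s_i^j = s_i(t)\}\bigr]\,
    \pi_i\bigl(s_i^j, s_{-i}(t)\bigr)}{N(t)}
\]
```

Here $A_i^j$ is player $i$'s attraction to strategy $s_i^j$, updated from the payoff $\pi_i$ it earned (or would have earned, weighted by $\delta$) given the others' play, and strategies are then chosen with logit probabilities proportional to $e^{\lambda A_i^j(t)}$. Setting $\delta = 1$ recovers belief-based (fictitious-play) learning and $\delta = 0$ recovers reinforcement learning, which is why both are special cases of the model.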

Relevance:

80.00%

Publisher:

Abstract:

Game Theory principles allow the development of stochastic multi-robot patrolling models to protect critical infrastructures. Critical infrastructure protection is a great concern for countries around the world, mainly due to the terrorist attacks of the last decade. In this document, the term infrastructure includes airports, nuclear power plants, and many other facilities. The patrolling problem is defined as the activity of traversing a given environment to monitor any activity or to sense some environmental variables. If this activity is performed by a fleet of robots, they have to visit a set of places of interest in an environment at irregular intervals of time for security purposes. This problem is solved using multi-robot patrolling models. To date, works in the literature have solved this problem by applying various mathematical principles, and the multi-robot patrolling models developed in those works represent great advances in this field. However, the models that obtain the best results are unfeasible for security applications due to their centralized and predictable nature. This thesis presents five distributed and unpredictable multi-robot patrolling models based on mathematical learning models derived from Game Theory. These multi-robot patrolling models aim at overcoming the disadvantages of previous work. To this end, the multi-robot patrolling problem was formulated using concepts of Graph Theory to represent the environment, and several normal-form games were defined at each vertex of a graph in this formulation. The multi-robot patrolling models developed in this research work have been validated and compared with the best-ranked multi-robot patrolling models in the literature. Both validation and comparison were performed using a patrolling simulator and a group of real robots. Experimental results show that the multi-robot patrolling models developed in this research work improve on previous ones in as many as 80% of 150 case studies. Moreover, these multi-robot patrolling models offer several features valuable in security applications, such as distribution, robustness, scalability, and dynamism. The achievements of this research work give evidence of the potential of Game Theory for developing patrolling models to protect infrastructures.

Relevance:

80.00%

Publisher:

Abstract:

Market orientation strategies are now expected to be integrated and enacted by firms and governments alike. While private services will surely continue to take the lead in mobile strategy orientation, others, such as governments and Non-Governmental Organizations (NGOs), are also becoming prominent Mobile Players (m-Players). Enhanced data services through smartphones are raising expectations that governments will finally deliver services in line with a consumer ICT lifestyle. To date, it is not certain which technological standard will take the lead, e.g. enhanced m-services or traditional Internet-based applications. Yet, with the introduction of interactive applications and fully transactional services via 3G smartphones, the currently untapped segment of the population (those without computers) has the potential to gain access to government services at low cost.