880 results for Internet security applications
Abstract:
This text describes the design and subsequent implementation of the new automation platform for the service offered by Internet Security Auditors, S.L., aimed at analysing Internet domains in order to detect possible infections affecting web users. The current system has several deficiencies, so this text presents a new version that brings very significant improvements, such as more efficient management and a renewed, scalable design of the information and the various processes. The system is also given centralised error control, with real-time alarm dispatch, and grouping and centralisation of the results.
Abstract:
Peer-reviewed
Abstract:
Nowadays, the car has become the most widely used mode of transport, but unfortunately it comes with a number of problems (accidents, pollution, traffic jams, etc.) that will worsen with the expected increase in the number of private cars; despite the very significant efforts made to reduce it, the number of road deaths remains very high. Vehicular ad hoc networks, called VANETs, which consist of several mobile vehicles communicating without any pre-existing infrastructure, are currently receiving increased attention from manufacturers and researchers, with a view to improving road safety and driver-assistance services. For example, they can warn other drivers that roads are slippery or that an accident has just occurred. In VANETs, broadcast protocols play a very important role compared with unicast messages, because they are designed to deliver important safety messages to all nodes. These broadcast protocols are not reliable and suffer from several problems, namely: (1) broadcast storm; (2) hidden node; (3) transmission failure. These problems must be solved in order to provide fast and reliable dissemination. The objective of our research is to solve some of these problems while ensuring the best trade-off between reliability, guaranteed delay, and guaranteed throughput (Quality of Service, QoS). The research work in this thesis focused on developing a new technique that can be used to manage medium access rights (a broadcast management protocol), cluster management, and communication. This protocol integrates a centralised approach for managing stable clusters with data transmission. In this technique, time is divided into cycles; each cycle is shared between the service and control channels and divided into two parts. The first part relies on TDMA (Time Division Multiple Access). The second part relies on CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) to manage medium access. In addition, our protocol adaptively adjusts the time spent broadcasting safety messages, which improves channel capacity. It is implemented in the MAC (Medium Access Control) layer and centralised in the cluster heads (CH), which continuously adapt to vehicle dynamics. Thus, using this centralised protocol ensures efficient consumption of time slots for the exact number of active vehicles, including hidden nodes/vehicles; our protocol also guarantees a bounded delay for safety applications to access the communication channel, and it reduces overhead through directed broadcast propagation.
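As a rough illustration of the cycle structure described in this abstract, the following Python sketch shows how a cluster head might lay out per-vehicle TDMA slots followed by a CSMA/CA contention window. All names, durations, and the CycleSchedule class are hypothetical assumptions; the thesis's actual protocol is not reproduced here.

```python
# Hypothetical sketch of the cycle layout described in the abstract: each cycle
# is split into a TDMA part (one slot per active cluster member, assigned by
# the cluster head) and a CSMA/CA contention part. All names and parameters
# are illustrative assumptions, not the thesis's actual implementation.
from dataclasses import dataclass

@dataclass
class CycleSchedule:
    slot_ms: float  # duration of one TDMA slot (assumed)
    csma_ms: float  # duration of the CSMA/CA contention part (assumed)

    def build(self, active_vehicles: list[str]) -> dict:
        """Cluster head assigns one TDMA slot per active vehicle, then
        leaves the remainder of the cycle for CSMA/CA contention."""
        tdma = {vid: i * self.slot_ms for i, vid in enumerate(active_vehicles)}
        tdma_len = len(active_vehicles) * self.slot_ms
        return {
            "tdma_slots": tdma,                   # vehicle id -> slot start (ms)
            "csma_window": (tdma_len, tdma_len + self.csma_ms),
            "cycle_ms": tdma_len + self.csma_ms,  # adapts to cluster size
        }

# The cycle length adapts to the exact number of active vehicles, which is the
# slot-efficiency property the abstract claims for the centralised protocol.
ch = CycleSchedule(slot_ms=1.0, csma_ms=10.0)
print(ch.build(["v1", "v2", "v3"]))
```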
TactoColor: design and evaluation of a spatial web exploration interface for the visually impaired.
Abstract:
In this research, we are interested in Internet access for visually impaired people. Several types of tools for this audience are available on the market, such as screen readers and screen magnifiers, depending on the person's visual acuity. Although these tools are useful and regularly used, visually impaired people (as well as blind people) often mention how frustrating they are. Several reasons are cited, such as the lack of spatial organisation of the content read by screen readers, or the fact that only one sense is engaged. The present research adapts for the visually impaired a system under development, TactoWeb (Petit, 2013), which allows audio-tactile exploration of the Web. TactoWeb was designed for people with complete blindness and therefore offers no visual properties. We propose here an adaptation of the system for people with only partial visual impairment. We hope to provide this population with effective tools that will let them browse the Internet efficiently and pleasantly. Indeed, thanks to non-linear exploration (which should improve spatial orientation) and a multimodal interface (engaging sight, hearing, and touch), we expect to greatly reduce the feeling of frustration that visually impaired people report. We hypothesised that non-linear, trimodal exploration of a website with TactoColor is more satisfying and effective than non-linear, bimodal exploration with TactoWeb (without visual feedback). TactoColor was adapted for the visually impaired by adding visual cues reflecting the components of the page (links, menus, buttons), which should make exploration easier. To test our hypothesis, the two versions of the software were evaluated by visually impaired participants. Participants started either with TactoWeb or with TactoColor, so as not to favour one version. The quality, effectiveness, and efficiency of navigation were analysed based on the time needed to complete a task, as well as the ease or difficulty reported by each participant. At the end of each session, we also asked the participants for their opinion via an evaluation questionnaire, which gave us feedback on our software after their brief experience. All these measurements allowed us to determine that adding colours leads to faster exploration of web pages and better spatial orientation. However, the very different performances of the participants do not allow us to say whether the presence of colours makes task completion easier.
Abstract:
This paper presents a performance analysis of reversible, fault-tolerant VLSI implementations of carry-select and hybrid decimal adders suitable for multi-digit BCD addition. The designs enable partially parallel processing of all digits, which allows high-speed addition in the decimal domain. When the number of digits is more than 25, the hybrid decimal adder can operate five times faster than a conventional decimal adder using classical logic gates. For the reversible logic implementation, the speed-up factor of the hybrid adder rises above 10 when the number of decimal digits exceeds 25. Such high-speed decimal adders find applications in real-time processors and Internet-based applications. The implementations use only reversible, conservative Fredkin gates, which makes them suitable for VLSI circuits.
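For readers unfamiliar with the primitive named in the abstract, the following minimal Python sketch models the Fredkin (controlled-swap) gate and checks its two defining properties, reversibility and conservativeness. It illustrates the gate only; the carry-select and hybrid adder constructions are not reproduced.

```python
# A minimal sketch of the Fredkin (controlled-swap) gate named in the abstract.
# The gate is reversible and conservative: it preserves the number of 1 bits,
# and applying it twice restores the inputs.
def fredkin(c: int, a: int, b: int) -> tuple[int, int, int]:
    """If control c is 1, swap a and b; otherwise pass them through."""
    return (c, b, a) if c else (c, a, b)

# Check both properties over the full truth table.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits      # reversible: its own inverse
    assert sum(fredkin(*bits)) == sum(bits)      # conservative: 1-count kept
```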
Abstract:
Biometrics has become important in security applications. In comparison with many other biometric features, iris recognition has very high accuracy because it depends on the iris, which lies in a location that remains stable throughout human life, and the probability of finding two identical irises is close to zero. The identification system consists of several stages, including the segmentation stage, which is the most critical one. Current segmentation methods still have limitations in localizing the iris because they assume the pupil is circular. In this research, the Daugman method is applied to investigate segmentation techniques. Eyelid detection is included in this study as a further part of the segmentation stage, to localize the iris accurately and remove unwanted areas that might otherwise be included. The obtained iris region is encoded using Haar wavelets to construct the iris code, which contains the most discriminating features of the iris pattern. Hamming distance is used to compare iris templates in the recognition stage. The study uses the UBIRIS database. A comparative study of different edge detection operators is performed; the Canny operator is found to be best suited to extracting most of the edges used to generate the iris code. A recognition rate of 89% and a rejection rate of 95% are achieved.
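As an illustration of the matching step described above, the sketch below compares two binary iris codes by normalised Hamming distance and accepts a match when the distance falls below a threshold. The code length (2048 bits) and the 0.32 threshold are assumptions for illustration, not values taken from the paper.

```python
# A minimal sketch of the template-matching step: iris codes are compared with
# the normalised Hamming distance, and a pair is accepted as a match when the
# distance is below a threshold. Bit width and threshold are assumed values.
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary iris codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, size=2048)          # stored template (assumed size)
probe = enrolled.copy()
flip = rng.choice(2048, size=100, replace=False)  # simulate 100 noisy bits
probe[flip] ^= 1

THRESHOLD = 0.32  # assumed decision threshold
hd = hamming_distance(enrolled, probe)
print(f"HD = {hd:.3f} ->", "match" if hd < THRESHOLD else "non-match")
```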
Abstract:
In this computerized, globalised, Internet-connected world, our computers collect various types of information about every human being and store them in files buried deep on the hard drive. Files like the cache, browser history, and other temporary Internet files can be used to store sensitive information such as logins and passwords, names and addresses, and even credit card numbers. A hacker can obtain this information by illicit means and share it with someone else, or can install malicious software on your computer that will extract your sensitive and secret information. Identity theft poses a very serious problem to everyone today. If you have a driver's licence, a bank account, a computer, a ration card number, a PAN card number, an ATM card, or simply a social security number, you are more than at risk: you are a target. Whether you are new to the idea of identity theft or have some unanswered questions, the discussion below should bring you up to speed. Identity theft is a term used to refer to fraud that involves pretending to be someone else in order to steal money or obtain other benefits. It is a serious crime, and it has been increasing at a tremendous rate all over the world since the evolution of the Internet. There is widespread agreement that identity theft causes financial damage to consumers, lending institutions, retail establishments, and the economy as a whole. Surprisingly, little good public information is available about the scope of the crime and the actual damage it inflicts. Accounts of identity theft in the mass media and in film or literature have centred on the exploits of 'hackers' - variously lauded or reviled - who are depicted as cleverly subverting corporate firewalls or other data protection defences to gain unauthorized access to credit card details, personnel records, and other information. Reality is more complicated, with electronic identity fraud taking a range of forms. The impact of those forms is not necessarily quantifiable as a financial loss; it can involve intangible damage to reputation, time spent dealing with disinformation, and exclusion from particular services because a stolen name has been used improperly. Overall, we can consider electronic networks an enabler of identity theft, with the thief, for example, gaining information online for action offline, or laying the basis for theft or other injury online. As Fisher pointed out, "These new forms of high-tech identity and securities fraud pose serious risks to investors and brokerage firms across the globe." I am a victim of identity theft, and as a victim I felt the need to create awareness among computer and Internet users, particularly youngsters, in India. Nearly 70 per cent of India's population lives in villages. The Government of India has already started providing computer and Internet facilities even to remote villages through various rural development and rural upliftment programmes. Highly educated people, established companies, and world-famous financial institutions are becoming victims of identity theft. The question here is: how vulnerable are illiterate and innocent rural people if they are suddenly exposed to a new device through which someone can extract and exploit their personal data without their knowledge? In this research work, an attempt has been made to bring out the real problems associated with identity theft in developed countries from an economist's point of view.
Abstract:
We Know Where You Live is an entertaining and informative quiz show highlighting the dangers that result from a lack of awareness of Facebook's privacy and security settings. The game show is complemented by a short tutorial explaining those settings. The show is aimed at a wide audience and is suitable for all.
Abstract:
This paper presents an approach for automatic classification of pulsed Terahertz (THz), or T-ray, signals, highlighting their potential in biomedical, pharmaceutical, and security applications. T-ray classification systems supply a wealth of information about test samples and make possible the discrimination of heterogeneous layers within an object. In this paper, a novel technique involving the use of Auto-Regressive (AR) and Auto-Regressive Moving Average (ARMA) models on the wavelet transforms of measured T-ray pulse data is presented. Two example applications are examined: the classification of normal human bone (NHB) osteoblasts against human osteosarcoma (HOS) cells, and the identification of six different powder samples. A variety of model types and orders are used to generate descriptive features for subsequent classification. Wavelet-based de-noising with soft-threshold shrinkage is applied to the measured T-ray signals prior to modeling. For classification, a simple Mahalanobis distance classifier is used. After feature extraction, classification accuracy is 93% for cancerous versus normal cell types and 98% for the powders.
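The following Python sketch gives a simplified, hypothetical version of this pipeline: least-squares AR coefficients as features, and a Mahalanobis distance to a class mean for classification. The wavelet transform and soft-threshold de-noising steps are omitted, and all signals, orders, and shapes are invented for illustration; this is not the authors' actual implementation.

```python
# Simplified sketch: fit an AR model to a 1-D signal, use its coefficients as
# features, and classify with a Mahalanobis distance to a class mean. AR order
# and signal lengths are assumptions for illustration only.
import numpy as np

def ar_features(x: np.ndarray, order: int = 4) -> np.ndarray:
    """Least-squares AR(order) coefficients of a 1-D signal."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def mahalanobis(f: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    d = f - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Toy usage: build a feature distribution for one synthetic class, then score
# a probe signal by its distance to that class.
rng = np.random.default_rng(1)
class_a = np.array([ar_features(np.cumsum(rng.normal(size=256))) for _ in range(20)])
mean_a, cov_inv_a = class_a.mean(axis=0), np.linalg.inv(np.cov(class_a.T))
probe = ar_features(np.cumsum(rng.normal(size=256)))
print("distance to class A:", mahalanobis(probe, mean_a, cov_inv_a))
```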
Abstract:
Tropical Applications of Meteorology Using Satellite and Ground-Based Observations (TAMSAT) rainfall estimates are used extensively across Africa for operational rainfall monitoring and food security applications; thus, regional evaluations of TAMSAT are essential to ensure its reliability. This study assesses the performance of TAMSAT rainfall estimates, along with the African Rainfall Climatology (ARC), version 2; the Tropical Rainfall Measuring Mission (TRMM) 3B42 product; and the Climate Prediction Center morphing technique (CMORPH), against a dense rain gauge network over a mountainous region of Ethiopia. Overall, TAMSAT exhibits good skill in detecting rainy events but underestimates rainfall amount, while ARC underestimates both rainfall amount and rainy event frequency. Meanwhile, TRMM consistently performs best in detecting rainy events and capturing the mean rainfall and seasonal variability, while CMORPH tends to overdetect rainy events. Moreover, the mean difference in daily rainfall between the products and rain gauges shows increasing underestimation with increasing elevation. However, the distribution of satellite-gauge differences demonstrates that although 75% of retrievals underestimate rainfall, up to 25% overestimate rainfall over all elevations. Case studies using high-resolution simulations suggest the underestimation in the satellite algorithms is likely due to shallow convection with warm cloud-top temperatures, in addition to beam-filling effects in microwave-based retrievals from localized convective cells. The overestimation by IR-based algorithms is attributed to nonraining cirrus with cold cloud-top temperatures. These results stress the importance of understanding regional precipitation systems causing uncertainties in satellite rainfall estimates, with a view toward using this knowledge to improve rainfall algorithms.
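To make the kind of comparison reported here concrete, the sketch below groups daily satellite-gauge rainfall differences by gauge elevation to expose an elevation-dependent bias. All arrays, bin edges, and the synthetic bias are invented illustrations, not TAMSAT data or the study's actual evaluation code.

```python
# Hypothetical sketch: mean daily satellite-gauge rainfall difference, binned
# by gauge elevation. Data are synthetic; a real evaluation would use paired
# gauge and satellite retrievals.
import numpy as np

rng = np.random.default_rng(2)
elevation_m = rng.uniform(1500, 3500, size=200)  # gauge elevations (assumed)
gauge_mm = rng.gamma(2.0, 3.0, size=200)         # daily gauge rainfall (synthetic)
# Synthetic satellite product that underestimates more at higher elevation.
satellite_mm = gauge_mm * (1 - (elevation_m - 1500) / 8000) + rng.normal(0, 1, 200)

bins = np.array([1500, 2000, 2500, 3000, 3500])
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (elevation_m >= lo) & (elevation_m < hi)
    bias = np.mean(satellite_mm[sel] - gauge_mm[sel])
    print(f"{lo}-{hi} m: mean satellite-gauge difference = {bias:+.2f} mm/day")
```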
Abstract:
This thesis comprises three chapters. The first article studies the determinants of the labor force participation of elderly American males and investigates the factors that may account for the changes in retirement between 1950 and 2000. We develop a life-cycle general equilibrium model with endogenous retirement that embeds Social Security legislation and Medicare. Individuals are ex ante heterogeneous with respect to their preferences for leisure and face uncertainty about labor productivity, health status, and out-of-pocket medical expenses. The model is calibrated to the U.S. economy in 2000 and reproduces very closely the retirement behavior of the American population. It reproduces the peaks in the distribution of Social Security applications at ages 62 and 65 and the observed facts that low earners and unhealthy individuals retire earlier. It also matches very closely the increase in retirement from 1950 to 2000. Changes in Social Security policy - which became much more generous - and the introduction of Medicare account for most of the expansion of retirement. In contrast, the isolated impact of the increase in longevity was to delay retirement. In the second article, I develop an overlapping generations model of criminal behavior, which extends prior research on crime by taking into account individuals' labor supply decisions and the stigma effect that affects convicted offenders, lowering their likelihood of employment. I use the model to guide a quantitative assessment of the determinants of crime and a counterfactual experiment in which an income redistribution policy is considered as an alternative to greater law enforcement. The model economy considered in this paper is populated by heterogeneous agents who live for a realistic number of periods, have preferences over consumption and leisure, and differ in terms of their age, their skills, and their employment shocks. In addition, savings may be precautionary and allow partial insurance against labor income shocks. Because of the lack of full insurance, this model generates an endogenous distribution of wealth across consumers, enabling us to assess the welfare implications of the redistribution policy experiment. I calibrate the model using US data for 1980 and then use it to investigate the changes in criminality between 1980 and 1996. The main results of this study are: 1) law enforcement policy was the most important factor behind the fall in criminality in the period, while the increase in inequality was the most important single factor promoting crime; 2) stigmatization is not a cost-free crime control policy; 3) income redistribution can be a powerful alternative policy for fighting crime. Finally, the third article studies the impact of HIV/AIDS on per capita income and education. It explores two channels from HIV/AIDS to income that have not been sufficiently stressed in the literature: the reduction of the incentives to study due to shorter expected longevity, and the reduction of the productivity of experienced workers. In the model, individuals live for three periods, may become infected in the second period, and with some probability die of AIDS before reaching the third period of their life. Parents care about the welfare of future generations, so they maximize the lifetime utility of their dynasty.
The simulations predict that the most affected countries in Sub-Saharan Africa will be in the future, on average, thirty percent poorer than they would be without AIDS. Schooling will decline in some cases by forty percent. These figures are dramatically reduced with widespread medical treatment, as it increases the survival probability and productivity of infected individuals.
Abstract:
Graduate program in Electrical Engineering - FEIS
Abstract:
Two of the main features of today's complex software systems, such as pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control - systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution - entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution - interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware to exploit tuple-based coordination models in the engineering of complex software systems, since they intrinsically provide coordinated components with communication uncoupling (see the references therein for further details). An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely, scenarios where most of the activities are based on knowledge in some form - and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems in terms of syntax - e.g., two tuples containing the same data may not match due to differences in the tuple structure - and (mostly) of semantics - e.g., two tuples representing the same information may not match because they adopt different syntax. Until now, the problem has been faced by exploiting tuple-based coordination within a middleware for knowledge-intensive environments: e.g., experiments with tuple-based coordination within a Semantic Web middleware (which surveys analogous approaches). However, such approaches appear to be designed to tackle the design of coordination for specific application contexts, like the Semantic Web and Semantic Web Services, and they result in a rather involved extension of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e., an extension of a Linda tuple space where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that - although supporting semantic reasoning - keeps tuples and tuple matching as simple as possible.
By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres appear suitable as coordination media.
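To make the baseline model concrete, here is a minimal Python sketch of a Linda-style tuple space with out and in operations and purely syntactic template matching. It also illustrates the syntax problem noted above: tuples carrying the same data in a different structure will not match. The programmable reactions and semantic matching that (semantic) tuple centres add are deliberately omitted; names like TupleSpace and in_ are illustrative, not from the thesis.

```python
# Minimal sketch of a Linda-style tuple space: out() adds a tuple, in_()
# removes the first tuple matching a template (None acts as a wildcard).
# Matching is purely syntactic, which is exactly the limitation that
# semantic tuple centres are meant to overcome.
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup: tuple) -> None:
        """Insert a tuple into the space."""
        self._tuples.append(tup)

    def in_(self, template: tuple):
        """Withdraw and return the first tuple matching the template."""
        for i, tup in enumerate(self._tuples):
            if len(tup) == len(template) and all(
                t is None or t == v for t, v in zip(template, tup)
            ):
                return self._tuples.pop(i)
        return None  # a real Linda in() would block until a match appears

space = TupleSpace()
space.out(("sensor", "temp", 21.5))
print(space.in_(("sensor", "temp", None)))   # -> ('sensor', 'temp', 21.5)
# Same data, different structure: no match under syntactic matching.
space.out((("sensor", "temp"), 21.5))
print(space.in_(("sensor", "temp", None)))   # -> None
```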