535 results for RFID authentication
Resumo:
Some authentication models use passwords, keys, or personal identifiers (cards, tags, etc.) to authenticate a particular user in the authentication/identification process, while other systems use biometric data, such as signature, fingerprint, or voice, to authenticate an individual. However, storing biometric data brings risks, such as consistency and protection problems for these data. It is therefore necessary to protect biometric databases to ensure the integrity and reliability of the system. Template-protection models exist for this purpose, for example the Fuzzy Vault and Fuzzy Commitment schemes. These models are currently the most widely used for protecting biometric data, but they have weak points in the protection process. Increasing the security level of these methods, whether by changing their structure or by inserting new layers of protection, is therefore one of the goals of this thesis. In other words, this work proposes the simultaneous use of encryption (the Papilio encryption algorithm) and template-protection models (Fuzzy Vault and Fuzzy Commitment) in biometric identification systems. The objective is to improve two aspects of biometric systems: security and accuracy. Furthermore, a reasonable level of efficiency must be maintained through the use of more elaborate classification structures, known as committees. In short, we intend to propose a model for safer biometric identification systems.
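As a concrete illustration of the template-protection primitive named above, the following is a minimal sketch of the Fuzzy Commitment scheme, assuming a toy 9-bit template and a 3x repetition code standing in for a real error-correcting code; the Papilio encryption layer proposed in the thesis is not modeled here.

```python
# Illustrative sketch of Fuzzy Commitment (Juels & Wattenberg): store only
# a hash of a random key plus an offset, never the biometric template itself.
import hashlib
import secrets

REP = 3  # each key bit is repeated REP times to tolerate sensor bit errors

def _encode(key_bits):
    return [b for b in key_bits for _ in range(REP)]

def _decode(code_bits):
    # majority vote over each group of REP bits corrects isolated flips
    return [int(sum(code_bits[i:i + REP]) > REP // 2)
            for i in range(0, len(code_bits), REP)]

def commit(biometric_bits):
    """Bind a random key to a biometric bit string; store only (hash, offset)."""
    key = [secrets.randbelow(2) for _ in range(len(biometric_bits) // REP)]
    codeword = _encode(key)
    offset = [b ^ c for b, c in zip(biometric_bits, codeword)]
    digest = hashlib.sha256(bytes(key)).hexdigest()
    return digest, offset

def verify(digest, offset, candidate_bits):
    """Recover the codeword from a noisy sample and check it against the hash."""
    noisy = [b ^ o for b, o in zip(candidate_bits, offset)]
    key = _decode(noisy)
    return hashlib.sha256(bytes(key)).hexdigest() == digest

enrolled = [1, 0, 1, 1, 0, 1, 0, 0, 1]   # toy 9-bit template
digest, offset = commit(enrolled)
sample = enrolled.copy()
sample[2] ^= 1                            # one bit flipped by sensor noise
assert verify(digest, offset, sample)     # still authenticates
```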
Resumo:
Until recently, the use of biometrics was restricted to high-security environments and criminal identification applications, for economic and technological reasons. In recent years, however, biometric authentication has become part of people's daily lives. The large-scale use of biometrics has shown that users within a system may exhibit different degrees of accuracy: some people may have trouble authenticating, while others may be particularly vulnerable to imitation. Recent studies have investigated and identified these types of users, giving them animal names: Sheep, Goats, Lambs, Wolves, Doves, Chameleons, Worms, and Phantoms. The aim of this study is to evaluate the existence of these user types in a fingerprint database and to propose a new way of investigating them, based on verification performance between subjects' samples. After introducing some basic concepts of biometrics and fingerprints, we present the biometric menagerie and how to evaluate it.
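As a hedged illustration of how menagerie membership can be screened from match scores, the sketch below computes per-subject genuine and impostor score means and flags Goats, Lambs, and Wolves; the quartile thresholds and toy scores are illustrative assumptions, not the evaluation protocol of this study.

```python
# Screen the "biometric zoo" from per-subject similarity-score statistics.
import statistics

def menagerie(genuine, impostor_as_target, impostor_as_probe):
    """Each argument maps subject -> list of similarity scores."""
    g_means = {s: statistics.mean(v) for s, v in genuine.items()}
    t_means = {s: statistics.mean(v) for s, v in impostor_as_target.items()}
    p_means = {s: statistics.mean(v) for s, v in impostor_as_probe.items()}

    def quantile(means, q):
        vals = sorted(means.values())
        return vals[int(q * (len(vals) - 1))]

    g_lo = quantile(g_means, 0.25)   # weakest quartile of genuine scores
    t_hi = quantile(t_means, 0.75)   # most imitable quartile
    p_hi = quantile(p_means, 0.75)   # best-imitating quartile
    return {
        "goats":  [s for s, m in g_means.items() if m <= g_lo],   # match poorly vs. themselves
        "lambs":  [s for s, m in t_means.items() if m >= t_hi],   # easy to imitate
        "wolves": [s for s, m in p_means.items() if m >= p_hi],   # good at imitating others
    }

zoo = menagerie(
    genuine={"u1": [0.9, 0.8], "u2": [0.4, 0.5], "u3": [0.7, 0.9]},
    impostor_as_target={"u1": [0.2], "u2": [0.6], "u3": [0.1]},
    impostor_as_probe={"u1": [0.1], "u2": [0.2], "u3": [0.5]},
)
print(zoo)  # u2 matches poorly against itself (Goat) and is easy to imitate (Lamb)
```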
Resumo:
The aim of this thesis is to merge two emerging paradigms in web programming: RESTful web development and Service-Oriented Programming. REST is the main architectural paradigm for web applications; RESTful applications are characterised by a stateless structure that avoids handshaking mechanisms. Even though REST provides a standard structure for accessing the resources of a web application, the backend side is often not very modular, if not outright complicated. Service-Oriented Programming, on the other hand, has modularisation of components as one of its fundamental principles: service-oriented applications are built from separate modules that simplify the development of web applications. There are very few examples of integration between these two technologies, so it seems reasonable to merge them. In this thesis, the methodologies studied to reach this result are explored through MergeFly, an application that helps several users manage shared documents and notes. Once its specifications were set, the MergeFly case was used to develop and handle HTTP requests through SOAP. This document first defines (1) the characteristics of the application, (2) the SOAP technology, (3) a partial introduction to the Jolie language, and (4) REST, and finally (5) offers a Jolie-REST implementation through the MergeFly case. A token mechanism is implemented for authentication: session- and cookie-based authentication was discarded first, as it does not fit pure REST theory, even though it is often used in practice. The final part evaluates the functionality and effectiveness of the results, judging the Jolie-REST combination.
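To illustrate the token mechanism the thesis adopts, here is a minimal sketch in Python (rather than Jolie, which the thesis actually uses); the credential store, token lifetime, and function names are hypothetical placeholders.

```python
# Sketch of bearer-token authentication for a REST backend: credentials are
# exchanged once for a token, which the client then attaches to every request.
import secrets
import time

TOKEN_TTL = 3600              # seconds a token stays valid (assumed value)
_tokens = {}                  # token -> (username, expiry timestamp)
_users = {"alice": "s3cret"}  # hypothetical credential store

def login(username, password):
    """Exchange credentials for a fresh random token."""
    if _users.get(username) != password:
        raise PermissionError("bad credentials")
    token = secrets.token_urlsafe(32)
    _tokens[token] = (username, time.time() + TOKEN_TTL)
    return token

def authenticate(token):
    """Validate the token attached to a request; no cookie or session state."""
    entry = _tokens.get(token)
    if entry is None or entry[1] < time.time():
        raise PermissionError("invalid or expired token")
    return entry[0]  # authenticated username

tok = login("alice", "s3cret")
assert authenticate(tok) == "alice"
```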
Resumo:
The fast-developing international trade of products based on traditional knowledge and their value chains has become an important aspect of the ethnopharmacological debate. The structure and diversity of value chains and their impact on the phytochemical composition of herbal medicinal products have been overlooked in the debate about quality problems in transnational trade. Different government policies and regulations governing trade in herbal medicinal products impact such value chains. Medicinal Rhodiola species, including Rhodiola rosea L. and Rhodiola crenulata (Hook.f. & Thomson) H.Ohba, have been used widely in Europe and Asia as traditional herbal medicines with numerous claims for their therapeutic effects. Faced with resource depletion and environmental destruction, R. rosea and R. crenulata are becoming endangered, making them more economically valuable to collectors and middlemen and also increasing the risk of adulteration and low quality. We compare the phytochemical differences among Rhodiola raw materials available on the market to provide a practical method for Rhodiola authentication and the detection of potential adulterant compounds. Samples were collected from Europe and Asia, and nuclear magnetic resonance spectroscopy coupled with multivariate analysis software, together with high-performance thin-layer chromatography, was used to analyse the samples. A method was developed to quantify the amount of adulterant species contained within mixtures. We compared the phytochemical composition of collected Rhodiola samples to authenticated samples. Rosavin and rosarin were mainly present in R. rosea, whereas crenulatin was only present in R. crenulata. 30% of the Rhodiola samples purchased from the Chinese market were adulterated by other Rhodiola spp. Moreover, 7% of the raw-material samples were not labelled satisfactorily. The utilisation of both 1H-NMR and HPTLC methods provided an integrated analysis of the phytochemical differences and a novel identification method for R. rosea and R. crenulata. Using 1H-NMR spectroscopy it was possible to quantify the presence of R. crenulata in admixtures with R. rosea. This quantitative technique could be used in the future to assess a variety of herbal drugs and products. This project also highlights the need to further study the links between producers and consumers in national and transnational trade.
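As a sketch of the quantification idea, an admixture spectrum can be modeled as a convex combination of two reference spectra and the adulterant fraction solved by least squares; the three-bin reference vectors below are toy numbers, not measured Rhodiola data.

```python
# Estimate the adulterant fraction in a binary admixture from binned spectra.
import numpy as np

def adulterant_fraction(mixture, ref_rosea, ref_crenulata):
    """Least-squares estimate of the fraction f in
    mixture ~= f * ref_crenulata + (1 - f) * ref_rosea."""
    d = ref_crenulata - ref_rosea
    f = np.dot(mixture - ref_rosea, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

ref_rosea     = np.array([0.9, 0.1, 0.5])   # e.g. rosavin/rosarin-dominated bins
ref_crenulata = np.array([0.1, 0.8, 0.5])   # e.g. crenulatin-dominated bins
blend = 0.3 * ref_crenulata + 0.7 * ref_rosea
print(adulterant_fraction(blend, ref_rosea, ref_crenulata))  # ~0.3
```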
Resumo:
Salman, M. et al. (2016). Integrating Scientific Publication into an Applied Gaming Ecosystem. GSTF Journal on Computing (JoC), 5(1), 45-51.
Resumo:
After years of deliberation, the EU Commission sped up the reform process of a common EU digital policy considerably in 2015 by launching the EU Digital Single Market strategy. In particular, two core initiatives of the strategy were agreed upon: the General Data Protection Regulation and the Network and Information Security (NIS) Directive law texts. A new initiative was additionally launched addressing the role of online platforms. This paper focuses on the platform-privacy rationale behind the data protection legislation, primarily based on the proposal for a new EU-wide General Data Protection Regulation. We analyse the legislation's rationale from an information systems perspective to understand the role user data plays in creating platforms that we identify as "processing silos". Generative digital infrastructure theories are used to explain the innovative mechanisms thought to govern the notion of digitalization and the successful business models affected by it. We foresee continued judicial data protection challenges under the now-proposed Regulation as adoption of the "Internet of Things" continues. The findings of this paper illustrate that many of the existing issues can be addressed through legislation from a platform perspective. We conclude by proposing three modifications to the governing rationale, which would improve not only platform privacy for the data subject but also entrepreneurial efforts in developing intelligent service platforms. The first modification aims to improve service differentiation on platforms by lessening the ability of incumbent global actors to lock in the user base to their service/platform. The second modification proposes limiting the current unwanted tracking ability of syndicates by separating authentication and data-store services from any processing entity. Thirdly, we propose a change in how security and data protection policies are reviewed, suggesting a third-party auditing procedure.
Resumo:
Radio-frequency identification (RFID) based on passive transponders in the ultra-high-frequency (UHF) band is used ever more frequently in logistics. To exploit the potential of this Auto-ID technology, the identification of goods must be reliable. This is often very difficult because of environmental influences on the electromagnetic read field that powers the passive transponders during identification. Knowledge of the electromagnetic field-strength distribution in space can therefore serve as a basis for assessing the reading reliability of RFID installations. The measurement concept and methodology presented in this paper show a way to quickly capture the shape of the read field, so that the results can be used to simplify the configuration of such systems.
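A hedged sketch of the measurement idea follows: sample the reader field on a spatial grid and flag positions where a passive UHF tag would not be powered; the probe function, grid step, and activation threshold are assumed placeholders, not the paper's measurement setup.

```python
# Map read-field strength over a grid and locate dead zones for passive tags.
import random

ACTIVATION_DBM = -14.0   # assumed minimum power for tag chip activation

def read_field_strength(x, y):
    """Placeholder for a probe measurement at grid point (x, y), in dBm."""
    return random.uniform(-20.0, -5.0)

def map_read_field(nx, ny, step_m=0.25):
    grid = [[read_field_strength(i * step_m, j * step_m) for i in range(nx)]
            for j in range(ny)]
    dead_zones = [(i * step_m, j * step_m)
                  for j, row in enumerate(grid) for i, p in enumerate(row)
                  if p < ACTIVATION_DBM]
    return grid, dead_zones

grid, dead = map_read_field(8, 4)
print(f"{len(dead)} grid points below tag activation threshold")
```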
Resumo:
This paper describes an approach for detecting frontal faces in real time (20-35 Hz) for further processing. The approach combines tracking of previous detections and color information to select areas of interest. In those areas, facial features such as eyes, nose, and mouth are then searched for, based on geometric tests, appearance verification, and temporal and spatial coherence. The system uses very simple techniques applied in a cascade, combined and coordinated with temporal information to improve performance. This module is a component of a complete system designed for the detection, tracking, and identification of individuals [1].
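The coarse-to-fine cascade idea (a cheap color test gating a more expensive detector) can be sketched as follows with OpenCV's stock Haar cascade; the HSV skin bounds and early-reject ratio are assumed, illumination-sensitive choices rather than the authors' tuned values, and the geometric and coherence tests are omitted.

```python
# Cheap skin-color gate followed by a face detector, cascade-style.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)     # assumed HSV skin bounds
SKIN_HI = np.array([25, 180, 255], dtype=np.uint8)

def detect_faces(frame_bgr):
    """Run the expensive detector only if enough skin-colored pixels exist."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    if cv2.countNonZero(mask) < 0.02 * mask.size:   # cheap early reject
        return []
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```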
Resumo:
In face recognition, where high-dimensional representation spaces are generally used, it is very important to take advantage of all the available information. In particular, many labelled facial images accumulate while the recognition system is functioning, and for practical reasons some of them are often discarded. In this paper, we propose an algorithm for using this information. The algorithm has the fundamental characteristic of being incremental. The algorithm also combines classification results for the images in the input sequence. Experiments with sequences obtained from a real person detection and tracking system allow us to analyze the performance of the algorithm, as well as its potential improvements.
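A minimal sketch of the two ideas in the abstract, assuming a running class mean stands in for the actual classifier: the model is updated incrementally with newly labelled images, and per-frame decisions are combined by majority vote over a tracked sequence.

```python
# Incrementally updatable nearest-mean classifier plus sequence-level voting.
from collections import Counter
import numpy as np

class IncrementalNearestMean:
    def __init__(self):
        self.means, self.counts = {}, {}

    def update(self, x, label):
        """Fold one newly labelled image into the model without retraining."""
        n = self.counts.get(label, 0)
        mu = self.means.get(label, np.zeros_like(x, dtype=float))
        self.means[label] = (mu * n + x) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, x):
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))

    def predict_sequence(self, frames):
        """Combine per-frame labels of one tracked sequence by majority vote."""
        return Counter(self.predict(f) for f in frames).most_common(1)[0][0]

clf = IncrementalNearestMean()
clf.update(np.array([1.0, 0.0]), "alice")
clf.update(np.array([0.0, 1.0]), "bob")
print(clf.predict_sequence([np.array([0.9, 0.1]), np.array([0.8, 0.3])]))  # alice
```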
Resumo:
Abstract: In the mid-1990s, when I worked for a telecommunications giant, I struggled to gain access to basic geodemographic data. At the time it cost hundreds of thousands of dollars simply to purchase a tile of satellite imagery from Marconi, and it was often cheaper to create my own maps using a digitizer and A0 paper maps. Everything from granular administrative boundaries to rights-of-way to points of interest and geocoding capabilities was either unavailable for the places I was working in throughout Asia or very limited. Control of this data lay either with a government's census and statistical bureau or with a handful of forward-thinking corporations. Twenty years on, we find ourselves inundated with data (location and other) that we are challenged to amalgamate, much of it still "dirty" in nature. Open data initiatives such as ODI give us great hope for how we might be able to share information together and capitalize not only on crowdsourcing behavior but on the implications of positive usage for the environment and for the advancement of humanity. We are already gathering and amassing a great deal of data and insight through excellent citizen-science participatory projects across the globe. In early 2015 I delivered a keynote at the Data Made Me Do It conference at UC Berkeley, and in the preceding year an invited talk at the inaugural QSymposium. In gathering research for these presentations, I began to ponder the effect that social machines (in effect, autonomous data-collection subjects and objects) might have on social behaviors. I focused on studying the problem of data from various veillance perspectives, with an emphasis on the shortcomings of uberveillance, which include the potential for misinformation, misinterpretation, and information manipulation when context is entirely missing. As we build advanced systems that rely almost entirely on social machines, we need to consider the risks of following a purely technocratic approach in which machines devoid of intelligence may one day dictate what humans do at the fundamental praxis level. What might be the fallout of uberveillance? Bio: Dr Katina Michael is a professor in the School of Computing and Information Technology at the University of Wollongong. She presently holds the position of Associate Dean (International) in the Faculty of Engineering and Information Sciences. Katina is the editor-in-chief of IEEE Technology and Society Magazine and a senior editor of IEEE Consumer Electronics Magazine. Since 2008 she has been a board member of the Australian Privacy Foundation, and until recently was its Vice-Chair. Michael researches the socio-ethical implications of emerging technologies, with an emphasis on an all-hazards approach to national security. She has written and edited six books and guest-edited numerous special-issue journals on themes related to radio-frequency identification (RFID) tags, supply chain management, location-based services, innovation, and surveillance/uberveillance for Proceedings of the IEEE, Computer, and IEEE Potentials. Prior to academia, Katina worked for Nortel Networks as a senior network engineer in Asia, and also in information systems for OTIS and Andersen Consulting. She holds cross-disciplinary qualifications in technology and law.
Resumo:
Stock planning and management is enormously relevant in a business context, enabling an effective response to market fluctuations and, consequently, increasing a company's productivity and competitiveness. This study was carried out at a company in the Portuguese wine sector and aims to study its stock-management processes in order to improve its operating results. More specifically, it aims to draw up a stock-management plan that defines policies suited to each product so as to avoid stockouts. To achieve these objectives, the following methodology was adopted: (1) analysing product demand; (2) understanding how demand behaves throughout the year; (3) defining the type of planning policy to be adopted for each product group; (4) calculating the stock quantities to produce and the time interval between production runs; and (5) verifying the operability of the intervention plan so as to improve production planning. The proposed interventions involved implementing stock-management policies, namely a reorder-point policy and a periodic-review policy, as well as studying the seasonality of sales of the different types of wine in order to facilitate planning of sparkling-wine preparation. Although the proposals were not put into practice, their advantages and disadvantages are discussed, and improvement proposals are presented.
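As a sketch of the two policies mentioned, the reorder-point and order-up-to levels can be computed from standard textbook formulas; the z-value, demand, and lead-time figures below are illustrative, not the company's data.

```python
# Continuous-review reorder point and periodic-review order-up-to level.
import math

def reorder_point(daily_demand, lead_time_days, sigma_daily, z=1.65):
    """Order when stock falls to this level (z=1.65 ~ 95% service level)."""
    safety_stock = z * sigma_daily * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock

def order_up_to(daily_demand, review_days, lead_time_days, sigma_daily, z=1.65):
    """Periodic review: target level covering the review period plus lead time."""
    horizon = review_days + lead_time_days
    return daily_demand * horizon + z * sigma_daily * math.sqrt(horizon)

# e.g. a wine selling 120 bottles/day, 5-day lead time, demand sigma of 30:
print(round(reorder_point(120, 5, 30)))    # continuous-review trigger level
print(round(order_up_to(120, 7, 5, 30)))   # weekly-review order-up-to target
```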
Resumo:
The evolution and maturation of cloud computing created an opportunity for the emergence of new cloud applications. High-Performance Computing (HPC), a class of complex problem solving, arises as a new business consumer by taking advantage of the Cloud's premises and leaving behind expensive datacenter management and difficult grid development. Now in an advanced stage of maturity, today's Cloud has shed many of its drawbacks, becoming ever more efficient and widespread. Performance enhancements, price drops due to massification, and customizable on-demand services have drawn marked attention from other markets. HPC, despite being a very well-established field, traditionally has a narrow deployment frontier and runs on dedicated datacenters or large computing grids. The problem with this common placement is mainly the initial cost, which not all research labs can afford, and the inability to use the resources fully. The main objective of this work was to investigate new technical solutions that allow the deployment of HPC applications on the Cloud, with particular emphasis on private on-premise resources, the lower end of the chain, which reduces costs. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new modelling, a new architecture, and the migration of several applications. The final application integrates a simplified incorporation of both public and private Cloud resources, as well as HPC application scheduling, deployment, and management. It uses a well-defined user-role strategy based on federated authentication, and a procedure seamless enough for daily use that balances low cost and performance.
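The private-first placement rationale described above can be sketched as a greedy scheduler that fills the cheaper on-premise pool before overflowing to the public cloud; the core capacities and job names are hypothetical units, not the thesis' actual scheduler.

```python
# Greedy placement: prefer private on-premise cores, overflow to public cloud.
def place_jobs(jobs, private_cores, public_cores):
    """jobs maps job name -> cores required; largest jobs are placed first."""
    placement = {}
    for job, cores in sorted(jobs.items(), key=lambda kv: -kv[1]):
        if cores <= private_cores:
            placement[job], private_cores = "private", private_cores - cores
        elif cores <= public_cores:
            placement[job], public_cores = "public", public_cores - cores
        else:
            placement[job] = "queued"   # wait for resources to free up
    return placement

print(place_jobs({"cfd": 64, "mc": 16, "fft": 48},
                 private_cores=96, public_cores=64))
```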
Resumo:
Food safety has always been a social issue that draws great public attention. With the rapid development of wireless communication technologies and intelligent devices, more and more Internet of Things (IoT) systems are being applied in the food-safety tracking field. However, the connection between things and the information system is usually established by pre-storing information about things in an RFID tag, which is inapplicable to on-field food-safety detection. Therefore, considering that pesticide residue is one of the severe threats to food safety, a new portable, high-sensitivity, low-power, on-field organophosphorus (OP) compound detection system is proposed in this thesis to realize on-field food-safety detection. The system is designed around an optical detection method using a customized photo-detection sensor. A Micro Controller Unit (MCU) and a Bluetooth Low Energy (BLE) module are used to quantize and transmit the detection result. An Android application (app) is also developed for the system to process and display the detection results, as well as to control the detection process. In addition, a quartz sample container and a black system box were designed and made for the system demonstration. Several optimizations were made in wireless communication, circuit layout, the Android app, and the industrial design to achieve mobility, low power, and intelligence.
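As a sketch of the quantization step, the photodetector reading can be converted to absorbance (Beer-Lambert) and mapped to an OP-pesticide concentration via a linear calibration; the slope, intercept, and ADC counts are assumed values, not the system's actual calibration.

```python
# Convert an optical-sensor ADC reading into a pesticide concentration.
import math

CAL_SLOPE = 0.012      # absorbance per (mg/L), hypothetical calibration
CAL_INTERCEPT = 0.003  # blank offset, hypothetical

def concentration_mg_per_l(adc_sample, adc_reference):
    """adc_reference: blank-cell reading I0; adc_sample: reading I through sample."""
    absorbance = math.log10(adc_reference / adc_sample)   # A = log10(I0 / I)
    return max(0.0, (absorbance - CAL_INTERCEPT) / CAL_SLOPE)

# e.g. a 12-bit ADC: blank cell reads 4000 counts, sample reads 3400
print(f"{concentration_mg_per_l(3400, 4000):.1f} mg/L")
```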