843 results for cyber security, securitization, information technology, U.S CYBERCOM
Abstract:
On the impact that ICT can have on interpersonal relationships, whether within the family and/or with friends, from a psychosocial perspective.
Abstract:
The purpose of this thesis is to investigate projects funded under the European 7th Framework Programme's Information and Communication Technology work programme. The research is limited to the challenge "Pervasive and trusted network and service infrastructure", and the aim is to identify the most important topics on which research will concentrate in the future. The thesis provides important information for the Department of Information Technology at Lappeenranta University of Technology. First, the thesis examines the requirements for the projects funded under the 2007 "Pervasive and trusted network and service infrastructure" programme. Second, the projects funded under that programme are listed in tables and the most important keywords are gathered. Finally, based on keyword frequencies, a vision of the most important future topics is defined. According to the keyword analysis, wireless networks will play an important role in the future, and core networks will be implemented with fiber technology to ensure fast data transfer. Software development favors Service Oriented Architecture (SOA) and open-source solutions. Interoperability and ensuring privacy will also play a key role. 3D in all its forms and content delivery are important topics as well. When all the projects were compared, the most important issue was found to be SOA, which leads the way to cloud computing.
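The keyword analysis described above can be sketched as a simple frequency count over per-project keyword lists; the lists below are illustrative only, not the thesis's actual data:

```python
from collections import Counter

# Hypothetical keyword lists gathered from funded project descriptions
# (the real keywords come from the thesis tables; these are illustrative).
project_keywords = [
    ["SOA", "cloud computing", "interoperability"],
    ["wireless networks", "SOA", "privacy"],
    ["fiber", "3D", "content delivery"],
    ["SOA", "open source", "wireless networks"],
]

# Count how often each keyword appears across all projects.
counts = Counter(kw for project in project_keywords for kw in project)

# The most frequent keywords point to the most important future topics.
top_topics = counts.most_common(3)
print(top_topics)  # "SOA" appears most often (3 times)
```

With this toy data, SOA dominates the ranking, mirroring the thesis's conclusion that keyword appearances single out SOA as the leading issue.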
Abstract:
The continuous rise in the costs of healthcare and related services, the worsening financial situation of municipalities, and shrinking healthcare staff resources have increased the pressure to carry out operations cost-efficiently. Consequently, healthcare actors are urged to seek new solutions that can guarantee an adequate level of customer service, cost-efficiency and service safety in the future. The aim of this thesis was to answer questions about the possibilities of exploiting identification and positioning technology at Savonlinna Central Hospital in the Itä-Savo Hospital District. The purpose was to determine what goals and requirements related to positioning and identification technology exist in the healthcare sector, and particularly in the Itä-Savo Hospital District, and how RFID and WLAN technologies can be used to develop solutions that meet the set goals and expected benefits. The thesis also sought to determine what financial savings identification and positioning technologies could achieve. In connection with the work, the needs and requirements for exploiting identification and positioning technology were surveyed, and these were tested in an identification and positioning pilot. In addition, literature and earlier studies on identification and positioning technologies were reviewed. Based on the plan, it appears that exploiting identification and positioning technologies could make the operations of Savonlinna Central Hospital more efficient. Based on the results of the plan's pilot phase, this improved efficiency would mean cost savings and would improve patient safety and the quality of care. Possible applications of identification and positioning technology in the hospital include real-time process control, automation of access control and guidance, automatic patient identification, and tracking of the hospital's examination equipment.
Abstract:
Alumni are considered a precious resource of their institutions, so improving alumni administration is critical. In the information era, alumni administration is assisted by widespread information technology, such as social network sites. This paper aims to discover whether a self-built information system would enhance alumni connection in the IMMIT context, and what kind of attributes would be helpful in this special context. The current online alumni services at other universities and at the IMMIT host university are analyzed, and then social media is introduced. After illustrating the social capital existing in IMMIT, the type of the self-built information system is suggested, followed by an interpretation of the prototype. Two research models are utilized in this article: TAM and the intentional social action model. The second model is adjusted with proposed parameters. Afterwards, a survey and an interview protocol are designed under the guidance of the models. The results are analyzed in several groups, and the proposed parameters are tested. A conclusion is drawn indicating how to improve alumni's intention to use the system and how to achieve a better-accepted design.
Abstract:
The Fog of Cyber Defence is a book about cyberspace, cyber security and cyberwar. The book untangles the ties of the Nordic states with the important, yet complex and foggy, phenomenon of cyber. It adds important perspectives to the ongoing discussion about cyber security and creates room for deepening co-operation amongst the Nordic states. The articles in the book contribute to the debate over the implications of cyber for national security and the armed forces. The authors, who come from various professional backgrounds, appreciate and welcome further discussion and comments on the very important themes that impact our everyday lives.
Abstract:
The Travel and Tourism field is undergoing changes due to the rapid development of information technology and digital services. Online travel has profoundly changed the way travel and tourism organizations interact with their customers. Mobile technology, such as mobile services for pocket devices (e.g. mobile phones), has the potential to take this development even further. Nevertheless, many issues have been highlighted since the early days of mobile services development (e.g. the lack of relevance and ease of use of many services). However, the wide adoption of smartphones and the mobile Internet in many countries, as well as the formation of so-called ecosystems between vendors of mobile technology, indicate that many of these issues have been overcome. Looking at the numbers of downloaded travel-related applications in application stores like Google Play, it also seems obvious that mobile travel and tourism services are adopted and used by many individuals. However, as business is expected to start booming in the mobile era, many issues have a tendency to be overlooked. Travelers are generally on the go, and thus services that work effectively in mobile settings (e.g. during a trip) are essential. Hence, the individuals' perceived drivers of and barriers to using mobile travel and tourism services on-site or during a trip seem particularly valuable to understand; this is one primary aim of the thesis. We are, however, also interested in understanding different types of mobile travel service users. Individuals may indeed be very different in their propensity to adopt and use technology-based innovations (services). Research is also shifting from investigating issues of mobile service development to understanding individuals' usage patterns of mobile services. But designing new mobile services may be a complex matter from a service provider perspective.
Hence, our secondary aim is to provide insights into drivers and barriers of mobile travel and tourism service development from a holistic business model perspective. To accomplish the research objectives, seven different studies were conducted over the period 2002–2013. The studies are founded on and contribute to theories of diffusion of innovations, technology acceptance, value creation, user experience and business model development. Several different research methods are utilized: surveys, field and laboratory experiments and action research. The findings suggest that a successful mobile travel and tourism service is one that supports one or several mobile motives (needs) of individuals, such as spontaneous needs, time-critical arrangements, efficiency ambitions, mobility-related needs (location features) and entertainment needs. The service could be customized to support travelers' style of traveling (e.g. organized or independent travel) and should be easy to use, especially easy to take into use (access, install and learn) during a trip, without causing security concerns and/or financial risks for the user. In fact, the findings suggest that the most prominent barrier to the use of mobile travel and tourism services during a trip is an individual's perceived financial cost (entry costs and usage costs). It should, however, be noted that regulations are in place in the EU regarding data roaming prices between European countries, and national telecom operators are starting to see 'international data subscriptions' as a sales advantage (e.g. Finnish Sonera provides a data subscription in the Baltic and Nordic region at the same price as in Finland), which will enhance the adoption of mobile travel and tourism services in international contexts as well. In order to speed up the adoption rate, travel service providers could consider e.g.
more local initiatives of free Wi-Fi networks, development of services that can be used, at least to some extent, in an offline mode (i.e. that do not require costly network access during a trip) and cooperation with telecom operators (e.g. lower usage costs for travelers who use specific mobile services or travel with specific vendors). Furthermore, based on a developed framework for the user experience of mobile trip arrangements, the results show that a well-designed mobile site and/or native application, which preferably supports integration with other mobile services, is a must for true mobile presence. In fact, travel service providers who want to build a relationship with their customers need to consider a downloadable native application, but in order to be found through the mobile channel and make contact with potential new customers, a mobile website should be available. Moreover, we have made a first attempt with cluster analysis to identify user categories of mobile services in a travel and tourism context. The following four categories were identified: info-seekers, checkers, bookers and all-rounders. For example, the "all-rounders", represented primarily by individuals who use their pocket device for almost any of the investigated mobile travel services, consisted primarily of 23- to 50-year-old males with high travel frequency and great online experience. The results also indicate that travel service providers will increasingly become multi-channel providers. To manage multiple online channels, closely integrated hybrid online platforms for different devices, supporting all steps in the traveler process, should be considered. It could be useful for travel service providers to focus more on developing browser-based mobile services (HTML5 solutions) than on native applications that work only with specific operating systems and on specific devices.
Based on an action research study and utilizing a holistic business model framework called STOF we found that HTML5 as an emerging platform, at least for now, has some limitations regarding the development of the user experience and monetizing the application. In fact, a native application store (e.g. Google Play) may be a key mediator in the adoption of mobile travel and tourism services both from a traveler and a service provider perspective. Moreover, it must be remembered that many device and mobile operating system developers want service providers to specifically create services for their platforms and see native applications as a strategic advantage to sell more devices of a certain kind. The mobile telecom industry has moved into a battle of ecosystems where device makers, developers of operating systems and service developers are to some extent forced to choose their development platforms.
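The cluster analysis used above to identify user categories (info-seekers, checkers, bookers, all-rounders) can be illustrated with a minimal k-means sketch. The two-feature traveler data and the two-cluster setup below are purely hypothetical, not the study's actual data or method:

```python
# Hypothetical traveler profiles: each tuple counts uses of
# (information search, booking) mobile travel services per trip.
travelers = [
    (9, 1), (8, 0), (10, 2),   # mostly searching -> "info-seekers"
    (1, 8), (0, 9), (2, 10),   # mostly booking   -> "bookers"
]

def kmeans(points, k, iters=10):
    """A minimal k-means sketch: assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    # Deterministic start: spread initial centroids across the data.
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centroids[c][0]) ** 2
                                        + (p[1] - centroids[c][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(travelers, k=2)
print(centroids)  # two centroids, one per behavioural segment
```

On this toy data the two centroids separate the search-heavy from the booking-heavy travelers, which is the kind of segmentation the study's four-category result is built on (the thesis itself would use more features and more clusters).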
Abstract:
This pro gradu thesis discusses generating competitive advantage through competitor information systems. The structure of this thesis follows the structure of the WCA model by Alter (1996). In the WCA model, a business process is influenced by three separate but connected elements: information, technology, and process participants. The main research question is how competitor information can be incorporated into, or made into, a tool creating competitive advantage. The research subquestions are: How does competitor information act as a part of the business process creating competitive advantage? How is a good competitor information system situated and structured in an organisation? How can management help information generate competitive advantage in the business process with participants, information, and technology? This thesis discusses each of the elements separately, but the elements are connected to each other and to competitive advantage. Information is discussed by delving into competitor information and competitor analysis. Competitive intelligence and competitor analysis require commitment throughout the organisation, including top management, the desire to perform competitive intelligence, and the desire to use its end products. In order to be successful, systematic competitive intelligence and competitor analysis require vision, willingness to strive for the goals set, and clear strategies to proceed. Technology is discussed by looking into the function competitor information systems play and the place they occupy within an organization. In addition, the basic infrastructure of competitor information systems is discussed, along with the problems that can plague them.
In order for competitor information systems to be useful and worthy of the resources it takes to develop and maintain them, they require ongoing resource allocation and high-quality information. In order for competitor information systems to justify their existence, business process participants need to maintain and utilize them on all levels. Business process participants are discussed through management practices. This thesis discusses ways to manage information, technology, and process participants when the goal is to generate competitive advantage through competitor information systems. This is possible when information is treated as a resource with value, technology is backed by a strategy in order to be successful within the organization, and process participants are treated as an important resource. Generating competitive advantage through competitor information systems is possible when the elements of information, technology, and business process participants all align advantageously.
Abstract:
In the new age of information technology, big data has grown to be a prominent phenomenon. As information technology evolves, organizations have begun to adopt big data and apply it as a tool throughout their decision-making processes. Research on big data has grown in recent years, but mainly from a technical stance, and there is a void in business-related cases. This thesis fills the gap in the research by addressing big data challenges and failure cases. The Technology-Organization-Environment framework was applied to carry out a literature review on trends in Business Intelligence and Knowledge Management information system failures. A review of extant literature was carried out using a collection of leading information systems journals. Academic papers and articles on big data, Business Intelligence, Decision Support Systems, and Knowledge Management systems were studied from both failure and success aspects in order to build a model for big data failure. I then delineate the contribution of the Information Systems failure literature, as it supplies the principal dynamics behind the Technology-Organization-Environment framework. The gathered literature was then categorised, and a failure model was developed from the identified critical failure points. The failure constructs were further categorized, defined, and tabulated into a contextual diagram. The developed model and table are designed to act as a comprehensive starting point and as general guidance for academics, CIOs, or other system stakeholders, facilitating decision-making in the big data adoption process by measuring the effect of technological, organizational, and environmental variables on perceived benefits, dissatisfaction, and discontinued use.
Abstract:
The information technology (IT) industry has recently witnessed the proliferation of cloud services, which have allowed IT service providers to deliver on-demand resources to customers over the Internet. This frees both service providers and consumers from traditional IT-related burdens, such as capital and operating expenses, and allows them to respond rapidly to new opportunities in the market. Due to the popularity and growth of cloud services, numerous researchers have conducted studies on various aspects of cloud services, both positive and negative. However, none of those studies have connected all relevant information to provide a holistic picture of the current state of cloud service research. This study aims to investigate the current situation and propose the most promising future directions. To achieve these goals, a systematic literature review was conducted on studies with a primary focus on cloud services. Based on carefully crafted inclusion criteria, 52 articles from highly credible online sources were selected for the review. To define the main focus of the review and facilitate the analysis of the literature, a conceptual framework with five main factors was proposed. The selected articles were organized under the factors of the proposed framework and then synthesized using a narrative technique. The results of this systematic review indicate that the impact of cloud services on enterprises was the factor best covered by contemporary research. Researchers were able to present valuable findings about how cloud services affect various aspects of enterprises, such as governance, performance, and security. By contrast, the role of service provider sub-contractors in the cloud service market remains largely uninvestigated, as do cloud-based enterprise software and cloud-based office systems for consumers.
Moreover, the results also show that researchers should pay more attention to the integration of cloud services into legacy IT systems to facilitate the adoption of cloud services by enterprise users. After the literature synthesis, the present study proposed several promising directions for cloud service research by outlining research questions for the underexplored areas of cloud services, in order to facilitate the development of cloud service markets in the future.
Abstract:
"Mémoire présenté à la faculté des études supérieures en vue de l'obtention du grade de Maîtrise en droit des affaires (LL.M)"
Abstract:
With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis. This technique has grown increasingly popular in economics since the 1990s. Given data availability and computational advances, several new questions arise. What are the effects and transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified, and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique, such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse-response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of temporal instability in the coefficients of empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA.
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose a factor structure on time-varying parameters and to show that there are only a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregressive model (FAVAR). Earlier VAR-based studies found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for correctly identifying the transmission of monetary policy and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse-response functions for every indicator in the dataset, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills and causes a recession. These shocks have a significant effect on measures of real activity, price indices, leading indicators and financial indicators. Unlike other studies, our procedure for identifying the structural shock requires no timing restrictions between financial and macroeconomic factors.
Moreover, it gives an interpretation of the factors without restricting their estimation. In the third article we study the relationship between VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA models (FAVARMA). Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, combining these two parameter-reduction methods. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps forecast the major macroeconomic aggregates better relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives coherent and precise results on the effects and transmission of monetary policy in the United States. Unlike the FAVAR model employed in that earlier study, in which 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters in the factors' dynamic process. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using a structural FAVARMA model.
Within the financial accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component seems to capture the important dimensions of the Canadian economy's cyclical fluctuations. Variance-decomposition analysis reveals that this credit shock has a significant effect on various sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market. Finally, given the procedure for identifying the structural shocks, we find economically interpretable factors. The behaviour of economic agents and of the economic environment can vary over time (e.g. changes in monetary policy strategies, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of temporal variability in the coefficients is probably very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability in the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows a marked change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only five dynamic factors govern the temporal instability of almost 700 coefficients.
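The first step in the factor approaches described above is to extract a small number of common factors from a large panel of indicators, typically by principal components. This can be sketched by computing the first principal component with power iteration; the tiny standardised panel below is illustrative only:

```python
import math

# Hypothetical panel: rows are time periods, columns are standardised
# macroeconomic indicators. Columns 1-3 move together; column 4 moves
# against them, so one common factor should dominate.
panel = [
    [ 1.0,  0.9,  1.1, -1.0],
    [-1.0, -1.1, -0.9,  1.0],
    [ 2.0,  1.8,  2.2, -2.1],
    [-2.0, -1.9, -2.1,  2.0],
]

def first_factor_loadings(X, iters=100):
    """Return the loading vector of the first principal component of X
    (rows = observations), via power iteration on X'X."""
    n_cols = len(X[0])
    # X'X, proportional to the covariance matrix of the columns.
    S = [[sum(row[i] * row[j] for row in X) for j in range(n_cols)]
         for i in range(n_cols)]
    v = [1.0] * n_cols
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(n_cols)) for i in range(n_cols)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

loadings = first_factor_loadings(panel)
# The factor series is the projection of each period onto the loadings.
factor = [sum(x, ) if False else sum(x * l for x, l in zip(row, loadings))
          for row, x in zip(panel, panel)] if False else \
         [sum(x * l for x, l in zip(row, loadings)) for row in panel]
print(loadings)  # first three loadings share a sign, the fourth opposes them
```

In a FAVAR or FAVARMA, such extracted factors are then stacked with observed policy variables and their joint dynamics are estimated; this sketch only shows the dimension-reduction step, with made-up data.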
Abstract:
In a globalised society, where relationships are integrated at a different pace through the use of information and communication technologies, access to justice gains new concepts but still faces old obstacles. The global access-to-justice crisis in the judicial system provokes debates over equality under the law, individuals' capacity, knowledge of rights, legal aid, costs and delays. The last two have been the most important factors in individuals' dissatisfaction with the judicial system. The present study analyses the impact of the use of technology in the judiciary, with an emphasis on the Brazilian reality, the legislative path and earlier experiences in developing cyberjustice software. Implementing these innovative instruments requires investment and planning, with particular attention to the impact they may have on the traditional routines of the courts. New challenges lie along this transformation process and must be handled professionally to avoid the failure of quality projects. Moreover, if technology can be part of many aspects of our daily lives, and the use of online alternative dispute resolution is considered a success, why would it be difficult to make this change in the delivery of justice by the judicial system? Technological solutions adopted in other countries are not easily transferable to a different cultural environment, but there is always the possibility of learning from others' experiences and avoiding bad paths that could undermine the overall definition of access to justice.
Abstract:
Biometrics has become important in security applications. Compared with many other biometric features, iris recognition has very high recognition accuracy because it depends on the iris, which lies in a location that remains stable throughout human life, and because the probability of finding two identical irises is close to zero. The identification system consists of several stages, including the segmentation stage, which is the most critical one. Current segmentation methods are still limited in localizing the iris because they assume a circular pupil shape. In this research, the Daugman method is used to investigate the segmentation techniques. Eyelid detection is another step included in this study as part of the segmentation stage, to localize the iris accurately and remove unwanted areas that might otherwise be included. The obtained iris region is encoded using Haar wavelets to construct the iris code, which contains the most discriminating features of the iris pattern. Hamming distance is used to compare iris templates in the recognition stage. The dataset used for the study is the UBIRIS database. A comparative study of different edge detection operators was performed; the Canny operator was observed to be best suited to extract most of the edges used to generate the iris code for comparison. A recognition rate of 89% and a rejection rate of 95% were achieved.
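The recognition stage described above compares two iris codes by their normalised Hamming distance, i.e. the fraction of bits that differ. A minimal sketch follows; the 16-bit codes and the acceptance threshold are illustrative only (real iris codes are far longer, e.g. 2048 bits):

```python
def hamming_distance(code_a, code_b):
    """Fraction of bits that differ between two equal-length iris codes."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

# Illustrative 16-bit codes (not real iris templates).
enrolled = "1011001110001011"
probe    = "1011001010001011"  # one bit differs: same iris, small distance
impostor = "0100110001110100"  # bitwise complement: maximal distance

THRESHOLD = 0.32  # illustrative acceptance threshold, not the thesis's value

print(hamming_distance(enrolled, probe))     # 0.0625 -> below threshold, match
print(hamming_distance(enrolled, impostor))  # 1.0    -> rejected
```

A genuine comparison yields a small distance and is accepted below the threshold, while an unrelated code yields a large distance and is rejected; in practice the threshold is tuned to balance the recognition and rejection rates reported above.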
Abstract:
This paper discusses the use of online information resources for organising knowledge in the library and information centres of Cochin University of Science and Technology (CUSAT). The paper describes the status and extent of automation in the CUSAT library, and explains in detail the different online resources used and the purposes for which they are being used. A structured interview method was applied for collecting data. It was observed that 67 per cent of users consult online resources to assist knowledge organisation. The Library of Congress catalogue is the most widely used (100 per cent) online resource, followed by the OPAC of CUSAT and the catalogue of the British Library. The main purposes for using these resources are class number building and subject indexing.