980 results for Password-based authentication
Abstract:
The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as its programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components and requirements, specifications and solutions need to be flexible, modular, and independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, systems are developed using Extensible Markup Language (XML) technology. Communication between clients and servers uses remote procedure calls based on XML (the XML-RPC technology). The integration of the Java language, XML and XML-RPC technologies makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application provides simple graphical user interface (GUI) access. The TCABR tokamak team, in collaboration with the IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on "Joint Research Using Small Tokamaks". © 2010 Elsevier B.V. All rights reserved.
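A shared data-access layer of this kind can be sketched with standard XML-RPC libraries; the snippet below is a minimal illustration in Python, in which the get_signal method, port, and arguments are hypothetical placeholders rather than the actual TCABR interface.

```python
# Minimal XML-RPC sketch of a shared data-access layer (illustrative only;
# the method name, port and arguments are hypothetical, not the TCABR API).
from xmlrpc.server import SimpleXMLRPCServer
import threading
import xmlrpc.client

def get_signal(shot: int, channel: str) -> list:
    """Return a (dummy) list of samples for a given shot and channel."""
    return [0.0, 0.1, 0.2]  # a real server would read the acquisition database

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True, logRequests=False)
server.register_function(get_signal, "get_signal")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Every collaborating laboratory retrieves data with the same call:
proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
print(proxy.get_signal(25432, "mirnov_01"))
```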
Abstract:
This project describes an authentication technique that is resistant to shoulder surfing: an attack in which an attacker gains access to private information by observing a user's interaction with a terminal, or by recording that interaction and studying the captured data, with the goal of obtaining unauthorized access to the target user's personal information. The technique described here relies on gestural analysis coupled with a secondary authentication channel based on button presses. The thesis presents and evaluates multiple alternative algorithms for gesture analysis, and furthermore assesses the effectiveness of the technique.
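As a minimal sketch of the two-channel idea (not the thesis's actual algorithms), the snippet below accepts a login only when a gesture-similarity score clears a threshold and an independently entered button sequence matches; the scoring function, threshold, and expected sequence are all made-up placeholders.

```python
# Illustrative two-channel check: gesture similarity AND button-press sequence.
# Scoring, threshold and the expected sequence are hypothetical placeholders.

def gesture_score(sample: list, template: list) -> float:
    """Toy similarity: inverse of mean absolute difference (placeholder)."""
    diffs = [abs(a - b) for a, b in zip(sample, template)]
    return 1.0 / (1.0 + sum(diffs) / len(diffs))

def authenticate(sample, template, buttons, expected_buttons,
                 threshold: float = 0.8) -> bool:
    # Both channels must succeed; observing either one alone is not enough.
    return gesture_score(sample, template) >= threshold and buttons == expected_buttons

print(authenticate([0.1, 0.2, 0.3], [0.1, 0.2, 0.35], ["A", "B"], ["A", "B"]))
```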
Abstract:
Although minutia-based fingerprint matching techniques are effective on good-quality images captured by optical sensors, they often perform poorly on low-quality images or on fingerprint images captured by small solid-state sensors. Solid-state fingerprint sensors are being increasingly deployed in a wide range of applications for user authentication, so it is necessary to develop new fingerprint-matching techniques that exploit other features to deal with images captured by such sensors. This paper presents a new fingerprint matching technique based on fingerprint ridge features. The technique was assessed on the MSU-VERIDICOM database, which consists of fingerprint impressions obtained from 160 users (4 impressions per finger) using a solid-state sensor. Combining the matching scores computed by the proposed ridge-based technique with minutia-based matching scores reduces the false non-match rate by approximately 1.7% at a false match rate of 0.1%. © 2005 IEEE.
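Score-level fusion of this kind is typically a normalize-then-combine step; the sketch below shows min-max normalization followed by a weighted sum, with the score ranges, weight, and threshold chosen arbitrarily for illustration rather than taken from the paper.

```python
# Illustrative score-level fusion: min-max normalize each matcher's score,
# then take a weighted sum. Ranges, weight and threshold are placeholders.

def min_max(score: float, lo: float, hi: float) -> float:
    return (score - lo) / (hi - lo)

def fused_score(minutia: float, ridge: float,
                minutia_range=(0.0, 100.0), ridge_range=(0.0, 1.0),
                w_minutia: float = 0.6) -> float:
    m = min_max(minutia, *minutia_range)
    r = min_max(ridge, *ridge_range)
    return w_minutia * m + (1.0 - w_minutia) * r

# Accept the claimed identity if the fused score clears a threshold tuned
# to the desired operating point (e.g. a 0.1% false match rate).
print(fused_score(minutia=72.0, ridge=0.81) >= 0.7)
```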
Abstract:
This is a system based on the mapping of access points, usable through a smartphone application on the Android platform. Development environments such as Eclipse were used, alongside LAMP and APIs useful for displaying maps from Google Maps. An online database was created so that users can share their own recordings (only those that do not require a password and can therefore be used by everyone). Everything was tested with an Android emulator.
Abstract:
Over the past several years, a number of design approaches have been introduced to support the deployment of wireless mesh networks (WMNs). We introduce a novel wireless mesh architecture that supports authentication and authorisation functionalities, enabling seamless WMN integration into the home organization's authentication and authorisation infrastructure. First, we introduce a novel authentication and authorisation mechanism for wireless mesh nodes. The mechanism is designed upon an existing federated access control approach, i.e. an AAI infrastructure that uses only the credentials held at the user's home organization in a federation. Second, we demonstrate how authentication and authorisation for end users is implemented using an existing web-based captive portal approach. Finally, we contrast the two and explain in detail the process flow of authorized access to network resources in wireless mesh networks. The goal of our wireless mesh architecture is to give researchers at remote locations easy broadband network access, with the additional advantage of secure access to their measurements irrespective of their location. It also provides an important basis for the real-life deployment of wireless mesh networks in support of environmental research.
Abstract:
The intention of an authentication and authorization infrastructure (AAI) is to simplify and unify access to different web resources. With a single login, a user can access web applications at multiple organizations. The Shibboleth authentication and authorization infrastructure is a standards-based, open source software package for web single sign-on (SSO) across or within organizational boundaries. It allows service providers to make fine-grained authorization decisions for individual access of protected online resources. The Shibboleth system is a widely used AAI, but only supports protection of browser-based web resources. We have implemented a Shibboleth AAI extension to protect web services using Simple Object Access Protocol (SOAP). Besides user authentication for browser-based web resources, this extension also provides user and machine authentication for web service-based resources. Although implemented for a Shibboleth AAI, the architecture can be easily adapted to other AAIs.
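In such a setup the web-service client carries its security token inside the SOAP envelope rather than relying on a browser session; the sketch below posts a minimal envelope with a placeholder assertion in a WS-Security header, using a hypothetical endpoint and operation rather than the actual interface of the Shibboleth extension.

```python
# Minimal SOAP call sketch: the security token travels in the envelope header.
# Endpoint, operation and the assertion placeholder are hypothetical.
import urllib.request

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <!-- a real client would embed the SAML assertion issued after SSO -->
      SAML_ASSERTION_PLACEHOLDER
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <getProtectedResource xmlns="urn:example"/>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    "https://sp.example.org/ws",  # hypothetical service endpoint
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:example#getProtectedResource"},
)
# urllib.request.urlopen(req)  # would perform the call against a real endpoint
```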
Abstract:
The properties of liquid crystals make them useful for the development of security devices for authentication and fake-detection applications. The induced orientation of liquid crystal molecules and birefringence are the two main properties used in security devices. Employing liquid crystals and dichroic colorants, we have developed devices that show, with the aid of a polarizer, multiple images on each side of the device. Rubbed polyimide is used as the alignment layer on each substrate of the LC cell. By rubbing the polyimide in different directions on each substrate, it is possible to create any kind of symbol, drawing or motif with a greyscale; the more complex the created device is, the more difficult it is to fake. Polarized light is needed to identify the motifs. Depending on whether the polarizer is located in front of the LC cell or behind it, different motifs from one or the other substrate are shown. The effect arises from the colour dye added as a dopant to the liquid crystal, the induced orientation and the twist structure. In practice, a grazing reflection off a dielectric surface is polarized enough to reveal the effect, and any LC flat-panel display can of course be used as a backlight as well.
Abstract:
This article proposes an innovative biometric technique based on the idea of authenticating a person on a mobile device by gesture recognition. To accomplish this aim, a user is recognized by a gesture he/she performs by moving his/her hand while holding a mobile device with an embedded accelerometer. As users are not able to repeat a gesture in the air exactly, an algorithm based on sequence alignment is developed to correct slight differences between repetitions of the same gesture. The robustness of this biometric technique has been studied in two different tests, analyzing a database of 100 users with real falsifications. Equal Error Rates of 2.01% and 4.82% have been obtained under a zero-effort and an active impostor attack, respectively. A permanence evaluation is also presented from the analysis of gestures repeated by 25 users in 10 sessions over a month. Furthermore, two different gesture databases have been developed: one made up of 100 genuine identifying 3-D hand gestures with 3 impostors trying to falsify each of them, and another with 25 volunteers repeating their identifying 3-D hand gesture in 10 sessions over a month. To the best of our knowledge, these databases are the most extensive in published studies.
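The alignment step can be illustrated with dynamic time warping, a standard way to absorb timing differences between repetitions of a gesture; the sketch below aligns two toy 1-D acceleration traces and stands in for, but is not, the sequence-alignment algorithm the article develops.

```python
# Dynamic time warping sketch: distance between two 1-D acceleration traces,
# tolerant of small timing differences between repetitions of a gesture.
# This stands in for, but is not, the article's sequence-alignment algorithm.

def dtw_distance(a: list, b: list) -> float:
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

enrolled = [0.0, 0.4, 1.0, 0.4, 0.0]
attempt  = [0.0, 0.5, 0.9, 0.5, 0.1, 0.0]   # same shape, slightly slower
print(dtw_distance(enrolled, attempt))       # small distance -> likely genuine
```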
Abstract:
Digital services and communications in vehicular scenarios provide essential assets to improve road transport in several ways, such as reducing accidents, improving traffic efficiency and optimizing the transport of goods and people. Vehicular communications typically rely on VANETs (Vehicular Ad hoc Networks). In these networks, vehicles communicate with each other without the need for infrastructure. VANETs are mainly oriented to disseminating information to the vehicles in a certain geographic area for time-critical services such as safety warnings, but they present very challenging requirements that have not yet been successfully fulfilled. Some of these challenges are: channel saturation due to simultaneous radio access by many vehicles, routing protocols in rapidly varying topologies, minimum quality-of-service assurance, and security mechanisms to efficiently detect and neutralize malicious attacks. Vehicular services can be classified into four important groups: safety, efficiency, sustainability and infotainment. The benefits of these services for the transport sector are clear, but many technological and business challenges need to be faced before a real mass-market deployment. Current service delivery platforms are not prepared to fulfil the needs of this complex environment, whose requirements are restrictive because of the criticality of some services. To overcome this situation, we propose a solution called VISIONS, "Vehicular communication Improvement: Solution based on IMS Operational Nodes and Services". VISIONS builds on the IMS subsystem and NGN enablers, and follows the CALM reference architecture standardized by ISO. It also avoids the use of Road Side Units (RSUs), reducing complexity and the high costs of deployment and maintenance. We demonstrate the benefits in the following areas: 1. VANET efficiency. VISIONS provides a mechanism for vehicles to access valuable information from IMS and its capabilities through a cellular channel. This improvement occurs in two relevant areas: a. Routing mechanisms. These protocols are responsible for carrying information from one vehicle to another (or to a group of vehicles) using multihop mechanisms. We do not propose a new algorithm, but rather the use of VANET topology information provided through our solution to enrich the performance of these protocols. b. Security. Many aspects of security (privacy, key management, authentication, access control, revocation mechanisms, etc.) remain unresolved in vehicular communications. Our solution efficiently disseminates revocation information to neutralize malicious nodes in the VANET. 2. Service delivery platform. It is based on extended enablers, reference architectures, standard protocols and open APIs. By following this approach, we reduce the costs and resources needed for service development, deployment and maintenance. To quantify these benefits in VANETs, we provide an analytical model of the system and simulate our solution in realistic scenarios. The simulation results demonstrate how VISIONS improves the performance of relevant routing protocols and neutralizes security attacks more efficiently than the widely proposed RSU-based solutions. Finally, we design an innovative social network service based on our platform, explaining how VISIONS facilitates the deployment and usage of complex capabilities.
Abstract:
This document will be divided into two main parts. The first will be a classification of authentication techniques. We will search the main electronic databases for papers related to authentication techniques, then summarize the related papers and show what classifications they use for the authentication techniques. After all of the documents have been read and summarized, we will analyse them and group the authentication techniques into the classifications found. The second part of the document will focus on the study of usability attributes of authentication techniques, in order to learn how authentication techniques compare to one another on those attributes. We will search the main electronic databases for papers related to the usability attributes of authentication techniques, based on the usability definition of ISO/IEC 25010 (SQuaRE) and its attributes. We will then summarize the related papers and show which authentication methods they describe and which usability attributes they measure. After all of the documents have been read and summarized, we will analyse them by usability attribute. Finally, we will consolidate the results to show which authentication techniques have better usability in terms of a specific usability attribute. This will help practitioners who are interested in using authentication methods but want or need to focus on a specific usability attribute; they will be able to use this work as a guide to choosing the option that best fits their purpose.
Abstract:
The extraordinary growth of new information technologies, the development of the Internet of Things, electronic commerce, social networks, mobile telephony, and cloud computing and storage has provided great benefits in all areas of society. Alongside these benefits come new challenges for the protection and privacy of information and its content, such as identity theft and the loss of confidentiality and integrity of electronic documents and communications. This is exacerbated by the lack of a clear boundary between the personal world and the business world as far as access to information is concerned. In both worlds, Cryptography has played a key role by providing the tools needed to ensure the confidentiality, integrity and availability of both personal data and information. Biometrics, in turn, has proposed and offered different techniques to authenticate individuals through personal traits such as fingerprints, iris, hand geometry, voice, gait, etc. Each of these two sciences, Cryptography and Biometrics, provides solutions to specific problems of data protection and user authentication, and both would be greatly strengthened if certain characteristics of each were combined towards common objectives. It is therefore imperative to intensify research in this area, combining the mathematical algorithms and primitives of Cryptography with Biometrics to meet the growing demand for more secure and usable techniques that simultaneously improve data protection and user identification. In this combination, cancelable biometrics is a cornerstone of the user authentication and identification process, since it provides revocation and cancellation properties for biometric traits. The contribution of this thesis addresses the main aspect of Biometrics, i.e. the secure and efficient authentication of users through their biometric templates, from three different approaches: 1. Designing a fuzzy crypto-biometric scheme that applies cancelable biometric principles to exploit the fuzziness of biometric templates while dealing with intra- and inter-user variability, without compromising the templates extracted from legitimate users. 2. Designing a new Similarity Preserving Hash Function (SPHF). Such functions are currently used in the digital forensics field to find similarities in the content of distinct but similar files and quantify to what extent the files can be considered equal; the function defined in this research work, besides improving the results of the main functions developed to date, extends their use to iris template comparison. 3. Developing a new mechanism for comparing iris templates that treats the templates as signals and compares them using the Walsh-Hadamard transform (complemented with three other algorithms). The results obtained are excellent in view of the security and privacy requirements mentioned above. Each of the three schemes designed has been implemented and tested in scenarios that simulate real situations: the fuzzy crypto-biometric scheme and the SPHF were implemented in the Java language, while the Walsh-Hadamard-based process was implemented in Matlab. The experiments used a database of iris images (CASIA-IrisV2) to simulate a population of system users. The new SPHF is a special case, since before being applied to the Biometrics field it was also tested for its applicability in the digital forensics field by comparing similar and dissimilar files and images. For each of the schemes, the efficiency and effectiveness ratios for user authentication, i.e. the False Non-Match and False Match Rates, were calculated with different parameters and cases to analyse their behaviour.
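As a flavour of the transform-based comparison, the sketch below applies a fast Walsh-Hadamard transform to two toy binary templates and correlates the spectra; the templates and the similarity measure are made-up placeholders, and the thesis's three complementary algorithms are omitted.

```python
# Fast Walsh-Hadamard transform sketch for comparing two binary templates.
# Templates and the correlation-style similarity are illustrative placeholders;
# the thesis complements the transform with three further algorithms.

def fwht(x: list) -> list:
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

def similarity(a: list, b: list) -> float:
    """Normalized correlation of the two Walsh spectra."""
    fa, fb = fwht(a), fwht(b)
    num = sum(p * q for p, q in zip(fa, fb))
    den = (sum(p * p for p in fa) * sum(q * q for q in fb)) ** 0.5
    return num / den

t1 = [1, 0, 1, 1, 0, 0, 1, 0]          # toy 8-bit "iris codes"
t2 = [1, 0, 1, 0, 0, 0, 1, 0]          # one bit flipped
print(similarity(t1, t2))               # close to 1.0 for similar templates
```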
Abstract:
Eight phenolic acids and two abscisic acid isomers in Australian honeys from five botanical species (Melaleuca, Guioa, Lophostemon, Banksia and Helianthus) have been analyzed in relation to their botanical origins. Total phenolic acids in these honeys range from 2.13 mg/100 g in sunflower (Helianthus annuus) honey to 12.11 mg/100 g in tea tree (Melaleuca quinquenervia) honey, with the amounts of individual acids varying. Tea tree honey shows a phenolic profile of gallic, ellagic, chlorogenic and coumaric acids, similar to the phenolic profile of an Australian Eucalyptus honey (bloodwood or Eucalyptus intermedia honey). The main difference between tea tree and bloodwood honeys is the contribution of chlorogenic acid to their total phenolic profiles. In Australian crow ash (Guioa semiglauca) honey, a characteristic phenolic profile consisting mainly of gallic acid and abscisic acid could be used as the floral marker. In brush box (Lophostemon conferta) honey, the phenolic profile, comprising mainly gallic acid and ellagic acid, could be used to differentiate this honey not only from the other Australian non-Eucalyptus honeys but also from a Eucalyptus honey (yellow box or Eucalyptus melliodora honey). However, this Eucalyptus honey could not be differentiated from brush box honey based only on their flavonoid profiles. Similarly, the phenolic profile of heath (Banksia ericifolia) honey, comprising mainly gallic acid, an unknown phenolic acid (Phl) and coumaric acid, could also be used to differentiate this honey from tea tree and bloodwood honeys, which have similar flavonoid profiles. Coumaric acid is a principal phenolic acid in Australian sunflower honey and could thus be used together with gallic acid for authentication. These results show that HPLC analysis of phenolic acids and abscisic acids in Australian floral honeys could assist the differentiation and authentication of the honeys. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
Security protocols preserve essential properties, such as confidentiality and authentication, of electronically transmitted data. However, such properties cannot be directly expressed or verified in contemporary formal methods. Via a detailed example, we describe the phases needed to formalise and verify the correctness of a security protocol in the state-oriented Z formalism.
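As a taste of the notation, a security property can be stated as a Z state-schema invariant; the sketch below (using the zed-csp LaTeX conventions) says only authenticated agents may hold the session key, and is a generic illustration rather than the protocol treated in the article.

```latex
% Minimal Z sketch (zed-csp LaTeX syntax): a generic confidentiality-style
% invariant, not the protocol verified in the article.
\begin{zed}
  [AGENT]                        % the given set of all agents
\end{zed}

\begin{schema}{KeyState}
  holders : \power AGENT \\      % agents currently holding the session key
  authenticated : \power AGENT   % agents that have completed authentication
\where
  holders \subseteq authenticated   % confidentiality: no key without auth
\end{schema}
```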
Abstract:
This dissertation develops an image processing framework with unique feature extraction and similarity measurements for human face recognition in the thermal mid-wave infrared portion of the electromagnetic spectrum. The goal of this research is to design specialized algorithms that extract facial vasculature information, create a thermal facial signature and identify the individual. The objective is to use such findings in support of a biometrics system for human identification with a high degree of accuracy and a high degree of reliability, the latter owing to the minimal to nonexistent risk of alteration of the intrinsic physiological characteristics seen through thermal infrared imaging. The proposed thermal facial signature recognition is fully integrated and consolidates the main and critical steps of feature extraction, registration, matching through similarity measures, and validation through testing of our algorithm on a database, referred to as C-X1, provided by the Computer Vision Research Laboratory at the University of Notre Dame. Feature extraction was accomplished by first registering the infrared images to a reference image using the functional MRI of the Brain (FMRIB) Linear Image Registration Tool (FLIRT), modified to suit thermal infrared images. This was followed by segmentation of the facial region using an advanced localized contouring algorithm applied to anisotropically diffused thermal images. Thermal features were then extracted from the facial images by performing morphological operations, such as opening and top-hat segmentation, to yield thermal signatures for each subject. Four thermal images taken over a period of six months were used to generate thermal signatures and a thermal template for each subject; the thermal template contains only the most prevalent and consistent features. Finally, a similarity measure technique was used to match signatures to templates, and Principal Component Analysis (PCA) was used to validate the results of the matching process. Thirteen subjects were used to test the developed technique on an in-house thermal imaging system. Matching using a Euclidean-based similarity measure showed 88% accuracy for skeletonized signatures and templates, and 90% accuracy for anisotropically diffused signatures and templates. We also employed a Manhattan-based similarity measure and obtained an accuracy of 90.39% for skeletonized and diffused templates and signatures. An average 18.9% improvement in the similarity measure was obtained when using diffused templates. The Euclidean- and Manhattan-based similarity measures were also applied to skeletonized signatures and templates of 25 subjects in the C-X1 database. The highly accurate results obtained in the matching process, along with the generalized design process, clearly demonstrate the ability of the thermal infrared system to be used with other thermal imaging based systems and related databases. A novel user-initialized registration of thermal facial images has been successfully implemented. Furthermore, the novel approach of developing a thermal signature template from four images taken at various times ensured that unforeseen changes in the vasculature did not affect the biometric matching process, as it relied on consistent thermal features.
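The two similarity measures named above are standard vector distances; the sketch below computes both for a pair of toy feature vectors and maps them to match scores, with the vectors and the distance-to-score mapping being illustrative placeholders rather than the dissertation's pipeline.

```python
# Euclidean and Manhattan distances between two feature vectors, converted
# to similarity scores. Vectors and the conversion are illustrative only.
import math

def euclidean(a: list, b: list) -> float:
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def manhattan(a: list, b: list) -> float:
    return sum(abs(p - q) for p, q in zip(a, b))

def to_similarity(distance: float) -> float:
    """Map a distance in [0, inf) to a score in (0, 1]; higher means closer."""
    return 1.0 / (1.0 + distance)

signature = [0.12, 0.80, 0.33, 0.54]   # toy thermal feature vectors
template  = [0.10, 0.78, 0.35, 0.50]
print(to_similarity(euclidean(signature, template)))
print(to_similarity(manhattan(signature, template)))
```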
Abstract:
The ever-growing interest in scientific techniques able to characterise the materials of a painting and retrace the steps behind its execution has made such techniques widely accepted in the investigation of paintings. This research discusses issues emerging from attribution and authentication studies and proposes best practice for the characterisation of materials and techniques, favouring the contextualisation of the results in an integrated approach; the work aims to systematically classify paintings into categories that aid the examination of objects. A first grouping of paintings is based on the information initially available about them, identifying four categories. A focus of this study is the examination of case studies, spanning from the 16th to the 20th century, to evaluate and validate the different protocols associated with each category, to show problems arising from paintings, and to explain the advantages and limits of the approach. The research methodology incorporates a combined set of scientific techniques (non-invasive, such as technical imaging and XRF; micro-invasive, such as optical microscopy, SEM-EDS, FTIR, Raman microscopy and, in one case, radiocarbon dating) to answer the questions and, where necessary for the classification, exhaustively characterise the materials of the paintings, as the creation of, and contribution to, shared technical databases on various artists and their evolution over time is an objective tool that benefits this kind of study. Close collaboration among different professionals is an essential aspect of this research for studying a painting comprehensively, as the integration of stylistic, documentary and provenance studies corroborates the scientific findings and helps in the successful contextualisation of the results and the reconstruction of the history of the object.