18 results for Maintenance As A Basic Human Right
Abstract:
Intelligent systems are now inherent to society, supporting a synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems, and poor software behavior may invalidate any improvement attempt. In addition, data-driven machine learning algorithms are the basis of human-centered applications, and their interpretability is one of the most important features of computational systems. Software maintenance is a critical discipline for supporting automatic, life-long system operation. As most software registers its internal events by means of logs, log analysis is an approach to keeping systems operational. Logs are Big Data assembled in high-throughput streams; they are unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods that provide maintenance solutions for anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analysis. LP provides a deeper semantic interpretation of anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed to address the automatic parsing of system logs. All the methods apply recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences. Regarding AD accuracy, FBeM achieved (85.64 ± 3.69)%, eGNN reached (96.17 ± 0.78)%, eGFC obtained (92.48 ± 1.21)%, and eLP reached (96.05 ± 1.04)%. Besides being competitive, eLP additionally generates a log grammar and presents a higher level of model interpretability.
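To make the recursive granulation cycle mentioned above concrete, the following is a minimal, illustrative sketch of a single-pass evolving granular classifier. The names Granule, EvolvingGranularClassifier, create_radius, and stale_after are hypothetical simplifications; the actual FBeM, eGNN, and eGFC models rely on fuzzy membership functions and also merge overlapping granules, which is omitted here for brevity.

import math
from dataclasses import dataclass

@dataclass
class Granule:
    """Hyperbox information granule tagged with a class label (illustrative only)."""
    lower: list
    upper: list
    label: str
    last_used: int = 0

    def contains(self, x):
        return all(l <= xi <= u for l, xi, u in zip(self.lower, x, self.upper))

    def expand(self, x, t):
        # Recursive update: stretch the box bounds to cover the new sample.
        self.lower = [min(l, xi) for l, xi in zip(self.lower, x)]
        self.upper = [max(u, xi) for u, xi in zip(self.upper, x)]
        self.last_used = t

    def distance(self, x):
        center = [(l + u) / 2.0 for l, u in zip(self.lower, self.upper)]
        return math.dist(center, x)


class EvolvingGranularClassifier:
    """Toy stream classifier: granules are created, expanded, and pruned online."""

    def __init__(self, create_radius=0.5, stale_after=200):
        self.granules = []
        self.create_radius = create_radius  # beyond this distance a new granule is created
        self.stale_after = stale_after      # delete granules unused for this many samples
        self.t = 0

    def predict(self, x):
        if not self.granules:
            return None
        return min(self.granules, key=lambda g: g.distance(x)).label

    def learn_one(self, x, label):
        # Single-pass update on one labelled stream sample.
        self.t += 1
        same_class = [g for g in self.granules if g.label == label]
        nearest = min(same_class, key=lambda g: g.distance(x), default=None)
        if nearest is not None and (nearest.contains(x) or nearest.distance(x) <= self.create_radius):
            nearest.expand(x, self.t)                                       # update
        else:
            self.granules.append(Granule(list(x), list(x), label, self.t))  # create
        # Delete granules that have not been activated recently.
        self.granules = [g for g in self.granules if self.t - g.last_used <= self.stale_after]

In a log-based AD setting, each x would be a feature vector extracted from a window of parsed log events, and predict would be called before learn_one so the model is always evaluated on unseen samples as the stream flows.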
Abstract:
The chapters of the thesis focus on a limited set of selected themes in EU privacy and data protection law. Chapter 1 sets out the general introduction to the research topic. Chapter 2 describes the methodology used in the research. Chapter 3 conceptualises the basic notions from a legal standpoint. Chapter 4 examines the current regulatory regime applicable to digital health technologies, healthcare emergencies, privacy, and data protection. Chapter 5 provides case studies on applications deployed in the Covid-19 scenario, from the perspective of privacy and data protection. Chapter 6 addresses the post-Covid European regulatory initiatives on the subject matter and their potential effects on privacy and data protection. Chapter 7 is the outcome of a six-month internship with a company in Italy and focuses on the protection of fundamental rights through common standardisation and certification, demonstrating that such standards can serve as supporting tools to guarantee the right to privacy and data protection in digital health technologies. The thesis concludes with the observation that finding and transposing European privacy and data protection standards into scenarios such as public healthcare emergencies, where digital health technologies are deployed, requires rapid coordination between the European Data Protection Authorities and the Member States to guarantee that individual privacy and data protection rights are ensured.
Abstract:
This research investigates the use of Artificial Intelligence (AI) systems for profiling and decision-making, and the consequences this poses for the rights and freedoms of individuals. In particular, the research considers that automated decision-making systems (ADMs) are opaque, can be biased, and rely on correlation-based logic. For these reasons, ADMs do not take decisions the way human beings do. Against this background, the risks for the rights of individuals, combined with the demand for transparency of algorithms, have created a debate on the need for a new 'right to explanation'. Assuming that, except in cases provided for by law, a decision made by a human does not give rise to a right to explanation, the question arises as to whether, if the decision is made by an algorithm, it is necessary to establish a right to explanation for the decision-subject. The research therefore addresses a right to explanation of automated decision-making, examining the relationship between today's technology and the legal concepts of explanation, reasoning, and transparency. In particular, it focuses on the existence and scope of the right to explanation, considering the legal and technical issues surrounding the use of ADMs. The research analyses the use of AI and the problems arising from it from a legal perspective, studying the EU legal framework, especially in the data protection field. In this context, part of the research focuses on transparency requirements under the GDPR (namely, Articles 13–15, Article 22, and Recital 71). The research aims to outline an interpretative framework for such a right and to make recommendations about its development, providing guidelines for an adequate explanation of automated decisions. Hence, the thesis analyses what an explanation might consist of and the benefits of explainable AI, examined from both legal and technical perspectives.
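As an indication of what a technical explanation might consist of, the sketch below shows a toy feature-level explanation of an automated decision. The scoring model, feature names, weights, and threshold are invented for illustration and make no claim about what the GDPR requires; it merely shows one possible form of the "meaningful information about the logic involved" that a decision-subject could receive.

# Hypothetical linear credit-scoring decision with per-feature contributions.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
BIAS = 0.1
THRESHOLD = 0.0

def decide_and_explain(applicant: dict) -> dict:
    """Return the decision together with each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {
        "decision": "approve" if score >= THRESHOLD else "reject",
        "score": round(score, 3),
        # Contributions sorted by magnitude: the factors that weighed most on the outcome.
        "explanation": sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True),
    }

print(decide_and_explain({"income": 0.5, "debt_ratio": 0.8, "years_employed": 2.0}))

For opaque, non-linear models the same kind of per-feature attribution has to be approximated post hoc, which is precisely where the legal demand for transparency and the technical limits of explainable AI meet.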