902 results for Supervisory Control and Data Acquisition (SCADA)
3D Surveying and Data Management towards the Realization of a Knowledge System for Cultural Heritage
Abstract:
The research activities involved the application of Geomatic techniques in the Cultural Heritage field, developed along two themes. Firstly, the application of high-precision surveying techniques for the restoration and interpretation of relevant monuments and archaeological finds. The main case concerns the generation of a high-fidelity 3D model of the Fountain of Neptune in Bologna. In this work, aimed at the restoration of the artefact, both the geometric and radiometric aspects were crucial. The final product formed the basis of a 3D information system, a shared tool through which the different professionals involved in the restoration activities contributed in a multidisciplinary approach. Secondly, the arrangement of 3D databases for a Building Information Modeling (BIM) approach, a process which involves the generation and management of digital representations of the physical and functional characteristics of historical buildings, towards a so-called Historical Building Information Model (HBIM). A first application was conducted for the church of San Michele in Acerboli in Santarcangelo di Romagna. The survey was performed by integrating classical and modern Geomatic techniques, and the point cloud representing the church was used to develop an HBIM model in which the relevant information connected to the building could be stored and georeferenced. A second application concerns the domus of Obellio Firmo in Pompeii, likewise surveyed by integrating classical and modern Geomatic techniques. A historical analysis permitted the definition of construction phases and the organization of a database of materials and constructive elements. The goal is to obtain a federated model able to manage the documentary, analytical and reconstructive aspects.
Abstract:
The present doctoral thesis discusses ways to improve the performance of driving simulators, provides objective measures for a road safety evaluation methodology based on drivers' behaviour and response, and investigates drivers' adaptation to driving assistance systems. The activities are divided into two macro areas: driving simulation studies and on-road experiments. During the driving simulation experimentation, a classical motion cueing algorithm with logarithmic scaling was implemented in the 2DOF motion simulator, and the motion cues were found desirable by the participants. In addition, it was found that motion stimuli could change the behaviour of drivers in terms of depth/distance perception. During the on-road experimentation, driver gaze behaviour was investigated to find objective measures of the visibility of road signs and the reaction time of drivers. Sensor fusion and the vehicle monitoring instruments were found useful for an objective assessment of the pavement condition and the drivers' performance. In the last chapter of the thesis, the safety assessment during the use of level 1 automated driving (ACC) is discussed through both simulator and on-road experiments. The drivers' visual behaviour was investigated in both studies with an innovative classification method to find the epochs of driver distraction. The behavioural adaptation to ACC showed that drivers may divert their attention away from the driving task to engage in secondary, non-driving-related tasks.
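The abstract does not detail the motion cueing algorithm; as a minimal sketch, assuming a classical washout structure with a logarithmic scaling stage, a high-pass washout channel and tilt coordination (all gains, break frequencies and the synthetic input below are illustrative assumptions, not values from the thesis):

    # Illustrative classical motion cueing (washout) pipeline with logarithmic
    # scaling. Parameters are assumed, not taken from the thesis.
    import numpy as np
    from scipy import signal

    def log_scale(a, k=2.0):
        """Compress large accelerations with a logarithmic law (assumed form)."""
        return np.sign(a) * k * np.log1p(np.abs(a) / k)

    fs = 100.0                                    # sample rate [Hz] (assumed)
    t = np.arange(0, 10, 1 / fs)
    a_vehicle = 3.0 * (t > 2) - 3.0 * (t > 6)     # synthetic surge acceleration [m/s^2]

    # 1) Non-linear scaling keeps cues within the platform envelope.
    a_scaled = log_scale(a_vehicle)

    # 2) Second-order high-pass washout returns the platform to neutral.
    b_hp, a_hp = signal.butter(2, 0.5 / (fs / 2), btype="highpass")
    a_platform = signal.lfilter(b_hp, a_hp, a_scaled)

    # 3) Tilt coordination: low-frequency content rendered as a pitch angle.
    b_lp, a_lp = signal.butter(2, 0.5 / (fs / 2), btype="lowpass")
    g = 9.81
    pitch_cmd = np.arcsin(np.clip(signal.lfilter(b_lp, a_lp, a_scaled) / g, -1, 1))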
Abstract:
In recent decades, global food supply chains have had to deal with increasing awareness among stakeholders and consumers about safety, quality, and sustainability. In order to address these new challenges for food supply chain systems, an integrated approach to design, control, and optimize the product life cycle is required. Therefore, it is essential to introduce new models, methods, and decision-support platforms tailored to perishable products. This thesis aims to provide novel practice-ready decision-support models and methods to optimize the logistics of food items with an integrated and interdisciplinary approach. It proposes a comprehensive review of the main peculiarities of perishable products and the environmental stresses accelerating their quality decay. Then, it focuses on top-down strategies to optimize the supply chain system from the strategic to the operational decision level. Based on the criticality of the environmental conditions, the dissertation evaluates the main long-term logistics investment strategies to preserve product quality. Several models and methods are proposed to optimize the logistics decisions to enhance the sustainability of the supply chain system while guaranteeing adequate food preservation. The models and methods proposed in this dissertation promote a climate-driven approach integrating climate conditions and their consequences on the quality decay of products in innovative models supporting the logistics decisions. Given the uncertain nature of the environmental stresses affecting the product life cycle, an original stochastic model and solving method are proposed to support practitioners in controlling and optimizing supply chain systems when facing uncertain scenarios. The application of the proposed decision-support methods to real case studies proved their effectiveness in increasing the sustainability of the perishable product life cycle. The dissertation also presents an industry application of a global food supply chain system, further demonstrating how the proposed models and tools can be integrated to provide significant savings and sustainability improvements.
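The climate-driven quality-decay coupling mentioned above is often expressed, in the general literature, as first-order kinetic decay with an Arrhenius temperature dependence; the sketch below illustrates that idea with assumed parameter values and a synthetic temperature scenario, not the models or data from the thesis.

    # Illustrative first-order quality decay with Arrhenius temperature dependence.
    # Activation energy, reference rate and the temperature profile are assumptions.
    import numpy as np

    EA = 60e3          # activation energy [J/mol] (assumed)
    R = 8.314          # gas constant [J/(mol K)]
    K_REF = 0.05       # decay rate at the reference temperature [1/day] (assumed)
    T_REF = 278.15     # reference temperature: 5 degC in kelvin

    def decay_rate(temp_k):
        """Arrhenius rate relative to the reference temperature."""
        return K_REF * np.exp(-EA / R * (1.0 / temp_k - 1.0 / T_REF))

    # Hourly temperature scenario over a 5-day shipment (assumed profile).
    hours = np.arange(0, 5 * 24)
    temp_k = 278.15 + 4.0 * (hours % 24 > 8)   # daytime warming of the load

    # Integrate dQ/dt = -k(T) * Q with a forward Euler step (dt = 1 hour).
    q = 1.0
    for tk in temp_k:
        q -= decay_rate(tk) * q / 24.0          # rate is per day, step is one hour

    print(f"remaining quality after 5 days: {q:.2f}")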
Abstract:
The term Artificial Intelligence has acquired a lot of baggage since its introduction, and in its current incarnation it is largely synonymous with Deep Learning. The sudden availability of data and computing resources has opened the gates to myriad applications. Not all are created equal, though, and problems may arise especially in fields not closely related to the tasks that concern the tech companies that spearheaded DL. The perspective of practitioners seems to be changing, however. Human-Centric AI has emerged in the last few years as a new way of thinking about DL and AI applications from the ground up, with special attention to their relationship with humans. The goal is to design systems that integrate gracefully into already established workflows, since in many real-world scenarios AI may not be good enough to completely replace humans; often such replacement may even be unneeded or undesirable. Another important perspective comes from Andrew Ng, a DL pioneer, who recently started shifting the focus of development from "better models" towards better, and smaller, data; he calls this approach Data-Centric AI. Without downplaying the importance of pushing the state of the art in DL, we must recognize that if the goal is creating a tool for humans to use, more raw performance may not align with more utility for the final user. A Human-Centric approach is compatible with a Data-Centric one, and we find that the two overlap nicely when human expertise is used as the driving force behind data quality. This thesis documents a series of case studies where these approaches were employed, to different extents, to guide the design and implementation of intelligent systems. We found that human expertise proved crucial in improving datasets and models. The last chapter is a slight deviation, with studies on the pandemic that still preserve the human- and data-centric perspective.
Abstract:
The thesis represents the conclusive outcome of the European Joint Doctorate programme in Law, Science & Technology funded by the European Commission through the Marie Skłodowska-Curie Innovative Training Networks instrument within H2020, grant agreement n. 814177. The tension between data protection and privacy on one side, and the need to grant further uses of processed personal data on the other, is investigated, tracing the technological development of the de-anonymization/re-identification risk with an explorative survey. After acknowledging its extent, it is questioned whether a certain degree of anonymity can still be granted, focusing on a double perspective: an objective and a subjective one. The objective perspective focuses on the data processing models per se, while the subjective perspective investigates whether the distribution of roles and responsibilities among stakeholders can ensure data anonymity.
Abstract:
The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics and electronics are all key assets which depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it difficult to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), this is still a trial-and-error process. This becomes even more complex when dealing with advanced functional materials: their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields. Many techniques and instruments are continuously developed to enable new possibilities, both in the experimental and computational realms, and scientists strive to adopt cutting-edge technologies in order to make progress. However, the field is strongly affected by unorganized file management and a proliferation of custom data formats and storage procedures, both in experimental and computational research. Results are difficult to find, interpret and re-use, and a huge amount of time is spent interpreting and re-organizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above. Specifically, it covers developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing methods for organizing non-homogeneous materials data; automating the process of using device simulations to train machine learning models; and dealing with scattered experimental data and using it to discover new patterns.
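As a generic illustration of the feature-based acceleration of computational predictions described above (the actual descriptors, targets and models developed in the thesis are not reproduced here), a regressor can be trained on precomputed molecular descriptors and then queried in place of an expensive simulation:

    # Illustrative sketch only: descriptors, target and model are assumptions.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    n_molecules, n_features = 500, 32

    # Stand-in for precomputed descriptors and a property from costly simulations.
    X = rng.normal(size=(n_molecules, n_features))
    y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n_molecules)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.05)
    model.fit(X_train, y_train)

    print("MAE on held-out molecules:",
          round(mean_absolute_error(y_test, model.predict(X_test)), 3))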
Abstract:
The purpose of this research study is to discuss the privacy and data protection-related regulatory and compliance challenges posed by digital transformation in healthcare in the wake of the COVID-19 pandemic. The public health crisis accelerated the development of patient-centred remote/hybrid healthcare delivery models that make increased use of telehealth services and related digital solutions. The large-scale uptake of IoT-enabled medical devices and wellness applications, and the offering of healthcare services via healthcare platforms (online doctor marketplaces), have catalysed these developments. However, the use of new enabling technologies (IoT, AI) and the platformisation of healthcare pose complex challenges to the protection of patients' privacy and personal data. This happens at a time when the EU is drawing up a new regulatory landscape for the use of data and digital technologies. Against this background, the study presents an interdisciplinary (normative and technology-oriented) critical assessment of how the new regulatory framework may affect privacy and data protection requirements regarding the deployment and use of Internet of Health Things (hardware) devices and interconnected software (AI systems). The study also assesses key privacy and data protection challenges that affect healthcare platforms (online doctor marketplaces) in their offering of video API-enabled teleconsultation services and their (anticipated) integration into the European Health Data Space. The overall conclusion of the study is that regulatory deficiencies may create integrity risks for the protection of privacy and personal data in telehealth due to uncertainties about the proper interplay, legal effects and effectiveness of (existing and proposed) EU legislation. The proliferation of normative measures may increase compliance costs, hinder innovation and, ultimately, deprive European patients of state-of-the-art digital health technologies, which is, paradoxically, the opposite of what the EU plans to achieve.
Abstract:
In pursuit of aligning with the European Union's ambitious target of achieving a carbon-neutral economy by 2050, researchers, vehicle manufacturers, and original equipment manufacturers have been at the forefront of exploring cutting-edge technologies for internal combustion engines. The introduction of these technologies has significantly increased the effort required to calibrate the models implemented in the engine control units. Consequently, the development of tools that reduce the costs and time required during the experimental phases has become imperative. Additionally, to comply with ever-stricter limits on CO2 emissions, it is crucial to develop advanced control systems that enhance traditional engine management systems in order to reduce fuel consumption. Furthermore, the introduction of new homologation cycles, such as the real driving emissions cycle, compels manufacturers to bridge the gap between engine operation in laboratory tests and real-world conditions. Within this context, this thesis showcases the performance and cost benefits achievable through the implementation of an auto-adaptive closed-loop control system, leveraging in-cylinder pressure sensors in a heavy-duty diesel engine designed for mining applications. Additionally, the thesis explores the promising prospect of real-time self-adaptive machine learning models, particularly neural networks, to develop an automatic system that uses in-cylinder pressure sensors for the precise calibration of the target combustion phase and optimal spark advance in spark-ignition engines. To facilitate the application of these combustion-feedback-based algorithms in production applications, the thesis discusses the results obtained from the development of a cost-effective sensor for indirect cylinder pressure measurement. Finally, to ensure the quality control of the proposed affordable sensor, the thesis provides a comprehensive account of the design and validation process for a piezoelectric washer test system.
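The closed-loop combustion-phase control mentioned above is commonly built around an indicator such as CA50 (the crank angle at 50% heat release) derived from the in-cylinder pressure trace. The following is only a hedged sketch of that general idea, with an assumed integral correction law, fictitious plant response and made-up gains; it is not the controller or the calibration developed in the thesis.

    # Assumed sketch: cycle-by-cycle spark advance correction from a CA50 feedback.
    CA50_TARGET = 8.0      # deg ATDC, assumed target combustion phase
    KI = 0.2               # integral gain [deg SA per deg CA50 error], assumed

    def ca50_from_pressure(spark_advance_deg):
        """Stand-in for heat-release analysis of the measured pressure trace:
        a fictitious engine where +1 deg of spark advance moves CA50 about
        0.8 deg earlier."""
        return 14.0 - 0.8 * spark_advance_deg

    spark_advance = 5.0    # initial map value [deg BTDC], assumed
    for cycle in range(20):
        ca50 = ca50_from_pressure(spark_advance)
        error = ca50 - CA50_TARGET
        spark_advance += KI * error      # advance the spark if combustion is late
        print(f"cycle {cycle:2d}: SA = {spark_advance:5.2f} deg, CA50 = {ca50:5.2f} deg")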
Abstract:
This work is divided into two parts. In the first, the theory of Lyapunov exponents and the theory of Optimal Control are presented from a geometric point of view. The main results of these two theories are reported and the proofs of the most important theorems are sketched. In the second part, using these two theories, we attempt to find an estimate for the extremal Lyapunov exponents associated with switched linear dynamical systems on the Lie group SL2(R). We consider only the case of a system generated by two matrices A, B ∈ sl2(R) which generate the whole Lie algebra. We divide the problem into several possible cases, depending on the position, in the three-dimensional space sl2(R), of the segment with endpoints A and B with respect to the cone of nilpotent matrices. For each of these cases, we find a candidate optimal solution. We reformulate the original problem of estimating the Lyapunov exponents as an Optimal Control problem; we then apply the Pontryagin Maximum Principle and find a control and the corresponding trajectory satisfying that Principle.
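For reference, a standard way to state the problem described above (the notation here is assumed, not reproduced from the thesis): the switched system on SL2(R) and its extremal Lyapunov exponent, which is then recast as an Optimal Control problem to which the Pontryagin Maximum Principle applies.

    \[
      \dot X(t) \;=\; \bigl(u(t)\,A + (1-u(t))\,B\bigr)\,X(t),
      \qquad u(t) \in \{0,1\},\quad X(0) = I,\quad A, B \in \mathfrak{sl}_2(\mathbb{R}),
    \]
    \[
      \lambda_{\max} \;=\; \sup_{u(\cdot)} \ \limsup_{t \to \infty}
      \frac{1}{t}\,\log\bigl\| X(t) \bigr\| .
    \]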
Abstract:
Final Master's project for obtaining the degree of Master in Mechanical Engineering.
Abstract:
Operational Modal Analysis is currently applied in structural dynamic monitoring studies using conventional wired sensors and data acquisition platforms. This approach, however, becomes inadequate in cases where the tests are performed on ancient structures with aesthetic concerns, or in others where the use of wires greatly impacts the monitoring system cost and creates difficulties in the maintenance and deployment of the data acquisition platforms. In these cases, the use of sensor platforms based on wireless communication and MEMS would clearly benefit these applications. This work presents a first attempt to apply this wireless technology to the structural monitoring of historical masonry constructions in the context of operational modal analysis. Commercial WSN platforms were used to study one laboratory specimen and one of the structural elements of a fifteenth-century building in Portugal. Results showed that, in comparison to conventional wired sensors, wireless platforms have poor performance with respect to the recorded acceleration time series and the detection of mode shapes. However, for frequency detection, reliable results were obtained, especially when random excitation was used as the noise source.
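A minimal sketch of the frequency-detection step in output-only (operational) modal analysis, assuming a single lightly damped mode excited by broadband ambient noise; the signal, sampling rate and the 3.2 Hz mode are synthetic assumptions, not data from the study.

    # Natural frequency identification by PSD peak picking on an ambient record.
    import numpy as np
    from scipy import signal

    fs = 100.0                                   # sampling rate [Hz] (assumed)
    t = np.arange(0, 300, 1 / fs)                # 5-minute record
    f_mode, zeta = 3.2, 0.02                     # assumed frequency and damping

    # Synthetic response: a resonant filter driven by white noise.
    white = np.random.default_rng(1).normal(size=t.size)
    b, a = signal.iirpeak(f_mode, Q=1 / (2 * zeta), fs=fs)
    accel = signal.lfilter(b, a, white)

    # Welch power spectral density and simple peak picking.
    freqs, psd = signal.welch(accel, fs=fs, nperseg=4096)
    peaks, _ = signal.find_peaks(psd, prominence=np.max(psd) * 0.1)
    print("identified frequencies [Hz]:", np.round(freqs[peaks], 2))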
Abstract:
Advances in the semiconductor industry enable microelectromechanical systems sensors, signal-conditioning logic and network access to be integrated into a smart sensor node. In this framework, a mixed-mode interface circuit for monolithically integrated gas sensor arrays was developed with high-level design techniques. This interface system includes analog electronics for the inspection of up to four sensor arrays and digital logic for smart control and data communication. Although different design methodologies were used in the conception of the complete circuit, high-level synthesis tools and methodologies were crucial in speeding up the whole design cycle, enhancing reusability for future applications and producing a flexible and robust component.
Abstract:
This project proposes a preliminary architectural design for a control and data processing center, also known as 'ground segment', for Earth observation satellites.
Abstract:
This thesis studies robustness against large-scale failures in communications networks. If failures are isolated, they usually go unnoticed by users thanks to recovery mechanisms. However, such mechanisms are not effective against large-scale multiple failures. Large-scale failures may cause huge economic loss. A key requirement towards devising mechanisms to lessen their impact is the ability to evaluate network robustness. This thesis focuses on multilayer networks featuring separated control and data planes. The majority of the existing measures of robustness are unable to capture the true service degradation in such a setting, because they rely on purely topological features. One of the major contributions of this thesis is a new measure of functional robustness. The failure dynamics is modeled from the perspective of epidemic spreading, for which a new epidemic model is proposed. Another contribution is a taxonomy of multiple, large-scale failures, adapted to the needs and usage of the field of networking.
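The abstract frames failure dynamics as epidemic spreading without giving the model's equations; as a generic illustration only (a plain SIS-style process on an arbitrary topology, not the new epidemic model proposed in the thesis):

    # Generic SIS-style failure spreading on a network. Topology, spread and
    # repair probabilities are assumptions made for illustration.
    import random
    import networkx as nx

    random.seed(0)
    G = nx.erdos_renyi_graph(n=200, p=0.04, seed=0)   # stand-in network topology
    BETA, MU, STEPS = 0.15, 0.05, 50                  # spread / repair probabilities

    failed = {random.choice(list(G.nodes))}           # initial failure trigger
    for _ in range(STEPS):
        newly_failed, repaired = set(), set()
        for node in failed:
            for nbr in G.neighbors(node):             # failure propagates to neighbours
                if nbr not in failed and random.random() < BETA:
                    newly_failed.add(nbr)
            if random.random() < MU:                  # recovery mechanism repairs node
                repaired.add(node)
        failed = (failed | newly_failed) - repaired

    print(f"fraction of failed nodes after {STEPS} steps: "
          f"{len(failed) / G.number_of_nodes():.2f}")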
Abstract:
We have combined several key sample preparation steps for the use of a liquid matrix system to provide high analytical sensitivity in automated ultraviolet matrix-assisted laser desorption/ionisation mass spectrometry (UV-MALDI-MS). This new sample preparation protocol employs a matrix mixture based on the glycerol matrix mixture described by Sze et al. The low-femtomole sensitivity achievable with this new preparation protocol enables proteomic analysis of protein digests comparable to solid-state matrix systems. For automated data acquisition and analysis, the MALDI performance of this liquid matrix surpasses the conventional solid-state MALDI matrices. Besides the inherent general advantages of liquid samples for automated sample preparation and data acquisition, the use of the presented liquid matrix significantly reduces the extent of unspecific ion signals in peptide mass fingerprints compared to typically used solid matrices, such as 2,5-dihydroxybenzoic acid (DHB) or alpha-cyano-4-hydroxycinnamic acid (CHCA). In particular, matrix and low-mass ion signals and ion signals resulting from cation adduct formation are dramatically reduced. Consequently, the confidence level of protein identification by peptide mass mapping of in-solution and in-gel digests is generally higher.