490 results for Workflow
Abstract:
This research aimed to study how traditional photochemical film laboratories are adapting to digital techniques, through an analysis of preservation policies, restoration, the costs of purchasing new equipment, and the migration to new media at the British Film Institute (BFI), the Svenska Filminstitutet (SFI), the Eye Filmmuseum, L'Immagine Ritrovata and ANIM - Cinemateca Portuguesa. To this end, the case study method was used: interviews were conducted with managers and technicians of the institutions mentioned above in order to answer the research questions: what are the impact and the implications of this adaptation? What results have been achieved with the new restoration and preservation equipment and methods? To answer these questions, the study was divided into two sections. The first part reports the interviews carried out at the SFI, the BFI, Eye and L'Immagine Ritrovata, conducted in order to obtain data from which a more complete and in-depth interview could be drafted. This more detailed interview was then conducted with a single institution, the Cinemateca Portuguesa. Thus, in the second part of the case study, interviews were conducted with technicians and managers of ANIM - Cinemateca Portuguesa and its partner laboratories. The analysis of the results covers the information gathered from all five institutions interviewed. It was observed that the adaptation to digital has indeed brought improvements in the preservation and recovery of photochemical material, but it has also raised some dilemmas in the laboratories. There is concern about the scarcity of material in the photochemical supply chain and about its hypothetical end in the near future, since there is not yet sufficient knowledge about the digital chain and about how these media behave over time. The interviewees nonetheless expressed a positive attitude towards digital technologies.
Abstract:
Sandy coasts are vital areas whose preservation and maintenance also involve economic and tourist interests. Moreover, these dynamic environments undergo erosion to different degrees depending on their specific characteristics. For this reason, defence interventions are commonly realised by combining engineering solutions with management policies that evaluate their effects over time. Monitoring activities are the fundamental instrument for gaining deep knowledge of the investigated phenomenon. Thanks to technological development, several options are available in terms of both geomatic surveying techniques and processing tools, allowing high performance and accuracy to be reached. Nevertheless, when the littoral is defined to include both the emerged and the submerged beach, several issues have to be considered. The geomatic surveys and all the following steps therefore need to be calibrated according to the individual application, with the reference system, accuracy and spatial resolution as primary aspects. This study evaluates the available geomatic techniques, processing approaches, and derived products, aiming to optimise the entire coastal monitoring workflow by adopting an accuracy-efficiency trade-off. The presented analyses highlight the balance point at which an increase in performance becomes added value for the obtained products while ensuring proper data management. This perspective can be a helpful instrument for properly planning monitoring activities according to the specific purposes of the analysis. Finally, the primary uses of the acquired and processed data in monitoring contexts are presented, also considering possible applications of numerical modelling as a supporting tool. Moreover, the theme of coastal monitoring has been addressed throughout this thesis from a practical point of view, linked to the activities performed by Arpae (the Regional Agency for Prevention, Environment and Energy of Emilia-Romagna). Indeed, the Adriatic coast of Emilia-Romagna, where sandy beaches particularly exposed to erosion are present, has been chosen as the case study for all the analyses and considerations.
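As a purely illustrative example of the kind of derived product such monitoring typically yields, the sketch below computes the net beach volume change between two co-registered DEMs of the same area; the synthetic grids, the 1 m resolution and the no-data convention are assumptions for the example and are not taken from the thesis.

```python
# Illustrative sketch (not from the thesis): beach volume change between two
# gridded, co-registered DEMs, a typical derived product in erosion monitoring.
import numpy as np

CELL_SIZE = 1.0   # assumed grid resolution in metres
NO_DATA = -9999.0

def volume_change(dem_t0, dem_t1, cell_size=CELL_SIZE):
    """Net volume change (m^3) between two co-registered DEMs; positive = accretion."""
    valid = (dem_t0 != NO_DATA) & (dem_t1 != NO_DATA)   # ignore no-data cells
    dz = np.where(valid, dem_t1 - dem_t0, 0.0)          # per-cell elevation difference
    return float(dz.sum() * cell_size ** 2)             # sum(dz) times cell area

# Synthetic example: a uniform 0.1 m lowering over a 3 x 3 m area -> about -0.9 m^3
dem_2022 = np.full((3, 3), 2.0)
dem_2023 = np.full((3, 3), 1.9)
print(volume_change(dem_2022, dem_2023))
```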
Abstract:
Legionella is a Gram-negative bacterium that represents a public health issue, with a heavy social and economic impact. Therefore, it is mandatory to provide a proper environmental surveillance and risk assessment plan to control Legionella in the water distribution systems of hospital and community buildings. The thesis combines several methodologies in a single workflow applied to the identification of non-pneumophila Legionella species (n-pL), starting from standard methods such as culture and gene sequencing (mip and rpoB) and moving on to innovative approaches such as the MALDI-TOF MS technique and whole genome sequencing (WGS). The results obtained were compared to identify the Legionella isolates and led to the identification of four presumptive novel Legionella species. One of these four new isolates was characterized and recognized at the taxonomic level under the name Legionella bononiensis (the 64th Legionella species). The workflow applied in this thesis helps to increase knowledge of environmental Legionella species, improving the description of the environment itself and of the events that promote the growth of Legionella in its ecological niche. The correct identification and characterization of the isolates make it possible to prevent their spread in man-made environments and to contain the occurrence of cases, clusters, or outbreaks. Therefore, the experimental work undertaken could support preventive measures during environmental and clinical surveillance, improving the study of species that are often underestimated or still unknown.
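The sequence-based identification step can be pictured with a minimal, purely illustrative sketch: the reference sequences, the 95% similarity threshold and the assumption of pre-aligned, equal-length mip fragments are all placeholders and do not reflect the thesis' actual pipeline, which relies on curated databases and proper alignment tools.

```python
# Illustrative sketch (assumptions, not the thesis pipeline): assign an isolate to the
# closest reference species by mip-gene sequence identity, flagging it as a presumptive
# novel species when it falls below an assumed 95% similarity cut-off.

REFERENCES = {                      # hypothetical, pre-aligned mip fragments
    "Legionella pneumophila": "ATGGCTGAAGTTAAGACTGGT",
    "Legionella anisa":       "ATGGCTGAGGTAAAGACCGGA",
}
NOVEL_THRESHOLD = 0.95              # assumed similarity cut-off

def identity(a, b):
    """Fraction of matching positions between two equal-length aligned sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def classify(isolate_seq):
    best_species, best_id = max(
        ((sp, identity(isolate_seq, ref)) for sp, ref in REFERENCES.items()),
        key=lambda t: t[1],
    )
    if best_id < NOVEL_THRESHOLD:
        return f"presumptive novel species (closest: {best_species}, {best_id:.2%})"
    return f"{best_species} ({best_id:.2%} identity)"

print(classify("ATGGCTGAAGTTAAGACTGGT"))   # -> Legionella pneumophila (100.00% identity)
```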
Abstract:
The dissertation addresses the still unsolved challenges of source-based digital 3D reconstruction, visualisation and documentation in the domains of archaeology, art history and architectural history. The emerging BIM methodology and the IFC data exchange format are changing how collaboration, visualisation and documentation take place in the planning, construction and facility management process. The introduction and development of the Semantic Web (Web 3.0), spreading the idea of structured, formalised and linked data, offer semantically enriched, human- and machine-readable data. In contrast to civil engineering and cultural heritage, object-oriented academic disciplines such as archaeology, art history and architectural history act as outside spectators. Since the 1990s, it has been argued that a 3D model is not likely to be considered a scientific reconstruction unless it is grounded in accurate documentation and visualisation. However, these standards are still missing, and the validation of the outcomes is not fulfilled. Meanwhile, the digital research data remain ephemeral and continue to fill the growing digital cemeteries. This study therefore focuses on the evaluation of source-based digital 3D reconstructions and, especially, on uncertainty assessment in the case of hypothetical reconstructions of destroyed or never-built artefacts according to scientific principles, making the models shareable and reusable by a potentially wide audience. The work initially focuses on terminology and on the definition of a workflow, especially as regards the classification and visualisation of uncertainty. The workflow is then applied to specific cases of 3D models uploaded to the DFG repository of the AI Mainz. In this way, the available methods for documenting, visualising and communicating uncertainty are analysed. In the end, this process leads to a validation or a correction of the workflow and the initial assumptions, but also (in dealing with different hypotheses) to a better definition of the levels of uncertainty.
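To make the idea of an uncertainty classification concrete, here is a minimal sketch; the four-level scale, the source categories, the colours and the element names are invented for illustration and are not the scale defined in the dissertation.

```python
# Illustrative sketch (assumed scale and colours): tag each element of a hypothetical
# reconstruction with an uncertainty level and map it to a colour for visualisation,
# so the classification can be shared and reused alongside the model.

UNCERTAINTY_SCALE = {            # assumed 1-4 ordinal scale
    1: ("documented by surviving fabric or measured survey", "#1a9850"),
    2: ("based on primary sources (plans, photographs)",     "#fee08b"),
    3: ("analogy with comparable buildings",                  "#f46d43"),
    4: ("conjecture without direct sources",                  "#d73027"),
}

elements = [                     # hypothetical model elements with their source basis
    {"id": "wall_north", "level": 1},
    {"id": "roof_truss", "level": 3},
    {"id": "portal_decoration", "level": 4},
]

for el in elements:
    label, colour = UNCERTAINTY_SCALE[el["level"]]
    print(f'{el["id"]}: level {el["level"]} ({label}) -> render colour {colour}')
```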
Abstract:
Spectral sensors are a broad class of devices that are extremely useful for detecting essential information about the environment and materials with a high degree of selectivity. Recently, they have reached high degrees of integration and low implementation costs, making them suitable for fast, small, and non-invasive monitoring systems. However, the useful information is hidden in the spectra and is difficult to decode, so mathematical algorithms are needed to infer the values of the variables of interest from the acquired data. Among the different families of predictive modeling, Principal Component Analysis and the techniques stemming from it can provide very good performance, as well as small computational and memory requirements. For these reasons, they allow the prediction to be implemented even in embedded and autonomous devices. In this thesis, I present four practical applications of these algorithms to the prediction of different variables: soil moisture, concrete moisture, freshness of anchovies/sardines, and gas concentration. In all of these cases the workflow is the same. First, an acquisition campaign is performed to collect both the spectra and the variables of interest from the samples. These data are then used as input for building the prediction models, solving both classification and regression problems. From these models, an array of calibration coefficients is derived and used to implement the prediction in an embedded system. The presented results show that this workflow was successfully applied to very different scientific fields, yielding autonomous and non-invasive devices able to predict the value of chosen physical parameters from new spectral acquisitions.
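A minimal sketch of this idea, with synthetic spectra standing in for the acquisition campaigns described above: a PCA model and a linear regression are fitted, then folded into a single vector of calibration coefficients so that the embedded device only needs one dot product and one addition per prediction. The data, dimensions and library choice (scikit-learn) are assumptions for illustration.

```python
# Sketch: PCA + linear regression on spectra, collapsed into one coefficient vector.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 128))            # 60 synthetic spectra, 128 wavelengths
y = 0.8 * X[:, 10] + 0.2 * X[:, 40] + rng.normal(scale=0.05, size=60)  # toy target

pca = PCA(n_components=5).fit(X)          # compress spectra to 5 scores
reg = LinearRegression().fit(pca.transform(X), y)

# Fold the PCA projection and the regression into one array of calibration
# coefficients: y_hat = w . x + b, the only operation needed on the device.
w = pca.components_.T @ reg.coef_
b = reg.intercept_ - pca.mean_ @ w

x_new = X[0]
print(np.dot(w, x_new) + b, reg.predict(pca.transform(X[:1]))[0])  # identical values
```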
Abstract:
This thesis reports three experimental studies that may contribute to understanding how the sources or types of dietary fibres (DFs) included in sow diets with a similar level of total DFs influence the composition of colostrum and milk and their related effects on offspring performance and gut microbiota. The first study showed that decreasing the level of hemicelluloses (HCs) in the sow's lactation diet increased the proportion of butyrate and the concentrations of volatile fatty acids (VFAs), copper and threonine in milk. Simultaneously, the post-weaning growth of low-birthweight piglets was improved, and diarrhoea occurrence was reduced during the second week post-weaning. The second study showed that the level of HCs in the diet of lactating sows affected their faecal microbiota, modified the VFA profile in the sows' faeces during lactation, and barely impacted the faecal microbiota of slow- and fast-growing piglets. The third study showed that replacing a source of soluble DFs with one of insoluble DFs in the sow's diet during late gestation and lactation reduced farrowing duration, increased total VFA and lactoferrin concentrations in colostrum, improved growth performance from birth to day 1 of lactation, during the post-weaning period and throughout the study, and reduced diarrhoea occurrence during the first week post-weaning. Finally, a fourth study proposed a workflow to analyse low-biomass samples from umbilical cord blood, aiming to investigate the existence of a pre-birth microbiota; no substantial findings confirmed this hypothesis. Overall, the results of these studies confirmed that, besides the level of DFs, the sources and types of DFs included in the sow's diet shape the sow's microbiota, influence the composition of colostrum and milk, and improve offspring performance, but with limited impacts on the microbiota of piglets.
Abstract:
The workflow was as follows. Preliminary phase: identification of 18 formalin-fixed paraffin-embedded (FFPE) samples from 9 patients (9 "matched" AK lesions and 9 SCC lesions). Working on the biopsy samples, we performed RNA extraction and analysis with droplet digital PCR (ddPCR), followed by data analysis. Second and final phase: evaluation of an additional 39 subjects (36 men and 3 women). Results: we evaluated and compared the following miRNAs: miR-320 (involved in apoptosis and cell proliferation control), miR-204 (involved in cell proliferation) and miR-16-5p (involved in apoptosis). Conclusion: our data suggest that there is no significant variation in the expression of the three tested microRNAs between adjacent AK lesions and squamous-cell carcinoma, although a relevant trend has been observed. Furthermore, by evaluating the miRNA expression trend between keratosis and carcinoma of the same patient, it is observed that there is no "uniform trend": for some samples the expression rises in the transition from AK to SCC, and vice versa.
Abstract:
A Digital Scholarly Edition is a conceptually and structurally sophisticated entity. Throughout the centuries, diverse methodologies have been employed to reconstruct a text transmitted through one or multiple sources, resulting in various edition types. With the advent of digital technology in philology, these practices have undergone a significant transformation, compelling scholars to reconsider their approach in light of the web. In the digital age, philologists are expected to possess (too) advanced technical skills to prepare interactive and enriched editions, even though, in most cases, only mechanical or documentary editions are published online. The Śivadharma Database is a web Content Management System (CMS) designed to facilitate the preparation, publication, and updating of Digital Scholarly Editions. It provides scholars with a user-friendly CRUD web application to reconstruct and annotate a text, with which they can prepare their textus together with additional components such as apparatus, notes, translations, citations, and parallels. This is made possible by an annotation system based on HTML and a graph data structure. This choice was made because the text entity is multidimensional and multifaceted, even though its sequential presentation constrains it. In particular, editions of the South Asian texts of the Śivadharma corpus, the case study of this research, contain a series of phenomena that are difficult to manage formally, such as overlapping hierarchies. Hence, it becomes necessary to establish the data structure best suited to represent this complexity. In the Śivadharma Database, the textus is an HTML file that can be readily displayed. Textual fragments, annotated via an interface that does not require philologists to write code and saved in the backend, form the atomic units of multiple relationships organised in a graph database. This approach enables the formal representation of complex and overlapping textual phenomena, allowing for good annotation expressiveness with minimal effort to learn the relevant technologies during the editing workflow.
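The following sketch illustrates, in a deliberately simplified form, why a graph of annotated fragments handles overlapping hierarchies more gracefully than nested markup; the field names, the example text and the annotation types are invented for illustration and are not the Śivadharma Database schema.

```python
# Illustrative sketch: a textus plus annotation nodes, each pointing to a character span.
textus = "śivadharmaśāstram prathamaḥ adhyāyaḥ"   # invented sample line

annotations = [   # hypothetical annotation nodes
    {"id": "app1",   "type": "apparatus",   "start": 0,  "end": 17, "body": "var. (ms. B)"},
    {"id": "trans1", "type": "translation", "start": 10, "end": 36, "body": "...the first chapter"},
]

# Graph relations: every annotation node is linked to the textus node it annotates.
edges = [(a["id"], "annotates", "textus") for a in annotations]

def annotations_at(pos):
    """All annotations covering a character position, regardless of overlap."""
    return [a["id"] for a in annotations if a["start"] <= pos < a["end"]]

print(edges)
print(annotations_at(12))   # -> ['app1', 'trans1']: two overlapping layers on the same span
```

Because spans are stored as data on graph nodes rather than as nested elements, the apparatus and translation layers in this toy example can overlap freely without producing a hierarchy conflict.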
Abstract:
Currently, making digital 3D models and replicas of cultural heritage assets plays an important role in preservation and in providing a highly detailed source for future research and intervention. This dissertation attempts to assess different methods for digitally surveying and making 3D replicas of cultural heritage assets at different scales. The methodologies vary in devices, software, workflow, and the amount of skill required. The three phases of the 3D modelling process are data acquisition, modelling, and model presentation. Each of these phases is divided into sub-sections, and there are several approaches, methods, devices, and software packages that may be employed; the selection should be based on the operation's goal, the available facilities, the scale and properties of the object or structure to be modelled, and the operators' expertise and experience. The key point to remember is that the 3D modelling operation should be suitably accurate, precise, and reliable; accordingly, many instructions and pieces of advice exist on how to perform 3D modelling effectively. This work compares and evaluates the various options for each phase in order to explain and demonstrate their differences, benefits, and drawbacks, and to serve as a simple guide for new and/or inexperienced users.
Abstract:
As part of their digital transformation, many organisations are adopting new technologies to support the development, deployment and management of their microservice-based architectures in cloud environments and across cloud providers. In this scenario, service and event meshes are emerging as dynamic, configurable infrastructure layers that facilitate complex interactions and the management of microservice-based applications and cloud services. The goal of this work is to analyse open-source mesh solutions (Istio, Linkerd, Apache EventMesh) from a performance point of view when they are used to manage communication between microservice-based workflow applications within a cloud environment. To this end, a system was built to deploy each of the components both in a single cluster and in a multi-cluster environment. Metric collection and summarisation were carried out with a custom system compatible with the Prometheus data format. The tests allowed us to evaluate the performance of each component together with its effectiveness. Overall, while the maturity of the tested service mesh implementations could be confirmed, the event mesh solution we used appeared to be a technology that is not yet mature, owing to numerous operational problems.
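As an illustration of what a collector compatible with the Prometheus text format has to do, the sketch below parses hypothetical request-duration histogram buckets and derives an approximate 95th-percentile latency per mesh; the metric and label names are assumptions and do not come from the thesis.

```python
# Illustrative sketch: parse Prometheus text-exposition histogram buckets and
# approximate a p95 latency per mesh under test (assumed metric/label names).
import re
from collections import defaultdict

SAMPLE = """
request_duration_seconds_bucket{mesh="istio",le="0.05"} 840
request_duration_seconds_bucket{mesh="istio",le="0.1"} 960
request_duration_seconds_bucket{mesh="istio",le="+Inf"} 1000
"""

LINE = re.compile(r'(\w+)\{([^}]*)\}\s+([\d.eE+-]+)')

def parse(text):
    buckets = defaultdict(list)                       # mesh -> [(upper bound, cumulative count)]
    for name, labels, value in LINE.findall(text):
        if not name.endswith("_bucket"):
            continue
        lbl = dict(kv.split("=") for kv in labels.replace('"', "").split(","))
        le = float("inf") if lbl["le"] == "+Inf" else float(lbl["le"])
        buckets[lbl["mesh"]].append((le, float(value)))
    return buckets

def approx_quantile(bucket_list, q=0.95):
    """Upper bound of the first bucket whose cumulative count reaches the quantile."""
    bucket_list = sorted(bucket_list)
    total = bucket_list[-1][1]
    for le, count in bucket_list:
        if count >= q * total:
            return le
    return float("inf")

for mesh, b in parse(SAMPLE).items():
    print(mesh, "approx p95:", approx_quantile(b))    # istio approx p95: 0.1
```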