749 results for Consortial Implementations


Relevance:

10.00%

Publisher:

Abstract:

Streamflow is considered a driver of inter- and intra-specific life-history differences among freshwater fish. Dams, and the related flow regulation, can therefore have deleterious impacts on their life cycles. The main objective of this study is to assess the effects of flow regulation on the growth and reproduction of a non-migratory fish species. Over one year, samples were collected from two populations of Iberian chub inhabiting rivers with non-regulated and regulated flow regimes. Flow regulation for water derivation promoted changes in the chub's condition, the duration of gonad maturation and spawning, fecundity and oocyte size. However, this non-migratory species was less responsive to streamflow regulation than a previously analysed migratory species. Findings from this study are important for understanding the changes imposed by regulated rivers on fish and can be used as guidelines for the implementation of flow requirements. ABSTRACT (Portuguese, translated): Streamflow is one of the factors governing the life cycles of freshwater fish species. Dams, and the associated flow regulation, can have impacts on the life cycles of these species. The aim of this study is to evaluate the effects of flow regulation on the growth and reproduction of a non-migratory fish species. The analysis of samples collected from populations of Northern Iberian chub from two rivers, one with regulated and one with non-regulated flow, identified significant impacts on body condition, gonad maturation and spawning, fecundity and oocyte size. This non-migratory species appears to be less responsive to flow artificialization than a previously analysed migratory species. These results help in understanding the changes imposed by flow regulation and can be used in river rehabilitation programmes.

Relevance:

10.00%

Publisher:

Abstract:

Since their emergence, locally resonant metamaterials have found several applications for the control of surface waves, from micrometer-sized electronic devices to meter-sized seismic barriers. The interaction between Rayleigh-type surface waves and resonant metamaterials has been investigated through the realization of locally resonant metasurfaces: thin elastic interfaces constituted by a cluster of resonant inclusions or oscillators embedded near the surface of an elastic waveguide. When such resonant metasurfaces are embedded in an elastic homogeneous half-space, they can filter out the propagation of Rayleigh waves, creating low-frequency bandgaps at selected frequencies. In the civil engineering context, heavy resonating masses are needed to extend the bandgap frequency width of locally resonant devices, a requirement that limits their practical implementations. In this dissertation, the wave attenuation capabilities of locally resonant metasurfaces are enriched by (i) proposing tunable metasurfaces that open large frequency bandgaps with small effective inertia, and (ii) developing an analytical framework for studying the propagation of Rayleigh waves in deep resonant waveguides. In more detail, inertially amplified resonators are exploited to design advanced metasurfaces with a prescribed static and a tunable dynamic response. The modular design of the tunable metasurfaces makes it possible to shift and enlarge low-frequency spectral bandgaps without modifying the total inertia of the metasurface. In addition, an original dispersion law is derived to study the dispersive properties of Rayleigh waves propagating in thick resonant layers made of sub-wavelength resonators. Accordingly, a deep resonant wave barrier of mechanical resonators embedded inside the soil is designed to impede the propagation of seismic surface waves. Numerical models are developed to confirm the analytical dispersion predictions for the tunable metasurface and the resonant layer. Finally, a medium-sized resonant wave barrier is designed according to the soil stratigraphy of a real geophysical scenario to attenuate ground-borne vibration.

Relevance:

10.00%

Publisher:

Abstract:

Big data are reshaping the way we interact with technology, thus fostering new applications to improve the safety assessment of foods. An extraordinary amount of information is analysed using machine learning approaches aimed at detecting the existence, or predicting the likelihood, of future risks. Food business operators have to share the results of these analyses when applying to place regulated products on the market, whereas agri-food safety agencies (including the European Food Safety Authority) are exploring new avenues to increase the accuracy of their evaluations by processing Big data. Such an informational endowment brings with it opportunities and risks correlated with the extraction of meaningful inferences from data. However, conflicting interests and tensions among the involved entities - the industry, food safety agencies, and consumers - hinder the finding of shared methods to steer the processing of Big data in a sound, transparent and trustworthy way. A recent reform of the EU sectoral legislation, the lack of trust, and the presence of a considerable number of stakeholders highlight the need for ethical contributions aimed at steering the development and deployment of Big data applications. Moreover, Artificial Intelligence guidelines and charters published by European Union institutions and Member States have to be discussed in light of applied contexts, including the one at stake here. This thesis aims to contribute to these goals by discussing which principles should be put forward when processing Big data in the context of agri-food safety risk assessment. The research focuses on two intertwined topics - data ownership and data governance - by evaluating how the regulatory framework addresses the challenges raised by Big data analysis in these domains. The outcome of the project is a tentative Roadmap that identifies the principles to be observed when processing Big data in this domain and their possible implementations.

Relevance:

10.00%

Publisher:

Abstract:

This dissertation addresses the main legal issues arising from the opening of insolvency proceedings, examining the questions of greatest legal and operational relevance for the maritime transport sector under the two regimes that govern cross-border insolvency at the supranational level, i.e. the one inspired by the UNCITRAL Model Law and EU Regulation 848/2015. The UNCITRAL and EU legal frameworks were therefore the starting point for scrutinizing the possible areas of conflict between maritime transport and insolvency proceedings: numerous areas of potential collision emerged, above all in relation to the connecting factors typical of navigation (first and foremost the flag as the distinctive element of a ship's nationality) and, consequently, to the identification of the centre of main interests of the debtor/shipowner, especially when it is organized, as frequently happens in the international arena, in the form of a shipping group. The second chapter is devoted, broadly speaking, to maritime securities and their relationship with insolvency proceedings, with specific reference to ship mortgages and maritime liens. In this regard, the main issues related to the enforcement of maritime securities are analysed, notably in relation to the arrest of ships under the 1952 Brussels Convention in the context of cross-border insolvency. The third and final chapter deals with limitation of liability as an institution typical of the sector, from the perspective of the possible interferences between the constitution of the funds provided for by the LLMC and CLC Conventions and any insolvency proceedings. The research has shown that the universality to which Regulation 848/2015 (formerly 1346/2000) and the UNCITRAL system aspire is undermined by the coexistence of a multiplicity of different interpretations and implementations, with the result that the cross-border insolvency of shipping companies is not regulated uniformly and comparable cases and situations may therefore be treated differently.

Relevance:

10.00%

Publisher:

Abstract:

The thesis describes the overall historical-literary, archaeological and digital background required to build a digital atlas of ancient Greece based on the collection and analysis of the data and information contained in Pausanias' Periegesis. Using GIS applications, in particular ArcGIS Online, it was possible to create a georeferenced database containing the information and descriptions provided by the text; each identification of a historical site was also compared with the current state of archaeological research, in order to produce an innovative tool both for historical-archaeological research and for the study and assessment of Pausanias' work. Specifically, the work constitutes a first example of a digital atlas based entirely on the interpretation of a classical text through the georeferencing of its contents. For each identified site, the corresponding passage of Pausanias was specified, directly linking the archaeological data with the literary source. To define an effective taxonomy for analysing the contents of the work, the elements described by Pausanias were assigned to seven layers within the map, corresponding to as many general categories (cities, extra-urban sanctuaries, monuments, sacred groves, localities, watercourses, and mountains). For each element, further information was then entered in a descriptive table: source, identification, period, and status of the identification.
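
Purely to illustrate the data model described above (the seven thematic layers plus the descriptive attributes per site), a minimal sketch of how one record could be stored as a georeferenced GeoJSON feature tied to its Pausanias passage; every name, coordinate and attribute below is hypothetical and does not come from the actual atlas.

    import json

    # The seven thematic layers listed in the abstract.
    LAYERS = ["city", "extra-urban sanctuary", "monument", "sacred grove",
              "locality", "watercourse", "mountain"]

    def make_feature(name, lon, lat, layer, pausanias_ref, identification,
                     period, identification_status):
        """Build a GeoJSON feature linking a site to its passage in Pausanias."""
        assert layer in LAYERS
        return {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {
                "name": name,
                "layer": layer,
                "source": pausanias_ref,          # e.g. book.chapter.section
                "identification": identification,
                "period": period,
                "identification_status": identification_status,
            },
        }

    # Hypothetical example entry.
    feature = make_feature("Sanctuary of Zeus", 21.63, 37.64,
                           "extra-urban sanctuary", "Paus. 5.10.1",
                           "Olympia", "Classical", "certain")
    print(json.dumps(feature, indent=2))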

Relevance:

10.00%

Publisher:

Abstract:

The continuous and swift progression of both wireless and wired communication technologies in today's world owes its success to the foundational systems established earlier. These systems serve as the building blocks that enable services to be enhanced to meet evolving requirements. Studying the vulnerabilities of previously designed systems and their current usage leads to the development of new communication technologies that replace the old ones, such as GSM-R in the railway field. Current industrial research focuses specifically on finding an appropriate telecommunication solution for railway communications to replace the GSM-R standard, which will be switched off in the coming years. Various standardization organizations are currently exploring and designing a standard solution based on radio-frequency technology to serve railway communications, in the form of FRMCS (Future Railway Mobile Communication System), as a substitute for the current GSM-R. Against this background, the primary strategic objective of the research is to assess the feasibility of leveraging current public network technologies such as LTE to cater for mission- and safety-critical communication on low-density lines. The research aims to identify the constraints, define a service level agreement with telecom operators, and establish the implementations necessary to make the system as reliable as possible over an open, public network, while considering safety and cybersecurity aspects. The LTE infrastructure would be used to transmit the vital data for the communication of a railway system and to gather and transmit all field measurements to the control room for maintenance purposes. Given the significance of maintenance activities in the railway sector, the ongoing research includes the implementation of a machine learning algorithm to detect railway equipment faults, reducing the time and human analysis errors caused by the large volume of measurements from the field.
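
As a hedged illustration of the fault-detection idea mentioned above (the abstract does not name the algorithm used in the thesis), a minimal anomaly-detection sketch over hypothetical field measurements using scikit-learn's IsolationForest; all data, features and thresholds are invented.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical feature vectors extracted from field measurements
    # (e.g. voltages, currents, temperatures of trackside equipment).
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # healthy readings
    faulty = rng.normal(loc=4.0, scale=1.0, size=(10, 4))    # drifted readings

    # Train only on measurements assumed healthy, then flag outliers.
    detector = IsolationForest(contamination=0.02, random_state=0)
    detector.fit(normal)

    predictions = detector.predict(faulty)   # -1 = anomaly, 1 = normal
    print("flagged as faulty:", int((predictions == -1).sum()), "of", len(faulty))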

Relevance:

10.00%

Publisher:

Abstract:

The pervasive availability of connected devices in any industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By totally or partially executing closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. That risks undermining the principle of generality that underlies the cloud computing economy of scale, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of the possible implementations of this architecture by proposing two reference middleware systems and by applying them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable to enable the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
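
A rough sketch, with entirely hypothetical names, of what a backend-agnostic I/O API of the kind described above could look like: applications program against one interface, while concrete backends (sockets, RDMA, DPDK, ...) provide the implementation. This is not the thesis's actual API; the loopback backend exists only to make the sketch runnable.

    from abc import ABC, abstractmethod

    class TransportBackend(ABC):
        """Backend-agnostic I/O interface: applications code against this,
        while concrete backends (sockets, RDMA, DPDK, ...) implement it."""

        @abstractmethod
        def alloc_buffer(self, size: int) -> bytearray:
            """Return a buffer suitable for zero-copy transfers on this backend."""

        @abstractmethod
        def send(self, buf: bytearray, length: int) -> None: ...

        @abstractmethod
        def recv(self) -> bytes: ...

    class LoopbackBackend(TransportBackend):
        """Trivial in-memory backend, used here only to exercise the interface."""
        def __init__(self):
            self._queue = []
        def alloc_buffer(self, size):
            return bytearray(size)
        def send(self, buf, length):
            self._queue.append(bytes(buf[:length]))
        def recv(self):
            return self._queue.pop(0)

    backend: TransportBackend = LoopbackBackend()
    buf = backend.alloc_buffer(64)
    buf[:5] = b"hello"
    backend.send(buf, 5)
    print(backend.recv())  # b'hello'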

Relevance:

10.00%

Publisher:

Abstract:

Spiking Neural Networks (SNNs) are bio-inspired Artificial Neural Networks (ANNs) that use discrete spiking signals, akin to neuron communication in the brain, making them ideal for real-time and energy-efficient Cyber-Physical Systems (CPSs). This thesis explores their potential in Structural Health Monitoring (SHM), leveraging low-cost MEMS accelerometers for early damage detection in motorway bridges. The study focuses on Long Short-Term SNNs (LSNNs), although their complex learning processes pose challenges. When LSNNs are compared with other ANN models and training algorithms for SHM, the findings indicate that LSNNs are effective in damage identification, with performance comparable to ANNs trained using traditional methods. Additionally, an optimized embedded LSNN implementation demonstrates a 54% reduction in execution time, but with longer pre-processing due to spike-based encoding. Furthermore, SNNs are applied to UAV obstacle avoidance, trained directly with a Reinforcement Learning (RL) algorithm on event-based input from a Dynamic Vision Sensor (DVS). Performance evaluation against Convolutional Neural Networks (CNNs) highlights the superior energy efficiency of SNNs, showing a 6x decrease in energy consumption. The study also investigates the latency and throughput of embedded SNN implementations in real-world deployments, emphasizing their potential for energy-efficient monitoring systems. This research contributes to advancing SHM and UAV obstacle avoidance through SNNs' efficient information processing and decision-making capabilities within CPS domains.
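
To give a concrete, if simplified, picture of the spike-based encoding mentioned above (the exact scheme used in the thesis is not stated in this abstract), a minimal delta-threshold encoder for an accelerometer trace; the signal and threshold are invented.

    import numpy as np

    def delta_spike_encode(signal, threshold=0.05):
        """Encode an analog signal as ON/OFF spike trains: emit a spike whenever
        the signal has moved up or down by more than `threshold` since the last
        spike (a common event-driven encoding; the thesis's scheme may differ)."""
        on = np.zeros(len(signal), dtype=np.int8)
        off = np.zeros(len(signal), dtype=np.int8)
        ref = signal[0]
        for i, x in enumerate(signal):
            if x - ref > threshold:
                on[i], ref = 1, x
            elif ref - x > threshold:
                off[i], ref = 1, x
        return on, off

    # Hypothetical accelerometer trace: a slow oscillation plus noise.
    t = np.linspace(0, 1, 1000)
    accel = 0.2 * np.sin(2 * np.pi * 3 * t) + 0.01 * np.random.randn(1000)
    on, off = delta_spike_encode(accel)
    print("ON spikes:", on.sum(), "OFF spikes:", off.sum())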

Relevance:

10.00%

Publisher:

Abstract:

This exploratory research project developed a cognitive situated approach to studying aspects of simultaneous interpreting with quantitative, confirmatory methods. To do so, it explored how to determine the potential benefits of using a computer-assisted interpreting tool, InterpretBank, among 22 Chinese interpreting trainees with Chinese L1 and English L2. The informants were mostly second-year female students, with an average age of 24.7, enrolled in Chinese MA interpreting programs. The study adopted a pretest-posttest design with three cycles. The independent variable was the use of Excel or InterpretBank. After Cycle I (pre-test), the sample was split into a control (Excel) group and an experimental (InterpretBank) group. Tool choice was compulsory in Cycle II but not in Cycle III. The source materials for each cycle were pairs of matching transcripts from popular science podcasts. Informants compiled glossaries from one transcript, while the other was edited for simultaneous interpreting, with 39 terms as potential problem triggers. Quantitative profiling showed that InterpretBank informants spent less time on glossary compilation and generated more terms, faster, than Excel informants, but their glossaries were less diverse (personal) and longer. The booth tasks yielded no significant differences in fluency indicators except for more bumps (200-600 ms silent time gaps) for InterpretBank in Cycle II. InterpretBank informants had more correct renditions in Cycles II and III, but there was no statistically significant difference among accuracy indicators per cycle. Holistic quality assessments by PhD raters showed InterpretBank consistently outperforming Excel, suggesting a positive impact of InterpretBank on SI quality. However, some InterpretBank implementations raised cognitive ergonomic concerns when working with Chinese, potentially undermining the tool's utility. Overall, results were mixed regarding the benefits of InterpretBank for Chinese trainees, but the project was successful in developing methods, constructs and indicators for cognitive situated interpreting studies.
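
A small sketch of how the "bumps" fluency indicator defined above (silent gaps of 200-600 ms) could be counted, assuming word-level start/end timestamps are available; the timing data shown is hypothetical and not from the study.

    def count_bumps(word_intervals, lo=0.2, hi=0.6):
        """Count 'bumps': silent gaps between consecutive words whose duration
        falls in [lo, hi) seconds (200-600 ms, as defined in the study).
        `word_intervals` is a sorted list of (start, end) times in seconds."""
        bumps = 0
        for (_, prev_end), (next_start, _) in zip(word_intervals, word_intervals[1:]):
            gap = next_start - prev_end
            if lo <= gap < hi:
                bumps += 1
        return bumps

    # Hypothetical interpreted-speech timing (seconds).
    intervals = [(0.00, 0.40), (0.45, 0.90), (1.25, 1.70), (2.50, 3.00)]
    print(count_bumps(intervals))  # the 0.35 s gap counts, the 0.80 s gap does not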

Relevance:

10.00%

Publisher:

Abstract:

A global Italian pharmaceutical company has to provide two work environments serving different needs. The environments will allow solutions to be developed in a controlled, secure and at the same time independent manner on a state-of-the-art enterprise cloud platform. The need for two different environments is dictated by the needs of the working units. The first environment is designed to facilitate the creation of applications related to genomics and is therefore aimed mainly at data scientists. This environment is capable of consuming, producing, retrieving and incorporating data, and will support the programming languages most used for genomic applications (e.g., Python, R). The proposal was to provide a pool of ready-to-go virtual machines with different architectures, so as to offer the best performance for the job that needs to be carried out. The second environment has a more traditional character: to obtain, via an ETL (Extract-Transform-Load) process, a global data model resembling a classical relational structure. It will provide the main BI operations (e.g., analytics, performance measures, reports) that can be leveraged both for application analysis and for internal usage. Since both architectures will hold large amounts of data concerning not only pharmaceutical information but also internal company information, it will be possible to digest the data with reporting/analytics tools and to apply data-mining and machine-learning technologies to exploit the intrinsic information. The thesis introduces the proposals, implementations, descriptions of the technologies/platforms used, and future work for the environments discussed above.
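
As a purely illustrative sketch of the ETL flow described above, a minimal Extract-Transform-Load pipeline into a relational (SQLite) table; the file name, columns and data model are hypothetical and unrelated to the company's actual platform.

    import csv
    import sqlite3

    # Extract: read raw records from a hypothetical export file.
    def extract(path):
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    # Transform: normalize field names and types for the relational model.
    def transform(rows):
        for row in rows:
            yield (row["batch_id"].strip(),
                   row["product"].strip().upper(),
                   float(row["yield_kg"]))

    # Load: insert into a table of the global data model.
    def load(records, db_path="warehouse.db"):
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS production "
                    "(batch_id TEXT, product TEXT, yield_kg REAL)")
        con.executemany("INSERT INTO production VALUES (?, ?, ?)", records)
        con.commit()
        con.close()

    # Example run (assuming the hypothetical export file exists):
    # load(transform(extract("production_export.csv")))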

Relevance:

10.00%

Publisher:

Abstract:

The use of version control systems and the possibility of storing source code on public platforms such as GitHub have increased the number of passwords, API keys and tokens that can be found and misused, causing a massive security issue for people and companies. This project presents SAP's secret scanner, Credential Digger: how it scans repositories to detect hardcoded secrets and how it filters out the false positives among them. It then describes my implementation of Credential Digger's pre-commit hook, with a performance comparison between three different implementations of the hook based on how each interacts with the Machine Learning model. The project also shows how already detected credentials can be used to decrease the number of false positives, by leveraging the similarity between leaks through the Bucket System.
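
For illustration only, a generic pre-commit hook of the kind discussed above: it scans the staged diff and aborts the commit when an added line matches simple secret patterns. Credential Digger's real hook uses its own scanner and ML filtering; the patterns and logic below are placeholders, not its API.

    #!/usr/bin/env python3
    """Generic git pre-commit hook sketch: block the commit if staged lines
    look like hardcoded secrets. `looks_like_secret` is only a placeholder."""
    import re
    import subprocess
    import sys

    SECRET_PATTERNS = [
        re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*['\"][^'\"]{8,}"),
        re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key id shape
    ]

    def staged_diff():
        out = subprocess.run(["git", "diff", "--cached", "--unified=0"],
                             capture_output=True, text=True, check=True)
        return out.stdout.splitlines()

    def looks_like_secret(line):
        return line.startswith("+") and any(p.search(line) for p in SECRET_PATTERNS)

    hits = [l for l in staged_diff() if looks_like_secret(l)]
    if hits:
        print("Possible hardcoded secrets found, aborting commit:")
        for h in hits:
            print(" ", h[:80])
        sys.exit(1)   # non-zero exit makes git abort the commit
    sys.exit(0)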

Relevance:

10.00%

Publisher:

Abstract:

Wound management is a fundamental task in standard clinical practice. Automated solutions already exist for humans, but applications for wound management in pets are lacking. Precise and efficient wound assessment helps to improve diagnosis and to increase the effectiveness of treatment plans for chronic wounds. The goal of the research was to propose an automated pipeline capable of segmenting natural-light-reflected wound images of animals. Two datasets of light-reflected images were used in this work: the Deepskin dataset, 1564 human wound images obtained during routine dermatological exams, of which 145 are manually annotated; and the PetWound dataset, a set of 290 wound photos of dogs and cats with no annotated images. Two implementations of the U-Net Convolutional Neural Network model were proposed for the automated segmentation. Active Semi-Supervised Learning techniques were applied to the human-wound images to perform segmentation starting from 10% of annotated images. The same models were then trained, via Transfer Learning, adopting Active Semi-Supervised Learning on the unlabelled animal-wound images. The combination of the two training strategies proved effective in generating large amounts of annotated samples (94% of Deepskin, 80% of PetWound) with minimal human intervention. The correctness of the automated segmentation was evaluated by clinical experts at each round of training, so the results obtained in this thesis stand as a reliable solution for correct wound image segmentation. The use of Transfer Learning and Active Semi-Supervised Learning minimizes the labelling effort required from clinicians, even requiring no initial manual annotation at all. Moreover, the performance of the model with a limited number of parameters suggests a smartphone-based application of this approach, helping the future standardization of light-reflected images as acknowledged medical images.
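
A hedged sketch of the pseudo-labelling step of one Active Semi-Supervised round as described above: the current model predicts masks for unlabelled images, confident predictions become pseudo-labels, and uncertain ones are sent to clinicians. The model, threshold and data below are stand-ins, not the thesis's U-Net pipeline.

    import torch

    def pseudo_label_round(model, unlabeled, conf_threshold=0.9):
        """One Active Semi-Supervised round (sketch): predict masks for
        unlabeled images; an image is 'confident' when its foreground
        probabilities are, on average, far from 0.5. Confident images become
        pseudo-labels, uncertain ones go to clinicians for annotation."""
        model.eval()
        pseudo_labeled, needs_review = [], []
        with torch.no_grad():
            for img in unlabeled:                                  # (3, H, W)
                prob = torch.sigmoid(model(img.unsqueeze(0)))[0, 0]  # (H, W)
                confidence = (prob - 0.5).abs().mean() * 2           # 0..1
                if confidence >= conf_threshold:
                    pseudo_labeled.append((img, (prob > 0.5).float()))
                else:
                    needs_review.append(img)
        return pseudo_labeled, needs_review

    # Placeholder "segmentation model" (the thesis uses a U-Net; this single
    # conv layer, 3 -> 1 channel, is only a stand-in so the sketch runs).
    model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
    images = [torch.rand(3, 64, 64) for _ in range(4)]
    auto, manual = pseudo_label_round(model, images)
    print(len(auto), "pseudo-labeled,", len(manual), "sent for review")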

Relevance:

10.00%

Publisher:

Abstract:

Planning is an important sub-field of artificial intelligence (AI) focusing on letting intelligent agents deliberate on the most adequate course of action to attain their goals. Thanks to the recent boost in the number of critical domains and systems which exploit planning for their internal procedures, there is an increasing need for planning systems to become more transparent and trustworthy. Along this line, planning systems are now required to produce not only plans but also explanations about those plans, or the way they were attained. To address this issue, a new research area is emerging in the AI panorama: eXplainable AI (XAI), within which explainable planning (XAIP) is a pivotal sub-field. As a recent domain, XAIP is far from mature. No consensus has been reached in the literature about what explanations are, how they should be computed, and what they should explain in the first place. Furthermore, existing contributions are mostly theoretical, and software implementations are rarely more than preliminary. To overcome such issues, in this thesis we design an explainable planning framework bridging the gap between theoretical contributions from the literature and software implementations. More precisely, taking inspiration from the state of the art, we develop a formal model for XAIP, and the software tool enabling its practical exploitation. Accordingly, the contribution of this thesis is fourfold. First, we review the state of the art of XAIP, supplying an outline of its most significant contributions from the literature. We then generalise the aforementioned contributions into a unified model for XAIP, aimed at supporting model-based contrastive explanations. Next, we design and implement an algorithm-agnostic library for XAIP based on our model. Finally, we validate our library from a technological perspective, via an extensive testing suite. Furthermore, we assess its performance and usability through a set of benchmarks and end-to-end examples.
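
As a minimal, non-authoritative sketch of the model-based contrastive-explanation recipe mentioned above ("why this plan rather than one satisfying the foil?"): re-plan with the foil imposed as constraints and compare the two plans. The planner and cost function here are toy stand-ins, not the library developed in the thesis.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Explanation:
        original_plan: List[str]
        constrained_plan: List[str]
        cost_difference: float

    def contrastive_explain(plan_fn: Callable[[set], List[str]],
                            cost_fn: Callable[[List[str]], float],
                            foil_constraints: set) -> Explanation:
        """'Why this plan rather than one satisfying the foil?' (sketch):
        re-plan with the foil imposed as extra constraints and report how
        much worse (or better) the alternative plan is."""
        original = plan_fn(set())
        alternative = plan_fn(foil_constraints)
        return Explanation(original, alternative,
                           cost_fn(alternative) - cost_fn(original))

    # Toy stand-ins: the 'planner' just picks a route, cost = number of actions.
    def toy_planner(constraints):
        if "avoid:highway" in constraints:
            return ["drive-local-roads", "drive-local-roads", "park"]
        return ["drive-highway", "park"]

    exp = contrastive_explain(toy_planner, len, {"avoid:highway"})
    print("extra cost of the foil:", exp.cost_difference)   # 1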

Relevance:

10.00%

Publisher:

Abstract:

The BP (Bundle Protocol) version 7 has recently been standardized by the IETF in RFC 9171, but it is the whole DTN (Delay-/Disruption-Tolerant Networking) architecture, of which BP is the core, that is gaining renewed interest, thanks to its planned adoption in future space missions. This is obviously positive, but at the same time it seems to make space agencies more interested in deployment than in research, with new BP implementations that may challenge the central role played until now by the historical BP reference implementations, such as ION and DTNME. To make Unibo research on DTN independent of space agency decisions, the development of an internal BP implementation was in order. This is the goal of this thesis, which deals with the design and implementation of Unibo-BP: a novel, research-driven BP implementation, to be released as Free Software. Unibo-BP is fully compliant with RFC 9171, as demonstrated by a series of interoperability tests with ION and DTNME, and presents a few innovations, such as the ability to manage remote DTN nodes by means of the BP itself. Unibo-BP is compatible with pre-existing Unibo implementations of CGR (Contact Graph Routing) and LTP (Licklider Transmission Protocol) thanks to interfaces designed during the thesis. The thesis project also includes an implementation of TCPCLv3 (TCP Convergence Layer version 3, RFC 7242), which can be used as an alternative to LTPCL to connect with proximate nodes, especially in terrestrial networks. In summary, Unibo-BP is at the heart of a larger project, Unibo-DTN, which aims to implement the main components of a complete DTN stack (BP, TCPCL, LTP, CGR). Moreover, Unibo-BP is compatible with all DTNsuite applications, thanks to an extension of the Unified API library on which DTNsuite applications are based. The hope is that Unibo-BP and all the ancillary programs developed during this thesis will contribute to the growth of DTN popularity in academia and among space agencies.
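
For orientation only, a sketch of the BPv7 primary block fields listed in RFC 9171 as a plain data structure; this is not Unibo-BP's internal representation, and the CBOR encoding mandated by the RFC is not shown.

    from dataclasses import dataclass

    @dataclass
    class PrimaryBlock:
        """Fields of the BPv7 primary block as defined in RFC 9171 (sketch only)."""
        version: int                 # always 7 for BPv7
        bundle_flags: int            # bundle processing control flags
        crc_type: int                # 0 = none, 1 = CRC-16, 2 = CRC-32C
        destination_eid: str         # e.g. "ipn:2.1" or "dtn://node/app"
        source_node_eid: str
        report_to_eid: str
        creation_time_ms: int        # DTN time at bundle creation
        sequence_number: int         # disambiguates bundles created in the same ms
        lifetime_ms: int             # after which the bundle may be deleted

    # Hypothetical bundle header: node ipn:1.1 sends to ipn:2.1, 24 h lifetime.
    bundle = PrimaryBlock(7, 0, 2, "ipn:2.1", "ipn:1.1", "ipn:1.0",
                          creation_time_ms=0, sequence_number=42,
                          lifetime_ms=86_400_000)
    print(bundle.destination_eid, bundle.lifetime_ms)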