920 results for Heterogeneous platforms
Abstract:
Advanced cell cultures are developing rapidly in biomedical research. Various approaches and technologies are currently in use; however, these culturing systems suffer from increasing complexity, high costs, and limited customization. We present two versatile and cost-effective methods for developing culturing systems that integrate 3D cell culture and microfluidic platforms. First, drug screening applications require many high-quality cell spheres of homogeneous size and shape. Conventional approaches usually offer little control over the size and geometry of cell spheres and require sample collection and manipulation. To overcome these difficulties, in this study, hundreds of spheroids of several cell lines were generated using multi-well plates that housed our microdevices. Tumor spheroids grow at a uniform rate (in scaffolded or scaffold-free environments) and can be harvested at will. Microscopy imaging can be performed in real time during or after culture. After in situ immunostaining, fluorescence imaging can be conducted while preserving the spatial distribution of spheroids in the microwells. Drug effects were successfully observed through viability, growth, and morphology investigations. We also fabricated a microfluidic device suitable for directed and selective cell culture treatments. The microfluidic device was used to reproduce and confirm in vitro investigations carried out with standard culture methods on a microglia cell line. The device layout and the syringe pump system, entirely designed in our lab, successfully allowed culture growth and medium flow regulation. Solution flows can be finely controlled, allowing selective treatments and immunofluorescence within a single chamber. In conclusion, we propose two culturing platforms (microstructured well devices and an in-flow microfluidic chip), which are the result of separate scientific investigations but share the primary goal of performing treatments in a reproducible manner. Our devices should improve future studies on drug exposure testing, representing adjustable and versatile cell culture systems.
Abstract:
Embedding intelligence in extreme edge devices allows raw sensor data to be distilled into actionable information directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and drives a large research area (TinyML) aimed at deploying leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data in byte and sub-byte integer formats, yielding Quantized Neural Networks (QNNs). However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels and exploiting parallelism, heterogeneity, and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 microcontroller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic. The solution, including the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference of SoA MobileNetV2 models, with two orders of magnitude performance improvement over current SoA analog/digital solutions.
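To make the sub-byte compression concrete, the sketch below quantizes floating-point weights to signed 4-bit integers and packs two of them per byte, halving storage relative to int8. The symmetric scheme and the function names are illustrative assumptions, not PULP-NN's actual API.

```python
# Minimal sketch of sub-byte quantization and packing, assuming a symmetric
# int4 scheme; names and parameters are illustrative, not PULP-NN's API.
import numpy as np

def quantize_int4(w: np.ndarray):
    """Map float weights onto the signed 4-bit range [-8, 7]."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def pack_int4(q: np.ndarray) -> np.ndarray:
    """Pack pairs of int4 values into single bytes to halve storage."""
    nibbles = q.astype(np.uint8) & 0x0F        # two's-complement nibbles
    return (nibbles[0::2] | (nibbles[1::2] << 4)).astype(np.uint8)

w = np.random.randn(8).astype(np.float32)
q, scale = quantize_int4(w)
packed = pack_int4(q)                          # 8 weights -> 4 bytes
print(q, scale, packed)
```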
Abstract:
In recent years, IoT technology has radically transformed many crucial industrial and service sectors, such as healthcare. The multi-faceted heterogeneity of the devices and of the collected information provides important opportunities to develop innovative systems and services. However, the ubiquitous presence of data silos and the poor semantic interoperability in the IoT landscape constitute a significant obstacle to this goal. Moreover, deriving actionable knowledge from the collected data requires IoT information sources to be analysed with appropriate artificial intelligence techniques, such as automated reasoning. In this thesis work, Semantic Web technologies have been investigated as an approach to address both the data integration and the reasoning aspects of modern IoT systems. In particular, the contributions presented in this thesis are the following: (1) the IoT Fitness Ontology, an OWL ontology developed to overcome the issue of data silos and enable semantic interoperability in the IoT fitness domain; (2) a Linked Open Data web portal for collecting and sharing IoT health datasets with the research community; (3) a novel methodology for embedding knowledge in rule-defined IoT smart home scenarios; and (4) a knowledge-based IoT home automation system that supports seamless integration of heterogeneous devices and data sources.
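As a flavor of the approach, the sketch below annotates an IoT fitness reading as RDF triples with rdflib. The `IFO` namespace and the class and property names are hypothetical stand-ins; the actual IoT Fitness Ontology vocabulary is not reproduced here.

```python
# Hedged sketch of semantic annotation with rdflib; the IFO namespace and
# term names are hypothetical, not the real IoT Fitness Ontology.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

IFO = Namespace("http://example.org/iot-fitness#")   # placeholder IRI

g = Graph()
g.bind("ifo", IFO)
g.add((IFO.session42, RDF.type, IFO.WorkoutSession))
g.add((IFO.session42, IFO.hasHeartRate, Literal(128, datatype=XSD.integer)))
g.add((IFO.session42, IFO.recordedBy, IFO.wristband7))

print(g.serialize(format="turtle"))   # interoperable, queryable output
```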
Abstract:
The application of modern ICT technologies is radically changing many fields, pushing toward more open and dynamic value chains that foster the cooperation and integration of many connected partners, sensors, and devices. As a notable example, the emerging Smart Tourism field derives from the application of ICT to tourism to create richer and more integrated experiences, making them more accessible and sustainable. From a technological viewpoint, a recurring challenge in these decentralized environments is the integration of heterogeneous services and data spanning multiple administrative domains, each possibly applying different security/privacy policies, device and process control mechanisms, service access and provisioning schemes, etc. The distribution and heterogeneity of these sources exacerbate the complexity of developing integration solutions, with consequently high effort and costs for the partners seeking them. Taking a step towards addressing these issues, we propose APERTO, a decentralized and distributed architecture that aims at facilitating the blending of data and services. At its core, APERTO relies on APERTO FaaS, a Serverless platform allowing fast prototyping of business logic, lowering the barrier of entry and development costs for newcomers, fine-grained (down-to-zero) scaling of the resources serving end-users, and reduced management overhead. The APERTO FaaS infrastructure is based on asynchronous and transparent communications between the components of the architecture, allowing the development of optimized solutions that exploit the peculiarities of distributed and heterogeneous environments. In particular, APERTO addresses the provisioning of scalable and cost-efficient mechanisms targeting: i) function composition, allowing the definition of complex workloads from simple, ready-to-use functions (see the sketch below), enabling smarter management of complex tasks and improved multiplexing capabilities; ii) the creation of end-to-end differentiated QoS slices minimizing interference among applications/services running on a shared infrastructure; iii) an abstraction providing uniform and optimized access to heterogeneous data sources; and iv) a decentralized approach to the verification of access rights to resources.
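A minimal sketch of the function-composition idea, assuming nothing about APERTO's real API: simple single-purpose functions are chained into one deployable workload. The helper and function names are illustrative only.

```python
# Hedged sketch of FaaS-style function composition: a workflow built from
# simple, ready-to-use functions. Names are illustrative, not APERTO's API.
from functools import reduce
from typing import Callable

def compose(*stages: Callable) -> Callable:
    """Chain single-argument functions into one workload, left to right."""
    return lambda event: reduce(lambda acc, fn: fn(acc), stages, event)

def parse(event: dict) -> dict:
    return {**event, "temp_c": float(event["raw"])}

def enrich(event: dict) -> dict:
    return {**event, "alert": event["temp_c"] > 30.0}

pipeline = compose(parse, enrich)      # a "complex workload" from two steps
print(pipeline({"sensor": "room-1", "raw": "31.5"}))
```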
Abstract:
The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware supports are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially better suited to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, exhibit a wide range of heterogeneous QoS requirements and call for QoS management systems that guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT); ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard; iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-Low Latency (ULL) constraints in virtual and 5G environments; and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middleware systems: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts, and ii) a QoS-aware middleware for Serverless platforms that coordinates various QoS mechanisms and the virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
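One classical building block behind the kind of rate-based QoS control discussed above is the token bucket. The sketch below is purely illustrative and bears no relation to the dissertation's actual TSN or overlay-network implementation.

```python
# Illustrative token-bucket rate limiter, a textbook QoS mechanism; not the
# dissertation's TSN machinery. Parameters are assumptions.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0        # refill rate, bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                      # packet must be queued or dropped

bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=1500)
print(bucket.allow(1200), bucket.allow(1200))   # True, then False
```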
Abstract:
Proteins, the most essential biological macromolecules, are involved in nearly every aspect of life. The elucidation of their three-dimensional structures through X-ray analysis has significantly contributed to our understanding of fundamental mechanisms in life processes. However, obtaining high-resolution protein crystals remains a significant obstacle. Thus, the search for materials that can effectively induce crystal nucleation is a promising and active field. This thesis work prepares and characterizes albumin nanoparticles as heterogeneous nucleants for protein crystallization. These stable Bovine Serum Albumin nanoparticles (BSA-NPs) were synthesized via the desolvation method, purified efficiently, and characterized in terms of dimension, morphology, and secondary structure. The ability of BSA-NPs to induce macromolecule nucleation was tested on three model proteins, with significant results: larger NPs induced more nucleation. The second part of this work focuses on the structural study, mainly through X-ray crystallography, of five chloroplast and cytosolic enzymes involved in fundamental cellular processes of two photosynthetic organisms, Chlamydomonas reinhardtii and Arabidopsis thaliana. The structures of three enzymes involved in the Calvin-Benson-Bassham cycle, phosphoribulokinase (CrPRK), triosephosphate isomerase (CrTPI), and ribulose-5-phosphate epimerase (CrRPE) from Chlamydomonas reinhardtii, were solved to investigate their catalytic and regulatory mechanisms. Additionally, the structure of nitrosylated CrTPI made it possible to identify Cys14 as a target for nitrosylation, and the crystallographic structure of CrRPE was solved for the first time, providing insights into its catalytic and regulatory properties. Finally, the structure of S-nitrosoglutathione reductase, AtGSNOR, was compared with that of AtADH1, revealing differences in their catalytic sites. Overall, seven crystallographic structures, including partially oxidized CrPRK, CrPRK/ATP, CrPRK/ADP/Ru5P, nitrosylated CrTPI, apo-CrRPE, apo-AtGSNOR, and AtADH1-NADH, were solved and are yet to be deposited in the PDB.
Abstract:
Multiple Myeloma (MM) is a hematologic cancer with a heterogeneous and complex genomic landscape, in which Copy Number Alterations (CNAs) play a key role in the disease's pathogenesis and prognosis. Studying the temporal occurrence of early alterations is of biological and clinical interest, as these alterations act as disease "drivers" by deregulating key tumor pathways. This study presents an innovative suite of bioinformatic tools for harmonizing CNA data and tracing the origin of CNAs throughout the evolutionary history of MM. To this aim, large cohorts of newly-diagnosed MM (NDMM, N=1582) and smoldering MM (SMM, N=282) were aggregated. The tools developed in this study harmonize CNAs obtained from different genomic platforms in a way that preserves high statistical power. The large size of these cohorts was thus harnessed to identify novel "driver" genes (NFKB2, NOTCH2, MAX, EVI5 and the MYC-ME2 enhancer) and to build an innovative timing model, implemented with a statistical method that attaches confidence intervals to the CNA calls. By applying this model to both the NDMM and SMM cohorts, it was possible to identify specific CNAs (1q(CKS1B)amp, 13q(RB1)del, 11q(CCND1)amp and 14q(MAX)del) and categorize them as "early"/"driver" events. Narrow confidence intervals in the timing estimates guaranteed a high level of precision. These CNAs were proposed as critical MM alterations that play a foundational role in the evolutionary history of both SMM and NDMM. Finally, a multivariate survival model identified the independent genomic alterations with the greatest effect on patients' survival, including RB1-del, CKS1B-amp, MYC-amp, NOTCH2-amp and TRAF3-del/mut. In conclusion, the alterations identified as both "early drivers" and correlated with patients' survival were proposed as biomarkers that, if included in wider survival models, could provide better disease stratification and an improved prognosis definition.
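As a flavor of how confidence intervals can be attached to a timing estimate, the sketch below bootstraps an "earliness" statistic over simulated per-patient clonality calls. The data and the clonal-fraction proxy are illustrative assumptions, not the thesis' actual timing model.

```python
# Hedged sketch of bootstrap confidence intervals on a timing statistic;
# the simulated calls and the "earliness" proxy are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
# 1 = CNA observed as clonal (early), 0 = subclonal (late), per patient
calls = rng.binomial(1, 0.72, size=300)

def earliness(x: np.ndarray) -> float:
    return x.mean()                       # fraction of clonal calls

boot = np.array([
    earliness(rng.choice(calls, size=calls.size, replace=True))
    for _ in range(2000)                  # resample patients with replacement
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"earliness={earliness(calls):.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```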
Abstract:
The pervasive availability of connected devices in every industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By executing totally or partially closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP, etc.). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. This risks undermining the principle of generality that underlies the cloud computing economy of scale, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable for enabling the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
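To illustrate the decoupling idea only (not the thesis' actual middleware), the sketch below defines one asynchronous channel interface that applications code against, with a commodity socket backend behind it; an RDMA or DPDK backend could plug in behind the same API. All class and method names are assumptions.

```python
# Hedged sketch of a backend-agnostic async I/O API; names are illustrative.
import asyncio
from abc import ABC, abstractmethod

class AsyncChannel(ABC):
    @abstractmethod
    async def send(self, buf: memoryview) -> None: ...
    @abstractmethod
    async def recv(self, buf: memoryview) -> int: ...

class SocketChannel(AsyncChannel):
    """Commodity backend; an RDMA/DPDK one would expose the same interface."""
    def __init__(self, reader, writer):
        self.reader, self.writer = reader, writer

    async def send(self, buf: memoryview) -> None:
        self.writer.write(buf)
        await self.writer.drain()

    async def recv(self, buf: memoryview) -> int:
        data = await self.reader.read(len(buf))
        buf[: len(data)] = data           # fill the caller-owned buffer
        return len(data)

async def demo():
    async def echo(reader, writer):       # loopback echo server
        writer.write(await reader.read(1024))
        await writer.drain()
        writer.close()
    server = await asyncio.start_server(echo, "127.0.0.1", 8890)
    reader, writer = await asyncio.open_connection("127.0.0.1", 8890)
    ch = SocketChannel(reader, writer)
    await ch.send(memoryview(b"ping"))
    buf = memoryview(bytearray(4))
    print(bytes(buf[: await ch.recv(buf)]))
    server.close()

asyncio.run(demo())
```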
Abstract:
The relationship between catalytic properties and the nature of the active phase is well established, with a greater amount of active phase typically leading to enhanced catalysis. However, the costs associated with acquiring and processing these metals can become economically and environmentally unsustainable for global industries. Thus, there is potential for a paradigm shift towards utilizing polymeric ligands or other polymeric systems to modulate and enhance catalytic performance. This alternative approach has the potential to reduce the requisite amount of active phase while preserving effective catalytic activity. Such a strategy could yield substantial benefits from both economic and environmental perspectives. The primary objective of this research is to examine the influence of water-soluble polymeric ligands on the final properties of colloidal gold nanoparticles supported on activated carbon, such as the size and dispersion of the active phase, as well as on their catalytic activity, encompassing conversion, selectivity towards desired products, and stability. The goal is to elucidate the impact of polymers systematically, offering a toolbox for fine-tuning catalytic performance from the initial stages of catalyst design. Moreover, investigating the potential to augment conversion and selectivity in specific reactions through tailored polymeric ligands holds promise for reshaping catalyst preparation methodologies, thereby fostering the development of more economically sustainable materials.
Abstract:
All structures are subjected to various loading conditions and combinations. For offshore structures, these loads include permanent loads, hydrostatic pressure, and wave, current, and wind loads. Typically, sea environments in different geographical regions are characterized by the 100-year wave height, surface currents, and wind speeds. The main problems associated with the commonly used deterministic method are that not all waves have the same period and that the actual stochastic nature of the marine environment is not taken into account. Offshore steel structure fatigue design is carried out using the DNVGL-RP-0005:2016 standard, which supersedes the DNV-RP-C203 standard (2012). Fatigue analysis is necessary for oil and gas producing offshore steel structures, which were first constructed in the Gulf of Mexico (1930s) and later in the North Sea (1960s). Fatigue strength is commonly described by S-N curves, which are obtained from laboratory experiments. The rapid development of the offshore wind industry has driven exploration into deeper ocean areas and the adoption of new support structural concepts, such as full lattice tower systems, among others. The optimal design of offshore wind support structures, including foundation, turbine tower, and transition piece components, with consideration of economy, safety, and even the environment, is a critical challenge. This study discusses the fatigue design challenges of transition pieces from decommissioned platforms repurposed for offshore wind energy. The fatigue resistance of materials and structural components under uniaxial and multiaxial loading is introduced together with the new fatigue design rules, while considering the combination of global and local modeling using finite element analysis software.
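To make the S-N approach concrete, here is a hedged worked sketch of cumulative damage using a one-slope S-N curve and the Palmgren-Miner rule. The values log10(a) = 12.164 and m = 3 echo a typical DNV-style curve in air, but both parameters and the stress-range histogram are illustrative assumptions only.

```python
# Worked sketch: one-slope S-N curve plus Palmgren-Miner damage summation.
# All numbers are illustrative assumptions, not a design calculation.
import math

LOG_A, M = 12.164, 3.0                   # assumed S-N parameters

def cycles_to_failure(stress_range_mpa: float) -> float:
    """N = a * S^(-m), i.e. log10(N) = log10(a) - m * log10(S)."""
    return 10 ** (LOG_A - M * math.log10(stress_range_mpa))

# long-term stress-range histogram: (stress range [MPa], cycles per year)
histogram = [(120.0, 1e4), (80.0, 1e5), (40.0, 2e6)]

damage = sum(n / cycles_to_failure(s) for s, n in histogram)
print(f"annual Miner damage: {damage:.3f}, life = {1 / damage:.1f} years")
```

Failure is predicted when the accumulated damage sum reaches 1, so the reciprocal of the annual damage gives the estimated fatigue life in years.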
Abstract:
A substantial upgrade of the LHC is expected in the coming years, which foresees increasing the integrated luminosity by a factor of 10 over the current value. This parameter is proportional to the number of collisions per unit time. Consequently, the computational resources needed at every level of event reconstruction will grow considerably. For this reason, the CMS collaboration began some years ago to explore the possibilities offered by heterogeneous computing, i.e., the practice of distributing computation between CPUs and dedicated accelerators, such as graphics cards (GPUs). One difficulty of this approach is the need to write, validate, and maintain different code for every device on which it will run. This thesis presents the possibility of using SYCL to port event-reconstruction code so that it runs efficiently on different devices without substantial modifications. SYCL is an abstraction layer for heterogeneous computing that complies with the ISO C++ standard. This study focuses on porting CLUE, a clustering algorithm for calorimetric energy deposits, using oneAPI, the SYCL implementation supported by Intel. Initially, the standalone version of the algorithm was ported, mainly to gain familiarity with SYCL and to ease performance comparisons with the existing versions. In this case, performance is very similar to that of native CUDA code on the same hardware. To validate the physics, the algorithm was integrated into a reduced version of the framework used by CMS for reconstruction. The physics results are identical to those of the other implementations, while, in terms of computational performance, SYCL in some cases produces faster code than other abstraction layers adopted by CMS, making it an interesting option for the future of heterogeneous computing in high-energy physics.
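For intuition about what is being ported, here is a greatly simplified, NumPy-only sketch of the density-based idea behind CLUE (local density, nearest higher-density neighbour, seed promotion, follower assignment). It is illustrative only and bears no relation to the tuned SYCL/CUDA kernels; all thresholds are assumptions.

```python
# Toy density-peak clustering in the spirit of CLUE; illustrative only.
import numpy as np

def clue_like(points: np.ndarray, dc=1.0, rho_c=2.0, delta_c=2.0):
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    rho = (d < dc).sum(axis=1).astype(float)         # local density
    rho += np.linspace(0, 1e-6, len(points))         # deterministic tie-break
    order = np.argsort(-rho)                         # high density first
    nh = np.full(len(points), -1)                    # nearest higher-density
    delta = np.full(len(points), np.inf)
    for i in range(len(points)):
        higher = np.where(rho > rho[i])[0]
        if higher.size:
            nh[i] = higher[np.argmin(d[i, higher])]
            delta[i] = d[i, nh[i]]
    labels = np.full(len(points), -1)
    seeds = np.where((rho >= rho_c) & (delta >= delta_c))[0]
    labels[seeds] = np.arange(seeds.size)            # one cluster per seed
    for i in order:                                  # followers inherit labels
        if labels[i] < 0 and nh[i] >= 0:
            labels[i] = labels[nh[i]]
    return labels

pts = np.array([[0, 0], [0.2, 0], [0, 0.2], [5, 5], [5.2, 5], [9, 9]])
print(clue_like(pts))                                # -> [0 0 0 1 1 1]
```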
Abstract:
In this work, two different protocols were developed for the synthesis of Nb2O5-SiO2 via a sol-gel route in which supercritical carbon dioxide was used as the solvent. The tailored design of the reactor allowed the reactants to come into contact only once supercritical CO2 was present, and the high-throughput experimentation scCO2 unit allowed the screening of synthetic parameters, which led to a Nb2O5 incorporation into the silica matrix of 2.5 wt%. N2 physisorption revealed high surface areas and the presence of meso- and micropores. XRD demonstrated the amorphous character of these materials. SEM-EDX proved the excellent dispersion of Nb2O5 in the silica matrix. These materials were tested in the epoxidation of cyclooctene with hydrogen peroxide, which is considered an environmentally friendly oxidant. The catalysts were virtually inactive in an organic, polar, aprotic solvent (1,4-dioxane). However, the most active scCO2 Nb2O5-SiO2 catalyst achieved a cyclooctene conversion of 44% with a selectivity of 88% towards the epoxide when tested in ethanol. Catalytic tests on cyclohexene revealed the presence of the epoxide, which is remarkable considering that this substrate is easily oxidised to the diol. The behaviour in protic and aprotic solvents is compared to that of TS-1.
Abstract:
Basilar invagination (BI) is a congenital craniocervical junction (CCJ) anomaly in which the spine prolapses into the skull base, which can result in severe neurological impairment. In this paper, we retrospectively evaluate the surgical treatment of 26 patients treated for symptomatic BI. BI was classified according to instability and neural abnormality findings. Clinical outcome was evaluated using the Nurick grading system. A total of 26 patients were included, aged 15 to 67 years (mean 38); 10 patients (38%) were male and 16 (62%) were female. All patients had some degree of tonsillar herniation, and 25 were treated with foramen magnum decompression. Nine patients required craniocervical fixation. Six patients had undergone prior surgery and required a new surgical procedure for progression of neurological symptoms associated with new compression or instability. Most patients with neurological symptoms secondary to brainstem compression showed some improvement during follow-up. There was one death in this series, occurring 1 month after surgery and associated with late removal of the tracheal cannula. Management of BI can provide improvements in neurological outcomes, but requires analysis of the neural and bony anatomy of the CCJ, as well as of occult instability. The complexity and heterogeneous presentation of BI require attention to occult instability on examination and to airway problems secondary to concomitant facial malformations.
Abstract:
TiO2 and TiO2/WO3 electrodes, irradiated by a solar simulator in configurations for heterogeneous photocatalysis (HP) and electrochemically assisted HP (EHP), were used to remediate aqueous solutions containing 10 mg L⁻¹ (34 μmol L⁻¹) of 17-α-ethinylestradiol (EE2), the active component of most oral contraceptives. The photocatalysts consisted of 4.5 μm thick porous films of TiO2 and TiO2/WO3 (W/Ti molar ratio of 12%) deposited on transparent electrodes from aqueous suspensions of TiO2 particles and WO3 precursors, followed by thermal treatment at 450 °C. First, an energy diagram was constructed from photoelectrochemical and UV-Vis absorption spectroscopy data; it revealed that EE2 could be directly oxidized by the photogenerated holes at the semiconductor surfaces, considering the relative HOMO level of EE2 and the semiconductor valence band edges. Also, for the irradiated hybrid photocatalyst, electrons in TiO2 should be transferred to the WO3 conduction band, while holes move toward the TiO2 valence band, improving charge separation. The remediated EE2 solutions were analyzed by fluorescence, HPLC, and total organic carbon measurements. As expected from the energy diagram, both photocatalysts promoted EE2 oxidation in the HP configuration; after 4 h, the EE2 concentration decayed to 6.2 mg L⁻¹ (35% EE2 removal) with irradiated TiO2, while the TiO2/WO3 electrode achieved 45% EE2 removal. A higher performance was obtained in EHP systems, in which a Pt wire was introduced as a counter-electrode and the photoelectrodes were biased at +0.7 V; the EE2 removal then reached 48% and 54% for TiO2 and TiO2/WO3, respectively. The hybrid TiO2/WO3 electrode, compared to the TiO2 electrode, exhibited enhanced sunlight harvesting and improved separation of photogenerated charge carriers, resulting in higher performance in removing this contaminant of emerging concern from aqueous solution.
Abstract:
In this study, transmission-line modeling (TLM) applied to bio-thermal problems was improved by incorporating several novel computational techniques. These include graded meshes, which ran nine times faster and used only a fraction (16%) of the computational resources required by regular meshes when analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that takes thermal properties into account, resulting in more realistic modeling of complex problems, is introduced, along with a new way of calculating an error parameter. The temperatures calculated between nodes were compared against results from the literature and agreed to within less than 1%. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for heat transfer analysis of biological systems.
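To illustrate the graded-mesh idea only (not the paper's TLM formulation), the sketch below solves explicit 1-D heat conduction on a grid that is fine near a heated boundary and coarse in the far field, using the standard non-uniform central difference. Geometry, material constants, and boundary values are illustrative assumptions.

```python
# Hedged sketch of heat conduction on a graded 1-D mesh: fine spacing near
# the source, coarse elsewhere. Finite differences, not TLM; values assumed.
import numpy as np

x = np.concatenate([np.linspace(0.0, 0.01, 41),       # fine near the source
                    np.linspace(0.0125, 0.05, 16)])   # coarse far field, m
T = np.full(x.size, 37.0)                             # tissue temperature, C
alpha, dt = 1.4e-7, 1e-3                              # diffusivity m^2/s, step s
T[0] = 45.0                                           # heated boundary

hl = x[1:-1] - x[:-2]                                 # left node spacings
hr = x[2:] - x[1:-1]                                  # right node spacings
for _ in range(20000):                                # 20 s of simulated time
    # non-uniform central difference for d2T/dx2 on the graded grid
    d2T = 2 * (hl * T[2:] - (hl + hr) * T[1:-1] + hr * T[:-2]) \
          / (hl * hr * (hl + hr))
    T[1:-1] += alpha * dt * d2T                       # explicit Euler update
print(np.round(T[:6], 2))                             # profile near the source
```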