Abstract:
High energy efficiency and high performance are the key requirements for Internet of Things (IoT) end-nodes. Exploiting clusters of multiple programmable processors has recently emerged as a suitable solution to address this challenge. However, one of the main bottlenecks for multi-core architectures is the instruction cache. While private caches suffer from data replication and waste area, fully shared caches lack scalability and limit the operating frequency. Hence, we propose a hybrid solution in which a larger shared cache (L1.5) serves multiple cores, connected through a low-latency interconnect to small private caches (L1). This design, however, still suffers from capacity misses when the L1 is small. We therefore propose a sequential prefetch between L1 and L1.5 to improve performance with little area overhead. Moreover, to cut the critical path and improve timing, we optimized the core's instruction fetch stage with non-blocking transfers, adopting a 4 x 32-bit ring-buffer FIFO and adding a pipeline stage for conditional branches. We present a detailed comparison of the performance and energy efficiency of instruction cache architectures recently proposed for Parallel Ultra-Low-Power clusters. On average, when executing a set of real-life IoT applications, our two-level cache improves performance by up to 20% at the cost of 7% energy efficiency with respect to the private cache. Compared to a shared cache system, it improves performance by up to 17% with the same energy efficiency. Finally, an improvement of up to 20% in timing (maximum frequency) and software control enable the two-level instruction cache with prefetch to adapt to various battery-powered use cases, balancing high performance and energy efficiency.
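As an aside, the effect of sequential prefetching on straight-line instruction streams can be illustrated with a toy model. This is a hypothetical sketch (the class name, fill policy, and arbitrary eviction are illustrative assumptions, not the RTL design described above):

```python
class TwoLevelICache:
    """Toy model of a private L1 backed by a shared L1.5 with
    sequential prefetch: when an L1 miss is served, the next
    cache line is also brought into L1 (hypothetical sketch)."""

    def __init__(self, l1_lines, line_bytes=16):
        self.l1 = set()            # line numbers resident in L1
        self.l1_lines = l1_lines   # L1 capacity in lines
        self.line_bytes = line_bytes
        self.hits = 0
        self.misses = 0

    def fetch(self, addr):
        line = addr // self.line_bytes
        if line in self.l1:
            self.hits += 1
        else:
            self.misses += 1       # refill from the shared L1.5
            self._fill(line)
            self._fill(line + 1)   # sequential (next-line) prefetch

    def _fill(self, line):
        if line not in self.l1:
            if len(self.l1) >= self.l1_lines:
                self.l1.pop()      # arbitrary eviction, enough for a sketch
            self.l1.add(line)

# straight-line code: 16 sequential 32-bit fetches spanning 4 cache lines
cache = TwoLevelICache(l1_lines=8)
for addr in range(0, 64, 4):
    cache.fetch(addr)
```

In this toy run only lines 0 and 2 miss (2 misses, 14 hits), whereas without prefetch all four lines would miss: next-line prefetch halves the cold misses of a sequential stream.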
Abstract:
The consequences of algorithmic management for workers are well known among scholars, but little research investigates the possibilities for agency, especially at the individual level, in the gig economy. Starting from the everyday reality of work, the objective is to analyze the forms of agency exercised by platform workers in the last-mile logistics sector. The research is based on a multi-sited ethnography conducted in two distant countries and covering two different urban platform services: food delivery in Italy (Bologna, Turin) and ride-hailing in Argentina (Buenos Aires). Despite the differences, fieldwork revealed several continuities across the geographical contexts. First, digital technologies play an ambivalent role in the work environment: while technology is used by companies to discipline labor, it is also a tool that can be deployed to workers' advantage. In both ride-hailing and food-delivery platforms, workers express their agency by sharing workaround practices and tactics to circumvent algorithmic despotism. Second, the research brought to light a wide variety of economic activities developed at the margins of the platform economy. In both cases the platforms intersect vividly with urban informal economies and feed informal labor circuits, as evidenced by the high incidence of illicit exchanges: for example, account selling, hacking bots, and digital gangmastering (caporalato). Far from initiating a process of formalization, then, the platform subsumes and reproduces the productive and reproductive conditions of informality (viração), offering intermittent and insecure jobs to a mass of disposable workers available for underemployment.
In conclusion, platforms are defined as baroque infrastructures, where the baroque denotes both the hybrid nature of action, mixing forms of neoliberalism-from-below with practices of peer solidarity, and the progressive restructuring of accumulation processes under the banner of a renewed interdependence between the formal and the informal in the infrastructures of the «mondo a domicilio» (the "world delivered to your door").
Abstract:
Multi-phase electrical drives are potential candidates for employment in innovative electric vehicle powertrains, in response to the demand for high efficiency and reliability in this type of application. In addition to multi-phase technology, multilevel technology has also been developed in recent decades. These two technologies are somewhat complementary, since both allow the power rating of the system to be increased without increasing the current and voltage ratings of the inverter's individual power switches. In this thesis, several topics concerning the inverter, the motor, and the fault diagnosis of an electric vehicle powertrain are addressed. In particular, attention is focused on multi-phase and multilevel technologies and their potential advantages with respect to traditional technologies. First, the mathematical models of two multi-phase machines, a five-phase induction machine and an asymmetrical six-phase permanent magnet synchronous machine, are developed using the Vector Space Decomposition approach. Then, a new modulation technique for multi-phase multilevel T-type inverters is developed, which solves the voltage-balancing problem of the DC-link capacitors while ensuring flexible management of the capacitor voltages. The technique is based on the proper selection of the zero-sequence component of the modulating signals. Subsequently, a diagnostic technique for detecting the state of health of the rotor magnets in a six-phase permanent magnet synchronous machine is established, based on analysing the electromotive force induced in the stator windings by the rotor magnets. Furthermore, an innovative algorithm is presented that extends the linear modulation region of five-phase inverters by taking advantage of the multiple degrees of freedom available in multi-phase systems. Finally, the mathematical model of an eighteen-phase squirrel-cage induction motor is defined.
This activity aims to develop a motor drive able to change the number of poles of the machine during operation.
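The zero-sequence degree of freedom mentioned above can be illustrated with a minimal sketch. Only the classic min-max centering injection is shown here as an example of shifting the common-mode component; the actual technique selects the offset to balance the DC-link capacitor voltages, which is not reproduced in this sketch:

```python
import math

def zero_sequence_injection(refs):
    """Add a common-mode (zero-sequence) offset to the modulating
    signals. Min-max centering is shown as an example; the offset is
    a free degree of freedom that the thesis instead chooses so as to
    balance the DC-link capacitor voltages of the T-type inverter."""
    z = -(max(refs) + min(refs)) / 2.0
    return [r + z for r in refs]

# five-phase sinusoidal references at one sampling instant
t = 0.003
refs = [0.9 * math.sin(2 * math.pi * 50 * t - 2 * math.pi * k / 5)
        for k in range(5)]
centered = zero_sequence_injection(refs)
# the envelope is now symmetric about zero, improving DC-bus utilization
```

Because the same offset is added to every phase, the line-to-line (differential) voltages, and hence the machine currents, are unchanged.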
Abstract:
This doctoral thesis studies historical shallow landslide activity over time in response to anthropogenic forcing on land use, through the compilation of multi-temporal landslide inventories. The study areas, located in contrasting settings and characterized by different histories of land-cover change, include the Sillaro River basin (Italy) and the Tsitika and Eve River basins (coastal British Columbia). The Sillaro River basin belongs to clay-dominated settings, characterized by extensive badland development and dominated by earth slides and earthflows. Here, forest removal began in the Roman period and has been followed by agricultural land abandonment and natural revegetation in recent times. By contrast, the Tsitika-Eve River basins are characterized by granitic and basaltic lithologies and dominated by debris slides, debris flows, and debris avalanches. In this setting, anthropogenic impacts started in the 1960s with logging operations. The thesis begins with an introductory chapter, followed by a methodological section in which a multi-temporal mapping approach is proposed and tested at four landslide sites in the Sillaro River basin. Results, in terms of inventory completeness in time and space, are compared against the existing region-wide Emilia-Romagna inventory. This approach is then applied at the scale of the Sillaro River basin, where the resulting multi-temporal inventory is used to investigate landslide activity in relation to historical land-cover changes across geologic domains and to hydro-meteorological forcing. Next, the impact of timber harvesting and road construction on landslide activity and sediment transfer in the Tsitika-Eve River basins is investigated, with a focus on how interactions between landscape morphometry and cutblock location may control landslide size-frequency relations.
The thesis ends with a summary of the main findings and discusses the advantages and limitations associated with compiling multi-temporal inventories in the two settings during different periods of human-driven land-cover dynamics.
Abstract:
The topic of this thesis is the design and implementation of mathematical models and control-system algorithms for rotary-wing unmanned aerial vehicles to be used in cooperative scenarios. The use of rotorcraft has many attractive advantages, since these vehicles can take off and land vertically, hover, and move backward and laterally. Rotary-wing aircraft missions require precise control due to the vehicles' unstable and heavily coupled dynamics. As a matter of fact, flight testing is the most accurate way to evaluate flying qualities and to test control systems. However, it may be very expensive and/or infeasible at the early design and prototyping stage. A good compromise is a preliminary assessment performed by means of simulations, followed by a reduced flight-testing campaign. Consequently, an analytical framework represents an important foundation for simulations and control-algorithm design. In this work, mathematical models for various helicopter configurations are implemented. Different flight control techniques for helicopters are presented with their theoretical background and tested via simulations and experimental flight tests on a small-scale unmanned helicopter. The same platform is also used in a cooperative scenario with a rover. Control strategies, algorithms, and their implementation to perform missions are presented for two main scenarios. One of the main contributions of this thesis is a suitable control system composed of a classical PID baseline controller augmented with an L1 adaptive contribution. In addition, a complete analytical framework and a study of the dynamics and stability of a synch-rotor are provided. Finally, the implementation of cooperative control strategies is presented for two main scenarios involving a small-scale unmanned helicopter and a rover.
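The baseline part of such a controller can be sketched as a textbook discrete PID loop. This is a generic illustration (gains, plant, and class name are assumptions), with the L1 adaptive augmentation omitted:

```python
class PID:
    """Discrete PID controller (the thesis augments such a baseline
    with an L1 adaptive term, which is omitted in this sketch)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# close the loop on a toy first-order plant x' = u (illustrative gains)
pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.update(1.0 - x)   # track a unit setpoint
    x += u * 0.01
# x converges to the unit setpoint
```

On a real rotorcraft, one such loop per controlled axis forms the baseline, and the adaptive term compensates for model uncertainty that fixed gains cannot absorb.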
Abstract:
Amid the trend of rising health expenditure in developed economies, changing healthcare delivery models is an important point of action for service regulators seeking to contain this trend. Such change is mostly induced by financial incentives or regulatory tools issued by regulators and targeting service providers and patients. This creates a tripartite interaction between service regulators, professionals, and patients that manifests a multi-principal agent relationship, in which professionals are agents to two principals: regulators and patients. This thesis is concerned with this multi-principal agent relationship in healthcare and investigates the determinants of (non-)compliance with regulatory tools in light of this tripartite relationship. In addition, the thesis provides insights into the different institutional, economic, and regulatory settings that govern the multi-principal agent relationship in healthcare in different countries. Furthermore, the thesis provides and empirically tests a conceptual framework of the possible determinants of physicians' (non-)compliance with regulatory tools issued by the regulator. The main findings are as follows. First, in a multi-principal agent setting, using financial incentives to align the objectives of professionals and the regulator is important but not the only solution. This finding is based on the heterogeneity of the financial incentives provided to professionals in different health markets, which does not support a one-size-fits-all model of financial incentives to influence clinical decisions. Second, soft-law tools such as clinical practice guidelines (CPGs) are important for mitigating the problems of the multi-principal agent setting in health markets, as they reduce information asymmetries while preserving the autonomy of professionals. Third, CPGs are complex and heterogeneous, and so are the determinants of (non-)compliance with them.
Fourth, CPGs work, but only under certain conditions: factors such as intra-professional competition between service providers or practitioners may lead to non-compliance when CPGs are likely to reduce the professional's utility. Finally, different degrees of soft-law mandate have different effects on providers' compliance. Generally, the stronger the mandate, the stronger the compliance; however, even with a strong mandate, drivers such as intra-professional competition and the co-management of patients by different professionals affect (non-)compliance.
Abstract:
The use of extracorporeal organ support (ECOS) devices is increasingly widespread to temporarily sustain or replace the functions of impaired organs in critically ill patients. Among ECOS therapies, respiratory function is supported by extracorporeal life support (ECLS) therapies such as extracorporeal membrane oxygenation (ECMO) and extracorporeal carbon dioxide removal (ECCO2R), while renal replacement therapies (RRT) are used to support kidney function. However, the leading cause of mortality in critically ill patients is multi-organ dysfunction syndrome (MODS), which requires a complex therapeutic strategy in which extracorporeal treatments are often integrated with pharmacological approaches. Recently, the concept of multi-organ support therapy (MOST) has been introduced, in which several isolated ECOS devices are connected in sequence to provide simultaneous support to different organ systems. The future of critical care points towards extracorporeal devices offering multiple organ support therapies on demand from a single hardware platform, where treatment lines can be used alternately or in conjunction. The aim of this industrial PhD project is to design and validate a device for multi-organ support, developing an auxiliary line for renal replacement therapy (hemofiltration) to be integrated into a platform for ECCO2R. The intended purpose of the ancillary line, which can be connected on demand, is to remove excess fluid by ultrafiltration and to achieve volume control by infusing a replacement solution, as patients undergoing respiratory support are particularly prone to fluid overload. Furthermore, an ultrafiltration regulation system shall be developed using a powered, software-modulated pinch valve on the effluent line of the hemofilter, proposed as an alternative to the state-of-the-art solution based on a peristaltic pump.
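A software-modulated pinch valve regulating ultrafiltration amounts to a feedback loop on the effluent line. The sketch below is a hypothetical proportional controller (gain, units, and the linear valve model are illustrative assumptions, not the device's actual control law):

```python
def pinch_valve_step(opening, uf_measured, uf_target, gain=0.001, dt=1.0):
    """One step of a (hypothetical) proportional controller: widen the
    pinch-valve opening (0 = fully closed, 1 = fully open) on the
    hemofilter effluent line when the measured ultrafiltration rate
    is below the prescribed target, and vice versa."""
    opening += gain * (uf_target - uf_measured) * dt
    return min(1.0, max(0.0, opening))   # respect valve travel limits

# toy valve/plant: ultrafiltration rate (mL/h) proportional to opening
opening = 0.5
for _ in range(40):
    uf = 500.0 * opening
    opening = pinch_valve_step(opening, uf, uf_target=100.0)
# the opening settles where 500 * opening == 100, i.e. at 0.2
```

In the real device the effluent flow also depends on transmembrane pressure and blood flow, so the actual regulation would need measured feedback rather than this linear toy model.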
Abstract:
Introduction: Only a proportion of patients with advanced NSCLC benefit from immune checkpoint blockers (ICBs). No biomarker is validated for choosing between ICB monotherapy and ICBs in combination with chemotherapy (chemo-ICBs) when PD-L1 expression is above 50%. The aim of the present study is to validate total metabolic tumor volume (tMTV), as assessed by 2-deoxy-2-[18F]fluoro-d-glucose positron emission tomography ([18F]FDG-PET), as such a biomarker. Material and methods: This is a multicentric retrospective study. Patients with advanced NSCLC treated with ICBs, chemo-ICBs, or chemotherapy (CT) were enrolled in 12 institutions from 4 countries. The inclusion criterion was a positive PET scan performed within 42 days of treatment start. tMTV was analyzed at each center using a 42% SUVmax threshold; high tMTV was defined as tMTV > median. Results: 493 patients were included: 163 treated with ICBs alone, 236 with chemo-ICBs, and 94 with CT. No correlation was found between PD-L1 expression and tMTV. Median PFS for patients with high tMTV (> 100.1 cm3) was 3.26 months (95% CI 1.94–6.38) vs 14.70 months (95% CI 11.51–22.59) for those with low tMTV (p = 0.0005). Similarly, median OS for patients with high tMTV was 11.4 months (95% CI 8.42–19.1) vs 33.1 months for those with low tMTV (95% CI 22.59–NA), p = 0.00067. In chemo-ICB-treated patients, no correlation was found for OS (p = 0.11) and a borderline correlation was found for PFS (p = 0.059). Patients with high tMTV and PD-L1 ≥ 50% had better PFS when treated with the combination of chemotherapy and ICBs than with ICBs alone: 3.26 months (95% CI 1.94–5.79) for ICBs vs 11.94 months (95% CI 5.75–NA) for chemo-ICBs (p = 0.043). Conclusion: tMTV is predictive of ICB benefit but not of CT benefit; it can help select the best upfront strategy in patients with high tMTV.
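The tMTV biomarker can be sketched in a few lines. Function names and the toy arrays below are illustrative, but the 42% SUVmax threshold and the median split follow the study's definitions:

```python
import numpy as np

def lesion_mtv_ml(suv, voxel_volume_ml, threshold=0.42):
    """Metabolic volume of one lesion: voxels at or above 42% of the
    lesion's SUVmax, as in the study's segmentation threshold."""
    mask = suv >= threshold * suv.max()
    return float(mask.sum()) * voxel_volume_ml

def stratify(tmtv_values):
    """Median split used in the study: 'high' tMTV is tMTV > median."""
    med = float(np.median(tmtv_values))
    return ["high" if v > med else "low" for v in tmtv_values]

# toy 2x2 SUV slice with 0.5 mL voxels: only the 10.0 and 5.0 voxels
# clear the 4.2 threshold (42% of SUVmax = 10.0)
suv = np.array([[10.0, 5.0],
                [4.0, 1.0]])
mtv = lesion_mtv_ml(suv, voxel_volume_ml=0.5)   # 2 voxels -> 1.0 mL
```

In practice tMTV is the sum of per-lesion volumes over all lesions; the median cutoff is then computed over the cohort.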
Abstract:
Landslides are common features of the landscape of the north-central Apennine mountain range and cause frequent damage to human facilities and infrastructure. Most of these landslides move periodically at moderate velocities, and only after particular rainfall events do some accelerate abruptly. Synthetic aperture radar interferometry (InSAR) provides a particularly convenient method for studying deforming slopes. We use standard two-pass interferometry, taking advantage of the short revisit time of the Sentinel-1 satellites. In this paper we present the results of InSAR analyses carried out on several study areas in the central and northern Italian Apennines. The aims of the work described in the articles contained in this paper concern: i) the potential of the standard two-pass interferometric technique for the recognition of active landslides; ii) the exploration of the potential of displacement time series resulting from a two-pass, multiple-time-scale InSAR analysis; and iii) the evaluation of the possibility of making comparisons with climate forcing for cognitive and risk-assessment purposes. Our analysis successfully identified more than 400 InSAR deformation signals (IDS) corresponding to active slope movements in the different study areas. The comparison between IDSs and thematic maps allowed us to identify the main characteristics of the slopes most prone to landslides. The analysis of displacement time series derived from monthly interferometric stacks or single 6-day interferograms allowed landslide activity thresholds to be established. This information, combined with the displacement time series, allowed the relationship between ground deformation and climate forcing to be successfully investigated. The InSAR data also made it possible to validate geographical warning systems and to compare the activity state of landslides with triggering probability thresholds.
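The core of two-pass interferometry is the pixel-wise phase difference of two co-registered SLC images, which (after unwrapping) maps to line-of-sight displacement. A minimal sketch, with the sign convention and rounded wavelength value as assumptions:

```python
import numpy as np

C_BAND_WAVELENGTH_M = 0.0556   # Sentinel-1 C-band (approximate)

def interferogram(slc1, slc2):
    """Two-pass interferometric phase: pixel-wise phase difference of
    two co-registered single-look complex (SLC) acquisitions."""
    return np.angle(slc1 * np.conj(slc2))

def los_displacement_m(unwrapped_phase):
    """Unwrapped phase to line-of-sight displacement; one full fringe
    (2*pi) corresponds to half a wavelength of ground motion.
    The sign convention is an assumption of this sketch."""
    return -C_BAND_WAVELENGTH_M / (4.0 * np.pi) * unwrapped_phase

# round trip: 5 mm of motion produces about -1.13 rad (no wrapping)
d = 0.005
phase = -4.0 * np.pi * d / C_BAND_WAVELENGTH_M
slc1, slc2 = np.exp(1j * phase), 1.0 + 0.0j
recovered = los_displacement_m(interferogram(slc1, slc2))
```

Real processing also removes the topographic and orbital phase contributions and unwraps the phase before this conversion; the 6-day Sentinel-1 revisit mentioned above keeps temporal decorrelation low enough for slow slope movements.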
Abstract:
The electrocatalytic reduction of CO2 (CO2RR) is a compelling strategy for converting CO2 into fuels and realizing a carbon-neutral circular economy. In recent years, research has focused on the development of new materials and technologies capable of capturing and converting CO2 into useful products. The main problem of CO2RR is its poor selectivity, which can lead to the formation of numerous reaction products to the detriment of efficiency. For this reason, the design of new electrocatalysts that reduce CO2 selectively and efficiently is a fundamental step towards the future exploitation of this technology. Here we present a new class of electrocatalysts designed with a modular approach, that is, derived from the combination of different building blocks in a single nanostructure. With this approach it is possible to obtain materials with an innovative design and new functionalities, where the interconnections between the various components are essential to obtain a highly selective and efficient reduction of CO2, thus opening up new possibilities in the design of optimized electrocatalytic materials. By combining the unique physicochemical properties of carbon nanostructures (CNS) with nanocrystalline metal oxides (MO), we were able to modulate the selectivity of CO2RR, producing formic acid and syngas at low overpotentials. The CNS not only stabilize the MO nanoparticles; the creation of an optimal interface between the two nanostructures also improves the catalytic activity of the material's active phase. Meanwhile, the presence of oxygen atoms in the MO creates defects that accelerate the reaction kinetics and stabilize certain reaction intermediates, thereby selecting the reaction pathway. Finally, part of the work was dedicated to the study of the experimental parameters influencing the CO2RR, with the aim of improving the experimental setup in order to approach catalytic performance of commercial relevance.
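Selectivity in CO2RR is commonly quantified as Faradaic efficiency, the fraction of the charge passed that ends up in a given product. A minimal sketch (the 2-electron count applies to formic acid and CO; the function name and example numbers are illustrative):

```python
FARADAY_C_PER_MOL = 96485.33   # Faraday constant, C/mol of electrons

def faradaic_efficiency(mol_product, electrons_per_mol, charge_c):
    """Fraction of the total charge that formed a given product.
    CO2 -> HCOOH (formic acid) and CO2 -> CO each take 2 electrons;
    H2 from the competing water reduction also takes 2."""
    return mol_product * electrons_per_mol * FARADAY_C_PER_MOL / charge_c

# example: 1 mmol of formic acid from ~241.2 C of total charge
q = 0.001 * 2 * FARADAY_C_PER_MOL / 0.8
fe = faradaic_efficiency(0.001, 2, q)   # 80% selective to formate
```

Summing the Faradaic efficiencies of all detected products close to 100% is also the standard consistency check on the product analysis.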
Abstract:
Machine (and deep) learning technologies are increasingly present in many fields. It is undeniable that many aspects of our society are empowered by such technologies: web searches, content filtering on social networks, recommendations on e-commerce websites, mobile applications, etc., in addition to academic research. Moreover, mobile devices and internet sites, e.g., social networks, support the collection and sharing of information in real time. The pervasive deployment of these technological instruments, both hardware and software, has led to the production of huge amounts of data. Such data have become increasingly unmanageable, posing challenges to conventional computing platforms and paving the way for the development and widespread use of machine and deep learning. Nevertheless, machine learning is not only a technology. Given a task, machine learning is a way of proceeding (a way of thinking), and as such it can be approached from different perspectives (points of view). This, in particular, is the focus of this research. The entire work concentrates on machine learning, starting from different sources of data, e.g., signals and images, applied to different domains, e.g., Sport Science and Social History, and analyzed from different perspectives: from a non-data-scientist point of view through tools and platforms; setting up a problem from scratch; implementing an effective application for classification tasks; and improving the user experience through Data Visualization and eXtended Reality. In essence, not only in quantitative tasks, not only in scientific environments, and not only from a data scientist's perspective, machine (and deep) learning can make a difference.
Abstract:
In rural and isolated areas without cellular coverage, Satellite Communication (SatCom) is the best candidate to complement terrestrial coverage. However, the main challenge for future generations of wireless networks will be to meet the growing demand for new services while dealing with the scarcity of frequency spectrum. As a result, it is critical to investigate more efficient methods of utilizing the limited bandwidth, and resource sharing is likely the only choice. The research community's focus has recently shifted towards the interference management and exploitation paradigm to meet increasing data traffic demands. In the Downlink (DL) and Feedspace (FS), LEO satellites with an on-board antenna array can serve numerous User Terminals (UTs) on the ground (VSATs or handhelds) in FFR schemes by using cutting-edge digital beamforming techniques. In this setup, the adoption of an effective user scheduling approach is critical, given the unusually high density of user terminals on the ground compared to the number of available on-board satellite antennas. In this context, one possibility is to exploit clustering algorithms for scheduling in LEO MU-MIMO systems, in which several users within the same group are served simultaneously by the satellite via Space Division Multiplexing (SDM), and the different user groups are then served in different time slots via Time Division Multiplexing (TDM). This thesis addresses this problem by formulating user scheduling as an optimization problem and discusses several algorithms to solve it. In particular, focusing on the FS and the user service link (i.e., DL) of a single MB-LEO satellite operating below 6 GHz, the user scheduling problem in Frequency Division Duplex (FDD) mode is addressed. The proposed scheduling approaches are based on graph theory.
The proposed solution offers high performance in terms of per-user capacity, sum-rate capacity, SINR, and spectral efficiency.
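A clustering scheduler of the kind discussed above can be sketched as a greedy grouping by channel correlation. This is a simplified illustration (the threshold, names, and greedy rule are assumptions, not the thesis's actual graph-theoretic algorithms):

```python
import numpy as np

def correlation(h1, h2):
    """Normalized channel correlation between two users."""
    return abs(np.vdot(h1, h2)) / (np.linalg.norm(h1) * np.linalg.norm(h2))

def greedy_schedule(channels, group_size, max_corr=0.5):
    """Greedy grouping on the user 'similarity graph': users whose
    channels are nearly orthogonal share a slot (SDM), and the
    resulting groups are served in successive slots (TDM)."""
    unscheduled = list(range(len(channels)))
    groups = []
    while unscheduled:
        group = [unscheduled.pop(0)]
        for u in list(unscheduled):
            if len(group) >= group_size:
                break
            if all(correlation(channels[u], channels[v]) < max_corr
                   for v in group):
                group.append(u)
                unscheduled.remove(u)
        groups.append(group)
    return groups

# users 0, 1, 3 have mutually orthogonal channels and share one slot;
# user 2 is too correlated with user 0 and is deferred to the next slot
H = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
     np.array([0.9, 0.1, 0.0]), np.array([0.0, 0.0, 1.0])]
groups = greedy_schedule(H, group_size=3)
```

Grouping low-correlation users keeps the multi-user interference seen by the beamformer small, which is what drives the per-user SINR and sum-rate gains reported above.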
Abstract:
Frame. Assessing the difficulty of source texts and parts thereof is important in CTIS, whether for research comparability, for didactic purposes, or for setting price differences in the market. In order to measure difficulty empirically, Campbell & Hale (1999) and Campbell (2000) developed the Choice Network Analysis (CNA) framework. The CNA's main hypothesis is that the more translation options (a group of) translators have to render a given source-text stretch, the higher the difficulty of that stretch. We will call this the CNA hypothesis. In a nutshell, this research project puts the CNA hypothesis to the test and studies whether it actually measures difficulty. Data collection. Two groups of participants (n=29) with different profiles, from two universities in different countries, had three translation tasks keylogged with Inputlog and filled in pre- and post-translation questionnaires. Participants translated from English (L2) into their L1s (Spanish or Italian) and worked using their own computers, first in class and then at home, on texts ca. 800–1000 words long. Each text was translated in approximately equal halves in two 1-hour sessions, in three consecutive weeks. Only the parts translated at home were considered in the study. Results. A very different picture emerged from the data than the CNA hypothesis would predict: there was no prevalence of disfluent task segments where there were many translation options, nor a prevalence of fluent task segments associated with fewer translation options. Indeed, there was no correlation between the number of translation options (many vs. few) and behavioral fluency. Additionally, there was no correlation between pauses and either behavioral fluency or typing speed. The theoretical flaws discussed and the empirical evidence lead to the conclusion that the CNA framework does not and cannot measure text and translation difficulty.
Abstract:
BRCA1 and BRCA2 are the most frequently mutated genes in ovarian cancer (OC) and are crucial both for identifying cancer predisposition and for therapeutic choices. However, germline variants in other genes could be involved in OC susceptibility. We characterized OC patients to detect mutations in genes other than BRCA1/2 that could be associated with a high risk of developing OC and that could allow patients to enter the most appropriate treatment and surveillance programs. Next-generation sequencing analysis with a 94-gene panel was performed on germline DNA from 219 OC patients. We identified 34 pathogenic/likely pathogenic variants in BRCA1/2 and 38 in 21 other genes. Patients with pathogenic/likely pathogenic variants in non-BRCA1/2 genes mainly developed OC alone, compared to the other groups, which also developed breast cancer or other tumors (p=0.001). Clinical correlation analysis showed that low-risk patients were significantly associated with platinum sensitivity (p<0.001). Regarding the response to PARP inhibitors (PARPi), patients with pathogenic mutations in non-BRCA1/2 genes had significantly worse PFS and OS. Moreover, a statistically significant worsening of PFS was found for every increase of one thousand platelets before PARPi treatment. In conclusion, knowledge about molecular alterations in genes beyond BRCA1/2 in OC could allow more personalized diagnostic, predictive, prognostic, and therapeutic strategies for OC patients.
Abstract:
In medicine, innovation depends on a better knowledge of the mechanisms of the human body, which is a complex system of multi-scale constituents. Unraveling the complexity underlying diseases proves challenging, and a deep understanding of their inner workings requires dealing with much heterogeneous information. Exploring the molecular status and the organization of genes, proteins, and metabolites provides insights into what drives a disease, from aggressiveness to curability. Molecular constituents, however, are only the building blocks of the human body and cannot currently tell the whole story of diseases. This is why attention is now growing towards the simultaneous exploitation of multi-scale information, and holistic methods are drawing interest to address the problem of integrating heterogeneous data. The heterogeneity may derive from the diversity across data types and from the diversity within diseases. Here, four studies conducted data integration using custom-designed workflows that implement novel methods and views to tackle the heterogeneous characterization of diseases. The first study was devoted to determining shared gene regulatory signatures for onco-hematology and showed partial co-regulation across blood-related diseases. The second study focused on Acute Myeloid Leukemia and refined the unsupervised integration of genomic alterations, which turned out to better resemble clinical practice. In the third study, network integration for atherosclerosis demonstrated, as a proof of concept, the impact of network intelligibility when modeling heterogeneous data, which was shown to accelerate the identification of new potential pharmaceutical targets. Lastly, the fourth study introduced a new method to integrate multiple data types in a single latent heterogeneous representation, which facilitated the selection of the data types most important for predicting the tumour stage of invasive ductal carcinoma.
The results of these four studies laid the groundwork for easing the detection of new biomarkers that are ultimately beneficial to medical practice and to the ever-growing field of Personalized Medicine.