14 results for Non-convex optimization
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by several factors: (i) the performance of solvers handling non-linear constraints has improved considerably; (ii) the awareness that most real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. It is well known that MINLP problems are NP-hard because they generalize MILP problems, which are NP-hard themselves. However, MINLPs are, in general, also hard to solve in practice. We address non-convex MINLPs, i.e. those having non-convex continuous relaxations: the presence of non-convexities in the model usually makes these problems even harder to solve. The aim of this Ph.D. thesis is to give a flavor of the different approaches one can take to attack MINLP problems with non-convexities, with special attention to real-world problems. In Part 1 of the thesis we introduce the problem and present three special cases of general MINLPs together with the most common methods used to solve them; these techniques play a fundamental role in the solution of general MINLP problems. We then describe algorithms addressing general MINLPs. Parts 2 and 3 contain the main contributions of the Ph.D. thesis. In particular, Part 2 presents four different methods aimed at solving different classes of MINLP problems. Part 3 of the thesis is devoted to real-world applications: two different problems and approaches to MINLPs are presented, namely Scheduling and Unit Commitment for Hydro-Plants and Water Network Design problems. The results show that each of these methods has advantages and disadvantages; thus, the method adopted to solve a real-world problem should typically be tailored to the characteristics, structure and size of the problem. Part 4 of the thesis consists of a brief review of the tools commonly used for general MINLP problems, which constituted an integral part of the development of this Ph.D. thesis (especially the use and development of open-source software). We present the main characteristics of solvers for each special case of MINLP.
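For reference, the problem class discussed in this abstract can be written in the standard textbook form (a generic template, not a model taken from the thesis):

```latex
\min_{x,\,y} \; f(x, y)
\quad \text{s.t.} \quad g_j(x, y) \le 0, \quad j = 1, \dots, p,
\qquad x \in \mathbb{Z}^n, \; y \in \mathbb{R}^m .
```

The instance is a non-convex MINLP whenever f or some g_j is non-convex in the continuous variables, so that even the continuous relaxation obtained by dropping the integrality requirement on x is itself a non-convex NLP.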
Abstract:
Wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to ensure high numerical efficiency (so that it can be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. Contact models can be subdivided into two categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear differential-algebraic equations associated with the constrained multibody system. Indentation between the bodies is not permitted and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories are used to evaluate the shape of the contact patch and the tangential forces, respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies; however, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry). The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories are used to evaluate the shape of the contact patch and the tangential forces. Both multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed. In other words, wheel and rail have to be considered elastic bodies governed by Navier's equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case, and many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach ensures high generality and accuracy but entails very large computational costs and memory consumption. Owing to this computational load, and referring to the current state of the art, the integration between multibody and differential modeling is almost absent from the literature, especially in the railway field.
However, this integration is very important, because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. This thesis describes some innovative wheel-rail contact models developed during the Ph.D. activity. Concerning the global models, two new models belonging to the semi-elastic approach will be presented; they satisfy the following specifications: 1) they are 3D and consider all six relative degrees of freedom between wheel and rail; 2) they handle generic railway tracks and generic wheel and rail profiles; 3) they ensure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, they evaluate the number and position of the contact points and, for each point, the contact forces and torques; 4) they can be implemented directly online within multibody models without look-up tables; 5) they ensure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications; 6) they are compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models concerns the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This kind of reduction yields a numerical efficiency high enough to make the online implementation of the new procedure possible and to achieve performance comparable with that of commercial multibody software; at the same time, the analytical approach ensures high accuracy and generality. Concerning the local (or differential) contact models, one new model satisfying the following specifications will be presented: 1) it is 3D and considers all six relative degrees of freedom between wheel and rail; 2) it handles generic railway tracks and generic wheel and rail profiles; 3) it ensures a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, it is able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) it can be implemented directly online within multibody models; 5) it ensures high numerical efficiency and reduced memory consumption, in order to achieve a good integration between multibody and differential modeling (the basis of the local contact models); 6) it is compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case the most innovative aspects of the new local contact model concern the contact modeling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem.
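For reference, the continuum problem underlying the local models is classical linear elastostatics: wheel and rail are treated as elastic bodies whose displacement field u satisfies Navier's equations (a textbook statement of the equations named above, not the thesis' full formulation):

```latex
\mu \, \nabla^2 \mathbf{u} + (\lambda + \mu) \, \nabla (\nabla \cdot \mathbf{u}) + \mathbf{f} = \mathbf{0} ,
```

with λ and μ the Lamé constants and f the body force, complemented on the candidate contact surface by unilateral contact conditions (non-penetration, compressive normal traction) and a friction law for the tangential tractions; discretizing this boundary value problem yields the discrete problem mentioned above.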
Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important for a good integration between multibody and differential modeling. The contact models were then inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as benchmark is the Manchester Wagon, whose physical and geometrical characteristics are readily available in the literature. The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while the contact models have been implemented using C S-functions, a Matlab architecture that allows the Matlab/Simulink and C/C++ environments to be connected efficiently. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) has also been implemented in Simpack Rail, a widely tested and validated commercial multibody software package for railway vehicles. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained with the Matlab/Simulink model and those obtained with the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. In conclusion to this brief introduction to the Ph.D. thesis, we would like to thank Trenitalia and the Regione Toscana for the support provided throughout the Ph.D. activity. We would also like to thank INTEC GmbH, the company that develops the software Simpack Rail, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable, subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach represents a substantial improvement over the state of the art, making it a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. This framework was then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem, for which we design a distributed method inspired by the approach used for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
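The constraint-coupled structure referred to above can be written in a standard template form (generic notation, not lifted from the thesis):

```latex
\min_{x_1, \dots, x_N} \; \sum_{i=1}^{N} c_i(x_i)
\quad \text{s.t.} \quad x_i \in X_i, \;\; i = 1, \dots, N,
\qquad \sum_{i=1}^{N} A_i x_i \le b .
```

Each agent i owns only its local variable x_i, cost c_i and constraint set X_i, and the agents interact solely through the shared resource constraint. In the usual primal decomposition scheme, that coupling constraint is replaced by local allocations y_i with sum_i y_i = b, so each agent solves an independent subproblem min { c_i(x_i) : x_i in X_i, A_i x_i <= y_i } while the allocations are negotiated over the communication graph.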
Abstract:
The aspartic protease BACE1 (β-amyloid precursor protein cleaving enzyme, β-secretase) is recognized as one of the most promising targets in the treatment of Alzheimer's disease (AD). The accumulation of β-amyloid peptide (Aβ) in the brain is a major factor in the pathogenesis of AD. Aβ is formed by the initial cleavage of β-amyloid precursor protein (APP) by β-secretase; BACE1 inhibition therefore represents one of the therapeutic approaches to control the progression of AD, by preventing the abnormal generation of Aβ. For this reason, in the last decade many research efforts have focused on the identification of new BACE1 inhibitors as drug candidates. Generally, BACE1 inhibitors are grouped into two families: substrate-based inhibitors, designed as peptidomimetic inhibitors, and non-peptidomimetic ones. Research on non-peptidomimetic small-molecule BACE1 inhibitors remains the most interesting approach, since these compounds exhibit improved bioavailability after systemic administration, owing to better blood-brain barrier permeability in comparison with peptidomimetic inhibitors. Very recently, our research group discovered a new promising lead compound for the treatment of AD, named lipocrine, a hybrid between lipoic acid and the AChE inhibitor (AChEI) tacrine, characterized by a tetrahydroacridine moiety. Lipocrine is one of the first compounds able to inhibit the catalytic activity of AChE and AChE-induced amyloid-β aggregation and to protect against reactive oxygen species. Owing to this interesting profile, lipocrine was also evaluated for BACE1 inhibitory activity, proving to be a potent lead compound for BACE1 inhibition. Starting from this profile, a series of tetrahydroacridine analogues was synthesized, varying the chain length between the two fragments. Moreover, following the approach of combining two different pharmacophores in a single molecule, we designed and synthesized different compounds bearing the moieties of known AChEIs (rivastigmine and caproctamine) coupled with lipoic acid, since the dithiolane group had been shown to be an important structural feature of lipocrine for the optimal inhibition of BACE1. All the tetrahydroacridine-, rivastigmine- and caproctamine-based compounds were evaluated for BACE1 inhibitory activity in a FRET (fluorescence resonance energy transfer) enzymatic assay (test A). With the aim of enhancing the biological activity of the lead compound, we applied a molecular simplification approach to design and synthesize novel heterocyclic compounds related to lipocrine, in which the tetrahydroacridine moiety was replaced by 4-amino-quinoline or 4-amino-quinazoline rings. All the synthesized compounds were also evaluated in a modified FRET enzymatic assay (test B), in which the fluorescent substrate for enzymatic BACE1 cleavage was changed. This test method guided in-depth structure-activity relationship studies of BACE1 inhibition by the most promising quinazoline-based derivatives. By varying the substituent at the 2-position of the quinazoline ring and by replacing the lipoic acid residue in the lateral chain with different moieties (e.g. trans-ferulic acid, a known antioxidant), a series of quinazoline derivatives was obtained. In order to confirm the inhibitory activity of the most active compounds, they were evaluated with a third FRET assay (test C) which, surprisingly, did not confirm the previous good activity profiles.
An evaluation of the kinetic parameters of the three assays revealed that method C offers the best specificity and enzymatic efficiency. Biological evaluation of the modified 2,4-diamino-quinazoline derivatives by method C led to a new lead compound, bearing the trans-ferulic acid residue coupled to the 2,4-diamino-quinazoline core, endowed with good BACE1 inhibitory activity (IC50 = 0.8 μM). We report on the variability of the results in the three different FRET assays, which are known to suffer from interference rates that depend strongly on compound properties. The observed variability could also be ascribed to the different enzyme origin, the varied substrate and the different fluorescent groups. Inhibitors should be tested in a parallel screening in order to obtain more reliable data before being tested in cellular assays. With this aim, a preliminary cellular BACE1 inhibition assay carried out on lipocrine confirmed a good cellular activity profile (EC50 = 3.7 μM), strengthening the idea of finding a small-molecule non-peptidomimetic compound as BACE1 inhibitor. In conclusion, the present study identified a new lead compound endowed with BACE1 inhibitory activity in the submicromolar range. Further optimization of the obtained derivative is needed in order to obtain a more potent and selective BACE1 inhibitor based on the 2,4-diamino-quinazoline scaffold. A side project related to the synthesis of novel enzymatic inhibitors of BACE1, exploring the chemistry of pseudopeptidic transition-state isosteres, was carried out during a research stage in Hanessian's group at the Université de Montréal (Canada). The aim of this work was the synthesis of the δ-aminocyclohexane carboxylic acid motif with stereochemically defined substitution, in order to incorporate such a constrained core into potential BACE1 inhibitors. This fragment, endowed with reduced peptidic character, is not known in the context of peptidomimetic design. In particular, we envisioned an alternative route based on the organocatalytic asymmetric conjugate addition of nitroalkanes to cyclohexenone in the presence of D-proline and trans-2,5-dimethylpiperazine. The enantioenriched 3-(α-nitroalkyl)-cyclohexanones obtained were further functionalized to give the corresponding δ-nitroalkyl cyclohexane carboxylic acids, and these intermediates were elaborated to the target 3-(α-aminoalkyl)-1-cyclohexane carboxylic acids in a new, readily accessible way.
Abstract:
The research activity described in this thesis focuses mainly on finite-element techniques applied to the thermo-fluid dynamic problems of plant components, and on dynamic simulation techniques applied to integrated building design, with the aim of enhancing the energy performance of buildings. The first part of the thesis is a broad dissertation on the second-law analysis of thermodynamic processes, with the purpose of placing the issue of the energy efficiency of buildings within a wider cultural context that is usually not considered by professionals in the energy sector. In particular, the first chapter includes a rigorous scheme for the deduction of the expressions for the molar exergy and the molar flow exergy of pure chemical fuels. The study shows that molar exergy and molar flow exergy coincide when the temperature and pressure of the fuel are equal to those of the environment in which the combustion reaction takes place. A simple method to determine the Gibbs free energy for non-standard values of the temperature and pressure of the environment is then presented. For hydrogen, carbon dioxide and several hydrocarbons, the dependence of the molar exergy on the temperature and relative humidity of the environment is reported, together with an evaluation of the molar exergy and molar flow exergy when the temperature and pressure of the fuel differ from those of the environment. As an application of second-law analysis, a comparison of the thermodynamic efficiency of a condensing boiler and of a heat pump is also reported. The second chapter presents a study of borehole heat exchangers, i.e. polyethylene piping networks buried in the soil which allow a ground-coupled heat pump to exchange heat with the ground. After a brief overview of low-enthalpy geothermal plants, an apparatus designed and assembled by the author to carry out thermal response tests is presented. Data obtained by means of in situ thermal response tests are reported and evaluated by means of a finite-element simulation method implemented in the software package COMSOL Multiphysics. The simulation method allows the precise determination of the effective thermal properties of the ground and of the grout, which are essential for the design of borehole heat exchangers. Moving from a single plant component, namely the borehole heat exchanger, the third chapter presents a complete plant design process for a zero-carbon building complex. The plant is composed of: 1) a ground-coupled heat pump system for space heating and cooling, with electricity supplied by photovoltaic solar collectors; 2) air dehumidifiers; 3) thermal solar collectors to match 70% of the domestic hot water energy use, and a wood pellet boiler for the remaining domestic hot water energy use and for exceptional winter peaks. The chapter describes the design methodology adopted: 1) dynamic simulation of the building complex with the software package TRNSYS to evaluate its energy requirements; 2) modelling of the ground-coupled heat pumps by means of TRNSYS; and 3) evaluation of the total length of the borehole heat exchanger by an iterative method developed by the author. An economic feasibility study and an exergy analysis of the proposed plant, compared with two other plants, are reported. The exergy analysis was performed by considering the embodied energy of the components of each plant and the exergy losses during the operation of the plants.
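For context, a textbook expression for the molar chemical exergy of a fuel at the environmental temperature T_0 and pressure p_0, consistent with the kind of deduction described above (the standard relation, not a formula quoted from the thesis):

```latex
\bar{e}^{\,\mathrm{ch}} = -\,\Delta_r G(T_0, p_0)
+ \bar{R}\, T_0 \, \ln \frac{\prod_{\mathrm{reactants}} \left( y_i^{\mathrm{e}} \right)^{\nu_i}}{\prod_{\mathrm{products}} \left( y_j^{\mathrm{e}} \right)^{\nu_j}} ,
```

where Δ_rG is the Gibbs free energy of the combustion reaction, ν are the stoichiometric coefficients of the gaseous co-reactants (e.g. O2) and products, and y^e are their mole fractions in the environment; the logarithmic term accounts for the work associated with bringing those species to their environmental partial pressures.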
Abstract:
The research activities described in this thesis concern the design and development of components and technological processes aimed at optimizing the performance of plasma sources in advanced material treatments. Consumable components for high-definition plasma arc cutting (PAC) torches were studied and developed. Experimental activities focused in particular on modifications of the emissive insert with respect to the standard electrode configuration, which comprises a press-fit hafnium insert in a copper body holder, in order to improve its durability. Based on a thorough analysis of both the scientific and the patent literature, different solutions were proposed and tested. First, the behaviour of Hf cathodes operating at high current levels (250 A) in an oxidizing atmosphere was experimentally investigated, optimizing the initial shape of the electrode emissive surface with respect to the expected service life. Moreover, the microstructural modifications of the Hf insert in PAC electrodes during the first cycles were experimentally investigated, in order to understand the phenomena occurring on and under the Hf emissive surface and involved in the electrode erosion process. Thereafter, the research activity focused on producing, characterizing and testing prototypes of composite inserts combining powders of materials with high thermal conductivity (Cu, Ag) and high thermionic emissivity (Hf, Zr). The complexity of the thermal plasma torch environment required an integrated approach also involving physical modelling. Accordingly, a detailed line-by-line method was developed to compute the net emission coefficient of Ar plasmas at temperatures ranging from 3000 K to 25000 K and pressures ranging from 50 kPa to 200 kPa, for optically thin and partially self-absorbed plasmas. Finally, prototype electrodes were studied and realized for a newly developed plasma source, based on the plasma needle concept and devoted to the generation of atmospheric-pressure non-thermal plasmas for biomedical applications.
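For reference, the net emission coefficient mentioned above is conventionally defined, following Lowke's classical formulation for an isothermal plasma sphere of radius R (the standard definition, not the specific implementation developed in the thesis):

```latex
\varepsilon_N(T, R) = \int_0^{\infty} \kappa'_{\lambda}(T) \, B_{\lambda}(T) \, e^{-\kappa'_{\lambda}(T)\, R} \, \mathrm{d}\lambda ,
```

where B_λ(T) is the Planck function, κ'_λ is the absorption coefficient corrected for stimulated emission, and R accounts for partial self-absorption (R = 0 recovers the optically thin limit); a line-by-line method evaluates the integrand wavelength by wavelength over all relevant atomic lines and continua.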
Abstract:
This study focuses on the radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First the state of the art of nanoparticle production through conventional and plasma routes is summarized; then results on the characterization of the plasma source and on the investigation of the nanoparticle synthesis phenomenon are presented, aimed at highlighting the fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, employing a calorimetric method, is presented, while results of three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy probe measurements to validate the temperature field predicted by the model, which is used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of critical phases of the ICP synthesis process, such as precursor evaporation, vapour conversion into nanoparticles and nanoparticle growth, are presented, with the aim of providing useful insights both for the design and optimization of the process and into the underlying physical phenomena. Indeed, precursor evaporation, one of the phases with the highest impact on the industrial feasibility of the process, is discussed: by employing models describing particle trajectories and thermal histories, adapted from those originally developed for other plasma technologies or applications such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized solid Si precursor in a laboratory-scale ICP system is investigated. Finally, a discussion on the role of the thermo-fluid dynamic fields in nanoparticle formation is presented, as well as a study of the effect of the reaction chamber geometry on the characteristics of the produced nanoparticles and on process yield.
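As a minimal sketch of the calorimetric method mentioned above (generic relations for water-cooled components; the symbols are illustrative assumptions, not the thesis' notation), the power removed by each cooling circuit follows from the water flow rate and temperature rise, and a coupling efficiency follows from the input power balance:

```latex
P_{\mathrm{cool}} = \dot{m}_w \, c_{p,w} \left( T_{\mathrm{out}} - T_{\mathrm{in}} \right),
\qquad
\eta = \frac{P_{\mathrm{in}} - \sum_k P_{\mathrm{cool},k}}{P_{\mathrm{in}}} ,
```

where ṁ_w is the cooling-water mass flow rate, c_{p,w} its specific heat, and the sum runs over the cooled components (torch body, reaction chamber walls, etc.).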
Abstract:
Thermal effects are rapidly gaining importance in nanometer-scale heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at a detailed spatial resolution, which is computationally very intensive. Consequently, the thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the flow of chip/package thermal analysis, we exploit the Intel Single-Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature sensor readings and of the SCC power consumption. With a thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows, and accounts for temperature non-uniformities and self-heating while performing the analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models in detail the performance, power and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data-sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs with dynamic address remapping capability is built and verified on real hardware.
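To illustrate the idea of temperature-aware per-bank refresh, here is a hedged sketch built on the common rule of thumb that DRAM retention time roughly halves for every ~10 °C of temperature increase; the constants and the function are illustrative assumptions, not the thesis' implementation:

```python
# Illustrative sketch (not the thesis code): pick a per-bank DRAM refresh
# period from the bank's peak temperature. Retention time roughly halves
# for every ~10 C increase, so cooler banks can be refreshed less often.
BASE_PERIOD_MS = 64.0   # assumed baseline refresh period at the reference temp
T_REF_C = 85.0          # assumed reference temperature for the baseline period

def refresh_period_ms(bank_temp_c: float) -> float:
    """Return a retention-aware refresh period for one DRAM bank."""
    # Scale the baseline by 2^((T_ref - T)/10): longer periods for cooler
    # banks, shorter periods for hotter ones.
    return BASE_PERIOD_MS * 2.0 ** ((T_REF_C - bank_temp_c) / 10.0)

# Example: per-bank peak temperatures from a thermal simulation of the stack.
for bank, temp in enumerate([68.0, 77.5, 85.0, 93.0]):
    print(f"bank {bank}: {temp:5.1f} C -> refresh every "
          f"{refresh_period_ms(temp):6.1f} ms")
```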
Abstract:
The use of atmospheric-pressure plasmas for thin film deposition on thermo-sensitive materials is currently one of the main challenges for the plasma scientific community. Despite the growing interest in this field, the knowledge gap between gas-phase reaction mechanisms and thin film properties is still one of the most important barriers to overcome for a complete understanding of the process. In this work, thin-film surface characterization techniques, combined with passive and active gas-phase diagnostic methods, were used to provide a comprehensive study of the Ar/TEOS deposition process assisted by an atmospheric-pressure plasma jet. SiO2-based thin films exhibiting a well-defined chemistry, a good morphological structure and high uniformity were studied in detail by FTIR, XPS, AFM and SEM analysis. Furthermore, non-intrusive spectroscopy techniques (OES, filter imaging) and laser spectroscopic methods (Rayleigh scattering, LIF and TALIF) were employed to shed light on the complexity of the gas-phase mechanisms involved in the deposition process and to discuss the influence of the TEOS admixture on gas temperature, electron density and the spatio-temporal behaviour of active species. The poly-diagnostic approach proposed in this work opens interesting perspectives both in terms of process control and for the optimization of thin film performance.
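For reference, one standard way Rayleigh scattering yields the gas temperature at constant (atmospheric) pressure is through the ideal-gas proportionality between the scattered signal and the gas number density (the usual relation, not the thesis' specific calibration):

```latex
I_{\mathrm{Ray}} \propto n = \frac{p}{k_B T}
\quad \Longrightarrow \quad
T_{\mathrm{gas}} = T_{\mathrm{ref}} \, \frac{I_{\mathrm{ref}}}{I_{\mathrm{Ray}}} ,
```

where I_ref is the signal measured in a reference gas at the known temperature T_ref; if the gas composition changes (e.g. with the TEOS admixture), the effective scattering cross-section must be corrected accordingly.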
Abstract:
Since the last century, the rising interest in value-added and advanced functional materials has spurred a ceaseless development of industrial processes and applications. Among the emerging technologies, thanks to their unique features and their versatility in terms of supported processes, non-equilibrium plasma discharges appear as a key solvent-free, high-throughput and cost-efficient technique. Nevertheless, applied research studies are needed to address plasma potentialities, optimizing devices and processes for future industrial applications. In this framework, this dissertation reports on the activities carried out and the results achieved concerning the development and optimization of plasma techniques for nanomaterial synthesis and processing to be applied in the biomedical field. In the first section, the design and investigation of a plasma-assisted process for the production of silver (Ag) nanostructured multilayer coatings exhibiting anti-biofilm and anti-clot properties is described. With the aim of enabling in-situ and on-demand deposition of Ag nanoparticles (NPs), the optimization of a continuous in-flight aerosol process for particle synthesis is reported. The stability and promising biological performance of the deposited coatings spurred further investigation through in-vitro and in-vivo tests, whose results are reported and discussed. With the aim of answering the open questions and tuning NP functionalities, the second section concerns the study of silver-containing droplet conversion in a flow-through plasma reactor. The presented results, obtained by combining different analysis techniques, support a formation mechanism based on droplet-to-particle conversion driven by plasma-induced precursor reduction. Finally, the third section deals with the development of a combined simulative and experimental approach used to investigate in-situ droplet evaporation inside the plasma discharge, addressing the main contributions to liquid evaporation in the perspective of industrial scale-up of the process.
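A classical reference point for the droplet evaporation modelling mentioned above is the d²-law for quasi-steady evaporation of a single spherical droplet (a standard idealization, not necessarily the model adopted in the thesis):

```latex
d^2(t) = d_0^2 - K\, t ,
```

where d is the droplet diameter, d_0 its initial value and K the evaporation constant, which lumps together the gas temperature, transport properties and vapour concentration; in a plasma discharge, the plasma heating enters through K.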
Abstract:
Reinforcement Learning (RL) provides a powerful framework to address sequential decision-making problems in which the transition dynamics is unknown or too complex to be represented. The RL approach is based on speculating what is the best decision to make given sample estimates obtained from previous interactions, a recipe that has led to several breakthroughs in various domains, ranging from game playing to robotics. Despite their success, current RL methods hardly generalize from one task to another, and achieving the kind of generalization obtained through unsupervised pre-training in non-sequential problems seems unthinkable. Unsupervised RL has recently emerged as a way to improve the generalization of RL methods. Just like its non-sequential counterpart, the unsupervised RL framework comprises two phases: an unsupervised pre-training phase, in which the agent interacts with the environment without external feedback, and a supervised fine-tuning phase, in which the agent aims to efficiently solve a task in the same environment by exploiting the knowledge acquired during pre-training. In this thesis, we study unsupervised RL via state entropy maximization, in which the agent makes use of the unsupervised interactions to pre-train a policy that maximizes the entropy of its induced state distribution. First, we provide a theoretical characterization of the learning problem by considering a convex RL formulation that subsumes state entropy maximization. Our analysis shows that maximizing the state entropy in finite trials is inherently harder than RL. Then, we study the state entropy maximization problem from an optimization perspective. In particular, we show that the primal formulation of the corresponding optimization problem can be (approximately) addressed through tractable linear programs. Finally, we provide the first practical methodologies for state entropy maximization in complex domains, both when the pre-training takes place in a single environment and when it spans multiple environments.
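In symbols, the pre-training objective studied here is (standard notation for the problem named in the abstract, not a formula quoted from the thesis):

```latex
\max_{\pi} \; H\!\left( d^{\pi} \right) = -\sum_{s \in \mathcal{S}} d^{\pi}(s) \, \log d^{\pi}(s) ,
```

where d^π is the state distribution induced by the policy π (e.g. its average state visitation over a finite horizon). This is an instance of convex RL: the objective is a concave function of d^π, rather than the linear function Σ_s d^π(s) r(s) of standard RL, which is what makes the finite-trials problem harder.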
Abstract:
This thesis deals with the analysis and management of emergency healthcare processes through the use of advanced analytics and optimization approaches. Emergency processes are among the most complex in healthcare, owing to their non-elective nature and their high variability. The thesis is divided into two topics. The first concerns the core of emergency healthcare processes, the emergency department (ED). In the second chapter, we describe the ED that serves as the case study: a real case study with data derived from a large ED located in northern Italy. In the following two chapters, we introduce two tools for supporting ED activities. The first is a new type of analytics model, whose aim is to overcome traditional methods of analyzing the activities carried out in the ED by means of an algorithm that analyses the ED pathway (organized as an event log) as a whole. The second tool is a decision-support system, which integrates a deep neural network for the prediction of patient pathways with an online simulator to evaluate the evolution of the ED over time. Its purpose is to provide a set of solutions to prevent and resolve ED overcrowding. The second part of the thesis focuses on the COVID-19 pandemic emergency. In the fifth chapter, we describe a tool that was used by the Bologna local health authority in the first phase of the pandemic. Its purpose is to analyze the clinical pathway of a patient and automatically assign them a state; physicians used this state to route patients to the correct clinical pathways. The last chapter is dedicated to the description of a MIP model, which was used for the organization of the COVID-19 vaccination campaign in the city of Bologna, Italy.
Abstract:
In rural and isolated areas without cellular coverage, Satellite Communication (SatCom) is the best candidate to complement terrestrial coverage. However, the main challenge for future generations of wireless networks will be to meet the growing demand for new services while dealing with the scarcity of frequency spectrum. As a result, it is critical to investigate more efficient methods of utilizing the limited bandwidth, and resource sharing is likely the only choice. The research community's focus has recently shifted towards the interference management and exploitation paradigm to meet the increasing data traffic demands. In the downlink (DL) and feed space (FS), LEO satellites with an on-board antenna array can offer service to numerous User Terminals (UTs) on the ground (VSATs or handhelds) in full frequency reuse (FFR) schemes by using cutting-edge digital beamforming techniques. In this setup, the adoption of an effective user scheduling approach is a critical aspect, given the unusually high density of user terminals on the ground compared to the number of available on-board satellite antennas. In this context, one possibility is to exploit clustering algorithms for scheduling in LEO MU-MIMO systems, in which several users within the same group are simultaneously served by the satellite via Space Division Multiplexing (SDM), and the different user groups are then served in different time slots via Time Division Multiplexing (TDM). This thesis formulates user scheduling as an optimization problem and discusses several algorithms to solve it. In particular, focusing on the FS and the user service link (i.e., the DL) of a single MB-LEO satellite operating below 6 GHz, the user scheduling problem in Frequency Division Duplex (FDD) mode is addressed. The proposed scheduling approaches, which advance the state of the art, are based on graph theory. The proposed solution offers high performance in terms of per-user capacity, sum-rate capacity, SINR and spectral efficiency.
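As a toy illustration of graph-based user grouping (a hedged sketch, not the thesis' algorithm: the channel model, the correlation threshold and the greedy coloring strategy are all assumptions), mutually interfering users are connected in a conflict graph, and a graph coloring then assigns each color class, i.e. a set of users that may share a slot via SDM, to a different TDM slot:

```python
# Illustrative sketch: group users for SDM/TDM scheduling by coloring a
# "conflict" graph. Users whose channel vectors are strongly aligned would
# interfere if served in the same slot, so they receive an edge and end up
# in different color classes (time slots). All parameters are assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_users, n_feeds = 12, 4
# Random complex channel rows, normalized to unit norm (toy channel model).
H = rng.standard_normal((n_users, n_feeds)) \
    + 1j * rng.standard_normal((n_users, n_feeds))
H /= np.linalg.norm(H, axis=1, keepdims=True)

SIMILARITY_THRESHOLD = 0.5  # illustrative value
G = nx.Graph()
G.add_nodes_from(range(n_users))
for i in range(n_users):
    for j in range(i + 1, n_users):
        # |<h_i, h_j>| close to 1 means nearly collinear channels: conflict.
        if abs(np.vdot(H[i], H[j])) > SIMILARITY_THRESHOLD:
            G.add_edge(i, j)

# Each color class is an independent set of the conflict graph: one TDM slot
# whose users are mutually compatible and can be served together via SDM.
slot_of = nx.coloring.greedy_color(G, strategy="largest_first")
for t in range(max(slot_of.values()) + 1):
    print(f"slot {t}: users {sorted(u for u, s in slot_of.items() if s == t)}")
```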
Abstract:
Alpha-particle emitters, notably as used in 224Ra-DaRT, have emerged as effective tools for overcoming radiation resistance and providing targeted cancer therapy. These emitters cause DNA double-strand breaks, visualizable in human lymphocytes. The 224Ra-DaRT technique, which exploits the decay chain emanating from implanted seeds, extends the range of alpha-particle treatment, achieving complete tumor destruction while sparing healthy tissue. This thesis examines a biokinetic model, validated with patient data, and discusses a feasibility study on skin squamous cell carcinomas. The study reports a 75% complete tumor response rate, with 48% of patients experiencing acute grade 2 toxicity that resolved within a month. An observed abscopal effect (AE), in which tumor regression occurs at non-irradiated sites, is examined, highlighting DaRT's potential to trigger anti-tumor immune responses. This effect, coupled with DaRT's high linear energy transfer (LET), suggests its superiority over low-LET radiation in certain clinical scenarios. Improvements to DaRT, including the use of an external radio-opaque template for treatment planning, are explored; this advancement aids in determining the number of sources required for optimal tumor coverage, enhancing DaRT's safety. The thesis outlines a typical DaRT procedure, from tumor measurements to source assessment and administration, emphasizing the importance of precise seed positioning. Furthermore, the thesis discusses DaRT's potential in treating prostate cancer, a prevalent global health issue, by offering an alternative to traditional salvage therapies. DaRT seeds, delivering alpha-particle-based interstitial radiation, require precision in seed insertion due to their limited range in tissue. In conclusion, the thesis advocates for DaRT's role in treating solid tumors, emphasizing its improved radiobiological potency and potential benefits over beta- and gamma-source-based therapies. Ongoing studies are assessing DaRT's feasibility in treating various solid tumors, including pancreatic, breast, prostate and vulvar malignancies, suggesting a promising future in cancer treatment.