43 results for robust and stochastic optimization
Abstract:
The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of galaxies (spectral features, colours, morphological indices), and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its highly reliable determinations of redshifts and spectral properties, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), which exploits the bimodal properties of galaxies (spectral, photometric and morphological) separately and then combines the three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to define the galaxy population by exploiting its natural global bimodality, considering up to 8 different properties simultaneously. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows the classification of galaxies to be defined with smaller uncertainties and adds the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification, such as the classification cube presented in the first part of the article. The PCA+UFP method can easily be applied to different datasets: it does not rely on the nature of the data, and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two classification cluster definitions is very high. ``Early'' and ``late'' type galaxies are well defined by the spectral, photometric and morphological properties, both when these are considered separately and the classifications then combined (classification cube) and when they are treated as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of the transition mass $M_{\mathrm{cross}}$ is in good agreement with other values in the literature.
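As a rough illustration of the PCA+UFP pipeline, the sketch below projects standardized galaxy parameters onto their principal components and then applies a basic fuzzy c-means routine, which here stands in for the UFP clustering algorithm; the input array and all settings are synthetic placeholders, not the zCOSMOS data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)           # memberships sum to 1 per object
    for _ in range(n_iter):
        um = u ** m
        centres = um.T @ X / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

# galaxy_params: hypothetical (n_galaxies, 8) array of spectral, photometric
# and morphological measurements; random numbers stand in for real data
galaxy_params = np.random.default_rng(1).normal(size=(1000, 8))
X = StandardScaler().fit_transform(galaxy_params)  # standardize each property
pcs = PCA(n_components=3).fit_transform(X)         # project onto leading PCs
centres, u = fuzzy_cmeans(pcs, c=2)                # two clusters: early vs late
# each galaxy keeps a membership degree in [0, 1] instead of a hard label
```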
Abstract:
The general objective of this research is to explore theories and methodologies from the disciplines of sustainability indicators, environmental management and decision making, with the operational purpose of producing scientific, robust and relevant information to support system understanding and decision making in real case studies. Several tools have been applied in order to increase the understanding of socio-ecological systems as well as to provide relevant information on the choice between alternatives. These tools have always been applied keeping in mind the complexity of the issues and the uncertainty tied to the partial knowledge of the systems under study. Two case studies with specific application to performance measurement (environmental performance in the case of the K8 approach and sustainable development performance in the case of the EU Sustainable Development Strategy) and a case study about the selection of sustainable development indicators among Municipalities in Scotland are discussed in the first part of the work. In the second part of the work, the common denominator among the subjects is the application of spatial indices and indicators to address operational problems in land use management within the territory of the Ravenna province (Italy). The main conclusion of the thesis is that a ‘perfect’ methodological approach which always produces the best results in assessing sustainability performance does not exist. Rather, there is a pool of correct approaches answering different evaluation questions, to be used when the methodologies fit the purpose of the analysis. For this reason, methodological limits and conceptual assumptions, as well as the consistency and transparency of the assessment, become the key factors for assessing the quality of the analysis.
Abstract:
The present work consists of an investigation of the navigation of the Pioneer 10 and 11 probes, which became known as the “Pioneer Anomaly”: the trajectories followed by the spacecraft did not match those retrieved with standard navigation software. The mismatch appeared as a linear drift in the Doppler data received from the spacecraft, which has been ascribed to a constant sunward acceleration of about $8.5 \times 10^{-10}$ m/s$^2$. The study presented hereafter tries to find a convincing explanation for this discrepancy. The research is based on the analysis of Doppler tracking data through the ODP (Orbit Determination Program), developed by NASA/JPL. The method can be summarized as follows: search for any physics affecting the dynamics of the spacecraft or the propagation of radiometric data which may not have been properly taken into account previously, and check whether or not it might rule out the anomaly. A major effort has been put into building a thermal model of the spacecraft for predicting the force due to anisotropic thermal radiation, since this is a model not natively included in the ODP. Tracking data encompassing more than twenty years of the Pioneer 10 interplanetary cruise, plus twelve years of Pioneer 11, have been analyzed in light of the results of the thermal model. Different orbit determination strategies have been implemented, including single-arc, multi-arc and stochastic filters, and their performance compared. Orbital solutions have been obtained without the need for any acceleration other than the thermal recoil one, indicating it as responsible for the observed linear drift in the Doppler residuals. As further support, we checked that the inclusion of an additional constant acceleration does not improve the quality of the orbital solutions. All the tests performed lead to the conclusion that no anomalous acceleration is acting on the Pioneer spacecraft.
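As an order-of-magnitude check of the thermal recoil hypothesis (not the detailed thermal model developed in the work), the snippet below inverts the recoil relation $a = P/(mc)$ to find how much anisotropically emitted power would reproduce the reported drift; the spacecraft mass is an approximate assumed value.

```python
# Order-of-magnitude check: an anisotropically emitted power P gives a recoil
# acceleration a = P / (m c); invert for the power needed to match the drift.
C = 299_792_458.0          # speed of light, m/s
m = 250.0                  # approximate Pioneer spacecraft mass, kg (assumption)
a_anom = 8.5e-10           # reported anomalous acceleration, m/s^2

P_needed = a_anom * m * C  # directed radiated power reproducing the drift
print(f"{P_needed:.0f} W")  # ~64 W, a small fraction of the kW-level RTG heat
```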
Abstract:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as fission nuclear reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach. Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
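The reduced optimality system cannot be reproduced here, but the structure of the gradient-type iteration can: in the minimal sketch below a random linear map plays the role of the (linearized) control-to-state operator, so the adjoint step collapses to a matrix transpose; everything is a toy stand-in for the finite element solvers.

```python
import numpy as np

# Toy stand-in for the reduced optimality system: the true problem constrains
# a cost functional by the MHD equations; here a linear map A plays the role
# of the control-to-observation operator, so J(u) = 0.5 * ||A u - y_d||^2.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))        # hypothetical control-to-observation map
y_d = rng.normal(size=40)            # desired (target) state

def reduced_gradient(u):
    return A.T @ (A @ u - y_d)       # adjoint step collapses to A^T * residual

u = np.zeros(10)
step = 1.0 / np.linalg.norm(A, 2) ** 2    # stable gradient step (1/Lipschitz)
for it in range(500):
    g = reduced_gradient(u)
    if np.linalg.norm(g) < 1e-10:         # first-order optimality reached
        break
    u -= step * g                          # gradient-type descent update
print(it, 0.5 * np.linalg.norm(A @ u - y_d) ** 2)
```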
Abstract:
This thesis proposes an integrated holistic approach to the study of neuromuscular fatigue, in order to encompass all the causes and all the consequences underlying the phenomenon. Starting from the metabolic processes occurring at the cellular level, the reader is guided toward the physiological changes at the motoneuron and motor unit level, and from these to the more general biomechanical alterations. In Chapter 1, a list of the various definitions of fatigue spanning several contexts is reported. In Chapter 2, the electrophysiological changes in terms of motor unit behavior and descending neural drive to the muscle are studied extensively, as well as the biomechanical adaptations they induce. In Chapter 3, a study based on the observation of temporal features extracted from sEMG signals is reported, which highlights the need for a more robust and reliable indicator during fatiguing tasks. Therefore, in Chapter 4, a novel bi-dimensional parameter is proposed. The study on sEMG-based indicators also opened a scenario on the neurophysiological mechanisms underlying fatigue. For this purpose, in Chapter 5, a protocol designed for the analysis of motor unit-related parameters during prolonged fatiguing contractions is presented. In particular, two methodologies have been applied to multichannel sEMG recordings of isometric contractions of the Tibialis Anterior muscle: the state-of-the-art technique for sEMG decomposition and a coherence analysis on MU spike trains. The importance of a multi-scale approach is finally highlighted in the context of the evaluation of cycling performance, where fatigue is one of the limiting factors. In particular, the last chapter of this thesis can be considered as a paradigm: physiological, metabolic, environmental, psychological and biomechanical factors influence the performance of a cyclist, and only when all of these are considered together in a novel integrative way is it possible to derive a clear model and make correct assessments.
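The novel bi-dimensional parameter itself is not specified in this summary; as a baseline for what an sEMG fatigue indicator looks like, the sketch below computes the spectral median frequency (MDF), the classic quantity whose downward drift tracks myoelectric fatigue, on a synthetic epoch.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(emg, fs):
    """Spectral median frequency: the classic sEMG fatigue indicator
    (it shifts downward as a fatiguing contraction progresses)."""
    f, pxx = welch(emg, fs=fs, nperseg=1024)
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2)]

fs = 2048                                   # typical sEMG sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
emg_epoch = np.random.default_rng(0).normal(size=t.size)  # stand-in epoch
print(f"MDF = {median_frequency(emg_epoch, fs):.1f} Hz")
# tracking MDF epoch by epoch gives the downward trend used to flag fatigue
```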
Abstract:
This thesis is divided into three chapters. In the first chapter we analyse the results of the worldwide forecasting experiment run by the Collaboratory for the Study of Earthquake Predictability (CSEP). We take the opportunity of this experiment to contribute to the definition of a more robust and reliable statistical procedure for evaluating earthquake forecasting models. We first present the models and the target earthquakes to be forecast. Then we explain the consistency and comparison tests that are used in CSEP experiments to evaluate the performance of the models. Introducing a methodology to create ensemble forecasting models, we show that models, when properly combined, almost always perform better than any single model. In the second chapter we discuss in depth one of the basic features of PSHA: the declustering of the seismicity rates. We first introduce the Cornell-McGuire method for PSHA and present the different motivations behind the need to decluster seismic catalogs. Using a theorem of modern probability theory (Le Cam's theorem), we show that declustering is not necessary to obtain the Poissonian behaviour of the exceedances that is usually considered fundamental for transforming exceedance rates into exceedance probabilities in the PSHA framework. We present a method to correct PSHA for declustering, building a more realistic PSHA. In the last chapter we explore the methods that are commonly used to take into account the epistemic uncertainty in PSHA. The most widely used method is the logic tree, which stands at the basis of the most advanced seismic hazard maps. We illustrate the probabilistic structure of the logic tree, and then we show that this structure is not adequate to describe the epistemic uncertainty. We then propose a new probabilistic framework based on ensemble modelling that properly accounts for epistemic uncertainties in PSHA.
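A minimal sketch of the ensemble idea, under assumed score-based weights and synthetic rate grids: each forecast is scored with the Poisson joint log-likelihood used in CSEP-style evaluations, and the weighted combination is scored the same way.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
n_cells = 500
models = rng.gamma(2.0, 0.01, size=(3, n_cells))   # 3 gridded rate forecasts
observed = rng.poisson(models[0])                  # synthetic catalog counts

def log_likelihood(rates, counts):
    """Poisson joint log-likelihood of observed counts given forecast rates."""
    return poisson.logpmf(counts, rates).sum()

# score-based weights: better-performing models get larger weight (assumption)
scores = np.array([log_likelihood(m, observed) for m in models])
w = np.exp(scores - scores.max())
w /= w.sum()
ensemble = w @ models                              # weighted ensemble forecast
print(scores, log_likelihood(ensemble, observed))
```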
Abstract:
The research activity focused on the transformation of methyl propionate (MP) into methyl methacrylate (MMA), avoiding the use of formaldehyde (FAL) thanks to a one-pot strategy involving in situ methanol (MeOH) dehydrogenation over the same catalytic bed where the hydroxy-methylation/dehydration of MP with FAL occurs. The relevance of this research line is related to the availability of cheap renewable bio-glycerol from biodiesel production, from which MP can be obtained via a series of simple catalytic reactions. Moreover, the conventional MMA synthesis (Lucite process) suffers from safety issues related to the direct use of carcinogenic FAL and depends on non-renewable MP. During preliminary studies, the ketonization of carboxylic acids and esters was recognized as a detrimental reaction which hinders the selective synthesis of MMA at low temperature, together with H-transfer hydrogenation with FAL or MeOH as the H-donor at higher temperatures. Therefore, the ketonization of propionic acid (PA) and MP was investigated over several catalysts (metal oxides and metal phosphates), to obtain a better understanding of the structure-activity relationships governing the reaction and to design a catalyst for MMA synthesis capable of promoting the desired reaction while minimizing ketonization and H-transfer. However, ketonization possesses scientific and industrial value in itself and represents a strategy for the upgrading of bio-oils from the fast pyrolysis of lignocellulosic materials, a robust and versatile technology capable of transforming the most abundant biomass into liquid biofuels. The catalyst screening showed that ZrO2 and La2O3 are the best catalysts, while MgO possesses low ketonization activity; still, parasitic H-transfer hydrogenation of MMA reduces its yield over all catalysts. This study resulted in the design of Mg/Ga mixed oxides that showed enhanced dehydrogenating activity towards MeOH at low temperatures. It was found that the introduction of Ga not only minimizes ketonization but also modulates catalyst basicity, reducing H-transfer hydrogenations.
Abstract:
Noise is a constant presence in measurements. Its origin is related to the microscopic properties of matter. Since the seminal work of Brown in 1828, the study of stochastic processes has attracted increasing interest, aided by the development of new mathematical and analytical tools. In the last decades, the central role that noise plays in chemical and physiological processes has become recognized. The dual role of noise as nuisance/resource pushes towards the development of new decomposition techniques that divide a signal into its deterministic and stochastic components. In this thesis I show how methods based on Singular Spectrum Analysis (SSA) have the right properties to fulfil this requirement. During my work I applied SSA to different signals of interest in chemistry: I developed a novel iterative procedure for the denoising of powder X-ray diffractograms, and I “denoised” bi-dimensional images from electrochemiluminescence (ECL) imaging experiments on micro-beads, obtaining new insight into the ECL mechanism. I also used Principal Component Analysis to investigate the relationship between brain electrophysiological signals and voice emission.
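The decomposition step can be sketched compactly: embed the series in a Hankel trajectory matrix, truncate its SVD, and map back by anti-diagonal averaging. The sketch below is a generic SSA implementation on a synthetic signal, not the iterative denoising procedure developed in the thesis.

```python
import numpy as np

def ssa_denoise(x, window, k):
    """Basic Singular Spectrum Analysis: keep the k leading components."""
    n = len(x)
    cols = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(cols)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k approximation
    # anti-diagonal averaging maps the matrix back to a time series
    rec = np.zeros(n)
    counts = np.zeros(n)
    for j in range(cols):
        rec[j:j + window] += Xk[:, j]
        counts[j:j + window] += 1
    return rec / counts

t = np.linspace(0, 4 * np.pi, 400)
noisy = np.sin(t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
clean = ssa_denoise(noisy, window=60, k=2)    # deterministic component
residual = noisy - clean                       # stochastic component
```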
Abstract:
This PhD work arises from the need to contribute to the field of energy saving in automotive applications. The aim was to produce a multidisciplinary work showing how important it is to consider the different aspects of building an electric car: from innovative materials to cutting-edge battery thermal management systems (BTMSs), also dealing with the life cycle assessment (LCA) of the battery packs (BPs). Regarding the materials, the choice was to focus on carbon fiber composites, as their use allows light products with great mechanical properties to be realized. Processes and methods to produce carbon fiber goods have been analysed, with special attention to the university solar car Emilia 4. The work proceeds by dealing with the common BTMSs on the market (air-cooled, cooling plates, heat pipes) and then deepens some of the most innovative systems, such as PCM-based BTMSs, after a preliminary experimental campaign to characterize the PCMs. After that, a complex experimental campaign on PCM-based BTMSs has been carried out, considering both uninsulated and insulated systems. In the first category, the tested systems were pure PCM-based and copper-foam-loaded-PCM-based BTMSs; the insulated tested systems were pure PCM-based and copper-foam-loaded-PCM-based BTMSs, and both of these systems equipped with a liquid cooling circuit. The choice of lighter building materials and the optimization of the BTMS are strategies which help in reducing energy consumption, considering both the energy required by the car to move and the BP state of health (SOH). Focusing on this last factor, a clear explanation of the importance of taking care of the SOH is given by the analysis of the energy consumed in producing a BP. This is why a final discussion of the LCA of a BP unit is presented in this thesis.
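A back-of-envelope calculation shows why PCMs are attractive for a BTMS: around the melting point the latent heat dominates the absorbed energy. The figures below are generic paraffin-like values assumed for illustration, not the characterized materials of the experimental campaign.

```python
# Heat absorbed by a PCM buffer around a battery module:
# Q = m * (c_s * dT_s + L + c_l * dT_l). All values are illustrative.
m = 2.0        # PCM mass, kg
c_s = 2.0e3    # solid specific heat, J/(kg K)
c_l = 2.2e3    # liquid specific heat, J/(kg K)
L = 2.0e5      # latent heat of fusion, J/kg
dT_s = 10.0    # sensible heating before melting, K
dT_l = 5.0     # sensible heating after melting, K

Q = m * (c_s * dT_s + L + c_l * dT_l)   # total absorbed energy, J
print(f"{Q/1e3:.0f} kJ")                 # ~460 kJ here, ~400 kJ from latent heat
```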
Abstract:
Understanding why market manipulation is conducted, under which conditions it is most profitable, and investigating the magnitude of these practices are crucial questions for financial regulators. Closing price manipulation induced by derivatives’ expiration is the primary subject of this thesis. The first chapter provides a mathematical framework in continuous time to study the incentive to manipulate a set of securities induced by a derivative position. An agent holding a European-type contingent claim, depending on the price of a basket of underlying securities, is considered. The agent can affect the price of the underlying securities by trading in each of them before expiration. The elements of novelty are at least twofold: (1) a multi-asset market is considered; (2) the problem is solved by means of both classic optimisation and stochastic control techniques. Both linear and option payoffs are considered. In the second chapter an empirical investigation is conducted on the existence of expiration-day effects in the UK equity market. Intraday data on FTSE 350 stocks over the six-year period 2015-2020 are used. The results show that the expiration of index derivatives is associated with a rise in both trading activity and volatility, together with significant price distortions. The expiration of single stock options appears to have little to no impact on the underlying securities. The last chapter examines the existence of patterns in line with closing price manipulation of UK stocks on option expiration days. The main contributions are threefold: (1) this is one of the few empirical studies on manipulation induced by the options market; (2) proprietary equity orderbook and transaction data sets are used to define manipulation proxies, providing a more detailed analysis; (3) the behaviour of proprietary trading firms is studied. Despite industry concerns, no evidence is found of this type of manipulative behaviour.
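A one-period toy version of the incentive problem (a drastic simplification of the continuous-time stochastic-control framework, with assumed illustrative parameters): an agent holding N units of a linear claim on the closing price trades x shares with permanent linear impact, whose round trip nets out, and pays a temporary impact cost on each leg.

```python
# Toy model: claim pays the closing price; trading x shares raises the close
# by lam * x (permanent impact, unwound after expiry), while each leg of the
# round trip costs c * x**2 in temporary impact. All parameters illustrative.
N = 1_000.0    # derivative position size (units of the claim)
lam = 1e-4     # permanent impact per share traded
c = 5e-5       # temporary impact cost coefficient

def profit(x):
    return N * lam * x - 2 * c * x ** 2   # claim gain minus round-trip cost

x_star = N * lam / (4 * c)      # from d(profit)/dx = N*lam - 4*c*x = 0
print(x_star, profit(x_star))   # 500.0, 25.0: a positive manipulation incentive
```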
Abstract:
Digital forensics as a field has progressed alongside technological advancements over the years, just as digital devices have become more robust and sophisticated. However, criminals and attackers have devised means of exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways. Their belief is that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the concept of applying AI techniques in digital forensic investigation. Our approach involves experimenting with a complex case scenario in which suspects corresponded by e-mail and suspiciously deleted certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANN) in learning and detecting communication patterns over time, and then predicting the possibility of missing communication(s) along with potential topics of discussion. To do this, we developed a novel approach and included other existing models. The accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we proposed conceptualizing the term “Digital Forensics AI” (DFAI) to formalize the application of AI in digital forensics. The objective is to highlight the instruments that facilitate the best evidential outcomes, together with presentation mechanisms that are adaptable to the probabilistic output of AI models. Finally, we strengthened the case for the application of AI in digital forensics by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.
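A toy sketch of the underlying idea, not the thesis's novel model: fit a small neural network to daily communication counts with a weekly pattern, then flag days whose observed traffic falls far below the prediction as candidate missing communications. Data, features and thresholds are synthetic assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
days = np.arange(365)
# synthetic daily e-mail counts with a weekly pattern (stand-in for a corpus)
counts = 10 + 5 * np.sin(2 * np.pi * days / 7) + rng.poisson(2, days.size)

# encode the day-of-week cyclically so the network can learn the pattern
X = np.c_[np.sin(2 * np.pi * days / 7), np.cos(2 * np.pi * days / 7)]
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X[:300], counts[:300])            # train on the first 300 days

pred = model.predict(X[300:])               # expected traffic on held-out days
gap = counts[300:] < 0.5 * pred             # far less traffic than expected
print(np.flatnonzero(gap))                  # candidate "missing communication" days
```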
Abstract:
The research project aims to improve the Design for Additive Manufacturing of metal components. Firstly, the scenario of Additive Manufacturing is depicted, describing its role in Industry 4.0, with a particular focus on Metal Additive Manufacturing technologies and applications in the Automotive sector. Secondly, the state of the art in Design for Additive Manufacturing is described, contextualizing the methodologies and classifying guidelines, rules, and approaches. The key phases of product design and process design needed to achieve lightweight functional designs and reliable processes are examined in depth, together with the Computer-Aided Technologies that support the implementation of these approaches. On this basis, a general Design for Additive Manufacturing workflow based on product and process optimization has been systematically defined. From the analysis of the state of the art, the use of a holistic approach has been considered fundamental, and thus the use of integrated product-process design platforms has been identified as a key element for its development. Indeed, a computer-based methodology exploiting integrated tools and numerical simulations to drive the product and process optimization has been proposed. A validation of CAD platform-based approaches has been performed, and the potential offered by integrated tools has been evaluated. Concerning product optimization, systematic approaches to integrate topology optimization in the design have been proposed and validated through the product optimization of an automotive case study. Concerning process optimization, the use of process simulation techniques to prevent manufacturing flaws related to the high thermal gradients of metal processes has been developed, with case studies validating the results against experimental data and an application to the process optimization of an automotive case study. Finally, an example of product and process design through the proposed simulation-driven integrated approach is provided, proving the method's suitability for effective redesigns of high-performance Additive Manufacturing metal products. The results are then outlined, and further developments are discussed.
Abstract:
Allostery is a phenomenon of fundamental importance in biology, allowing regulation of function and dynamic adaptability of enzymes and proteins. Although the allosteric effect was first observed more than a century ago, allostery remains a biophysical enigma, defined as the “second secret of life”. The challenge is mainly associated with the rather complex nature of allosteric mechanisms, which manifest themselves as the alteration of the biological function of a protein/enzyme (e.g. ligand/substrate binding at the active site) by the binding of an “other object” (“allos stereos” in Greek) at a site distant (> 1 nanometer) from the active site, namely the effector site. Thus, at the heart of allostery there is signal propagation from the effector to the active site through a dense protein matrix, and a fundamental challenge is the elucidation of the physico-chemical interactions between amino acid residues that allow communication between the two binding sites, i.e. the “allosteric pathways”. Here, we propose a multidisciplinary approach based on a combination of computational chemistry, involving molecular dynamics simulations of protein motions, (bio)physical analysis of allosteric systems, including multiple sequence alignments of known allosteric systems, and mathematical tools based on graph theory and machine learning that can greatly help in understanding the complexity of the dynamical interactions involved in the different allosteric systems. The project aims at developing robust and fast tools to identify unknown allosteric pathways. The characterization and prediction of such allosteric spots could elucidate and fully exploit the power of allosteric modulation in enzymes and DNA-protein complexes, with great potential applications in enzyme engineering and drug discovery.
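The graph-theoretic step can be illustrated with a minimal sketch: residues become nodes, edges are weighted by a (here synthetic) dynamical coupling matrix so that strong coupling means low traversal cost, and the shortest path between the effector and active sites is read off as a candidate allosteric pathway. Site indices, thresholds and the coupling matrix are illustrative assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_res = 60
# synthetic residue-residue coupling matrix (stand-in for MD-derived
# dynamical cross-correlations); stronger coupling -> cheaper edge
C = np.abs(rng.normal(0.2, 0.15, size=(n_res, n_res)))
C = np.clip((C + C.T) / 2, 0.01, 0.99)

G = nx.Graph()
for i in range(n_res):
    for j in range(i + 1, n_res):
        if C[i, j] > 0.25:                             # keep significant contacts
            G.add_edge(i, j, weight=-np.log(C[i, j]))  # high coupling = low cost

effector_site, active_site = 3, 57                     # illustrative residues
path = nx.shortest_path(G, effector_site, active_site, weight="weight")
print(path)                                            # candidate allosteric pathway
```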
Abstract:
A general description of the work presented in this thesis can be divided into three areas of interest: micropore fabrication, nanopore modification, and their applications. The first part of the thesis concerns a novel, reliable, cost-effective, portable, mass-producible, robust, and easy-to-use micropore flowcell that works on the basis of the RPS (resistive pulse sensing) technique. In line with our first goal, which was to find alternative materials and processes that would shorten production times while lowering costs and improving signal quality, a polyimide film was used as a substrate in which precise pores were created by femtosecond laser, and the resulting current blockades of nanoparticles of different sizes were recorded. The results show that the device can detect nano-sized particles through changes in the current level. Experimental and theoretical investigations, scanning electron microscopy, and focused ion beam analysis were performed to explain the micropore's performance. The second goal was the design and fabrication of a leak-free, easy-to-assemble, and portable polymethyl methacrylate flowcell for nanopore experiments. Here, ion current rectification was studied in our nanodevice. We demonstrated a self-assembly-based, controllable, and monitorable in situ Poly(l-lysine)-g-poly(ethylene glycol) (PLL-g-PEG) coating method under voltage-driven electrolyte flow and electrostatic interaction between the nanopore walls and the PLL backbones. Using the designed nanopore flowcell and in situ monolayer PLL-g-PEG functionalized 20±4 nm SiN nanopores, we observed non-sticky α-1 anti-trypsin (AAT) protein translocation. Additionally, we showed the enhancement of translocation events through this non-sticky nanopore and estimated the volume of the translocated protein. In this study, by comparing the AAT protein translocation results from functionalized and non-functionalized nanopores, we demonstrated a 105-fold dwell time reduction (31 to 0.59 ms), a 25% amplitude enhancement (0.24 to 0.3 nA), and a 15-fold increase in event rate (1 to 15 events/s) after functionalization in 1×PBS at physiological pH. The measured AAT protein volume was close to the calculated AAT protein hydrodynamic volume and to previous reports.
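The size estimate from a current blockade follows the standard resistive-pulse relation in the Maxwell small-sphere limit, ΔI/I ≈ d³/(D²·L_eff); the sketch below inverts it for the particle diameter and volume using illustrative geometry, not the measured flowcell values.

```python
import numpy as np

# Standard resistive-pulse (Coulter) estimate, Maxwell small-sphere limit:
# a particle of diameter d in a cylindrical pore (diameter D, effective
# length L_eff) blocks a current fraction dI/I ~ d**3 / (D**2 * L_eff).
# Geometry and blockade values below are illustrative, not measured ones.
D = 2.0e-6        # pore diameter, m
L_eff = 10e-6     # effective pore length, m
dI_over_I = 5e-4  # relative current blockade

d = (dI_over_I * D**2 * L_eff) ** (1 / 3)   # inferred particle diameter, m
volume = np.pi * d**3 / 6                   # inferred particle volume, m^3
print(f"d = {d*1e9:.0f} nm, V = {volume*1e21:.1f} aL")   # ~271 nm, ~10 aL
```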
Abstract:
A robust and well-distributed backbone charging network is the priority for ensuring the widespread electrification of road transport, providing a driving experience similar to that of internal combustion engine vehicles. International standards set multiple technical targets for on-board and off-board electric vehicle chargers; output voltage levels, harmonic emissions, and isolation requirements strongly influence the design of power converters. Additionally, smart-grid services such as vehicle-to-grid and vehicle-to-vehicle require the implementation of bi-directional stages that inevitably increase system complexity and component count. To face these design challenges, the present thesis provides a rigorous analysis of four-leg and split-capacitor three-phase four-wire active front-end topologies, focusing on the harmonic description under different modulation techniques and conditions. The resulting analytical formulation paves the way for converter performance improvements while keeping regulatory constraints and technical requirements under control. Specifically, the split-capacitor inverter current ripple was characterized, providing closed-form formulations valid for every sub-case ranging from synchronous to interleaved PWM. These outcomes form the basis for a novel variable-switching PWM technique capable of balancing harmonic content limitation against switching loss reduction. A similar analysis is proposed for four-leg inverters under a broad range of continuous and discontinuous PWM modulations. The general superiority of discontinuous PWM modulation in reducing switching losses and limiting harmonic emission was demonstrated. The developments rely on a parametric description of the neutral wire inductor. Finally, a novel class of integrated isolated converter topologies is proposed, aiming at neutral wire delivery without employing extra switching components beyond those already available in typical three-phase inverter and dual-active-bridge back-to-back configurations. The fourth leg was integrated inside the dual-active-bridge input bridge, providing relevant component count savings. A novel modified single-phase-shift modulation technique was developed to ensure a seamless transition between working conditions such as voltage level and power factor. Several simulations and experiments validate the outcomes.
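For context, the classic synchronous-PWM ripple envelope for a split-capacitor leg (each phase acting as an independent half-bridge against the capacitor midpoint) can be swept numerically; this is the textbook estimate ΔI_pp = V_dc·d(1−d)/(L·f_sw), not the thesis's closed-form interleaved formulation, and all parameter values are illustrative.

```python
import numpy as np

# Textbook per-phase peak-to-peak current ripple for a split-capacitor leg
# under sinusoidal (synchronous) PWM. Parameter values are illustrative.
V_dc = 800.0     # DC-link voltage, V
L = 500e-6       # phase inductance, H
f_sw = 20e3      # switching frequency, Hz
m = 0.9          # modulation index

theta = np.linspace(0, 2 * np.pi, 1000)
duty = 0.5 * (1 + m * np.sin(theta))             # sinusoidal duty cycle
ripple = V_dc * duty * (1 - duty) / (L * f_sw)   # peak-to-peak ripple, A

print(f"max ripple {ripple.max():.1f} A at duty {duty[ripple.argmax()]:.2f}")
# the envelope peaks at d = 0.5, i.e. at the zero crossings of the phase voltage
```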