760 results for "Lipschitz trivial"
Abstract:
Slot and van Emde Boas' Invariance Thesis states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C whose overhead is polynomial in time (respectively, linear in space). The rationale is that, under the Invariance Thesis, complexity classes such as LOGSPACE, P, and PSPACE become robust, i.e. machine-independent. In this dissertation, we want to find out whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model for functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, which was conjectured to be the key ingredient for obtaining a space-reasonable cost model. Through a fine-grained complexity analysis of this scheme, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then, we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine-grained analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are also able to measure sub-linear space complexities. Moreover, we transfer this result to the call-by-value case. Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet non-trivial, modification of the original de Carvalho type system.
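Krivine's abstract machine, mentioned above as the standard call-by-name evaluator, fits in a few lines. The sketch below is a minimal, textbook-style illustration using de Bruijn indices and plain closures; it has none of the space optimizations (in particular, the pointer-free implementation) that the dissertation studies:

```python
from dataclasses import dataclass

# Lambda-terms with de Bruijn indices.
@dataclass
class Var: idx: int               # bound variable, counted from the binder
@dataclass
class Lam: body: object           # abstraction
@dataclass
class App: fun: object; arg: object  # application

def krivine(term):
    """Run the Krivine abstract machine to weak head normal form.

    State: a closure (term, env) plus a stack of delayed argument closures.
    """
    env, stack = [], []
    while True:
        if isinstance(term, App):
            # Call-by-name: delay the argument as a closure, enter the function.
            stack.append((term.arg, env))
            term = term.fun
        elif isinstance(term, Lam):
            if not stack:
                return term, env          # weak head normal form reached
            env = [stack.pop()] + env     # bind the top argument
            term = term.body
        else:                             # Var: enter the stored closure
            term, env = env[term.idx]

# (λx.x)(λy.y) evaluates to λy.y
Id = Lam(Var(0))
whnf, _ = krivine(App(Id, Id))
```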
Abstract:
This thesis is a compilation of six papers that the author has written together with Alberto Lanconelli (chapters 3, 5 and 8) and Hyun-Jung Kim (chapter 7). The logical thread that links all these chapters together is the interest in analyzing and approximating the solutions of certain stochastic differential equations using the so-called Wick product as the basic tool. In the first chapter we present arguably the most important achievement of this thesis, namely the generalization to multiple dimensions of a Wick-Wong-Zakai approximation theorem proposed by Hu and Oksendal. By exploiting the relationship between the Wick product and the Malliavin derivative, we propose an original reduction method which allows us to approximate semi-linear systems of stochastic differential equations of the Itô type. Furthermore, in chapter 4 we present a non-trivial extension of the aforementioned results to the case in which the system of stochastic differential equations is driven by a multi-dimensional fractional Brownian motion with Hurst parameter bigger than 1/2. In chapter 5 we employ our approach to present a "short time" approximation for the solution of the Zakai equation from non-linear filtering theory and provide an estimate of the speed of convergence. In chapters 6 and 7 we study some properties of the unique mild solution of the stochastic heat equation driven by a spatial white noise of Wick-Skorohod type. In particular, by means of our reduction method we obtain an alternative derivation of the Feynman-Kac representation for the solution, find its optimal Hölder regularity in time and space, and present a Feynman-Kac-type closed form for its spatial derivative. Chapter 8 treats a somewhat different topic: we investigate some probabilistic aspects of the unique global strong solution of a two-dimensional system of semi-linear stochastic differential equations describing a predator-prey model perturbed by Gaussian noise.
Abstract:
The thesis describes three studies concerning the role of the Economic Preference set investigated in the Global Preference Survey (GPS) in the following cases: 1) the needs of women with breast cancer; 2) pain undertreatment in oncology; 3) the legal status of euthanasia and assisted suicide. The analyses, based on regression techniques, were always conducted on aggregate data and revealed in all cases a possible role of the Economic Preferences studied, one that also resisted the concomitant effect of the other covariates considered from time to time. Regarding the individual studies, the related conclusions are: 1) Economic Preferences appear to play a role in influencing the needs of women with breast cancer, albeit one of non-trivial interpretation, statistically "resisting" the concomitant effect of the other independent variables considered. However, these results should be considered preliminary and need further confirmation, possibly with prospective studies conducted at the level of the individual; 2) the results show a good degree of internal consistency with regard to the pro-social GPS scores, since they are all found to be non-statistically significant and united, albeit only weakly in trend, by a negative correlation with the percentage of undertreated patients. Sharper, at least statistically, is the role of Patience and Willingness to Take Risks, although their empirical interpretation is more complex; 3) the results seem to indicate an obvious role of Economic Preferences, however difficult to interpret empirically. Less evidence, at least on the inferential level, emerged regarding variables that, based on common sense, should play an even more obvious role than Economic Preferences in orienting attitudes toward euthanasia and assisted suicide, namely Healthcare System, Legal Origin, and Kinship Tightness; striking, in particular, is the inability to prove a role for the dominant religious orientation even with a simple bivariate analysis.
Abstract:
Values are beliefs or principles that are deemed significant or desirable within a specific society or culture, serving as the fundamental underpinnings for ethical and socio-behavioral norms. The objective of this research is to explore the domain encompassing moral, cultural, and individual values. To achieve this, we employ an ontological approach to formally represent the semantic relations within the value domain. The theoretical framework adopts Fillmore's frame semantics, treating values as semantic frames. A value situation is thus characterized by the co-occurrence of specific semantic roles fulfilled within a given event or circumstance. Given the intricate semantics of values as abstract entities with high social capital, our investigation extends to two interconnected domains. The first domain is embodied cognition, specifically image schemas, which are cognitive patterns derived from sensorimotor experiences that shape our conceptualization of entities in the world. The second domain pertains to emotions, which are inherently intertwined with the realm of values. Consequently, our approach endeavors to formalize the semantics of values within an embodied cognition framework, recognizing values as emotion-laden semantic frames. The primary ontologies proposed in this work are: (i) ValueNet, an ontology network dedicated to the domain of values; (ii) ISAAC, the Image Schema Abstraction And Cognition ontology; and (iii) EmoNet, an ontology for theories of emotions. The knowledge formalization adheres to established modeling practices, including the reuse of semantic web resources such as WordNet, VerbNet, FrameNet, and DBpedia, alignment to foundational ontologies like DOLCE, and the utilization of Ontology Design Patterns.
These ontological resources are operationalized through the development of a fully explainable frame-based detector capable of identifying values, emotions, and image schemas, generating knowledge graphs from natural language by leveraging the semantic dependencies of a sentence, and enabling non-trivial higher-layer knowledge inferences.
Abstract:
In the next generation of the Internet of Things, the overhead introduced by grant-based multiple access protocols may engulf the access network as a consequence of the proliferation of connected devices. Grant-free access protocols are therefore gaining increasing interest as a means to support massive multiple access. In addition to scalability requirements, new demands have emerged for massive multiple access, including latency and reliability. The challenges envisaged for future wireless communication networks, particularly in the context of massive access, include: i) a very large population of low-power devices transmitting short packets; ii) an ever-increasing scalability requirement; iii) a mild fixed maximum latency requirement; iv) a non-trivial requirement on reliability. To this end, we suggest the joint utilization of grant-free access protocols, massive MIMO at the base station side, framed schemes to let the contention start and end within a frame, and successive interference cancellation techniques at the base station. In essence, this approach is encapsulated in the concept of coded random access with massive MIMO processing. These schemes can be explored from various angles, spanning the protocol stack from the physical (PHY) to the medium access control (MAC) layer. In this thesis, we delve into both of these layers, examining topics ranging from symbol-level signal processing to scheduling strategies based on successive interference cancellation. In parallel with proposing new schemes, our work includes a theoretical analysis aimed at providing valuable system design guidelines. As a main theoretical outcome, we propose a novel joint PHY- and MAC-layer design based on density evolution on sparse graphs.
Abstract:
This thesis aims to investigate the fundamental processes governing the performance of different types of photoelectrodes used in photoelectrochemical (PEC) applications, such as unbiased water splitting for hydrogen production. Unraveling the transport and recombination phenomena in nanostructured and surface-modified heterojunctions at a semiconductor/electrolyte interface is not trivial. To approach this task, the work presented here first focuses on a hydrogen-terminated p-silicon photocathode in acetonitrile, considered a standard reference for PEC studies. Steady-state and time-resolved excitation at long wavelengths provided clear evidence of the formation of an inversion layer and revealed that the optimal photovoltage and the longest electron-hole pair lifetime occur when the reduction potential of the species in solution lies within the unfilled conduction band states. Understanding more complex systems is not as straightforward, and a complete characterization that combines time- and frequency-resolved techniques is needed. Intensity-modulated photocurrent spectroscopy and transient absorption spectroscopy are used here on WO3/BiVO4 heterojunctions. By selectively probing the two layers of the heterojunction, the occurrence of interfacial recombination was identified. Then, the addition of Co-Fe-based overlayers resulted in passivation of surface states and charge storage at the overlayer active sites, providing higher charge separation efficiency and suppression of recombination on time scales ranging from picoseconds to seconds. Finally, the charge carrier kinetics of several different Cu(In,Ga)Se2 (CIGS)-based architectures used for water reduction were investigated. The efficiency of a CIGS photocathode is severely limited by charge transfer at the electrode/electrolyte interface compared to the same absorber layer used in a photovoltaic cell. A NiMo binary alloy deposited on the photocathode surface showed a remarkable enhancement in the rate of electron transfer to the solution. An external CIGS photovoltaic module assisting a NiMo dark cathode displayed optimal absorption and charge separation properties and a highly performing interface with the solution.
Abstract:
Activation functions within neural networks play a crucial role in deep learning, since they allow networks to learn complex and non-trivial patterns in the data. However, the ability to approximate non-linear functions is a significant limitation when implementing neural networks on a quantum computer to solve typical machine learning tasks. The main burden lies in the unitarity constraint on quantum operators, which forbids non-linearity and poses a considerable obstacle to developing such non-linear functions in a quantum setting. Nevertheless, several attempts have been made in the literature to tackle the realization of a quantum activation function. Recently, the idea of QSplines has been proposed to approximate a non-linear activation function by implementing the quantum version of spline functions. Yet, QSplines suffer from various drawbacks. Firstly, the final function estimation requires a post-processing step; thus, the value of the activation function is not directly available as a quantum state. Secondly, QSplines need many error-corrected qubits and very long quantum circuits to be executed. These constraints do not allow the adoption of QSplines on near-term quantum devices and limit their generalization capabilities. This thesis aims to overcome these limitations by leveraging hybrid quantum-classical computation. In particular, a few different methods for Variational Quantum Splines are proposed and implemented, to pave the way for the development of complete quantum activation functions and unlock the full potential of quantum neural networks in the field of quantum machine learning.
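The classical intuition behind the spline approach can be shown in a few lines. The sketch below is purely classical (it is not the QSplines algorithm, and the knot grid is an arbitrary illustrative choice): even a coarse piecewise-linear spline approximates a sigmoid activation to within about one percent:

```python
import numpy as np

# Non-linear activation to approximate.
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Coarse knot grid (13 knots on [-6, 6]; an illustrative choice).
knots = np.linspace(-6.0, 6.0, 13)
x = np.linspace(-6.0, 6.0, 1001)

# Degree-1 spline: piecewise-linear interpolation through the knots.
# A quantum version would encode these local fits in a quantum state.
approx = np.interp(x, knots, sigmoid(knots))

max_err = np.max(np.abs(approx - sigmoid(x)))   # roughly 1e-2 here
```

The worst-case error of a degree-1 spline scales with the squared knot spacing, which is why even this coarse grid suffices for a smooth activation.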
Abstract:
Hand gesture recognition based on surface electromyography (sEMG) signals is a promising approach for the development of intuitive human-machine interfaces (HMIs) in domains such as robotics and prosthetics. The sEMG signal arises from the muscles' electrical activity and can thus be used to recognize hand gestures. The decoding from sEMG signals to actual control signals is non-trivial; typically, control systems map sEMG patterns into a set of gestures using machine learning, failing to incorporate any physiological insight. This master's thesis aims at developing a bio-inspired hand gesture recognition system based on neuromuscular spike extraction rather than on simple pattern recognition. The system relies on a decomposition algorithm based on independent component analysis (ICA) that decomposes the sEMG signal into its constituent motor unit spike trains, which are then forwarded to a machine learning classifier. Since ICA does not guarantee a consistent motor unit ordering across different sessions, three approaches are proposed: two ordering criteria based on firing rate and negative entropy, and a re-calibration approach that allows the decomposition model to retain information about previous sessions. Using a multilayer perceptron (MLP), the latter approach results in an accuracy of up to 99.4% in a one-subject, one-degree-of-freedom scenario. Afterwards, the decomposition and classification pipeline for inference is parallelized and profiled on the PULP platform, achieving a latency below 50 ms and an energy consumption below 1 mJ. Both classification models tested (a support vector machine and a lightweight MLP) yielded an accuracy above 92% in a one-subject, five-class (four gestures and rest) scenario. These results prove that the proposed system is suitable for real-time execution on embedded platforms and capable of matching the accuracy of state-of-the-art approaches, while also giving some physiological insight into the neuromuscular spikes underlying the sEMG.
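A toy end-to-end sketch of such a decomposition-then-classification pipeline is shown below. Everything here is a stand-in: synthetic spike mixtures instead of real sEMG, scikit-learn's FastICA and MLPClassifier instead of the thesis's algorithms, and invented dimensions, thresholds, and window labels:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-channel sEMG: linear mixtures of sparse
# motor-unit-like spike trains (all dimensions are illustrative).
n_sources, n_channels, n_samples = 4, 8, 2000
sources = (rng.random((n_samples, n_sources)) > 0.98).astype(float)
mixing = rng.normal(size=(n_sources, n_channels))
semg = sources @ mixing + 0.05 * rng.normal(size=(n_samples, n_channels))

# Step 1: ICA-based decomposition recovers (a permutation of) the spike trains.
ica = FastICA(n_components=n_sources, random_state=0, max_iter=1000)
spikes = ica.fit_transform(semg)                      # (n_samples, n_sources)

# Step 2: impose a session-independent ordering, here by an estimated
# "firing rate" (fraction of samples above a 3-sigma threshold), mirroring
# one of the ordering criteria described above.
rates = (np.abs(spikes) > 3 * spikes.std(axis=0)).mean(axis=0)
spikes = spikes[:, np.argsort(-rates)]

# Step 3: window the spike activity and classify with a small MLP.
win = 100
feats = np.abs(spikes).reshape(-1, win, n_sources).mean(axis=1)
labels = feats.argmax(axis=1)        # placeholder "gesture" label per window
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(feats, labels)
```

The ordering step is what makes the feature layout stable across re-runs of ICA, which is the property the thesis's re-calibration approach improves on.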
Abstract:
Despite the success of the ΛCDM model in describing the Universe, a possible tension between early- and late-Universe cosmological measurements is calling for new independent cosmological probes. Amongst the most promising ones, gravitational waves (GWs) can provide a self-calibrated measurement of the luminosity distance. However, to obtain cosmological constraints, additional information is needed to break the degeneracy between parameters in the gravitational waveform. In this thesis, we exploit the latest LIGO-Virgo-KAGRA Gravitational Wave Transient Catalog (GWTC-3) of GW sources to constrain the background cosmological parameters together with the astrophysical properties of Binary Black Holes (BBHs), using information from their mass distribution. We expand the public code MGCosmoPop, previously used for the application of this technique, by implementing a state-of-the-art model for the mass distribution, needed to account for the presence of non-trivial features, i.e. a truncated power law with two additional Gaussian peaks, referred to as Multipeak. We then analyse GWTC-3 comparing this model with simpler and more commonly adopted ones, both in the case of fixed and varying cosmology, and assess their goodness-of-fit with different model selection criteria, and their constraining power on the cosmological and population parameters. We also start to explore different sampling methods, namely Markov Chain Monte Carlo and Nested Sampling, comparing their performances and evaluating the advantages of each. We find concurring evidence that the Multipeak model is favoured by the data, in line with previous results, and show that this conclusion is robust to the variation of the cosmological parameters. We find a constraint on the Hubble constant of H0 = 61.10 (+38.65, −22.43) km/s/Mpc (68% C.L.), which shows the potential of this method in providing independent constraints on cosmological parameters. The results obtained in this work have been included in [1].
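The Multipeak shape described above (a truncated power law plus two Gaussian peaks) is easy to sketch numerically. All parameter values below are illustrative placeholders, not the GWTC-3 fits:

```python
import numpy as np

def multipeak_pdf(m, alpha=3.4, m_min=5.0, m_max=87.0,
                  mu1=10.0, s1=1.5, w1=0.05,
                  mu2=34.0, s2=3.0, w2=0.03):
    """Toy Multipeak BBH primary-mass density (illustrative parameters):
    a truncated power law mixed with two Gaussian peaks."""
    m = np.asarray(m, dtype=float)
    # Truncated power law, analytically normalised on [m_min, m_max].
    norm = (m_min ** (1 - alpha) - m_max ** (1 - alpha)) / (alpha - 1)
    pl = np.where((m >= m_min) & (m <= m_max), m ** (-alpha) / norm, 0.0)
    # Gaussian peaks, normalised on the real line.
    gauss = lambda mu, s: (np.exp(-0.5 * ((m - mu) / s) ** 2)
                           / (s * np.sqrt(2.0 * np.pi)))
    return (1 - w1 - w2) * pl + w1 * gauss(mu1, s1) + w2 * gauss(mu2, s2)
```

Because the mixture weights sum to one and each component is normalised, the density integrates to one, which is what population-inference codes such as MGCosmoPop require of a mass model.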
Abstract:
Axion-like particles (ALPs), i.e., pseudo-scalar bosons interacting via derivative couplings, are a generic feature of many new-physics scenarios, including those addressing the strong-CP problem and/or the existence of dark matter. Their phenomenology is very rich, with a wide range of scales and interactions being directly probed in very different experiments, from accelerators to observatories. In this thesis, we explore the possibility that ALPs might indirectly affect precision collider observables. In particular, we consider an ALP that preferentially couples to the top quark (top-philic) and we study new-physics 1-loop corrections to processes involving top quarks in the final state. Our study stems from the simple, yet non-trivial, observation that these 1-loop corrections are infrared finite even in the case of negligible ALP masses and can therefore be considered on their own. We compute the 1-loop corrections of new physics analytically in key cases involving top quark pair production, and then implement and validate a fully general next-to-leading-order model in MadGraph5_aMC@NLO that allows one to compute virtual effects for any process of interest. A detailed study of the expected sensitivity to virtual ALPs in ttbar production at the LHC is performed.