983 results for Variational explanation
Abstract:
Distinguishing between argumentation and explanation is a complicated but necessary task for several reasons. One of them is the need to incorporate explanation into a dialogue move as the result of a dialectical obligation. Several dialogue systems have been proposed that explore this distinction by emphasizing pragmatic aspects. In the present work I deal with structural aspects of explanation, analyzed within the framework of default logic, which makes it possible to characterize certain objections in the dialogue. I also consider that the operational version of default logic constitutes an adequate approach for constructing the explanation and for representing the dialogue instance in the dialectical exchange.
Abstract:
It has recently been shown that the double exchange Hamiltonian, with weak antiferromagnetic interactions, has a richer variety of first- and second-order transitions than previously anticipated, and that such transitions are consistent with the magnetic properties of manganites. Here we present a thorough discussion of the variational mean-field approach that leads to these results. We also show that the effect of the Berry phase turns out to be crucial for producing first-order paramagnetic-ferromagnetic transitions near half filling with transition temperatures compatible with experiment. The computation relies on two key ingredients: a mean-field ansatz that retains the complexity of a system of electrons with off-diagonal disorder, which standard mean-field techniques do not fully take into account, and the small but significant antiferromagnetic superexchange interaction between the localized spins.
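The double-exchange calculation summarized above is far richer than anything that fits in a few lines, but the basic mean-field machinery, a self-consistency condition that fixes the order parameter, can be illustrated with a much simpler classical Weiss model. The sketch below is only that illustration; the coupling J, the coordination number z, and the temperature grid are arbitrary choices, not parameters of the manganite model.

```python
# Minimal Weiss mean-field sketch (illustrative only, not the double-exchange
# model): solve the self-consistency condition m = tanh(z*J*m / T) by
# fixed-point iteration and watch the magnetization vanish above T_c = z*J.
import numpy as np

def magnetization(T, J=1.0, z=6, tol=1e-10, max_iter=10000):
    m = 1.0                                   # start from a fully polarized guess
    for _ in range(max_iter):
        m_new = np.tanh(z * J * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

for T in (2.0, 4.0, 5.5, 6.0, 6.5, 8.0):      # T_c = z*J = 6 for these toy values
    print(f"T = {T:4.1f}   m = {magnetization(T):.4f}")
```

In this toy model the transition is continuous; the first-order behaviour discussed in the abstract requires the Berry phase and superexchange terms that the sketch deliberately omits.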
Abstract:
Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge about the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem that depends on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks that preserve their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to inverse problems involving images and time series. For each task, we show that the proposed schemes improve on many existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme that uses a deep-learning-based denoiser trained on the gradient domain. In the second part, we address natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep-learning-based methods. We boost the performance of both supervised strategies, such as trained convolutional and recurrent networks, and unsupervised deep learning strategies, such as Deep Image Prior, by penalizing their losses with handcrafted regularization terms.
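As a concrete illustration of the variational side of this dichotomy, the sketch below runs plain gradient descent on a smoothness-regularized least-squares objective for a toy 1-D deblurring problem. The blur kernel, regularization weight, and step size are illustrative assumptions and do not correspond to the models developed in the thesis.

```python
# Toy variational reconstruction (assumed 1-D deblurring problem): gradient
# descent on  0.5*||A x - y||^2 + 0.5*lam*||D x||^2 , with A a box blur and
# D the forward-difference operator. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_true = np.zeros(n); x_true[60:140] = 1.0            # piecewise-constant signal
kernel = np.ones(7) / 7.0                             # symmetric box blur

def A(x):  return np.convolve(x, kernel, mode="same") # blur operator (approx. self-adjoint)
def D(x):  return np.diff(x)                          # forward differences
def Dt(v):                                            # adjoint of D
    out = np.zeros(n); out[:-1] -= v; out[1:] += v
    return out

y = A(x_true) + 0.01 * rng.standard_normal(n)         # noisy blurred data

lam, step = 0.1, 0.2
x = np.zeros(n)
for _ in range(2000):
    grad = A(A(x) - y) + lam * Dt(D(x))               # gradient of the objective
    x -= step * grad

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```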
Abstract:
This research investigates the use of Artificial Intelligence (AI) systems for profiling and decision-making, and the consequences this poses for the rights and freedoms of individuals. In particular, the research considers that automated decision-making systems (ADMs) are opaque, can be biased, and rely on correlation-based logic. For these reasons, ADMs do not take decisions as human beings do. Against this background, the risks for the rights of individuals, combined with the demand for transparency of algorithms, have created a debate on the need for a new 'right to explanation'. Assuming that, except in cases provided for by law, a decision made by a human does not give rise to a right to explanation, the question has been raised as to whether, if the decision is made by an algorithm, it is necessary to grant the decision subject a right to explanation. The research therefore addresses a right to explanation of automated decision-making, examining the relation between today's technology and the legal concepts of explanation, reasoning, and transparency. In particular, it focuses on the existence and scope of the right to explanation, considering the legal and technical issues surrounding the use of ADMs. The research analyses the use of AI and the problems arising from it from a legal perspective, studying the EU legal framework, especially in the field of data protection. In this context, part of the research focuses on transparency requirements under the GDPR (namely Articles 13–15 and 22, as well as Recital 71). The research aims to outline an interpretative framework for such a right and to make recommendations about its development, providing guidelines for an adequate explanation of automated decisions. Hence, the thesis analyses what an explanation might consist of and the benefits of explainable AI, examined from both legal and technical perspectives.
Abstract:
In this thesis we focus on the analysis and interpretation of time-dependent deformations recorded by different geodetic methods. First, we apply a variational Bayesian Independent Component Analysis (vbICA) technique to GPS daily displacement solutions to separate the postseismic deformation that followed the mainshocks of the 2016-2017 Central Italy seismic sequence from other, hydrological, deformation sources. By interpreting the signal associated with the postseismic relaxation, we model an afterslip distribution on the faults involved in the mainshocks that is consistent with the co-seismic models available in the literature. We find evidence of aseismic slip on the Paganica fault, responsible for the Mw 6.1 2009 L'Aquila earthquake, highlighting the importance of aseismic slip and static stress transfer for properly modelling the recurrence of earthquakes on nearby fault segments. We infer a possible viscoelastic relaxation of the lower crust as a contributing mechanism to the postseismic displacements. We highlight the importance of a proper separation of the hydrological signals for an accurate assessment of the tectonic processes, especially in cases of mm-scale deformations. In the same context, we provide a physical explanation for the ICs associated with the observed hydrological processes. In the second part of the thesis, we focus on strain data from Gladwin Tensor Strainmeters, working on the instruments deployed in Taiwan. We develop a novel, completely data-driven approach to calibrate these strainmeters. We carry out a joint analysis of geodetic (strainmeters, GPS and GRACE products) and hydrological (rain gauges and piezometers) data sets to characterize the hydrological signals in Southern Taiwan. Lastly, we apply the proposed calibration approach to the strainmeters recently installed in Central Italy. As an example, we provide the detection of a storm that hit the Umbria-Marche regions (Italy), demonstrating the potential of strainmeters for following the dynamics of deformation processes with limited spatio-temporal signature.
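The vbICA decomposition used in the thesis is a Bayesian method with its own machinery; as a hedged stand-in that shows only the general idea of unmixing superimposed deformation sources from multi-station time series, the sketch below uses scikit-learn's FastICA on synthetic signals. The signals, the number of pseudo-stations, and the noise level are made up for the illustration.

```python
# Illustrative source separation for pseudo-GPS time series: mix a
# postseismic-like exponential decay and a seasonal oscillation into several
# synthetic "stations", then unmix them with FastICA (a stand-in for vbICA).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 3, 600)                        # time in years (synthetic)
postseismic = np.exp(-t / 0.5)                    # relaxation-like transient
seasonal = np.sin(2 * np.pi * t)                  # annual hydrological-like cycle
S = np.c_[postseismic, seasonal]                  # true sources, shape (600, 2)

A = rng.standard_normal((2, 8))                   # mixing onto 8 pseudo-stations
X = S @ A + 0.02 * rng.standard_normal((600, 8))  # observed displacement series

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                      # recovered independent components
print("recovered components:", S_est.shape)       # (600, 2)
```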
Abstract:
The main contribution of this thesis is the proposal of novel strategies for selecting the parameters arising in variational models employed for the solution of inverse problems with data corrupted by Poisson noise. In light of the importance of using a significantly small dose of X-rays in Computed Tomography (CT), and of the need for advanced reconstruction techniques due to the high level of noise in the data, we focus on parameter selection principles especially suited to low photon counts, i.e. low-dose Computed Tomography. For completeness, since such strategies can be adopted in various scenarios where the noise in the data typically follows a Poisson distribution, we also show their performance for other applications such as photography, astronomical and microscopy imaging. More specifically, in the first part of the thesis we focus on low-dose CT data corrupted only by Poisson noise, extending automatic selection strategies designed for Gaussian noise and improving the few existing ones for Poisson noise. The new approaches are shown to outperform the state-of-the-art competitors, especially in the low-counting regime. Moreover, we extend the best-performing strategy to the hard task of multi-parameter selection, with promising results. Finally, in the last part of the thesis, we introduce the problem of material decomposition for hyperspectral CT, whose data encode how the different materials in the target attenuate X-rays in different ways depending on the specific energy. We conduct a preliminary comparative study to obtain an accurate material decomposition starting from few noisy projection data.
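One classical ingredient behind such selection principles is a discrepancy-principle-style rule for Poisson data: pick the regularization weight whose reconstruction brings the generalized Kullback-Leibler data fidelity close to its expected value, roughly n/2 for n measurements. The sketch below applies this rule to a toy 1-D denoising problem; the signal, the smoothness regularizer, the scaled-gradient solver, and the grid of weights are illustrative assumptions rather than the strategies proposed in the thesis.

```python
# Toy discrepancy-principle selection for Poisson data: among a grid of
# regularization weights, keep the one whose reconstruction brings the
# generalized Kullback-Leibler data fidelity closest to n/2. The signal,
# smoothness regularizer and scaled-gradient solver are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 256
x_true = 20 + 80 * np.exp(-((np.arange(n) - 128) ** 2) / 400.0)  # positive signal
y = rng.poisson(x_true).astype(float)                            # Poisson counts

def DtD(x):                                     # D^T D x for forward differences D
    return np.concatenate(([x[0] - x[1]],
                           2 * x[1:-1] - x[:-2] - x[2:],
                           [x[-1] - x[-2]]))

def reconstruct(lam, iters=5000, step=1e-3):
    x = np.full(n, y.mean())
    for _ in range(iters):
        grad = 1.0 - y / x + lam * DtD(x)       # gradient of KL data term + smoothness
        x = np.clip(x - step * x * grad, 1e-6, None)   # positivity-preserving scaled step
    return x

def kl(y, x):                                   # generalized Kullback-Leibler divergence
    return np.sum(y * np.log(y / x) - y + x)

best = min((abs(kl(y, reconstruct(lam)) - n / 2.0), lam) for lam in (0.005, 0.05, 0.5))
print("selected regularization weight:", best[1])
```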
Abstract:
Activation functions within neural networks play a crucial role in Deep Learning, since they make it possible to learn complex and non-trivial patterns in the data. However, the ability to approximate non-linear functions is a significant limitation when implementing neural networks on a quantum computer to solve typical machine learning tasks. The main burden lies in the unitarity constraint on quantum operators, which forbids non-linearity and poses a considerable obstacle to developing such non-linear functions in a quantum setting. Nevertheless, several attempts to realize quantum activation functions have been made in the literature. Recently, QSplines have been proposed to approximate a non-linear activation function by implementing the quantum version of spline functions. Yet, QSplines suffer from several drawbacks. First, the final function estimation requires a post-processing step; thus, the value of the activation function is not available directly as a quantum state. Second, QSplines need many error-corrected qubits and very long quantum circuits to be executed. These constraints prevent the adoption of QSplines on near-term quantum devices and limit their generalization capabilities. This thesis aims to overcome these limitations by leveraging hybrid quantum-classical computation. In particular, several methods for Variational Quantum Splines are proposed and implemented, to pave the way for the development of complete quantum activation functions and unlock the full potential of quantum neural networks in the field of quantum machine learning.
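The classical idea that QSplines transplant into quantum circuits is simply the approximation of a non-linear activation by a low-degree spline. The sketch below shows only that classical counterpart, fitting a cubic spline to a sigmoid with SciPy; the knot placement and the target function are illustrative choices, not the quantum constructions of the thesis.

```python
# Classical counterpart of the QSplines idea (sketch only): approximate a
# non-linear activation (here a sigmoid) by a least-squares cubic spline and
# evaluate the spline as a piecewise-polynomial surrogate. Knot placement
# and target function are illustrative choices.
import numpy as np
from scipy.interpolate import splrep, splev

x = np.linspace(-6, 6, 200)
sigmoid = 1.0 / (1.0 + np.exp(-x))

knots = np.linspace(-4, 4, 9)              # interior knots (arbitrary placement)
tck = splrep(x, sigmoid, k=3, t=knots)     # least-squares cubic spline fit
approx = splev(x, tck)                     # evaluate the surrogate

print("max approximation error:", np.max(np.abs(approx - sigmoid)))
```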
Abstract:
Intermittent fasting (IF) is an often-used intervention to decrease body mass. In male Sprague-Dawley rats, 24-hour cycles of IF result in mild caloric restriction, reduced body mass gain, and significant decreases in the efficiency of energy conversion. Here, we study the metabolic effects of IF in order to uncover the mechanisms involved in this lower energy conversion efficiency. After 3 weeks, IF animals displayed overeating during fed periods and lower body mass, accompanied by alterations in the mass of energy-related tissues. The lower efficiency of energy use was not due to uncoupling of muscle mitochondria. Enhanced lipid oxidation was observed during fasting days, whereas fed days were accompanied by higher metabolic rates. Furthermore, increased expression of the orexigenic neurotransmitters AGRP and NPY was found in the hypothalamus of IF animals, even on feeding days, which could explain the overeating pattern. Together, these effects provide a mechanistic explanation for the lower efficiency of energy conversion observed. Overall, we find that IF promotes changes in hypothalamic function that explain the differences in body mass and caloric intake.
Abstract:
The aim was to assess the construct validity and reliability of the Pediatric Patient Classification Instrument, in a correlational study developed at a teaching hospital. The classification involved 227 patients, using the pediatric patient classification instrument. Construct validity was assessed through factor analysis and reliability through internal consistency. The Exploratory Factor Analysis identified three constructs explaining 67.5% of the variance and, in the reliability assessment, the following Cronbach's alpha coefficients were found: 0.92 for the instrument as a whole; 0.88 for the Patient domain; 0.81 for the Family domain; and 0.44 for the Therapeutic procedures domain. The instrument showed evidence of construct validity and reliability, and these analyses indicate its feasibility. Validation of the Pediatric Patient Classification Instrument still represents a challenge, given its relevance for a closer look at pediatric nursing care and management. Further research should be considered to explore its dimensionality and content validity.
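The reliability figures quoted above are Cronbach's alpha coefficients. A minimal sketch of how alpha is computed from an item-score matrix is given below, using synthetic data that is purely illustrative.

```python
# Minimal Cronbach's alpha sketch:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance of the total score)
import numpy as np

def cronbach_alpha(scores):
    # scores: (n_respondents, n_items) matrix of item ratings
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(100, 1))                   # shared trait
items = latent + 0.5 * rng.normal(size=(100, 6))     # six correlated items
print("alpha:", round(cronbach_alpha(items), 2))
```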
Abstract:
Transfer of reaction products formed on the surfaces of two mutually rubbed dielectric solids makes an important, if not dominant, contribution to triboelectricity. New evidence in support of this statement is presented in this report, based on analytical electron microscopy coupled with electrostatic potential mapping techniques. Mechanical action on contacting surface asperities transforms them into hot spots for free-radical formation, followed by electron transfer producing cationic and anionic polymer fragments, according to their electronegativity. Polymer ions accumulate, creating domains with excess charge, because they are formed at the fracture surfaces of pulled-out asperities. Another factor for charge segregation is the low entropy of polymer mixing, following Flory and Huggins. The formation of fractal charge patterns described previously is thus the result of the fractal scatter of polymer fragments on both contacting surfaces. The present results contribute to explaining the centuries-old difficulty of understanding the triboelectric series and triboelectricity in general, as well as the dissipative nature of friction, and they may lead to better control of friction and its consequences.
Abstract:
We perform variational studies of the interaction-localization problem to describe the interaction-induced renormalizations of the effective (screened) random potential seen by quasiparticles. Here we present results of careful finite-size scaling studies for the conductance of disordered Hubbard chains at half-filling and zero temperature. While our results indicate that quasiparticle wave functions remain exponentially localized even in the presence of moderate to strong repulsive interactions, we show that interactions produce a strong decrease of the characteristic conductance scale g^{*} signaling the crossover to strong localization. This effect, which cannot be captured by a simple renormalization of the disorder strength, instead reflects a peculiar non-Gaussian form of the spatial correlations of the screened disordered potential, a hitherto neglected mechanism to dramatically reduce the impact of Anderson localization (interference) effects.
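The conductance-scaling study above concerns interacting, disordered Hubbard chains and is well beyond a few lines of code, but the non-interacting Anderson-localization baseline it builds on can be illustrated with a standard transfer-matrix estimate of the localization length of a 1-D Anderson model. In the sketch below, the disorder strength W, the energy E, and the chain length are arbitrary illustrative values.

```python
# Transfer-matrix sketch for the non-interacting 1-D Anderson model:
#   psi_{n+1} = (E - eps_n) psi_n - psi_{n-1},  eps_n uniform in [-W/2, W/2].
# The inverse growth rate of the transfer-matrix product estimates the
# localization length. W, E and the chain length are illustrative values.
import numpy as np

def localization_length(W=2.0, E=0.0, N=100000, seed=0):
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    log_growth = 0.0
    for eps in rng.uniform(-W / 2, W / 2, size=N):
        v = np.array([(E - eps) * v[0] - v[1], v[0]])  # apply one transfer matrix
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm                                      # renormalize to avoid overflow
    return N / log_growth                              # xi = 1 / Lyapunov exponent

print("estimated localization length (sites):", round(localization_length(), 1))
```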
Abstract:
Mechanically evoked reflexes have been postulated to be less sensitive to presynaptic inhibition (PSI) than the H-reflex. This has implications for investigations of spinal cord neurophysiology that are based on the T-reflex. Previous studies have shown an enhanced effect of PSI on the H-reflex when a train of ~10 conditioning stimuli at 1 Hz was applied to the nerve of the antagonist muscle. The main questions addressed in the present study are whether T-reflexes are indeed less sensitive to PSI and whether (and to what extent and by what possible mechanisms) the effect of low-frequency conditioning, found previously for the H-reflex, can be reproduced for T-reflexes from the soleus muscle. We explored two different conditioning-to-test (C-T) intervals: 15 and 100 ms (corresponding to D1 and D2 inhibition, respectively). Test stimuli consisted of either electrical pulses applied to the posterior tibial nerve to elicit H-reflexes or mechanical percussion of the Achilles tendon to elicit T-reflexes. The 1 Hz train of conditioning electrical stimuli delivered to the common peroneal nerve induced a stronger PSI effect than a single conditioning pulse, for both reflexes (T and H), regardless of the C-T interval. Moreover, the conditioning train of pulses (with respect to a single conditioning pulse) was proportionally more effective for T-reflexes than for H-reflexes (irrespective of the C-T interval), which might be associated with the different contingents of Ia afferents activated by mechanical and electrical test stimuli. A conceivable explanation for the enhanced PSI effect in response to a train of stimuli is the occurrence of homosynaptic depression at synapses on inhibitory interneurons interposed within the PSI pathway. The present results add to the discussion of the sensitivity of the stretch reflex pathway to PSI and its functional role.
Abstract:
A combination of the variational principle, expectation values and the Quantum Monte Carlo method is used to solve the Schrödinger equation for some simple systems. The results are accurate, and the simplicity of this version of the Variational Quantum Monte Carlo method makes it a powerful tool for teaching alternative procedures and fundamental concepts in quantum chemistry courses. Some numerical procedures are described in order to control accuracy and computational efficiency. The method was applied to compute ground-state energies, and a first attempt to obtain excited states is described.
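A minimal sketch of the Variational Quantum Monte Carlo procedure described above is given below, for the hydrogen-atom ground state with the trial wavefunction psi_alpha(r) = exp(-alpha*r) in atomic units. The Metropolis step size, number of samples, and equilibration fraction are illustrative choices.

```python
# Minimal Variational Quantum Monte Carlo sketch for the hydrogen atom
# (atomic units): trial wavefunction psi_alpha(r) = exp(-alpha*r), Metropolis
# sampling of |psi|^2, energy estimated as the mean local energy
#   E_L(r) = -alpha^2/2 + (alpha - 1)/r.
# At alpha = 1 the trial function is exact and E = -0.5 hartree.
import numpy as np

def vmc_energy(alpha, n_steps=100000, step_size=0.5, seed=0):
    rng = np.random.default_rng(seed)
    r = np.array([1.0, 0.0, 0.0])                  # initial electron position
    energies = []
    for i in range(n_steps):
        trial = r + step_size * rng.uniform(-1, 1, size=3)
        # accept with probability |psi(trial)|^2 / |psi(r)|^2
        if rng.random() < np.exp(-2 * alpha * (np.linalg.norm(trial) - np.linalg.norm(r))):
            r = trial
        if i > n_steps // 10:                      # discard equilibration steps
            energies.append(-0.5 * alpha**2 + (alpha - 1.0) / np.linalg.norm(r))
    return np.mean(energies)

for alpha in (0.8, 1.0, 1.2):
    print(f"alpha = {alpha}:  E = {vmc_energy(alpha):.4f} hartree")
```

Scanning alpha and keeping the lowest estimated energy is the variational step; for this trial function the exact minimum sits at alpha = 1.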
Abstract:
This research investigated the use of diazepam in conjunction with behavioral strategies to manage uncooperative behavior in child dental patients. The 6 participants received dental treatment over 9 sessions. Using a double-blind design, children received placebo or diazepam and at the same time were submitted to behavior management procedures (distraction, explanation, reinforcement, and setting rules and limits). All sessions were video-recorded and divided into 15-second intervals, in which observers recorded the child's behaviors (crying, body and/or head movements, escape and avoidance) and the dentist's behavior. The results indicated that diazepam, at the dose used, was effective with only one subject. The other participants did not allow the treatment and showed an increase in their resistance. The behavioral preparation strategies for dental treatment should have been planned more precisely in order to help the child face the real conditions of dental treatment, mainly in the first sessions, avoiding the reinforcement of inappropriate behaviors.
Abstract:
This article shows that the term functionalism, very often understood as a single or uniform approach in linguistics, has to be understood from different perspectives. I start by presenting an opposition similar to that between I-language and E-language in Chomsky (1986). As in the latter conception, language can be understood as an abstract model of a mind-internal mechanism responsible for language production and perception or, as in the former one, it can be the description of the external use of language. Also, as with formalists, there are functionalists who look for cross-linguistic variation (and universals of language use) and functionalists who look for language-internal variation. It is also shown that functionalists can differ in the extent to which social variables are considered in the explanation of linguistic form.