897 results for Simulation-based methods
Abstract:
The self-reference effect (SRE) in memory is thought to depend on specialized mechanisms that enhance memory for self-relevant information. We investigated whether these mechanisms can be engaged “by proxy” when we simulate other people, by asking participants to interact with two virtual partners: one similar and one dissimilar to self. Participants viewed pairs of objects and picked one for themselves, for their similar partner, or for their dissimilar partner. A surprise memory test followed that required participants to identify which object of each pair was chosen, and for whom. Finally, participants were shown both partners’ object pairs again and asked to indicate their personal preference. Four key findings were observed. Overlap between participants’ own choices and those made for their partners was significantly higher for the similar than the dissimilar partner, revealing participants’ use of their own preferences to simulate the similar partner. Recollection of chosen objects was significantly higher for self than for both partners and, critically, was significantly higher for similar than dissimilar partners. Source confusion between self and the similar partner was also higher. These findings suggest that self-reference by proxy enhances memory for non-self-relevant material, and we consider the theoretical implications for functional interpretations of the SRE.
Abstract:
Recent years have seen massive growth in wearable technology: phones, watches, glasses, shirts, and more can all be smart. These technologies are prevalent in various fields, from wellness, sports, and fitness to the healthcare domain. The spread of this phenomenon led the World Health Organization to define the term 'mHealth' as "medical and public health practice supported by mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants, and other wireless devices". Furthermore, mHealth solutions are well suited to real-time wearable biofeedback (BF) systems: sensors in the body area network connected to a processing unit (smartphone) and a feedback device (loudspeaker) to measure human functions and return them to the user as a (bio)feedback signal. During the COVID-19 pandemic, this transformation of the healthcare system was dramatically accelerated by new clinical demands, including the need to prevent hospital surges and to assure continuity of clinical care services, allowing pervasive healthcare. Now more than ever, the integration of mHealth technologies will be the basis of this new era of clinical practice. In this scenario, the primary goal of this PhD thesis is to investigate new and innovative mHealth solutions for the assessment and rehabilitation of different neuromotor functions and diseases. For clinical assessment, there is a need to overcome the limitations of subjective clinical scales. By creating new pervasive and self-administrable mHealth solutions, this thesis investigates the possibility of employing innovative systems for objective clinical evaluation. For rehabilitation, we explored the clinical feasibility and effectiveness of mHealth systems. In particular, we developed innovative mHealth solutions with BF capability to allow tailored rehabilitation. The main goal of an mHealth system should be to improve the person's quality of life, increasing or maintaining their autonomy and independence. To this end, inclusive design principles might be crucial, next to technical and technological ones, to improve the usability of mHealth systems.
Abstract:
Nowadays, robotic applications are widespread and most manipulation tasks are solved efficiently. However, Deformable Objects (DOs) still represent a major limitation for robots. The main difficulty in DO manipulation is dealing with shape and dynamics uncertainties, which prevents the use of model-based approaches (since they are excessively computationally complex) and makes sensory data difficult to interpret. This thesis reports research activities aimed at addressing applications in robotic manipulation and sensing of Deformable Linear Objects (DLOs), with a particular focus on electric wires. In all the works, a significant effort was devoted to studying effective strategies for analyzing sensory signals with various machine learning algorithms. The first part of the document focuses on wire terminals, i.e., their detection, grasping, and insertion. First, a pipeline that integrates vision and tactile sensing is developed, then further improvements are proposed for each module. A novel procedure is proposed to gather and label massive amounts of training images for object detection with minimal human intervention. Together with this strategy, we extend a generic object detector based on Convolutional Neural Networks for orientation prediction. The insertion task is also extended by developing a closed-loop control capable of guiding the insertion of a longer and curved segment of wire through a hole, where the contact forces are estimated by means of a Recurrent Neural Network. In the second part of the thesis, the interest shifts to the DLO shape. Robotic reshaping of a DLO is addressed by means of a sequence of pick-and-place primitives, while a decision-making process driven by visual data learns the optimal grasping locations by exploiting Deep Q-learning and finds the best release point. The success of the solution relies on a reliable interpretation of the DLO shape. For this reason, further developments are made in visual segmentation.
Abstract:
Standard indirect inference (II) estimators take a given finite-dimensional statistic, Z_n, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We here propose a novel estimation method that utilizes all available information contained in the distribution of Z_n, not just its first moment. This is done by computing the likelihood of Z_n and then estimating the parameters either by maximizing the likelihood or by computing the posterior mean for a given prior of the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_n, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_n will in general be unknown, and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.
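To make the construction above concrete, the following schematic (notation assumed here, not taken verbatim from the paper) contrasts the moment-based II estimator with the two indirect likelihood estimators, where f_n(· | θ) denotes the (typically unknown) density of Z_n under parameter θ, W_n a weighting matrix, and π(θ) a prior:

```latex
% Schematic definitions (assumed notation): f_n(z \mid \theta) is the density of
% Z_n under parameter \theta, W_n a weighting matrix, and \pi(\theta) a prior.
\begin{align*}
\hat\theta_{\mathrm{II}}  &= \arg\min_{\theta}\;
  \bigl(Z_n - \mathbb{E}_{\theta}[Z_n]\bigr)' W_n \bigl(Z_n - \mathbb{E}_{\theta}[Z_n]\bigr)
  && \text{(moment-based II)} \\
\hat\theta_{\mathrm{MIL}} &= \arg\max_{\theta}\; \log f_n(Z_n \mid \theta)
  && \text{(maximum indirect likelihood)} \\
\hat\theta_{\mathrm{BIL}} &= \frac{\int \theta\, f_n(Z_n \mid \theta)\,\pi(\theta)\,d\theta}
                                 {\int f_n(Z_n \mid \theta)\,\pi(\theta)\,d\theta}
  && \text{(posterior mean under prior } \pi\text{)}
\end{align*}
```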
Abstract:
This study analyzes the issue of American option valuation when the underlying exhibits a GARCH-type volatility process. We propose the use of Rubinstein's Edgeworth binomial tree (EBT), in contrast to the simulation-based methods considered in previous studies. The EBT-based valuation approach makes an implied calibration of the pricing model feasible. By empirically analyzing the pricing performance of American index and equity options, we illustrate the superiority of the proposed approach.
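As a point of reference for the tree-based valuation idea, the sketch below prices an American put on a plain Cox-Ross-Rubinstein binomial tree with backward induction. It is a generic baseline, not Rubinstein's Edgeworth binomial tree (which additionally adjusts the node probabilities for implied skewness and kurtosis), and all parameter values are illustrative.

```python
import numpy as np

def american_put_crr(s0, k, r, sigma, t, steps):
    """Price an American put on a Cox-Ross-Rubinstein (CRR) binomial tree.

    Plain CRR baseline for illustration only; an Edgeworth binomial tree
    additionally reshapes the terminal node probabilities to match a given
    skewness and kurtosis before the backward induction.
    """
    dt = t / steps
    u = np.exp(sigma * np.sqrt(dt))          # up factor
    d = 1.0 / u                              # down factor
    disc = np.exp(-r * dt)                   # one-step discount factor
    p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability

    # Terminal stock prices s0 * u^(steps-j) * d^j and put payoffs.
    j = np.arange(steps + 1)
    prices = s0 * u ** (steps - j) * d ** j
    values = np.maximum(k - prices, 0.0)

    # Backward induction with an early-exercise check at every node.
    for _ in range(steps):
        prices = prices[:-1] / u             # stock prices one step earlier
        continuation = disc * (p * values[:-1] + (1.0 - p) * values[1:])
        values = np.maximum(continuation, np.maximum(k - prices, 0.0))
    return values[0]

# Illustrative parameters (not calibrated to any market data).
print(american_put_crr(s0=100.0, k=100.0, r=0.03, sigma=0.25, t=1.0, steps=500))
```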
Abstract:
In this work, recent methodological developments from the field of numerical integration are tested for the approximate evaluation of state-space models. The resulting algorithms are compared with the popular simulation-based approximation methods with respect to their approximation quality.
Abstract:
The foundations of Bayesian Decision Theory provide a coherent framework in which decision-making problems may be solved. The growing availability of powerful computers is leading us to tackle increasingly complex decision-making problems with numerous multidimensional sources of uncertainty; several conflicting objectives; preferences, goals, and beliefs that change over time; and different groups affected by the decisions. These complexity factors demand better tools for representing decision problems, place strong cognitive demands on the decision maker, and lead to difficult computational problems. This thesis deals with these three topics. In Chapter 1, we provide a critical review of the main graphical methods for representing and solving decision problems, concluding with some fundamental recommendations and generalisations. Our second concern leads us to study such methods when only partial information about the decision maker's preferences and beliefs is available. In Chapter 2, we study this problem when it is structured as an influence diagram (ID). We give an algorithm to compute the nondominated solutions in an ID and analyse several ad hoc solution concepts. The last issue is studied in Chapters 3 and 4. Motivated by a reservoir management application, we introduce a heuristic method for solving sequential decision-making problems. Since it shows very good performance, we extend the idea to general sequential problems and quantify its goodness. We then explore in several directions the application of simulation-based methods to Decision Analysis. We first introduce Monte Carlo methods to approximate the nondominated set in continuous problems. Then, we provide a Markov chain Monte Carlo method for problems under complete information with general structure: the decisions and the random variables may be continuous, and the utility function may be arbitrary. Our scheme is applicable to many problems modelled as IDs. We conclude with a chapter of conclusions and open problems.
Abstract:
Testing for simultaneous vicariance across comparative phylogeographic data sets is a notoriously difficult problem, hindered by the mutational variance, the coalescent variance, and variability across pairs of sister taxa in parameters that affect genetic divergence. We simulate vicariance to characterize the behaviour of several commonly used summary statistics across a range of divergence times, and to characterize this behaviour in comparative phylogeographic data sets having multiple taxon pairs. We found Tajima's D to be relatively uncorrelated with other summary statistics across divergence times, and, using simple hypothesis testing of simultaneous vicariance given variable population sizes, we counter-intuitively found that the variance across taxon pairs in Nei and Li's net nucleotide divergence (pi(net)), a common measure of population divergence, is often inferior to the variance in Tajima's D across taxon pairs as a test statistic for distinguishing ancient simultaneous vicariance from variable vicariance histories. The opposite and more intuitive pattern is found for testing more recent simultaneous vicariance, and overall we found that, depending on the timing of vicariance, one of these two test statistics can achieve high statistical power for rejecting simultaneous vicariance, given a reasonable number of intron loci (> 5 loci, 400 bp) and a range of conditions. These results suggest that components of these two composite summary statistics should be used in future simulation-based methods that can simultaneously use a pool of summary statistics to test the comparative phylogeographic hypotheses we consider here.
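The hypothesis-testing logic described above can be sketched as a simple parametric-bootstrap rejection test: simulate many comparative data sets under simultaneous vicariance, compute the variance of Tajima's D across taxon pairs in each, and compare the observed variance with that null distribution. The per-pair simulator below is a toy stand-in for a real coalescent simulation (which would depend on the demographic model, divergence time, and loci assumed), so only the testing skeleton should be taken literally.

```python
import numpy as np

def simulate_tajimas_d_per_pair(n_pairs, rng):
    """Toy stand-in for a coalescent simulator: returns one Tajima's D value per
    taxon pair under simultaneous vicariance. A real implementation would
    simulate several intron loci per pair under the assumed demographic model
    and divergence time, then summarize D across loci."""
    return rng.normal(loc=0.0, scale=0.4, size=n_pairs)   # arbitrary toy noise

def test_simultaneous_vicariance(observed_d, n_sims=2000, seed=0):
    """Parametric-bootstrap style test: is the observed across-pair variance in
    Tajima's D larger than expected under simultaneous vicariance?"""
    rng = np.random.default_rng(seed)
    obs_stat = np.var(observed_d, ddof=1)
    null_stats = np.array([
        np.var(simulate_tajimas_d_per_pair(len(observed_d), rng), ddof=1)
        for _ in range(n_sims)
    ])
    # Fraction of simulated variances at least as large as the observed one.
    p_value = (1 + np.sum(null_stats >= obs_stat)) / (n_sims + 1)
    return obs_stat, p_value

# Example: eight taxon pairs with rather heterogeneous D values.
obs = np.array([-1.2, 0.9, -0.3, 1.4, -0.8, 0.2, 1.1, -1.5])
print(test_simultaneous_vicariance(obs))
```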
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make the optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific software application is the key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process, and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code that repeat during execution, and then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented to estimate the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during the execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
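The average-case idea behind the second approach can be illustrated with a toy sketch: count the comparisons insertion sort performs, average that count over random inputs, and feed it into a linear energy model. The energy coefficients below are placeholders rather than the LEON3 values measured in the thesis, and the averaging here is empirical rather than derived with MOQA.

```python
import random

def insertion_sort_comparisons(a):
    """Sort a copy of `a` with insertion sort and return the number of key comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

def average_comparisons(n, trials=2000, seed=0):
    """Empirical average number of comparisons for random inputs of size n
    (theory gives roughly n**2 / 4 for large n)."""
    rng = random.Random(seed)
    total = sum(insertion_sort_comparisons(rng.sample(range(10 * n), n))
                for _ in range(trials))
    return total / trials

# Linear energy model E = e_fixed + e_cmp * C_avg, with placeholder coefficients
# in nJ; a real model would fit such coefficients from processor power measurements.
E_FIXED_NJ, E_CMP_NJ = 50.0, 1.2
n = 64
c_avg = average_comparisons(n)
print(f"n={n}: avg comparisons ~ {c_avg:.1f}, modelled energy ~ "
      f"{E_FIXED_NJ + E_CMP_NJ * c_avg:.1f} nJ")
```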
Abstract:
Computational model-based simulation methods were developed for the modelling of bioaffinity assays. Bioaffinity-based methods are widely used to quantify a biological substance in biological research, development, and routine clinical in vitro diagnostics. Bioaffinity assays are based on the high affinity and structural specificity between the binding biomolecules. The simulation methods developed are based on a mechanistic assay model, which relies on chemical reaction kinetics and describes the formation of the bound component as a function of time from the initial binding interaction. The simulation methods focused on studying the behaviour and reliability of bioaffinity assays and the possibilities that modelling binding reaction kinetics provides, such as predicting assay results even before the binding reaction has reached equilibrium. A rapid quantitative result from a clinical bioaffinity assay sample can be very significant: for example, even the smallest elevation of a heart muscle marker reveals a cardiac injury. The simulation methods were used to identify critical error factors in rapid bioaffinity assays. A new kinetic calibration method was developed to calibrate a measurement system from kinetic measurement data utilizing only one standard concentration. A node-based method was developed to model multi-component binding reactions, which have been a challenge for traditional numerical methods. The node-based method was also used to model protein adsorption as an example of nonspecific binding of biomolecules. These methods have been compared with experimental data from practice and can be utilized in in vitro diagnostics, drug discovery, and medical imaging.
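The mechanistic, kinetics-driven modelling described above can be illustrated with a minimal sketch of a single bimolecular binding reaction A + B ⇌ AB, integrated with a forward-Euler step. The rate constants and concentrations are arbitrary illustrative values, and real assay models track additional components, labels, and nonspecific binding.

```python
import numpy as np

def simulate_binding(a0, b0, k_on, k_off, t_end, dt=0.01):
    """Forward-Euler integration of d[AB]/dt = k_on*[A]*[B] - k_off*[AB]
    for a single bimolecular reaction A + B <-> AB (well-mixed, closed system).
    Concentrations in nM, k_on in 1/(nM*s), k_off in 1/s, time in s."""
    steps = int(t_end / dt)
    ab = np.zeros(steps + 1)
    t = np.linspace(0.0, t_end, steps + 1)
    for i in range(steps):
        a = a0 - ab[i]                       # free A by mass conservation
        b = b0 - ab[i]                       # free B by mass conservation
        d_ab = k_on * a * b - k_off * ab[i]  # net rate of complex formation
        ab[i + 1] = ab[i] + dt * d_ab
    return t, ab

# Illustrative run: a slow-dissociating pair observed before equilibrium,
# which is the regime where kinetic (pre-equilibrium) prediction is useful.
t, ab = simulate_binding(a0=1.0, b0=10.0, k_on=0.005, k_off=1e-4, t_end=600.0)
print(f"[AB] after {t[-1]:.0f} s: {ab[-1]:.3f} nM (of 1.0 nM total A)")
```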
Abstract:
Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging soft tissue. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented, which takes both the particle- and wave-like properties of X-rays into consideration. A split approach is used that combines a Monte Carlo (MC) based sample part with a wave-optics-based propagation part, so that both sets of properties are accounted for. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can thus be used for the simulation of phase-sensitive X-ray imaging, for instance of grating interferometry or propagation-based imaging.
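The wave-optics half of such a split framework typically reduces to free-space propagation of a complex wavefront. A minimal sketch of an angular-spectrum propagator is shown below (monochromatic field, square grid, illustrative parameters); the Monte Carlo sample-interaction part is deliberately left out.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_size, distance):
    """Propagate a complex 2D wavefront over `distance` in free space using
    the angular spectrum method (scalar propagator, evanescent waves
    suppressed). Units: metres."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    # Longitudinal component; clip to zero where the wave is evanescent.
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Illustrative use: a weakly phase-shifting disc imaged 1 m downstream,
# as in propagation-based phase-contrast imaging (parameters are examples).
n, px, lam = 512, 1e-6, 5e-11          # grid size, 1 um pixels, ~25 keV X-rays
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * px
sample_phase = -0.5 * (x**2 + y**2 < (50e-6) ** 2)   # small phase shift in a disc
exit_wave = np.exp(1j * sample_phase)
intensity = np.abs(angular_spectrum_propagate(exit_wave, lam, px, 1.0)) ** 2
print("edge-enhancement contrast:", intensity.max() - intensity.min())
```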
Abstract:
Gene-based tests of association are frequently applied to common SNPs (MAF>5%) as an alternative to single-marker tests. In this analysis we conduct a variety of simulation studies applied to five popular gene-based tests investigating general trends related to their performance in realistic situations. In particular, we focus on the impact of non-causal SNPs and a variety of LD structures on the behavior of these tests. Ultimately, we find that non-causal SNPs can significantly impact the power of all gene-based tests. On average, we find that the “noise” from 6–12 non-causal SNPs will cancel out the “signal” of one causal SNP across five popular gene-based tests. Furthermore, we find complex and differing behavior of the methods in the presence of LD within and between non-causal and causal SNPs. Ultimately, better approaches for a priori prioritization of potentially causal SNPs (e.g., predicting functionality of non-synonymous SNPs), application of these methods to sequenced or fully imputed datasets, and limited use of window-based methods for assigning inter-genic SNPs to genes will improve power. However, significant power loss from non-causal SNPs may remain unless alternative statistical approaches robust to the inclusion of non-causal SNPs are developed.
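The dilution effect described above can be reproduced with a small simulation: generate a quantitative trait from one causal SNP, add varying numbers of independent non-causal SNPs to the "gene", and track the power of a generic gene-based test. The test used here is a simple Bonferroni-corrected minimum p-value over per-SNP regressions, standing in for the five published tests compared in the paper; sample size, effect size, and allele frequency are illustrative.

```python
import numpy as np
from scipy import stats

def gene_power(n_noncausal, n_ind=1000, maf=0.3, beta=0.15, n_reps=500,
               alpha=0.05, seed=1):
    """Power of a Bonferroni min-p gene-based test with one causal SNP and
    `n_noncausal` independent non-causal SNPs (generic stand-in test)."""
    rng = np.random.default_rng(seed)
    hits = 0
    n_snps = 1 + n_noncausal
    for _ in range(n_reps):
        # Independent SNP genotypes coded 0/1/2 under Hardy-Weinberg.
        geno = rng.binomial(2, maf, size=(n_ind, n_snps)).astype(float)
        y = beta * geno[:, 0] + rng.standard_normal(n_ind)   # SNP 0 is causal
        pvals = [stats.linregress(geno[:, j], y).pvalue for j in range(n_snps)]
        hits += min(pvals) * n_snps < alpha                  # Bonferroni min-p
    return hits / n_reps

for k in (0, 6, 12, 24):
    print(f"{k:2d} non-causal SNPs: power ~ {gene_power(k):.2f}")
```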
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend towards Intellectual Property (IP) core integration into complex system-on-chip (SoC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug; however, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input-space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. For this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit exactly to the PD-based input space. Both the input stimuli and coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
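The constrained-random idea behind pruning the input space can be sketched, in a language-agnostic way, as sampling only from explicitly enumerated valid parameter combinations. The parameter names, domains, and constraint below are invented for illustration and do not reproduce the tool's PD formalism.

```python
import itertools
import math
import random

# Hypothetical parameter domains for a bus-transfer stimulus (illustrative only).
PARAMETER_DOMAINS = {
    "burst_len": [1, 4, 8, 16],
    "access":    ["read", "write"],
    "width":     [8, 16, 32],
}

def is_valid(stimulus):
    """Invented constraint: long bursts are only meaningful for 32-bit accesses."""
    return not (stimulus["burst_len"] == 16 and stimulus["width"] != 32)

def valid_input_space(domains):
    """Enumerate the cross product of the domains and keep only valid scenarios."""
    keys = list(domains)
    for combo in itertools.product(*(domains[k] for k in keys)):
        stimulus = dict(zip(keys, combo))
        if is_valid(stimulus):
            yield stimulus

space = list(valid_input_space(PARAMETER_DOMAINS))
raw = math.prod(len(v) for v in PARAMETER_DOMAINS.values())
rng = random.Random(42)
print(f"{len(space)} valid scenarios out of {raw} raw combinations; "
      f"sample stimulus: {rng.choice(space)}")

# A coverage model that fits the pruned space exactly: the set of valid
# scenarios not yet exercised; coverage closes when the set is empty.
uncovered = {tuple(sorted(s.items())) for s in space}
stim = rng.choice(space)
uncovered.discard(tuple(sorted(stim.items())))
print(f"coverage after one random stimulus: {1 - len(uncovered) / len(space):.1%}")
```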
Abstract:
Here, we examine morphological changes in cortical thickness of patients with Alzheimer's disease (AD) using image analysis algorithms for brain structure segmentation, and study the automatic classification of AD patients using cortical and volumetric data. Cortical thickness of AD patients (n = 14) was measured using MRI cortical surface-based analysis and compared with healthy subjects (n = 20). Data were analyzed using an automated algorithm for tissue segmentation and classification. A Support Vector Machine (SVM) was applied over the volumetric measurements of subcortical and cortical structures to separate AD patients from controls. The group analysis showed cortical thickness reduction in the superior temporal lobe, parahippocampal gyrus, and entorhinal cortex in both hemispheres. We also found cortical thinning in the isthmus of the cingulate gyrus and middle temporal gyrus in the right hemisphere, as well as a reduction of the cortical mantle in areas previously shown to be associated with AD. We also confirmed that automatic classification algorithms (SVM) can be helpful in distinguishing AD patients from healthy controls. Moreover, the same areas implicated in the pathogenesis of AD were the main parameters driving the classification algorithm. While the patient sample used in this study was relatively small, we expect that using a database of regional volumes derived from MRI scans of a large number of subjects will increase the power of SVM-based AD patient identification.
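The classification step can be sketched with a standard scikit-learn pipeline: standardize the regional volume/thickness features, fit a linear-kernel SVM, and estimate accuracy with cross-validation. The feature matrix below is randomly generated stand-in data with the study's group sizes (14 AD, 20 controls), not the actual MRI measurements.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: 34 subjects (14 AD, 20 controls), 12 regional volume/thickness
# features; AD rows get a small simulated atrophy offset on a few regions.
X = rng.normal(size=(34, 12))
X[:14, :4] -= 0.8                     # simulated thinning in AD-related regions
y = np.array([1] * 14 + [0] * 20)     # 1 = AD, 0 = control

# Linear-kernel SVM with feature standardization, evaluated by stratified CV.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Fitting on all data exposes the per-feature weights of the linear SVM,
# i.e. which regions drive the separation between the two groups.
clf.fit(X, y)
weights = clf.named_steps["svc"].coef_.ravel()
print("most discriminative feature index:", int(np.abs(weights).argmax()))
```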
Abstract:
The spread and globalization of distributed generation (DG) in recent years has strongly influenced the changes taking place in Electricity Markets (EMs). DG has brought a large number of new players into the EMs, thereby increasing the complexity of these markets. Simulation based on multi-agent systems appears to be a good way of analyzing players’ behavior and interactions, especially from a coalition perspective, and the effects these players have on the markets. MASCEM – Multi-Agent System for Competitive Electricity Markets – was created to permit the study of market operation with several different players and market mechanisms. MASGriP – Multi-Agent Smart Grid Platform – is being developed to facilitate the simulation of micro grid (MG) and smart grid (SG) concepts under multiple different scenarios. This paper presents an intelligent management method for MGs and SGs. The simulation of different control methods provides an advantage in comparing possible approaches for responding to market events. Players utilize electric vehicles’ batteries and participate in Demand Response (DR) contracts, taking advantage of the best opportunities brought by the use of all resources to improve their actions in response to MG and/or SG requests.