7 results for Linear time invariant systems
at Duke University
Abstract:
© 2015 IEEE. We consider the problem of verifying software implementations of linear time-invariant controllers. Commonly, different implementations use different representations of the controller's state, for example due to optimizations in a third-party code generator. To accommodate this variation, we exploit the input-output specification captured by the controller's transfer function and show how to automatically verify the correctness of C controller implementations using a Frama-C/Why3/Z3 toolchain. The scalability of the approach is evaluated using randomly generated controller specifications of realistic size.
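As a concrete illustration of why input-output equivalence, rather than state equality, is the relevant correctness notion, the sketch below builds two state-space realizations of the same transfer function related by a similarity transform and checks that they produce identical outputs. The matrices and the numerical check are illustrative only; they are not the paper's Frama-C/Why3/Z3 verification.

```python
import numpy as np

def simulate(A, B, C, D, u, x0):
    """Simulate a discrete-time LTI controller: x' = A x + B u, y = C x + D u."""
    x = np.array(x0, dtype=float)
    ys = []
    for uk in u:
        ys.append(C @ x + D * uk)
        x = A @ x + B.flatten() * uk
    return np.array(ys)

# Two realizations of the same transfer function: the second is a similarity
# transform x_hat = T x of the first (hypothetical numbers).
A1 = np.array([[0.5, 0.1], [0.0, 0.3]])
B1 = np.array([[1.0], [1.0]])
C1 = np.array([[1.0, 0.0]])
D1 = 0.2

T = np.array([[2.0, 1.0], [0.0, 1.0]])   # any invertible change of state coordinates
A2 = T @ A1 @ np.linalg.inv(T)
B2 = T @ B1
C2 = C1 @ np.linalg.inv(T)
D2 = D1

u = np.random.randn(50)                  # arbitrary test input
y1 = simulate(A1, B1, C1, D1, u, [0, 0])
y2 = simulate(A2, B2, C2, D2, u, [0, 0])
assert np.allclose(y1, y2)               # identical input-output behavior
```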
Abstract:
Long-term soil carbon dynamics may be approximated by networks of linear compartments, permitting theoretical analysis of transit time (i.e., the total time spent by a molecule in the system) and age (the time elapsed since the molecule entered the system) distributions. We compute and compare these distributions for different network configurations, ranging from the simple individual compartment, to series and parallel linear compartments, feedback systems, and models assuming a continuous distribution of decay constants. We also derive the transit time and age distributions of some complex, widely used soil carbon models (the compartmental models CENTURY and Rothamsted, and the continuous-quality Q-Model), and discuss them in the context of long-term carbon sequestration in soils. We show how complex models including feedback loops and slow compartments have distributions with heavier tails than simpler models. Power-law tails emerge when using continuous-quality models, indicating long retention times for an important fraction of soil carbon. The responsiveness of the soil system to changes in decay constants due to altered climatic conditions or plant species composition is found to be stronger when all compartments respond equally to the environmental change, and when the slower compartments are more sensitive than the faster ones or lose more carbon through microbial respiration. Copyright 2009 by the American Geophysical Union.
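For the two simplest configurations the transit-time densities have closed forms (standard results for linear compartment models, restated here for orientation): a single compartment with decay constant k yields an exponential density, and two compartments in series yield its convolution, a hypoexponential density.

```latex
\[
  p_{\mathrm{single}}(T) = k\,e^{-kT}, \qquad \mathbb{E}[T] = \frac{1}{k},
\]
\[
  p_{\mathrm{series}}(T) = \frac{k_1 k_2}{k_2 - k_1}\left(e^{-k_1 T} - e^{-k_2 T}\right),
  \qquad k_1 \neq k_2 .
\]
```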
Abstract:
Software-based control of life-critical embedded systems has become increasingly complex and, to a large extent, has come to determine the safety of the human beings who depend on it. For example, implantable cardiac pacemakers have over 80,000 lines of code which are responsible for maintaining the heart within safe operating limits. As firmware-related recalls accounted for over 41% of the 600,000 devices recalled in the last decade, there is a need for rigorous model-driven design tools to generate verified code from verified software models. To this effect, we have developed the UPP2SF model-translation tool, which facilitates automatic conversion of verified models (in UPPAAL) to models that may be simulated and tested (in Simulink/Stateflow). We describe the translation rules that ensure correct model conversion, applicable to a large class of models. We demonstrate how UPP2SF is used in the model-driven design of a pacemaker whose model is (a) designed and verified in UPPAAL (using timed automata), (b) automatically translated to Stateflow for simulation-based testing, and then (c) automatically generated into modular code for hardware-level integration testing of timing-related errors. In addition, we show how UPP2SF may be used for worst-case execution time estimation early in the design stage. Using UPP2SF, we demonstrate the value of an integrated end-to-end modeling, verification, code-generation and testing process for complex software-controlled embedded systems. © 2014 ACM.
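The timed-automaton side of such a workflow can be pictured with a small, hypothetical encoding: locations, real-valued clocks, and transitions guarded by clock constraints. This Python sketch is purely illustrative; it is not UPP2SF's input or output format, and the pacemaker timing value is made up.

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    source: str
    target: str
    guard: dict                                 # e.g. {"t": (">=", 1.0)} means clock t >= 1.0 s
    resets: list = field(default_factory=list)  # clocks reset to 0 when the transition fires

def enabled(tr: Transition, clocks: dict) -> bool:
    """Check whether every clock constraint in the guard holds."""
    ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}
    return all(ops[op](clocks[c], bound) for c, (op, bound) in tr.guard.items())

# Toy pacemaker-style rule: pace the ventricle once a 1.0 s lower-rate interval
# elapses with no sensed beat (the timing value is invented for illustration).
pace = Transition("wait", "pace", guard={"t": (">=", 1.0)}, resets=["t"])
print(enabled(pace, {"t": 0.4}))   # False: too early to pace
print(enabled(pace, {"t": 1.2}))   # True: interval elapsed
```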
Abstract:
We propose a novel unsupervised approach for linking records across arbitrarily many files, while simultaneously detecting duplicate records within files. Our key innovation is to represent the pattern of links between records as a bipartite graph, in which records are directly linked to latent true individuals, and only indirectly linked to other records. This flexible new representation of the linkage structure naturally allows us to estimate the attributes of the unique observable people in the population, calculate k-way posterior probabilities of matches across records, and propagate the uncertainty of record linkage into later analyses. Our linkage structure lends itself to an efficient, linear-time, hybrid Markov chain Monte Carlo algorithm, which overcomes many obstacles encountered by previously proposed methods of record linkage, despite the high-dimensional parameter space. We assess our results on real and simulated data.
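One practical payoff of the latent-individual representation is that pairwise match probabilities fall out of posterior samples of the record-to-individual assignment. The sketch below illustrates this with made-up samples; it is not the paper's hybrid MCMC sampler.

```python
import numpy as np

# Hypothetical posterior samples of the linkage structure: each row assigns
# every record (across all files) to a latent individual. Two records corefer
# in a sample iff they map to the same latent individual.
samples = np.array([
    [0, 0, 1, 2],   # records 0 and 1 linked to the same person in this sample
    [0, 0, 1, 1],
    [0, 3, 1, 1],
])

def match_probability(samples, i, j):
    """Posterior probability that records i and j refer to the same individual."""
    return np.mean(samples[:, i] == samples[:, j])

print(match_probability(samples, 0, 1))   # 2/3
print(match_probability(samples, 2, 3))   # 2/3
```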
Abstract:
Social attitudes, attitudes toward financial risk and attitudes toward deferred gratification are thought to influence many important economic decisions over the life course. In economic theory, these attitudes are key components in diverse models of behavior, including collective action, saving and investment decisions and occupational choice. Their relevance has been confirmed empirically. Yet the factors that influence them are not well understood. This research evaluates how these attitudes are affected by large disruptive events, namely a natural disaster and a civil conflict, and also by an individual-specific life event, namely having children.
By implementing rigorous empirical strategies drawing on rich longitudinal datasets, this research project advances our understanding of how life experiences shape these attitudes. Moreover, it provides compelling evidence that the observed changes in attitudes are likely to reflect changes in preferences, given that they are not driven merely by changes in financial circumstances. Therefore, the findings of this research project also contribute to the discussion of whether preferences are really fixed, a standard assumption in economics.
In the first chapter, I study how altruistic and trusting attitudes are affected by exposure to the 2004 Indian Ocean tsunami up to ten years after the disaster occurred. Establishing a causal relationship between natural disasters and attitudes presents several challenges, as endogenous exposure and sample selection can confound the analysis. I take on these challenges by exploiting plausibly exogenous variation in exposure to the tsunami and by relying on a longitudinal dataset representative of the pre-tsunami population in two districts of Aceh, Indonesia. The sample is drawn from the Study of the Tsunami Aftermath and Recovery (STAR), a survey with data collected both before and after the disaster and specifically designed to identify the impact of the tsunami. The altruistic and trusting attitudes of the respondents are measured by their behavior in the dictator and trust games. I find that closely witnessing the damage caused by the tsunami, without suffering severe economic damage oneself, increases altruistic and trusting behavior, particularly towards individuals from tsunami-affected communities. Having suffered severe economic damage has no impact on altruistic behavior but may have increased trusting behavior. These effects do not seem to be caused by the consequences of the tsunami on people’s financial situation. Instead, they are consistent with how experiences of loss and solidarity may have shaped social attitudes by affecting empathy and perceptions of who is deserving of aid and trust.
In the second chapter, co-authored with Ryan Brown, Duncan Thomas and Andrea Velasquez, we investigate how attitudes toward financial risk are affected by elevated levels of insecurity and uncertainty brought on by the Mexican Drug War. To conduct our analysis, we pair the Mexican Family Life Survey (MxFLS), a rich longitudinal dataset ideally suited for our purposes, with a dataset on homicide rates at the month and municipality level. The homicide rates capture well the overall crime environment created by the drug war. The MxFLS elicits risk attitudes by asking respondents to choose between hypothetical gambles with different payoffs. Our strategy to identify a causal effect has two key components. First, we implement an individual fixed effects strategy, which allows us to control for all time-invariant heterogeneity. The remaining time-varying heterogeneity is unlikely to be correlated with changes in the local crime environment given the well-documented political origins of the Mexican Drug War; we also show supporting evidence in this regard. The second component of our identification strategy is to use an intent-to-treat approach to shield our estimates from endogenous migration. Our findings indicate that exposure to greater local-area violent crime results in increased risk aversion. This effect is not driven by changes in financial circumstances, but may be explained instead by heightened fear of victimization. Nonetheless, we find that having greater economic resources mitigates the impact. This may be because individuals with greater economic resources are able to avoid crime by affording better transportation or security at work.
The third chapter, co-authored with Duncan Thomas, evaluates whether attitudes toward deferred gratification change after having children. For this study we also exploit the MxFLS, which elicits attitudes toward deferred gratification (commonly known as time discounting) by asking individuals to choose between hypothetical payments at different points in time. We implement a difference-in-differences estimator to control for all time-invariant heterogeneity and show that our results are robust to the inclusion of time-varying characteristics likely correlated with childbirth. We find that becoming a mother increases time discounting, especially in the first two years after childbirth and in particular for women without a spouse at home. Having additional children has no effect, and the effect for men seems to go in the opposite direction. These heterogeneous effects suggest that child rearing may affect time discounting through the stress it generates or through spending needs that were not fully anticipated.
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied for target kinematics modeling in various applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain the measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
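To fix ideas, a Gaussian process model of a time-invariant spatial field reduces, at a finite set of points, to standard GP regression. The following sketch uses an RBF kernel and toy 1-D measurements; the kernel, noise level and data are illustrative choices, not the dissertation's models or datasets.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

x_train = np.array([0.0, 1.0, 2.5, 4.0])
y_train = np.sin(x_train)                 # toy measurements of the spatial field
x_test = np.linspace(0.0, 4.0, 9)

noise = 1e-3
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
K_s = rbf(x_train, x_test)
K_ss = rbf(x_test, x_test)

alpha = np.linalg.solve(K, y_train)
mean = K_s.T @ alpha                              # posterior mean at test points
cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)      # posterior covariance
print(mean.round(2))
```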
Novel information theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed that shows the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed, which are proved to be unbiased, with the variance of the resultant approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
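At a finite set of test points, the prior and posterior Gaussian process marginals are multivariate Gaussians, so the KL divergence between them has the usual closed form. The sketch below computes that finite-dimensional quantity only; the expectation over future measurements developed in the dissertation is not reproduced here.

```python
import numpy as np

def gaussian_kl(m0, S0, m1, S1):
    """KL( N(m0, S0) || N(m1, S1) ) for finite-dimensional Gaussian marginals,
    e.g. a prior and a posterior GP evaluated at the same test points."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Sanity check: identical Gaussians give zero divergence.
m = np.zeros(3)
S = np.eye(3)
print(gaussian_kl(m, S, m, S))   # 0.0
```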
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with data of ocean currents obtained by moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed based on the cumulative lower bound of the novel information theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation based on the novel information theoretic functions are superior at learning the target kinematics with little or no prior knowledge.
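For the discrete-control case, the flavor of a greedy planner can be conveyed by a toy sketch that repeatedly picks the action with the highest expected information value. The utility table and the diminishing-returns update are stand-ins, not the dissertation's GP-KL or DP-GP information functions.

```python
# Hypothetical greedy planner over a discrete sensor control space.
def greedy_plan(utility, horizon):
    """utility: dict mapping control action -> expected information value."""
    plan = []
    for _ in range(horizon):
        best = max(utility, key=utility.get)       # most informative action now
        plan.append(best)
        utility = {a: v * (0.5 if a == best else 1.0)   # crude diminishing returns
                   for a, v in utility.items()}
    return plan

print(greedy_plan({"pan_left": 0.8, "pan_right": 0.6, "stay": 0.3}, horizon=3))
# ['pan_left', 'pan_right', 'pan_left']
```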
Abstract:
With the popularization of GPS-enabled devices such as mobile phones, location data are becoming available at an unprecedented scale. The locations may be collected from many different sources such as vehicles moving around a city, user check-ins in social networks, and geo-tagged micro-blogging photos or messages. Besides the longitude and latitude, each location record may also have a timestamp and additional information such as the name of the location. Time-ordered sequences of these locations form trajectories, which together contain useful high-level information about people's movement patterns.
The first part of this thesis focuses on a few geometric problems motivated by the matching and clustering of trajectories. We first give a new algorithm for computing a matching between a pair of curves under existing models such as dynamic time warping (DTW). The algorithm is more efficient than standard dynamic programming algorithms both theoretically and practically. We then propose a new matching model for trajectories that avoids the drawbacks of existing models. For trajectory clustering, we present an algorithm that computes clusters of subtrajectories, which correspond to common movement patterns. We also consider trajectories of check-ins, and propose a statistical generative model, which identifies check-in clusters as well as the transition patterns between the clusters.
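For reference, the baseline that such matching algorithms improve on is the textbook O(nm) dynamic-programming computation of DTW; a minimal sketch for 1-D trajectories follows. The thesis's faster algorithm and its new matching model are not shown here.

```python
import numpy as np

def dtw(p, q):
    """Textbook O(nm) dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(p), len(q)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(p[i - 1] - q[j - 1])
            # The three allowed warping moves of the standard recurrence.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

print(dtw([0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 3.0]))   # 1.0
```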
The second part of the thesis considers the problem of covering shortest paths in a road network, motivated by an EV charging station placement problem. More specifically, a subset of vertices in the road network is selected to place charging stations so that every shortest path contains enough charging stations and can be traveled by an EV without draining the battery. We first introduce a general technique for the geometric set cover problem. This technique leads to near-linear-time approximation algorithms, which are the state of the art for this problem in either running time or approximation ratio. We then use this technique to develop a near-linear-time algorithm for this shortest-path cover problem.
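The classic greedy heuristic for set cover gives a feel for the covering formulation: treat each shortest path as an element and each candidate station location as the set of paths it covers, then repeatedly pick the location covering the most uncovered paths. The sketch below is this textbook heuristic with toy data, not the thesis's near-linear-time geometric technique.

```python
def greedy_set_cover(universe, sets):
    """Greedy set cover: universe is a set of elements; sets maps name -> covered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set covering the most still-uncovered elements.
        best = max(sets, key=lambda s: len(sets[s] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("remaining elements cannot be covered")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

paths = {1, 2, 3, 4, 5}                                   # shortest paths to cover (toy data)
stations = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}     # paths each candidate station hits
print(greedy_set_cover(paths, stations))                  # ['a', 'c']
```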