Abstract:
This research focuses on the design and implementation of a tool to speed up the development and deployment of heterogeneous wireless sensor networks. The THAWS (Tyndall Heterogeneous Automated Wireless Sensors) tool can be used to quickly create and configure application-specific sensor networks. THAWS presents the user with a choice of options in order to characterise the desired functionality of the network. With this information, THAWS generates the necessary code from pre-written templates and well-tested, optimized software modules. This code is then automatically compiled into binary files for each node in the network. Wireless programming of the network completes the task of targeting the network at a specific sensing application. THAWS is an adaptable tool that works with both homogeneous and heterogeneous networks built from wireless sensor nodes developed at the Tyndall National Institute.
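To make the template-driven generation step concrete, the following is a minimal sketch of the general pattern: fill per-node templates, then hand the result to a cross-compiler and over-the-air programmer. It is illustrative only; the template fields, node list, and file names are hypothetical and are not THAWS's actual interface.

```python
# Illustrative only: a minimal template-driven generator in the spirit of
# THAWS-style tools. All names (NODE_TEMPLATE, nodes, period_ms) are
# hypothetical; the abstract does not describe THAWS's actual templates.
from string import Template

NODE_TEMPLATE = Template("""\
#define NODE_ID        $node_id
#define SENSOR_TYPE    $sensor      /* e.g. temperature, accelerometer */
#define SAMPLE_PERIOD  $period_ms   /* milliseconds */
""")

nodes = [
    {"node_id": 1, "sensor": "TEMP",  "period_ms": 1000},
    {"node_id": 2, "sensor": "ACCEL", "period_ms": 200},
]

for cfg in nodes:
    source = NODE_TEMPLATE.substitute(cfg)
    with open(f"node_{cfg['node_id']}_config.h", "w") as f:
        f.write(source)
    # A real tool would now invoke the cross-compiler to produce a
    # per-node binary and push it over the air to the target node.
```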
Abstract:
An aim of proactive risk management strategies is the timely identification of safety-related risks. One way to achieve this is by deploying early warning systems, which aim to provide timely information on the presence of potential threats to a system, on the system's level of vulnerability, or on both. This information can then be used to take proactive safety measures. The United Nations has recommended that any early warning system have four essential elements: a risk knowledge element, a monitoring and warning service, dissemination and communication, and a response capability. This research deals with the risk knowledge element, which contains models of possible accident scenarios. These accident scenarios are created using hazard analysis techniques, which can be categorised as traditional or contemporary. Traditional hazard analysis techniques assume that accidents occur due to a sequence of events, whereas contemporary techniques assume that safety is an emergent property of complex systems. The problem is that no software editor is available that lets analysts create models of accident scenarios based on contemporary hazard analysis techniques and, at the same time, generate computer code representing those models. This research aims to enhance the process of generating computer code from graphical models that associate early warning signs and causal factors with a hazard, based on contemporary hazard analysis techniques. For this purpose, the thesis investigates the use of Domain Specific Modeling (DSM) technologies. The contribution of this thesis is the design and development of a set of three graphical Domain Specific Modeling Languages (DSMLs) that, combined, provide all of the constructs needed for safety experts and practitioners to conduct hazard and early warning analysis based on a contemporary hazard analysis approach. The languages represent the elements and relations necessary to define accident scenarios and their associated early warning signs. The three DSMLs were incorporated into a prototype software editor that enables safety scientists and practitioners to create and edit hazard and early warning analysis models in a usable manner and, as a result, to generate executable code automatically. This research demonstrates that DSM technologies can be used to develop a set of three DSMLs that allow users to conduct hazard and early warning analysis in a more usable manner. Furthermore, the three DSMLs and their dedicated editor, which are presented in this thesis, may provide a significant enhancement to the process of creating the risk knowledge element of computer-based early warning systems.
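As an illustration of the model-to-code step that such an editor automates, here is a minimal sketch in which a hazard model (a hazard, its causal factors, and their early warning signs) is walked to emit executable monitoring checks. The model schema, names, and thresholds are hypothetical; the thesis's DSMLs are graphical and far richer than this flat dictionary.

```python
# Illustrative only: the thesis's DSMLs and editor are not specified here.
# This sketch shows the general model-to-code pattern: walk a hazard model
# (hazard <- causal factors <- early warning signs) and emit executable checks.
model = {
    "hazard": "tank_overpressure",
    "causal_factors": [
        {"name": "relief_valve_stuck", "warning_signs": ["valve_cycle_count > 1e5"]},
        {"name": "runaway_reaction",   "warning_signs": ["temp_rate > 2.0", "pressure > 8.5"]},
    ],
}

lines = [f"def check_{model['hazard']}(sensors):", "    alerts = []"]
for factor in model["causal_factors"]:
    for sign in factor["warning_signs"]:
        variable, *condition = sign.split()
        lines.append(f"    if sensors['{variable}'] {' '.join(condition)}:")
        lines.append(f"        alerts.append('{factor['name']}')")
lines.append("    return alerts")
print("\n".join(lines))   # emits a runnable check_tank_overpressure function
```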
Abstract:
This work considers the static calculation of a program's average-case time. The number of systems that currently tackle this research problem is quite small, due to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and each is discussed individually in this work, only one of them forms the basis of this research: the system known as MOQA. The MOQA system consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labeling distribution. This research develops and evaluates the MOQA language implementation and adds to the functions already available in the language. Furthermore, the theory that backs MOQA is generalised, and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Some of the MOQA applications and extensions suggested in other works are also examined here; for example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses carried out during this research reveal some of MOQA's strengths and weaknesses. This thesis aims to be pragmatic in evaluating the current MOQA theory, the advancements set forth in the following work, and the benefits of MOQA compared to similar systems. Succinctly, this work's significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA's accomplishments and a serious deliberation of the opportunities available to MOQA in the future.
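For readers unfamiliar with the quantity involved, the following is a generic statement of average-case time, not MOQA's own compositional calculus (which tracks data-structure shapes and labeling distributions through each operation):

```latex
% Generic definition: the average-case time of a program P on inputs of
% size n drawn from a distribution D_n is the expectation
\[
  T_{\mathrm{avg}}(n) \;=\; \mathbb{E}_{I \sim D_n}\big[T_P(I)\big]
  \;=\; \sum_{I \in \mathcal{I}_n} \Pr[I]\; T_P(I).
\]
% Example: successful linear search with the target equally likely in each
% of the n positions performs (1/n) * sum_{i=1}^{n} i = (n+1)/2
% comparisons on average.
```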
Abstract:
A model for understanding the formation and propagation of modes in curved optical waveguides is developed. Numerical methods are developed, implemented, and tested for the calculation of mode profiles and propagation constants in two-dimensional curved waveguides, and for the analysis of mode propagation in three-dimensional curved optical waveguides. A technique for the design of curved waveguides with reduced transition loss is presented, together with a scheme for drawing these new waveguides while ensuring that they have constant width. Claims about the waveguide design technique are substantiated through numerical simulations.
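As background for the kind of eigenproblem such methods solve, below is a textbook one-dimensional finite-difference mode solver for a straight slab waveguide; the thesis's two- and three-dimensional curved-waveguide methods are substantially more involved, and all parameter values here are illustrative.

```python
# Illustrative only: a textbook 1-D finite-difference scalar mode solver for
# a straight slab waveguide, showing the eigenproblem structure
#   d^2E/dx^2 + k0^2 n(x)^2 E = beta^2 E.
import numpy as np

wavelength = 1.55e-6                        # metres (illustrative)
k0 = 2 * np.pi / wavelength
x = np.linspace(-4e-6, 4e-6, 801)
dx = x[1] - x[0]
n = np.where(np.abs(x) < 1e-6, 3.5, 1.5)    # core |x| < 1 um, else cladding

# Second-derivative operator (Dirichlet boundaries) plus index profile.
main = -2.0 * np.ones_like(x)
off = np.ones(len(x) - 1)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
A = D2 + np.diag((k0 * n) ** 2)

vals, vecs = np.linalg.eigh(A)              # eigenvalues are beta^2
beta = np.sqrt(vals[vals > (k0 * 1.5) ** 2])   # guided modes only
print("effective indices:", beta / k0)      # between n_cladding and n_core
```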
Abstract:
In decision making problems where we need to choose a particular decision or alternative from a set of possible choices, we often have preferences that determine whether we prefer one decision over another. When these preferences give us a complete ordering on the decisions, it is easy to choose the best decision, or one of the best. However, it often occurs that the preference relation is only a partial order, and there is no single best decision. In this thesis, we look at what happens when we have such a partial order over a set of decisions, in particular when we have multiple orderings on the set, and we present a framework for qualitative decision making. We look at the different natural notions of optimal decision that arise in this framework, which give us different optimality classes, and we examine the relationships between these classes. We then look in particular at a qualitative preference relation called Sorted-Pareto Dominance, an extension of Pareto Dominance, and we give a semantics for this relation as one that is compatible with any order-preserving mapping of an ordinal preference scale to a numerical one. We apply Sorted-Pareto dominance in a soft constraints setting, solving problems in which the soft constraints associate qualitative preferences with decisions. We also examine the Sorted-Pareto dominance relation in the context of our qualitative decision making framework, looking at the relevant optimality classes for the Sorted-Pareto case; these give us classes of decisions that are necessarily optimal, and decisions that are optimal for some choice of mapping from the ordinal scale to a quantitative one. We provide some empirical analysis of Sorted-Pareto constraint problems and examine the resulting optimality classes.
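A minimal sketch of the Sorted-Pareto dominance test follows, under the assumption (ours, for concreteness) that decisions are evaluated by cost vectors on a common ordinal scale with lower values preferred; the thesis fixes its own conventions.

```python
# A minimal sketch of Sorted-Pareto dominance, assuming cost vectors where
# lower is better. u dominates v iff, after sorting both vectors, u is
# componentwise <= v and strictly better somewhere. This comparison is
# consistent with summing any order-preserving rescaling of the scale,
# matching the semantics described in the abstract.
def sorted_pareto_dominates(u, v):
    su, sv = sorted(u), sorted(v)
    return all(a <= b for a, b in zip(su, sv)) and su != sv

# Plain Pareto compares positionwise; Sorted-Pareto compares sorted values:
print(sorted_pareto_dominates([3, 1], [2, 3]))  # True: sorted [1,3] <= [2,3]
print(sorted_pareto_dominates([2, 2], [1, 3]))  # False: incomparable
```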
Abstract:
In many real world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making the decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches: the first is based on linear programming; the second uses a distance-based algorithm (built on a measure of the distance between a point and a convex cone); the third uses matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision making under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
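To illustrate the ϵ-covering idea used to tame prohibitively large Pareto sets, here is a minimal greedy sketch under multiplicative ϵ-dominance for positive utility vectors being maximized; the thesis's exact construction and guarantees may differ.

```python
# A minimal sketch of an epsilon-covering for a set of (positive) utility
# vectors under maximization, using multiplicative epsilon-dominance:
# u epsilon-covers v when (1 + eps) * u >= v componentwise. This greedy
# version shows how a large Pareto set can be thinned with bounded loss.
def eps_cover(points, eps):
    cover = []
    for p in sorted(points, reverse=True):          # heuristic ordering
        if not any(all((1 + eps) * c >= x for c, x in zip(q, p)) for q in cover):
            cover.append(p)
    return cover

pareto = [(10.0, 1.0), (9.9, 1.02), (6.0, 5.0), (5.9, 5.05), (1.0, 10.0)]
print(eps_cover(pareto, eps=0.05))
# keeps one representative per cluster of nearby Pareto points:
# [(10.0, 1.0), (6.0, 5.0), (1.0, 10.0)]
```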
Abstract:
This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset.
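A stripped-down version of the filter-and-hedge loop might look as follows; the predictor here is a simple exponentially smoothed Bernoulli product model and the threshold update is a plain online gradient step, so all parameter choices and names below are ours rather than the paper's.

```python
# Illustrative only: a stripped-down filter-and-hedge loop in the paper's
# spirit, for binary vectors. Parameter names and values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, T = 50, 500
p_hat = np.full(d, 0.5)               # running per-coordinate mean estimates
tau, eta, alpha = -40.0, 0.5, 0.05    # threshold, its step size, smoothing

for t in range(T):
    x = (rng.random(d) < 0.1).astype(float)
    if t == 250:
        x = (rng.random(d) < 0.9).astype(float)   # injected anomaly
    # Filtering: log-likelihood ("belief") of x under the current model.
    eps = 1e-6
    ll = np.sum(x * np.log(p_hat + eps) + (1 - x) * np.log(1 - p_hat + eps))
    flagged = ll < tau
    if flagged:
        print(f"t={t}: anomaly flagged (log-likelihood {ll:.1f})")
    # Hedging: nudge the threshold using (simulated) end-user feedback.
    truth = (t == 250)
    if flagged != truth:
        tau += eta if truth else -eta   # missed anomaly: raise; false alarm: lower
    p_hat = (1 - alpha) * p_hat + alpha * x       # model update
```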
Abstract:
This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies that generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in the ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speed-ups and, critically, enable statistical analyses that would otherwise not be performed due to compute time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context.
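The computational kernel motivating GPU implementations in mixture modeling is the evaluation of component responsibilities for every data point. A minimal vectorized sketch is below (spherical Gaussian components; all sizes are ours); written this way, it maps onto a GPU by swapping numpy for a GPU array library such as CuPy or JAX's numpy.

```python
# Illustrative only: the per-iteration bottleneck of mixture fitting, namely
# responsibilities for n points under k components, done as array operations.
import numpy as np

n, d, k = 100_000, 10, 64
rng = np.random.default_rng(1)
X = rng.normal(size=(n, d))
mu = rng.normal(size=(k, d))
log_w = np.log(np.full(k, 1.0 / k))
var = 1.0                              # shared spherical variance, for brevity

# Squared distances for all (point, component) pairs as one (n, k) matrix.
sq = (X**2).sum(1)[:, None] - 2.0 * X @ mu.T + (mu**2).sum(1)[None, :]
log_p = log_w - 0.5 * (sq / var + d * np.log(2 * np.pi * var))
log_p -= log_p.max(axis=1, keepdims=True)        # stabilize the exponentials
resp = np.exp(log_p)
resp /= resp.sum(axis=1, keepdims=True)          # responsibilities, rows sum to 1
```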
Abstract:
We describe a general technique for determining upper bounds on maximal values (or lower bounds on minimal costs) in stochastic dynamic programs. In this approach, we relax the nonanticipativity constraints that require decisions to depend only on the information available at the time a decision is made and impose a "penalty" that punishes violations of nonanticipativity. In applications, the hope is that this relaxed version of the problem will be simpler to solve than the original dynamic program. The upper bounds provided by this dual approach complement lower bounds on values that may be found by simulating with heuristic policies. We describe the theory underlying this dual approach and establish weak duality, strong duality, and complementary slackness results that are analogous to the duality results of linear programming. We also study properties of good penalties. Finally, we demonstrate the use of this dual approach in an adaptive inventory control problem with an unknown and changing demand distribution and in valuing options with stochastic volatilities and interest rates. These are complex problems of significant practical interest that are quite difficult to solve to optimality. In these examples, our dual approach requires relatively little additional computation and leads to tight bounds on the optimal values.
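Schematically, the weak duality underlying such bounds can be stated as follows, in notation of ours rather than the paper's: for any penalty z that is dual feasible, i.e., with E[z(α)] ≤ 0 for every nonanticipative policy α,

```latex
% Schematic weak-duality statement (notation ours, not the paper's):
\[
  V^\ast \;=\; \sup_{\alpha \,\in\, \mathcal{A}_{\mathrm{NA}}}
      \mathbb{E}\Big[\textstyle\sum_{t} r_t(\alpha)\Big]
  \;\le\;
  \mathbb{E}\Big[\,\sup_{a}\Big(\textstyle\sum_{t} r_t(a) - z(a)\Big)\Big],
\]
% where the inner supremum ranges over all action sequences chosen with
% full knowledge of the sample path: nonanticipativity is relaxed, and
% violations are punished through the penalty z.
```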
Abstract:
The health of clergy is important, and clergy may find health programming tailored to them more effective. Little is known about existing clergy health programs. We contacted Protestant denominational headquarters and searched academic databases and the Internet. We identified 56 clergy health programs and categorized them into prevention and personal enrichment; counseling; marriage and family enrichment; peer support; congregational health; congregational effectiveness; denominational enrichment; insurance/strategic pension plans; and referral-based programs. Only 13 of the programs engaged in outcomes evaluation. Using the Socioecological Framework, we found that many programs support individual-level and institutional-level changes, but few programs support congregational-level changes. Outcome evaluation strategies and a central repository for information on clergy health programs are needed.
Abstract:
This performance dissertation traced the evolution of the Russian romance from 1800 to the present. The Russian romance is a relatively unknown and greatly neglected genre of classical art song. It is commonly believed that the Russian romance began with Dargomizhsky and Glinka and proceeded directly to Tchaikovsky and Rachmaninoff. Forgotten are the composers before Dargomizhsky and Glinka, the bridge composers, and the post-Tchaikovsky and post-Rachmaninoff composers. This may be, in part, because of the difficulty of obtaining Russian vocal scores. While most of the musical world is acquainted with the magnificent Russian instrumental music, the "true soul" of the Russian people lies in its romances. I presented examples of the two different schools of composition, reflecting the philosophical differences in thinking that came about in the 1860s: (1) the Russian National school and (2) the Western European school. Each school's influence on generations of Russian composers and their pupils has been represented in the recital programs. Also represented was the effect of the October Revolution on music and on the voice of the Russian people, Anna Akhmatova. The amount of music that could be included in this dissertation greatly exceeds the available performance time and represents a selected portion of the repertoire. The first recital included repertoire from the beginning of the romance in the early nineteenth century to the beginning of the twentieth century, and the second recital focused on the music of the twentieth century before and after the October Revolution. Finally, given the status of Anna Akhmatova and her contributions, the third recital was devoted entirely to her poetry. The "Russian soul" is one of deep, heartfelt emotion and sorrow. Happiness and joy are also present, but always with a touch of melancholy. The audience did not simply go through a musical journey, but took a journey through the "Russian soul". The strong response of the audience to these recitals confirmed my belief that this repertoire deserves a prominent place in recital programming.
Abstract:
Gemstone Team Peace in Prisons
Abstract:
Programmed death is often associated with a bacterial stress response. This behavior appears paradoxical, as it offers no benefit to the individual. The paradox can be explained if the death is 'altruistic': the killing of some cells can benefit the survivors through the release of 'public goods'. However, the conditions under which bacterial programmed death becomes advantageous have not been unambiguously demonstrated experimentally. Here, we determined such conditions by engineering tunable, stress-induced altruistic death in the bacterium Escherichia coli. Using a mathematical model, we predicted the existence of an optimal programmed death rate that maximizes population growth under stress. We further predicted that altruistic death could generate the 'Eagle effect', a counter-intuitive phenomenon where bacteria appear to grow better when treated with higher antibiotic concentrations. In support of these modeling insights, we experimentally demonstrated both the optimality of the programmed death rate and the Eagle effect using our engineered system. Our findings fill a critical conceptual gap in the analysis of the evolution of bacterial programmed death, and have implications for the design of antibiotic treatments.
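The following toy model, invented here purely for illustration and not the authors' model, shows how an intermediate programmed-death rate can be optimal when dying cells release a public good that relieves stress for the survivors.

```python
# Illustrative only: a hypothetical ODE model (all parameters and functional
# forms are ours). Dying cells release a public good G that boosts growth;
# too little death starves survivors of G, too much death kills the colony.
import numpy as np

def final_population(death_rate, t_end=20.0, dt=1e-3):
    N, G = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        growth = G / (1.0 + G) - 0.4               # net negative without goods
        dN = (growth - death_rate) * N
        dG = 4.0 * death_rate * N - 0.5 * G        # goods released by dying cells
        N, G = max(N + dN * dt, 0.0), max(G + dG * dt, 0.0)
    return N

rates = np.linspace(0.0, 0.5, 26)
pops = [final_population(r) for r in rates]
best = rates[int(np.argmax(pops))]
print(f"optimal death rate ~ {best:.2f}")   # an interior optimum, not zero
```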
Abstract:
BACKGROUND: Scientists rarely reuse expert knowledge of phylogeny, in spite of years of effort to assemble a great "Tree of Life" (ToL). A notable exception involves the use of Phylomatic, which provides tools to generate custom phylogenies from a large, pre-computed, expert phylogeny of plant taxa. This suggests great potential for a more generalized system that, starting with a query consisting of a list of any known species, would rectify non-standard names, identify expert phylogenies containing the implicated taxa, prune away unneeded parts, and supply branch lengths and annotations, resulting in a custom phylogeny suited to the user's needs. Such a system could become a sustainable community resource if implemented as a distributed system of loosely coupled parts that interact through clearly defined interfaces.

RESULTS: With the aim of building such a "phylotastic" system, the NESCent Hackathons, Interoperability, Phylogenies (HIP) working group recruited 2 dozen scientist-programmers to a weeklong programming hackathon in June 2012. During the hackathon (and a three-month follow-up period), 5 teams produced designs, implementations, documentation, presentations, and tests including: (1) a generalized scheme for integrating components; (2) proof-of-concept pruners and controllers; (3) a meta-API for taxonomic name resolution services; (4) a system for storing, finding, and retrieving phylogenies using semantic web technologies for data exchange, storage, and querying; (5) an innovative new service, DateLife.org, which synthesizes pre-computed, time-calibrated phylogenies to assign ages to nodes; and (6) demonstration projects. These outcomes are accessible via a public code repository (GitHub.com), a website (http://www.phylotastic.org), and a server image.

CONCLUSIONS: Approximately 9 person-months of effort (centered on a software development hackathon) resulted in the design and implementation of proof-of-concept software for 4 core phylotastic components, 3 controllers, and 3 end-user demonstration tools. While these products have substantial limitations, they suggest considerable potential for a distributed system that makes phylogenetic knowledge readily accessible in computable form. Widespread use of phylotastic systems will create an electronic marketplace for sharing phylogenetic knowledge that will spur innovation in other areas of the ToL enterprise, such as annotation of sources and methods and third-party methods of quality assessment.
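To give a flavor of the pruning component, here is a toy pruner over a tree represented as nested (name, children) tuples; real phylotastic components operate on standard formats such as Newick or NeXML and preserve branch lengths and annotations, which this sketch omits.

```python
# Illustrative only: a toy "pruner" of the kind the hackathon teams built.
# Given a tree and a set of query taxa, keep only the subtree connecting
# those taxa, suppressing internal nodes left with a single child.
def prune(tree, taxa):
    name, children = tree
    if not children:
        return tree if name in taxa else None
    kept = [p for p in (prune(c, taxa) for c in children) if p is not None]
    if not kept:
        return None
    if len(kept) == 1:
        return kept[0]          # suppress unary internal nodes
    return (name, kept)

tol = ("root", [("A", []), ("n1", [("B", []), ("n2", [("C", []), ("D", [])])])])
print(prune(tol, {"B", "D"}))   # -> ('n1', [('B', []), ('D', [])])
```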
Abstract:
This dissertation consists of three essays on behavioral economics, with a general aim of enriching our understanding of economic decisions using behavioral insights and experimental methodology. Each essay takes on one particular topic with this general aim.
The first chapter studies the savings behavior of the poor. In this project, partnering with a savings product provider in Kenya, we tested the extent to which behavioral interventions and financial incentives can increase the saving rate of individuals with low and irregular income. Our experiment lasted six months and included a total of twelve conditions. The control condition received weekly reminders and balance reporting via text messages. The treatment conditions received, in addition, one of the following interventions: (1) reminder text messages framed as if they came from the participant's child; (2) a golden-colored coin numbered for each week of the trial, on which participants were asked to keep track of their weekly deposits; (3) a match of weekly savings: the match was either 10% or 20% up to a certain amount per week, and was either deposited at the end of each week, or the highest possible match was deposited at the start of each week and adjusted at the end. Among these interventions, by far the most effective was the coin: those in the coin condition saved the highest amount on average, more than twice as much as those in the control condition. We hypothesize that, as a tangible track-keeping object, the coin made subjects remember to save more often. Our results support the line of literature suggesting that saving decisions involve psychological aspects and that policy makers and product designers should take these influences into account.
The second chapter is related to views towards inequality. In this project, we investigate how the perceived fairness of income distributions depends on beliefs about the process that generated the inequality. Specifically, we examine how two crucial features of this process affect fairness views: (1) procedural justice, i.e., equal treatment of all, and (2) agency, i.e., one's ability to determine one's own income. We do this in a lab experiment by varying equality of opportunity (procedural justice) and subjects' ability to make choices, which in turn determines their ability to influence their income (agency). We then elicit ex-post redistribution decisions over the earnings as a function of these two elements. Our results suggest that both agency and procedural justice matter for fairness. Our main findings can be summarized as follows: (1) highlighting the importance of agency, we find that inequality resulting from risk is considered fair only when the risk is chosen freely; (2) highlighting the importance of procedural justice, we find that introducing inequality of opportunity significantly increases redistribution; however, the share of subjects redistributing nothing remains close to the share redistributing fully, revealing an underlying heterogeneity in the population over how fairness views should account for inequality of opportunity.
The third chapter is on morality. In this project, we study whether religious rituals act as an internal reminder of basic moral principles and thus affect moral judgments. To this end, we conducted two survey experiments in Turkey and Israel to test, specifically, the effects of Ramadan and Yom Kippur. The results from the Turkish sample show that Ramadan has a significant effect on the moral judgments of those who report believing in God: believers judged the moral acceptability of ten out of sixty-one actions significantly differently during Ramadan, whereas those who reported not believing in God significantly changed their judgments for only one action. Our results extend the hypothesis, established by lab experiments, that religious reminders have a significant effect on morality, by testing it in the field in the natural environment of religious rituals.
This thesis is part of a broader collaborative research agenda with both colleagues and advisors. The programming, analyses, and writing, as well as any errors in this work, are my own.