774 results for Information privacy Framework
Abstract:
This paper presents a preliminary study of a novel distributed adaptive real-time learning framework for wide area monitoring of power systems integrated with distributed generation using synchrophasor technology. The framework comprises distributed agents (synchrophasors) for autonomous local condition monitoring and fault detection, and a central unit for generating a global view for situation awareness and decision making. Key technologies that can be integrated into this hierarchical distributed learning scheme are discussed to enable real-time information extraction and knowledge discovery for decision making, without the central unit explicitly accumulating and storing all raw data. Based on this, the configuration of a wide area monitoring system for power systems using synchrophasor technology is presented, along with the functionalities of the locally installed open phasor measurement units (OpenPMUs) and the central unit. Initial results on anti-islanding protection using the proposed approach are given to illustrate its effectiveness.
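As a rough illustration of the hierarchical scheme described above, the Python sketch below shows a local agent that monitors its own frequency measurements and forwards only compact event summaries to a central unit, so no raw synchrophasor data needs to be stored centrally. The agent and central-unit classes, the nominal frequency and the deviation threshold are assumptions for illustration, not the authors' OpenPMU implementation.

```python
# Minimal sketch (not the authors' OpenPMU code): each local agent monitors its own
# measurement stream and forwards only compact event summaries, so the central unit
# builds a global view without storing all raw synchrophasor data.
from dataclasses import dataclass
from statistics import mean
from typing import List, Optional

NOMINAL_HZ = 50.0          # assumed nominal system frequency
DEVIATION_THRESHOLD = 0.5  # assumed deviation (Hz) that flags a possible islanding event

@dataclass
class EventSummary:
    agent_id: str
    window_start: int
    mean_frequency: float

class LocalAgent:
    """Autonomous local monitoring: detect abnormal frequency locally."""
    def __init__(self, agent_id: str, window: int = 10):
        self.agent_id = agent_id
        self.window = window
        self.buffer: List[float] = []

    def observe(self, sample_index: int, frequency_hz: float) -> Optional[EventSummary]:
        self.buffer.append(frequency_hz)
        if len(self.buffer) < self.window:
            return None
        avg = mean(self.buffer)
        self.buffer.clear()
        if abs(avg - NOMINAL_HZ) > DEVIATION_THRESHOLD:
            return EventSummary(self.agent_id, sample_index - self.window + 1, avg)
        return None  # raw samples stay local; nothing is sent

class CentralUnit:
    """Global situation awareness built from event summaries only."""
    def __init__(self):
        self.events: List[EventSummary] = []

    def receive(self, event: EventSummary) -> None:
        self.events.append(event)
        agents = sorted({e.agent_id for e in self.events})
        if len(agents) >= 2:
            print("Possible islanding: correlated deviations at", agents)
```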
Abstract:
The research reports on a survey of 228 blind and partially sighted persons in 15 health authorities across Scotland. The survey reports data on patient experience of receiving health information in accessible reading formats. Data indicated that about 90% of blind and partially sighted persons did not receive communications from various NHS health departments in a format that they could read by themselves. The implications for patient privacy and confidentiality, and the wider impact on life and health care, are highlighted. The implications for professional ethical medical practice and for public policy are also discussed. Recommendations for improved practice are made.
Abstract:
People often struggle when making Bayesian probabilistic estimates on the basis of competing sources of statistical evidence. Recently, Krynski and Tenenbaum (Journal of Experimental Psychology: General, 136, 430–450, 2007) proposed that a causal Bayesian framework accounts for people's errors in Bayesian reasoning and showed that, by clarifying the causal relations among the pieces of evidence, judgments on a classic statistical reasoning problem could be significantly improved. We aimed to understand whose statistical reasoning is facilitated by the causal structure intervention. In Experiment 1, although we observed causal facilitation effects overall, the effect was confined to participants high in numeracy. We did not find an overall facilitation effect in Experiment 2 but did replicate the earlier interaction between numerical ability and the presence or absence of causal content. This effect held when we controlled for general cognitive ability and thinking disposition. Our results suggest that clarifying causal structure facilitates Bayesian judgments, but only for participants with sufficient understanding of basic concepts in probability and statistics.
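For readers unfamiliar with the underlying computation, the snippet below works through Bayes' rule on a problem of the kind used in this literature; the prior, sensitivity and false-positive rate are illustrative values, not the figures from Krynski and Tenenbaum's materials.

```python
# Illustrative only: the kind of Bayesian computation the classic statistical
# reasoning problem requires (numbers here are assumed, not taken from the study).
prior = 0.01        # P(condition)
sensitivity = 0.80  # P(positive test | condition)
false_pos = 0.096   # P(positive test | no condition)

p_positive = sensitivity * prior + false_pos * (1 - prior)   # total probability of a positive test
posterior = sensitivity * prior / p_positive                 # Bayes' rule: P(condition | positive)
print(f"P(condition | positive test) = {posterior:.3f}")     # ~0.078, far below most intuitive estimates
```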
Abstract:
Structured parallel programming is recognised as a viable and effective means of tackling parallel programming problems. Recently, a set of simple and powerful parallel building blocks (RISC pb2l) has been proposed to support the modelling and implementation of parallel frameworks. In this work we demonstrate how that same parallel building block set may be used to model both general purpose parallel programming abstractions, not usually listed in classical skeleton sets, and more specialized domain-specific parallel patterns. We show how an implementation of RISC pb2l can be realised via the FastFlow framework and present experimental evidence of the feasibility and efficiency of the approach.
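As a language-neutral illustration of the building-block idea (this is Python, not the actual RISC pb2l or FastFlow C++ API), the sketch below composes a pipeline from a sequential stage and a "farm" that replicates a worker across processes.

```python
# Schematic illustration of composing parallel building blocks: a sequential
# "wrapper" stage feeding a farm of replicated workers. Stage functions and
# parameters are assumed for illustration.
from concurrent.futures import ProcessPoolExecutor

def preprocess(x: int) -> int:       # stage 1: sequential wrapper
    return x * 2

def heavy_worker(x: int) -> int:     # stage 2: worker replicated by the farm
    return sum(i * i for i in range(x))

def pipeline(items, farm_workers: int = 4):
    staged = map(preprocess, items)                     # sequential stage
    with ProcessPoolExecutor(max_workers=farm_workers) as pool:
        return list(pool.map(heavy_worker, staged))     # farm: data-parallel stage

if __name__ == "__main__":
    print(pipeline(range(8))[:3])
```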
Abstract:
Correctly modelling and reasoning with uncertain information from heterogeneous sources in large-scale systems is critical when the reliability of those sources is unknown but we still want to derive adequate conclusions. To this end, context-dependent merging strategies have been proposed in the literature. In this paper we investigate how one such context-dependent merging strategy (originally defined for possibility theory), called largely partially maximal consistent subsets (LPMCS), can be adapted to Dempster-Shafer (DS) theory. We identify those measures for the degree of uncertainty and internal conflict that are available in DS theory and show how they can be used for guiding LPMCS merging. A simplified real-world power distribution scenario illustrates our framework. We also briefly discuss how our approach can be incorporated into a multi-agent programming language, thus leading to better plan selection and decision making.
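For context, the sketch below implements standard Dempster's rule of combination over a small frame of discernment and exposes the conflict mass K; context-dependent merging strategies such as LPMCS can consult measures like K when deciding which sources to combine. The example masses and the two-element frame are assumed for illustration and are not taken from the paper.

```python
# Dempster's rule of combination for two mass functions whose focal elements are
# frozensets, returning the combined masses and the conflict mass K.
from itertools import product

def combine(m1: dict, m2: dict):
    unnormalised = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            unnormalised[inter] = unnormalised.get(inter, 0.0) + x * y
        else:
            conflict += x * y                      # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Sources are totally conflicting")
    return {s: v / (1.0 - conflict) for s, v in unnormalised.items()}, conflict

faulty, ok = frozenset({"faulty"}), frozenset({"ok"})
either = faulty | ok
m_sensor = {faulty: 0.7, either: 0.3}   # assumed example source 1
m_report = {ok: 0.4, either: 0.6}       # assumed example source 2
combined, k = combine(m_sensor, m_report)
print(combined, "conflict K =", round(k, 2))
```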
Abstract:
To provide in-time reactions to a large volume of surveillance data, uncertainty-enabled event reasoning frameworks for CCTV- and sensor-based intelligent surveillance systems have been developed to model and infer events of interest. However, most of the existing works do not consider decision making under uncertainty, which is important for surveillance operators. In this paper, we extend an event reasoning framework for decision support, which enables our framework to predict, rank and alarm threats from multiple heterogeneous sources.
Abstract:
Belief revision performs belief change on an agent’s beliefs when new evidence (either of the form of a propositional formula or of the form of a total pre-order on a set of interpretations) is received. Jeffrey’s rule is commonly used for revising probabilistic epistemic states when new information is probabilistically uncertain. In this paper, we propose a general epistemic revision framework where new evidence is of the form of a partial epistemic state. Our framework extends Jeffrey’s rule with uncertain inputs and covers well-known existing frameworks such as ordinal conditional function (OCF) or possibility theory. We then define a set of postulates that such revision operators shall satisfy and establish representation theorems to characterize those postulates. We show that these postulates reveal common characteristics of various existing revision strategies and are satisfied by OCF conditionalization, Jeffrey’s rule of conditioning and possibility conditionalization. Furthermore, when reducing to the belief revision situation, our postulates can induce Darwiche and Pearl’s postulates C1 and C2.
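As a concrete reminder of the baseline being generalised, the sketch below applies Jeffrey's rule of conditioning, where the revised probability of a world w in block E_i is P(w)/P(E_i) multiplied by the new probability q_i of E_i; the worlds, partition and input probabilities are assumed for illustration.

```python
# Jeffrey's rule of conditioning on probabilistically uncertain evidence over a
# partition of worlds. Example distribution and inputs are assumed.
def jeffrey_revise(prior: dict, partition: dict, new_block_probs: dict) -> dict:
    """prior: P over worlds; partition: world -> block label; new_block_probs: label -> q_i."""
    block_mass = {}
    for w, p in prior.items():
        block_mass[partition[w]] = block_mass.get(partition[w], 0.0) + p
    return {w: prior[w] / block_mass[partition[w]] * new_block_probs[partition[w]]
            for w in prior}

prior = {"rain&wet": 0.28, "rain&dry": 0.02, "norain&wet": 0.10, "norain&dry": 0.60}
partition = {"rain&wet": "rain", "rain&dry": "rain",
             "norain&wet": "norain", "norain&dry": "norain"}
revised = jeffrey_revise(prior, partition, {"rain": 0.6, "norain": 0.4})  # uncertain input
print(revised)  # revised masses still sum to 1 and respect the new block probabilities
```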
Abstract:
Biometric systems provide a valuable service in helping to identify individuals from their stored personal details. Unfortunately, with the rapidly increasing use of such systems, there is a growing concern about the possible misuse of that information. To counteract the threat, the European Union (EU) has introduced comprehensive legislation that seeks to regulate data collection and help strengthen an individual’s right to privacy. This article looks at the implications of the legislation for biometric system deployment. After an initial consideration of current privacy concerns, it examines what is meant by ‘personal data’ and its protection, in legislation terms. Also covered are issues around the storage of biometric data, including its accuracy, its security, and justification for what is collected. Finally, the privacy issues are illustrated through three biometric use cases: border security, online bank access control and customer profiling in stores.
Abstract:
Modern cancer research on prognostic and predictive biomarkers demands the integration of established and emerging high-throughput technologies. However, these data are meaningless unless carefully integrated with patient clinical outcome and epidemiological information. Integrated datasets hold the key to discovering new biomarkers and therapeutic targets in cancer. We have developed a novel approach and set of methods (PICan) for integrating and interrogating phenomic, genomic and clinical data sets to facilitate cancer biomarker discovery and patient stratification. Applied to a known paradigm, the biological and clinical relevance of TP53, PICan was able to recapitulate the known biomarker status and prognostic significance at the DNA, RNA and protein levels.
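As a toy illustration of the integration step described above (the tables and column names are hypothetical, not PICan's actual schema), the snippet joins a genomic table to clinical outcomes on a patient identifier and stratifies survival by TP53 status.

```python
# Hypothetical example of joining genomic and clinical tables and stratifying
# an outcome by biomarker status; not PICan's actual data model.
import pandas as pd

genomic = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                        "TP53_status": ["mutant", "wildtype", "mutant", "wildtype"]})
clinical = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                         "survival_months": [14, 40, 11, 52]})

integrated = genomic.merge(clinical, on="patient_id")                  # genomic + clinical join
print(integrated.groupby("TP53_status")["survival_months"].median())  # simple stratification
```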
Abstract:
Background: Serious case reviews and research studies have indicated weaknesses in risk assessments conducted by child protection social workers. Social workers are adept at gathering information but struggle with analysis and assessment of risk. The Department for Education wants to know if the use of a structured decision-making tool can improve child protection assessments of risk.
Methods/design: This multi-site, cluster-randomised trial will assess the effectiveness of the Safeguarding Children Assessment and Analysis Framework (SAAF). This structured decision-making tool aims to improve social workers' assessments of harm, of future risk and of parents' capacity to change. The comparison is management as usual.
Inclusion criteria: Children's Services Departments (CSDs) in England willing to make relevant teams available to be randomised, and willing to meet the trial's training and data collection requirements.
Exclusion criteria: CSDs where there were concerns about performance; where a major organisational restructuring was planned or under way; or where other risk assessment tools were in use.
Six CSDs are participating in this study. Social workers in the experimental arm will receive 2 days' training in SAAF together with a range of support materials, and access to limited telephone consultation post-training. The primary outcome is child maltreatment. This will be assessed using data collected nationally on two key performance indicators: the first is the number of children in a year who have been subject to a second Child Protection Plan (CPP); the second is the number of re-referrals of children because of related concerns about maltreatment. Secondary outcomes are: i) the quality of assessments judged against a schedule of quality criteria and ii) the relationship between the three assessments required by the structured decision-making tool (level of harm, risk of (re)abuse and prospects for successful intervention).
Discussion: This is the first study to examine the effectiveness of SAAF. It will contribute to a very limited literature on the contribution that structured decision-making tools can make to improving risk assessment and case planning in child protection and on what is involved in their effective implementation.
Abstract:
In many CCTV and sensor network based intelligent surveillance systems, a number of attributes or criteria are used to individually evaluate the degree of potential threat of a suspect. The outcomes for these attributes are in general from analytical algorithms where data are often pervaded with uncertainty and incompleteness. As a result, such individual threat evaluations are often inconsistent, and individual evaluations can change as time elapses. Therefore, integrating heterogeneous threat evaluations with temporal influence to obtain a better overall evaluation is a challenging issue. So far, this issue has rarely been considered by existing event reasoning frameworks under uncertainty in sensor network based surveillance. In this paper, we first propose a weighted aggregation operator based on a set of principles that constrain the fusion of individual threat evaluations. Then, we propose a method to integrate the temporal influence on threat evaluation changes. Finally, we demonstrate the usefulness of our system with a decision support event modeling framework using an airport security surveillance scenario.
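To make the idea concrete, the sketch below shows one plausible form of weighted aggregation with temporal discounting: older evaluations contribute less, and each attribute carries its own reliability weight. The half-life decay, attribute names and weights are assumptions for illustration, not the operator or principles proposed in the paper.

```python
# Illustrative weighted aggregation of individual threat evaluations with
# exponential temporal discounting (assumed scheme, not the paper's operator).
import math

def aggregate_threat(evaluations, attribute_weights, now, half_life=30.0):
    """evaluations: list of (attribute, score in [0, 1], timestamp in seconds)."""
    num = den = 0.0
    for attribute, score, t in evaluations:
        # reliability weight discounted by the age of the evaluation
        w = attribute_weights[attribute] * math.exp(-math.log(2) * (now - t) / half_life)
        num += w * score
        den += w
    return num / den if den else 0.0

evals = [("loitering", 0.8, 100.0), ("bag_abandoned", 0.9, 70.0), ("face_match", 0.2, 110.0)]
weights = {"loitering": 0.5, "bag_abandoned": 1.0, "face_match": 0.8}
print(round(aggregate_threat(evals, weights, now=120.0), 2))  # overall threat score
```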
Abstract:
In recent difficult economic times, the efficiency with which a charity spends the funds entrusted to it has become an increasingly important aspect of charitable performance. Transparency on efficiency, including the reporting of relevant measures and information to understand, contextualise and evaluate such measures, is suggested as important to a range of stakeholders. However, using a novel framework for the analysis of efficiency reporting in the context of transparency and stakeholder theory, this research provides evidence that reporting on efficiency in UK (United Kingdom) charities lacks transparency, both in terms of the extent and manner of disclosure. It is argued that efficiency reporting in UK charities is more concerned with legitimising these organisations rather than providing ethically-driven accounts of their efficiency.
Abstract:
Background
Low patient adherence to treatment is associated with poorer health outcomes in bronchiectasis. We sought to use the Theoretical Domains Framework (TDF), a framework derived from 33 psychological theories, and behavioural change techniques (BCTs) to define the content of an intervention to change patients' adherence in bronchiectasis (Stages 1 and 2), and stakeholder expert panels to define its delivery (Stage 3).
Methods
We conducted semi-structured interviews with patients with bronchiectasis about barriers and motivators to adherence to treatment, and focus groups or interviews with bronchiectasis healthcare professionals (HCPs) about their ability to change patients' adherence to treatment. We coded these data to the 12-domain TDF to identify relevant domains for patients and HCPs (Stage 1). Three researchers independently mapped relevant domains for patients and HCPs to a list of 35 BCTs to identify two lists (patient and HCP) of potential BCTs for inclusion (Stage 2). We presented these lists to three expert panels (two with patients and one with HCPs/academics from across the UK). We asked the panels who the intervention should target, who should deliver it, at what intensity, in what format and setting, and using which outcome measures (Stage 3).
Results
Eight TDF domains were perceived to influence patients' and HCPs' behaviours: Knowledge, Skills, Beliefs about capability, Beliefs about consequences, Motivation, Social influences, Behavioural regulation and Nature of behaviours (Stage 1). Twelve BCTs common to patients and HCPs were included in the intervention: Monitoring, Self-monitoring, Feedback, Action planning, Problem solving, Persuasive communication, Goal/target specified: behaviour/outcome, Information regarding behaviour/outcome, Role play, Social support and Cognitive restructuring (Stage 2). Participants thought that an individualised combination of these BCTs should be delivered to all patients, by a member of staff, over several one-to-one and/or group visits in secondary care. Efficacy should be measured using pulmonary exacerbations, hospital admissions and quality of life (Stage 3).
Conclusions
Twelve BCTs form the intervention content. An individualised selection from these 12 BCTs will be delivered to all patients over several face-to-face visits in secondary care. Future research should focus on developing physical materials to aid delivery of the intervention prior to feasibility and pilot testing. If effective, this intervention may improve adherence and health outcomes for those with bronchiectasis in the future.
Abstract:
People usually perform economic interactions within the social setting of a small group, while they obtain relevant information from a broader source. We capture this feature with a dynamic interaction model based on two separate social networks. Individuals play a coordination game in an interaction network, while updating their strategies using information from a separate influence network through which information is disseminated. In each time period, the interaction and influence networks co-evolve, and the individuals’ strategies are updated through a modified naive learning process. We show that both network structures and players’ strategies always reach a steady state, in which players form fully connected groups and converge to local conventions. We also analyze the influence exerted by a minority group of strongly opinionated players on these outcomes.
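For orientation, the sketch below runs a standard DeGroot-style naive learning update on a small influence network; the paper's process is a modified version of this and additionally co-evolves the networks with a coordination game, which is not reproduced here. The example matrix and initial opinions are assumed.

```python
# Standard DeGroot (naive learning) updating on a fixed influence network:
# each player's new opinion is the weighted average of neighbours' opinions.
import numpy as np

def degroot_step(opinions: np.ndarray, influence: np.ndarray) -> np.ndarray:
    row_sums = influence.sum(axis=1, keepdims=True)     # normalise rows to weights
    return (influence / row_sums) @ opinions

influence = np.array([[1.0, 1.0, 0.0],
                      [1.0, 1.0, 1.0],
                      [0.0, 1.0, 1.0]])   # assumed 3-player influence network (self-loops included)
opinions = np.array([0.0, 0.5, 1.0])      # initial inclinations towards one convention

for _ in range(20):
    opinions = degroot_step(opinions, influence)
print(opinions.round(3))                  # opinions converge towards a local consensus
```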