774 results for "Information privacy Framework"


Relevance: 30.00%

Abstract:

Three main changes to current risk analysis processes are proposed to improve their transparency, openness, and accountability. First, the addition of a formal framing stage would allow interested parties, experts, and officials to work together as needed to gain an initial shared understanding of the issue, the objectives of regulatory action, and alternative risk management measures. Second, the scope of the risk assessment is expanded to include the assessment of health and environmental benefits as well as risks, and the explicit consideration of the economic and social impacts of risk management action and their distribution. Moreover, approaches were developed for deriving improved information from genomic, proteomic and metabolomic profiling methods, and for probabilistic modelling of health impacts for risk assessment purposes. Third, in an added evaluation stage, interested parties, experts, and officials may compare and weigh the risks, costs, and benefits and their distribution. As part of a set of recommendations on risk communication, we propose that reports on each stage should be made public.

Relevance: 30.00%

Abstract:

The transition to a low-carbon economy urgently demands better information on the drivers of energy consumption. UK government policy has prioritized energy efficiency in the built stock as a means of carbon reduction, but the sector is historically information poor, particularly the non-domestic building stock. This paper presents the results of a pilot study that investigated whether and how property and energy consumption data might be combined for non-domestic energy analysis. These data were combined in a ‘Non-Domestic Energy Efficiency Database’ to describe the location and physical attributes of each property and its energy consumption. The aim was to support the generation of a range of energy-efficiency statistics for the industrial, commercial and institutional sectors of the non-domestic building stock, and to provide robust evidence for national energy-efficiency and carbon-reduction policy development and monitoring. The work has brought together non-domestic energy data, property data and mapping in a ‘data framework’ for the first time. The results show what is possible when these data are integrated and the associated difficulties. A data framework offers the potential to inform energy-efficiency policy formation and to support its monitoring at a level of detail not previously possible.
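The linkage at the heart of such a data framework can be sketched as a simple join between property records and metered consumption, from which an efficiency statistic such as energy intensity falls out. The field names (`property_ref`, `floor_area_m2`, `kwh`) and figures below are invented for illustration, not the schema of the Non-Domestic Energy Efficiency Database:

```python
# Illustrative join of property records with metered energy data,
# yielding energy intensity (kWh per m2) per matched property.
# All field names and values are hypothetical.

properties = {
    "P001": {"sector": "retail", "floor_area_m2": 500.0},
    "P002": {"sector": "office", "floor_area_m2": 1200.0},
}

meter_readings = [
    {"property_ref": "P001", "kwh": 60000.0},
    {"property_ref": "P002", "kwh": 96000.0},
    {"property_ref": "P999", "kwh": 1000.0},  # no matching property record
]

def energy_intensity(properties, readings):
    """kWh per m2 for each property that can be matched; unmatched
    meters are counted so data-linkage gaps stay visible."""
    matched, unmatched = {}, 0
    for r in readings:
        prop = properties.get(r["property_ref"])
        if prop is None:
            unmatched += 1
            continue
        matched[r["property_ref"]] = r["kwh"] / prop["floor_area_m2"]
    return matched, unmatched

intensity, unmatched = energy_intensity(properties, meter_readings)
```

Counting the unmatched meters, rather than silently dropping them, mirrors one of the paper's findings: the difficulties of integration are themselves part of the result.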

Relevance: 30.00%

Abstract:

There is growing interest in the ways in which the location of a person can be utilized by new applications and services. Recent advances in mobile technologies have meant that the technical capability to record and transmit location data for processing is appearing in off-the-shelf handsets. This opens possibilities to profile people based on the places they visit, the people they associate with, or other aspects of their complex routines determined through persistent tracking. It is possible that services offering customized information based on the results of such behavioral profiling could become commonplace. However, it may not be immediately apparent to the user that a wealth of information about them, potentially unrelated to the service, can be revealed. Further issues arise if, when subscribing to the service, the user agreed that data could be passed to third parties, where it may be used to their detriment. Here, we report in detail on a short case study tracking four people, in three EU member states, persistently for six weeks using mobile handsets. The GPS locations of these people have been mined to reveal places of interest and to create simple profiles. The information drawn from the profiling activity ranges from intuitive through special cases to insightful. In this paper, these results and further extensions to the technology are considered in light of European legislation to assess the privacy implications of this emerging technology.
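The profiling step described above can be illustrated with a toy version of place-of-interest mining: snapping GPS fixes to a coarse grid and ranking cells by visit counts. The grid size, threshold and synthetic track below are assumptions for illustration, not the method used in the study:

```python
# Toy place-of-interest mining: frequently revisited grid cells
# stand in for "places of interest". Parameters are illustrative.

from collections import Counter

def places_of_interest(fixes, cell_deg=0.001, min_count=3):
    """fixes: iterable of (lat, lon). Returns grid cells visited at
    least min_count times, most-visited first."""
    cells = Counter(
        (round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg)
        for lat, lon in fixes
    )
    return [cell for cell, n in cells.most_common() if n >= min_count]

# Synthetic track: many fixes near one location, a few elsewhere.
track = [(51.4501, -0.9787)] * 5 + [(51.4512, -0.9640)] * 2
poi = places_of_interest(track)
```

Even this crude approach hints at the privacy issue the paper examines: a handful of repeated fixes is enough to single out a likely home or workplace.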

Relevance: 30.00%

Abstract:

In this paper, we propose a scenario framework that could provide a scenario “thread” through the different climate research communities (climate change; vulnerability, impact, and adaptation (VIA); and mitigation) in order to support the assessment of mitigation and adaptation strategies and other VIA challenges. The scenario framework is organised around a matrix with two main axes: radiative forcing levels and socio-economic conditions. The radiative forcing levels (and the associated climate signal) are described by the new Representative Concentration Pathways. The second axis, socio-economic developments, comprises elements that affect the capacity for mitigation and adaptation, as well as the exposure to climate impacts. The scenarios derived from this framework are limited in number, allow for comparison across various mitigation and adaptation levels, address a range of vulnerability characteristics, provide information across climate forcing and vulnerability states, and span a full century. Assessments based on the proposed scenario framework would strengthen cooperation between integrated-assessment modelers, climate modelers and vulnerability, impact and adaptation researchers, and most importantly, facilitate the development of more consistent and comparable research within and across communities.
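The two-axis matrix can be pictured as a simple cross-product lookup, with every cell a candidate integrated scenario. The socio-economic labels below are illustrative placeholders, not the categories defined by the framework:

```python
# Sketch of the scenario matrix: radiative-forcing pathways (RCPs)
# crossed with socio-economic conditions. The second-axis labels
# are invented for illustration.

rcps = ["RCP2.6", "RCP4.5", "RCP6.0", "RCP8.5"]
socioeconomic = ["low challenges", "intermediate", "high challenges"]

# Each (forcing, socio-economic) cell is a candidate scenario that
# VIA and mitigation researchers can share as a common reference.
matrix = {(rcp, ssp): f"{rcp} x {ssp}" for rcp in rcps for ssp in socioeconomic}
```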

Relevance: 30.00%

Abstract:

Semi-structured interviews with university students in the UK and Japan, undertaken in 2009 and 2010, are analysed with respect to revealed attitudes to privacy, self-revelation, and revelation by and of others on social networking sites (SNS).

Relevance: 30.00%

Abstract:

In recent years, the life-event approach has been widely used by governments around the world for designing and providing web services to citizens through their e-government portals. Despite its wide adoption, the challenge remains of how to use this approach to design e-government portals that automatically provide personalised services to citizens. We propose a conceptual framework for e-government service provision based on the life-event approach and a citizen profile that captures citizen needs, since the process of finding web services in a government-to-citizen (G2C) system involves understanding citizens’ needs and demands, selecting the relevant services, and delivering services that match the requirements. The proposed framework incorporates the citizen profile and is built on three complementary components: anticipatory life events, non-anticipatory life events, and recurring services.

Relevance: 30.00%

Abstract:

One of the primary features of modern government-to-citizen (G2C) service provision is the ability to offer a citizen-centric view of the e-government portal. The life-event approach is one of the most widely adopted paradigms supporting the idea of solving a complex event in a citizen’s life through a single service provision. Several studies have used this approach to design e-government portals, but they were limited in terms of use and scalability: there were no mechanisms showing how to specify a life event for structuring public e-services, or how to systematically match life events with these services while taking citizen needs into account. We introduce the NOrm-Based Life-Event (NoBLE) framework for G2C e-service provision, with a set of mechanisms that guide the design of active life-event-oriented e-government portals.
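The matching idea, life events mapped to candidate services and filtered against a citizen profile, can be sketched as follows. Event names, service names and profile fields are invented for illustration; this is not the NoBLE specification:

```python
# Hypothetical life-event-to-service matching: each life event maps
# to candidate services whose requirements are checked against the
# citizen profile. All names and fields are invented.

SERVICES = {
    "having_a_baby": [
        {"name": "register_birth", "requires": {}},
        {"name": "child_benefit", "requires": {"resident": True}},
    ],
    "moving_house": [
        {"name": "update_address", "requires": {}},
        {"name": "council_tax_discount", "requires": {"lives_alone": True}},
    ],
}

def match_services(life_event, profile):
    """Return the services for a life event whose requirements the
    citizen profile satisfies."""
    matched = []
    for svc in SERVICES.get(life_event, []):
        if all(profile.get(k) == v for k, v in svc["requires"].items()):
            matched.append(svc["name"])
    return matched

offers = match_services("moving_house", {"resident": True, "lives_alone": False})
```

The profile-driven filter is what turns a generic life-event catalogue into the personalised, citizen-centric view the abstract describes.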

Relevance: 30.00%

Abstract:

Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to un-initialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. 
However, broad conclusions that are beginning to emerge from the CMIP5 results include: (1) most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted as a starting point for comparing prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. It also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how best to approach forecast uncertainty.
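The two verification questions can be illustrated with toy numbers: a deterministic comparison of initialized against uninitialized hindcasts via RMSE, and a spread-to-error ratio as a crude calibration check. The arrays below are synthetic; CMIP5 verification operates on gridded fields with more elaborate metrics:

```python
# Toy illustration of the two verification questions: (1) do initial
# conditions add skill over forcing alone, and (2) does ensemble
# spread match ensemble-mean error? Data are synthetic.

import math

obs           = [0.1, 0.3, 0.2, 0.5]
initialized   = [0.2, 0.3, 0.1, 0.4]   # ensemble means of initialized hindcasts
uninitialized = [0.0, 0.1, 0.4, 0.2]   # uninitialized projections

def rmse(forecast, obs):
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, obs)) / len(obs))

# Question 1: positive skill_gain means initialization reduced error.
skill_gain = rmse(uninitialized, obs) - rmse(initialized, obs)

# Question 2: for a well-calibrated ensemble the ratio of mean spread
# to ensemble-mean RMSE is near 1 (here spreads are ensemble std devs).
spreads = [0.1, 0.1, 0.1, 0.1]
mean_sq_spread = sum(s ** 2 for s in spreads) / len(spreads)
spread_error_ratio = math.sqrt(mean_sq_spread) / rmse(initialized, obs)
```

A ratio well above 1 would indicate an overdispersive ensemble, well below 1 an overconfident one, which is the calibration question the probabilistic metric addresses.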

Relevance: 30.00%

Abstract:

Sampling strategies for monitoring the status and trends in wildlife populations are often determined before the first survey is undertaken. However, there may be little information about the distribution of the population, and so the sample design may be inefficient. Through time, as data are collected, more information about the distribution of animals in the survey region is obtained, but it can be difficult to incorporate this information in the survey design. This paper introduces a framework for monitoring motile wildlife populations within which the design of future surveys can be adapted using data from past surveys whilst ensuring consistency in design-based estimates of status and trends through time. In each survey, part of the sample is selected from the previous survey sample using simple random sampling. The rest is selected with inclusion probability proportional to predicted abundance. Abundance is predicted using a model constructed from previous survey data and covariates for the whole survey region. Unbiased design-based estimators of status and trends and their variances are derived from two-phase sampling theory. Simulations over the short and long term indicate that, in general, more precise estimates of status and trends are obtained using this mixed strategy than with a strategy in which all of the sample is retained, or all of it is selected with probability proportional to predicted abundance. Furthermore, the mixed strategy is robust to poor predictions of abundance. Estimates of status are more precise than those obtained from a rotating panel design.
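A minimal sketch of the mixed strategy, assuming a toy set of survey units: part of the new sample is retained from the previous sample by simple random sampling, and the rest is drawn with probability proportional to model-predicted abundance. Unit labels and predicted abundances are invented:

```python
# Sketch of the mixed two-phase selection: SRS retention from the
# previous sample plus PPS draws from the rest. Data are invented.

import random

def mixed_sample(prev_sample, all_units, predicted, n_retain, n_new, seed=0):
    rng = random.Random(seed)
    retained = rng.sample(sorted(prev_sample), n_retain)        # SRS from previous sample
    pool = [u for u in sorted(all_units) if u not in retained]  # avoid duplicates
    weights = [predicted[u] for u in pool]                      # PPS to predicted abundance
    new = []
    while len(new) < n_new:
        u = rng.choices(pool, weights=weights)[0]
        if u not in new:
            new.append(u)
    return retained + new

units = ["A", "B", "C", "D", "E", "F"]
pred = {"A": 5, "B": 1, "C": 8, "D": 2, "E": 9, "F": 1}
sample = mixed_sample(prev_sample={"A", "B", "C"}, all_units=units,
                      predicted=pred, n_retain=2, n_new=2)
```

The retained SRS portion is what preserves design consistency through time, while the PPS portion chases the units the model expects to be abundance-rich.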

Relevance: 30.00%

Abstract:

In a world where data is captured on a large scale, the major challenge for data mining algorithms is to be able to scale up to large datasets. There are two main approaches to inducing classification rules: one is the divide and conquer approach, also known as the top-down induction of decision trees; the other is the separate and conquer approach. A considerable amount of work has been done on scaling up the divide and conquer approach. However, very little work has been conducted on scaling up the separate and conquer approach. In this work we describe a parallel framework that allows the parallelisation of a certain family of separate and conquer algorithms, the Prism family. Parallelisation helps the Prism family of algorithms to harvest additional computer resources in a network of computers in order to make the induction of classification rules scale better on large datasets. Our framework also incorporates a pre-pruning facility for parallel Prism algorithms.
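The separate-and-conquer idea behind the Prism family can be sketched in a toy, serial form: for a target class, greedily grow a rule one attribute-value term at a time (maximising the rule's precision), remove the covered examples, and repeat. This illustrates the algorithm family only, not the parallel framework itself:

```python
# Toy separate-and-conquer rule induction in the style of Prism.
# Rules are lists of (attribute, value) terms; data are invented.

def covers(rule, example):
    return all(example[a] == v for a, v in rule)

def learn_rules(examples, target_class):
    rules, remaining = [], list(examples)
    while any(e["class"] == target_class for e in remaining):
        rule, covered = [], remaining
        # Grow the rule until it covers only the target class.
        while any(e["class"] != target_class for e in covered):
            best = max(
                ((a, v) for e in covered for a, v in e.items() if a != "class"),
                key=lambda t: sum(1 for e in covered
                                  if e[t[0]] == t[1] and e["class"] == target_class)
                              / sum(1 for e in covered if e[t[0]] == t[1]),
            )
            rule.append(best)
            covered = [e for e in covered if covers([best], e)]
        rules.append(rule)
        # "Separate": remove covered examples, then "conquer" the rest.
        remaining = [e for e in remaining if not covers(rule, e)]
    return rules

data = [
    {"outlook": "sunny", "windy": "no",  "class": "play"},
    {"outlook": "sunny", "windy": "yes", "class": "play"},
    {"outlook": "rain",  "windy": "yes", "class": "stay"},
]
rules = learn_rules(data, "play")
```

The outer loop is the natural parallelisation target the paper exploits: candidate-term statistics can be computed on partitions of the data held by different machines.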

Relevance: 30.00%

Abstract:

Cross-bred cow adoption is an important and potent policy variable precipitating subsistence household entry into emerging milk markets. This paper focuses on the problem of designing policies that encourage and sustain milk-market expansion among a sample of subsistence households in the Ethiopian highlands. In this context it is desirable to measure households’ ‘proximity’ to market in terms of the level of deficiency of essential inputs. This problem is compounded by four factors. One is the existence of cross-bred cow numbers (count data) as an important, endogenous decision by the household; second is the lack of a multivariate generalization of the Poisson regression model; third is the censored nature of the milk sales data (sales from non-participating households are, essentially, censored at zero); and fourth is an important simultaneity that exists between the decision to adopt a cross-bred cow, the decision about how much milk to produce, the decision about how much milk to consume, and the decision to market that milk which is produced but not consumed internally by the household. Routine application of Gibbs sampling and data augmentation overcomes these problems in a relatively straightforward manner. We model the count data from two sites close to Addis Ababa in a latent, categorical-variable setting with known bin boundaries. The single-equation model is then extended to a multivariate system that accommodates the covariance between cross-bred-cow adoption, milk-output, and milk-sales equations. The latent-variable procedure proves tractable in extension to the multivariate setting and provides important information for policy formation in emerging-market settings.
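The data-augmentation idea can be illustrated on a deliberately simplified model: sales censored at zero are treated as latent values, and each Gibbs sweep imputes them from the model truncated to the censored region, then updates the mean given the completed data. The one-parameter normal model, fixed variance and flat prior below are simplifying assumptions for illustration, not the paper's multivariate system:

```python
# Toy Gibbs sampler with data augmentation for zero-censored data.
# Censored observations are imputed from N(mu, sigma) truncated to
# (-inf, 0] by rejection sampling; mu is then updated from the
# completed data. Model and data are invented.

import random

def gibbs_censored_mean(y, censored, n_iter=2000, sigma=1.0, seed=1):
    """y: observations; y[i] is ignored where censored[i] (value <= 0)."""
    rng = random.Random(seed)
    mu, draws, z = 0.0, [], list(y)
    for _ in range(n_iter):
        # Step 1 (augmentation): draw latent values for censored obs.
        for i, c in enumerate(censored):
            if c:
                while True:
                    v = rng.gauss(mu, sigma)
                    if v <= 0:
                        z[i] = v
                        break
        # Step 2: update mu given completed data (flat prior).
        zbar = sum(z) / len(z)
        mu = rng.gauss(zbar, sigma / len(z) ** 0.5)
        draws.append(mu)
    # Posterior mean from the second half of the chain (burn-in dropped).
    return sum(draws[n_iter // 2:]) / (n_iter - n_iter // 2)

obs = [1.2, 0.8, 0.0, 0.0, 2.0]          # zeros stand for censored sales
cens = [False, False, True, True, False]
post_mean = gibbs_censored_mean(obs, cens)
```

As the abstract notes, once censoring is handled by augmentation the conditional updates become routine, which is why the approach extends tractably to the multivariate system.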

Relevance: 30.00%

Abstract:

Drought is a global problem that has far-reaching impacts, especially on vulnerable populations in developing regions. This paper highlights the need for a Global Drought Early Warning System (GDEWS), the elements that constitute its underlying framework (GDEWF) and the recent progress made towards its development. Many countries lack drought monitoring systems, as well as the capacity to respond via appropriate political, institutional and technological frameworks, and these have inhibited the development of integrated drought management plans or early warning systems. The GDEWS will provide a source of drought tools and products via the GDEWF for countries and regions to develop tailored drought early warning systems for their own users. A key goal of a GDEWS is to maximize the lead time for early warning, allowing drought managers and disaster coordinators more time to put mitigation measures in place to reduce the vulnerability to drought. To address this, the GDEWF will take both a top-down approach to provide global real-time drought monitoring and seasonal forecasting, and a bottom-up approach that builds upon existing national and regional systems to provide continental to global coverage. A number of challenges must be overcome, however, before a GDEWS can become a reality, including the lack of in-situ measurement networks and modest seasonal forecast skill in many regions, and the lack of infrastructure to translate data into useable information. A set of international partners, through a series of recent workshops and evolving collaborations, has made progress towards meeting these challenges and developing a global system.

Relevance: 30.00%

Abstract:

This article analyses the results of an empirical study of the 200 most popular UK-based websites in various sectors of e-commerce services. The study provides empirical evidence of unlawful processing of personal data. It comprises a survey of the methods used to seek and obtain consent to process personal data for direct marketing and advertisement, and a test of the frequency of unsolicited commercial emails (UCE) received by customers as a consequence of their registration and submission of personal information to a website. Part One of the article presents a conceptual and normative account of data protection, with a discussion of the ethical values on which EU data protection law is grounded and an outline of the elements that must be in place to seek and obtain valid consent to process personal data. Part Two discusses the outcomes of the empirical study, which reveals a significant gap between EU data protection law in theory and in practice. Although a wide majority of the websites in the sample (69%) have in place a system to ask separate consent for engaging in marketing activities, only 16.2% of them obtain a consent that is valid under the standards set by EU law. The test with UCE shows that only one out of three websites (30.5%) respects the will of the data subject not to receive commercial communications. It also shows that, when submitting personal data in online transactions, there is a high probability (50%) of encountering a website that will ignore the refusal of consent and send UCE. The article concludes that there is a severe lack of compliance among UK online service providers with essential requirements of data protection law. In this respect, it suggests that there is an inadequate standard of implementation, information and supervision by the UK authorities, especially in light of the clarifications provided at EU level.