994 results for Flowchart model


Relevance:

20.00%

Publisher:

Abstract:

This work presents a demand side response (DSR) model which assists small electricity consumers who are exposed to the market price, through an aggregator, to proactively mitigate price and peak impacts on the electrical system. The proposed model allows consumers to manage air-conditioning as a function of possible price spikes. The main contribution of this research is to demonstrate how consumers can minimise the total expected cost by optimising air-conditioning to account for occurrences of a price spike in the electricity market. The model investigates how a pre-cooling method can be used to minimise energy costs when there is a substantial risk of an electricity price spike. The model was tested with Queensland electricity market data from the Australian Energy Market Operator and Brisbane temperature data from the Bureau of Statistics for hot weekdays in the period 2011 to 2012.
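
The core decision the model captures can be illustrated with a small expected-cost comparison. The sketch below is only illustrative, not the paper's optimisation model; all prices, quantities, and names are assumed values.

    def expected_cost(pre_cool, p_spike, normal_price, spike_price,
                      afternoon_kwh, shifted_kwh, overhead_kwh):
        """Expected cooling cost for the afternoon peak window ($).

        Pre-cooling moves `shifted_kwh` of the afternoon load to the earlier,
        normal-priced period, at the cost of `overhead_kwh` of extra energy
        (thermal losses while holding the lower temperature)."""
        expected_later_price = p_spike * spike_price + (1 - p_spike) * normal_price
        if pre_cool:
            early = (shifted_kwh + overhead_kwh) * normal_price
            later = (afternoon_kwh - shifted_kwh) * expected_later_price
        else:
            early = 0.0
            later = afternoon_kwh * expected_later_price
        return early + later

    # Pre-cooling pays off when the expected spike exposure outweighs the
    # cost of the extra energy bought in advance at the normal price.
    for choice in (False, True):
        print("pre-cool" if choice else "no pre-cool",
              round(expected_cost(choice, p_spike=0.1, normal_price=0.06,
                                  spike_price=2.50, afternoon_kwh=4.0,
                                  shifted_kwh=2.0, overhead_kwh=0.5), 2))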

Relevance:

20.00%

Publisher:

Abstract:

A long query provides more useful hints for finding relevant documents, but it is also likely to introduce noise that hurts retrieval performance. To mitigate this adverse effect, it is important to reduce noisy terms and to introduce and boost additional relevant terms. This paper presents a comprehensive framework, the Aspect Hidden Markov Model (AHMM), which integrates query reduction and expansion for retrieval with long queries. It optimizes the probability distribution over query terms by exploiting intra-query term dependencies as well as the relationships between query terms and words observed in relevance feedback documents. Empirical evaluation on three large-scale TREC collections demonstrates that our approach, which is fully automatic, achieves substantial improvements over various strong baselines and reaches performance comparable to a state-of-the-art method based on users' interactive query term reduction and expansion.
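
For intuition only, the toy sketch below illustrates the general idea of combining intra-query evidence with relevance-feedback evidence to reweight query terms; it is not the AHMM itself, and the scoring rule and names are assumptions made for the example.

    from collections import Counter

    def reweight_query(query_terms, feedback_docs, alpha=0.5):
        """Weight each query term by (a) how often it co-occurs with other query
        terms in the feedback documents and (b) its overall frequency there,
        then normalise the weights to a probability distribution."""
        fb_counts = Counter(w for doc in feedback_docs for w in doc)
        scores = {}
        for t in query_terms:
            cooccur = sum(1 for doc in feedback_docs
                          if t in doc and any(u in doc for u in query_terms if u != t))
            scores[t] = alpha * cooccur + (1 - alpha) * fb_counts[t]
        total = sum(scores.values()) or 1.0
        return {t: s / total for t, s in scores.items()}

    # The noisy term "the" never co-occurs with the topical terms in the
    # feedback documents, so its weight is driven towards zero.
    feedback = [["cheap", "flights", "rome", "airline"],
                ["rome", "hotel", "flights", "booking"]]
    print(reweight_query(["cheap", "flights", "rome", "the"], feedback))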

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a Bayesian hierarchical model is used to analyze female breast cancer mortality rates for the State of Missouri from 1969 through 2001. The logit transformations of the mortality rates are assumed to be linear over time, with additive spatial and age effects as intercepts and slopes. Objective priors for the hierarchical model are explored. The Bayesian estimates are quite robust to changes in the hyperparameters. Spatial correlations appear in both the intercepts and the slopes.
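
As a sketch of the structure described above (the indexing and symbols are illustrative, not the paper's exact notation), with r_{ijt} the mortality rate in county i for age group j at time t:

    \mathrm{logit}(r_{ijt}) = (\alpha_i + a_j) + (\beta_i + b_j)\, t ,

where the county-level intercepts and slopes (\alpha_i, \beta_i) carry spatially correlated priors and (a_j, b_j) are additive age effects, so both the level and the time trend of the rates can vary over space and age.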

Relevance:

20.00%

Publisher:

Abstract:

A spatial process observed over a lattice or a set of irregular regions is usually modeled using a conditionally autoregressive (CAR) model. The neighborhoods within a CAR model are generally formed deterministically using the inter-distances or boundaries between the regions. An extension of the CAR model is proposed in this article in which the selection of the neighborhood depends on unknown parameter(s). This extension is called a Stochastic Neighborhood CAR (SNCAR) model. The resulting model shows flexibility in accurately estimating covariance structures for data generated from a variety of spatial covariance models. Specific examples are illustrated using data generated from some common spatial covariance functions, as well as real data concerning radioactive contamination of the soil in Switzerland after the Chernobyl accident.
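
A minimal sketch of the idea (not the authors' implementation; the distance-threshold rule, parameter names, and values are assumptions) is a CAR-type precision matrix whose neighbourhood structure is governed by an unknown threshold parameter:

    import numpy as np

    def sncar_precision(coords, delta, rho=0.9, tau=1.0):
        """Q = tau * (D - rho * W), where W_ij = 1 if regions i and j lie within
        distance `delta` of each other; `delta` is treated as an unknown
        parameter rather than being fixed in advance."""
        diff = coords[:, None, :] - coords[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        W = ((dist > 0) & (dist < delta)).astype(float)   # neighbourhood depends on delta
        D = np.diag(W.sum(axis=1))
        return tau * (D - rho * W)

    coords = np.random.default_rng(0).uniform(size=(10, 2))
    Q = sncar_precision(coords, delta=0.3)   # delta would be estimated alongside rho, tau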

Relevance:

20.00%

Publisher:

Abstract:

Too often the relationship between clients and external consultants is perceived as one of protagonist versus antagonist. Stories of dramatic, failed consultancies abound, as do related anecdotal quips. A contributing factor to many "apparently" failed consultancies is a poor appreciation, by both the client and the consultant, of the client's true goals for the project and of how to assess progress toward these goals. This paper presents and analyses a measurement model for assessing client success when engaging an external consultant. Three main areas of assessment are identified: (1) the consultant's recommendations, (2) client learning, and (3) consultant performance. Engagement success is empirically measured along these dimensions through a series of case studies and a subsequent survey of clients and consultants involved in 85 computer-based information system selection projects. Validation of the model constructs suggests the existence of six distinct and individually important dimensions of engagement success. Both clients and consultants are encouraged to attend to these dimensions in pre-engagement proposal and selection processes, and in post-engagement evaluation of outcomes.

Relevance:

20.00%

Publisher:

Abstract:

Aim: The aim of this paper was to discuss the potential development of a conceptual model of knowledge integration pertinent to critical care nursing practice. A review of the literature identified that reflective practice appeared to be at the forefront of professional development. Background: It could be argued that advancing practice in critical care has been superseded by the advanced practice agenda. Some would suggest that advancing practice is focused on the core attributes of an individual's practice, which then leads on to advanced practice status. However, advancing practice is more of a process than a set of identifiable skills and, as such, is often overlooked when viewing the development of practitioners to the advanced practice level. For example, practice development initiatives can be seen as advancing practice for the masses, ensuring that practitioners are following the same level and practice of care. The question here is: are they developing individually? Relevance to clinical practice: What this paper argues is that reflection may not be best suited to advancing practice if the individual practitioner does not have a sound knowledge base, both theoretical and experiential. The knowledge integration model presented in this study uses multiple learning strategies that are focused in practice to develop practice, e.g. the use of work-based learning and clinical supervision. To demonstrate the model's application, an exemplar of an issue from practice shows its relevance from a practical perspective. Conclusions: Further knowledge acquisition, and its relationship with previously held theory and experience, will enable individual practitioners to advance their own practice as well as to be a resource for others.

Relevance:

20.00%

Publisher:

Abstract:

We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms using indirect inference. We embed this approach within a sequential Monte Carlo algorithm that is completely adaptive. This methodological development was motivated by an application involving data on macroparasite population evolution modelled with a trivariate Markov process. The main objective of the analysis is to compare inferences on the Markov process when considering two different indirect models. The two indirect models are based on a Beta-Binomial model and a three-component mixture of Binomials, with the former providing a better fit to the observed data.
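
The indirect-inference idea can be sketched as follows (a simplified illustration under assumed names, not the paper's adaptive SMC ABC algorithm): fit the auxiliary Beta-Binomial model to a data set and use the fitted auxiliary parameters as the ABC summary statistic, so that the discrepancy between observed and simulated data becomes a distance between auxiliary parameter estimates.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import betaln, gammaln

    def betabinom_negloglik(log_params, counts, n):
        a, b = np.exp(log_params)                # optimise on the log scale
        k = np.asarray(counts, dtype=float)
        return -np.sum(gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
                       + betaln(k + a, n - k + b) - betaln(a, b))

    def auxiliary_summary(counts, n):
        """Summary statistic: MLE of the auxiliary Beta-Binomial parameters."""
        res = minimize(betabinom_negloglik, x0=[0.0, 0.0], args=(counts, n))
        return np.exp(res.x)

    def abc_distance(observed, simulated, n):
        return np.linalg.norm(auxiliary_summary(observed, n)
                              - auxiliary_summary(simulated, n))

    obs = np.array([3, 5, 2, 7, 4])              # illustrative counts out of n = 10 trials
    sim = np.array([4, 4, 3, 6, 5])
    print(abc_distance(obs, sim, n=10))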

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present a unified sequential Monte Carlo (SMC) framework for performing sequential experimental design for discriminating between a set of models. The model discrimination utility that we advocate is fully Bayesian and based upon mutual information. SMC provides a convenient way to estimate the mutual information. Our experience suggests that the approach works well on sets of either discrete or continuous models and outperforms other model discrimination approaches.
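
A bare-bones Monte Carlo sketch of the mutual-information utility for model discrimination is shown below (the interface and toy models are assumptions, and the paper's particle-based SMC estimation is not reproduced): for a candidate design d, U(d) = E_{m,y}[ log p(y | m, d) - log sum_m' p(m') p(y | m', d) ].

    import numpy as np

    def mi_utility(design, models, prior, n_sim=2000, rng=None):
        """Monte Carlo estimate of the mutual information between the model
        indicator and the data at a given design."""
        rng = rng or np.random.default_rng()
        prior = np.asarray(prior, dtype=float)
        total = 0.0
        for _ in range(n_sim):
            m = rng.choice(len(models), p=prior)      # draw a model from the prior
            y = models[m].simulate(design, rng)       # simulate data under it
            logliks = np.array([mod.loglik(y, design) for mod in models])
            log_evidence = np.log(np.sum(prior * np.exp(logliks)))
            total += logliks[m] - log_evidence
        return total / n_sim

    class NormalModel:                                # two toy competing models
        def __init__(self, mean_fn): self.mean_fn = mean_fn
        def simulate(self, d, rng): return rng.normal(self.mean_fn(d), 1.0)
        def loglik(self, y, d):
            return -0.5 * (y - self.mean_fn(d)) ** 2 - 0.5 * np.log(2 * np.pi)

    models = [NormalModel(lambda d: d), NormalModel(lambda d: d ** 2)]
    print(mi_utility(1.5, models, prior=[0.5, 0.5], n_sim=500))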

Relevance:

20.00%

Publisher:

Abstract:

The safety of passengers is a major concern for airports. In the event of a crisis, having an effective and efficient evacuation process in place can significantly enhance passenger safety. Hence, it is necessary for airport operators to have an in-depth understanding of the evacuation process of their airport terminal. Although evacuation models have been used to study pedestrian behaviour for decades, little research has considered evacuees' group dynamics and the complexity of the environment. In this paper, an agent-based model is presented to simulate the passenger evacuation process. Different exits are allocated to passengers based on their location and security level. The simulation results show that the evacuation time can be influenced by passenger group dynamics. The model also provides a convenient way to design airport evacuation strategies and examine their efficiency. The model was created using the AnyLogic software and its parameters were initialised using recent research data published in the literature.
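
The exit-allocation rule described above can be illustrated with a small sketch. The paper's model was built in AnyLogic; the fragment below is only an illustrative Python rendering of the idea, and all names, attributes, and rules are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Exit:
        name: str
        x: float
        y: float
        airside: bool            # reachable only from the secure (airside) zone

    def allocate_exit(px, py, passed_security, exits):
        """Send the passenger to the nearest exit they are permitted to use,
        where permission depends on whether they have cleared security."""
        permitted = [e for e in exits if passed_security or not e.airside]
        return min(permitted, key=lambda e: (e.x - px) ** 2 + (e.y - py) ** 2)

    exits = [Exit("landside-A", 0, 0, False), Exit("airside-B", 50, 10, True)]
    print(allocate_exit(40, 5, passed_security=False, exits=exits).name)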

Relevance:

20.00%

Publisher:

Abstract:

Significant attention has been given in the urban policy literature to the integration of land-use and transport planning and policies, with a view to curbing sprawling urban form and diminishing the externalities associated with car-dependent travel patterns. By taking land-use and transport interaction into account, this debate mainly focuses on how a successful integration can contribute to societal well-being, providing efficient and balanced economic growth while accomplishing the goal of developing sustainable urban environments and communities. The integration is also a focal theme of contemporary urban development models, such as smart growth, liveable neighbourhoods, and new urbanism. Even though available planning policy options for ameliorating urban form and transport-related externalities have matured, owing to growing research and practice worldwide, there remains a lack of suitable evaluation models to reflect on the current status of urban form and travel problems or on the success of implemented integration policies. In this study we explore the applicability of indicator-based spatial indexing to assess land-use and transport integration at the neighbourhood level. For this, a spatial index is developed from a number of indicators compiled from international studies and trialled on the Gold Coast, Queensland, Australia. The results of this modelling study reveal that it is possible to propose an effective metric, based on composite indicator methodology, to determine the success level of city plans in terms of their sustainability performance. The model proved useful in demarcating areas where planning intervention is applicable, and in identifying the most suitable locations for future urban development and plan amendments. Lastly, we integrate variance-based sensitivity analysis with the spatial indexing method, and discuss the applicability of the model in other urban contexts.
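
The composite-indicator step can be sketched as follows (indicator names, weights, and figures are illustrative assumptions, not the study's indicator set): each indicator is min-max normalised across neighbourhoods and the normalised values are combined with weights into a single spatial index score.

    import numpy as np

    def composite_index(indicators, weights):
        """indicators: (n_neighbourhoods, n_indicators) array; weights sum to 1."""
        mins, maxs = indicators.min(axis=0), indicators.max(axis=0)
        normalised = (indicators - mins) / (maxs - mins)
        return normalised @ np.asarray(weights)

    # Columns (illustrative): residential density, land-use mix, transit access.
    data = np.array([[35.0, 0.40, 0.6],
                     [60.0, 0.70, 0.9],
                     [15.0, 0.20, 0.3]])
    print(composite_index(data, weights=[0.3, 0.3, 0.4]))

A variance-based sensitivity analysis, as mentioned above, would then examine how strongly the resulting index scores respond to perturbations of these weights.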

Relevance:

20.00%

Publisher:

Abstract:

We construct an efficient identity-based encryption (IBE) system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors. One trapdoor enables the real system to generate short vectors in all lattices in the family. The other trapdoor enables the simulator to generate short vectors for all lattices in the family except for one. We extend this basic technique to an adaptively secure IBE and a hierarchical IBE.

Relevance:

20.00%

Publisher:

Abstract:

Social tagging systems are shown to evidence a well-known cognitive heuristic, the guppy effect, which arises from the combination of different concepts. We present some empirical evidence of this effect, drawn from a popular social tagging Web service. The guppy effect is then described using a quantum-inspired formalism that has already been successfully applied to model the conjunction fallacy and probability judgement errors. Key to the formalism is the concept of interference, which is able to capture and quantify the strength of the guppy effect.
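
One common quantum-inspired way to write the combined membership weight with an interference term is sketched below (a generic form, not necessarily the paper's exact formulation). For a concept combination "A and B",

    \mu(A \text{ and } B) = \frac{\mu(A) + \mu(B)}{2} + \sqrt{\mu(A)\,\mu(B)}\,\cos\theta ,

where \mu(A) and \mu(B) are the membership weights of the individual concepts and the cosine term is the interference contribution; its magnitude measures how far the combination deviates from the classical average of the two constituents, which is the kind of deviation the guppy effect exhibits.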

Relevance:

20.00%

Publisher:

Abstract:

The notion of certificateless public-key encryption (CL-PKE) was introduced by Al-Riyami and Paterson in 2003 to avoid the drawbacks of both traditional PKI-based public-key encryption (i.e., the need to establish a public-key infrastructure) and identity-based encryption (i.e., key escrow). Thus CL-PKE, like identity-based encryption, is certificate-free and, unlike identity-based encryption, is free of key escrow. In this paper, we introduce a simple and efficient CCA-secure CL-PKE scheme based on (hierarchical) identity-based encryption. Our construction is of both theoretical and practical interest. First, our generic transformation gives a new way of constructing CCA-secure CL-PKE. Second, instantiating our transformation with lattice-based primitives results in a more efficient CCA-secure CL-PKE scheme than its counterpart introduced by Dent in 2008.

Relevance:

20.00%

Publisher:

Abstract:

An encryption scheme is non-malleable if giving an encryption of a message to an adversary does not increase its chances of producing an encryption of a related message (under a given public key). Fischlin introduced a stronger notion, known as complete non-malleability, which requires attackers to have negligible advantage even if they are allowed to transform the public key under which the related message is encrypted. Ventre and Visconti later proposed a comparison-based definition of this security notion, which is more in line with the well-studied definitions proposed by Bellare et al. The authors also provided additional feasibility results by proposing two constructions of completely non-malleable schemes, one in the common reference string model using non-interactive zero-knowledge proofs, and another using interactive encryption schemes. The only previously known completely non-malleable (and non-interactive) scheme in the standard model is quite inefficient, as it relies on the generic NIZK approach. They left the existence of efficient schemes in the common reference string model as an open problem. Recently, two efficient public-key encryption schemes have been proposed by Libert and Yung, and by Barbosa and Farshim, both of which are based on pairing-based identity-based encryption. At ACISP 2011, Sepahi et al. proposed a method to achieve completely non-malleable encryption in the public-key setting using lattices, but no security proof was given for the proposed scheme. In this paper we review that scheme and provide its security proof in the standard model. Our study shows that Sepahi's scheme remains secure even in a post-quantum world, since there are currently no known quantum algorithms for solving lattice problems that perform significantly better than the best known classical (i.e., non-quantum) algorithms.

Relevance:

20.00%

Publisher:

Abstract:

Numeric set watermarking is a way to provide proof of ownership for numerical data. Numerical data can be considered a primitive for multimedia types such as images and videos, since these are organised forms of numeric information. Thereby, the capability to watermark numerical data directly implies the capability to watermark multimedia objects and discourage information theft on social networking sites and the Internet in general. Unfortunately, very limited research has been done in the field of numeric set watermarking, owing to underlying limitations in terms of the number of items in the set and the LSBs available for watermarking in each item. In 2009, Gupta et al. proposed a numeric set watermarking model that embeds watermark bits in the items of the set based on a hash value of the items' most significant bits (MSBs). If an item is chosen for watermarking, a watermark bit is embedded in its least significant bits, and the replaced bit is inserted in the fractional value to provide reversibility. The authors show their scheme to be resilient against the traditional subset addition, deletion, and modification attacks, as well as secondary watermarking attacks. In this paper, we present a bucket attack on this watermarking model. The attack consists of creating buckets of items with the same MSBs and determining whether the items of each bucket carry watermark bits. Experimental results show that the bucket attack is very strong and destroys the entire watermark with a success rate close to 100%. We examine the inherent weaknesses in the watermarking model of Gupta et al. that leave it vulnerable to the bucket attack, and propose potential safeguards that can provide resilience against this attack.
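
The bucket-construction step of the attack can be sketched as follows (a simplified illustration: the bit widths, the test for watermark presence, and all names are assumptions, and the full attack described in the paper is more involved).

    from collections import defaultdict

    def msb_key(value, msb_bits=4, total_bits=16):
        """Group items by the most significant bits of their integer part."""
        return int(value) >> (total_bits - msb_bits)

    def bucket_attack(items, msb_bits=4, total_bits=16):
        """Items sharing MSBs receive the same hash-based embedding decision,
        so comparing the LSBs within a bucket can expose watermark bits."""
        buckets = defaultdict(list)
        for v in items:
            buckets[msb_key(v, msb_bits, total_bits)].append(v)
        flagged = {}
        for key, members in buckets.items():
            if len(members) > 1:
                lsbs = {int(v) & 1 for v in members}
                flagged[key] = (len(lsbs) == 1)   # identical LSBs across a bucket
        return buckets, flagged

    items = [53241.73, 53199.20, 12873.51, 12860.88, 40101.05]
    print(bucket_attack(items))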