95 results for worst-case analysis


Relevance:

80.00%

Publisher:

Abstract:

Purpose: This paper seeks to investigate the conditions and processes affecting the operation and potential effectiveness of audit committees (ACs), with particular focus on the interaction between the AC, individuals from financial reporting and internal audit functions and the external auditors. Design/methodology/approach: A case study approach is employed, based on direct engagement with participants in AC activities, including the AC chair, external auditors, internal auditors, and senior management. Findings: The authors find that informal networks between AC participants condition the impact of the AC and that the most significant effects of the AC on governance outcomes occur outside the formal structures and processes. An AC has pervasive behavioural effects within the organization and may be used as a threat, an ally and an arbiter in bringing solutions to issues and conflicts. ACs are used in organizational politics, communication processes and power plays and also affect interpretations of events and cultural values. Research limitations/implications: Further research on AC and governance processes is needed to develop better understanding of effectiveness. Longitudinal studies, focusing on the organizational and institutional context of AC operations, can examine how historical events in an organization and significant changes in the regulatory environment affect current structures and processes. Originality/value: The case analysis highlights a number of significant factors which are not fully recognised either in theorizing the governance role of ACs or in the development of policy and regulations concerning ACs but which impinge on their governance contribution. They include the importance of informal processes around the AC; its influence on power relations between organizational participants; the relevance of the historical development of governance in an organization; and the possibility that the AC’s impact on governance may be greatest in non-routine situations.

Relevance:

80.00%

Publisher:

Abstract:

Formal incentive systems aim to encourage improved performance by offering a reward for the achievement of project-specific goals. Despite the argued benefits of incentive systems for project delivery outcomes, there remains debate over how incentive systems can be designed to encourage the formation of strong project relationships within a complex social system such as an infrastructure project. This challenge is compounded by the increasing emphasis in construction management research on the important mediating influence of technical and organisational context on project performance. In light of this challenge, the research presented in this paper focuses on the design of incentive systems in four infrastructure projects: two road reconstructions in the Netherlands and two building constructions in Australia. Based on a motivational theory frame, a cross-case analysis is conducted to examine differences and similarities across the social and cultural drivers impacting the effectiveness of the incentive systems in light of infrastructure project context. Despite significant differences in case project characteristics, results indicate the projects experience similar social drivers impacting incentive effectiveness. Significant value across the projects was placed on: varied performance goals and multiple opportunities across the project team to pursue incentive rewards; fair risk allocation across contract parties; value-driven tender selection; improved design-build integration; and promotion of future work opportunities. However, differences across the contexts were identified. Results suggest future work opportunities were a more powerful social driver in upholding reputation and establishing strong project relationships in the Australian context. On the other hand, the relationship initiatives in the Dutch context seemed to be more broadly embraced, resulting in a greater willingness to collaboratively manage project risk. Although there are limitations to drawing generalisations from two sets of case projects, the results provide a strong base for exploring the social and cultural influences on incentive effectiveness across different geographical and contextual boundaries in future research.

Relevance:

80.00%

Publisher:

Abstract:

Initial attempts to obtain lattice-based signatures were closely related to reducing a vector modulo the fundamental parallelepiped of a secret basis (as in GGH [9] or NTRUSign [12]). This approach leaked some information on the secret, namely the shape of the parallelepiped, which has been exploited in practical attacks [24]. NTRUSign was an extremely efficient scheme, and thus there has been noticeable interest in developing countermeasures to the attacks, but with little success [6]. In [8], Gentry, Peikert and Vaikuntanathan proposed a randomized version of Babai’s nearest plane algorithm such that the distribution of a reduced vector modulo a secret parallelepiped depends only on the size of the basis used. Using this algorithm and generating large, close-to-uniform public keys, they managed to obtain provably secure GGH-like lattice-based signatures. Recently, Stehlé and Steinfeld obtained a provably secure scheme very close to NTRUSign [26] (from a theoretical point of view). In this paper we present an alternative approach to sealing the leak of NTRUSign. Instead of modifying the lattices and algorithms used, we compute a classic leaky NTRUSign signature and hide it with Gaussian noise, using techniques present in Lyubashevsky’s signatures. Our main contributions are thus a set of strong NTRUSign parameters, obtained by taking into account the latest known attacks against the scheme, and a statistical way to hide the leaky NTRU signature so that this particular instantiation of a CVP-based signature scheme becomes zero-knowledge and secure against forgeries, based on the worst-case hardness of the Õ(N^1.5)-Shortest Independent Vector Problem over NTRU lattices. Finally, we give a set of concrete parameters to gauge the efficiency of the obtained signature scheme.
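
To make the masking idea concrete, the following is a minimal numerical sketch of Lyubashevsky-style rejection sampling: a secret-dependent vector is hidden by adding Gaussian noise and resampling until the accepted output is (almost) independent of the secret shift. The dimension, standard deviation and rejection constant are illustrative placeholders, not the paper's proposed NTRUSign parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 512        # illustrative dimension, not the paper's parameter set
SIGMA = 30.0   # illustrative Gaussian standard deviation
M = 3.0        # illustrative rejection-sampling constant

def hide_with_gaussian(leaky_vec):
    """Mask a secret-dependent vector c by outputting z = c + y with Gaussian y,
    accepting z with probability min(1, D_sigma(z) / (M * D_{sigma,c}(z))) so the
    accepted z is distributed (almost) independently of c."""
    while True:
        y = rng.normal(0.0, SIGMA, size=leaky_vec.shape)
        z = leaky_vec + y
        # Work in log space to avoid underflow of the Gaussian densities.
        log_ratio = (y @ y - z @ z) / (2 * SIGMA**2) - np.log(M)
        if np.log(rng.random()) < min(0.0, log_ratio):
            return z

# Stand-in for a leaky NTRUSign output (small secret-dependent vector).
leaky_signature = rng.integers(-5, 6, size=N).astype(float)
masked = hide_with_gaussian(leaky_signature)
print(masked[:5])
```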

Relevance:

80.00%

Publisher:

Abstract:

A number of online algorithms have been developed that have small additional loss (regret) compared to the best “shifting expert”. In this model, there is a set of experts and the comparator is the best partition of the trial sequence into a small number of segments, where the expert with smallest loss is chosen in each segment. The regret is typically defined for worst-case data/loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case when the loss of each expert is iid and the best and second-best experts have a gap between their mean losses. In the full information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid-optimal Follow-The-Leader (FL) and the worst-case-safe Hedge algorithms, whereas in the bandit information case SAO by Bubeck and Slivkins (2012) competes with the iid-optimal UCB and the worst-case-safe EXP3. We ask the same question for the shifting expert problem. First, we ask what simple and efficient algorithms exist for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution. Second, we ask how to efficiently unite the performance of such algorithms on easy data with worst-case robustness. A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set, under the assumption that the losses in each segment are iid.
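
As background for the two strategies named above, here is a minimal sketch contrasting the worst-case-safe Hedge update (exponential weights) with Follow-The-Leader on synthetic iid losses that have a gap. The loss model, horizon and learning rate are made-up illustrations; this is not the algorithm proposed for the shifting-expert setting.

```python
import numpy as np

rng = np.random.default_rng(1)

K, T = 5, 2000                       # experts and trials (illustrative)
eta = np.sqrt(8 * np.log(K) / T)     # textbook Hedge learning rate

# iid losses with a gap: expert 0 is best in expectation.
losses = rng.uniform(0, 1, size=(T, K))
losses[:, 0] = np.clip(losses[:, 0] - 0.2, 0, 1)

cum = np.zeros(K)                    # cumulative expert losses observed so far
hedge_loss = ftl_loss = 0.0
for t in range(T):
    weights = np.exp(-eta * (cum - cum.min()))     # Hedge: exponential weights
    probs = weights / weights.sum()
    hedge_loss += probs @ losses[t]
    ftl_loss += losses[t, np.argmin(cum)]          # FTL: play the current leader
    cum += losses[t]

print(f"Hedge: {hedge_loss:.1f}  FTL: {ftl_loss:.1f}  best expert: {cum.min():.1f}")
```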

Relevance:

80.00%

Publisher:

Abstract:

The development and maintenance of large and complex ontologies are often time-consuming and error-prone. Thus, automated ontology learning and revision have attracted intensive research interest. In data-centric applications where ontologies are designed or automatically learnt from the data, when new data instances are added that contradict the ontology, it is often desirable to incrementally revise the ontology according to the added data. This problem can be intuitively formulated as the problem of revising a TBox by an ABox. In this paper we introduce a model-theoretic approach to this ontology revision problem, using a novel alternative semantic characterisation of DL-Lite ontologies. We show that our ontology revision satisfies several desirable properties. We have also developed an algorithm for reasoning with the revised ontology without computing the revision result. The algorithm is efficient, as its computational complexity is in coNP in the worst case and in PTIME when the size of the new data is bounded.
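
To make the "TBox revised by an ABox" setting concrete, the toy sketch below flags disjointness axioms that new data instances contradict. It is only an intuition aid with made-up class and individual names; the paper's model-theoretic revision for DL-Lite is far more general than naively detecting or dropping conflicting axioms.

```python
# Toy illustration only: a TBox of pairwise-disjointness axioms and an ABox of
# class assertions; we report which axioms the new instances contradict.
tbox = {("Student", "Employee"), ("Course", "Person")}     # "A is disjoint with B"
abox = {("alice", "Student"), ("alice", "Employee"), ("cs101", "Course")}

def conflicting_axioms(tbox, abox):
    """Return the disjointness axioms violated by some individual in the ABox."""
    types = {}
    for individual, cls in abox:
        types.setdefault(individual, set()).add(cls)
    return {(a, b) for (a, b) in tbox
            if any({a, b} <= assigned for assigned in types.values())}

print(conflicting_axioms(tbox, abox))   # {('Student', 'Employee')}
```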

Relevance:

80.00%

Publisher:

Abstract:

Intermittent generation from wind farms leads to fluctuating power system operating conditions, pushing the stability margin to its limits. The traditional way of determining the worst-case generation dispatch for a system with several semi-scheduled wind generators yields a conservative solution. This paper proposes a fast estimation of the transient stability margin (TSM) incorporating the uncertainty of wind generation. First, the Kalman filter (KF) is used to provide a linear estimate of the system angle, and then the unscented transformation (UT) is used to estimate the distribution of the TSM. The proposed method is compared with the traditional Monte Carlo (MC) method, and its effectiveness is verified on a Single Machine Infinite Bus (SMIB) system and the IEEE 14-generator Australian dynamic system. This method will aid grid operators in performing fast online calculations to estimate the TSM distribution of a power system with high levels of intermittent wind generation.
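
For readers unfamiliar with the unscented transformation, the sketch below propagates a Gaussian description of wind-power uncertainty through a nonlinear map using the standard sigma-point construction. The stability-margin function and all numbers are invented placeholders; the paper's actual TSM model is not reproduced here.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f with the
    standard 2n+1 sigma points; returns the approximate output mean/covariance."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma_pts = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([f(x) for x in sigma_pts])
    y_mean = wm @ ys
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(wc, ys))
    return y_mean, y_cov

# Hypothetical example: uncertain output of two wind farms (MW) mapped to a
# scalar stability-margin proxy; the function and numbers are made up.
wind_mean = np.array([100.0, 50.0])
wind_cov = np.diag([25.0, 16.0])
margin = lambda p: np.array([200.0 - 0.8 * p[0] - 0.5 * p[1] + 0.001 * p[0] * p[1]])
print(unscented_transform(wind_mean, wind_cov, margin))
```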

Relevance:

80.00%

Publisher:

Abstract:

Background: Bloodstream infections resulting from intravascular catheters (catheter-BSI) in critical care increase patients' length of stay, morbidity and mortality, and the management of these infections and their complications has been estimated to cost the NHS £19.1–36.2M annually. Catheter-BSI are thought to be largely preventable using educational interventions, but guidance as to which types of intervention might be most clinically effective is lacking. Objective: To assess the effectiveness and cost-effectiveness of educational interventions for preventing catheter-BSI in critical care units in England. Data sources: Sixteen electronic bibliographic databases – including MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, Cumulative Index to Nursing and Allied Health Literature (CINAHL), NHS Economic Evaluation Database (NHS EED), EMBASE and The Cochrane Library databases – were searched from database inception to February 2011, with searches updated in March 2012. Bibliographies of systematic reviews and related papers were screened and experts contacted to identify any additional references. Review methods: References were screened independently by two reviewers using a priori selection criteria. A descriptive map was created to summarise the characteristics of relevant studies. Further selection criteria, developed in consultation with the project Advisory Group, were used to prioritise a subset of studies relevant to NHS practice and policy for systematic review. A decision-analytic economic model was developed to investigate the cost-effectiveness of educational interventions for preventing catheter-BSI. Results: Seventy-four studies were included in the descriptive map, of which 24 were prioritised for systematic review. Studies have predominantly been conducted in the USA, using single-cohort before-and-after study designs. Diverse types of educational intervention appear effective at reducing the incidence density of catheter-BSI (risk ratios statistically significantly < 1.0), but single lectures were not effective. The economic model showed that implementing an educational intervention in critical care units in England would be cost-effective and potentially cost-saving, with incremental cost-effectiveness ratios under worst-case sensitivity analyses of < £5000 per quality-adjusted life-year. Limitations: Low-quality primary studies cannot definitively prove that the planned interventions were responsible for observed changes in catheter-BSI incidence. Poor reporting gave unclear estimates of risk of bias. Some model parameters were sourced from other locations owing to a lack of UK data. Conclusions: Our results suggest that it would be cost-effective, and may be cost-saving, for the NHS to implement educational interventions in critical care units. However, more robust primary studies are needed to exclude the possible influence of secular trends on observed reductions in catheter-BSI.
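
For context, the incremental cost-effectiveness ratio (ICER) quoted above is the difference in cost divided by the difference in quality-adjusted life-years between the intervention and its comparator. The figures below are invented for illustration and are not taken from the report's economic model.

```python
# Illustrative ICER arithmetic with made-up figures (not from the economic model).
cost_intervention, cost_comparator = 120_000.0, 112_000.0   # £ per unit, hypothetical
qaly_intervention, qaly_comparator = 52.0, 50.0             # QALYs, hypothetical

icer = (cost_intervention - cost_comparator) / (qaly_intervention - qaly_comparator)
print(f"ICER = £{icer:,.0f} per QALY gained")   # £4,000/QALY, under a £5,000 threshold
```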

Relevance:

80.00%

Publisher:

Abstract:

This chapter highlights the importance of feedback in work-integrated learning (WIL), the key role of workplace supervisors, and the importance of continuous improvement in systems to support feedback processes. The paper proposes a definition of feedback and formative feedback, as well as approaches for providing industry feedback in WIL. It further reports on a case analysis based on workplace supervisors providing feedback to students in engineering and urban development, yielding certain insights into student performance in the workplace and, more importantly, highlighting the need to enhance the use of feedback processes. This is required in a context where delivering feedback in WIL is generally acknowledged to be complex, and where the role of the industry supervisor in appraising the performance of the student in the workplace needs to be very clearly defined in order for supervisors’ feedback to have optimal impact. Feedback in WIL is set against the backdrop of recognizing the importance and complexity of stakeholder engagement in WIL in general, and the intricacy associated with the provision of feedback from industry supervisors in particular. Student self-assessment is briefly considered as a further dimension of their participation in providing feedback on their own performance in the workplace. (Asia-Pacific Journal of Cooperative Education, Special Issue, 2014, 15(3), 241-252)

Relevance:

80.00%

Publisher:

Abstract:

We present an algorithm for multiarmed bandits that achieves almost optimal performance in both stochastic and adversarial regimes without prior knowledge about the nature of the environment. Our algorithm is based on augmenting the EXP3 algorithm with a new control lever in the form of exploration parameters that are tailored individually for each arm. The algorithm simultaneously applies the “old” control lever, the learning rate, to control the regret in the adversarial regime and the new control lever to detect and exploit gaps between the arm losses. This secures problem-dependent “logarithmic” regret when gaps are present, without compromising the worst-case performance guarantee in the adversarial regime. We show that the algorithm can exploit both the usual expected gaps between the arm losses in the stochastic regime and deterministic gaps between the arm losses in the adversarial regime. The algorithm retains its “logarithmic” regret guarantee in the stochastic regime even when some observations are contaminated by an adversary, as long as on average the contamination does not reduce the gap by more than a half. Our results for the stochastic regime are supported by experimental validation.
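
As a point of reference for the “old” control lever, the sketch below runs a plain EXP3-style update with importance-weighted loss estimates and a uniform exploration floor. It is not the per-arm-tuned algorithm described above: that algorithm would replace the single eps below with arm-specific exploration parameters driven by empirical gap estimates. Sizes, rates and the loss model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

K, T = 10, 5000
losses = rng.uniform(0, 1, size=(T, K))
losses[:, 0] = np.clip(losses[:, 0] - 0.3, 0, 1)   # arm 0 has a gap (stochastic regime)

loss_est = np.zeros(K)                 # importance-weighted cumulative loss estimates
total_loss = 0.0
for t in range(1, T + 1):
    eta = np.sqrt(np.log(K) / (t * K))                      # learning rate (worst-case lever)
    eps = min(1.0 / (2 * K), np.sqrt(np.log(K) / (t * K)))  # uniform exploration floor
    weights = np.exp(-eta * (loss_est - loss_est.min()))
    probs = (1 - K * eps) * weights / weights.sum() + eps
    arm = rng.choice(K, p=probs)
    loss = losses[t - 1, arm]
    loss_est[arm] += loss / probs[arm]                      # unbiased loss estimate
    total_loss += loss

print(f"player loss {total_loss:.0f} vs best arm {losses.sum(axis=0).min():.0f}")
```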

Relevance:

80.00%

Publisher:

Abstract:

Troxel, Lipsitz, and Brennan (1997, Biometrics 53, 857-869) considered parameter estimation from survey data with nonignorable nonresponse and proposed weighted estimating equations to remove the biases in the complete-case analysis that ignores missing observations. This paper suggests two alternative modifications for unbiased estimation of regression parameters when a binary outcome is potentially observed at successive time points. The weighting approach of Robins, Rotnitzky, and Zhao (1995, Journal of the American Statistical Association 90, 106-121) is also modified to obtain unbiased estimating functions. The suggested estimating functions are unbiased only when the missingness probability is correctly specified, and misspecification of the missingness model will result in biases in the estimates. Simulation studies are carried out to assess the performance of different methods when the covariate is binary or normal. For the simulation models used, the relative efficiency of the two new methods to the weighting methods is about 3.0 for the slope parameter and about 2.0 for the intercept parameter when the covariate is continuous and the missingness probability is correctly specified. All methods produce substantial biases in the estimates when the missingness model is misspecified or underspecified. Analysis of data from a medical survey illustrates the use and possible differences of these estimating functions.
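
As generic background for the weighting idea (not the specific estimating functions proposed in the paper), the sketch below solves inverse-probability-weighted score equations for a logistic model when the binary outcome is missing for some subjects and the missingness probabilities are assumed known and correctly specified. All data are simulated, and the setup is a single time point rather than the longitudinal case.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(3)

# Simulated single-visit data: binary outcome y, covariate x, some y unobserved.
n = 2000
x = rng.normal(size=n)
beta_true = np.array([-0.5, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-(beta_true[0] + beta_true[1] * x)))).astype(float)
pi = 1 / (1 + np.exp(-(1.0 + 0.5 * x)))        # response probabilities (assumed known)
observed = rng.binomial(1, pi).astype(bool)

def weighted_score(beta):
    """IPW logistic score: sum_i (R_i / pi_i) * (y_i - mu_i(beta)) * (1, x_i) = 0."""
    mu = 1 / (1 + np.exp(-(beta[0] + beta[1] * x)))
    resid = np.where(observed, y - mu, 0.0)     # unobserved outcomes contribute nothing
    w = observed / pi                           # inverse-probability weights R_i / pi_i
    return np.array([np.sum(w * resid), np.sum(w * resid * x)])

print(fsolve(weighted_score, x0=np.zeros(2)))   # should be close to beta_true
```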

Relevance:

80.00%

Publisher:

Abstract:

DNA evidence has made a significant contribution to criminal investigations in Australia and around the world since it was widely adopted in the 1990s (Gans & Urbas 2002). The direct matching of DNA profiles, such as comparing one obtained from a crime scene with one obtained from a suspect or database, remains a widely used technique in criminal investigations. A range of new DNA profiling techniques continues to be developed and applied in criminal investigations around the world (Smith & Urbas 2012). This paper is the third in a series by the Australian Institute of Criminology (AIC) on DNA evidence. The first, published in 1990 when the technology was in its relative infancy, outlined the scientific background for DNA evidence, considered early issues such as scientific reliability and privacy and described its application in early criminal cases (Easteal & Easteal 1990). The second, published in 2002, expanded on the scientific background and discussed a significant number of Australian cases in a 12-year period, illustrating issues that had arisen in investigations, at trial and in the use of DNA in the review of convictions and acquittals (Gans & Urbas 2002). There have been some significant developments in the science and technology behind DNA evidence in the 13 years since 2002 that have important implications for law enforcement and the legal system. These are discussed through a review of relevant legal cases and the latest empirical evidence. This paper is structured in three sections. The first examines the scientific techniques and how they have been applied in police investigations, drawing on a number of recent cases to illustrate them. The second considers empirical research evaluating DNA evidence and databases and the impact DNA has on investigative and court outcomes. The final section discusses significant cases that establish legal precedent relating to DNA evidence in criminal trials where significant issues have arisen or new techniques have been applied that have not yet been widely discussed in the literature. The paper concludes by reflecting on implications for policy and practice.

Relevance:

40.00%

Publisher:

Abstract:

Enterprise Application Integration (EAI) is a challenging area that is attracting growing attention from the software industry and the research community. A landscape of languages and techniques for EAI has emerged and is continuously being enriched with new proposals from different software vendors and coalitions. However, little or no effort has been dedicated to systematically evaluating and comparing these languages and techniques. The work reported in this paper is a first step in this direction. It presents an in-depth analysis of the Business Modeling Language, a language specifically developed for EAI. The framework used for this analysis is based on a number of workflow and communication patterns. This framework provides a basis for evaluating the advantages and drawbacks of EAI languages with respect to recurrent problems and situations.