778 results for worst-case analysis
Abstract:
BACKGROUND Research on engineering design is a core area of concern within engineering education, and a fundamental understanding of how engineering students approach and undertake design is necessary in order to develop effective design models and pedagogies. Understanding the factors related to design experiences in education and how they affect student practice can help educators as well as designers to leverage these factors as part of the design process. PURPOSE This study investigated the design practices of first-year engineering students and their experiences with a first-year engineering course design project. The research questions that guided the investigation were: 1. From a student perspective, what design parameters or criteria are most important? 2. How does this perspective impact subsequent student design practice throughout the design process? DESIGN/METHOD The authors employed qualitative multi-case study methods (Miles & Huberman, 1994) in order to answer the research questions. Participant teams were observed and video recorded during team design meetings in which they researched the background for the design problem, brainstormed and sketched possible solutions, and built prototypes and final models of their design solutions as part of a course design project. Analysis focused on explanation building (Yin, 2009) and utilized within-case and cross-case analysis (Miles & Huberman, 1994). RESULTS We found that students focused disproportionately on the functional parameter, i.e. the physical implementation of their solution, and the possible/applicable parameter, i.e. a possible and applicable solution that benefited the user, in comparison to other given parameters such as safety and innovativeness. In addition, we found that individual teams focused on the functional and possible/applicable parameters in early design phases such as brainstorming/ideation and sketching. When prompted to discuss these non-salient parameters (from the student perspective) in the final design report, student design teams often used a post-hoc justification to support how the final designs fit the parameters that they did not initially consider. CONCLUSIONS This study suggests that student design teams become fixated on (and consequently prioritize) certain parameters they interpret as important because they feel these parameters were described more explicitly in terms of how they were met and assessed. Students fail to consider other parameters, perceived to be less directly assessable, unless prompted to do so. Failure to consider other parameters in the early design phases subsequently affects their approach in later design phases as well. Case studies examining students' study strategies within three Australian universities illustrate similarities with some student approaches to design.
Abstract:
The growing importance of logistics in increasingly globalised production and consumption systems strengthens the case for explicit consideration of the climate risks that may impact on the operation of ports in the future, as well as the formulation of adaptation responses that act to enhance their resilience. Within a logistics chain, seaports are functional nodes of significant strategic importance, and are considered critical gateways linking local and national supply chains to global markets. However, they are more likely to be exposed to the vagaries of climate-related extreme events due to their coastal locations. As such, they need to be adaptive and respond to the projected impacts of climate change, in particular extreme weather events. These impacts are especially important in the logistics context as they could result in varying degrees of business interruption, including business closure in the worst-case scenario. Since trans-shipment of freight for both the import and export of goods and raw materials has a significant impact on Australia's sustained economic growth, it was considered important to undertake a study of port functional assets, to assess their vulnerability to climate change, to model the potential impacts of climate-related extreme events, and to highlight possible adaptation responses.
Abstract:
This article examines the design of ePortfolios for music postgraduate students utilizing a practice-led, iterative design research process. It is suggested that the availability of Web 2.0 technologies such as blogs and social network software potentially provides creative artists with an opportunity to engage in a dialogue about art with artefacts of the artist's products and processes present in that discussion. The design process applied the Software Development as Research (SoDaR) methodology to simultaneously develop design and pedagogy. The approach to designing ePortfolio systems applied four theoretical protocols to examine the use of digitized artefacts to enable a dynamic and inclusive dialogue around representations of the students' work. A negative case analysis identified a disjuncture between university access and control policy and the relative openness of Web 2.0 systems outside the institution, which led to the design of an integrated model of ePortfolio.
Abstract:
Proxy re-encryption (PRE) is a highly useful cryptographic primitive whereby Alice and Bob can endow a proxy with the capacity to change ciphertext recipients from Alice to Bob, without the proxy itself being able to decrypt, thereby providing delegation of decryption authority. Key-private PRE (KP-PRE) specifies an additional level of confidentiality, requiring pseudo-random proxy keys that leak no information on the identity of the delegators and delegatees. In this paper, we propose a CPA-secure KP-PRE scheme in the standard model (which we then transform into a CCA-secure scheme in the random oracle model). Both schemes enjoy highly desirable properties such as uni-directionality and multi-hop delegation. Unlike (the few) prior constructions of PRE and KP-PRE that typically rely on bilinear maps under ad hoc assumptions, the security of our construction is based on the hardness of the standard Learning-With-Errors (LWE) problem, itself reducible from worst-case lattice hard problems that are conjectured immune to quantum cryptanalysis, or "post-quantum". Of independent interest, we further examine the practical hardness of the LWE assumption, using Kannan's exhaustive search algorithm coupled with pruning techniques. This leads to state-of-the-art parameters not only for our scheme, but also for a number of other primitives based on LWE published in the literature.
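For readers unfamiliar with the LWE assumption referenced in this abstract, the toy sketch below generates LWE samples (a, b = <a, s> + e mod q). The parameter values (n, q, sigma, m) are illustrative placeholders, not the concrete parameters derived in the paper.

```python
# Toy LWE sample generation -- a sketch of the hardness assumption, not the paper's scheme.
import numpy as np

def lwe_samples(n=256, q=4093, sigma=3.2, m=512, rng=np.random.default_rng(0)):
    s = rng.integers(0, q, size=n)                          # secret vector in Z_q^n
    A = rng.integers(0, q, size=(m, n))                     # uniform public matrix
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)   # small Gaussian-like error
    b = (A @ s + e) % q                                     # noisy inner products
    return A, b, s

A, b, s = lwe_samples()
# Distinguishing (A, b) from (A, uniform) -- decision-LWE -- is conjectured hard,
# with reductions from worst-case lattice problems.
```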
Abstract:
What is the state of geographical education in the second decade of the 21st century? This volume presents a selection of peer-reviewed papers presented at the 2012 Cologne Congress of the International Geographical Union (IGU) sessions on Geographical Education as representative of current thinking in the area. It then presents (perhaps for the first time) a cross-case analysis of the common factors of all these papers as a current summary of the "state of the art" of geographical education today. The primary aim of the individual authors as well as the editors is not only to record the current state of the art of geographical education but also to promote ongoing discussions of the longer-term health and future prospects of international geographical education. We wish to encourage ongoing debate and discussion amongst local, national, regional and international education journals, conferences and discussion groups as part of the international mission of the Commission on Geographical Education. While the currency of these chapters, in terms of their foci, the breadth and recency of the theoretical literature on which they are based, and the new research findings they present, justifies considerable confidence in the current health of geographical education as an educational and research endeavour, each new publication should only be the start of new scholarly inquiry. Where should we, as a scholarly community, place our energies for the future? If readers are left with a new sense of direction, then the aims of the authors and editors will have been amply met.
Abstract:
We study two problems of online learning under restricted information access. In the first problem, prediction with limited advice, we consider a game of prediction with expert advice, where on each round of the game we query the advice of a subset of M out of N experts. We present an algorithm that achieves $O(\sqrt{(N/M)T\ln N})$ regret on T rounds of this game. The second problem, the multiarmed bandit with paid observations, is a variant of the adversarial N-armed bandit game, where on round t of the game we can observe the reward of any number of arms, but each observation has a cost c. We present an algorithm that achieves $O((cN\ln N)^{1/3}T^{2/3}+\sqrt{T\ln N})$ regret on T rounds of this game in the worst case. Furthermore, we present a number of refinements that treat arm- and time-dependent observation costs and achieve lower regret under benign conditions. We present lower bounds that show that, apart from the logarithmic factors, the worst-case regret bounds cannot be improved.
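As a rough illustration of the limited-advice setting (not the authors' algorithm), the sketch below runs exponential weights while observing only a uniformly sampled subset of M of the N expert losses each round, replacing unobserved losses with importance-weighted estimates; the step size and the example data are assumptions.

```python
# Minimal limited-advice sketch: Hedge with importance-weighted loss estimates.
import numpy as np

def limited_advice_hedge(loss_matrix, M, eta=0.1, rng=np.random.default_rng(0)):
    """Exponential weights with only M out of N expert losses observed per round."""
    T, N = loss_matrix.shape
    L_hat = np.zeros(N)                               # cumulative estimated losses
    total_loss = 0.0
    for t in range(T):
        w = np.exp(-eta * (L_hat - L_hat.min()))      # stabilised exponential weights
        p = w / w.sum()
        total_loss += p @ loss_matrix[t]              # expected loss of the mixture
        observed = rng.choice(N, size=M, replace=False)
        ell_hat = np.zeros(N)
        ell_hat[observed] = loss_matrix[t, observed] * (N / M)   # unbiased estimates
        L_hat += ell_hat
    return total_loss

# Example: T=1000 rounds, N=10 experts, losses in [0, 1], query M=3 experts per round.
losses = np.random.default_rng(1).random((1000, 10))
print(limited_advice_hedge(losses, M=3))
```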
Abstract:
Investment in early childhood education and care (ECEC) programs is a cornerstone policy of the Australian Government directed toward increasing the educational opportunities and life chances made available to Australian Aboriginal and Torres Strait Islander (Indigenous) children. Yet, ECEC programs are not always effective in supporting sustained attendance by Indigenous families. A site-case analysis of Mount Isa, Queensland was conducted to identify program features that engage and support attendance of Indigenous families. This first study reports the perspectives of early childhood professionals from across the entire range of group-based licensed (kindergarten and long day care) and non-licensed (playgroups, parent-child education) programs (n=19). Early childhood professionals reported that Indigenous families preferred non-licensed over licensed programs. Reasons suggested for this choice were that non-licensed services provided integration with family supports, were responsive to family circumstances and had a stronger focus on relationship building. Implications for policy and service provision are discussed.
Abstract:
Enterprise Resource Planning (ERP) systems are integrated, enterprise-wide standard information systems that automate all aspects of an organisation's business processes. The ERP philosophy is that business systems incorporating sales, marketing, manufacturing, distribution, personnel and finance modules can be supported by a single integrated system with all of the company's data captured in a central database. The ERP packages of vendors such as SAP, Baan, J.D. Edwards and Intentia represent more than a common systems platform for a business. They prescribe information blueprints of how an organisation's business processes should operate. In this paper, the scale and strategic importance of ERP systems are identified and the problem of ERP implementation is defined. Five company examples are analysed using a Critical Success Factors (CSFs) theoretical framework. The paper offers a framework for managers which provides the basis for developing an ERP implementation strategy. The case analysis identifies different approaches to ERP implementation, highlights the critical role of legacy systems in influencing the implementation process, and identifies the importance of business process change and software configuration in addition to factors already cited in the literature such as top management support and communication. The implications of the results and future research opportunities are outlined.
Abstract:
Purpose This paper seeks to investigate the conditions and processes affecting the operation and potential effectiveness of audit committees (ACs), with particular focus on the interaction between the AC, individuals from financial reporting and internal audit functions and the external auditors. Design/methodology/approach A case study approach is employed, based on direct engagement with participants in AC activities, including the AC chair, external auditors, internal auditors, and senior management. Findings The authors find that informal networks between AC participants condition the impact of the AC and that the most significant effects of the AC on governance outcomes occur outside the formal structures and processes. An AC has pervasive behavioural effects within the organization and may be used as a threat, an ally and an arbiter in bringing solutions to issues and conflicts. ACs are used in organizational politics, communication processes and power plays and also affect interpretations of events and cultural values. Research limitations/implications Further research on AC and governance processes is needed to develop better understanding of effectiveness. Longitudinal studies, focusing on the organizational and institutional context of AC operations, can examine how historical events in an organization and significant changes in the regulatory environment affect current structures and processes. Originality/value The case analysis highlights a number of significant factors which are not fully recognised either in theorizing the governance role of ACs or in the development of policy and regulations concerning ACs but which impinge on their governance contribution. They include the importance of informal processes around the AC; its influence on power relations between organizational participants; the relevance of the historical development of governance in an organization; and the possibility that the AC’s impact on governance may be greatest in non-routine situations.
Abstract:
Formal incentive systems aim to encourage improved performance by offering a reward for the achievement of project-specific goals. Despite the argued benefits of incentive systems for project delivery outcomes, there remains debate over how incentive systems can be designed to encourage the formation of strong project relationships within a complex social system such as an infrastructure project. This challenge is compounded by the increasing emphasis in construction management research on the important mediating influence of technical and organisational context on project performance. In light of this challenge, the research presented in this paper focuses on the design of incentive systems in four infrastructure projects: two road reconstructions in the Netherlands and two building constructions in Australia. Based on a motivational theory frame, a cross-case analysis is conducted to examine differences and similarities across social and cultural drivers impacting on the effectiveness of the incentive systems in light of infrastructure project context. Despite significant differences in case project characteristics, results indicate the projects experience similar social drivers impacting on incentive effectiveness. Significant value across the projects was placed on: varied performance goals and multiple opportunities across the project team to pursue incentive rewards; fair risk allocation across contract parties; value-driven tender selection; improved design-build integration; and promotion of future work opportunities. However, differences across the contexts were identified. Results suggest future work opportunities were a more powerful social driver in upholding reputation and establishing strong project relationships in the Australian context. On the other hand, the relationship initiatives in the Dutch context seemed to be more broadly embraced, resulting in a greater willingness to collaboratively manage project risk. Although there are limitations with this research in drawing generalizations across two sets of case projects, the results provide a strong base to explore the social and cultural influences on incentive effectiveness across different geographical and contextual boundaries in future research.
Abstract:
Initial attempts to obtain lattice-based signatures were closely related to reducing a vector modulo the fundamental parallelepiped of a secret basis (like GGH [9], or NTRUSign [12]). This approach leaked some information on the secret, namely the shape of the parallelepiped, which has been exploited in practical attacks [24]. NTRUSign was an extremely efficient scheme, and thus there has been a noticeable interest in developing countermeasures to the attacks, but with little success [6]. In [8] Gentry, Peikert and Vaikuntanathan proposed a randomized version of Babai's nearest plane algorithm such that the distribution of a reduced vector modulo a secret parallelepiped depended only on the size of the basis used. Using this algorithm and generating large, close to uniform, public keys they managed to get provably secure GGH-like lattice-based signatures. Recently, Stehlé and Steinfeld obtained a provably secure scheme very close to NTRUSign [26] (from a theoretical point of view). In this paper we present an alternative approach to seal the leak of NTRUSign. Instead of modifying the lattices and algorithms used, we compute a classic leaky NTRUSign signature and hide it with Gaussian noise using techniques present in Lyubashevsky's signatures. Our main contributions are thus a set of strong NTRUSign parameters, obtained by taking into account the latest known attacks against the scheme, and a statistical way to hide the leaky NTRU signature so that this particular instantiation of a CVP-based signature scheme becomes zero-knowledge and secure against forgeries, based on the worst-case hardness of the $\tilde{O}(N^{1.5})$-Shortest Independent Vector Problem over NTRU lattices. Finally, we give a set of concrete parameters to gauge the efficiency of the obtained signature scheme.
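The Gaussian-noise hiding mentioned above follows the generic Lyubashevsky-style rejection-sampling idea. The sketch below illustrates that idea on a plain real-valued vector with made-up parameters; it is not the paper's concrete NTRUSign instantiation.

```python
# Rejection-sampling sketch: make z = y + c statistically independent of the leaky part c.
import numpy as np

rng = np.random.default_rng(0)

def hide_with_gaussian(c, sigma, M=3.0):
    """Output z = y + c whose distribution is (essentially) independent of c."""
    while True:
        y = rng.normal(0.0, sigma, size=c.shape)    # Gaussian masking noise
        z = y + c                                   # candidate output (leaky before rejection)
        # Accept with probability min(1, rho_sigma(z) / (M * rho_{sigma,c}(z))).
        log_ratio = (-2.0 * (z @ c) + c @ c) / (2.0 * sigma ** 2)
        if rng.random() < min(1.0, np.exp(log_ratio) / M):
            return z

c = rng.integers(-5, 6, size=64).astype(float)      # stand-in for the secret-dependent part
sigma = 12.0 * np.linalg.norm(c)                    # sigma chosen large relative to ||c||
z = hide_with_gaussian(c, sigma)
print(np.linalg.norm(z) / sigma)                    # z looks like a centred Gaussian of width sigma
```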
Abstract:
A number of online algorithms have been developed that have small additional loss (regret) compared to the best "shifting expert". In this model, there is a set of experts and the comparator is the best partition of the trial sequence into a small number of segments, where the expert of smallest loss is chosen in each segment. The regret is typically defined for worst-case data/loss sequences. There has been a recent surge of interest in online algorithms that combine good worst-case guarantees with much improved performance on easy data. A practically relevant class of easy data is the case when the loss of each expert is iid and the best and second best experts have a gap between their mean losses. In the full information setting, the FlipFlop algorithm by De Rooij et al. (2014) combines the best of the iid-optimal Follow-The-Leader (FL) and the worst-case-safe Hedge algorithms, whereas in the bandit information case SAO by Bubeck and Slivkins (2012) competes with the iid-optimal UCB and the worst-case-safe EXP3. We ask the same question for the shifting expert problem. First, we ask what are the simple and efficient algorithms for the shifting experts problem when the loss sequence in each segment is iid with respect to a fixed but unknown distribution. Second, we ask how to efficiently unite the performance of such algorithms on easy data with worst-case robustness. A particularly intriguing open problem is the case when the comparator shifts within a small subset of experts from a large set under the assumption that the losses in each segment are iid.
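For context, a classical baseline for the shifting-experts comparator is Fixed-Share (Herbster & Warmuth); a minimal sketch follows, with eta and alpha as assumed tuning parameters rather than values taken from this abstract.

```python
# Fixed-Share sketch: Hedge update followed by uniform weight sharing to track shifts.
import numpy as np

def fixed_share(loss_matrix, eta=0.5, alpha=0.01):
    """Track the best expert per segment by sharing a small weight mass each round."""
    T, N = loss_matrix.shape
    w = np.full(N, 1.0 / N)                       # weights over experts
    total_loss = 0.0
    for t in range(T):
        total_loss += w @ loss_matrix[t]          # expected loss of the weighted mixture
        v = w * np.exp(-eta * loss_matrix[t])     # multiplicative (Hedge) update
        v /= v.sum()
        w = (1 - alpha) * v + alpha / N           # share a little mass uniformly so the
                                                  # algorithm can follow a new best expert
    return total_loss

# Example: the best expert shifts halfway through the sequence.
rng = np.random.default_rng(2)
losses = rng.random((1000, 5))
losses[:500, 0] *= 0.2                            # expert 0 is best in the first segment
losses[500:, 3] *= 0.2                            # expert 3 is best in the second segment
print(fixed_share(losses))
```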
Abstract:
The development and maintenance of large and complex ontologies are often time-consuming and error-prone. Thus, automated ontology learning and revision have attracted intensive research interest. In data-centric applications where ontologies are designed or automatically learnt from the data, when new data instances are added that contradict the ontology, it is often desirable to incrementally revise the ontology according to the added data. This problem can be intuitively formulated as the problem of revising a TBox by an ABox. In this paper we introduce a model-theoretic approach to such an ontology revision problem by using a novel alternative semantic characterisation of DL-Lite ontologies. We show several desirable properties of our ontology revision. We have also developed an algorithm for reasoning with the ontology revision without computing the revision result. The algorithm is efficient, as its computational complexity is in coNP in the worst case and in PTIME when the size of the new data is bounded.
Abstract:
Intermittent generation from wind farms leads to fluctuating power system operating conditions, pushing the stability margin to its limits. The traditional way of determining the worst-case generation dispatch for a system with several semi-scheduled wind generators yields a conservative solution. This paper proposes a fast estimation of the transient stability margin (TSM) incorporating the uncertainty of wind generation. First, the Kalman filter (KF) is used to provide a linear estimate of the system angle, and then the unscented transformation (UT) is used to estimate the distribution of the TSM. The proposed method is compared with the traditional Monte Carlo (MC) method, and its effectiveness is verified using a Single Machine Infinite Bus (SMIB) system and the IEEE 14-generator Australian dynamic system. This method will aid grid operators in performing fast online calculations to estimate the TSM distribution of a power system with high levels of intermittent wind generation.
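The unscented transformation step referred to above can be illustrated with a small sketch: sigma points are propagated through a nonlinear map to recover the output mean and variance. The stand-in "margin" function and parameter values below are assumptions, not the paper's power system model.

```python
# Unscented transformation sketch: estimate the distribution of a nonlinear function's output.
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Estimate mean and variance of f(x) for x ~ N(mean, cov) via sigma points."""
    n = mean.shape[0]
    L = np.linalg.cholesky((n + kappa) * cov)               # matrix square root
    sigma_pts = np.vstack([mean, mean + L.T, mean - L.T])   # 2n + 1 sigma points
    weights = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    weights[0] = kappa / (n + kappa)
    y = np.array([f(x) for x in sigma_pts])                 # propagate through the nonlinearity
    y_mean = weights @ y
    y_var = weights @ (y - y_mean) ** 2
    return y_mean, y_var

# Toy example: a stand-in nonlinear "margin" as a function of two uncertain wind outputs.
mean = np.array([1.0, 0.5])
cov = np.diag([0.04, 0.01])
margin = lambda x: np.sin(x[0]) * x[1] ** 2
print(unscented_transform(mean, cov, margin))
```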
Abstract:
Background Bloodstream infections resulting from intravascular catheters (catheter-BSI) in critical care increase patients' length of stay, morbidity and mortality, and the management of these infections and their complications has been estimated to cost the NHS annually £19.1–36.2M. Catheter-BSI are thought to be largely preventable using educational interventions, but guidance as to which types of intervention might be most clinically effective is lacking. Objective To assess the effectiveness and cost-effectiveness of educational interventions for preventing catheter-BSI in critical care units in England. Data sources Sixteen electronic bibliographic databases – including MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, Cumulative Index to Nursing and Allied Health Literature (CINAHL), NHS Economic Evaluation Database (NHS EED), EMBASE and The Cochrane Library databases – were searched from database inception to February 2011, with searches updated in March 2012. Bibliographies of systematic reviews and related papers were screened and experts contacted to identify any additional references. Review methods References were screened independently by two reviewers using a priori selection criteria. A descriptive map was created to summarise the characteristics of relevant studies. Further selection criteria developed in consultation with the project Advisory Group were used to prioritise a subset of studies relevant to NHS practice and policy for systematic review. A decision-analytic economic model was developed to investigate the cost-effectiveness of educational interventions for preventing catheter-BSI. Results Seventy-four studies were included in the descriptive map, of which 24 were prioritised for systematic review. Studies have predominantly been conducted in the USA, using single-cohort before-and-after study designs. Diverse types of educational intervention appear effective at reducing the incidence density of catheter-BSI (risk ratios statistically significantly < 1.0), but single lectures were not effective. The economic model showed that implementing an educational intervention in critical care units in England would be cost-effective and potentially cost-saving, with incremental cost-effectiveness ratios under worst-case sensitivity analyses of < £5000/quality-adjusted life-year. Limitations Low-quality primary studies cannot definitively prove that the planned interventions were responsible for observed changes in catheter-BSI incidence. Poor reporting gave unclear estimates of risk of bias. Some model parameters were sourced from other locations owing to a lack of UK data. Conclusions Our results suggest that it would be cost-effective and may be cost-saving for the NHS to implement educational interventions in critical care units. However, more robust primary studies are needed to exclude the possible influence of secular trends on observed reductions in catheter-BSI.
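For reference, the incremental cost-effectiveness ratio (ICER) quoted above is the incremental cost divided by the incremental QALYs gained; the figures in the sketch below are made-up placeholders rather than outputs of the authors' model.

```python
# ICER sketch with placeholder figures (not the model's results).
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio in GBP per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# 20,000 GBP extra cost for 5 extra QALYs -> 4,000 GBP/QALY, which would fall under
# the < 5,000 GBP/QALY worst-case threshold quoted above.
print(icer(cost_new=120_000, cost_old=100_000, qaly_new=50.0, qaly_old=45.0))
```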