95 results for worst-case analysis


Relevance: 100.00%

Abstract:

In this paper we present a novel application of scenario methods to engage a diverse constituency of senior stakeholders, with limited time availability, in debate to inform planning and policy development. Our case study project explores post-carbon futures for the Latrobe Valley region of the Australian state of Victoria. Our approach involved initial deductive development of two ‘extreme scenarios’ by a multi-disciplinary research team, based upon an extensive research programme. Over four workshops with the stakeholder constituency, these initial scenarios were discussed, challenged, refined and expanded through an inductive process, whereby participants took ‘ownership’ of a final set of three scenarios that they found both comfortable and challenging. The outcomes of this process subsequently informed public policy development for the region. Whilst this process did not follow a single extant structured, multi-stage scenario approach, neither was it devoid of form. Here, we seek to theorise and codify elements of our process – which we term ‘scenario improvisation’ – such that others may adopt it.

Relevance: 100.00%

Abstract:

We propose a new way to build a combined list from K base lists, each containing N items. A combined list consists of top segments of various sizes from each base list so that the total size of all top segments equals N. A sequence of item requests is processed and the goal is to minimize the total number of misses. That is, we seek to build a combined list that contains all the frequently requested items. We first consider the special case of disjoint base lists. There, we design an efficient algorithm that computes the best combined list for a given sequence of requests. In addition, we develop a randomized online algorithm whose expected number of misses is close to that of the best combined list chosen in hindsight. We prove lower bounds that show that the expected number of misses of our randomized algorithm is close to the optimum. In the presence of duplicate items, we show that computing the best combined list is NP-hard. We show that our algorithms still apply to a linearized notion of loss in this case. We expect that this new way of aggregating lists will find many ranking applications.
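As a rough illustration of computing the best combined list in hindsight for disjoint base lists, the sketch below uses a simple O(K·N²) dynamic programme over segment sizes. It is a minimal reconstruction from the problem statement, not the paper's (more efficient) algorithm or its randomized online counterpart; the function name and toy data are invented.

```python
from collections import Counter

def best_combined_list(base_lists, requests):
    """Hindsight-optimal top-segment sizes for disjoint base lists.

    base_lists: K lists of N items each, all items distinct.
    requests:   iterable of requested items.
    Returns (sizes, hits): segment sizes summing to N and total hits.
    A simple O(K * N^2) dynamic programme; a sketch, not necessarily
    the paper's algorithm.
    """
    counts = Counter(requests)
    n = len(base_lists[0])

    # prefix[i][k] = requests served by the top-k segment of list i
    prefix = []
    for lst in base_lists:
        acc, row = 0, [0]
        for item in lst:
            acc += counts[item]
            row.append(acc)
        prefix.append(row)

    NEG = float("-inf")
    dp = [0] + [NEG] * n          # dp[t] = best hits with total size t
    choice = []
    for row in prefix:
        new, arg = [NEG] * (n + 1), [0] * (n + 1)
        for t in range(n + 1):
            for k in range(t + 1):
                if dp[t - k] + row[k] > new[t]:
                    new[t], arg[t] = dp[t - k] + row[k], k
        dp = new
        choice.append(arg)

    sizes, t = [], n              # backtrack the chosen segment sizes
    for arg in reversed(choice):
        sizes.append(arg[t])
        t -= arg[t]
    sizes.reverse()
    return sizes, dp[n]

lists = [["a", "b", "c"], ["x", "y", "z"]]
reqs = ["a", "x", "x", "y", "a", "b"]
print(best_combined_list(lists, reqs))  # ([2, 1], 5)
```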

Relevance: 100.00%

Abstract:

We consider online trading in a single security with the objective of getting rich when its price ever exhibits a large upcrossing, without risking bankruptcy. We investigate payoff guarantees that are expressed in terms of the extremity of the upcrossings. We obtain an exact and elegant characterisation of the guarantees that can be achieved. Moreover, we derive a simple canonical strategy for each attainable guarantee.
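As a toy illustration of an upcrossing payoff guarantee (an assumption-laden sketch, not the paper's canonical strategy), the following stakes a fixed fraction of capital at a lower level a and liquidates at an upper level b: if the price path upcrosses [a, b], the stake multiplies by at least b/a, while the unstaked reserve rules out bankruptcy. All names and numbers are illustrative.

```python
def upcrossing_strategy(prices, a, b, stake=0.5, capital=1.0):
    """Toy strategy: risk a fixed fraction of capital on the
    upcrossing of [a, b]; keep the rest as a reserve so the final
    wealth can never reach zero."""
    reserve = capital * (1 - stake)
    cash, units = capital * stake, 0.0
    for p in prices:
        if units == 0.0 and p <= a:
            units, cash = cash / p, 0.0    # buy in at (or below) a
        elif units > 0.0 and p >= b:
            cash, units = units * p, 0.0   # realise the upcrossing
    return reserve + cash + units * prices[-1]

# a path that upcrosses [1.0, 2.0]: payoff >= 0.5 + 0.5 * (2.0 / 1.0) = 1.5
print(upcrossing_strategy([1.2, 1.0, 0.9, 1.5, 2.1, 1.8], a=1.0, b=2.0))
```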

Relevance: 100.00%

Abstract:

Floods are among the most devastating events that primarily affect tropical, archipelagic countries such as the Philippines. With current predictions of climate change set to include rising sea levels, intensification of typhoon strength and a general increase in mean annual precipitation throughout the Philippines, it has become paramount to prepare for the future so that the increased risk of floods in the country does not translate into more economic and human loss. Field work and data gathering were done within the framework of an internship at the former German Technical Cooperation (GTZ) in cooperation with the Local Government Unit of Ormoc City, Leyte, The Philippines, in order to develop a dynamic computer-based flood model for the basin of the Pagsangaan River. To this end, different geo-spatial analysis tools such as PCRaster and ArcGIS, hydrological analysis packages and basic engineering techniques were assessed and implemented. The aim was to develop a dynamic flood model and use the development process to determine the required data, their availability and their impact on the results, as a case study for flood early warning systems in the Philippines. The hope is that such projects can help to reduce flood risk by including the results of worst-case scenario analyses and current climate change predictions in city planning for municipal development, monitoring strategies and early warning systems. The project was developed using a 1D-2D coupled model in SOBEK (the Deltares hydrological modelling software package) and was also used as a case study to analyze and understand the influence of factors such as land use, schematization, time step size and tidal variation on the flood characteristics. Several sources of relevant satellite data were compared, such as Digital Elevation Models (DEMs) from ASTER and SRTM data, as well as satellite rainfall data from the GIOVANNI server (NASA) and field gauge data. Different methods were used in the attempt to partially calibrate and validate the model, and finally to simulate and study two climate change scenarios based on the A1B scenario predictions. It was observed that large areas currently considered not prone to floods will become low-flood-risk areas (0.1-1 m water depth). Furthermore, larger sections of the floodplains upstream of the Lilo-an’s Bridge will become moderate-flood-risk areas (1-2 m water depth). The flood hazard maps created during the project will be presented to the LGU, and the model will be used by GTZ’s Local Disaster Risk Management Department to create a larger set of possible flood-prone areas related to rainfall intensity and to study possible improvements to the current early warning system and monitoring of the basin section belonging to Ormoc City. Recommendations on further enhancing the geo-hydro-meteorological data to improve the model’s accuracy, mainly in areas of interest, will also be presented to the LGU.
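As a minimal sketch of the risk-class mapping quoted above, the snippet below classifies a grid of simulated maximum water depths using the stated thresholds (0.1-1 m low risk, 1-2 m moderate risk). The ≥2 m "high" class, the function name and the toy grid are assumptions for illustration; this is not part of the SOBEK workflow itself.

```python
import numpy as np

def classify_flood_hazard(depth):
    """Map water depth (m) to hazard classes: 0 = not flood-prone,
    1 = low (0.1-1 m), 2 = moderate (1-2 m), 3 = high (>= 2 m, assumed)."""
    classes = np.zeros(depth.shape, dtype=np.uint8)
    classes[(depth >= 0.1) & (depth < 1.0)] = 1
    classes[(depth >= 1.0) & (depth < 2.0)] = 2
    classes[depth >= 2.0] = 3
    return classes

depth_grid = np.array([[0.05, 0.4], [1.3, 2.6]])  # toy depth raster (m)
print(classify_flood_hazard(depth_grid))          # [[0 1] [2 3]]
```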

Relevance: 100.00%

Abstract:

The integration of stochastic wind power has accentuated a challenge for power system stability assessment. Since the power system is time-variant under wind generation fluctuations, it is difficult for pure time-domain simulation to provide real-time stability assessment, so the worst-case scenario is usually simulated to give a very conservative assessment of system transient stability. In this study, a probabilistic contingency analysis based on a stability measure is proposed to provide a less conservative contingency analysis that covers 5-min wind fluctuations and a successive fault. This probabilistic approach estimates the transfer limit of a critical line for a given fault with stochastic wind generation and active control devices in a multi-machine system. It achieves a lower computation cost and improved accuracy using a new stability measure and polynomial interpolation, and is feasible for online contingency analysis.
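As a hedged sketch of the polynomial interpolation step (the stability measure, sample values and threshold below are invented for illustration; the paper's actual measure is not modelled), one can fit a low-order polynomial to a few expensive simulated cases and solve for the transfer level at which the measure crosses a critical value.

```python
import numpy as np

transfers = np.array([200.0, 300.0, 400.0, 500.0])  # simulated cases (MW)
measure   = np.array([0.82, 0.64, 0.41, 0.12])      # stability margin (toy)
threshold = 0.2                                     # critical value (toy)

coeffs = np.polyfit(transfers, measure, deg=2)      # low-order polynomial fit
roots = np.roots(np.polysub(coeffs, [threshold]))   # solve measure == threshold
limit = min(r.real for r in roots
            if abs(r.imag) < 1e-9 and transfers[0] <= r.real <= transfers[-1])
print(f"estimated transfer limit ~ {limit:.0f} MW")
```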

Relevance: 90.00%

Abstract:

The linguistics of violence in film and on television is a hotly debated topic, especially whenever outrageously violent crimes are committed in the community. The debate tends to proceed thus: was the perpetrator addicted to watching violent films and videos, and if so, did the language of mediated violence translate into the language of everyday action, blurring the boundaries between fantasy and reality? The cause-effect relationship between fantasies enacted on screen and horrific real-life crimes has never been proven scientifically, despite endless governmental inquiries and many attempts by academics to discover a causation formula. I will not be looking so much at the vexed question of the relationship between stylized violence on celluloid and real violence in a community. Rather, I wish to explore the nature of a particular form of mediated, gendered violence through an analysis of the language of several key films made in the past decade focusing on the violent crime of rape: the Hollywood films The Accused (1988), Casualties of War (1989), Thelma and Louise (1991) and Strange Days (1995), and the Australian films Shame (1988) and The Boys (1998). In this way, I wish to show how rape is depicted linguistically in film, and how such films may actually offer solutions to this abhorrent kind of violence rather than thrill the viewer vicariously or, in a worst-case scenario, stimulate people to further violence.

Relevance: 90.00%

Abstract:

The critical impact of innovation on national and global economies has been discussed at length in the literature. Economic development requires the diffusion of innovations into markets, and it has long been recognised that economic growth and development depend upon a constant stream of innovations. Governments have been keenly aware of the need to ensure this flow does not dry to a trickle, and have introduced many and varied industry policies and interventions to assist in seeding, supporting and diffusing innovations. In Australia, as in many countries, government support for the transfer of knowledge, especially from publicly funded research, has resulted in the creation of knowledge exchange intermediaries. These intermediaries are themselves service organisations, seeking innovative service offerings for their markets, and the choice for most is a dichotomous one, between market-pull and technology-push knowledge exchange programmes. In this article, we undertake a case analysis of one such innovative intermediary and its flagship programme, and then compare this case with other successful intermediaries in Europe. We put forward the research proposition that the design of intermediary programmes must match the service type they offer: market-pull programmes require market-pull design, in close collaboration with industry, whereas technology-push programmes can be problem-solving innovations for which demand is latent. The discussion reflects the need for an evolution in knowledge transfer policies and programmes beyond the first generation ushered in by the US Bayh-Dole Act (1980) and Stevenson-Wydler Act (1984). The data analysed are a case-study comparison of market-pull and technology-push programmes, focusing on primary and secondary socio-economic benefits (using both Australian and international comparisons).

Relevance: 90.00%

Abstract:

In recent years, multilevel converters have become more popular and attractive than traditional converters in high-voltage, high-power applications. They are particularly suitable for harmonic reduction in high-power applications where semiconductor devices cannot operate at high switching frequencies, and for high-voltage applications where they reduce the need to connect devices in series to achieve high switch-voltage ratings. This thesis investigated two aspects of multilevel converters: structure and control.

The first part of the thesis focuses on the inductance between the DC supply and the inverter components, in order to minimise the loop inductance that causes overvoltages and stored-energy losses during switching. Three-dimensional finite element simulations and experimental tests were carried out for all sections to verify the theoretical developments. The major contributions of this part are as follows.

Using a large-area thin conductor sheet of rectangular cross-section separated by dielectric sheets (a planar busbar), instead of wires of circular cross-section, reduces the stray inductance. A number of approximate equations exist for the inductance of a rectangular conductor, but they assume a uniform current density throughout the conductor, which is not valid for an inverter with point injection of current. A mathematical analysis of a planar busbar was therefore performed at low and high frequencies, and the inductance and resistance between two points of the busbar were determined.

A new physical structure for a voltage source inverter with a symmetrical planar busbar structure, called the Reduced Layer Planar Busbar, is proposed based on the current point-injection theory. This new type of planar busbar minimises the variation in stray inductance across switching states and is a new innovation in planar busbars for high-power inverters, offering minimum separation between busbars, optimum stray inductance and improved thermal performance. It is suitable for high-power inverters in which the voltage source is supported by several capacitors in parallel to provide a low-ripple DC voltage during operation.

A two-layer planar busbar made of different materials was analysed theoretically to determine the busbar resistance during switching. Increasing the resistance of the planar busbar improves the damping between the stray inductance and capacitance and affects the behaviour of the current loop during switching. The aim of this section is to increase the resistance of the planar busbar at high frequencies (during switching), by exploiting the skin effect, without significantly increasing its resistance at the 50 Hz operating frequency. This contribution presents a novel busbar structure suitable for high-power applications where high resistance is required at switching times.

In multilevel converters there are different loop inductances between the busbars and the power switches for different switching states. All combinations of the switching states for each multilevel converter topology were therefore considered, and the loop inductance was identified for each switching state. The results show that the physical layout of the busbars is very important for minimising the loop inductance in each switching state. Novel symmetrical busbar structures are proposed for multilevel converters with diode-clamped and flying-capacitor topologies, minimising the worst-case stray inductance across the different switching states; overshoot voltages and thermal problems are considered for each topology in order to optimise the planar busbar structure.

In the second part of the thesis, closed-loop current control techniques are investigated for single- and three-phase multilevel converters. The aim is to investigate and propose suitable current controllers, such as hysteresis and predictive techniques, for multilevel converters with low harmonic distortion and low switching losses. This part comprises three contributions.

First, an optimum space vector modulation (SVM) technique for a three-phase voltage source inverter based on a minimum-loss strategy is proposed. One of the degrees of freedom in optimising SVM is the selection of the zero vectors in the switching sequence; the new method improves the number of switching transitions per cycle for a given level of distortion because the zero vector does not alternate between sectors. The harmonic spectrum and the weighted total harmonic distortion (WTHD) of these strategies are compared, and the results show up to a 7% WTHD improvement over the previous minimum-loss strategy. The SVM concept is also a very convenient representation of a set of three-phase voltages or currents for the current control techniques that follow.

Second, a new hysteresis current control technique for a single-phase multilevel converter with flying-capacitor topology is developed. The technique is based on magnitude and time errors, used to optimise the level changes of the converter output voltage, and it also rebalances the capacitor voltages by means of voltage vectors in order to minimise switching losses. Since the logic control must handle a large number of switches, a Programmable Logic Device (PLD) is a natural implementation of the state-transition description. Simulation and experimental results describe and verify this current control technique.

Third, a novel predictive current control technique is proposed for a three-phase multilevel converter, which controls the capacitor voltages and the load current with minimum current ripple and switching losses. The advantage of this contribution is that the technique can be extended to more voltage levels without significantly changing the control circuit. A three-phase five-level inverter with a purely inductive load was implemented to track three-phase reference currents using analogue circuits and a programmable logic device.
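As a rough illustration of multilevel hysteresis current control (a sketch with invented parameters, not the thesis controller), the simulation below steps a five-level output voltage one level up or down whenever the current of a series R-L load leaves a tolerance band around a 50 Hz reference.

```python
import numpy as np

levels = np.linspace(-200.0, 200.0, 5)        # available output levels (V)
R, L, dt, band = 1.0, 5e-3, 1e-5, 0.5         # toy load, time step, band (A)

t = np.arange(0.0, 0.04, dt)
i_ref = 10.0 * np.sin(2 * np.pi * 50.0 * t)   # 50 Hz reference current (A)
i, idx, switchings = 0.0, 2, 0                # start at the 0 V level
for iref in i_ref:
    err = iref - i
    if err > band and idx < len(levels) - 1:
        idx += 1                              # current too low: step level up
        switchings += 1
    elif err < -band and idx > 0:
        idx -= 1                              # current too high: step level down
        switchings += 1
    i += dt / L * (levels[idx] - R * i)       # Euler step of L di/dt = v - R i
print(f"{switchings} level changes, final error {abs(i_ref[-1] - i):.3f} A")
```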

Relevance: 90.00%

Abstract:

The effects of tumour motion during radiation therapy delivery have been widely investigated. Motion effects have become increasingly important with the introduction of dynamic delivery modalities such as enhanced dynamic wedges (EDWs) and intensity modulated radiation therapy (IMRT), where a dynamically collimated beam is delivered to a moving target, resulting in dose blurring and in interplay effects that are a consequence of the combined tumour and beam motion. Prior to this work, reported studies of EDW-based interplay effects were restricted to experimental methods for single-field, non-fractionated treatments. In this work, interplay effects have been investigated for single- and multiple-field EDW treatments using experimental and Monte Carlo (MC) methods.

Initially, interplay effects were studied experimentally for single-field, non-fractionated EDW treatments, using radiation dosimetry systems placed on a sinusoidally moving platform. A number of wedge angles (60°, 45° and 15°), field sizes (20 × 20, 10 × 10 and 5 × 5 cm²), amplitudes (10-40 mm in steps of 10 mm) and periods (2 s, 3 s, 4.5 s and 6 s) of tumour motion were analysed (using gamma analysis) for parallel and perpendicular motion (tumour and jaw motion parallel or perpendicular to each other). For parallel motion, both the amplitude and the period of tumour motion affect the interplay, most prominently where the collimator and tumour speeds become identical. For perpendicular motion, the amplitude of tumour motion is the dominant factor, whereas varying the period has no observable effect on the dose distribution. The wedge angle results suggest that a large wedge angle generates greater dose variation for both parallel and perpendicular motion, and a small field size combined with large tumour motion results in the loss of the wedged dose distribution in both cases. From these single-field measurements, the motion amplitude and period showing the poorest agreement between target motion and dynamic delivery were identified and used as the 'worst-case motion parameters'.

The experimental work was then extended to multiple-field fractionated treatments. A number of pre-existing multiple-field wedged lung plans were delivered to the radiation dosimetry systems using the worst-case motion parameters. Moreover, a four-field EDW lung plan (using a 4D CT data set) was delivered to the IMRT quality-control phantom with a dummy tumour insert over four fractions using the worst-case parameters, i.e. 40 mm amplitude and 6 s period. Gamma analysis of the film doses at 3%/3 mm indicates that the interplay effects did not average out in this particular study, with a gamma pass rate of 49%.

To enable Monte Carlo modelling of the problem, the DYNJAWS component module (CM) of the BEAMnrc user code, recently introduced to model dynamic wedges, was validated and automated. DYNJAWS was commissioned for 6 MV and 10 MV photon energies, and it is shown that this CM can accurately model the EDWs for a number of wedge angles and field sizes. The dynamic and step-and-shoot modes of the CM were compared for their accuracy in modelling the EDW, and the dynamic mode is shown to be more accurate. The DYNJAWS-specific input file, which specifies the probability of selection of a subfield and the respective jaw coordinates, was automated, simplifying the generation of BEAMnrc input files for DYNJAWS.

The commissioned DYNJAWS model was then used to study multiple-field EDW treatments with MC methods. The 4D CT data of an IMRT phantom with the dummy tumour were used to produce a set of Monte Carlo simulation phantoms, onto which the delivery of single- and multiple-field EDW treatments was simulated, covering a number of static and motion multiple-field EDW plans. The comparison of dose volume histograms (DVHs) and gamma volume histograms (GVHs) for four-field EDW treatments (with collimator and patient motion in the same direction) using small (15°) and large (60°) wedge angles indicates a greater mismatch between the static and motion cases for the large wedge angle.

Finally, to use gel dosimetry as a validation tool, a new technique called the 'zero-scan method' was developed for reading gel dosimeters with x-ray computed tomography (CT). It is shown that multiple scans of a gel dosimeter (in this case 360 scans) can be used to reconstruct a zero-scan image, which has a precision similar to that of an image obtained by averaging the CT images, but without the additional dose delivered by the repeated CT scans.

In summary, interplay effects have been studied for single- and multiple-field fractionated EDW treatments using experimental and Monte Carlo methods; the DYNJAWS component module of the BEAMnrc code has been validated, automated and used to study the interplay for multiple-field EDW treatments; and the zero-scan method, a new gel dosimetry readout technique, has been developed for reading gel images with x-ray CT without loss of precision or accuracy.
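As a hedged sketch of the zero-scan idea (the linear drift model and all numbers below are assumptions for illustration, not the published implementation), each CT scan adds dose that drifts the gel response, so fitting every pixel's value against scan index and extrapolating back to scan zero yields an image with averaging-like precision but without the drift that plain averaging bakes in.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans, shape = 360, (64, 64)
true_image = rng.normal(50.0, 5.0, shape)         # latent gel image (toy)
drift = 0.01                                      # assumed drift per scan
scans = np.stack([true_image + drift * k + rng.normal(0.0, 3.0, shape)
                  for k in range(n_scans)])       # noisy, drifting CT scans

k = np.arange(n_scans, dtype=float)
design = np.vstack([np.ones_like(k), k]).T        # per-pixel fit: value ~ a + b*k
coef, *_ = np.linalg.lstsq(design, scans.reshape(n_scans, -1), rcond=None)
zero_scan = coef[0].reshape(shape)                # intercept = extrapolated scan 0

print("plain-average error:", np.abs(scans.mean(0) - true_image).mean())
print("zero-scan error:    ", np.abs(zero_scan - true_image).mean())
```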

Relevance: 90.00%

Abstract:

Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within superexponential factors for adversaries running in subexponential time. We give CPA and CCA secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles towards realizing lattice-based attribute-based encryption (ABE).
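As a toy illustration of the LWE problem underlying the construction (parameters far too small to be secure, and unrelated to the paper's), an LWE sample is a pair (a, ⟨a, s⟩ + e mod q) for a secret s and small error e; recovering s, or distinguishing such pairs from uniform, is assumed hard.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 16, 97                  # dimension, number of samples, modulus
s = rng.integers(0, q, n)            # secret vector
A = rng.integers(0, q, (m, n))       # public uniformly random matrix
e = rng.integers(-2, 3, m)           # small error terms
b = (A @ s + e) % q                  # noisy inner products

# Without e, s is recoverable by Gaussian elimination; with it,
# telling (A, b) apart from uniform is the decision-LWE problem.
print(A.shape, b[:4])
```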

Relevance: 90.00%

Abstract:

NTRUEncrypt is a fast and practical lattice-based public-key encryption scheme, which has been standardized by IEEE, but until recently, its security analysis relied only on heuristic arguments. Recently, Stehlé and Steinfeld showed that a slight variant (that we call pNE) could be proven to be secure under chosen-plaintext attack (IND-CPA), assuming the hardness of worst-case problems in ideal lattices. We present a variant of pNE called NTRUCCA, that is IND-CCA2 secure in the standard model assuming the hardness of worst-case problems in ideal lattices, and only incurs a constant factor overhead in ciphertext and key length over the pNE scheme. To our knowledge, our result gives the first IND-CCA2 secure variant of NTRUEncrypt in the standard model, based on standard cryptographic assumptions. As an intermediate step, we present a construction for an All-But-One (ABO) lossy trapdoor function from pNE, which may be of independent interest. Our scheme uses the lossy trapdoor function framework of Peikert and Waters, which we generalize to the case of (k − 1)-of-k-correlated input distributions.
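As a toy illustration of the convolution ring at the heart of NTRUEncrypt (parameters illustrative only, not the scheme's), ring elements are polynomials in Z_q[x]/(x^N − 1) and multiplication is cyclic convolution of coefficient vectors:

```python
import numpy as np

N, q = 7, 41  # toy ring parameters

def ring_mul(f, g):
    """Multiply f and g in Z_q[x]/(x^N - 1) via cyclic convolution."""
    h = np.zeros(N, dtype=int)
    for i in range(N):
        for j in range(N):
            h[(i + j) % N] = (h[(i + j) % N] + int(f[i]) * int(g[j])) % q
    return h

f = np.array([1, 0, 40, 0, 1, 0, 0])  # 40 = -1 mod 41: ternary-style poly
g = np.array([0, 1, 1, 0, 40, 0, 1])
print(ring_mul(f, g))
```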

Relevance: 90.00%

Abstract:

Follow-the-Leader (FTL) is an intuitive sequential prediction strategy that guarantees constant regret in the stochastic setting, but has poor performance for worst-case data. Other hedging strategies have better worst-case guarantees but may perform much worse than FTL if the data are not maximally adversarial. We introduce the FlipFlop algorithm, which is the first method that provably combines the best of both worlds. As a stepping stone for our analysis, we develop AdaHedge, which is a new way of dynamically tuning the learning rate in Hedge without using the doubling trick. AdaHedge refines a method by Cesa-Bianchi, Mansour, and Stoltz (2007), yielding improved worst-case guarantees. By interleaving AdaHedge and FTL, FlipFlop achieves regret within a constant factor of the FTL regret, without sacrificing AdaHedge’s worst-case guarantees. AdaHedge and FlipFlop do not need to know the range of the losses in advance; moreover, unlike earlier methods, both have the intuitive property that the issued weights are invariant under rescaling and translation of the losses. The losses are also allowed to be negative, in which case they may be interpreted as gains.
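As a small demonstration of the trade-off the abstract describes (a sketch of plain FTL and fixed-rate Hedge, not the paper's AdaHedge or FlipFlop), the script below runs both on the classic worst-case sequence for FTL, two experts with alternating losses: FTL's regret grows linearly while Hedge's stays bounded.

```python
import numpy as np

K, T, eta = 2, 1000, 0.1                    # experts, rounds, Hedge rate

# alternating losses make FTL chase the wrong expert every round
losses = np.zeros((T, K))
losses[0::2, 0], losses[1::2, 1] = 1.0, 1.0
losses[0, 0] = 0.5                          # break the initial tie

cum = np.zeros(K)
ftl_loss = hedge_loss = 0.0
for l in losses:
    ftl_loss += l[np.argmin(cum)]           # FTL: follow the current leader
    w = np.exp(-eta * (cum - cum.min()))    # Hedge weights (stabilised)
    hedge_loss += w @ l / w.sum()
    cum += l

best = cum.min()
print(f"best expert {best:.1f}, FTL regret {ftl_loss - best:.1f}, "
      f"Hedge regret {hedge_loss - best:.1f}")
```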

Relevance: 90.00%

Abstract:

Chronic wounds, such as venous and diabetic leg ulcers, represent a significant health and financial burden to individuals and healthcare systems. In worst-case scenarios this condition may require the amputation of an affected limb, with significant impact on patient quality of life and health. Presently there are no clinical biochemical analyses used in the diagnosis and management of this condition; moreover, few biochemical therapies are accessible to patients. This presents a significant challenge to the efficient and efficacious treatment of chronic wounds by medical practitioners. A number of protein-centric investigations have analysed the wound environment and implicated a suite of molecular species predicted to be involved in the initiation or perpetuation of the condition. However, comprehensive proteomic investigation has yet to be applied to the analysis of chronic wounds for the identification of molecular diagnostic or prognostic markers of healing, or of therapeutic targets. This review examines clinical chronic wound research and recommends a path towards proteomic investigation for the discovery of medically significant targets. Additionally, the supplementary documents associated with this review provide the first comprehensive summary of protein-centric, small-molecule and elemental analyses in clinical chronic wound research.