136 results for Combining schemes
Abstract:
INTRODUCTION • Public bicycle share schemes have emerged as a method of increasing rates of bicycle riding. • The overwhelming majority of schemes have begun since 2005, taking advantage of various tracking and payment technologies that make short-term rental practical and affordable. • Very little research has been undertaken to determine their potentially broad impact on transport behaviour; consequently, it is difficult to understand the performance of these schemes in terms of reduced emissions and congestion, as well as possible increases in physical activity.
Abstract:
The US Securities and Exchange Commission requires registered management investment companies to disclose how they vote proxies relating to portfolio securities they hold. The primary purpose of this rule is to enable fund investors to monitor the role of institutional shareholders in the corporate governance practices of public companies. In Australia, despite reform proposals, there are no regulations requiring institutional investors to report proxy voting procedures and practices. There is little evidence of voluntary disclosure of proxy voting by Australian managed investment schemes in equities, indicating that there are costs involved in such disclosure.
Abstract:
Performance based planning is a form of planning regulation that is not well understood and the theoretical advantages of this type of planning are rarely achieved in practice. Normatively, this type of regulation relies on quantifiable, technically based performance standards designed to manage the effects of development; the standards provide certainty about the required level of performance while the means of achievement remains flexible. Few empirical studies have attempted to examine how performance based planning has been conceptualised and implemented in practice. Existing literature is predominantly anecdotal and consultant based (Baker et al. 2006) and has not sought to quantitatively examine how land use has been managed or determine how context influences implementation. The Integrated Planning Act 1997 (IPA) operated as Queensland’s principal planning legislation between March 1998 and December 2009. The IPA prevented Local Governments from prohibiting development or use, and the term ‘zone’ was absent from the legislation. While the IPA did not use the term performance based planning, the system is widely considered to be performance based in practice (e.g. Baker et al. 2006; Steele 2009a, 2009b). However, the degree to which the IPA and the planning system in Queensland is performance based is debated (e.g. Yearbury 1998; England 2004). Four research questions guided the research framework using Queensland as the case study. The questions sought to: determine if there is a common understanding of performance based planning; identify how performance based planning was expressed under the IPA; understand how performance based planning was implemented in plans; and explore the experiences of participants in the planning system. The research developed a performance adoption spectrum. The spectrum describes how performance based planning is implemented, ranging between pure and hybrid interpretations.
An ex-post evaluation of seventeen IPA plans sought to determine plan performativity within the conceptual spectrum. Land use was examined from the procedural dimension of performance (Assessment Tables) and the substantive dimension of performance (Codes). A documentary analysis and forty-one interviews supplemented the research. The analytical framework considered how context influenced performance based planning, including whether: the location of the local government affected land use management techniques; temporal variation in implementation exists; plan-making guidelines affected implementation; different perceptions of the concept exist; this type of planning applies to a range of spatial scales. Outcomes were viewed as the medium for determining the acceptability of development in Queensland, a significant departure from pure approaches found in the United States. Interviews highlighted the absence of plan-making direction in the IPA, which contributed to the confusion about the intended direction of the planning system and the myth that the IPA would guarantee a performance based system. A hybridised form of performance based planning evolved in Queensland which was dependent on prescriptive land use zones and specification of land use type, with some local governments going to extreme lengths to discourage certain activities in a predetermined manner. Context had varying degrees of influence on plan-making methods. Decision-making was found to be inconsistent, and the system created a range of unforeseen consequences including difficulties associated with land valuation, increased development speculation, and a less critical role for planners in court than under the previous planning system.
Abstract:
Ocean processes are complex and have high variability in both time and space. Thus, ocean scientists must collect data over long time periods to obtain a synoptic view of ocean processes and resolve their spatiotemporal variability. One way to perform these persistent observations is to utilise an autonomous vehicle that can remain on deployment for long time periods. However, such vehicles are generally underactuated and slow moving. A challenge for persistent monitoring with these vehicles is dealing with currents while executing a prescribed path or mission. Here we present a path planning method for persistent monitoring that exploits ocean currents to increase navigational accuracy and reduce energy consumption.
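The current-exploitation idea in the abstract above can be illustrated with simple vector geometry: choose a heading so that the vehicle's velocity plus the ocean current points along the desired track, letting the current do part of the work. This is a hypothetical sketch of the general principle, not the paper's planner; the function name, parameters and values are our own.

```python
import math

def heading_to_compensate(v_speed, current, track_angle):
    """Return a heading (radians) so that vehicle velocity plus the
    current (cx, cy) points along track_angle, or None if the current's
    cross-track component exceeds the vehicle's speed."""
    cx, cy = current
    # current component perpendicular to the desired track
    perp = -cx * math.sin(track_angle) + cy * math.cos(track_angle)
    if abs(perp) > v_speed:
        return None  # current too strong for this slow vehicle to hold the track
    # the vehicle must supply -perp across the track: v*sin(h - track) = -perp
    return track_angle + math.asin(-perp / v_speed)

# A 0.5 m/s northward current on an eastward track (angle 0), vehicle speed 1 m/s:
h = heading_to_compensate(1.0, (0.0, 0.5), 0.0)
print(round(h, 4))
```

With the values above, the resulting cross-track velocity (vehicle plus current) is zero, so the vehicle holds the prescribed path without wasting energy fighting the current downstream.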
Abstract:
The Balanced method was introduced as a class of quasi-implicit methods, based upon the Euler-Maruyama scheme, for solving stiff stochastic differential equations. We extend the Balanced method to introduce a class of stable strong order 1.0 numerical schemes for solving stochastic ordinary differential equations. We derive convergence results for this class of numerical schemes. The asymptotic stability of this class of schemes is illustrated and compared with contemporary schemes of strong order 1.0. We present some evidence on parametric selection with respect to minimising the error convergence terms. Furthermore we provide a convergence result for general Balanced style schemes of higher orders.
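For context, here is a minimal sketch of the base Euler-Maruyama step alongside one common form of a Balanced step (with non-negative control weights c0, c1) for the scalar SDE dX = aX dt + bX dW. The parameter values and weight choices below are illustrative assumptions, not the paper's schemes.

```python
import math
import random

def em_step(a, b, x, dt, dW):
    # plain Euler-Maruyama update for dX = a*X dt + b*X dW
    return x + a * x * dt + b * x * dW

def balanced_step(a, b, x, dt, dW, c0, c1):
    # Balanced update: X_{n+1} = X_n + drift + (c0*dt + c1*|dW|)(X_n - X_{n+1}),
    # solved for X_{n+1}; the control term damps instability on stiff problems.
    return x + (a * x * dt + b * x * dW) / (1.0 + c0 * dt + c1 * abs(dW))

random.seed(0)
dt, x_em, x_bal = 1e-3, 1.0, 1.0
for _ in range(1000):
    dW = random.gauss(0.0, math.sqrt(dt))  # Brownian increment over dt
    x_em = em_step(0.5, 0.2, x_em, dt, dW)
    x_bal = balanced_step(0.5, 0.2, x_bal, dt, dW, 1.0, 1.0)
print(round(x_em, 3), round(x_bal, 3))
```

On this non-stiff toy problem both trajectories stay close; the Balanced control terms matter when stiffness would otherwise drive the explicit scheme unstable.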
Abstract:
The benefits of applying tree-based methods to the purpose of modelling financial assets as opposed to linear factor analysis are increasingly being understood by market practitioners. Tree-based models such as CART (classification and regression trees) are particularly well suited to analysing stock market data which is noisy and often contains non-linear relationships and high-order interactions. CART was originally developed in the 1980s by medical researchers disheartened by the stringent assumptions applied by traditional regression analysis (Breiman et al. [1984]). In the intervening years, CART has been successfully applied to many areas of finance such as the classification of financial distress of firms (see Frydman, Altman and Kao [1985]), asset allocation (see Sorensen, Mezrich and Miller [1996]), equity style timing (see Kao and Shumaker [1999]) and stock selection (see Sorensen, Miller and Ooi [2000])...
Abstract:
Proving security of cryptographic schemes, which normally are short algorithms, has been known to be time-consuming and easy to get wrong. Using computers to analyse their security can help to solve the problem. This thesis focuses on methods of using computers to verify security of such schemes in cryptographic models. The contributions of this thesis to automated security proofs of cryptographic schemes can be divided into two groups: indirect and direct techniques. Regarding indirect ones, we propose a technique to verify the security of public-key-based key exchange protocols. Security of such protocols can already be proved automatically using an existing tool, but only in a non-cryptographic model. We show that under some conditions, security in that non-cryptographic model implies security in a common cryptographic one, the Bellare-Rogaway model [11]. The implication enables one to use that existing tool, which was designed to work with a different type of model, in order to achieve security proofs of public-key-based key exchange protocols in a cryptographic model. For direct techniques, we have two contributions. The first is a tool to verify Diffie-Hellman-based key exchange protocols. In that work, we design a simple programming language for specifying Diffie-Hellman-based key exchange algorithms. The language has a semantics based on a cryptographic model, the Bellare-Rogaway model [11]. From the semantics, we build a Hoare-style logic which allows us to reason about the security of a key exchange algorithm, specified as a pair of initiator and responder programs. The other contribution to the direct technique line is on automated proofs for computational indistinguishability. Unlike the two other contributions, this one does not treat a fixed class of protocols. We construct a generic formalism which allows one to model the security problem of a variety of classes of cryptographic schemes as the indistinguishability between two pieces of information.
We also design and implement an algorithm for solving indistinguishability problems. Compared to the two other works, this one covers significantly more types of schemes, but consequently, it can verify only weaker forms of security.
Abstract:
This paper outlines a novel approach for modelling semantic relationships within medical documents. Medical terminologies contain a rich source of semantic information critical to a number of techniques in medical informatics, including medical information retrieval. Recent research suggests that corpus-driven approaches are effective at automatically capturing semantic similarities between medical concepts, thus making them an attractive option for accessing semantic information. Most previous corpus-driven methods only considered syntagmatic associations. In this paper, we adapt a recent approach that explicitly models both syntagmatic and paradigmatic associations. We show that the implicit similarity between certain medical concepts can only be modelled using paradigmatic associations. In addition, the inclusion of both types of associations overcomes the sensitivity to the training corpus experienced by previous approaches, making our method both more effective and more robust. This finding may have implications for researchers in the area of medical information retrieval.
Abstract:
Objective: The nature of contemporary cancer therapy means that patients are faced with difficult treatment decisions about surgery, chemotherapy and radiotherapy. For some, this process may also involve consideration of therapies that sit outside the biomedical approach to cancer treatment, in our research, traditional Chinese medicine (TCM). Thus, it is important to explore how cancer patients in Taiwan incorporate TCM into their cancer treatment journey. This paper aims to explore the patterns of combining the use of TCM and Western medicine in the cancer treatment journey of Taiwanese people with cancer. Methods: The sampling was purposive and the data were collected through in-depth interviews. Data collection occurred over an eleven-month period. The research was grounded in the premises of symbolic interactionism and adopted the methods of grounded theory. Twenty-four participants who were patients receiving cancer treatment were recruited from two health care settings in Taiwan. Results: The study findings suggest that perceptions of health and illness are mediated through ongoing interactions with different forms of therapy. The participants in this study had a clear focus on “process and patterns of using TCM and Western medicine”. Further, the themes ‘different importance in Western medicine and TCM’, ‘taken for granted to use TCM’, ‘each has specialized skills in Western medicine and TCM’ and ‘different symptoms use different approaches (Western medicine or TCM)’ may make explicit how the participants in this study see CAM and Western medicine. Conclusions/Implications for practice: The descriptive frame of the study suggests that TCM and Western medicine occupy quite distinct domains in terms of decision making over their use. People used TCM based on interpretations of the present and against a background of an enduring cultural legacy grounded in Chinese philosophical beliefs about health and healthcare.
The increasingly popular term ‘integrative medicine’ obscures the complex contexts of the patterns of use of both therapeutic modalities. It is this latter point that is worthy of further exploration.
Abstract:
Infrastructure forms a vital component in supporting today’s way of life and has a significant role or impact on economic, environmental and social outcomes of the region around it. The design, construction and operation of such assets are a multi-billion dollar industry in Australia alone. Another issue that will play a major role in our way of life is that of climate change and the greater concept of sustainability. With limited resources and a changing natural world it is necessary for infrastructure to be developed and maintained in a manner that is sustainable. In order to achieve infrastructure sustainability in operations it is necessary for there to be: a sustainability assessment scheme that provides a scientifically sound and realistic approach to measuring an asset’s level of sustainability; and systems and tools to support the making of decisions that result in sustainable outcomes by providing feedback in a timely manner. Having these in place will then help drive the consideration of sustainability during the decision making process for infrastructure operations and maintenance. In this paper we provide two main contributions: a comparison and review of sustainability assessment schemes for infrastructure and their suitability for use in the operations phase; and a review of decision support systems/tools in the area of infrastructure sustainability in operations. For this paper, sustainability covers not just the environment, but also finance/economic and societal/community aspects as well. This is often referred to as the Triple Bottom Line and forms one of the three dimensions of corporate sustainability [Stapledon, 2004].
Abstract:
Secure communications in distributed Wireless Sensor Networks (WSN) operating under adversarial conditions necessitate efficient key management schemes. In the absence of a priori knowledge of post-deployment network configuration and due to limited resources at sensor nodes, key management schemes cannot be based on post-deployment computations. Instead, a list of keys, called a key-chain, is distributed to each sensor node before the deployment. For secure communication, either two nodes should have a key in common in their key-chains, or they should establish a key through a secure-path on which every link is secured with a key. We first provide a comparative survey of well known key management solutions for WSN. Probabilistic, deterministic and hybrid key management solutions are presented, and they are compared based on their security properties and resource usage. We provide a taxonomy of solutions, and identify trade-offs in them to conclude that there is no one-size-fits-all solution. Second, we design and analyze deterministic and hybrid techniques to distribute pair-wise keys to sensor nodes before the deployment. We present novel deterministic and hybrid approaches based on combinatorial design theory and graph theory for deciding how many and which keys to assign to each key-chain before the sensor network deployment. Performance and security of the proposed schemes are studied both analytically and computationally. Third, we address the key establishment problem in WSN, which requires that key agreement algorithms without authentication be executed over a secure-path. The length of the secure-path impacts the power consumption and the initialization delay for a WSN before it becomes operational. We formulate the key establishment problem as a constrained bi-objective optimization problem, break it into two sub-problems, and show that they are both NP-Hard and MAX-SNP-Hard.
Having established inapproximability results, we focus on addressing the authentication problem that prevents key agreement algorithms from being used directly over a wireless link. We present a fully distributed algorithm where each pair of nodes can establish a key with authentication by using their neighbors as the witnesses.
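The key-chain and secure-path notions used above can be made concrete: two nodes can communicate directly if their pre-distributed key-chains intersect, and otherwise must relay over a path where every hop shares a key. A minimal sketch (function names and toy key-chains are ours, not the thesis') finds the shortest secure path with breadth-first search:

```python
from collections import deque

def shares_key(chain_a, chain_b):
    # two nodes can secure a link iff their key-chains intersect
    return bool(set(chain_a) & set(chain_b))

def secure_path(chains, src, dst):
    """BFS over the 'shares a key' graph; returns a node list or None."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in range(len(chains)):
            if v not in prev and shares_key(chains[u], chains[v]):
                prev[v] = u
                queue.append(v)
    return None  # no secure path exists

# Toy deployment: nodes 0 and 2 share no key, but both share one with node 1.
chains = [{1, 2, 3}, {3, 4}, {4, 5}]
print(secure_path(chains, 0, 2))
```

Minimising path length matters here because, as the abstract notes, longer secure-paths increase power consumption and initialization delay.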
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than applying a two-step process of providing confidentiality for a message by encrypting the message, and in a separate pass providing integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided by either stream ciphers with built in authentication mechanisms or block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, currently there is no existing framework for the analysis of AE stream ciphers that analyses these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. This thesis analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this, in the context of authenticated encryption. The thesis has four main contributions as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which encryption and authentication processes take place.
The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence that is used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers; namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure from this type of algebraic attack. We conclude that using a key-dependent SBox in the NLF twice, and using two different SBoxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model; namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated. It is shown that using a nonlinear filter in the accumulation process of the input message when either the input message or the initial states of the register is unknown prevents forgery attacks based on collisions. The last contribution is a new general matrix based model for MAC generation where the input message is injected indirectly into the internal state.
This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model; namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
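For contrast with the single-pass AE stream ciphers analysed above, the two-step composition mentioned at the start of the abstract (encrypt, then MAC in a separate pass, one of the orderings in Bellare and Namprempre's classification) can be sketched as follows. The keystream here is a toy SHA-256 counter construction for illustration only; it is none of the ciphers studied in the thesis.

```python
import hashlib
import hmac

def keystream(key, iv, n):
    # toy illustrative keystream: SHA-256 in counter mode (NOT a real stream cipher)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_then_mac(enc_key, mac_key, iv, msg):
    # pass 1: encrypt by XOR with the keystream; pass 2: MAC the ciphertext
    ct = bytes(m ^ k for m, k in zip(msg, keystream(enc_key, iv, len(msg))))
    tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    return ct, tag

def decrypt_verify(enc_key, mac_key, iv, ct, tag):
    expected = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # integrity failure: reject before decrypting
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, iv, len(ct))))

iv = b"\x00" * 8
ct, tag = encrypt_then_mac(b"ekey", b"mkey", iv, b"attack at dawn")
```

Note this generic composition uses two keys and two passes over the data; the efficiency argument for AE stream ciphers is precisely that a single pass with shared state can provide both properties.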
Abstract:
The purpose of this paper is to explore the potential and value of positive management practices to address the pain and suffering that frequently accompanies periods of large-scale austerity in public sectors. Public managers are increasingly asked to implement severe austerity measures and at the same time to build service delivery capacity; these are contradictory tasks. We draw on and further develop Cameron’s (2012) model of Positive Leadership to identify seven positive shared leadership practices that, while not eliminating the pain and suffering associated with austerity measures, at least offer some scope, compared to traditional public management practices, for managing the austerity-build capacity duality in ways that respond to those affected with compassion and respect. We draw on published reports of a large-scale austerity program to highlight the potential and value of positive shared leadership practices for creating what we refer to as positive organisational austerity. The paper contributes to the literature on public management response to crises in two main ways. First, the paper introduces and develops the concept of positive shared leadership (Cameron, 2012; Carson et al. 2007) as a way of managing in austerity. Second, the paper introduces the concept of positive organisational austerity as a means of highlighting a reorientation in thinking about austerity measures and their implementation.
Abstract:
Information that is elicited from experts can be treated as `data', and so can be analysed using a Bayesian statistical model to formulate a prior model. Typically, methods for encoding a single expert's knowledge have been parametric, constrained by the extent of an expert's knowledge and energy regarding a target parameter. Interestingly, these methods have often been deterministic, in that all elicited information is treated at `face value', without error. Here we sought a parametric and statistical approach for encoding assessments from multiple experts. Our recent work proposed and demonstrated the use of a flexible hierarchical model for this purpose. In contrast to previous mathematical approaches like linear or geometric pooling, our new approach accounts for several sources of variation: elicitation error, encoding error and expert diversity. Of interest are the practical, mathematical and philosophical interpretations of this form of hierarchical pooling (which is both statistical and parametric), and how it fits within the subjective Bayesian paradigm. Case studies from a bioassay and project management (on PhDs) are used to illustrate the approach.
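The linear and geometric pooling baselines that the abstract contrasts with hierarchical pooling can be shown in a few lines for a single event probability. Equal expert weights and the probability values below are our illustrative assumptions.

```python
import math

def linear_pool(ps):
    # equally weighted arithmetic mean of the experts' probabilities
    return sum(ps) / len(ps)

def geometric_pool(ps):
    # equally weighted log-linear pool: normalised product of p^(1/n)
    n = len(ps)
    num = math.prod(p ** (1 / n) for p in ps)
    den = num + math.prod((1 - p) ** (1 / n) for p in ps)
    return num / den

# Three hypothetical experts assess the probability of the same event:
experts = [0.6, 0.7, 0.9]
print(round(linear_pool(experts), 4), round(geometric_pool(experts), 4))
```

Both pools are deterministic point combinations, treating each assessment at face value; the hierarchical approach described above instead models elicitation error, encoding error and expert diversity as explicit sources of variation.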