918 results for rational pair
Abstract:
Heparan sulfate mimetics, which we have called the PG500 series, have been developed to target the inhibition of both angiogenesis and heparanase activity. This series extends the technology underpinning PI-88, a mixture of highly sulfated oligosaccharides which reached Phase III clinical development for hepatocellular carcinoma. Advances in the chemistry of the PG500 series provide numerous advantages over PI-88. These new compounds are fully sulfated, single entity oligosaccharides attached to a lipophilic moiety, which have been optimized for drug development. The rational design of these compounds has led to vast improvements in potency compared to PI-88, based on in vitro angiogenesis assays and in vivo tumor models. Based on these and other data, PG545 has been selected as the lead clinical candidate for oncology and is currently undergoing formal preclinical development as a novel treatment for advanced cancer.
Abstract:
Current regulatory requirements on data privacy make it increasingly important for enterprises to be able to verify and audit their compliance with their privacy policies. Traditionally, a privacy policy is written in a natural language. Such policies inherit the potential ambiguity, inconsistency and misinterpretation of natural text. Hence, formal languages are emerging to allow a precise specification of enforceable privacy policies that can be verified. The EP3P language is one such formal language. An EP3P privacy policy of an enterprise consists of many rules. Given the semantics of the language, there may exist some rules in the ruleset which can never be used; these rules are referred to as redundant rules. Redundancies adversely affect privacy policies in several ways. Firstly, redundant rules reduce the efficiency of operations on privacy policies. Secondly, they may misdirect the policy auditor when determining the outcome of a policy. Therefore, in order to address these deficiencies it is important to identify and resolve redundancies. This thesis introduces the concept of a minimal privacy policy – a policy that is free of redundancy. The essential component for maintaining the minimality of privacy policies is determining the effects of the rules on each other. Hence, redundancy detection and resolution frameworks are proposed. Pair-wise redundancy detection is the central concept in these frameworks: it suggests a pair-wise comparison of the rules in order to detect redundancies. In addition, the thesis introduces a policy management tool that assists policy auditors in performing several operations on an EP3P privacy policy while maintaining its minimality. Formal results comparing alternative notions of redundancy, and how these would affect the tool, are also presented.
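The pair-wise detection at the heart of these frameworks can be sketched in miniature. The sketch below is illustrative only: it models a rule as a ruling over set-valued (users, data, purposes) scopes, and `covers` and `minimal_policy` are hypothetical names, not part of EP3P, whose rules also carry conditions and obligations that a real detector would have to account for.

```python
from itertools import permutations

def covers(r1, r2):
    """r1 makes r2 redundant if r1 yields the same ruling over a
    superset of r2's (users, data, purposes) scope."""
    return (r1["ruling"] == r2["ruling"]
            and r2["users"] <= r1["users"]
            and r2["data"] <= r1["data"]
            and r2["purposes"] <= r1["purposes"])

def minimal_policy(rules):
    """Repeatedly drop any rule covered by another remaining rule,
    leaving a redundancy-free (minimal) ruleset."""
    kept = list(rules)
    changed = True
    while changed:
        changed = False
        for a, b in permutations(kept, 2):
            if covers(a, b):
                kept.remove(b)
                changed = True
                break
    return kept
```

For example, a rule allowing nurses access to records for treatment is redundant next to a rule allowing both doctors and nurses the same access, and the pair-wise check removes it.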
Abstract:
This paper is the second in a pair that Lesh, English, and Fennewald will be presenting at ICME TSG 19 on Problem Solving in Mathematics Education. The first paper describes three shortcomings of past research on mathematical problem solving. The first shortcoming can be seen in the fact that knowledge has not accumulated – in fact it has atrophied significantly during the past decade. Unsuccessful theories continue to be recycled and embellished. One reason for this is that researchers generally have failed to develop the research tools needed to reliably observe, document, and assess the development of the concepts and abilities that they claim to be important. The second shortcoming is that existing theories and research have failed to make it clear how concept development (or the development of basic skills) is related to the development of problem solving abilities – especially when attention is shifted beyond word problems found in school to the kinds of problems found outside of school, where the requisite skills and even the questions to be asked might not be known in advance. The third shortcoming has to do with inherent weaknesses in observational studies and teaching experiments – and the assumption that a single grand theory should be able to describe all of the conceptual systems, instructional systems, and assessment systems that are strongly molded and shaped by the same theoretical perspectives being used to develop them. Therefore, this paper will describe theoretical perspectives and methodological tools that are proving effective in combating the preceding kinds of shortcomings. We refer to our theoretical framework as models & modeling perspectives (MMP) on problem solving (Lesh & Doerr, 2003), learning, and teaching. One of the main methodologies of MMP is called multi-tier design studies (MTD).
Abstract:
Psychologists investigating dreams in non-Western cultures have generally not considered the meanings of dreams within the unique meaning-structure of the person in his or her societal context. The majority of dream studies in African societies are no exception. Researchers approaching dreams within rural Xhosa and Zulu speaking societies have either adopted an anthropological or a psychodynamic orientation. The latter approach particularly imposes a Western perspective in the interpretation of dream material. There have been no comparable studies of dream interpretation among urban blacks participating in the African Independent Church Movement. The present study focuses on the rural Xhosa speaking people and the urban black population who speak one of the Nguni languages and identify with the African Independent Church Movement. The study is concerned with understanding the meanings of dreams within the cultural context in which they occur. The specific aims of the study are: 1. To explicate the indigenous system of dream interpretation as revealed by acknowledged dream experts. 2. To examine the commonalities and the differences between the interpretation of dreams in two groups, drawn from a rural and urban setting respectively. 3. To elaborate upon the life-world of the participants by the interpretations gained from the above investigation. One hundred dreams and interpretations are collected from two categories of participants referred to as the Rural Group and the Urban Group. The Rural Group is made up of amagqira [traditional healers] and their clients, while the Urban Group consists of prophets and members of the African Independent Churches. Each group includes acknowledged dream experts. A phenomenological methodology is adopted in explicating the data. The methodological procedure involves a number of rigorous stages of explication whereby the original data is reduced to Constituent Profiles leading to the construction of a Thematic Index File.
By searching and reflecting upon the data, interpretative themes are identified. These themes are explicated to provide a rigorous description of the interpretative-reality of each group. Themes explicated within the Rural Group are: the physiognomy of the dreamer's life-world as revealed by ithongo, the interpretation of ithongo as revealed through action, the dream relationship as an anticipatory mode-of-existence, iphupha as disclosing a vulnerable mode-of-being, human bodiliness as revealed in dream interpretations and the legitimation of the interpretative-reality within the life-world. Themes explicated within the Urban Group are: the physiognomy of the dreamer's life-world revealed in their dream-existence, the interpretative-reality revealed through the enaction of dreams, tension between the newer Christian-based cosmology and the traditional cultural-based cosmology, a moral imperative, prophetic perception and human bodiliness, as revealed in dream interpretations and the legitimation of the interpretative-reality within the life-world. The essence of the interpretative-reality of both groups is very similar and is expressed in the notion of relatedness to a cosmic mode-of-being. The cosmic mode-of-being includes a numinous dimension which is expressed through divine presence in the form of ancestors, Holy Spirit or God. These notions cannot be apprehended by theoretical constructs alone but may be grasped and given form in meaning-disclosing intuitions which are expressed in the life-world in terms of bodiliness, revelatory knowledge, action and healing. Some differences between the two groups are evident and reveal some conflict between the monotheistic Christian cosmology and the traditional cosmology. Unique aspects of the interpretative-reality of the Urban Group are expressed in terms of difficulties in the urban social environment and the notion of a moral imperative.
It is observed that cultural self-expression based upon traditional ideas continues to play a significant role in the urban environment. The apparent conflict revealed between the respective cosmologies underlies an integration of the traditional meanings with Christian concepts. This finding is consistent with the literature suggesting that the African Independent Church is a syncretic movement. The life-world is based upon the immediate and vivid experience of the numinous as revealed in the dream phenomenon. The participants' approach to dreams is not based upon an explicit theory, but upon an immediate and pathic understanding of the dream phenomenon. The understanding is based upon the interpreter's concrete understanding of the life-world, which includes the possibility of cosmic integration and continuity between the personal and transpersonal realms of being. The approach is characterized as an expression of man's primordial attunement with the cosmos. The approach of the participants to dreams may not be consistent with a Western rational orientation, but nevertheless, it is a valid approach. The validity is based upon the immediate life-world of experience which is intelligible, coherent, and above all, meaning-giving in revealing life-possibility within the context of human existence.
Abstract:
This study aimed to develop and assess the reliability and validity of a pair of self-report questionnaires to measure self-efficacy and expectancy associated with benzodiazepine use, the Benzodiazepine Refusal Self-Efficacy Questionnaire (BRSEQ) and the Benzodiazepine Expectancy Questionnaire (BEQ). The internal structure of the questionnaires was established by principal component analysis (PCA) in a sample of 155 respondents, and verified by confirmatory factor analyses (CFA) in a second independent sample (n=139) using structural equation modeling. The PCA of the BRSEQ resulted in a 16-item, 4-factor scale, and the BEQ formed an 18-item, 2-factor scale. Both scales were internally reliable. CFA confirmed these internal structures and reduced the questionnaires to a 14-item self-efficacy scale and a 12-item expectancy scale. Lower self-efficacy and higher expectancy were moderately associated with higher scores on the SDS-B. The scales provide reliable measures for assessing benzodiazepine self-efficacy and expectancies. Future research will examine the utility of the scales in prospective prediction of benzodiazepine cessation.
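As a rough illustration of the PCA step, the sketch below extracts components from an item correlation matrix. The eigenvalue-greater-than-one retention rule is a common convention assumed here, the abstract does not state which criterion was used, and `principal_components` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def principal_components(X):
    """Eigendecomposition of the item correlation matrix.  Returns the
    eigenvalues (descending), the unrotated component loadings, and the
    number of components retained under the eigenvalue > 1 convention."""
    R = np.corrcoef(X, rowvar=False)              # items are columns
    eigvals, eigvecs = np.linalg.eigh(R)          # ascending order
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs * np.sqrt(np.clip(eigvals, 0.0, None))
    return eigvals, loadings, int(np.sum(eigvals > 1.0))
```

On simulated responses with two underlying factors, the routine recovers two components whose eigenvalues exceed one, mirroring how a factor structure like the BEQ's 2-factor solution is identified.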
Abstract:
The title of this book, Hard Lessons: Reflections on Crime Control in Late Modernity, contains a number of clues about its general theoretical direction. It is a book concerned, first and foremost, with the vagaries of crime control in western neo-liberal and English-speaking countries. More specifically, Hard Lessons draws attention to a number of examples in which discrete populations – those who have in one way or another offended against the criminal law – have become the subjects of various forms of state intervention, regulation and control. We are concerned most of all with the ways in which recent criminal justice policies and practices have resulted in what are variously described as unintended consequences, unforeseen outcomes, unanticipated results, counter-productive effects or negative side effects. At their simplest, such terms refer to the apparent gulf between intention and outcome; they often form the basis for a considerable amount of policy reappraisal, soul searching and even nihilistic despair among the mandarins of crime control. Unintended consequences can, of course, be both positive and negative. Occasionally, crime control measures may result in beneficial outcomes, such as the use of DNA to acquit wrongly convicted prisoners. Generally, however, unforeseen effects tend to be negative, entirely counterproductive, or directly opposite to what was originally intended. All this, of course, presupposes the sort of rational, well-meaning and transparent policy-making process so beloved by liberal social policy theorists. Yet, as Judith Bessant points out in her chapter, this view of policy formulation tends to obscure the often covert, regulatory and downright malevolent intentions contained in many government policies and practices. Indeed, history is replete with examples of governments seeking to mask their real aims from a prying public eye.
Denials and various sorts of ‘techniques of neutralisation’ serve to cloak the real or ‘underlying’ aims of the powerful (Cohen 2000). The latest crop of ‘spin doctors’ and ‘official spokespersons’ has ensured that the process of governmental obfuscation, distortion and concealment remains deeply embedded in neo-liberal forms of governance. There is little new or surprising in this; nor should we be shocked when things ‘go wrong’ in the domain of crime control, since many unintended consequences are, more often than not, quite predictable. Prison riots, high rates of recidivism and breaches of supervision orders, expansion rather than contraction of control systems, laws that create the opposite of what was intended – all these are normative features of western crime control. Indeed, without the deep fault lines running between policy and outcome it would be hard to imagine what many policy makers, administrators and practitioners would do: their day-to-day work practices (and incomes) are directly dependent upon emergent ‘service delivery’ problems. Despite recurrent howls of official anguish and occasional despondency, it is apparent that those involved in propping up the apparatus of crime control have a vested interest in ensuring that policies and practices remain in an enduring state of review and reform.
Abstract:
This paper is a deductive theoretical enquiry into the flow of effects from the geometry of price bubbles/busts, to price indices, to pricing behaviours of sellers and buyers, and back to price bubbles/busts. The intent of the analysis is to suggest analytical approaches to identify the presence, maturity, and/or sustainability of a price bubble. We present a pricing model to emulate market behaviour, including numeric examples and charts of the interaction of supply and demand. The model extends myopic (single- and multi-period) backward-looking rational expectations into dynamic market solutions to demonstrate how buyers and sellers interact to affect supply and demand, and to show how capital gain expectations can be a destabilising influence – i.e. the lagged effects of past price gains can drive the market price away from long-run market-worth. Investing based on the outputs of past price-based valuation models appears to be more of a game-of-chance than a sound investment strategy.
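A toy version of the lagged-expectations mechanism can make the destabilising influence concrete. This is not the paper's model: the update rule, the `revert` and `momentum` parameters, and all numbers below are illustrative assumptions only.

```python
def simulate_prices(periods=60, worth=100.0, revert=0.9, momentum=0.6,
                    p0=100.0, p1=101.0):
    """Toy backward-looking expectations model: each period the price is
    pulled toward long-run market-worth (revert) but also pushed by the
    lagged capital gain (momentum), so past gains feed future prices."""
    prices = [p0, p1]
    for _ in range(periods - 2):
        lagged_gain = prices[-1] - prices[-2]        # last period's capital gain
        nxt = prices[-1] + revert * (worth - prices[-1]) + momentum * lagged_gain
        prices.append(nxt)
    return prices
```

With these parameter choices the momentum term makes the price overshoot its long-run worth in both directions before mean reversion damps the oscillation, a small-scale analogue of a bubble/bust cycle around market-worth.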
Abstract:
The movement toward evidence-based practice in psychology and medicine should offer few problems in cognitive-behavior therapies because it is consistent with the principles by which they have been developed and disseminated. However, the criteria for assessing empirical status, including the heavy emphasis on manualized treatments, need close examination. A possible outcome of the evidence-based movement would be to focus on the application of manualized treatments in both training and clinical practice; problems with that approach are discussed. Commitment to evidence-based treatment should also include comparisons between psychological and pharmacological interventions, so that rational health care decisions can be made. Psychologists should not be afraid of following the evidence, even when it supports treatments that are not cognitive-behavioral in stated orientation. Such results should be taken as an opportunity for theoretical development and new empirical inquiry rather than be a cause for concern.
Abstract:
This paper describes the development and preliminary experimental evaluation of a vision-based docking system to allow an Autonomous Underwater Vehicle (AUV) to identify and attach itself to a set of uniquely identifiable targets. These targets, docking poles, are detected using Haar rectangular features and rotation of integral images. A non-holonomic controller allows the Starbug AUV to orient itself with respect to the target whilst maintaining visual contact during the manoeuvre. Experimental results show the proposed vision system is capable of robustly identifying a pair of docking poles simultaneously in a variety of orientations and lighting conditions. Experiments in an outdoor pool show that this vision system enables the AUV to dock autonomously from a distance of up to 4m, even in relatively low visibility.
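The integral-image trick that makes Haar rectangular features cheap can be sketched as follows. This shows only the upright case, not the rotated integral images the paper also uses, and the function names are illustrative.

```python
import numpy as np

def integral_image(img):
    """ii[r, c] holds the sum of all pixels above and left of (r, c), inclusive."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum over any axis-aligned rectangle from four table lookups,
    i.e. O(1) per Haar feature regardless of rectangle size."""
    pad = np.pad(ii, ((1, 0), (1, 0)))   # zero row/column avoids edge special-cases
    r0, c0 = top, left
    r1, c1 = top + height, left + width
    return pad[r1, c1] - pad[r0, c1] - pad[r1, c0] + pad[r0, c0]
```

A Haar feature is then just a difference of two or three such rectangle sums, which is what makes dense multi-scale detection feasible on an embedded vehicle.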
Abstract:
The selection criteria for contractor pre-qualification are characterized by the co-existence of both quantitative and qualitative data. The qualitative data is non-linear, uncertain and imprecise. An ideal decision support system for contractor pre-qualification should have the ability to handle both quantitative and qualitative data, and to map the complicated non-linear relationships among the selection criteria, such that rational and consistent decisions can be made. In this research paper, an artificial neural network model was developed to assist public clients in identifying suitable contractors for tendering. The pre-qualification criteria (variables) were identified for the model. One hundred and twelve real pre-qualification cases were collected from civil engineering projects in Hong Kong, and eighty-eight hypothetical pre-qualification cases were also generated according to the “If-then” rules used by professionals in the pre-qualification process. Each pre-qualification case consisted of input ratings for candidate contractors’ attributes and their corresponding pre-qualification decisions. The training of the neural network model was accomplished using a purpose-developed program, in which a conjugate gradient descent algorithm was incorporated to improve the learning performance of the network. Cross-validation was applied to estimate the generalization errors based on “re-sampling” of the training pairs. The results of the analysis fully comply with current practice among public developers in Hong Kong. The case studies show that the artificial neural network model is suitable for mapping the complicated non-linear relationship between contractors’ attributes and their corresponding pre-qualification (disqualification) decisions, and that it is an ideal alternative for performing the contractor pre-qualification task.
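The cross-validation scheme mentioned above, re-sampling the training pairs to estimate generalisation error, can be sketched as a k-fold split. This is a generic illustration; the fold count and the `k_fold_splits` helper are assumptions, not details from the paper.

```python
import random

def k_fold_splits(n_samples, k=5, seed=0):
    """Shuffle the sample indices and partition them into k folds; each
    fold serves once as the held-out set while the remaining folds form
    the training pairs, so every case is used for both roles."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sorted(set(idx) - set(fold)), sorted(fold)) for fold in folds]
```

Averaging the network's error over the k held-out folds gives the re-sampled estimate of generalisation error that the paper relies on.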
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables.
Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions by which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
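Of the methods compared, the basic Lanczos approximation to f(A)b is the simplest to sketch: f(A)b ≈ ||b|| V f(T) e_1, where V and the tridiagonal T come from the Lanczos recurrence. The code below is a generic textbook-style implementation (no reorthogonalisation or restarting), `lanczos_fA_b` is an illustrative name rather than the thesis code, and f(T)e_1 is evaluated through an eigendecomposition of the small matrix T.

```python
import numpy as np

def lanczos_fA_b(A, b, f, m=30):
    """m-step Lanczos approximation of f(A) b for symmetric A."""
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0:            # invariant subspace found: exact answer
                m = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])   # f(T) e1 via eigendecomposition
    return np.linalg.norm(b) * V[:, :m] @ fT_e1
```

Only matrix-vector products with A are needed, which is what makes the approach attractive for the large sparse precision matrices and discretised Laplacians the thesis targets; with m equal to the full dimension the approximation is exact.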
Abstract:
The social construction of sexuality over the past one hundred and fifty years has created a dichotomy between heterosexual and non-heterosexual identities that essentially positions the former as “normal” and the latter as deviant. Even Kinsey’s and others’ work on the continuum of sexualities did little to alter the predominantly heterosexist perception of the non-heterosexual as “other” (Kinsey, Pomeroy and Martin 2007; Esterberg 2006; Franceour and Noonan 2007). Some political action and academic work is beginning to challenge such perceptions. Even some avenues of social interaction, such as the recent proliferation of online communities, may also challenge such views, or at least contribute to their being rethought in some ways. This chapter explores a specific kind of online community devoted to fan fiction, specifically homoerotic – or what is known colloquially as “slash” – fan fiction. Fan fiction is fiction, published on the internet, and written by fans of well-known books and television shows, using the characters to create new and varied plots. “Slash” refers to the pairing of two of the male characters in a romantic relationship, and the term comes from the punctuation mark dividing the named pair as, for example, Spock/Kirk from the Star Trek television series. Although there are some slash fan-fiction stories devoted to female-female relationships – called “femmeslash” – the term “slash” generally refers to male-male relationships, and will be utilized throughout this chapter, given that the research discussed focuses on communities centered around one such male pairing.
Abstract:
The consistently high failure rate in Queensland University of Technology’s introductory programming subject reflects a similar dilemma facing other universities worldwide. Experiments were conducted to quantify the effectiveness of collaborative learning on introductory level programming students over a number of semesters, replicating previous studies in this area. A selection of workshops in the introductory programming subject required students to problem-solve and program in pairs, mimicking the eXtreme Programming concept of pair programming. The failure rate for the subject fell from what had been an average of 30% since 2003 (with a high of 41% in 2006), to just 5% for those students who worked consistently in pairs.
Abstract:
Dragon is a word-based stream cipher. It was submitted to the eSTREAM project in 2005 and has advanced to Phase 3 of the software profile. This paper discusses the Dragon cipher from three perspectives: design, security analysis and implementation. The design of the cipher incorporates a single word-based non-linear feedback shift register and a non-linear filter function with memory. This state is initialized with 128- or 256-bit key-IV pairs. Each clock of the stream cipher produces 64 bits of keystream, using simple operations on 32-bit words. This provides the cipher with a high degree of efficiency in a wide variety of environments, making it highly competitive relative to other symmetric ciphers. The components of Dragon were designed to resist all known attacks. Although the design has been open to public scrutiny for several years, the only published attacks to date are distinguishing attacks which require keystream lengths greatly exceeding the stated 2^64-bit maximum permitted keystream length for a single key-IV pair.
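To illustrate the general shape of a word-based NLFSR stream cipher, and emphatically not Dragon's actual update or filter function, consider the toy register below. The tap positions, the multiplier constant and the output rule are all invented for illustration and have no cryptographic strength.

```python
MASK32 = 0xFFFFFFFF

def toy_nlfsr_keystream(state, n_words):
    """Clock a word-based non-linear feedback shift register: mix two
    taps non-linearly into a feedback word, shift it into the register,
    and emit one 32-bit output word per clock.  (Illustrative only --
    this is not Dragon's F function or state size.)"""
    assert len(state) >= 6, "toy register uses taps at positions 0, 3 and 5"
    state = [s & MASK32 for s in state]
    out = []
    for _ in range(n_words):
        mixed = state[3] ^ ((state[5] << 7) & MASK32)          # non-linear tap mix
        feedback = ((state[0] + mixed) * 0x9E3779B9) & MASK32  # add + multiply mod 2^32
        state = state[1:] + [feedback]
        out.append(state[0] ^ feedback)
    return out
```

The structural point matches the abstract: all work is simple operations on 32-bit words, so each clock is cheap in software, and changing any word of the initial state changes the keystream.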
Abstract:
Background: Reducing rates of healthcare acquired infection has been identified by the Australian Commission on Safety and Quality in Health Care as a national priority. One of the goals is the prevention of central venous catheter-related bloodstream infection (CR-BSI). At least 3,500 cases of CR-BSI occur annually in Australian hospitals, resulting in unnecessary deaths and costs to the healthcare system of between $25.7 million and $95.3 million. Two approaches to preventing these infections have been proposed: the use of antimicrobial catheters (A-CVCs), or a catheter care and management ‘bundle’. Given finite healthcare budgets, decisions about the optimal infection control policy require consideration of the effectiveness and value for money of each approach. Objectives: The aim of this research is to use a rational economic framework to inform efficient infection control policy relating to the prevention of CR-BSI in the intensive care unit. It addresses three questions relating to decision-making in this area: 1. Is additional investment in activities aimed at preventing CR-BSI an efficient use of healthcare resources? 2. What is the optimal infection control strategy from amongst the two major approaches that have been proposed to prevent CR-BSI? 3. What uncertainty is there in this decision, and can a research agenda to improve decision-making in this area be identified? Methods: A decision analytic model-based economic evaluation was undertaken to identify an efficient approach to preventing CR-BSI in Queensland Health intensive care units. A Markov model was developed in conjunction with a panel of clinical experts which described the epidemiology and prognosis of CR-BSI. The model was parameterised using data systematically identified from the published literature and extracted from routine databases. The quality of the data used in the model, its validity to clinical experts, and its sensitivity to modelling assumptions were assessed.
Two separate economic evaluations were conducted. The first evaluation compared all commercially available A-CVCs alongside uncoated catheters to identify which was cost-effective for routine use. The uncertainty in this decision was estimated along with the value of collecting further information to inform the decision. The second evaluation compared the use of A-CVCs to a catheter care bundle. We were unable to estimate the cost of the bundle because it is unclear what the full resource requirements are for its implementation, and what the value of these would be in an Australian context. As such we undertook a threshold analysis to identify the cost and effectiveness thresholds at which a hypothetical bundle would dominate the use of A-CVCs under various clinical scenarios. Results: In the first evaluation of A-CVCs, the findings from the baseline analysis, in which uncertainty is not considered, show that the use of any of the four A-CVCs will result in health gains accompanied by cost-savings. The MR catheters dominate the baseline analysis, generating 1.64 QALYs and cost-savings of $130,289 per 1,000 catheters. With uncertainty, and based on current information, the MR catheters remain the optimal decision and return the highest average net monetary benefits ($948 per catheter) relative to all other catheter types. This conclusion was robust to all scenarios tested; however, the probability of error in this conclusion is high, 62% in the baseline scenario. Using a value of $40,000 per QALY, the expected value of perfect information associated with this decision is $7.3 million. An analysis of the expected value of perfect information for individual parameters suggests that it may be worthwhile for future research to focus on providing better estimates of the mortality attributable to CR-BSI and the effectiveness of both SPC and CH/SSD (int/ext) catheters.
In the second evaluation of the catheter care bundle relative to A-CVCs, the results which do not consider uncertainty indicate that a bundle must achieve a relative risk of CR-BSI of at least 0.45 to be cost-effective relative to MR catheters. If the bundle can reduce rates of infection from 2.5% to effectively zero, it is cost-effective relative to MR catheters if national implementation costs are less than $2.6 million ($56,610 per ICU). If the bundle can achieve a relative risk of 0.34 (comparable to that reported in the literature) it is cost-effective, relative to MR catheters, if costs over an 18 month period are below $613,795 nationally ($13,343 per ICU). Once uncertainty in the decision is considered, the cost threshold for the bundle increases to $2.2 million. Therefore, if each of the 46 Level III ICUs could implement an 18 month catheter care bundle for less than $47,826 each, this approach would be cost effective relative to A-CVCs. However, the uncertainty is substantial and the probability of error in concluding that the bundle is the cost-effective approach at a cost of $2.2 million is 89%. Conclusions: This work highlights that infection control to prevent CR-BSI is an efficient use of healthcare resources in the Australian context. If there is no further investment in infection control, an opportunity cost is incurred, which is the potential for a more efficient healthcare system. Minocycline/rifampicin catheters are the optimal choice of antimicrobial catheter for routine use in Australian Level III ICUs, however, if a catheter care bundle implemented in Australia was as effective as those used in the large studies in the United States it would be preferred over the catheters if it was able to be implemented for less than $47,826 per Level III ICU. Uncertainty is very high in this decision and arises from multiple sources. 
There are likely greater costs to this uncertainty for A-CVCs, which may carry hidden costs, than there are for a catheter care bundle, which is more likely to provide indirect benefits to clinical practice and patient safety. Research into the mortality attributable to CR-BSI, the effectiveness of SPC and CH/SSD (int/ext) catheters and the cost and effectiveness of a catheter care bundle in Australia should be prioritised to reduce uncertainty in this decision. This thesis provides the economic evidence to inform one area of infection control, but there are many other infection control decisions for which information about the cost-effectiveness of competing interventions does not exist. This work highlights some of the challenges and benefits to generating and using economic evidence for infection control decision-making and provides support for commissioning more research into the cost-effectiveness of infection control.
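The net-monetary-benefit bookkeeping used throughout these evaluations is simple to state: health gains are valued at the willingness-to-pay threshold and the incremental cost is subtracted. The sketch below shows only that arithmetic, applied to the abstract's baseline per-1,000-catheter figures; it does not reproduce the probabilistic model behind the $948-per-catheter result.

```python
def net_monetary_benefit(delta_qalys, delta_cost, wtp=40000.0):
    """Incremental net monetary benefit: incremental health gain valued
    at the willingness-to-pay threshold per QALY, minus the incremental
    cost (a negative delta_cost is a cost-saving)."""
    return wtp * delta_qalys - delta_cost
```

For the MR catheters' baseline figures (1.64 QALYs gained and $130,289 saved per 1,000 catheters), the deterministic NMB at $40,000 per QALY is 40,000 × 1.64 + 130,289 ≈ $195,889 per 1,000 catheters; an intervention is preferred when its NMB is positive and highest among the alternatives.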