Abstract:
This thesis is the result of an investigation of a Queensland example of curriculum reform based on outcomes, a type of reform common to many parts of the world during the last decade. The purpose of the investigation was to determine the impact of outcomes on teacher perspectives of professional practice. The focus was chosen to permit investigation not only of changes in behaviour resulting from the reform but also of teachers' attitudes and beliefs developed during implementation. The study is based on qualitative methodology, chosen because of its suitability for the investigation of attitudes and perspectives. The study exploits the researcher's opportunities for prolonged, direct contact with groups of teachers through the selection of an over-arching ethnographic approach, an approach designed to capture the holistic nature of the reform and to contextualise the data within a broad perspective. The selection of grounded theory as a basis for data analysis reflects the open nature of this inquiry and demonstrates the study's constructivist assumptions about the production of knowledge. The study also constitutes a multi-site case study by virtue of the choice of three individual school sites as objects to be studied and to form the basis of the report. Three primary school sites administered by Brisbane Catholic Education were chosen as the focus of data collection. Data were collected from three school sites as teachers engaged in the first year of implementation of Student Performance Standards, the Queensland version of English outcomes based on the current English syllabus. Teachers' experience of outcomes-driven curriculum reform was studied by means of group interviews conducted at individual school sites over a period of fourteen months, together with researcher observations and the collection of artefacts such as report cards. Analysis of data followed grounded theory guidelines based on a system of coding.
Though classification systems were not generated prior to data analysis, the labelling of categories called on standard, non-idiosyncratic terminology and analytic frames and concepts from existing literature wherever practicable in order to permit possible comparisons with other related research. Data from school sites were examined individually and then combined to determine teacher understandings of the reform, changes that have been made to practice and teacher responses to these changes in terms of their perspectives of professionalism. Teachers in the study understood the reform as primarily an accountability mechanism. Though teachers demonstrated some acceptance of the intentions of the reform, their responses to its conceptualisation, supporting documentation and implications for changing work practices were generally characterised by reduced confidence, anger and frustration. Though the impact of outcomes-based curriculum reform must be interpreted through the inter-relationships of a broad range of elements which comprise teachers' work and their attitudes towards their work, it is proposed that the substantive findings of the study can be understood in terms of four broad themes. First, when the conceptual design of outcomes did not serve teachers' accountability requirements and outcomes were perceived to be expressed in unfamiliar technical language, most teachers in the study lost faith in the value of the reform and lost confidence in their own abilities to understand or implement it. Second, this reduction of confidence was intensified when the scope of outcomes extended beyond the teachers' existing curriculum and assessment planning and teachers were confronted with the necessity to include aspects of syllabuses or school programs which they had previously omitted because of a lack of understanding or appreciation. The corollary was that outcomes promoted greater syllabus fidelity when frameworks were closely aligned.
Third, other benefits the teachers associated with outcomes included the development of whole school curriculum resources and greater opportunity for teacher collaboration, particularly among schools. The teachers, however, considered a wide range of factors when determining the overall impact of the reform, and perceived a number of them in terms of the costs of implementation. These included the emergence of ethical dilemmas concerning relationships with students, colleagues and parents; reduced individual autonomy, particularly with regard to the selection of valued curriculum content; and an intensification of workload with the capacity to erode the relationships with students which teachers strongly associated with the rewards of their profession. Finally, in banding together at the school level to resist aspects of implementation, some teachers showed growing awareness of a collective authority capable of being exercised in response to top-down reform. These findings imply that Student Performance Standards require review and additional implementation resourcing to support teachers through times of reduced confidence in their own abilities. Outcomes prove an effective means of high-fidelity syllabus implementation and, provided they are expressed in an accessible way and aligned with syllabus frameworks and terminology, should be considered for inclusion in future syllabuses across a range of learning areas. The study also identifies a range of unintended consequences of outcomes-based curriculum and acknowledges the complexity of relationships among all the aspects of teachers' work. It also notes that the impact of reform on teacher perspectives of professional practice may alter teacher-teacher and school-system relationships in ways that have the potential to influence the effectiveness of future curriculum reform.
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of zeroes recorded. These may represent zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy.
This requires evaluation of a normalization constant, a notoriously difficult problem. Difficulty with estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer. A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
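The path-sampling identity underlying methods like IMCS can be illustrated on a toy Ising-type MRF: the log ratio of normalization constants between two parameter values equals the integral of the expected canonical statistic along the path connecting them. The sketch below is illustrative only — the lattice size, parameter grid and Gibbs settings are invented, not taken from the thesis — but on a 3x3 lattice the estimate can be checked against brute-force enumeration.

```python
import itertools
import math
import random

def canonical_stat(s, n):
    """Sum of s_i * s_j over nearest-neighbour pairs of an n x n lattice
    with free boundaries; s is a flat sequence of +/-1 spins."""
    u = 0
    for i in range(n):
        for j in range(n):
            if j + 1 < n:
                u += s[i * n + j] * s[i * n + j + 1]
            if i + 1 < n:
                u += s[i * n + j] * s[(i + 1) * n + j]
    return u

def exact_log_nc(beta, n):
    """Brute-force log normalization constant; feasible only for tiny lattices."""
    return math.log(sum(math.exp(beta * canonical_stat(s, n))
                        for s in itertools.product([-1, 1], repeat=n * n)))

def gibbs_mean_stat(beta, n, sweeps, rng):
    """Estimate E_beta[U] by single-site Gibbs sampling (first 20% discarded)."""
    s = [rng.choice([-1, 1]) for _ in range(n * n)]
    total, kept = 0, 0
    for sweep in range(sweeps):
        for i in range(n):
            for j in range(n):
                h = 0  # local field from up to four neighbours
                if j + 1 < n: h += s[i * n + j + 1]
                if j - 1 >= 0: h += s[i * n + j - 1]
                if i + 1 < n: h += s[(i + 1) * n + j]
                if i - 1 >= 0: h += s[(i - 1) * n + j]
                p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
                s[i * n + j] = 1 if rng.random() < p_up else -1
        if sweep >= sweeps // 5:
            total += canonical_stat(s, n)
            kept += 1
    return total / kept

def path_sampling_log_ratio(beta0, beta1, n, grid=9, sweeps=500, seed=7):
    """log NC(beta1) - log NC(beta0) = integral of E_beta[U] d(beta),
    approximated by the trapezoidal rule over a grid of beta values."""
    rng = random.Random(seed)
    betas = [beta0 + (beta1 - beta0) * k / (grid - 1) for k in range(grid)]
    means = [gibbs_mean_stat(b, n, sweeps, rng) for b in betas]
    h = (beta1 - beta0) / (grid - 1)
    return h * (sum(means) - 0.5 * (means[0] + means[-1]))
```

The same identity drives the thesis's MCMC-based estimators; only the simulation scheme and the parameterisation of the path differ.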
Abstract:
Stream ciphers are encryption algorithms used for ensuring the privacy of digital telecommunications. They have been widely used for encrypting military communications, satellite communications and pay TV, and for voice encryption on both fixed-line and wireless networks. The current multi-year European project eSTREAM, which aims to select stream ciphers suitable for widespread adoption, reflects the importance of this area of research. Stream ciphers consist of a keystream generator and an output function. Keystream generators produce a sequence that appears to be random, which is combined with the plaintext message using the output function. Most commonly, the output function is binary addition modulo two. Cryptanalysis of these ciphers focuses largely on analysis of the keystream generators and of relationships between the generator and the keystream it produces. Linear feedback shift registers (LFSRs) are widely used components in building keystream generators, as the sequences they produce are well understood. Many types of attack have been proposed for breaking various LFSR-based stream ciphers. A recent attack type is known as an algebraic attack. Algebraic attacks transform the problem of recovering the key into the problem of solving a multivariate system of equations, whose solution eventually recovers the internal state bits or the key bits. This type of attack has been shown to be effective on a number of regularly clocked LFSR-based stream ciphers. In this thesis, algebraic attacks are extended to a number of well known stream ciphers where at least one LFSR in the system is irregularly clocked. Applying algebraic attacks to these ciphers has only been discussed previously in the open literature for LILI-128. In this thesis, algebraic attacks are first applied to keystream generators using stop-and-go clocking.
Four ciphers belonging to this group are investigated: the Beth-Piper stop-and-go generator, the alternating step generator, the Gollmann cascade generator and the eSTREAM candidate, the Pomaranch cipher. It is shown that algebraic attacks are very effective on the first three of these ciphers. Although no effective algebraic attack was found for Pomaranch, the algebraic analysis led to some interesting findings, including weaknesses that may be exploited in future attacks. Algebraic attacks are then applied to keystream generators using (p, q) clocking. Two well known examples of such ciphers, the step1/step2 generator and the self-decimated generator, are investigated. Algebraic attacks are shown to be very powerful in recovering the internal state of these generators. A more complex clocking mechanism than either stop-and-go or (p, q) clocking is known as mutual clock control. In mutual clock control generators, the LFSRs control the clocking of each other. Four well known stream ciphers belonging to this group are investigated with respect to algebraic attacks: the bilateral stop-and-go generator, the A5/1 stream cipher, the Alpha 1 stream cipher and the more recent eSTREAM proposal, the MICKEY stream ciphers. Some theoretical results with regard to the complexity of algebraic attacks on these ciphers are presented. The algebraic analysis of these ciphers showed that, generally, it is hard to generate the system of equations required for an algebraic attack on these ciphers. As the algebraic attack could not be applied directly to these ciphers, a different approach was used, namely guessing some bits of the internal state, in order to reduce the degree of the equations. Finally, an algebraic attack on Alpha 1 that requires only 128 bits of keystream to recover the 128 internal state bits is presented. An essential process associated with stream cipher proposals is key initialization.
Many recently proposed stream ciphers use an algorithm to initialize the large internal state with a smaller key and possibly publicly known initialization vectors. The effect of key initialization on the performance of algebraic attacks is also investigated in this thesis. The relationships between the two have not been investigated before in the open literature. The investigation is conducted on Trivium and Grain-128, two eSTREAM ciphers. It is shown that the key initialization process has an effect on the success of algebraic attacks, unlike other conventional attacks. In particular, the key initialization process allows an attacker to first generate a small number of equations of low degree and then perform an algebraic attack using multiple keystreams. The effect of the number of iterations performed during key initialization is investigated. It is shown that both the number of iterations and the maximum number of initialization vectors to be used with one key should be carefully chosen. Some experimental results on Trivium and Grain-128 are then presented. Finally, the security with respect to algebraic attacks of the well known LILI family of stream ciphers, including the unbroken LILI-II, is investigated. These are irregularly clock-controlled nonlinear filtered generators. While the structure is defined for the LILI family, a particular parameter choice defines a specific instance. Two well known such instances are LILI-128 and LILI-II. The security of these and other instances is investigated to identify which instances are vulnerable to algebraic attacks. The feasibility of recovering the key bits using algebraic attacks is then investigated for both LILI-128 and LILI-II. Algebraic attacks which recover the internal state with less effort than exhaustive key search are possible for LILI-128 but not for LILI-II.
Given the internal state at some point in time, the feasibility of recovering the key bits is also investigated, showing that the parameters used in the key initialization process, if poorly chosen, can lead to a key recovery using algebraic attacks.
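The irregular clocking at the heart of these ciphers can be illustrated with a toy stop-and-go construction, in which one LFSR's output bit decides whether a second LFSR is stepped. This is a minimal sketch in the spirit of the Beth-Piper generator, not the exact construction analysed in the thesis; the register lengths and tap positions below are arbitrary.

```python
def lfsr_step(state, taps):
    """One step of a Fibonacci LFSR. `state` is a list of bits (index 0 is
    the input end); the output bit is the last stage and the feedback bit
    is the XOR of the tapped stages."""
    out = state[-1]
    fb = 0
    for t in taps:
        fb ^= state[t]
    return out, [fb] + state[:-1]

def stop_and_go_keystream(ctrl, ctrl_taps, data, data_taps, n):
    """Toy stop-and-go generator: at each time step the control LFSR is
    clocked, and its output bit decides whether the data LFSR is clocked.
    The keystream bit is the data LFSR's current output stage."""
    ks = []
    for _ in range(n):
        c, ctrl = lfsr_step(ctrl, ctrl_taps)
        if c:  # clock the data register only when the control bit is 1
            _, data = lfsr_step(data, data_taps)
        ks.append(data[-1])
    return ks
```

Because the data register sometimes stands still, consecutive keystream bits can repeat, and an algebraic attacker must model the unknown clocking pattern — exactly the complication the thesis addresses.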
Abstract:
The typical daily decision-making process of individuals regarding use of the transport system involves three main types of decision: mode choice, departure time choice and route choice. This paper focuses on the mode and departure time choice processes and studies different model specifications for a combined mode and departure time choice model. The paper compares different sets of explanatory variables as well as different model structures to capture the correlation among alternatives and taste variations among the commuters. The main hypothesis tested in this paper is that departure time alternatives are also correlated according to the amount of delay. Correlation among different alternatives is confirmed by analyzing different nesting structures as well as error component formulations. Random coefficient logit models confirm the presence of random taste heterogeneity across commuters. Mixed nested logit models are estimated to jointly account for the random taste heterogeneity and the correlation among different alternatives. Results indicate that accounting for random taste heterogeneity as well as inter-alternative correlation improves model performance.
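The nesting structures compared in such studies follow the standard two-level nested logit form, in which alternatives within a nest share an unobserved utility component and the nest-level scale parameter governs their correlation. A small illustrative sketch — the utilities, nests and scale values are made-up numbers, not the paper's estimates:

```python
import math

def nested_logit_probs(utilities, nests, scales):
    """Two-level nested logit choice probabilities.
    utilities: alt -> systematic utility V
    nests:     nest -> list of alternatives in that nest
    scales:    nest -> lambda in (0, 1]; smaller lambda means stronger
               within-nest correlation, lambda = 1 collapses to plain MNL."""
    # inclusive value (logsum) of each nest
    iv = {m: math.log(sum(math.exp(utilities[a] / scales[m]) for a in alts))
          for m, alts in nests.items()}
    denom = sum(math.exp(scales[m] * iv[m]) for m in nests)
    probs = {}
    for m, alts in nests.items():
        p_nest = math.exp(scales[m] * iv[m]) / denom
        for a in alts:
            p_within = math.exp(utilities[a] / scales[m] - iv[m])
            probs[a] = p_nest * p_within
    return probs
```

For example, grouping two departure-time alternatives that share a delay level into one nest with a scale below 1 lets them substitute more strongly with each other than with alternatives outside the nest.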
Abstract:
Many of the costs associated with greenfield residential development are apparent and tangible. For example, regulatory fees, government taxes, acquisition costs, selling fees, commissions and others are all relatively easily identified since they represent actual costs incurred at a given point in time. However, holding costs are not always immediately evident, since by contrast they characteristically lack visibility. One reason for this is that, for the most part, they are typically assessed over time in an ever-changing environment. In addition, wide variations exist in development pipeline components: they typically span anywhere from two to over sixteen years, even for projects located within the same geographical region. Determination of the starting and end points, with regard to holding cost computation, can also prove problematic. Furthermore, the choice between application of prevailing inflation, or interest rates, or a combination of both over time, adds further complexity. Although research is emerging in these areas, a review of the literature reveals that attempts to identify holding cost components are limited. Their quantification (in terms of relative weight or proportionate cost to a development project) is even less apparent; in fact, the computation and methodology behind the calculation of holding costs varies widely and in some instances is ignored altogether. In addition, it may be demonstrated that ambiguities exist in terms of the inclusion of various elements of holding costs and assessment of their relative contribution. Yet their impact on housing affordability is widely acknowledged to be profound, with their quantification potentially maximising the opportunities for delivering affordable housing. This paper seeks to build on earlier investigations into those elements related to holding costs, providing theoretical modelling of the size of their impact - specifically on the end user.
At this point the research is reliant upon quantitative data sets; however, additional qualitative analysis (not included here) will be relevant to account for certain variations between expectations and actual outcomes achieved by developers. Although this research stops short of a regional or international comparison study, it provides an improved understanding of the relationship between holding costs, regulatory charges, and housing affordability.
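The compounding effect the paper describes can be made concrete with a simple calculation: the holding cost of carrying an asset through the development pipeline is the compound growth of the committed value at a chosen finance (or inflation) rate, less the principal. A hypothetical sketch — the rate, value and durations are illustrative only, not figures from the paper:

```python
def holding_cost(committed_value, annual_rate, years):
    """Cost of holding `committed_value` for `years` at `annual_rate`,
    compounded annually: the accrued amount minus the principal."""
    return committed_value * ((1 + annual_rate) ** years - 1)

# A hypothetical lot worth $200,000 held at a 7% finance rate:
cost_short = holding_cost(200_000, 0.07, 5)   # 5-year pipeline
cost_long = holding_cost(200_000, 0.07, 15)   # 15-year pipeline
```

The nonlinearity is the point: tripling the holding period far more than triples the cost, which is one reason long pipelines weigh so heavily on end-user affordability.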
Abstract:
The topics of corruption and tax evasion have attracted significant attention in the literature in recent years. We build on that literature by investigating empirically: (1) whether attitudes toward corruption and tax evasion vary systematically with gender and (2) whether gender differences decline as men and women face similar opportunities for illicit behavior. We use data on eight Western European countries from the World Values Survey and the European Values Survey. The results reveal significantly greater aversion to corruption and tax evasion among women. This holds across countries and time, and across numerous empirical specifications. (JEL H260, D730, J160, Z130)
Abstract:
This paper uses a multivariate analysis to examine how countries' tax morale and institutional quality affect the shadow economy. The literature strongly emphasizes the quantitative importance of these factors in understanding the level of and changes in the shadow economy. Newly available data sources offer the unique opportunity to further illuminate a topic that has received increased attention. After controlling for a variety of potential factors, we find strong support that a higher tax morale and a higher institutional quality lead to a smaller shadow economy.
Abstract:
This paper reviews the main studies on transit users' route choice in the context of transit assignment. The studies are categorized into three groups: static transit assignment, within-day dynamic transit assignment, and emerging approaches. The motivations and behavioural assumptions of these approaches are re-examined. The first group includes shortest-path heuristics in all-or-nothing assignment, random utility maximization route-choice models in stochastic assignment, and user equilibrium based assignment. The second group covers within-day dynamics in transit users' route choice, transit network formulations, and dynamic transit assignment. The third group introduces the emerging studies on behavioural complexities, day-to-day dynamics, and real-time dynamics in transit users' route choice. Future research directions are also discussed.
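The first group's shortest-path heuristics reduce, at their core, to a least-cost path search per origin-destination pair, with each OD flow loaded entirely onto that path. A minimal sketch of the all-or-nothing idea — the network and link costs below are invented for illustration:

```python
import heapq

def least_cost_path(graph, src, dst):
    """Dijkstra's algorithm. `graph`: node -> list of (neighbour, link_cost).
    Returns (path as node list, total cost), or (None, inf) if unreachable."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None, float("inf")
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

def all_or_nothing(graph, od_demands):
    """Load each OD pair's entire demand onto its least-cost path --
    the loading rule behind the earliest assignment heuristics."""
    link_flow = {}
    for (o, d), q in od_demands.items():
        path, _ = least_cost_path(graph, o, d)
        if path is None:
            continue
        for a, b in zip(path, path[1:]):
            link_flow[(a, b)] = link_flow.get((a, b), 0) + q
    return link_flow
```

The later groups in the review relax exactly the assumptions this sketch hard-codes: deterministic costs, a single behavioural rule, and no congestion feedback from loaded flows onto link costs.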
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials of independent events with unequal probabilities, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
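The low-exposure argument can be demonstrated with a small Poisson-trials simulation: every vehicle passage is an independent Bernoulli trial with a site-specific crash probability, so no site is "perfectly safe", yet short observation windows still yield a preponderance of zeros. All the numbers here (the site risk range, exposure levels) are invented for illustration, not taken from the study:

```python
import random

def simulate_counts(n_sites, vehicles_per_site, seed=0):
    """Crash counts generated as Poisson trials: each of the
    `vehicles_per_site` passages at a site is an independent Bernoulli
    trial with a site-specific probability drawn from a hypothetical
    heterogeneous range -- every site carries some nonzero risk."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_sites):
        p = rng.uniform(1e-4, 2e-3)
        counts.append(sum(rng.random() < p for _ in range(vehicles_per_site)))
    return counts

def zero_share(counts):
    """Proportion of zero-count observations."""
    return sum(c == 0 for c in counts) / len(counts)
```

Running the same generating process at low exposure (a few hundred vehicles per site) produces mostly zeros, while at high exposure (thousands of vehicles) zeros largely disappear, with no dual-state mechanism anywhere in the simulation.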
Abstract:
Statisticians along with other scientists have made significant computational advances that enable the estimation of formerly complex statistical models. The Bayesian inference framework combined with Markov chain Monte Carlo estimation methods such as the Gibbs sampler enables the estimation of discrete choice models such as the multinomial logit (MNL) model. MNL models are frequently applied in transportation research to model choice outcomes such as mode, destination, or route choices or to model categorical outcomes such as crash outcomes. Recent developments allow for the modification of the potentially limiting assumptions of MNL such as the independence from irrelevant alternatives (IIA) property. However, relatively little transportation-related research has focused on Bayesian MNL models, the tractability of which is of great value to researchers and practitioners alike. This paper addresses MNL model specification issues in the Bayesian framework, such as the value of including prior information on parameters, allowing for nonlinear covariate effects, and extensions to random parameter models, thus relaxing the usual limiting IIA assumption. This paper also provides an example that demonstrates, using route-choice data, the considerable potential of the Bayesian MNL approach for many transportation applications. This paper then concludes with a discussion of the pros and cons of this Bayesian approach and identifies when its application is worthwhile.
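The tractability argument can be illustrated in the simplest possible case: a one-parameter logit posterior sampled by random-walk Metropolis. This is a generic MCMC sketch, not the Gibbs-based scheme the paper discusses, and the toy model, prior and tuning constants are all invented for illustration:

```python
import math
import random

def mnl_loglik(beta, data):
    """Log-likelihood of a one-parameter MNL; `data` is a list of
    (chosen_index, [attribute per alternative]) and utility = beta * x."""
    ll = 0.0
    for choice, xs in data:
        us = [beta * x for x in xs]
        m = max(us)  # log-sum-exp for numerical stability
        ll += us[choice] - (m + math.log(sum(math.exp(u - m) for u in us)))
    return ll

def metropolis_mnl(data, n_iter=2500, step=0.3, prior_sd=10.0, seed=0):
    """Random-walk Metropolis draws from the posterior of beta under a
    N(0, prior_sd^2) prior; the draws summarise posterior uncertainty
    without any asymptotic approximation."""
    rng = random.Random(seed)
    beta = 0.0
    log_post = mnl_loglik(beta, data) - beta ** 2 / (2 * prior_sd ** 2)
    draws = []
    for _ in range(n_iter):
        prop = beta + rng.gauss(0.0, step)
        lp = mnl_loglik(prop, data) - prop ** 2 / (2 * prior_sd ** 2)
        if math.log(rng.random()) < lp - log_post:  # accept/reject
            beta, log_post = prop, lp
        draws.append(beta)
    return draws
```

The random-parameter and nonlinear-covariate extensions the paper describes use the same posterior-simulation machinery, just with more parameters and hierarchical priors.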
Abstract:
Aim: There has been no systematic research on the role of cooking skills for improving dietary intakes in Australia. Cooking skills are proposed to be declining and/or being devalued. If cooking skills have been devalued and are declining, then what evidence is there for this decline and what impact might this have on dietary intakes? The aim of the present paper is to explore these assumptions with particular reference to Australia. The objectives of the present paper are to define the terms cooking and cooking skills, discuss evidence on levels of cooking skills in Australia and describe the evidence linking cooking skills to dietary intakes.
Methods: A review of the peer-reviewed literature using multiple databases from 1990 to September 2009.
Results: Cooking skills are complex and require a range of processes for people to develop efficiency or confidence in food preparation. There is little evidence on the level of cooking skills in the Australian population and how this relates to dietary intakes. The Australian Bureau of Statistics' latest Time Use Survey and Household Expenditure Survey suggest that cooking is still a gendered activity and that the time devoted to cooking has changed little in the past 15 years, but there is an increasing use of foods prepared outside the home.
Conclusion: Further research is required to examine the prevalence of different types and levels of cooking skills in Australia as well as their potential effects on dietary intakes. Dietitians need evidence about the level of cooking skills people require for healthy dietary intakes.
Abstract:
Eating is an essential everyday life activity that has fascinated, captivated and defined society since time began. We currently exist in a society where over-consumption of food is an established risk factor for chronic disease, the rate of which is increasing alarmingly. 'Food literacy' is an emerging term used to describe what we, as individuals and as a community, know and understand about food and how to use it to meet our needs, and thus potentially support and empower citizens to make healthy food choices. What exactly the components of food literacy are and how they influence food choice are poorly defined and understood, but are increasingly gaining interest among health professionals, policy makers, community workers, educators and members of the public. This paper will build the argument for why concepts of 'food literacy' need to extend beyond existing terms and measures used in the literature to describe the food skills and knowledge needed to make use of public health nutrition messages.